26 August 2025
In this episode, we explore how Large Language Models (LLMs) work behind the scenes — from predicting the next word to powering tools like GitHub Copilot. Jesper introduces insights from his master’s thesis, Jan explains the core mechanics, and we discuss how agents extend LLMs beyond simple text generation.
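The "predicting the next word" idea at the heart of the discussion can be illustrated with a minimal toy sketch: a bigram model that counts which word follows which in a tiny corpus and always picks the most frequent successor. This is only an illustration of the core loop, not how an LLM is actually implemented (real models use neural networks over tokens), and the corpus here is invented for the example.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count word successors in a tiny made-up corpus.
corpus = "the model predicts the next word and the next word again".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "next" follows "the" most often in this corpus
```

Generating text then amounts to repeatedly feeding the prediction back in as the new context, which is exactly the loop an LLM runs at a vastly larger scale.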
Chapters
00:00 Welcome
02:06 Jesper's master's thesis - the setup for the discussion
04:07 Jan's explanation
07:39 ...but how can it guess the next word with code?
09:18 How tools differentiate
12:05 How does Copilot learn about my algorithm library?
13:50 Agents
18:22 What's next?
Links: