20 August 2025
In this episode, we explore how to host and run your own Ollama models on a dedicated server, from setup to integration. We discuss the difficulty of self-hosting, how Ollama works, its scalability, and the available model repository.
You’ll also see a live demo, watch Jan interact directly with a model, and learn how to connect Qt Creator to your own hosted model.
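The workflow shown in the episode — standing up an Ollama server, pulling a model, and pointing a client (such as Qt Creator) at its HTTP endpoint — can be sketched roughly as follows. The model name and host settings below are illustrative assumptions, not the exact configuration used in the demo:

```shell
# Start the Ollama server. By default it listens on localhost:11434;
# setting OLLAMA_HOST=0.0.0.0 exposes it to other machines on the network.
OLLAMA_HOST=0.0.0.0 ollama serve &

# Pull a model from the Ollama repository (model name is an example).
ollama pull llama3

# Query the REST API directly; a tool like Qt Creator would be
# configured to talk to this same endpoint.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'
```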
Chapters
00:00 Welcome
01:00 How hard is it to host your own models?
01:32 Ollama
03:30 How does it scale?
06:28 The demo
09:16 The Ollama repository of models
11:30 Jan interacting directly with the model
14:27 Is this useful for checking the model when you integrate it with your own product?
16:28 Connecting Qt Creator to our own model
All videos from "The Curious Developer" series: https://youtu.be/nQPumFkN-Ow?list=PL6CJYn40gN6hJTctPIDqLciH9E_PoIklY
All videos about AI for Coding (with Qt and beyond): https://www.youtube.com/playlist?list=PL6CJYn40gN6gxKdn6HK3CYqPo4B3JxKSz
Please note that non-English dubs for all KDAB videos are auto-generated. These translations have not been moderated by us and may contain inaccuracies. We appreciate your understanding and apologize for any confusion this may cause.