Timo Buske
22 May 2025
As early as the 1980s, the first stereo 3D visualizations appeared on computers using shutter glasses and anaglyph glasses. The theoretical foundations of stereoscopy are largely established and considered solved. Nevertheless, there is still room for improvement when it comes to usability.
Together with Schneider Digital, a leading manufacturer of 3D stereo display systems, KDAB tackled the issue of usability by developing a demonstrator. In a second phase, we applied the findings and, with support from Bullinger GmbH as a sponsor, created a 3D stereo prototype for QGIS, an open-source geospatial application.
Screenshot: Schneider / KDAB 3D-Stereo-Demonstrator OpenGL / Vulkan
When considering static configurations - such as the playback of a 3D movie in a cinema - all parameters are predefined. The screen size is approximately the same for all viewers. Virtual conditions such as the virtual camera distance, the focal distance to the object, and the “pop-out” effect are either fixed or implicitly embedded in the media. Viewers can simply put on their 3D glasses and enjoy the experience without any prior setup.
The situation is different with dynamic, real-time 3D scenes. Display sizes vary, and the virtual scale of the scenes can differ significantly. This introduces several parameters that must be configured in advance to ensure a comfortable 3D stereo experience: the field of view, the focal distance, the pop-out effect, and the camera separation.
3D PluraView
The goal is to define all these parameters automatically, based on virtual and physical conditions, so that the user ideally doesn’t need to adjust anything manually.
The field of view can be determined relatively easily: while it’s generally a free parameter set according to the task (e.g., CAD or games) or personal preference, we can compute a good starting value from the real-world viewing angle, which depends on the display size and the viewer’s physical distance from it.
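To make that concrete, here is a minimal sketch of such a starting value in C++. The function name, signature, and units are illustrative, not the demonstrator’s actual API; it simply computes the angle the screen subtends at the viewer’s position.

```cpp
#include <cmath>

// Illustrative helper (not the demonstrator's actual API): derive an initial
// vertical field of view, in degrees, from the physical display height and
// the viewer's distance to the screen, both in the same unit (e.g., cm).
double initialVerticalFovDegrees(double displayHeight, double viewerDistance)
{
    const double pi = 3.14159265358979323846;
    // The screen subtends an angle of 2 * atan(h / (2 * d)) at the viewer.
    const double fovRadians = 2.0 * std::atan(displayHeight / (2.0 * viewerDistance));
    return fovRadians * 180.0 / pi;
}

// Example: a display 60 cm tall viewed from 80 cm gives roughly 41 degrees.
```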
Automatically determining the focal distance is more challenging. The trivial options are: 1) the user adjusts a slider to manually set the focal distance, or 2) the user clicks in the 3D scene to set a focal point. While the second method is relatively convenient, both require user interaction. We’ve developed a third approach inspired by digital cameras. Most allow users to define a “focus area”, and the camera adjusts its optics to focus within this region. Similarly, in our case, we cast several rays within a defined focus area into the 3D scene and calculate the optimal focal distance from the median or average of the resulting depth values. This focus field is adjustable in size and position, is initially centered in the image, and typically works out of the box. The focal distance is then updated each frame, allowing continuous adaptation. Within the focus field, incorrect focus is effectively eliminated.
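A minimal sketch of this focus-field sampling, assuming a hypothetical sampleDepth(x, y) callback that casts a single ray through normalized screen coordinates and returns the hit distance (negative on a miss):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Sample a grid of rays inside a rectangular focus field (all coordinates in
// normalized screen space) and return a robust focal distance estimate.
double focalDistanceFromFocusField(double centerX, double centerY,
                                   double width, double height,
                                   int samplesPerAxis,
                                   const std::function<double(double, double)> &sampleDepth)
{
    std::vector<double> depths;
    for (int iy = 0; iy < samplesPerAxis; ++iy) {
        for (int ix = 0; ix < samplesPerAxis; ++ix) {
            // Distribute rays on a regular grid inside the focus field.
            const double x = centerX - width / 2.0 + width * (ix + 0.5) / samplesPerAxis;
            const double y = centerY - height / 2.0 + height * (iy + 0.5) / samplesPerAxis;
            const double d = sampleDepth(x, y);
            if (d > 0.0)
                depths.push_back(d);
        }
    }
    if (depths.empty())
        return -1.0; // nothing hit: caller keeps the previous focal distance

    // The median is less sensitive than the average to isolated outliers,
    // such as gaps in the geometry or a single very near object.
    std::nth_element(depths.begin(), depths.begin() + depths.size() / 2, depths.end());
    return depths[depths.size() / 2];
}
```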
Once the focal distance is determined, objects closer to the viewer appear in front of the screen (popping out), and those farther away appear behind it. Depending on taste or application, this impression can be fine-tuned by shifting the pop-out effect. Technically, this can be done by adjusting the focal distance, but requiring the user to do so manually whenever the focal distance changes would be inconvenient. Therefore, we introduced a dedicated pop-out parameter in our demonstrator, which indirectly adjusts the focal distance each frame “under the hood” without requiring any user input.
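As a sketch of what such a parameter could do under the hood: the scaling rule and sign convention below are assumptions for illustration, not necessarily what the demonstrator implements.

```cpp
// Hypothetical mapping from a user-facing pop-out parameter to the focal
// distance actually used for rendering, evaluated once per frame.
// popOut = 0 keeps the measured focus; positive values move the
// zero-parallax plane farther away, so more of the scene pops out in front
// of the screen; negative values do the opposite.
double effectiveFocalDistance(double measuredFocalDistance, double popOut)
{
    // Scaling (rather than adding a fixed offset) keeps the effect
    // consistent across scenes of very different virtual scale.
    return measuredFocalDistance * (1.0 + popOut);
}
```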
The final piece is camera separation, which may initially seem counterintuitive. Aren’t human eyes all roughly the same distance apart? Yes and no. 3D graphics coordinates are not bound to real-world units - a “unit” might represent a millimeter, a meter, or even a kilometer. So what value should be used for camera separation? A common solution, also used in our demonstrator, is to define a base value (e.g., 1/30) and multiply it by the focal distance to derive the camera separation. While this isn’t physically accurate - human eye separation doesn’t change (at least, it would be disconcerting if it did) - it works well in practice for stereo 3D rendering.
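Expressed as code, the rule is a one-liner; the names are ours, but the 1/30 base factor is the one mentioned above.

```cpp
// Camera separation proportional to the focal distance: because both are
// expressed in scene units, the stereo effect stays consistent whether a
// unit represents a millimeter or a kilometer.
double cameraSeparation(double focalDistance, double base = 1.0 / 30.0)
{
    return base * focalDistance;
}
```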
VR Wall
With these techniques, we’ve managed to define all four parameters in a way that allows users to immediately immerse themselves in a 3D stereo experience without needing to configure settings beforehand or during runtime.
The stereo demonstrator runs on both OpenGL and Vulkan. The source code for the OpenGL / Qt3D version is freely available at: https://github.com/KDABLabs/stereo3ddemo