iOS video rendering
The multimedia layer in Qt offers several ways to include streaming video in your Qt Quick applications – most commonly via the
Video element, which you include in your QML interface, specifying the source URL, playback options and so on.
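As a minimal sketch of that usage (the source URL here is a placeholder, and the overlay Rectangle stands in for the kind of custom control discussed below):

```qml
import QtQuick 2.4
import QtMultimedia 5.5

Item {
    width: 640; height: 360

    Video {
        id: video
        anchors.fill: parent
        source: "http://example.com/stream.m3u8"  // placeholder URL
        autoPlay: true
    }

    // A Qt Quick item intended to render on top of the video --
    // exactly what the window-level iOS backend could not do.
    Rectangle {
        anchors { bottom: parent.bottom; horizontalCenter: parent.horizontalCenter }
        width: 200; height: 40; radius: 8
        color: "#80000000"
        Text { anchors.centerIn: parent; text: "Custom controls"; color: "white" }
    }
}
```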
Unfortunately, on iOS you’ll discover a limitation: the backend for QtMultimedia only supports window-level integration. In practice this means that any Qt Quick items which are supposed to be positioned on top of the video in fact appear behind it. At KDAB we have several clients who want to show customised playback interfaces on top of the video stream, with application-specific data, and of course taking advantage of all the visual power and flexibility of Qt Quick.
Since Qt is built around the idea of people contributing and collaborating, we decided to investigate the work involved in fixing this limitation, and we’re pleased to say that we found a high-performance solution which will be included in Qt 5.5. The most complex piece of the problem – getting hardware-decoded video frames into an OpenGL texture on the GPU – is handled for us by a family of CoreVideo APIs and an object called a
CVOpenGLESTextureCache. This interfaces with the iOS DRM and hardware-accelerated video-decoding layers to give us video frames in a format we can use. Even better, the layer takes care of colour-space conversion, delivering frames converted from YUV (the standard for video) into the RGB space we need for Qt Quick rendering.
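The core of the mechanism can be sketched as follows (Objective-C++; `eaglContext` and `pixelBuffer` are assumed to come from the existing backend and the decoder respectively, and error handling is elided):

```cpp
// Sketch only -- assumes an EAGLContext shared with Qt's renderer
// and a CVPixelBufferRef per decoded frame; not a drop-in implementation.
#include <CoreVideo/CoreVideo.h>
#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

CVOpenGLESTextureCacheRef cache = nullptr;

// One-time setup: bind the cache to the OpenGL ES context Qt renders with.
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nullptr,
                             eaglContext, nullptr, &cache);

// Per decoded frame: wrap the pixel buffer in a GL texture (no CPU copy).
CVOpenGLESTextureRef texture = nullptr;
CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault, cache, pixelBuffer, nullptr,
    GL_TEXTURE_2D, GL_RGBA,
    int(CVPixelBufferGetWidth(pixelBuffer)),
    int(CVPixelBufferGetHeight(pixelBuffer)),
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

GLuint textureId = CVOpenGLESTextureGetName(texture);
// ... hand textureId to the scene graph to render this frame ...

// Lifetime: release the texture once the scene graph is done with the
// frame, then flush the cache so its GPU memory can be recycled.
CFRelease(texture);
CVOpenGLESTextureCacheFlush(cache, 0);
```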
Here’s the result: Qt Quick, Controls and graphical effects on top of video, on iOS.
Of course, interfacing the CoreVideo classes with the Qt Multimedia code required some experimentation, especially to correctly manage the lifetime of the video frames. Since each frame consumes GPU texture memory, it’s important to know when frames can safely be discarded. Fortunately the existing window-based implementation of video on iOS already provides the OpenGL context needed to initialise the texture cache.
In the end we’re delighted with the result. In particular, there are no intermediate copies of the video frame data between what CoreVideo produces and what is passed to the scene graph for display, so this solution should be suitable for HD video without excessive power consumption.