Why is my screen still black – KDAB (https://www.kdab.com/why-is-my-screen-still-black/), Mon, 18 May 2020

Part 2

If you are here from Part 1, welcome back. If you are wondering why we started at Part 2, go to Part 1.

[Image: black screen (caption: "Uncanny Valley!")]

So, you tried everything from the first part that applied to you, and your screen is still a backlighting test? No problem. (Well… I mean, clearly there is a problem, but you know what I mean.) We’ve gathered another five reasons this could be happening and how to go about fixing them.

Issue 6: Directional light is pointing in the wrong direction.

There are a few choices when it comes to picking a light source for your scene, and you can even add more than one: point lights, directional lights, spot lights and area lights. You can also have emissive textures that act as light sources. Some of these get tricky: if you have a spot or directional light, you have to make sure it is actually pointing at something. For testing, you can always place a point light in the scene, but even then, make sure it isn’t ‘inside’ an object, isn’t black and has a decent brightness.
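
The “is the light actually pointing at anything” test boils down to a bit of vector math. Here is an illustrative, self-contained C++ sketch; the helper types and function names are our own, not Qt or OpenGL API:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;  // minimal vector type, for illustration only

static Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a[0] - b[0], a[1] - b[1], a[2] - b[2]}; }
static float dot(const Vec3 &a, const Vec3 &b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// A spot light can only illuminate an object if the direction from the light
// to the object lies within the light's cone.
bool spotLightFacesObject(const Vec3 &lightPos, const Vec3 &lightDir,
                          const Vec3 &objectPos, float cutoffDegrees)
{
    const Vec3 toObject = sub(objectPos, lightPos);
    const float len = std::sqrt(dot(toObject, toObject));
    const float dirLen = std::sqrt(dot(lightDir, lightDir));
    if (len == 0.0f || dirLen == 0.0f)
        return false;  // light sits inside the object, or degenerate direction
    const float cosAngle = dot(toObject, lightDir) / (len * dirLen);
    return cosAngle >= std::cos(cutoffDegrees * 3.14159265f / 180.0f);
}
```

A light hovering above the origin and pointing straight down, with a 45° cutoff, passes the test; flip its direction and it fails.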

Issue 7: It’s so dark in here. Did you not turn any lights on?

If using a lighting model, ensure the lights are actually enabled. A good test here is to set a temporary glClearColor (e.g. pink); this will usually show unlit geometry as a black silhouette. Another solution is to set an ambient colour term and/or change the background colour to something lighter.

Issue 8: Are you supplying enough power to your Raspberry Pi 4?

The Raspberry Pi 4 has dual HDMI outputs. Using the Qt EGLFS backend, these are available as different displays, so you can run two different Qt applications, one full-screen on each. But if your system is near the limits of its power supply (and the onboard power monitor on the Pi is quite picky), the second display might fail to initialise. When debugging this problem, we initially tested different OpenGL backends and QPA plugins, but none of these made any difference. Very occasionally both displays would work, but mostly the second one would fail to initialise with an EGL error. A beefier power supply fixed the problem immediately. (Sometimes it really isn’t a software problem.)

Issue 9: The GL context cannot be created.

This can happen, for instance, if you upgrade your drivers (especially common with Linux and NVidia cards) and don’t reboot: there’s a high chance that your system won’t be able to create a GL context anymore. To confirm that this is the issue, start a simple 3D program such as glxgears. If it does not work, there is only one solution: reach for that restart button.

For more info, see: Checkliste Allgemein (German)

Issue 10: Return of the Mac – you’re using the wrong profile.

OpenGL has been around for a while and has many versions. Maybe you are using something that requires a more recent version of OpenGL? One thing that is more subtle, especially when updating old code to more modern practices, is the concept of profiles. As of OpenGL 3.2, contexts can be created with a Core profile or a Compatibility profile. The latter preserves compatibility with older fixed-function pipeline code and settings. However, implementing that profile is optional for drivers, and Apple, in its wisdom, has decided not to do so. So if you ask for a Compatibility profile on macOS, you will get a 2.1 context, and you will not be able to use 3.0 or later features.

So, make sure the Core profile is requested on the default QSurfaceFormat instance.
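
In Qt, this is typically done once, early in main(), before the first window or context is created. A minimal sketch (the 3.3 version number is just an example; request whatever your shaders need, and note macOS tops out at 4.1 core):

```cpp
#include <QSurfaceFormat>

// Request a core profile context application-wide.
void requestCoreProfile()
{
    QSurfaceFormat fmt = QSurfaceFormat::defaultFormat();
    fmt.setVersion(3, 3);                         // example version
    fmt.setProfile(QSurfaceFormat::CoreProfile);
    QSurfaceFormat::setDefaultFormat(fmt);
}
```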

Other cross-platform issues are quite common. For example, NVIDIA drivers tend to be forgiving and accept using texture2D() in shaders even though it should not be allowed in a Core profile. So test on as many platforms and driver setups as you can lay your hands on.
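
For example, a core-profile GLSL fragment shader should sample with texture(); the old texture2D() name is not part of core-profile GLSL, even if some drivers let it slide:

```glsl
#version 330 core

uniform sampler2D tex;
in vec2 uv;
out vec4 fragColor;

void main()
{
    // Core profile: use texture(); texture2D() only exists in compatibility GLSL
    fragColor = texture(tex, uv);
}
```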

Once you’ve double-checked the camera settings, shaders and model settings from issues 6, 7, 8, 9 and 10, you should be good to go! If not, why not comment your issue below and we’ll try to get it into the next part.

About KDAB

If you like this blog and want to read similar articles, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

Why is my screen black? – KDAB (https://www.kdab.com/why-is-my-screen-black/), Mon, 20 Apr 2020

Part 1

So, you’ve just poured your heart and soul into some real-time 3D rendering code and hit render. Wringing your hands in anticipation, you wait for the screen to show your marvellous creation. Still… waiting. It says it’s done, but nothing. Well, maybe not nothing, but simply darkness. You stare into the deep dark void as your own reflection stares back at you.

[Image: black screen (caption: "Your beautifully rendered item")]

So, what went wrong?

Issue 1: The rendered object is actually, like some good old English Panto, “behind you!”

This is a common issue with a first render. It’s easy to overlook the Z position of the camera, and even which way the camera is facing. There are also differences between libraries and programs that use different coordinate systems: OpenGL, for example, looks along the negative Z axis, whilst Direct3D uses the positive.
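
The “behind you” case is easy to check numerically. An illustrative sketch (plain C++, names are our own), assuming an unrotated camera using OpenGL’s convention of looking down negative Z:

```cpp
#include <array>

using Vec3 = std::array<float, 3>;  // minimal vector type, for illustration only

// With OpenGL's default convention the camera looks down the negative Z axis,
// so visible objects have negative view-space Z. An object with positive
// view-space Z is "behind you".
bool isInFrontOfCamera(const Vec3 &cameraPos, const Vec3 &objectPos)
{
    const float viewZ = objectPos[2] - cameraPos[2];
    return viewZ < 0.0f;
}
```

A camera at z = 5 sees an object at the origin, but not one at z = 10.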

Also, if it’s a planar piece of geometry, such as a quad used as a sprite, ensure that you are not rendering it exactly edge-on. Yes, we’ve done this too!

Try setting your window clear colour to something other than black. That way, even if your fragment shader (see later) is broken and outputs all fragments as black, you will at least see the silhouette of the object.

Issue 2: You’re inside the object!

So, a few things can be at play here. First of all, make sure the coordinates of the camera aren’t the same as, or too close to, the item(s) you are rendering, as in Issue 1. You should also double-check the ‘model units’: is the model using mm instead of m, for example? This can be a common issue with shared or imported models.

Once you’ve checked the relative positions and orientation of your camera, also check that the object falls within the limits of the view frustum’s near and far planes.
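
Both the unit mismatch and the near/far limits reduce to a range check on the view-space distance. A minimal sketch (values are illustrative, not from any real engine):

```cpp
// True if a point at the given positive view-space distance lies between the
// near and far clip planes; anything outside this range is clipped away.
bool withinDepthRange(float distance, float nearPlane, float farPlane)
{
    return distance >= nearPlane && distance <= farPlane;
}
```

A model authored in millimetres makes the point: an object meant to sit 2.5 units away ends up at distance 2500, well past a far plane of 1000, so nothing is drawn.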

Issue 3: Your triangle winding could be backwards.

If your winding order is opposite to what you expect and you have back or front face culling enabled then the rasterizer may not be generating any fragments for your object at all.

A fix for this would be to use a tool such as GammaRay or apitrace to check the geometry. In OpenGL you can also disable culling via glDisable(GL_CULL_FACE).
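
You can also check the winding of a screen-space triangle yourself with a signed-area test (illustrative C++, not part of any of the tools mentioned):

```cpp
#include <array>

using Vec2 = std::array<float, 2>;  // screen-space point, for illustration only

// Signed area of a triangle: positive for counter-clockwise winding (OpenGL's
// default front face), negative for clockwise.
float signedArea(const Vec2 &a, const Vec2 &b, const Vec2 &c)
{
    return 0.5f * ((b[0] - a[0]) * (c[1] - a[1]) -
                   (c[0] - a[0]) * (b[1] - a[1]));
}

bool isCounterClockwise(const Vec2 &a, const Vec2 &b, const Vec2 &c)
{
    return signedArea(a, b, c) > 0.0f;
}
```

Swapping any two vertices flips the sign, which is exactly what happens when an importer or exporter disagrees with your engine about winding.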

Issue 4: Shaders – are they compiling and linking? Are you feeding them the correct resources?

Make sure that #version is the very first token in the shader source: no newlines or anything else before it, as some drivers check that religiously. Have your application code check for compilation and linker failures and output any errors. Often it is a simple syntax error or an issue with the interface between shader stages. Also check that your shader stages output exactly what you expect. For fragment shaders, output intermediate variables as a colour to see what is going on.
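
The compile-status check looks roughly like this in raw OpenGL. This is a sketch: it needs a current GL context and a loader header, and the same pattern with glGetProgramiv/glGetProgramInfoLog applies to linking:

```cpp
#include <cstdio>
// Assumes an OpenGL loader/header (e.g. <GL/glew.h>) and a current context.

GLuint compileShaderChecked(GLenum stage, const char *source)
{
    GLuint shader = glCreateShader(stage);
    glShaderSource(shader, 1, &source, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::fprintf(stderr, "Shader compile failed:\n%s\n", log);
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}
```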

Also use tools such as apitrace, RenderDoc or Nsight to introspect frames and check that you really have bound the correct set of buffers and textures.

Issue 5: Qt 3D specific: No techniques matched the actual renderer.

When building Qt 3D scenes that are designed to run on multiple platforms, materials need to provide multiple shaders targeting each specific version of OpenGL. This version information is stored on QTechnique nodes attached to a QEffect node. Similarly, you can implement different algorithms (forward vs deferred rendering, for example), so techniques get assigned filter keys, which are key/value pairs. Finally, some algorithms require multiple passes and may use different shaders in different passes. This pass information is stored in QRenderPass nodes (attached to the technique), also using filter keys.

When Qt 3D comes to do the render, it needs to select the technique based on the available hardware. It also needs to select the technique appropriate to the rendering algorithm in use. And when it processes each render pass, it needs to select the appropriate shader based on that pass. This can be controlled by building a frame graph with QTechniqueFilter nodes and QRenderPassFilter nodes.

You can find a more detailed explanation here.

A common source of “not seeing anything” (or missing some objects) is not providing valid shaders for a specific combination of active technique and current render pass.

In order to help debug this, the new debugging overlay introduced in Qt 3D 5.15 provides a way of dumping the filter state of the scene graph and frame graph, which helps you understand why some objects may not be rendered. It will dump technique details only for the active graphics API (i.e. if you’re running on the desktop, it will not show details relating to OpenGL ES techniques).

For example, here’s a dump of the information for a very simple scene using the default forward renderer:

Active Graphics API: OpenGL 4.1 (Core Profile) (ATI Technologies Inc.)
Render Views:
  1 [ Qt3DExtras::QForwardRenderer <renderingStyle: forward> ]
Scene Graph:
  Qt3DCore::Quick::Quick3DEntity{1}
    Qt3DRender::QCamera{13}
    Qt3DExtras::QOrbitCameraController{16}
    Qt3DCore::Quick::Quick3DEntity{75} [ T <renderingStyle: forward>  ]
    Qt3DCore::Quick::Quick3DEntity{86} [ T <renderingStyle: forward>  ]

This shows the active technique (desktop OpenGL on macOS); the technique filter used in the frame graph (QForwardRenderer is derived from QTechniqueFilter); the details of which matching techniques are assigned to materials.

So, once you’ve double-checked the camera settings, shaders and your model settings, go again and you should be right as rain!

Debugging and Profiling Qt 3D applications – KDAB (https://www.kdab.com/debugging-profiling-qt-3d-apps/), Tue, 24 Mar 2020

Qt 3D, being a retained-mode, high-level graphics API abstraction, tries to hide most of the details involved in rendering the data provided by applications. It makes a lot of decisions and performs many operations in the background in order to get pixels on the screen. But because Qt 3D also has a very rich API, developers can have a lot of control over the rendering by manipulating the scene graph and, more importantly, the frame graph. It is, however, sometimes difficult to understand how various operations affect performance.

In this article, we look at some of the tools, both old and new, that can be used to investigate what Qt 3D is doing in the back end and get some insight into what is going on during the frame.

 

Built in Profiling

The first step in handling performance issues is, of course, measuring where time is spent. This can be as simple as measuring how long it took to render a given frame. But to make sense of these numbers, it helps to have a notion of how complex the scene is.

In order to provide measurable information, Qt 3D introduces a visual overlay that will render details of the scene, constantly updated in real time.

 

The overlay shows some real time data:

  • Time to render last frame and FPS (frames per second), averaged and plotted over the last few seconds. As Qt 3D locks to VSync by default, this should not exceed 60fps on most configurations.
  • Number of Jobs: these are the tasks that Qt 3D executes on every frame. The number of jobs may vary depending on changes in the scene graph, whether animations are active, etc.
  • Number of Render Views: this corresponds loosely to render passes; see the discussion of the frame graph below.
  • Number of Commands: this is total number of draw calls (and compute calls) in the frame.
  • Number of Vertices and Primitives (triangles, lines and points combined).
  • Number of Entities, Geometries and Textures in the scene graph. For the last two, the overlay will also show the number of geometries and textures that are effectively in use in the frame.

As seen in the screenshots above, the scene graph contains two entities, each with one geometry. This will produce two draw calls when both objects are in frame. But as the sphere rotates out of the screen, you can see the effect of the view frustum culling job, which makes sure the sphere doesn’t get rendered, leaving a single draw call for the torus.

This overlay can be enabled by setting the showDebugOverlay property of the QForwardRenderer to true.

 

Understanding Rendering Steps

To make sense of the numbers above, it helps to understand the details of the scene graph and frame graph.

In the simple case, as in the screen shots, an entity will have a geometry (and material, maybe a transform). But many entities may share the same geometry (a good thing if appropriate!). Also, entities may not have any geometry but just be used for grouping and positioning purposes.

So keeping an eye on the number of entities and geometries, and seeing how that affects the number of commands (or draw calls), is valuable. If you find one geometry drawn one thousand times by a thousand separate entities, it may be a good indication that you should refactor your scene to use instanced rendering.

In order to provide more details, the overlay has a number of buttons that can be used to dump the current state of the rendering data.

For a deeper understanding of this, you might consider our full Qt 3D Training course.

Scene Graph

Dumping the scene graph will print data to the console, like this:

Qt3DCore::Quick::Quick3DEntity{1} [ Qt3DRender::QRenderSettings{2}, Qt3DInput::QInputSettings{12} ]
  Qt3DRender::QCamera{13} [ Qt3DRender::QCameraLens{14}, Qt3DCore::QTransform{15} ]
  Qt3DExtras::QOrbitCameraController{16} [ Qt3DLogic::QFrameAction{47}, Qt3DInput::QLogicalDevice{46} ]
  Qt3DCore::Quick::Quick3DEntity{75} [ Qt3DExtras::QTorusMesh{65}, Qt3DExtras::QPhongMaterial{48},
                                       Qt3DCore::QTransform{74} ]
  Qt3DCore::Quick::Quick3DEntity{86} [ Qt3DExtras::QSphereMesh{76}, Qt3DExtras::QPhongMaterial{48}, 
                                       Qt3DCore::QTransform_QML_0{85} ]

This prints the hierarchy of entities and for each of them lists all the components. The id (in curly brackets) can be used to identify shared components.

Frame Graph

Similar data can be dumped to the console to show the active frame graph:

Qt3DExtras::QForwardRenderer
  Qt3DRender::QRenderSurfaceSelector
    Qt3DRender::QViewport
      Qt3DRender::QCameraSelector
        Qt3DRender::QClearBuffers
          Qt3DRender::QFrustumCulling
            Qt3DRender::QDebugOverlay

This is the default forward renderer frame graph that comes with Qt 3D Extras.

As you can see, one of the nodes in that graph is of type QDebugOverlay. If you build your own frame graph, you can use an instance of that node to control which surface the overlay will be rendered onto. Only one branch of the frame graph may contain a debug node. If the node is enabled, then the overlay will be rendered for that branch.

The frame graph above is one of the simplest you can build. They may get more complicated as you build effects into your rendering. Here’s an example of a Kuesa frame graph:

Kuesa::PostFXListExtension
  Qt3DRender::QViewport
    Qt3DRender::QClearBuffers
      Qt3DRender::QNoDraw
    Qt3DRender::QFrameGraphNode (KuesaMainScene)
      Qt3DRender::QLayerFilter
        Qt3DRender::QRenderTargetSelector
          Qt3DRender::QClearBuffers
            Qt3DRender::QNoDraw
          Qt3DRender::QCameraSelector
            Qt3DRender::QFrustumCulling
              Qt3DRender::QTechniqueFilter
                Kuesa::OpaqueRenderStage (KuesaOpaqueRenderStage)
                  Qt3DRender::QRenderStateSet
                    Qt3DRender::QSortPolicy
            Qt3DRender::QTechniqueFilter
              Kuesa::OpaqueRenderStage (KuesaOpaqueRenderStage)
                Qt3DRender::QRenderStateSet
                  Qt3DRender::QSortPolicy
            Qt3DRender::QFrustumCulling
              Qt3DRender::QTechniqueFilter
                Kuesa::TransparentRenderStage (KuesaTransparentRenderStage)
                  Qt3DRender::QRenderStateSet
                    Qt3DRender::QSortPolicy
            Qt3DRender::QTechniqueFilter
              Kuesa::TransparentRenderStage (KuesaTransparentRenderStage)
                Qt3DRender::QRenderStateSet
                  Qt3DRender::QSortPolicy
          Qt3DRender::QBlitFramebuffer
            Qt3DRender::QNoDraw
    Qt3DRender::QFrameGraphNode (KuesaPostProcessingEffects)
      Qt3DRender::QDebugOverlay
        Qt3DRender::QRenderStateSet (ToneMappingAndGammaCorrectionEffect)
          Qt3DRender::QLayerFilter
            Qt3DRender::QRenderPassFilter

If you are not familiar with the frame graph, it is important to understand that each path (from root to leaf) represents a render pass. So the simple forward renderer represents a single render pass, but the Kuesa frame graph above contains eight passes!

It is therefore often easier to look at the frame graph in terms of those paths. These can also be dumped to the console:

[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QClearBuffers, Qt3DRender::QNoDraw ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene),
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QClearBuffers, Qt3DRender::QNoDraw ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene), 
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QCameraSelector, Qt3DRender::QFrustumCulling, 
  Qt3DRender::QTechniqueFilter, Kuesa::OpaqueRenderStage (KuesaOpaqueRenderStage), Qt3DRender::QRenderStateSet, 
  Qt3DRender::QSortPolicy ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene), 
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QCameraSelector, Qt3DRender::QTechniqueFilter, 
  Kuesa::OpaqueRenderStage (KuesaOpaqueRenderStage), Qt3DRender::QRenderStateSet, Qt3DRender::QSortPolicy ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene),
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QCameraSelector, Qt3DRender::QFrustumCulling,
  Qt3DRender::QTechniqueFilter, Kuesa::TransparentRenderStage (KuesaTransparentRenderStage), Qt3DRender::QRenderStateSet,
  Qt3DRender::QSortPolicy ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene),
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QCameraSelector, Qt3DRender::QTechniqueFilter,
  Kuesa::TransparentRenderStage (KuesaTransparentRenderStage), Qt3DRender::QRenderStateSet, Qt3DRender::QSortPolicy ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene),
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QBlitFramebuffer, Qt3DRender::QNoDraw ]

Hopefully this is a good way of finding issues you may have when building your custom frame graph.

Draw Commands

On every pass of the frame graph, Qt 3D will traverse the scene graph, find entities that need to be rendered, and for each of them, issue a draw call. The number of objects drawn in each pass may vary, depending on whether the entities and all of their components are enabled or not, or whether entities get filtered out by using QLayers (different passes may draw different portions of the scene graph).

The new profiling overlay also gives you access to the actual draw calls.

So in this simple example, you can see that two draw calls are made, both for indexed triangles. You can also see some details about the render target, such as the viewport, the surface size, etc.

That information can also be dumped to the console which makes it easier to search in a text editor.

 

Built in Job Tracing

The data above provides a useful real time view on what is actually being processed to render a particular frame. However, it doesn’t provide much feedback as to how long certain operations take and how that changes during the runtime of the application.

In order to track such information, you need to enable tracing.

Tracing tracks, for each frame, what jobs are executed by Qt 3D’s backend. Jobs involve updating global transformations and the bounding volume hierarchy, finding objects in the view frustum, layer filtering, picking, input handling, animating, etc. Some jobs run every frame, some only run when internal state needs updating.

If your application is slow, it may be because jobs are taking a lot of time to complete. But how do you find out which jobs take up all the time?

Qt 3D has had tracing built in for a few years already, but it was hard to get at: you needed to do your own build of Qt 3D and enable tracing when running qmake. From then on, every single run of an application linked against that build of Qt 3D would generate a trace file.

In 5.15, tracing is always available. It can be enabled in two ways:

  • By setting the QT3D_TRACE_ENABLED environment variable before the application starts (or at least before the aspect engine is created). This means the tracing will happen for the entire run of the application.
  • If you’re interested in tracing for a specific part of your application’s lifetime, you can enable the overlay and toggle tracing on and off using the Jobs check box. In this case, a new trace file will be generated every time tracing is enabled.
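
For the environment-variable route, something like the following (the binary name is hypothetical):

```shell
# Trace the whole run; a trace file is written to the current working directory
QT3D_TRACE_ENABLED=1 ./myapp
```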

For every tracing session, Qt 3D will generate one file in the current working directory. So how do you inspect the content of that file?

KDAB provides a visualisation tool, but it is not currently shipped with Qt 3D; you can get the source from GitHub and build it yourself. Because jobs change from one version of Qt 3D to the next, you need to take care to configure which version was used to generate the trace files. Using that tool, you can open the trace files, and it will render a timeline of all the jobs that were executed for every frame.

In the example above, you can see roughly two frames worth of data, with jobs executed on a thread pool. You can see the longer running jobs, in this case:

  • RenderViewBuilder jobs, which create all the render views, one for each branch in the frame graph. You can see some of them take much longer than others.
  • FrameSubmissionPart1 and FrameSubmissionPart2 which contain the actual draw calls.

Of course, you need to spend some time understanding what Qt 3D is doing internally to make sense of that data. As with most performance monitoring tools, it’s worth spending the time experimenting with this and seeing what gets affected by changes you make to your scene graph or frame graph.

Job Dependencies

Another important source of information when analysing performance of jobs is looking at the dependencies. This is mostly useful for developers of Qt 3D aspects.

Using the profiling overlay, you can now dump the dependency graph in GraphViz dot format.
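
A dumped .dot file can then be rendered with Graphviz; the file name here is hypothetical:

```shell
# Convert the dumped dependency graph to an image with Graphviz
dot -Tpng qt3d_job_dependencies.dot -o qt3d_job_dependencies.png
```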

Other Tools

Static capabilities

Qt 3D 5.15 introduces QRenderCapabilities, which can be used to make runtime decisions based on the actual capabilities of the hardware the application is running on. The class has a number of properties reporting information such as the graphics API in use, the card vendor, and the supported versions of OpenGL and GLSL. It also has information on the maximum number of samples for MSAA, the maximum texture size, whether UBOs and SSBOs are supported and what their maximum size is, etc.

Third Party Tools

Of course, using more generic performance tools is also a good idea.

perf can be used for general tracing, giving you insight into where time is spent, both in Qt 3D and in the rest of your application. Use it in combination with KDAB’s very own hotspot to get a powerful visualisation of the critical paths in the code.
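
A typical session looks like this (the binary name is hypothetical; DWARF-based call graphs tend to give the best stacks for C++):

```shell
# Record a profile with call graphs, then explore it in hotspot
perf record --call-graph dwarf ./myapp
hotspot perf.data
```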

Using the flame graph, as shown above (captured on an embedded board), you can usually spot the two main sections of Qt 3D work: the job processing and the actual rendering.

Other useful tools are the OpenGL trace capture applications, either generic ones such as apitrace and RenderDoc, or the ones provided by your hardware manufacturer, such as NVIDIA or AMD.

 

Conclusion

We hope this article will help you get more performance out of your Qt 3D applications. The tools, old and new, should be very valuable to help find bottlenecks and see the impact of changes you make to your scene graph or frame graph. Furthermore, improvements regarding performance are in the works for Qt 6, so watch this space!

Qt World Summit 2019 talk videos are online – KDAB (https://www.kdab.com/qt-world-summit-2019-talk-videos-are-online/), Thu, 13 Feb 2020

Did you miss the past Qt World Summit?

Were you there, but you couldn’t attend that talk or two that you really wanted to see because the conference was so, so packed with awesome content?

Fear no more! We are glad to announce that the talks from the past Qt World Summit 2019 in Berlin (or QtWS19, for the friends) have been video recorded and are now available online! You can now catch up with the latest news, improvements and best practices around Qt and its ecosystem, all from the comfort of your sofa (or office chair).

We have gathered all the talks given by KDAB engineers on this summary page, where you can also find more information about the contents of each talk and download the slides. For your convenience, we have also collected all of KDAB’s talks in a YouTube playlist:

The talks by other speakers are available for viewing in the Resource Center on www.qt.io.

Happy hacking!

Qt 3D Synchronisation Revisited – KDAB (https://www.kdab.com/qt-3d-synchronisation-revisited/), Fri, 25 Oct 2019

As mentioned in the previous article in this series, Qt 3D 5.14 is bringing a number of changes aimed at improving performance.

Most people familiar with Qt 3D will know that the API is designed around the construction of a scene graph, creating a hierarchy of Entities, each of them having any number of Components (the frame graph is similar). Entities and Components are only data. Behaviour, such as rendering or animating, is provided by a number of aspects.

Since Qt 3D was designed to be a general simulation engine, different aspects will care about different things when accessing the scene graph. They will also need to store different state data for each object. On top of that, aspects do much of their work using jobs which are parallelised when possible using a thread pool. So, in order to keep data related to each aspect separate and to avoid locking when accessing shared resources, each aspect maintains, when appropriate, a backend object used to store the state data matching each frontend object.

In this article, we examine how state is synchronised between frontend and backend and how that process was changed in 5.14 to improve performance and memory usage.

Frontend and Backend Nodes

Qt 3D scenes are created by building a tree of Entities, and assigning Components to them. Qt3DCore::QNode is the base class for those types.

Entities and Components have properties that control how that will be handled by the aspects. For example, they all have an enabled flag. The Transform component will have translation, rotation and scale properties, etc.

In order to perform its tasks, each aspect will need to create backend versions of most nodes to store its own information. For example, the backend Entity node will store the local bounding volume of the geometry component assigned to it, as well as the world-space bounding volume of its own geometry and that of all of its children. The backend Transform node will have a copy of the homogeneous transformation matrix. And so on…

Obviously, if the data in the frontend changes, say some QML-driven animation changes the translation property of a Transform component, then the backend needs to be notified so it can get a copy of the data and trigger updates in the aspects (like recomputing the transformed bounding volumes for culling).

This process of synchronising frontend changes to backend nodes was implemented using change messages. Every time a property changed (as determined by tracking signals), the name of the property and the new value would be stored in a message object, which would be put on a queue. On the next frame, all the messages in the queue would be delivered to the backend objects in every aspect (if they existed). Each backend node would look at the name of the affected property and copy out the updated value, triggering further updates if necessary.
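
The old message-based flow can be pictured with a small Python sketch (all names here are hypothetical illustrations, not Qt 3D's actual API):

```python
# Sketch of the pre-5.14 flow: every property change allocates a message
# carrying the property name as a string, queued for the next frame.
# All names are hypothetical, not Qt 3D's real classes.

class PropertyChangeMessage:
    def __init__(self, node_id, name, value):
        self.node_id = node_id  # which frontend node changed
        self.name = name        # property name, compared as a string
        self.value = value      # copy of the new value

class BackendTransform:
    def __init__(self):
        self.translation = (0.0, 0.0, 0.0)

    def scene_change_event(self, msg):
        # String-based comparison to dispatch the change
        if msg.name == "translation":
            self.translation = msg.value

# One allocation per change, delivered on the next frame
queue = [PropertyChangeMessage(1, "translation", (1.0, 2.0, 3.0))]
backend = BackendTransform()
for msg in queue:
    backend.scene_change_event(msg)
assert backend.translation == (1.0, 2.0, 3.0)
```

The sketch shows why highly dynamic scenes were costly: one allocation and one string comparison per changed property, per frame.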

This messaging system was very useful for Qt 3D:

  • It clearly isolated frontend and backend nodes. In particular, the frontend nodes know nothing about the backend nodes, which is vital for the extensibility of the aspect engine.
  • Since the aspects (and thus the backend nodes) lived in a separate thread from the frontend nodes, this prevented having any shared properties that would require locking.
  • As we will see below, messaging was a useful pattern that could be extended for other communication within Qt 3D.

However it had a number of drawbacks, in particular:

  • Every change generated a message, resulting in a potentially large number of allocations (messages are not QObjects but have a private pimpl).
  • Backend nodes needed to perform string-based comparisons to find out which property was affected by a message.

On some platforms, these drawbacks could have serious performance implications. So 5.14 introduces a new way of propagating changes to address these issues.

Change propagation in Qt 3D 5.14

As described in the previous article by Paul, one of the major changes in 5.14 is the removal of the aspect thread. This means frontend nodes and the matching backend nodes for each aspect now all live in the main thread. So it was now safe to introduce a non-locking synchronisation mechanism to copy the changed data from the frontend to the backend.

The process now works like this:

  • Changes to properties are tracked by listening to signals. When one is emitted, the corresponding node is added to a list of dirty nodes.
  • Once every frame, the aspect engine will take the list of dirty frontend nodes and, for each aspect, will look up the matching backend node and call a virtual method, passing a pointer to the frontend node that changed.
  • Inside the virtual method, the backend node will copy the data directly from the frontend node instance using its public (and sometimes private) API.
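
The dirty-node flow can be sketched in Python (all names hypothetical, not Qt 3D's real classes):

```python
# Sketch of the 5.14 direct-sync flow: changed nodes are collected in a
# dirty set; once per frame the backend copies data straight from the
# frontend node's API, with no messages allocated. Hypothetical names.

class FrontendTransform:
    def __init__(self, engine):
        self._engine = engine
        self._translation = (0.0, 0.0, 0.0)

    @property
    def translation(self):
        return self._translation

    @translation.setter
    def translation(self, value):
        self._translation = value
        self._engine.mark_dirty(self)  # stands in for a change signal

class BackendTransform:
    def sync_from_frontend(self, frontend):
        # Direct copy via the frontend node's public API
        self.translation = frontend.translation

class AspectEngine:
    def __init__(self):
        self.dirty = set()
        self.backends = {}

    def mark_dirty(self, node):
        self.dirty.add(node)

    def process_frame(self):
        for node in self.dirty:
            self.backends[node].sync_from_frontend(node)
        self.dirty.clear()

engine = AspectEngine()
node = FrontendTransform(engine)
engine.backends[node] = BackendTransform()
node.translation = (4.0, 5.0, 6.0)
node.translation = (7.0, 8.0, 9.0)   # batched: still one sync call
engine.process_frame()
assert engine.backends[node].translation == (7.0, 8.0, 9.0)
```

Note how two property changes on the same node result in a single sync call, which is the batching behaviour described in the next paragraph.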

So, no more memory allocations, changes to multiple properties get batched in a single update call, and string-based comparisons are no longer needed to find out what has changed.

In Qt 5.14, all the nodes from the 4 aspects that Qt 3D includes by default have been updated to use this new synchronisation mechanism. In an ideal world, the virtual method would have been added to the base class of all backend nodes, Qt3DCore::QBackendNode. However, this would have broken binary compatibility. That class has a pimpl, so the virtual method could have been added there; however, very few of the several dozen backend node types actually implement derived pimpls, so it would have required adding a rather large number of new classes. As you will see looking at the code, each aspect uses an intermediate private class derived from QBackendNode for all the code common to the aspect, so the virtual method was added there. This will be cleaned up in Qt 6 to avoid the duplication of the dispatch logic.

It’s not just about properties

As mentioned earlier, messages were not only used to dispatch changes in properties. They were used in a number of other places also.

Backend node creation and deletion

When frontend nodes get created or deleted, the aspects need to manage the life cycle of the matching backend nodes (if appropriate). This was done previously by, you guessed it, sending messages.

This process has also been changed in favour of a more direct approach. The aspect engine keeps track of created and deleted nodes and will inform the aspects once every frame. The newly created backend nodes will go through the same syncing process that is used to update properties.

Hierarchy changes

Other crucial bits of information that need to be synchronised are hierarchy changes and component list changes. If the frontend scene graph is changed in any way (objects added or removed, reparenting, etc), then the backends of every aspect also need to update their internal representation. And, yes, those changes used to be notified using messages.

Now in 5.14, the pimpl attached to each QBackendNode has virtual methods that will be called when a node is reparented or a component is added or removed (i.e., it was done properly 🙂 ). Up to now, only Entity in the render aspect cared about those details, so adding a derived private class for that was the way to go.

Spying

Message dispatch is controlled by a subscription mechanism. The backend node would subscribe to messages from the frontend node (and vice-versa, see below). But nodes could also subscribe to change messages from other nodes!

For example, the Scene2D backend node subscribed to messages from the ObjectPicker backend node to know when mouse events occurred and forward them to the rendered QtQuick scene.

This has been changed in 5.14 to use the more traditional signal and slot mechanism. This is of course implemented in the frontend nodes (as backend nodes are not QObjects) but it has little overhead as everything now lives in the main thread.

What about the other way?

We saw how property changes in the frontend get propagated to the backend. But what about changes in the backend?

Lots of things happen in the backend jobs: loading of meshes and textures, computation of bounding volumes, updating of animations, etc. Some of the resulting information needs to be propagated to the frontend.

Actually, some of it needs to be propagated to the backends of other aspects! For example, as the animation aspect updates the translation value of a transform over time, the changes need to be sent to the frontend, but also to the backends in other aspects, in particular the render aspect, so it can update the transformation matrices.

Now, the actual changes are usually computed in jobs running on a thread pool. Even though the main thread is stopped waiting for all the jobs to complete, it is not safe for these to access frontend nodes directly. So up to now, changes were propagated using messages, flowing backend to frontend and backend to backend, depending on the type of update. This meant, again, potentially lots of allocations on many threads running concurrently.

In order to remove use of messages for this purpose, jobs now get notified, on the main thread, that the processing has completed, so they get a chance to safely update nodes. The process works like this, on every frame:

  • Jobs are run as before in the thread pool.
  • Each job is responsible for keeping track of data that needs to be propagated to the nodes.
  • When all jobs have completed on the thread pool, each job is notified using a virtual method on the job's private pimpl. Jobs can then look up nodes and deliver the changes using public and/or private API.
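
The three steps above can be sketched as follows (hypothetical names; Qt 3D's actual jobs and thread pool differ):

```python
# Sketch of the job completion flow: jobs run on a pool and record their
# results; once all have finished, the main thread calls post_frame() on
# each so nodes are updated without locking. Names are illustrative.
from concurrent.futures import ThreadPoolExecutor

class BoundingVolumeJob:
    def __init__(self):
        self.result = None

    def run(self):
        # Executes on a pool thread; only touches the job's own data
        self.result = 42.0  # stand-in for a computed bounding radius

    def post_frame(self, nodes):
        # Executes on the main thread, after all jobs have completed,
        # so it is safe to deliver the change to the node store
        nodes["sphere_radius"] = self.result

nodes = {}
jobs = [BoundingVolumeJob()]
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(job.run) for job in jobs]
    for f in futures:
        f.result()          # main thread waits for pool work to finish
for job in jobs:            # safe: no pool thread is running any more
    job.post_frame(nodes)
assert nodes["sphere_radius"] == 42.0
```

The key point the sketch illustrates is the hand-off: results are buffered inside each job while on the pool, and only touched from the main thread afterwards.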

For example, the animation aspect will interpolate values of properties over time. It knows nothing about which node the properties belong to. This abstraction previously relied on messages: here’s a new value for property “translation”. Now it’s driven by Qt’s very own property system, just calling setProperty on the frontend node. That node will emit a change signal, which will cause the node to be marked dirty and, in the next frame, the new value will be synchronised to the backend node in other aspects. All will animate properly as before.

But, as explained above, this now happens with far fewer memory allocations and a lot fewer function calls.

This has been implemented for all jobs in Qt 3D’s default aspects. Thankfully, not all of them need to propagate data in this way 🙂

A note to creators of custom nodes and aspects

As you will have no doubt noticed by now, the message mechanism was central to a lot of Qt 3D’s processes. So now that we’ve changed the way all the data flows around, have we broken all your code?

We hope not!

In particular, the new direct syncing mechanism is opt-in. When backend nodes are registered with the aspect, they need to be flagged as supporting the new type of syncing. If they do so, then no messages will be created, neither at creation time nor at update time.

If they do not, then the old system remains, with one change: since Qt 3D no longer tracks which properties change (it only flags the node as dirty), when it comes to processing dirty nodes each frame, the default implementation will deliver a property change message for every property defined on the object. So the message handling method will be called much more often than before. The good thing, though, is that it uses a stack allocated message rather than a heap allocated one, so it's much lighter on memory.

But we would encourage developers who have created their own nodes and aspects to update their code and use the new synchronisation mechanisms.

Conclusion

Hopefully, this post has clarified some of the changes going on in the upcoming release of Qt 3D. Performance-wise, these should be very beneficial. We have seen property update times improve by 300-500% on highly dynamic scenes (with lots of objects animated from QtQuick). This also vastly reduces the number of allocations, which should be particularly interesting on embedded platforms where fragmented memory and threaded allocations can be problematic.

For example, Kuesa 1.1 contains a demo called manyducks, which renders and animates, well, many ducks, 2000 of them. Animation is driven from the main thread using a timer. Every duck slowly spins. Up to Qt 3D 5.13, a 30 second run of this demo would produce 1.7 million(!) instances of the property update message (each of which allocates a pimpl, contains a QVariant, etc.). This number was reduced to zero in 5.14.

Of course, this is an atypical example. Rotation is performed around the 3 axes, so updating each duck produced 3 messages. And of course, 2000 identical ducks should really be rendered using instanced rendering, with all transformation matrices stored in a buffer. And no need to adjust your screen: if the screenshot above appears blurry, it's because the depth of field effect is enabled 🙂

The post Qt 3D Synchronisation Revisited appeared first on KDAB.

KDAB at Embedded Technology, Japan https://www.kdab.com/kdab-at-embedded-technology-japan/ https://www.kdab.com/kdab-at-embedded-technology-japan/#respond Thu, 25 Oct 2018 09:39:52 +0000 https://www.kdab.com/?p=18801 KDAB is proud to announce that for the first time ever we will be present at Embedded Technology outside of Tokyo in Japan. Every year more than 25000 visitors attend over 3 days! The event takes place in the Pacifico Yokohama exhibition center and focuses on Embedded AI, IoT Wireless Technology, Smart Sensing and Safety […]

The post KDAB at Embedded Technology, Japan appeared first on KDAB.

KDAB is proud to announce that for the first time ever we will be present at Embedded Technology outside of Tokyo in Japan. Every year more than 25000 visitors attend over 3 days! The event takes place in the Pacifico Yokohama exhibition center and focuses on Embedded AI, IoT Wireless Technology, Smart Sensing and Safety & Security.

KDAB will present Automotive and Industrial customers' showcases and tools around Modern C++, Qt and 3D. So if you want to learn more about C++ Modernization, Qt migrations, Qt 3D and OpenGL integration, as well as profiling and performance optimization, make sure to join us.

We also take this opportunity to announce the incorporation of TQCS KK, a joint venture between KDAB, tQCS Inc., Software Research Associates Inc. (SRA group) and ISB Corporation. This materializes the will of KDAB to bring its expertise in C++, Qt and 3D to the Asian market and to cooperate with already well-established companies to strengthen the existing ecosystem there. As part of this cooperation, we are happy to announce we will be present at two booths this year:

  • Come and see us at the ISB Corporation booth: Hall A, booth 14
  • Come and see us at the SRA Group booth: Hall A, booth 21

Book a meeting with our C++, Qt and Modern 3D experts.

Efficient custom shapes in Qt Quick https://www.kdab.com/efficient-custom-shapes-in-qt-quick/ https://www.kdab.com/efficient-custom-shapes-in-qt-quick/#comments Wed, 26 Sep 2018 13:38:20 +0000 https://www.kdab.com/?p=16585 QtQuick includes basic visual item to construct many common user-interface components, but people often ask how to create different visual appearances, beyond rectangles, round-rectangles and images. There’s various solutions to this problem, and with Qt 5.10, there’s the new Shapes module which makes it easy to define paths, ellipses and other standard SVG drawing elements. […]

The post Efficient custom shapes in Qt Quick appeared first on KDAB.

QtQuick includes basic visual items to construct many common user-interface components, but people often ask how to create different visual appearances, beyond rectangles, rounded rectangles and images. There are various solutions to this problem, and with Qt 5.10, there's the new Shapes module, which makes it easy to define paths, ellipses and other standard SVG drawing elements.

However, if these built-in solutions aren’t quite what is needed, or you need to squeeze some additional performance from limited hardware, it’s always possible to create a custom Item using the Scene-Graph API of Qt Quick, and some knowledge of hardware drawing APIs, such as OpenGL and GLES. This is usually a somewhat complex solution, and it’s important to stick to the design philosophy of Qt Quick: create simple, efficient and re-usable items which can be configured via properties. Here I want to show an example item which was inspired by one of KDAB’s Automotive customers: an angular sector:

Sector {
    id: baseSector
    color: "white"; endColor: "black"
    outerColor: "red"; outerEndColor: "green"
    outerRadius: (parent.width / 2) - 40
    anchors.centerIn: parent
    innerRadius: outerRadius - thickness.value
    startAngle: -110
    spanAngle: spanAngle.value
    borderColor: "black"; borderWidth: 1.0
}

Defining the Geometry

When creating a subclass of QQuickItem, you define the standard interface to QML as normal: via Q_PROPERTY macros, slots, and Q_INVOKABLE methods (such as the radius and color properties in the QML snippet above). But most importantly, you must override the updatePaintNode method, and use this to describe the scene-graph nodes which correspond to your item. The scene-graph system in Qt Quick is an efficient mechanism to describe your visual pieces to the hardware APIs, but it's very different from imperative drawing APIs such as the venerable QPainter. Your visuals must be defined as a tree of nodes, which can specify opacity, clipping, a transformation or, most importantly, geometry. Geometry nodes contain both geometry – triangles, usually – and some mechanism to define how the pixels in those triangles are drawn; that mechanism is called a material in the scene-graph, and is closely related to a shader in OpenGL or other hardware APIs.

(Figure: the logical sector)

The first step in defining our sector item, then, is to compute a set of triangles covering the item, based on the inner and outer radius, and the start and end angles. Why are we restricted to triangles? Well, that’s really the only thing hardware knows how to draw; APIs such as the Shapes module have to ultimately translate everything you provide into triangles. So we split our sector into triangles, but of course a triangle has straight sides. We could accept this and some ‘polygonal’ edges on our shapes (as happened in the first few generations of 3D video games and graphics), but we can do better. One solution is to use more, smaller triangles – so that the straight outer edges become a closer approximation of the curved sector we want. But this has its own problems – more triangles means more geometry data to be updated when the sector changes, and potentially slower rendering. (Although in practice, most modern graphics hardware can draw a lot of triangles).

However, we can be smarter, and enable some additional features, by using a different approach – we can use fewer triangles, but ensure they are larger than we need, so the entire sector is covered by triangles. Inside our shader code, we will use the discard GLSL keyword to make the pixels outside the sector transparent, and this also gives us an easy way to smooth the edges of the sector (anti-aliasing), or even render a border. In this method, we're no longer considering our triangle geometry to mean 'pixels defining our shape' but rather 'pixels where we can potentially decide what happens'. Of course we could simply use two triangles covering the entire shape, but there is a cost to evaluating each pixel, so a coarse fit, as shown below, is a tradeoff between complexity of the geometry and covering unnecessary amounts of the screen.

Inside the QSGGeometry API, we need to define how many points (vertices) we have, and the X and Y positions of each, using some trigonometry. Unfortunately, it's hard to get very far in APIs such as OpenGL without some basic knowledge of linear algebra and trigonometry, but fortunately there are many examples to work from. Inside the main code, we subdivide the sector into triangles, and generate the three vertices for each one. The most important piece of trigonometry is to compute X and Y (cartesian) values from polar (angle + radius) form, and of course we have to translate from degrees to radians.
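
The polar-to-cartesian step can be sketched in a few lines of Python (a hedged illustration of the math only; the names and the inner/outer vertex layout are our own, not the item's actual C++ code):

```python
# Sample the inner and outer arcs of a sector, converting polar
# (angle, radius) pairs to cartesian X,Y. Vertices are laid out as
# inner/outer pairs along the arc. Illustrative names only.
from math import cos, sin, radians, isclose

def sector_vertices(inner_radius, outer_radius,
                    start_angle, span_angle, subdivisions):
    verts = []
    for i in range(subdivisions + 1):
        # degrees -> radians before calling cos/sin
        a = radians(start_angle + span_angle * i / subdivisions)
        verts.append((inner_radius * cos(a), inner_radius * sin(a)))
        verts.append((outer_radius * cos(a), outer_radius * sin(a)))
    return verts

v = sector_vertices(50.0, 100.0, 0.0, 90.0, 8)
assert len(v) == 18              # 2 vertices per subdivision boundary
assert isclose(v[0][0], 50.0)    # first inner vertex lies at angle 0
assert isclose(v[1][0], 100.0)   # first outer vertex lies at angle 0
```

Each adjacent pair of boundaries then yields one quad (two triangles) of the sector.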

Here’s the code which creates our node and associated geometry:

int vertexCount = /* compute number of unique vertices */;
int indexCount = /* number of triangles * 3 */;
node = new QSGGeometryNode;
geom = new QSGGeometry(QSGGeometry::defaultAttributes_TexturedPoint2D(),
                       vertexCount, indexCount);
geom->setIndexDataPattern(QSGGeometry::StaticPattern);
geom->setDrawingMode(GL_TRIANGLES);
node->setGeometry(geom);
node->setFlag(QSGNode::OwnsGeometry);

Once this is done, we can ask the geometry to give us a pointer to the storage it allocated internally to hold our vertices and indices. Note that we're using one of the default vertex attribute formats here, TexturedPoint2D. This means each vertex stores four values: X, Y and two other values called S and T that we can use. There are other built-in formats, but if you need to, you can supply a custom one. Very often, however, one of the basic three (Point2D, TexturedPoint2D or ColoredPoint2D) will suffice, and they save some typing! Here's how we retrieve a pointer to the memory we must fill in with our vertices:

QSGGeometry::TexturedPoint2D *points = geom->vertexDataAsTexturedPoint2D();

Then we need to run some loops, computing the values to insert into the arrays:

// computing s & t not shown here for simplicity
points[i].set(innerRadius * cos(angleInRad), 
              innerRadius * sin(angleInRad), s, t);

For the S and T values, we will actually pass the cartesian values again, modified slightly, so we can compute the polar position of each pixel accurately in our material (see the next post in this series!). Of course, many triangles in our geometry share the same vertices. Graphics is all about efficiency, so rather than repeating our vertices, we only include each one a single time. Then we use a separate piece of data to indicate which triangles use which vertices. This is called the index data, and it's simply a list of integers, specifying vertices by, you guessed it, their index. Again the QSGGeometry will allocate storage for this based on a size we pass in, and can give us a pointer to that memory, to be filled in:

quint16* indices = geom->indexDataAsUShort();
// three sequential entries in 'indices' define a
// single triangle, here using vertices 4, 6 and 3
indices[i] = 4;
indices[i+1] = 6;
indices[i+2] = 3;

The scene-graph API requires us to perform explicit management of memory and resources – this is because it's optimised for speed and efficiency, not comfort. And it allows materials and geometry to be shared between nodes or items, so we need to explicitly state whether we own our geometry. (You can see in the snippets above that we explicitly tell the node it owns the geometry and hence can delete it.) If our item properties change (for example, the radius), we need to recompute our geometry, but then also tell the scene-graph that we made changes, because internally there's a hardware buffer which needs to be updated. That's an operation we want to avoid if nothing has changed, so we have to explicitly mark the data as changed ('dirty') so it will be copied to the GPU on the next Qt Quick drawing frame.

geom->markIndexDataDirty();
geom->markVertexDataDirty();
node->markDirty(QSGNode::DirtyGeometry | QSGNode::DirtyMaterial);

If you forget to tell either the geometry or the node that something has changed, the hardware data won't be updated, and you won't see your changes. Putting this all together, we now have a way to get our geometry object populated with vertices and indices. Each vertex defines an X,Y position, and each group of three index values identifies three vertices, and hence one triangle. We can use some basic trigonometry to work out the X,Y values from our angles and radii. Whenever our angles or radii change, we'll need to recompute our geometric data, and tell the scene-graph that the data is dirty (modified). If we wanted to show our shape with a simple color, we could now use QSGFlatColorMaterial and we'd be done. As the name suggests, this built-in material will draw every pixel in your triangles in a single color. But we want something prettier, so we need a custom material. We'll see how to do that in the next part.

Python – Tron Demo https://www.kdab.com/python-tron-demo/ https://www.kdab.com/python-tron-demo/#comments Tue, 24 Jul 2018 13:18:22 +0000 https://www.kdab.com/?p=18237 For SIGGRAPH, KDAB has been working on a new Qt 3D based demo. We decided that instead of using C++, it would be interesting to try out PySide2 and harness Python to drive the application. The idea behind this demo is to do with data acquisition of a vehicle’s surrounding environment. Once the data is […]

The post Python – Tron Demo appeared first on KDAB.


For SIGGRAPH, KDAB has been working on a new Qt 3D based demo. We decided that instead of using C++, it would be interesting to try out PySide2 and harness Python to drive the application. The demo revolves around data acquisition of a vehicle's surrounding environment. Once the data is acquired, it can be processed and used to display a 3D scene. The application is structured in two main parts. On the one hand, we use QtQuick and the Qt 3D QML API to declare the UI and instantiate the 3D scene. On the other hand, we use Python for the backend logic, data processing and models, and the definition of the custom Qt 3D mesh elements we'll need in the UI.

Simulating Data Acquisition

Since this is a demo, we simulate the data that we acquire rather than rely on real data acquisition through sensors. We simulate only two things:

  • Vehicle position and orientation
  • Road lines

The information for these is obtained by looping around a generated set of road sections. To define a fake road track, we’ve used cubic bezier curves, each bezier curve defining a road section.

A cubic bezier curve is defined as 2 end points + 2 control points. This allows for a rather compact description of the road section we want our vehicle to travel on.
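
As a quick illustration, the Bernstein form of a cubic bezier can be evaluated like this (a standalone Python sketch; the sample values are the demo's first road section):

```python
# Evaluate a cubic bezier: p0 and p3 are the end points, p1 and p2 the
# control points, t runs from 0 to 1 along the curve.
def cubic_bezier(p0, p1, p2, p3, t):
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# First road section of the demo's track
curve = [(318, 84), (479, 18), (470, 233), (472, 257)]
assert cubic_bezier(*curve, 0.0) == (318.0, 84.0)   # starts at p0
assert cubic_bezier(*curve, 1.0) == (472.0, 257.0)  # ends at p3
```

Sampling t at regular steps over each curve is what produces the raw road positions described below.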

Going from one curve to a full road track

Using this tool, we generated the bezier curves with these values:

bezier_curves = [
    [(318, 84), (479, 18), (470, 233), (472, 257)],
    [(472, 257), (473, 272), (494, 459), (419, 426)],
    [(419, 426), (397, 417), (354, 390), (324, 396)],
    [(324, 396), (309, 399), (217, 416), (202, 415)],
    [(202, 415), (157, 412), (116, 278), (114, 263)],
    [(114, 263), (119, 219), (151, 190), (182, 192)],
    [(182, 192), (277, 192), (216, 128), (318, 84)]
]

Notice how each bezier curve starts at the position of the last point of the previous curve. That’s because we want no discontinuity between our road sections.

On each curve, we sample 250 subdivisions to generate raw position data. Given we have 7 curves, that gives us a total of 1750 positions. In real life our vehicle is only aware of the immediately surrounding environment. In our case, we’ve decided that would be about 100 positions in front of the vehicle and 50 positions at the rear. Every 16ms, we increase a global index (which goes from 0 to 1750) and select 150 entries starting at our index. From these 150 positions we extrude 4 lines (to make 3 road lanes). The 50th entry we’ve selected is where we assume our vehicle is.

  • Road section start is positions[0]
  • Road section end is positions[149]
  • Vehicle position is positions[50]
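
The wrap-around window selection described above can be sketched like this (illustrative names; the demo's real backend differs):

```python
# From the 1750 sampled positions, take 150 entries starting at the
# current global index, wrapping around so the track loops forever.
def road_window(positions, index, count=150):
    n = len(positions)
    return [positions[(index + i) % n] for i in range(count)]

positions = list(range(1750))        # stand-in for sampled road points
window = road_window(positions, 1700)
assert len(window) == 150
assert window[0] == 1700             # road section start
assert window[50] == 0               # vehicle position, after the wrap
assert window[149] == 99             # road section end
```

The modulo makes the index rollover at the end of the track invisible to the rest of the pipeline.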

Making our track visible through the camera

In the 3D view we assume the vehicle is placed at (0, 0, 0). The camera is placed slightly behind the vehicle, its view center being the vehicle. So if positions[50] is where our vehicle actually is in the real world, we need to translate all our positions back by minus positions[50]. We also want our vehicle and our camera to rotate as we go along curves. For that, we know that our camera is looking toward -Z (0, 0, -1). We can compute a vector u (vehicle position – road section start) and then find the angle between u and -Z using the dot product.

In code this translates to simply creating a transform matrix:

road_start_position = self.m_points_at_position[0]
screen_origin_position = self.m_points_at_position[50]
def compute_angle_between_road_section_and_z():
    # We want to look toward -Z
    target_dir = QVector3D(0.0, 0.0, -1.0)
    # Our current dir
    current_dir = (screen_origin_position - road_start_position).normalized()
    # Angle between our two vectors is acos(dot, current_dir, target_dir)
    dot = QVector3D.dotProduct(target_dir, current_dir)
    return acos(dot)
rot_angle = compute_angle_between_road_section_and_z()
self.m_road_to_world_matrix = QMatrix4x4()
# Rotate of rot_angle around +Y
self.m_road_to_world_matrix.rotate(degrees(rot_angle), QVector3D(0.0, 1.0, 0.0))
# Translate points back to origin
self.m_road_to_world_matrix.translate(-screen_origin_position)

Then, it’s just a matter of transforming all these positions using the transformation matrix.

3D Rendering

Drawing the road

To render the road lines, we have created a new Qt 3D QGeometry subclass. The Python backend generates new buffer data for the road every frame, based on the 150 transformed positions that have been computed. Basically, for every 2 positions, a quad made up of 2 triangles is generated to make up one part of a road line. This process is repeated 4 times with an offset on the x-axis for each road line. In turn, this is repeated 150 times so that we have quads for each position and for each line to make up our 3 road lanes. We then upload these buffers to the GPU by using a Qt 3D QBuffer and setting its data property.

from PySide2.QtCore import Property, Signal, QByteArray
from PySide2.Qt3DCore import Qt3DCore
from PySide2.Qt3DRender import Qt3DRender
from array import array
class RoadLineGeometry(Qt3DRender.QGeometry):
    def __init__(self, parent=None):
        Qt3DRender.QGeometry.__init__(self, parent)
        self.m_position_buffer = Qt3DRender.QBuffer(self)
        self.m_position_buffer.setUsage(Qt3DRender.QBuffer.StaticDraw)
        self.m_position_attribute = Qt3DRender.QAttribute(self)
        self.m_position_attribute.setAttributeType(Qt3DRender.QAttribute.VertexAttribute)
        self.m_position_attribute.setDataType(Qt3DRender.QAttribute.Float)
        self.m_position_attribute.setDataSize(3)
        self.m_position_attribute.setName(Qt3DRender.QAttribute.defaultPositionAttributeName())
        self.m_position_attribute.setBuffer(self.m_position_buffer)
        self.addAttribute(self.m_position_attribute)
    def update(self, data, width):
        # Data is a QByteArray of floats as vec3
        float_data = array('f', data.data())
        transformed_point = [v for i in range(0, len(float_data), 3)
                             for v in [float_data[i] - width / 2.0, 0.0, float_data[i + 2],
                                       float_data[i] + width / 2.0, 0.0, float_data[i + 2]]]
        self.m_position_buffer.setData(QByteArray(array('f', transformed_point).tobytes()))
        self.m_position_attribute.setCount(len(transformed_point) / 3)

Note: in Python the QGeometry::geometryFactory API is unavailable, meaning that we need to update directly our QBuffer data in the frontend.

Drawing the bike

As for the bike, it is a .obj model we load and then scale and rotate with a transformation matrix. The scale and rotation are updated as we move through the track so that the bike always aligns with the road. To load the geometry, a custom .obj loader was written. Regular wireframing techniques usually display triangles if the mesh was exported as triangles (which was the case for us). Our custom loader works around that by analyzing the faces described in the .obj file and generating lines to match them (we have square faces in our case). In addition, the bike uses a special FrameGraph. I won't describe it in detail, but instead just give you a rough idea of what it does:

  1.  A first pass is rendered into a texture.
  2. A loop is made
    1. An input texture is then used as a base for a multi pass gaussian blur.
    2. The fragment color is summed up with the previous blur output which ends up creating a Bloom Effect.
    3. input texture for next pass = output texture of this pass

The Tron-like appearance is simply a result of this FrameGraph Bloom effect.

What about PySide2 and Qt 3D?

Using Python and PySide2 instead of C++ to create a Qt 3D-based application has quite a few advantages but also a couple of disadvantages:

  • Pros:
    • You can leverage 90% of the C++ API through the Python bindings
    • You can still use QML for the UI
    • A lot of C++ boilerplate is avoided, prototyping faster
    • Deployment is easy on desktop (pip install pyside2)
  • Cons
    • Qt 3D documentation for Python is still lacking; it's tricky to find how to import the namespaces
    • Conversion of some Qt types to Python types is tricky (QByteArray, QVector<QVector3D>, …)
    • Deployment is hard on embedded (no ARM PySide2 pip release yet)
    • Some odd behaviors and queued invocations (a variable having to be printed for some changes to be taken into account).

Overall it was an enjoyable experience and with the bindings, Python is a real alternative to C++ for most of what regular users might wish to do with Qt 3D. You can download Qt for Python here.

The post Python – Tron Demo appeared first on KDAB.

KDAB at SIGGRAPH 2018 https://www.kdab.com/kdab-at-siggraph-2018/ https://www.kdab.com/kdab-at-siggraph-2018/#respond Mon, 21 May 2018 14:43:32 +0000 https://www.kdab.com/?p=17772

The post KDAB at SIGGRAPH 2018 appeared first on KDAB.

Yes, folks. This year SIGGRAPH 2018 is in Canada and we’ll be there at the Qt booth, showing off our latest tooling and demos. These days, you’d be surprised where Qt is used under the hood, even by the biggest players in the 3D world! SIGGRAPH 2018 is a five-day immersion into the latest innovations in CG, Animation, VR, Games, Digital Art, Mixed Reality and Emerging Technologies. Experience research, hands-on demos, and fearless acts of collaboration. Meet us at SIGGRAPH 2018! Book your place for the most exciting 3D event of the year!

New in Qt 5.10: Texture Based Animations in Qt 3D https://www.kdab.com/new-in-qt-5-10-texture-based-animations-in-qt-3d/ https://www.kdab.com/new-in-qt-5-10-texture-based-animations-in-qt-3d/#comments Tue, 27 Mar 2018 08:26:17 +0000 https://www.kdab.com/?p=17101

The post New in Qt 5.10: Texture Based Animations in Qt 3D appeared first on KDAB.

Many new features were added to Qt 3D in the 5.10 release. One of them is the support for sprite sheets, contributed by KDAB, as provided by QSpriteGrid and QSpriteSheet and their respective QML items.

One way of animating things is to switch between many different versions of the same object at different points in time, like the flip books we all enjoyed as kids. If you flip fast enough, you get the illusion of animation.

In the context of OpenGL and Qt 3D, images are simply textures and are very commonly used to add details to 3D models. The naive approach to animating a texture would be to use lots of separate textures. However, switching textures has a very high cost, so traditionally modellers use texture atlases, where all the images are arranged into a single texture. This complicates modelling slightly, as the original texture coordinates need to be modified to point to the portion of the atlas that now contains the relevant image.
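To make that coordinate remapping concrete, here is a small sketch (plain Python; the atlas layout and all the numbers are made up for illustration) of how a model's original UVs map into an atlas sub-rectangle:

```python
# Sketch: remap a texture coordinate from a standalone image into the
# sub-rectangle an atlas stores that image in. The sizes below are
# illustrative; a real atlas stores one such rectangle per image.

def remap_uv(u, v, sub_x, sub_y, sub_w, sub_h, atlas_w, atlas_h):
    """(u, v) in [0, 1] over the original image -> (u', v') over the atlas."""
    return ((sub_x + u * sub_w) / atlas_w,
            (sub_y + v * sub_h) / atlas_h)

# An image stored at (256, 0) with size 256x256 inside a 1024x512 atlas:
uv = remap_uv(0.5, 0.5, 256, 0, 256, 256, 1024, 512)  # -> (0.375, 0.25)
```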

In effect, sprite sheets are simplified atlases that take care of this for you. They are commonly used in 2D or 2.5D applications to animate effects or characters.

Building Sprite Sheets

Simple sprite sheets are just regular grids of images.

However, as with general atlases, all individual images need not be the same size or be arranged in a grid. In that case, individual sprites need to be specified by their bounding rectangle within the texture.

There are a number of applications that can be used to create them. Blender can output the steps of an animation to a sprite sheet. TexturePacker can also be used to assemble pre-existing images.

Texture Transforms

In the simplest case, a sprite will be mapped onto a planar surface, a simple rectangle. When applying textures to a surface, you of course need to provide texture coordinates, mapping points on the surface to pixels in the texture. In the case of the PlaneMesh, the precomputed texture coordinates map the texture so that it covers the entire surface.

However, if the source texture is a sprite sheet, those texture coordinates no longer work: we want to cover only the specific sprite, not the entire sheet.

One thing you do NOT want to do is replace the texture coordinates every time you want to change sprites.

In effect, texture coordinates need to be:

  • scaled to cover the range of a sprite cell
  • offset to specify the right origin

This transformation can easily be encoded in a 3×3 matrix which is applied to texture coordinates in the vertex shader. In order to support this, QTextureLoader has been extended with a textureTransform parameter which is passed to the shader as a uniform.
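As an illustration (plain Python with hypothetical cell values, not the actual shader code), the scale-and-offset transform for one sprite cell can be written as a 3×3 matrix and applied to a homogeneous (u, v, 1) coordinate:

```python
# Sketch: build the 3x3 texture transform for a sprite cell and apply
# it to a UV coordinate, as the vertex shader would. The cell values
# below are illustrative.

def texture_transform(scale_u, scale_v, offset_u, offset_v):
    """Row-major 3x3 matrix: scale the UVs, then offset them."""
    return [[scale_u, 0.0, offset_u],
            [0.0, scale_v, offset_v],
            [0.0, 0.0, 1.0]]

def apply(m, u, v):
    """Multiply (u, v, 1) by the matrix."""
    return (m[0][0] * u + m[0][1] * v + m[0][2],
            m[1][0] * u + m[1][1] * v + m[1][2])

# Third cell of a 6-column, 2-row grid: scale by 1/6 x 1/2, offset (2/6, 0):
m = texture_transform(1 / 6, 1 / 2, 2 / 6, 0.0)
uv = apply(m, 1.0, 1.0)  # top-right corner of that cell
```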

Putting it all together

A sprite sheet is conceptually very simple in Qt 3D. It has:

  • a list of “areas”, one for each sprite
  • a current sprite index in that list
  • an input texture to pick sprites from
  • a 3×3 texture transform matrix which is updated every time the current index changes

So simple animations can be achieved by changing the current sprite index and binding the texture transform to the matching property in the QTextureLoader instance.

Sprite Grids

QSpriteGrid is used when sprites are arranged in a regular grid.

Entity {

    PlaneMesh {
        id: mesh
    }

    TextureMaterial {
        id: material
        texture: TextureLoader {
            id: textureLoader
            source: "spritegrid.png"
            mirrored: false
        }
        textureTransform: spriteGrid.textureTransform
    }

    SpriteGrid {
        id: spriteGrid
        rows: 2; columns: 6
        texture: textureLoader
    }

    components: [ mesh, material ]
}

Images in the grid are assumed to be arranged in row-major order, so currentIndex must remain between 0 and rows * columns - 1. The texture property points to the image containing the sprites. The current index, the number of rows and columns, and the actual size of the texture are all used to compute the texture transform.
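That computation can be sketched as follows (plain Python, mirroring the 2×6 grid from the QML above; the real QSpriteGrid also accounts for the texture size and V-axis orientation, which this deliberately ignores):

```python
# Sketch: derive scale and offset for a row-major grid cell from
# currentIndex, rows and columns. Ignores the texture-size and V-flip
# details that the real QSpriteGrid handles.

def grid_cell_transform(index, rows, columns):
    assert 0 <= index < rows * columns, "currentIndex out of range"
    row, col = divmod(index, columns)   # row-major ordering
    scale = (1.0 / columns, 1.0 / rows)
    offset = (col / columns, row / rows)
    return scale, offset

# Index 7 in a 2x6 grid -> second row, second column:
scale, offset = grid_cell_transform(7, rows=2, columns=6)
```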

Sprite Sheets

QSpriteSheet is used when the sprites are not all the same size and/or are not organised in a grid. You then need to specify the extent of each sprite.

Entity {

    PlaneMesh {
        id: mesh
    }

    TextureMaterial {
        id: material
        texture: TextureLoader {
            id: textureLoader
            source: "spritegrid.png"
            mirrored: false
        }
        textureTransform: spriteSheet.textureTransform
    }

    SpriteSheet {
        id: spriteSheet
        texture: textureLoader

        SpriteItem { x:    0; y:   0; width: 250; height: 172 }
        SpriteItem { x:  276; y:   0; width: 250; height: 172 }
        SpriteItem { x:  550; y:   0; width: 250; height: 172 }
        //...
    }

    components: [ mesh, material ]
}

The currentIndex must remain between 0 and the number of SpriteItem children minus one.
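The per-sprite transform follows the same pattern as the grid case: scale by the sprite's size relative to the texture and offset by its position. A sketch (plain Python; the 800×344 texture size is invented to fit the rectangles above, and V-axis orientation is again ignored):

```python
# Sketch: scale/offset for an arbitrary sprite rectangle inside a
# texture. The texture size below is illustrative.

def sprite_transform(x, y, width, height, tex_w, tex_h):
    scale = (width / tex_w, height / tex_h)
    offset = (x / tex_w, y / tex_h)
    return scale, offset

# Second SpriteItem from the listing above, in a hypothetical 800x344 texture:
scale, offset = sprite_transform(276, 0, 250, 172, 800, 344)
```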

Example

 

In this example, we use a sprite grid which contains frames of an explosion. A timer is used to change the current index. While the animation runs, the object is faded out.

Looking straight on, you see a rather nice effect. The illusion becomes clear when you look at it sideways and see the plane on which the texture is mapped.

Notes:

  • In the general 3D case, it is common to combine sprite sheets with billboards in order to keep the plane aligned with the screen. This can normally be done easily in the shaders.
  • QSpriteSheet doesn’t currently support rotated sprites.
  • QTextureMaterial does not currently support alpha blending, so transparent portions of the textures will appear black. This can be worked around by building a custom material, and will be fixed in 5.11.
  • One of the problems with putting all the images in one texture is that you quickly run into the maximum texture size. For bigger images, a more modern approach may be to use QTexture2DArray. However, in that case all the images need to be the same size; the only limits then are the maximum texture size and the amount of texture memory available.

 
