Setting up Kinect for programming in Linux (part 2)

Last month, we had a look at how to set up the Kinect on a Linux (Ubuntu) machine (if you missed it, you can find it here).

Today, we will continue exploring the Kinect's possibilities by using the device in a program written with Qt. We will first design a class that acts as our interface between the OpenKinect/NITE libraries and Qt, then instantiate it in a small example program.

A word on the Kinect APIs we’ll use

OpenKinect

OpenKinect offers a standard C++ API to operate the device. It comes with a lot of different functions, but doesn’t offer gesture recognition yet. We will use it to control the LED and the tilt motor of the Kinect.

OpenNI/NITE

These libraries also come with a C++ API; we will use them to process the depth map created by the Kinect and to generate gestures usable by our class.

The Goal: track your hand and detect when you “push”

Overview

Let’s have a quick look at the example program:

  • The grey rectangle at the top of the window is a representation of the Kinect scene.
  • The white rectangle represents your hand; it has a label attached showing its current coordinates. Shake your hand in front of the Kinect for it to be recognized.
  • Every time you “push”, a (randomly colored) circle appears where your hand is.
  • The bottom left panel shows messages generated by the different libraries.
  • The bottom right panel allows you to modify the current physical state of the Kinect (switch the LED, tilt the head).
  • You can reset the view (remove colored circles) by clicking on the “clear preview” button.

Prerequisites

You can get the sources for the demo application here; make sure you have installed all the required libraries (see part 1 of this tutorial).

Note: the qmake project file assumes that the third-party libraries (OpenNI, Freenect, NITE) are in the same directory as the demo application.

If you get an error like libusb couldn’t open USB device /dev/bus/usb/001/006: Permission denied, follow this procedure (found here):

$ sudo adduser $USER video
$ sudo nano /etc/udev/rules.d/51-kinect.rules

Then copy the following lines into the file:

# ATTR{product}=="Xbox NUI Motor"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"
# ATTR{product}=="Xbox NUI Audio"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"
# ATTR{product}=="Xbox NUI Camera"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"

Be sure to log out and back in for the group change to take effect.

Now that we’re up and running, let’s have a closer look at the code.

Class design

“How should I design my application in order to use the Kinect with Qt?”

For this example, I chose to encapsulate all third-party APIs inside QObject-based classes, enabling communication to take place through the signal/slot mechanism.
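
To make the structure concrete, here is a minimal sketch of what such a wrapper can look like. initDevice() and message() appear in the demo sources; the two gesture signals are illustrative names, not necessarily those used in the actual code:

#include <QThread>
#include <QString>
#include <QPointF>

// Minimal sketch of the wrapper: a QThread-derived QObject, so the Kinect
// loop runs in its own thread and talks to the GUI via signals.
class QtKinect : public QThread
{
    Q_OBJECT
public:
    explicit QtKinect(QObject *parent = 0) : QThread(parent) {}

    void initDevice();                      // sets everything up, then calls start()

signals:
    void message(const QString &msg);       // log output forwarded to the GUI
    void handMoved(const QPointF &pos);     // illustrative: hand position updates
    void pushDetected(const QPointF &pos);  // illustrative: a "push" occurred

protected:
    void run();                             // the thread's main loop (see below)
};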

Kinect initialization:

“What process should be followed to initialize all the necessary Kinect-related objects in my application? How can I choose which gestures will be generated by NITE, and how can I tune them?”

Let’s have a look at void QtKinect::initDevice().

  • First of all, we initialize the OpenNI-related objects.

m_context is our main OpenNI object. We start by loading an XML file containing all the “node” information. A node defines a part of what OpenNI can do; here we need the depth map, the hands generator and the gesture generator, so we check that they have been correctly loaded (with the CHECK_RC and CHECK_ERRORS macros, see QtKinect_global.h for their definitions). Check this page for more information about OpenNI contexts.

Note: it is not mandatory to use the XML file for the initialization of the OpenNI context; you can also use xn::Context::Init() and set up the environment you want manually. Check this page.
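
For illustration, the manual route could look roughly like this, reusing the same member variables and error-checking macros; the demo itself sticks to the XML file, as shown next:

// Rough sketch of manual (non-XML) initialization of the three nodes.
XnStatus rc = m_context.Init();
CHECK_RC(rc, "Init");
rc = m_depthGenerator.Create(m_context);
CHECK_RC(rc, "Create depth generator");
rc = m_handsGenerator.Create(m_context);
CHECK_RC(rc, "Create hands generator");
rc = m_gestureGenerator.Create(m_context);
CHECK_RC(rc, "Create gesture generator");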

rc = m_context.InitFromXmlFile(m_xmlPath.toAscii(), m_scriptNode,
                               &errors);
CHECK_ERRORS(rc, errors, "InitFromXmlFile");
CHECK_RC(rc, "InitFromXmlFile");
rc = m_context.FindExistingNode(XN_NODE_TYPE_DEPTH, m_depthGenerator);
CHECK_RC(rc, "Find depth generator");
rc = m_context.FindExistingNode(XN_NODE_TYPE_HANDS, m_handsGenerator);
CHECK_RC(rc, "Find hands generator");
rc = m_context.FindExistingNode(XN_NODE_TYPE_GESTURE,
                                m_gestureGenerator);
CHECK_RC(rc, "Find gesture generator");
  • Once the OpenNI objects are initialized, we can continue with the NITE objects.

This next part is about declaring which gestures we want to recognize.

First, we need to instantiate the session manager; it will manage all our gestures. The Initialize() method allows you to define which gestures will trigger hand recognition; here it is set to “click, wave”. You can also set the “refocus” gesture, which allows your hand to be recognized again after the Kinect has recently lost focus. For now, it is set to “raise hand”.

(See the XnVSessionManager class reference.)

The RegisterSession() method allows you to assign callbacks to predefined events:

    • A new session has started (a user’s hand has been recognized)
    • A session has stopped
    • A session has started, but the “subject” is not visible anymore

m_pSessionManager = new XnVSessionManager;
rc = m_pSessionManager->Initialize(&m_context,
                         "Click,Wave", "RaiseHand");
CHECK_RC(rc, "SessionManager::Initialize");
m_pSessionManager->RegisterSession(this, &::SessionStarting,
                         &::SessionEnding, &::FocusProgress);
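
The three callbacks are plain free functions whose signatures are dictated by the NITE session listener types. Here is a sketch; the bodies (what you do with the user context) are illustrative:

// userCxt is the 'this' pointer we passed to RegisterSession().
void XN_CALLBACK_TYPE SessionStarting(const XnPoint3D &ptFocus, void *userCxt)
{
    // A hand was recognized at ptFocus; e.g. notify the GUI.
}

void XN_CALLBACK_TYPE SessionEnding(void *userCxt)
{
    // Tracking stopped; e.g. hide the hand marker.
}

void XN_CALLBACK_TYPE FocusProgress(const XnChar *strFocus, const XnPoint3D &ptPosition,
                                    XnFloat fProgress, void *userCxt)
{
    // fProgress runs from 0.0 to 1.0 while the focus gesture is performed.
}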

Now that the session manager is able to recognize user interaction, let’s instantiate the gestures we want, starting with hand tracking.

The process is similar for each new gesture:

    • Create a NITE gesture object (such as XnVPointControl, XnVPushDetector, XnVSelectableSlider1D, XnVSwipeDetector, etc.; have a look here for the whole list).
    • Register the callbacks you want for each object, via the Gesture::Register*() methods.
    • Set the parameters of the gesture via the Gesture::Set*() methods (here you can define options such as push velocity, maximum acceptable angle, etc.).
    • Add the gesture to a new flow router.

m_handTracker = new XnVPointControl("Hand tracker");
m_handTracker->RegisterPointCreate(this, &::PointCreateCB);
m_handTracker->RegisterPointUpdate(this, &::PointUpdateCB);
m_pFlowRouters.append(new XnVFlowRouter);
m_pFlowRouters.last()->SetActive(m_handTracker);

//push
m_pushDetector = new XnVPushDetector("pushes");
m_pushDetector->RegisterPush(this, &::PushCB);
m_pFlowRouters.append(new XnVFlowRouter);
m_pFlowRouters.last()->SetActive(m_pushDetector);
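
As with the session callbacks, PointUpdateCB and PushCB are free functions whose signatures come from the NITE headers (PointCreateCB has the same signature as PointUpdateCB); what they do with the data, sketched here as comments, is up to our class:

void XN_CALLBACK_TYPE PointUpdateCB(const XnVHandPointContext *pContext, void *userCxt)
{
    // pContext->ptPosition holds the current hand position;
    // e.g. forward it to the GUI through a Qt signal.
}

void XN_CALLBACK_TYPE PushCB(XnFloat fVelocity, XnFloat fAngle, void *userCxt)
{
    // fVelocity is the speed of the push, fAngle its angle from the Z axis;
    // e.g. tell the GUI to draw a circle at the last known hand position.
}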

Once you have instantiated all the gestures you want, add each flow router to the session manager we created earlier.

//add the listeners
foreach(XnVFlowRouter * router, m_pFlowRouters)
        m_pSessionManager->AddListener(router);

Now that the gesture/camera part is initialized, we need to initialize the OpenKinect driver in order to use the Kinect LED and tilt motor.

if (freenect_init(&m_f_ctx, NULL) < 0) {
 emit message(tr("Freenect driver failed to initialize."));
 return;
}
freenect_set_log_level(m_f_ctx, FREENECT_LOG_DEBUG);
int nr_devices = freenect_num_devices (m_f_ctx);
emit message(tr("Number of devices found: %1").arg(nr_devices));
freenect_select_subdevices(m_f_ctx, FREENECT_DEVICE_MOTOR);
if (freenect_open_device(m_f_ctx, &m_f_dev, 0) < 0) {
 emit message("Could not open device");
 m_deviceOpened = false;
}
else {
 emit message(tr("device successfully opened"));
 m_deviceOpened = true;
}
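
With the device open, driving the LED and the tilt motor is a one-liner each. The slot names below are illustrative; freenect_set_led() and freenect_set_tilt_degs() are the actual libfreenect calls behind the bottom right panel:

void QtKinect::setLed(int option)
{
    if (m_deviceOpened)
        freenect_set_led(m_f_dev, static_cast<freenect_led_options>(option));
}

void QtKinect::setTiltAngle(double degrees)
{
    if (m_deviceOpened)
        freenect_set_tilt_degs(m_f_dev, degrees); // hardware range is roughly -30 to +30 degrees
}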

At this point, both the OpenNI and OpenKinect drivers are initialized; we can start generating the gestures and enter the thread's main loop.

rc = m_context.StartGeneratingAll();
CHECK_RC(rc, "StartGenerating");
rc = m_gestureGenerator.StartGenerating();
CHECK_RC(rc, "StartGenerating gestures");

emit message(tr("Kinect initialized. Entering main loop."));
start();

The main loop:

The main loop just waits for new incoming events from the OpenNI driver (gestures, etc.) and updates the session manager.

XnStatus rc = XN_STATUS_OK;
forever {
   if (m_terminateExecution)
       return;
   rc = m_context.WaitAnyUpdateAll();
   CHECK_RC(rc, "updating context");
   m_pSessionManager->Update(&m_context);
}

The example GUI

Now that we have a class that uses the signal/slot mechanism to forward the generated gestures, it becomes really easy to interface it with any other Qt program. You just need to instantiate the class, connect the signals, and call the public method initDevice() to start the Kinect event loop.
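
For instance, the wiring could look roughly like this (the receiver objects and their slots are hypothetical; message() and initDevice() come from the wrapper class):

QtKinect *kinect = new QtKinect(this);
connect(kinect, SIGNAL(message(QString)),
        logPanel, SLOT(append(QString)));       // e.g. a QTextEdit as log panel
connect(kinect, SIGNAL(handMoved(QPointF)),
        scene, SLOT(moveHandMarker(QPointF)));  // hypothetical scene slot
connect(kinect, SIGNAL(pushDetected(QPointF)),
        scene, SLOT(addCircle(QPointF)));       // hypothetical scene slot
kinect->initDevice(); // initializes the Kinect and enters the event loop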


Categories: KDAB Blogs / KDAB on Qt / Kinect
