Vuforia Support for Custom Devices using Vuforia Driver

This document provides information for OEMs (Original Equipment Manufacturers) working to enable Vuforia on custom hardware using the Vuforia Driver API. It focuses on hardware that includes a positional device tracker as well as an image (preferably RGB) camera.

Prerequisites

To integrate with Vuforia Engine, the device must be able to provide the following data:

  1. Frames from an image camera in one of the Vuforia-supported formats (YUYV, NV12, NV21).
  2. For each frame, the 6DOF pose of the camera at the time the frame was captured.

For a detailed description of the Vuforia Driver API please see ‘Driver.h’ in the Vuforia Engine SDK.

Integration Overview

A Vuforia Driver is a library (.so or .dll) used by the Vuforia Engine library to access non-standard data sources. The Vuforia Driver and Vuforia Engine libraries must both be included in the App package, as illustrated in the diagram below.
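
Before Vuforia Engine can use the driver, the application must tell Engine to load it. The sketch below assumes the Vuforia::setDriverLibrary() entry point from the native SDK; the exact name and signature may differ between Engine versions, so check the headers shipped with your SDK.

#include <Vuforia/Vuforia.h>

// Minimal sketch, assuming Vuforia::setDriverLibrary() as exposed by the
// native SDK; consult your SDK headers for the exact signature.
bool initEngineWithCustomDriver()
{
    // Must be called before Vuforia::init(). The library name is resolved
    // inside the App package (e.g. libMyDriver.so on Android).
    Vuforia::setDriverLibrary("libMyDriver.so", nullptr);

    // init() reports progress as a percentage and a negative value on error.
    int progress = 0;
    do
    {
        progress = Vuforia::init();
    } while (progress >= 0 && progress < 100);

    return progress == 100;
}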

The next diagram illustrates our recommended approach for integrating Vuforia with an eyewear device with a see-through display. Such a device already has a 6DOF positional device tracker, and its display system is optimized for low latency, likely including some form of head-pose prediction.

The blue-colored inner loop represents the low-latency tracking and rendering path provided by the device. The green-colored outer loop represents the Vuforia integration. A Vuforia Driver provides poses and images to Vuforia Engine, which then reports object poses in the device's world coordinate system. Because the device already renders from its world coordinate system with low latency, augmentations rendered relative to Vuforia-detected objects will exhibit the same visual stability provided by the device.

NOTE: Vuforia Engine does not currently optimize for moving objects.

Integration Details

Sending data from your Vuforia Driver

Poses and camera frames must be delivered through the Vuforia Driver in a fixed order: the pose first, immediately followed by the camera frame, with both carrying exactly the same timestamp value (in nanoseconds).
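
The following sketch shows this delivery order. The callback objects and the Pose/CameraFrame types stand in for those declared in 'Driver.h'; the actual names, fields, and signatures depend on your SDK version, and the toVuforiaPose/toVuforiaFrame helpers are hypothetical.

#include <cstdint>

// Sketch only: delivering one pose/frame pair. PoseCallback, CameraCallback,
// Pose and CameraFrame stand in for the types declared in Driver.h; the
// DevicePose/CapturedFrame types and conversion helpers are hypothetical.
void deliverSample(uint64_t timestampNs,
                   VuforiaDriver::PoseCallback* poseCallback,
                   VuforiaDriver::CameraCallback* cameraCallback,
                   const DevicePose& devicePose,
                   const CapturedFrame& capturedFrame)
{
    VuforiaDriver::Pose pose = toVuforiaPose(devicePose);
    VuforiaDriver::CameraFrame frame = toVuforiaFrame(capturedFrame);

    // Both records must carry exactly the same timestamp, in nanoseconds.
    pose.timestamp = timestampNs;
    frame.timestamp = timestampNs;

    // Order matters: deliver the pose first, then immediately the frame.
    poseCallback->onNewPose(&pose);
    cameraCallback->onNewCameraFrame(&frame);
}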

When calculating the pose to deliver through your Vuforia Driver, you should ensure the following (a conversion sketch follows the list):

  1. The pose represents the position and orientation of the camera in the world.
  2. The pose is in the camera coordinate system.
  3. The pose is in a right-handed (RH) coordinate system.
  4. The pose is expressed in Computer Vision (CV) convention (x-right, y-down, z-away) as illustrated below.
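
If your tracker reports poses in the rendering convention (x-right, y-up, z-towards the viewer), converting a camera-to-world pose to the CV convention amounts to flipping the camera's Y and Z axes; the translation is unchanged because the camera's position in the world does not move. A minimal sketch, assuming a row-major 3x3 rotation (names are illustrative):

#include <cstddef>

// Sketch: convert a camera-to-world rotation from the rendering convention
// (x-right, y-up, z-towards the viewer) to the CV convention (x-right,
// y-down, z-away): R_cv = R_render * diag(1, -1, -1).
void renderingToCvConvention(float rotation[9]) // row-major 3x3
{
    for (std::size_t row = 0; row < 3; ++row)
    {
        rotation[row * 3 + 1] = -rotation[row * 3 + 1]; // flip camera Y axis
        rotation[row * 3 + 2] = -rotation[row * 3 + 2]; // flip camera Z axis
    }
}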

Known issues

If the pose capability is declared in the driver but poses are not sent, camera frames will be retained in memory by Vuforia Engine and the device will run out of memory after a few seconds.

Output from Vuforia Engine

Vuforia Engine reports results in the State object. Object and Device Tracker poses are reported in the world coordinate system. When poses are provided to Vuforia Engine by a Vuforia Driver, this means that poses are reported in the device's positional device tracker coordinate system.
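
As an illustration, with the classic Vuforia C++ API the per-frame results can be read roughly as follows. Accessor names differ between Engine versions, so treat this as a sketch and check the headers in your SDK; Vuforia::Tool::convertPose2GLMatrix is the usual helper for turning a 3x4 pose into a 4x4 rendering matrix.

#include <Vuforia/TrackerManager.h>
#include <Vuforia/StateUpdater.h>
#include <Vuforia/State.h>
#include <Vuforia/TrackableResult.h>
#include <Vuforia/Tool.h>

// Sketch, assuming the classic Vuforia C++ API; accessor names vary between
// Engine versions. Poses come back in the world (device tracker) coordinate
// system.
void processVuforiaState()
{
    const Vuforia::State state =
        Vuforia::TrackerManager::getInstance().getStateUpdater().updateState();

    for (int i = 0; i < state.getNumTrackableResults(); ++i)
    {
        const Vuforia::TrackableResult* result = state.getTrackableResult(i);

        // Pose of the object in the device's world coordinate system.
        const Vuforia::Matrix34F& pose = result->getPose();

        // Convert to a 4x4 matrix usable as a model matrix when rendering.
        Vuforia::Matrix44F modelMatrix = Vuforia::Tool::convertPose2GLMatrix(pose);
        (void)modelMatrix; // position the augmentation with this matrix
    }
}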

NOTE: Device Tracker and camera poses reported by Vuforia use the traditional rendering spatial model (x-right, y-up, z-towards the viewer), as illustrated below:

For rendering, we assume that the device renders in the positional device tracker coordinate system, positioning the scene camera based on the current device pose.
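
In practice this means the scene camera's view matrix is simply the inverse of the device's camera-to-world pose. A minimal sketch using GLM (illustrative only; your engine's math library will differ):

#include <glm/glm.hpp>

// Sketch: position the scene camera from the current device pose.
// devicePoseWorld is the camera-to-world transform reported by the
// positional device tracker, in the rendering convention described above.
glm::mat4 makeViewMatrix(const glm::mat4& devicePoseWorld)
{
    // Rendering needs world-to-camera, i.e. the inverse of the device pose.
    return glm::inverse(devicePoseWorld);
}

Augmentations attached to Vuforia-detected objects then use the object poses from the State as their model matrices, so everything composes in the device tracker's world coordinate system.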

Unity Specific Information

When using the Vuforia Engine Unity Package with an external device tracker, you need to consider how the ‘Main Camera’ will be positioned. In a Vuforia application for a phone or tablet, the ‘Main Camera’ is typically positioned by Vuforia; this can be either:

  1. Relative to the detected object, or
  2. Relative to the origin of the device tracker

On custom hardware with its own 6DOF positional device tracker, we expect that the ‘Main Camera’ will be positioned by an OEM-provided mechanism. In this case Vuforia should not attempt to position the ‘Main Camera’; an API is provided to allow the Unity developer to disable Vuforia updates:

VuforiaConfiguration.Instance.DeviceTracker.UseThirdPartySeethroughEyewear = true;

An example ‘preinitializer’ script using this API can be provided on request.

When using this configuration, you should also ensure that the Vuforia Device Tracker configuration is set in the VuforiaConfiguration as follows:

and that the ‘World Center Mode’ of the ‘VuforiaBehaviour’ script attached to the ARCamera GameObject is set to DEVICE:

When integrating with Unity, we recommend first building a Unity application that uses the device’s 6DOF positional device tracker to position a static cube in space. You should observe that the cube stays in the same place as the device is moved around the environment.

Once this is working, add the VuforiaBehaviour script to the camera rig, place a Vuforia Image Target in the scene, and add a simple augmentation as a child of the Image Target GameObject. You should observe that the augmentation appears when the Image Target is detected and that it remains locked to the target.