Working with the Camera

Camera Focus Modes

Continuous Autofocus and Other Focus Modes

This article describes the focus modes available in Vuforia Engine (v1.5 and later).

The behavior of each focus mode is described below:

  • FOCUS_MODE_NORMAL: sets the camera into the default mode as defined by the camera driver.
  • FOCUS_MODE_TRIGGERAUTO: triggers a single autofocus operation.
  • FOCUS_MODE_CONTINUOUSAUTO: turns on driver-level continuous autofocus. This mode is optimal for AR applications and yields the best tracking results because it keeps the camera focused on the target. (Available since Android 2.3 and on iOS devices.)
  • FOCUS_MODE_INFINITY: sets the camera focus to infinity, as provided by the camera driver implementation. (Not supported on iOS.)
  • FOCUS_MODE_MACRO: sets the camera to macro mode, as provided by the camera driver implementation. This mode provides a sharp camera image at close range (approximately 15 cm) and is rarely used in AR scenarios. (Not supported on iOS.)

We encourage using FOCUS_MODE_CONTINUOUSAUTO in your applications whenever it is available on the device. If setFocusMode() returns TRUE when setting this mode, your application will receive sharp camera images, which benefits both rendering quality and tracking robustness.

If FOCUS_MODE_CONTINUOUSAUTO is not available, the next best option is to implement a 'touch to focus' behavior in your app: call setFocusMode() with the FOCUS_MODE_TRIGGERAUTO value each time the user touches the screen (see the sketch below). The disadvantage of this approach is that most camera drivers pick an arbitrary direction to focus (near or far), so there is roughly a 50% chance that the image defocuses before it settles on the target. Because of this focus logic, tracking can be lost for a moment until the camera delivers a sharp image again.
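
For example, a minimal native (C++) sketch of this behavior is shown below; onUserTouchedScreen() is a hypothetical handler that you would wire up to your platform's touch event, and is not part of the Vuforia API:

   // Hypothetical handler; call this from your platform's touch event.
   void onUserTouchedScreen()
   {
     // Trigger a single autofocus operation on each tap.
     bool focusTriggered = Vuforia::CameraDevice::getInstance().setFocusMode(
       Vuforia::CameraDevice::FOCUS_MODE_TRIGGERAUTO);
     if (!focusTriggered)
     {
       LOG("Failed to trigger autofocus (unsupported mode).");
     }
   }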

FOCUS_MODE_INFINITY and FOCUS_MODE_MACRO are usable in certain application scenarios, as described above.

FOCUS_MODE_NORMAL sets the camera into the default mode as defined by the camera driver.

C++

   bool focusModeSet = Vuforia::CameraDevice::getInstance().setFocusMode(
       Vuforia::CameraDevice::FOCUS_MODE_CONTINUOUSAUTO);
   if (!focusModeSet) {
       LOG("Failed to set focus mode (unsupported mode).");
   }

Java

    boolean focusModeSet = CameraDevice.getInstance().setFocusMode(
        CameraDevice.FOCUS_MODE.FOCUS_MODE_CONTINUOUSAUTO);
    if (!focusModeSet) {
        Log.e("FocusMode", "Failed to set focus mode (unsupported mode).");
    }

Unity

    bool focusModeSet = CameraDevice.Instance.SetFocusMode(
        CameraDevice.FocusMode.FOCUS_MODE_CONTINUOUSAUTO);
    if (!focusModeSet) {
        Debug.Log("Failed to set focus mode (unsupported mode).");
    }

You can configure autofocus when Vuforia Engine initializes by adding the following code to a script that inherits from MonoBehaviour. This registers callbacks with the VuforiaARController that set the focus mode once Vuforia Engine has started and again whenever the app is resumed.

    void Start ()
    {
        var vuforia = VuforiaARController.Instance;
        vuforia.RegisterVuforiaStartedCallback(OnVuforiaStarted);
        vuforia.RegisterOnPauseCallback(OnPaused);
    }
     
    private void OnVuforiaStarted()
    {
        CameraDevice.Instance.SetFocusMode(
            CameraDevice.FocusMode.FOCUS_MODE_CONTINUOUSAUTO);
    }
     
    private void OnPaused(bool paused)
    {
        if (!paused) // resumed
        {
            // Set the autofocus mode again when the app is resumed
            CameraDevice.Instance.SetFocusMode(
                CameraDevice.FocusMode.FOCUS_MODE_CONTINUOUSAUTO);
        }
    }

How To Obtain HD Camera Frames

HD camera view enables apps to request camera preview frames in HD, so apps can offer features such as recording and sharing images and videos in HD. HD mode is enabled by default on supported devices, and there is nothing specific an app must do. Supported devices are those with an HD camera and screen and a quad-core processor.

How To Access the Camera Image in Unity

This article describes two approaches for obtaining the camera image (without augmentation) in Unity:

  • Use the Vuforia.Image class
  • Use the image as an OpenGL texture

Use the Vuforia.Image class

Using the Vuforia.Image class works much like the native version.

Register for the desired image format using the CameraDevice.SetFrameFormat() method:

    CameraDevice.Instance.SetFrameFormat(PIXEL_FORMAT.RGB888, true);

Note: The Vuforia Namespace Reference page lists the available pixel formats.

Call this method after Vuforia Engine has been initialized and started; to do so, register an OnVuforiaStarted callback in the Start() method of your MonoBehaviour script, for example:

  void Start()
  {
      VuforiaARController.Instance.RegisterVuforiaStartedCallback(OnVuforiaStarted);
  }

  private void OnVuforiaStarted()
  {
      // Vuforia has started, now register camera image format
      if (CameraDevice.Instance.SetFrameFormat(mPixelFormat, true))
      {
          Debug.Log("Successfully registered pixel format " + mPixelFormat.ToString());
          mFormatRegistered = true;
      }
      else
      {
          Debug.LogError(
              "Failed to register pixel format " + mPixelFormat.ToString() +
              "\n the format may be unsupported by your device;" +
              "\n consider using a different pixel format.");
          mFormatRegistered = false;
      }
  }

Retrieve the camera image using the CameraDevice.GetCameraImage() method.

  • Take this action from the OnTrackablesUpdated() callback. That way you can ensure that you retrieve the latest camera image that matches the current frame.
  • Always make sure that the camera image is not null, since it can take a few frames for the image to become available after registering for an image format.
  • Make sure to unregister the camera image format whenever the application is paused, and to register it again when the application is resumed.

Instructions: Attach this script to a GameObject in your AR scene and check the logs in the Console to see the captured Image information:

  using UnityEngine;
  using System.Collections;
  using Vuforia;

  public class CameraImageAccess : MonoBehaviour {
    #region PRIVATE_MEMBERS
    private PIXEL_FORMAT mPixelFormat = PIXEL_FORMAT.UNKNOWN_FORMAT;
    private bool mAccessCameraImage = true;
    private bool mFormatRegistered = false;
    #endregion // PRIVATE_MEMBERS

    #region MONOBEHAVIOUR_METHODS
    void Start()
    {
      #if UNITY_EDITOR
        mPixelFormat = PIXEL_FORMAT.GRAYSCALE; // Need Grayscale for Editor
      #else
        mPixelFormat = PIXEL_FORMAT.RGB888; // Use RGB888 for mobile
      #endif

        // Register Vuforia life-cycle callbacks:
        VuforiaARController.Instance.RegisterVuforiaStartedCallback(OnVuforiaStarted);
        VuforiaARController.Instance.RegisterTrackablesUpdatedCallback(OnTrackablesUpdated);
        VuforiaARController.Instance.RegisterOnPauseCallback(OnPause);
    }
    #endregion // MONOBEHAVIOUR_METHODS

    #region PRIVATE_METHODS
    private void OnVuforiaStarted()
    {
      // Try to register the camera image format
      if (CameraDevice.Instance.SetFrameFormat(mPixelFormat, true))
      {
          Debug.Log("Successfully registered pixel format " + mPixelFormat.ToString());
          mFormatRegistered = true;
      }
      else
      {
          Debug.LogError(
              "\nFailed to register pixel format: " + mPixelFormat.ToString() +
              "\nThe format may be unsupported by your device." +
              "\nConsider using a different pixel format.\n");
          mFormatRegistered = false;
      }
    }
    /// <summary>
    /// Called each time the Vuforia state is updated
    /// </summary>
    void OnTrackablesUpdated()
    {
        if (mFormatRegistered)
        {
            if (mAccessCameraImage)
            {
                Vuforia.Image image = CameraDevice.Instance.GetCameraImage(mPixelFormat); 
                if (image != null)
                {
                    Debug.Log(
                        "\nImage Format: " + image.PixelFormat +
                        "\nImage Size:   " + image.Width + "x" + image.Height +
                        "\nBuffer Size:  " + image.BufferWidth + "x" + image.BufferHeight +
                        "\nImage Stride: " + image.Stride + "\n"
                    );
                    byte[] pixels = image.Pixels;
                    if (pixels != null && pixels.Length > 0)
                    {
                        Debug.Log(
                            "\nImage pixels: " +
                            pixels[0] + ", " +
                            pixels[1] + ", " +
                            pixels[2] + ", ...\n"
                        );
                    }
                }
            }
        }
    }
    /// <summary>
    /// Called when app is paused / resumed
    /// </summary>
    void OnPause(bool paused)
    {
        if (paused)
        {
            Debug.Log("App was paused");
            UnregisterFormat();
        }
        else
        {
            Debug.Log("App was resumed");
            RegisterFormat();
        }
    }
    /// <summary>
    /// Register the camera pixel format
    /// </summary>
    void RegisterFormat()
    {
        if (CameraDevice.Instance.SetFrameFormat(mPixelFormat, true))
        {
            Debug.Log("Successfully registered camera pixel format " + mPixelFormat.ToString());
            mFormatRegistered = true;
        }
        else
        {
            Debug.LogError("Failed to register camera pixel format " + mPixelFormat.ToString());
            mFormatRegistered = false;
        }
    }
    /// <summary>
    /// Unregister the camera pixel format (e.g. call this when app is paused)
    /// </summary>
    void UnregisterFormat()
    {
        Debug.Log("Unregistering camera pixel format " + mPixelFormat.ToString());
        CameraDevice.Instance.SetFrameFormat(mPixelFormat, false);
        mFormatRegistered = false;
    }
    #endregion // PRIVATE_METHODS
  }

Use an OpenGL texture

The Image class provides the camera pixels as a byte array. That approach is useful for some image processing tasks, but sometimes it is preferable to obtain the image as an OpenGL texture, for example if you wish to use the texture in a Material applied to a game object and/or to process the texture in a shader. You can obtain the image as an OpenGL texture using the approach demonstrated in the OcclusionManagement sample.

  • Register a Texture2D object to be filled with the camera pixels each frame, instead of letting Vuforia Engine render the camera image natively, using the VuforiaRenderer.VideoBackgroundTexture API.
  • See the OcclusionManagement sample scripts for an example of this technique.

How To Access the Camera Image in Native

This article explains how to extract one of the images provided by the Vuforia Engine, using a desired image format.

Java

If you are using Vuforia Engine 2.8 (or higher), you can implement this functionality with the Vuforia Java API (no C++ code is needed).

In this case, open the VuforiaSamples project, find the ImageTargets.java file, and modify the code in the onVuforiaUpdate() method:

  @Override
  public void onVuforiaUpdate(State state) {
    Image imageRGB565 = null;
    Frame frame = state.getFrame();

    for (int i = 0; i < frame.getNumImages(); ++i) {
      Image image = frame.getImage(i);
      if (image.getFormat() == PIXEL_FORMAT.RGB565) {
        imageRGB565 = image;
        break;
      }
    }

    if (imageRGB565 != null) {
      ByteBuffer pixels = imageRGB565.getPixels();
      byte[] pixelArray = new byte[pixels.remaining()];
      pixels.get(pixelArray, 0, pixelArray.length);
      int imageWidth = imageRGB565.getWidth();
      int imageHeight = imageRGB565.getHeight();
      int stride = imageRGB565.getStride();
      DebugLog.LOGD("Image", "Image width: " + imageWidth);
      DebugLog.LOGD("Image", "Image height: " + imageHeight);
      DebugLog.LOGD("Image", "Image stride: " + stride);
      DebugLog.LOGD("Image", "First pixel byte: " + pixelArray[0]);
    }
  }

Also, the frame format can be set to the desired one (e.g. RGB565, RGB888, etc.) by editing the code in the startAR() method defined in SampleApplicationSession.java:

  Vuforia.setFrameFormat(PIXEL_FORMAT.RGB565, true);

C++

If you are using an older version of Vuforia Engine (prior to 2.8), or if you need to use the C++ API for integration with legacy/native code, follow these steps, taking the ImageTargetsNative sample as a reference.

  1. In ImageTargets.cpp, add two include statements for Vuforia::Frame and Vuforia::Image:
        #include <Vuforia/Frame.h>
        #include <Vuforia/Image.h>

     

  2. Again in ImageTargets.cpp, add two global variables (javaVM and activityObj) and the JNI_OnLoad function, which is invoked automatically and initializes the javaVM object representing the Java Virtual Machine of our Android Activity:
        //global variables
        JavaVM* javaVM = 0;
        jobject activityObj = 0;
         
        JNIEXPORT jint JNICALL
        JNI_OnLoad(JavaVM* vm, void* reserved) {
          LOG("JNI_OnLoad");
          javaVM = vm;
          return JNI_VERSION_1_4;
        }

     

  3. Initialize the activityObj object by adding this code at the end of the initApplicationNative function:
        activityObj = env->NewGlobalRef(obj);
  4. Add the following code at the beginning of the Vuforia_onUpdate function:
      Vuforia::Image *imageRGB565 = NULL;
      Vuforia::Frame frame = state.getFrame();

      for (int i = 0; i < frame.getNumImages(); ++i) {
        const Vuforia::Image *image = frame.getImage(i);
        if (image->getFormat() == Vuforia::RGB565) {
          imageRGB565 = (Vuforia::Image*) image;
          break;
        }
      }

      if (imageRGB565) {
        JNIEnv* env = 0;

        if ((javaVM != 0) && (activityObj != 0) && (javaVM->GetEnv((void**) &env, JNI_VERSION_1_4) == JNI_OK)) {
          const short* pixels = (const short*) imageRGB565->getPixels();
          int width = imageRGB565->getWidth();
          int height = imageRGB565->getHeight();
          int numPixels = width * height;

          jbyteArray pixelArray = env->NewByteArray(numPixels * 2);
          env->SetByteArrayRegion(pixelArray, 0, numPixels * 2, (const jbyte*) pixels);
          jclass javaClass = env->GetObjectClass(activityObj);
          jmethodID method = env->GetMethodID(javaClass, "processCameraImage", "([BII)V");
          env->CallVoidMethod(activityObj, method, pixelArray, width, height);
          env->DeleteLocalRef(pixelArray);
        }
      }

     

  5. Again in ImageTargets.cpp, in the startCamera function, add the code to specify the desired frame format, right after calling Vuforia::CameraDevice::getInstance().start(), as shown in the following snippet:
        // Start the camera:
        if (!Vuforia::CameraDevice::getInstance().start())
        return;
         
        Vuforia::setFrameFormat(Vuforia::RGB565, true);
         
        // etc ...

     

  6. Finally, in ImageTargets.java, add a method called processCameraImage to process the byte array representing the image passed from JNI code; for instance, the following code uses the image buffer (byte[]) to build an Android Bitmap:
        public void processCameraImage(byte[] buffer, int width, int height) {
          Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
          bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(buffer));
        }

     

How To Use the Camera Projection Matrix

Based on the intrinsic camera calibration of a specific device, which comes from the device profile, you can create a GL projection matrix with the following coordinate frame:

  • The camera sits at <0,0,0> and points in the positive z-direction.
  • The x-direction is to the right.
  • The y-direction is downwards.

Hence, we have a right-handed coordinate system.
Note: This is not how OpenGL does it by default, but since it is just matrix math, we can use whatever convention we want here.

Coordinate frame

This coordinate frame is used because it is the same as the one used for targets, just rotated by 90 degrees around the x-axis. For targets:

  • x-direction is to the right
  • y-direction is upwards in the target plane
  • z-direction points out of the target-plane  

Assumptions

The effects described below apply when you make the following assumptions:

  • The projection matrix is set up using the Vuforia Engine tool function.
  • The modelview matrix is set to identity. Normally one would use the modelview matrix to move the object with respect to the camera.
  • Clipping is set up accordingly (for example, from 0.1 to 20).

Effects

  • An object with coordinate <0,0,1> shows up in the middle of the viewport (being one scene unit away from the camera center).
  • If we put the object to <0,0,2> then it will still be in the middle of the viewport but at half the size as before.
  • If we move the object to <1,0,1> then it moves to the right of the viewport center.
  • If we move the object to <0,1,1> then it moves below the viewport center (since the y-direction points downwards).

Since the Vuforia Engine API gives the developer access to the intrinsic camera calibration, the developer can also build their own GL projection matrix. This option can be helpful if the developer uses an existing rendering solution that enforces a different convention. In this case the Vuforia pose conversion to GL modelview matrices will need to be implemented to suit that convention.

How To Access Camera Parameters

This article describes the various camera parameters exposed by Vuforia Engine and how to use them.

Vuforia Engine sets up a camera sitting at the origin, pointing in the positive Z direction, with X to the right and Y down. For details on this coordinate system, see the How To Use the Camera Projection Matrix section above.

Code to build a projection matrix from camera parameters

The projection matrix is created using device-specific camera parameters. Here is the C++ code for building the projection matrix from the camera parameters:

    float nearPlane = 2.0f;
    float farPlane = 2000.0f;
    const Vuforia::CameraCalibration& cameraCalibration =
    Vuforia::CameraDevice::getInstance().getCameraCalibration();
    projectionMatrix = Vuforia::Tool::getProjectionGL(cameraCalibration, nearPlane, farPlane);
     
    // The following code reproduces the projectionMatrix above using the camera parameters
    Vuforia::Vec2F size = cameraCalibration.getSize();
    Vuforia::Vec2F focalLength = cameraCalibration.getFocalLength();
    Vuforia::Vec2F principalPoint = cameraCalibration.getPrincipalPoint();
     
    float dx = principalPoint.data[0] - size.data[0] / 2;
    float dy = principalPoint.data[1] - size.data[1] / 2;
    float x =  2.0f * focalLength.data[0] / size.data[0];
    float y = -2.0f * focalLength.data[1] / size.data[1];
    float a =  2.0f * dx / size.data[0];
    float b = -2.0f * (dy + 1.0f) / size.data[1];
    float c = (farPlane + nearPlane) / (farPlane - nearPlane);
    float d = -nearPlane * (1.0f + c);
     
    Vuforia::Matrix44F mat;
    mat.data[0] = x; mat.data[1] = 0.0f; mat.data[2] = 0.0f; mat.data[3] = 0.0f;
    mat.data[4] = 0.0f; mat.data[5] = y; mat.data[6] = 0.0f; mat.data[7] = 0.0f;
    mat.data[8] = a; mat.data[9] = b; mat.data[10] = c;  mat.data[11] = 1.0f;
    mat.data[12] = 0.0f; mat.data[13] = 0.0f; mat.data[14] = d; mat.data[15] = 0.0f;

Find the vertical field of view

Sometimes it is necessary to find the field of view of the camera, especially when integrating with third party rendering engines. This can be calculated from the CameraCalibration size and focal length parameters. See this article for more information describing the relationship between field of view and focal length: http://paulbourke.net/miscellaneous/lens/.

A C++ code snippet for finding the vertical field of view:
 

    const Vuforia::CameraCalibration& cameraCalibration =
    Vuforia::CameraDevice::getInstance().getCameraCalibration();
     
    Vuforia::Vec2F size = cameraCalibration.getSize();
    Vuforia::Vec2F focalLength = cameraCalibration.getFocalLength();
     
    float fovRadians = 2 * atan(0.5f * size.data[1] / focalLength.data[1]);
    float fovDegrees = fovRadians * 180.0f / M_PI;

CameraCalibration class

The CameraCalibration class allows you to retrieve intrinsic parameters for a camera and lens configuration. Use this class for determining the field-of-view (FOV) of the device camera. This dimension can then be used to perform scene scale calculations for digital eyewear, but it is not exclusive to digital eyewear devices.
The result is a 2D vector providing the FOV in the x and y dimensions, in radians.
 

 /// Returns the field of view in x- and y-direction as 2D vector.
    virtual Vec2F getFieldOfViewRads() const = 0;
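
A minimal usage sketch (assuming the camera has already been initialized and started, so that valid calibration data is available):

    const Vuforia::CameraCalibration& cameraCalibration =
        Vuforia::CameraDevice::getInstance().getCameraCalibration();

    // Field of view in radians: x (horizontal) and y (vertical)
    Vuforia::Vec2F fovRads = cameraCalibration.getFieldOfViewRads();
    float verticalFovDegrees = fovRads.data[1] * 180.0f / M_PI;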

For more information about the camera calibration parameters, see: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html.

How To Determine Camera Pose

This article explains how to extract the 3D position and the orientation axis of the AR camera for the reference frame defined by a given trackable.

The trackable pose

For each trackable that it detects and tracks, Vuforia Engine provides a pose. The pose represents the combined position and orientation of the trackable's local reference frame with respect to the 3D reference frame of the camera. In Vuforia Engine, the trackable reference frame is defined with the X- and Y-axes aligned with the plane tangent to the given image target or frame marker, and with the Z-axis orthogonal to that plane. The camera reference frame, in turn, is defined with the Z-axis pointing in the camera viewing direction and the X- and Y-axes aligned with the view plane (X pointing to the right and Y pointing downward).

The pose is mathematically represented by a 3-by-4 matrix (Vuforia::Matrix34F) and can be obtained in native code using the following API:

const Vuforia::Matrix34F& Vuforia::TrackableResult::getPose() const;

As shown in the Image Targets sample code (see the renderFrame function in ImageTargets.cpp in the ImageTargetsNative sample, or ImageTargetsRenderer.java in the Vuforia Engine Samples), the pose matrix can also be converted to a 4-by-4 format (Vuforia::Matrix44F), which is directly suitable for use as an OpenGL model-view matrix.
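
For reference, a minimal sketch of this conversion (assuming result is a valid Vuforia::TrackableResult pointer obtained from the current State):

    Vuforia::Matrix44F modelViewMatrix =
        Vuforia::Tool::convertPose2GLMatrix(result->getPose());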

However, in some cases (for instance, when using high-level 3D engines for rendering) it can be useful or necessary to specify the camera position and its orientation represented in the reference frame of the trackable.

Extract the camera position and orientation

Follow these steps to extract the camera position and its orientation axis, with coordinates represented in the reference frame of a given trackable. For simplicity, the following process refers to the Image Targets sample.

  1. Copy the files SampleMath.h and SampleMath.cpp from the Dominoes sample (in the /JNI folder) to the Image Targets project.
  2. Edit the Android.mk of the project and add the SampleMath.cpp file to the list of source files: LOCAL_SRC_FILES:= ImageTargets.cpp SampleUtils.cpp Texture.cpp SampleMath.cpp
  3. In ImageTargets.cpp, add an #include statement for the SampleMath.h header.
  4. Compute the inverse of the model-view matrix, and transpose it:
    const Vuforia::TrackableResult* result = state.getTrackableResult(tIdx);
    const Vuforia::Trackable& trackable = result->getTrackable();
    Vuforia::Matrix44F modelViewMatrix = Vuforia::Tool::convertPose2GLMatrix(result->getPose());
    Vuforia::Matrix44F inverseMV = SampleMath::Matrix44FInverse(modelViewMatrix);
    Vuforia::Matrix44F invTranspMV = SampleMath::Matrix44FTranspose(inverseMV);

     

  5. Extract the camera position from the last column of the matrix computed in the previous step:
    float cam_x = invTranspMV.data[12];
    float cam_y = invTranspMV.data[13];
    float cam_z = invTranspMV.data[14];
  6. Extract the camera orientation axes (camera viewing direction, camera right direction, and camera up direction):
    float cam_right_x = invTranspMV.data[0];
    float cam_right_y = invTranspMV.data[1];
    float cam_right_z = invTranspMV.data[2];
    float cam_up_x = -invTranspMV.data[4];
    float cam_up_y = -invTranspMV.data[5];
    float cam_up_z = -invTranspMV.data[6];
    float cam_dir_x = invTranspMV.data[8];
    float cam_dir_y = invTranspMV.data[9];
    float cam_dir_z = invTranspMV.data[10];

     

How To Determine Target Distance

This article describes how to find the distance from the device to the target.

Target Position

The fourth column of the pose matrix is a translation vector. The translation vector describes the position of the target in relation to a camera sitting at the origin.

Given this, finding the distance from the camera to the trackable in scene units is trivial. Find the magnitude of this translation vector using the following code:

C++
 

  #include <math.h>

  ...

  Vuforia::Matrix34F pose = trackableResult->getPose();

  Vuforia::Vec3F position(pose.data[3], pose.data[7], pose.data[11]);

  float distance = sqrt(position.data[0] * position.data[0] +
                        position.data[1] * position.data[1] +
                        position.data[2] * position.data[2]);

  LOG("distance: %f", distance);

Scene Units

This code returns the distance in scene units, which are defined by the size of the target.

Real-World Distance

To get the real-world distance from the device to the target, you must know the printed size of the target. Find the ratio of the printed width of the target to its defined width (in scene units), and use it as a multiplier for the distance, as shown in the sketch below. Note that Vuforia Engine has no way of determining the real-world size of the target it is tracking: a 10-inch-wide target and a 5-inch-wide target may track equally well and look identical to the tracker when the device is moved so that the target fills the screen.
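
A minimal sketch of this scaling, assuming hypothetical example values (a target defined as 100 scene units wide and printed 0.5 m wide) and reusing the distance value from the snippet above:

  // Hypothetical values for illustration only:
  float definedTargetWidth = 100.0f; // target width in scene units, as defined for the dataset
  float printedTargetWidth = 0.5f;   // measured width of the printed target, in meters

  // Conversion factor from scene units to meters
  float metersPerSceneUnit = printedTargetWidth / definedTargetWidth;

  // 'distance' is the scene-unit distance computed in the snippet above
  float realWorldDistance = distance * metersPerSceneUnit;

  LOG("Real-world distance (m): %f", realWorldDistance);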