This page concerns the Vuforia Engine API version 9.8 and earlier. It has been deprecated and will no longer be actively updated.
- How To Scale and Position Content using OpenGL
- How To Draw a 2D Image on top of a Target using OpenGL ES
- How To Draw a Line Connecting Targets using OpenGL ES
- How To Render Static 3D Models using OpenGL ES
- How To Use OpenGL to Transform between Target and Screen Space
How To Scale and Position Content using OpenGL
This article describes how to size and position 3D content on top of the trackable using OpenGL ES.
Size the Trackables
The size of an ImageTarget is defined when the target is created in the TMS. The width is entered when you create a new target, while the height is automatically calculated from the aspect ratio of the image.
The units correspond to OpenGL scene units and thus need to work with the perspective frustum set up by the projection matrix.
For example, assume that you set up a projection matrix using the following code: (C++)
projectionMatrix = Vuforia::Tool::getProjectionGL(cameraCalibration, 2.0f, 2000.0f);
Use a target size that falls between the near plane at 2 and the far plane at 2000. A target of size 1 would be too small, while a target of size 2000 would be too big. A target with a width of 100 would be reasonable. Of course, you can always adjust the near and far planes.
If the application uses multiple trackables, it may be important to size them relative to their real-world sizes. This is the case when multiple trackables are tracked at the same time. Assume that you want to track a frame marker on top of an image target. If the printed frame marker is 1/4 the size of the printed image target, then the frame marker object should be created at 1/4 the size of the image target object. Otherwise, the 3D content sitting on the frame marker would not properly occlude the 3D content sitting on the image target.
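For example, suppose the image target was created with a width of 100 units; the matching frame marker would then be created at 25 x 25 units. The following is a minimal sketch (C++), assuming you already hold a valid Vuforia::MarkerTracker pointer named markerTracker; the exact tracker-retrieval code and the createFrameMarker signature may vary slightly between SDK versions:

// Sketch: markerTracker is assumed to be a valid Vuforia::MarkerTracker*
// obtained from the TrackerManager.
// The image target was created at width 100 in the TMS; the printed frame
// marker is 1/4 of that size, so it is created at 25 x 25 scene units.
Vuforia::Marker* marker = markerTracker->createFrameMarker(
    0,                            // marker id
    "FrameMarkerQ",               // marker name
    Vuforia::Vec2F(25.0f, 25.0f)  // marker size in scene units
);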
Size and position the 3D content
Now that the trackables are sized correctly, you can position and scale the 3D content to sit correctly on the target. By default, the pose matrix places content in the center of the target, with X to the right, Y to the top, and Z coming out of the target.
We can apply translations, rotations, and scales to the pose matrix to make the final modelViewMatrix. The ImageTargets sample uses the following set of transforms: (C++)
SampleUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale,
                                 &modelViewMatrix.data[0]);
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                             &modelViewMatrix.data[0]);
SampleUtils::multiplyMatrix(&projectionMatrix.data[0],
                            &modelViewMatrix.data[0],
                            &modelViewProjection.data[0]);
Note that the transforms are applied from the bottom to the top. In this case, the model is scaled by kObjectScale and then translated up the Z axis by kObjectScale.
You can obtain the trackable size of image targets and frame markers at run time by casting the Trackable object to the specific class (ImageTarget or Marker) and calling its getSize() method. For multi-targets, you can query the size of the individual parts.
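For example, a minimal sketch (C++) for an image target, assuming result is a valid Vuforia::TrackableResult*; depending on the SDK version, getSize() returns either a Vec2F or a Vec3F (the sketch assumes Vec2F):

const Vuforia::Trackable& trackable = result->getTrackable();
if (trackable.isOfType(Vuforia::ImageTarget::getClassType()))
{
    const Vuforia::ImageTarget& target =
        static_cast<const Vuforia::ImageTarget&>(trackable);
    Vuforia::Vec2F targetSize = target.getSize();
    float targetWidth  = targetSize.data[0]; // scene units, as entered in the TMS
    float targetHeight = targetSize.data[1];
}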
How To Render Static 3D Models using OpenGL ES
Vuforia Engine uses OpenGL ES to render 3D content. The Vuforia native SDK samples for Android and iOS show how to render simple static (non-animated) models using OpenGL ES 2.0.
Code that renders 3D models in the OpenGL rendering thread
The sample code performs the rendering of the 3D models within the OpenGL rendering thread.
Android Java
In the Android Java samples, such as VuforiaSamples-x-y-z, the relevant code is located in the renderFrame() method of the GLSurfaceView.Renderer interface. The renderFrame() method is called at every frame and can be used to update the OpenGL view. For instance, for the Image Targets sample package, this code is located in ImageTargetsRenderer.java.
Android C++
Similarly, in the Android C++ native samples, the rendering code is still triggered from the Java renderFrame() method, but the actual OpenGL code is in a native C++ function. For instance, in the ImageTargetsNative sample, the renderFrame() C++ code is located in the ImageTargets.cpp file in the /jni folder.
iOS
In the iOS ImageTargets sample, the rendering code can be found in the renderFrameQCAR function in ImageTargetsEAGLView.mm.
Within the frame-rendering function, the sample code performs the following main operations:
- Retrieves a list of TrackableResult objects
- The objects represent the targets that are currently active, that is, the targets being tracked by Vuforia Engine.
- Retrieves the Pose matrix for each target.
- The pose represents the position and orientation of the target with respect to the camera reference frame.
- Uses the pose matrix as a starting point to build the OpenGL model-view matrix.
- This matrix is used for positioning and rendering the 3D geometry (mesh) associated with a given target.
- The model-view matrix is built when you apply additional transformations to translate, rotate, and scale the 3D model with respect to the target pose (that is, with regard to the target reference frame).
- Multiplies the model-view matrix by the camera projection matrix to create the MVP (model-view-projection) matrix that brings the 3D content to the screen.
- Later in the code, the MVP matrix is injected as a uniform variable into the GLSL shader used for rendering.
- Within the shader, each vertex of our 3D model is multiplied by this matrix, effectively bringing that vertex from object space to screen space (the transforms are actually object > world > eye > window). A minimal vertex shader sketch follows this list.
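For reference, a vertex shader along these lines performs that multiplication. The following minimal sketch (GLSL ES stored as a C++ string) uses attribute and uniform names chosen to match the samples' conventions; your own shader may name them differently:

// Minimal vertex shader sketch: each vertex is multiplied by the MVP matrix
// that the application passed in via glUniformMatrix4fv.
static const char* kVertexShaderSrc =
    "attribute vec4 vertexPosition;\n"
    "attribute vec2 vertexTexCoord;\n"
    "varying vec2 texCoord;\n"
    "uniform mat4 modelViewProjectionMatrix;\n"
    "void main()\n"
    "{\n"
    "    // object space -> clip space in a single multiplication\n"
    "    gl_Position = modelViewProjectionMatrix * vertexPosition;\n"
    "    texCoord = vertexTexCoord;\n"
    "}\n";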
Constructing the MVP matrix
The following code snippets show the part of the sample code that builds the MVP matrix from the trackable Pose (for each trackable).
Android/iOS C++
for(int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
{
// Get the trackable:
const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
const QCAR::Trackable& trackable = result->getTrackable();
QCAR::Matrix44F modelViewMatrix =
QCAR::Tool::convertPose2GLMatrix(result->getPose());
...
QCAR::Matrix44F modelViewProjection;
SampleUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale,
&modelViewMatrix.data[0]);
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
&modelViewMatrix.data[0]);
SampleUtils::multiplyMatrix(&projectionMatrix.data[0],
                            &modelViewMatrix.data[0],
                            &modelViewProjection.data[0]);
...
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE,
                   (GLfloat*)&modelViewProjection.data[0]);
...
// render your OpenGL mesh here (e.g. vertex array)
Android Java
for (int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++) {
TrackableResult result = state.getTrackableResult(tIdx);
Trackable trackable = result.getTrackable();
ImageTarget itarget = (ImageTarget)trackable;
Matrix44F modelViewMatrix_Vuforia = Tool.convertPose2GLMatrix(result.getPose());
float[] modelViewMatrix = modelViewMatrix_Vuforia.getData();
float[] modelViewProjection = new float[16];
Matrix.translateM(modelViewMatrix, 0, 0.0f, 0.0f, OBJECT_SCALE_FLOAT);
Matrix.scaleM(modelViewMatrix, 0, OBJECT_SCALE_FLOAT, OBJECT_SCALE_FLOAT, OBJECT_SCALE_FLOAT);
Vec2F targetSize = itarget.getSize();
Matrix.scaleM(modelViewMatrix, 0, targetSize.getData()[0], targetSize.getData()[1], 1.0f);
Matrix.multiplyMM(modelViewProjection, 0, vuforiaAppSession.getProjectionMatrix().getData(), 0, modelViewMatrix, 0);
// render your OpenGL mesh here ...
Code that feeds model mesh vertex arrays to the rendering pipeline
The model mesh vertex arrays (vertices, normals, and texture coordinates) are fed to the OpenGL rendering pipeline. This task is achieved by doing the following:
- Bind the shader.
- Assign the vertex arrays to the attribute fields in the shader.
- Enable these attribute arrays.
Android/iOS C++
glUseProgram(shaderProgramID);
glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0,
                      (const GLvoid*) &teapotVertices[0]);
glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0,
                      (const GLvoid*) &teapotNormals[0]);
glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0,
                      (const GLvoid*) &teapotTexCoords[0]);
glEnableVertexAttribArray(vertexHandle);
glEnableVertexAttribArray(normalHandle);
glEnableVertexAttribArray(textureCoordHandle);
Android Java
GLES20.glVertexAttribPointer(vertexHandle, 3, GLES20.GL_FLOAT,
false, 0, mTeapot.getVertices());
GLES20.glVertexAttribPointer(normalHandle, 3, GLES20.GL_FLOAT,
false, 0, mTeapot.getNormals());
GLES20.glVertexAttribPointer(textureCoordHandle, 2,
GLES20.GL_FLOAT, false, 0, mTeapot.getTexCoords());
GLES20.glEnableVertexAttribArray(vertexHandle);
GLES20.glEnableVertexAttribArray(normalHandle);
GLES20.glEnableVertexAttribArray(textureCoordHandle);
Be aware that the Vuforia Engine samples do not make use of the normal array, but you can use the normals to perform lighting calculations in the shader. To use a model without normals, simply comment out the lines that reference them.
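If you do want to use the normals, a simple starting point is a per-vertex diffuse term. The following is a hedged sketch (GLSL ES stored as a C++ string); the attribute names are illustrative, the light direction is assumed to be given in object space, and a complete implementation would transform the normal by a normal matrix:

// Sketch only: per-vertex diffuse lighting from the normal attribute.
static const char* kLitVertexShaderSrc =
    "attribute vec4 vertexPosition;\n"
    "attribute vec3 vertexNormal;\n"
    "attribute vec2 vertexTexCoord;\n"
    "varying vec2 texCoord;\n"
    "varying float diffuse;\n"
    "uniform mat4 modelViewProjectionMatrix;\n"
    "void main()\n"
    "{\n"
    "    vec3 lightDir = normalize(vec3(0.3, 0.3, 1.0));\n"
    "    diffuse = max(dot(normalize(vertexNormal), lightDir), 0.2);\n"
    "    gl_Position = modelViewProjectionMatrix * vertexPosition;\n"
    "    texCoord = vertexTexCoord;\n"
    "}\n";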
The sample code activates the correct texture unit, binds the desired texture, and then renders the model (mesh) using the glDrawElements call. The correct texture unit is typically 0, unless you are using multiple textures.
Android/iOS C++
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, thisTexture->mTextureID);
...
glDrawElements(GL_TRIANGLES, NUM_TEAPOT_OBJECT_INDEX, GL_UNSIGNED_SHORT,
(const GLvoid*) &teapotIndices[0]);
Android Java
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D,
mTextures.get(textureIndex).mTextureID[0]);
GLES20.glUniform1i(texSampler2DHandle, 0);
// pass the model view matrix to the shader
GLES20.glUniformMatrix4fv(mvpMatrixHandle, 1, false,
modelViewProjection, 0);
GLES20.glDrawElements(GLES20.GL_TRIANGLES,
mTeapot.getNumObjectIndex(), GLES20.GL_UNSIGNED_SHORT,
mTeapot.getIndices());
Note that glDrawElements takes an array of indices, which lets you index into the other model arrays when building the triangles that make up the model. However, some models have no indices; in that case the model arrays are meant to be read linearly, that is, the triangles are listed in consecutive order. Use glDrawArrays instead:
glDrawArrays(GL_TRIANGLES, 0, numVertices);
Replace the teapot model
To change out the teapot model with your own model, follow these steps:
- Obtain the vertex array for your model, and optionally texture coordinates and normals.
Our samples use a simple header format to store the model data as arrays. For an example, see Teapot.h (or Teapot.java in the Java version of the Vuforia Engine samples).
- When you have the arrays, you must switch out the following variables in the above code:
- teapotVertices
- teapotNormals
- teapotTexCoords
- teapotIndices
- Change NUM_TEAPOT_OBJECT_INDEX to match your model's index count, and change the GL_UNSIGNED_SHORT type in the glDrawElements call if your model uses an index array of a different type. If your model does not have indices, use glDrawArrays, as described above.
Example
For example, you can replace the sample model with a custom mesh that represents a simple rectangular surface (a quad) made up of two triangles.
- In Android Java, define the triangles using code similar to the following:
import java.nio.Buffer;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class QuadMesh {
    private final float[] mVertices = new float[] {
        -0.5f, -0.5f, 0.0f, // bottom-left corner
         0.5f, -0.5f, 0.0f, // bottom-right corner
         0.5f,  0.5f, 0.0f, // top-right corner
        -0.5f,  0.5f, 0.0f  // top-left corner
    };

    private final float[] mNormals = new float[] {
        0.0f, 0.0f, 1.0f, // normal at bottom-left corner
        0.0f, 0.0f, 1.0f, // normal at bottom-right corner
        0.0f, 0.0f, 1.0f, // normal at top-right corner
        0.0f, 0.0f, 1.0f  // normal at top-left corner
    };

    private final float[] mTexCoords = new float[] {
        0.0f, 0.0f, // tex-coords at bottom-left corner
        1.0f, 0.0f, // tex-coords at bottom-right corner
        1.0f, 1.0f, // tex-coords at top-right corner
        0.0f, 1.0f  // tex-coords at top-left corner
    };

    private final short[] mIndices = new short[] {
        0, 1, 2, // triangle 1
        2, 3, 0  // triangle 2
    };

    public final Buffer vertices;
    public final Buffer normals;
    public final Buffer texCoords;
    public final Buffer indices;

    public final int numVertices = mVertices.length / 3; // 3 coords per vertex
    public final int numIndices = mIndices.length;

    public QuadMesh() {
        // init vertex buffers
        vertices = fillBuffer(mVertices);
        normals = fillBuffer(mNormals);
        texCoords = fillBuffer(mTexCoords);
        indices = fillBuffer(mIndices);
    }

    private Buffer fillBuffer(float[] array) {
        final int sizeOfFloat = 4; // 1 float = 4 bytes
        ByteBuffer bb = ByteBuffer.allocateDirect(sizeOfFloat * array.length);
        bb.order(ByteOrder.LITTLE_ENDIAN);
        for (float f : array) {
            bb.putFloat(f);
        }
        bb.rewind();
        return bb;
    }

    private Buffer fillBuffer(short[] array) {
        final int sizeOfShort = 2; // 1 short = 2 bytes
        ByteBuffer bb = ByteBuffer.allocateDirect(sizeOfShort * array.length);
        bb.order(ByteOrder.LITTLE_ENDIAN);
        for (short s : array) {
            bb.putShort(s);
        }
        bb.rewind();
        return bb;
    }
}
- In C++ (Android or iOS), create a file called QuadMesh.h, and fill the file with the following vertex array definitions:
static const float quadVertices[] = {
    -0.5f, -0.5f, 0.0f, // bottom-left corner
     0.5f, -0.5f, 0.0f, // bottom-right corner
     0.5f,  0.5f, 0.0f, // top-right corner
    -0.5f,  0.5f, 0.0f  // top-left corner
};

static const float quadTexcoords[] = {
    0.0f, 0.0f, // tex-coords at bottom-left corner
    1.0f, 0.0f, // tex-coords at bottom-right corner
    1.0f, 1.0f, // tex-coords at top-right corner
    0.0f, 1.0f  // tex-coords at top-left corner
};

static const float quadNormals[] = {
    0.0f, 0.0f, 1.0f, // normal at bottom-left corner
    0.0f, 0.0f, 1.0f, // normal at bottom-right corner
    0.0f, 0.0f, 1.0f, // normal at top-right corner
    0.0f, 0.0f, 1.0f  // normal at top-left corner
};

static const unsigned short quadIndices[] = {
    0, 1, 2, // triangle 1
    2, 3, 0  // triangle 2
};
- Using either Java or C++, pass the quad vertices, texture coordinates, and normals as parameters to the glVertexAttribPointer() functions as previously explained, and pass the quad indices to the glDrawElements() function (see the sketch after this list).
- Finally, change the texture.
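For instance, a minimal sketch (C++), assuming the attribute handles from the sample code (vertexHandle, normalHandle, textureCoordHandle) and the shader are already set up:

// Feed the quad arrays from QuadMesh.h to the shader attributes
glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0,
                      (const GLvoid*) &quadVertices[0]);
glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0,
                      (const GLvoid*) &quadNormals[0]);
glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0,
                      (const GLvoid*) &quadTexcoords[0]);
glEnableVertexAttribArray(vertexHandle);
glEnableVertexAttribArray(normalHandle);
glEnableVertexAttribArray(textureCoordHandle);

// Draw the two triangles (6 indices) that make up the quad
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT,
               (const GLvoid*) &quadIndices[0]);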
How To Use OpenGL to Transform between Target and Screen Space
Vuforia Engine does not provide direct methods for transforming between target space and screen space. Instead, users can step through the standard OpenGL transform hierarchy to transform a point from screen space to target space, or vice versa.
Another option is the Vuforia::Tool::projectPoint() method, which takes a 3D point in target space and projects it into camera image space. With a few simple steps, you can then take the point from camera space to screen space.
Screen space to target space
You can find code for projecting a touch point in screen space onto the target plane in our sample apps (e.g. SampleMath, SampleApplicationUtils, ImageTargets.cpp, etc.).
For example, you can copy these two methods from ImageTargets.cpp to your project:
- projectScreenPointToPlane
- linePlaneIntersection
- Create a Vuforia::Matrix44F modelViewMatrix global variable, and set it in the renderFrame method.
- On Android, create a Vuforia::Matrix44F inverseProjMatrix global variable, and set it after initializing the projection matrix.
- Call the projectScreenPointToPlane method as follows (see the line-plane intersection sketch after this list): (C++)
Vuforia::Vec3F intersection, lineStart, lineEnd;
projectScreenPointToPlane(Vuforia::Vec2F(touch.x, touch.y),
                          Vuforia::Vec3F(0, 0, 0), Vuforia::Vec3F(0, 0, 1),
                          intersection, lineStart, lineEnd);
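The heavy lifting is done by the two helper methods copied from the sample. As an illustration of the second step, the following is a minimal, self-contained sketch of the line-plane intersection; the sample's own linePlaneIntersection helper follows the same idea but relies on the SampleMath utilities: (C++)

// Sketch: intersect the line (lineStart -> lineEnd), obtained by unprojecting
// the touch point onto the near and far planes, with the target plane.
static bool intersectLineWithPlane(Vuforia::Vec3F lineStart, Vuforia::Vec3F lineEnd,
                                   Vuforia::Vec3F pointOnPlane, Vuforia::Vec3F planeNormal,
                                   Vuforia::Vec3F &intersection)
{
    float dir[3], toPlane[3];
    for (int i = 0; i < 3; i++) {
        dir[i]     = lineEnd.data[i] - lineStart.data[i];
        toPlane[i] = pointOnPlane.data[i] - lineStart.data[i];
    }

    // Project both vectors onto the plane normal
    float denom = planeNormal.data[0] * dir[0] +
                  planeNormal.data[1] * dir[1] +
                  planeNormal.data[2] * dir[2];
    if (denom == 0.0f)
        return false; // line is parallel to the plane, no intersection

    float t = (planeNormal.data[0] * toPlane[0] +
               planeNormal.data[1] * toPlane[1] +
               planeNormal.data[2] * toPlane[2]) / denom;

    for (int i = 0; i < 3; i++)
        intersection.data[i] = lineStart.data[i] + t * dir[i];
    return true;
}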
Target space to screen space
The Vuforia::Tool::projectPoint() method takes a 3D point in target space and transforms it into a 2D point in camera image space. The camera image typically has a different aspect ratio than the screen, and by default it is aspect-scaled to fill the screen, which means that part of the camera image is cropped.
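For example, a minimal sketch (C++) of the first step, projecting the centre of a target into camera image space; result is assumed to be a valid Vuforia::TrackableResult*, and cameraPointToScreenPoint is the helper defined in the next snippet:

const Vuforia::CameraCalibration& cameraCalibration =
    Vuforia::CameraDevice::getInstance().getCameraCalibration();

// Centre of the target, expressed in target space
Vuforia::Vec3F targetPoint(0.0f, 0.0f, 0.0f);
Vuforia::Vec2F cameraPoint =
    Vuforia::Tool::projectPoint(cameraCalibration, result->getPose(), targetPoint);

// Convert from camera image space to screen space (see below)
Vuforia::Vec2F screenPoint = cameraPointToScreenPoint(cameraPoint);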
The following code snippet takes the camera space point to screen space:
Vuforia::Vec2F cameraPointToScreenPoint(Vuforia::Vec2F cameraPoint)
{
Vuforia::VideoMode videoMode = Vuforia::CameraDevice::getInstance().getVideoMode(Vuforia::CameraDevice::MODE_DEFAULT);
Vuforia::VideoBackgroundConfig config = Vuforia::Renderer::getInstance().getVideoBackgroundConfig();
int xOffset = ((int)screenWidth - config.mSize.data[0]) / 2.0f + config.mPosition.data[0];
int yOffset = ((int)screenHeight - config.mSize.data[1]) / 2.0f - config.mPosition.data[1];
if (isActivityInPortraitMode)
{
// camera image is rotated 90 degrees
int rotatedX = videoMode.mHeight - cameraPoint.data[1];
int rotatedY = cameraPoint.data[0];
return Vuforia::Vec2F(rotatedX * config.mSize.data[0] / (float)videoMode.mHeight + xOffset,
rotatedY * config.mSize.data[1] / (float)videoMode.mWidth + yOffset);
}
else
{
return Vuforia::Vec2F(cameraPoint.data[0] * config.mSize.data[0] / (float)videoMode.mWidth + xOffset,
cameraPoint.data[1] * config.mSize.data[1] / (float)videoMode.mHeight + yOffset);
}
}
The above code snippet should work on Android and iOS platforms.
The following code snippet is an iOS-specific implementation that was shared on the forum: (Objective-C)
- (CGPoint)projectCoord:(CGPoint)coord
inView:(const Vuforia::CameraCalibration &)cameraCalibration
andPose:(Vuforia::Matrix34F)pose
withOffset:(CGPoint)offset
andScale:(CGFloat)scale
{
CGPoint converted;
Vuforia::Vec3F vec(coord.x, coord.y, 0);
Vuforia::Vec2F sc = Vuforia::Tool::projectPoint(cameraCalibration, pose, vec);
converted.x = sc.data[0] * scale - offset.x;
converted.y = sc.data[1] * scale - offset.y;
return converted;
}
- (void)calcScreenCoordsOf:(CGSize)target inView:(CGFloat *)matrix inPose:(Vuforia::Matrix34F)pose
{
// 0,0 is at centre of target so extremities are at w/2,h/2
CGFloat w = target.width / 2;
CGFloat h = target.height / 2;
// need to account for the orientation on view size
CGFloat viewWidth = self.frame.size.height; // Portrait
CGFloat viewHeight = self.frame.size.width; // Portrait
UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
if (UIInterfaceOrientationIsLandscape(orientation))
{
viewWidth = self.frame.size.width;
viewHeight = self.frame.size.height;
}
// calculate any mismatch of screen to video size
Vuforia::CameraDevice& cameraDevice = Vuforia::CameraDevice::getInstance();
const Vuforia::CameraCalibration& cameraCalibration = cameraDevice.getCameraCalibration();
Vuforia::VideoMode videoMode = cameraDevice.getVideoMode(Vuforia::CameraDevice::MODE_DEFAULT);
CGFloat scale = viewWidth / videoMode.mWidth;
if (videoMode.mHeight * scale < viewHeight)
scale = viewHeight / videoMode.mHeight;
CGFloat scaledWidth = videoMode.mWidth * scale;
CGFloat scaledHeight = videoMode.mHeight * scale;
CGPoint margin = {(scaledWidth - viewWidth) / 2, (scaledHeight - viewHeight) / 2};
// now project the 4 corners of the target
ImageTargetsAppDelegate *delegate = [[UIApplication sharedApplication] delegate];
delegate.s0 = [self projectCoord:CGPointMake(-w, h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
delegate.s1 = [self projectCoord:CGPointMake(-w, -h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
delegate.s2 = [self projectCoord:CGPointMake(w, -h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
delegate.s3 = [self projectCoord:CGPointMake(w, h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
}