
Integrating jPCT-AE and ARToolKit in Android

I'm using ARToolKit for Android to build an AR app. I can apply the projection matrix and the marker transformation matrix in OpenGL without problems, as explained in the ARSimple example. However, I have not found a way to apply these correctly to the jPCT-AE camera. Here is what I did for the camera:

Camera cam = world.getCamera();
Matrix projMatrix = new Matrix();
projMatrix.transformToGL();
projMatrix.setDump(ARToolKit.getInstance().getProjectionMatrix());

cam.setPosition(projMatrix.getTranslation());
cam.setBack(projMatrix);

and for the object:

Matrix objMat = new Matrix();
objMat.transformToGL();
objMat.setDump(ARToolKit.getInstance().queryMarkerTransformation(markerID));
cube.setTranslationMatrix(objMat);
cube.setRotationMatrix(objMat);

It almost works: I can see the 3D object if the marker is placed at the center of the screen. However, when I move the marker, the object quickly disappears off screen. Also, the cube (and the other models I tried to load) seems to render in some sort of "inverted" way. From what I have read on the web, the ARToolKit matrices are relative to OpenGL world coordinates (while jPCT-AE has its own coordinate system), and the projection matrix of jPCT-AE is built internally from the fov, near and far clipping planes, position, and rotation, so I cannot set it directly.

How do I translate the projection matrix and marker matrix to the jPCT-AE engine?

Reviewing my code, it seems jPCT-AE does not read the position and back vectors correctly from the matrix (although I see no reason why it shouldn't), but it does work when you split them into separate vectors. These are just my findings from trial and error.

This is how I did it for the camera, using the direction and up vectors.

// Build the projection matrix from ARToolKit, then convert it to jPCT-AE space
float[] projection = ARToolKit.getInstance().getProjectionMatrix();
Matrix projMatrix = new Matrix();
projMatrix.setDump(projection);
projMatrix.transformToGL();
// Drive the jPCT-AE camera with the matrix's translation, direction and up vectors
SimpleVector translation = projMatrix.getTranslation();
SimpleVector dir = projMatrix.getZAxis();
SimpleVector up = projMatrix.getYAxis();
mCamera.setPosition(translation);
mCamera.setOrientation(dir, up);

And then for the model I extract the translation and rotation. It is important to clear the translation, since it is not an absolute position but a modification of the current position. I think this may be the main reason why your objects move off screen.

// Marker pose from ARToolKit, converted to jPCT-AE/OpenGL space
float[] transformation = ARToolKit.getInstance().queryMarkerTransformation(markerID);
Matrix dump = new Matrix();
dump.setDump(transformation);
dump.transformToGL();
// Reset the accumulated translation first: translate() is relative, not absolute
mModel.clearTranslation();
mModel.translate(dump.getTranslation());
mModel.setRotationMatrix(dump);

Also, you should call transformToGL after calling setDump; I think that is the reason why you see the models inverted.
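
For reference, this is how that ordering would look applied to the camera snippet from the question (a minimal sketch only, reusing the question's variable names):

// Set the raw ARToolKit dump first, then convert it to jPCT-AE/OpenGL space
Matrix projMatrix = new Matrix();
projMatrix.setDump(ARToolKit.getInstance().getProjectionMatrix());
projMatrix.transformToGL();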

Finally, as an optimization, you should reuse the Matrix objects between frames instead of creating new ones every frame.
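
As a rough illustration (a sketch, not code from the original answer; the onDrawFrame signature and the queryMarkerVisible check are assumptions based on the standard GLSurfaceView.Renderer and ARToolKit-for-Android APIs), the matrices can be kept as fields and refilled every frame:

private final Matrix projMatrix = new Matrix();   // reused every frame
private final Matrix markerMatrix = new Matrix(); // reused every frame

public void onDrawFrame(GL10 gl) {
    // Refill the existing matrices instead of allocating new ones
    projMatrix.setDump(ARToolKit.getInstance().getProjectionMatrix());
    projMatrix.transformToGL();
    // ... apply to the jPCT-AE camera as shown above ...

    if (ARToolKit.getInstance().queryMarkerVisible(markerID)) {
        markerMatrix.setDump(ARToolKit.getInstance().queryMarkerTransformation(markerID));
        markerMatrix.transformToGL();
        // ... apply to the model as shown above ...
    }
}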
