I am working on an application for tablets that displays different 3D models. My current task is to determine whether the user hit the model when touching the screen. I have the X, Y touch coordinates, and I have two candidate solutions:
1) Since I use OpenGL ES 2.0 for model rendering, perhaps I can create an additional framebuffer and render the model into it with a fragment shader that writes each pixel's depth as a color, e.g. black where the depth is at its maximum and white where it is zero. Then I can read the touched pixel back from this framebuffer to get its depth and find out what I need.
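One practical wrinkle with this approach: on OpenGL ES 2.0 the readback is typically an 8-bit RGBA color buffer, so the shader has to pack the depth into the color channels. Below is a minimal CPU-side sketch of such a packing (24-bit fixed point across three channels); the function names `encode_depth`/`decode_depth` are hypothetical, and the same arithmetic would live in the fragment shader and in the code that interprets the `glReadPixels` result.

```c
#include <math.h>

/* Encode a normalized depth in [0,1] into three 8-bit channels (24-bit
 * fixed point). This mirrors what the picking fragment shader would write
 * into the color attachment of the offscreen framebuffer. */
static void encode_depth(float depth, unsigned char rgb[3]) {
    unsigned int fixed = (unsigned int)(depth * 16777215.0f); /* 2^24 - 1 */
    rgb[0] = (fixed >> 16) & 0xFF;
    rgb[1] = (fixed >> 8) & 0xFF;
    rgb[2] = fixed & 0xFF;
}

/* Decode the bytes read back at the touch coordinates. */
static float decode_depth(const unsigned char rgb[3]) {
    unsigned int fixed = ((unsigned int)rgb[0] << 16) |
                         ((unsigned int)rgb[1] << 8) |
                          (unsigned int)rgb[2];
    return (float)fixed / 16777215.0f;
}
```

With 24 bits the round-trip error is below 1e-7, which is plenty for a hit/no-hit decision; a single 8-bit channel (256 depth levels) would usually be too coarse.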
2) The second solution is to cast a ray from the touch point and test it against all of the model's triangles with a standard ray-triangle intersection algorithm.
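For reference, the usual choice for that per-triangle test is the Möller–Trumbore algorithm. A self-contained sketch (the `Vec3` type and `ray_triangle` name are my own, not from the question):

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  sub(Vec3 a, Vec3 b)  { return (Vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b){
    return (Vec3){a.y * b.z - a.z * b.y,
                  a.z * b.x - a.x * b.z,
                  a.x * b.y - a.y * b.x};
}

/* Möller–Trumbore ray/triangle intersection. Returns 1 and writes the hit
 * distance to *t if the ray orig + t*dir crosses triangle (v0, v1, v2). */
static int ray_triangle(Vec3 orig, Vec3 dir,
                        Vec3 v0, Vec3 v1, Vec3 v2, float *t) {
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (fabsf(det) < EPS) return 0;       /* ray parallel to triangle plane */
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;            /* first barycentric coordinate */
    if (u < 0.0f || u > 1.0f) return 0;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;          /* second barycentric coordinate */
    if (v < 0.0f || u + v > 1.0f) return 0;
    *t = dot(e2, q) * inv;
    return *t > EPS;                      /* hit must be in front of the ray */
}
```

To pick the touched triangle, you would run this over all triangles and keep the smallest positive `t` (the hit nearest the camera). The ray itself comes from unprojecting the X, Y touch coordinates through the inverse view-projection matrix at the near and far planes.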
My question is: Is there any faster solution? Thank you.
I know this is an old question, but AFAIK the best practice in this scenario is to perform ray-triangle intersection tests. You can do this by brute force, testing every triangle, or you can use a spatial data structure to accelerate the search. An octree or a k-d tree should do the trick, though each has its own advantages and disadvantages.
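Both of those tree structures rely on the same primitive to prune the search: a ray versus axis-aligned bounding box (slab) test, so that whole subtrees whose box the ray misses are skipped without touching their triangles. A minimal sketch of that test (the `AABB` type and `ray_aabb` name are illustrative, not from any particular library):

```c
#include <math.h>

typedef struct { float min[3], max[3]; } AABB;

/* Slab test: does the ray orig + t*dir (t >= 0) hit the box? An octree or
 * k-d tree traversal only descends into nodes whose box passes this test. */
static int ray_aabb(const float orig[3], const float dir[3], AABB box) {
    float tmin = 0.0f, tmax = INFINITY;
    for (int i = 0; i < 3; ++i) {
        if (fabsf(dir[i]) < 1e-8f) {
            /* Ray parallel to this slab: must already lie inside it. */
            if (orig[i] < box.min[i] || orig[i] > box.max[i]) return 0;
        } else {
            float inv = 1.0f / dir[i];
            float t1 = (box.min[i] - orig[i]) * inv;
            float t2 = (box.max[i] - orig[i]) * inv;
            if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
            if (t1 > tmin) tmin = t1;
            if (t2 < tmax) tmax = t2;
            if (tmin > tmax) return 0;    /* slab intervals do not overlap */
        }
    }
    return 1;
}
```

With a reasonably balanced tree this turns the per-touch cost from linear in the triangle count into roughly logarithmic, which matters on tablet hardware once models get beyond a few thousand triangles.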