
In OpenGL, are the near plane and the image plane the same thing?

I'm an Android developer but a beginner at OpenGL. I'm confused by the view frustum. There are two questions:

  1. Are the near plane and the picture (image) plane the same thing?
  2. If they are not the same thing, why isn't there a z-value to specify for the picture plane?

I'm not proficient in English; I hope you can understand my questions. Any help will be appreciated!

No, they're different concepts, and an image plane isn't needed in OpenGL.

Near and far planes exist as a result of using a projection matrix to define the camera's projection. They're quite important when you get into the depth buffer and numerical accuracy. The image is the result of rendering, obviously. The image plane, however, only exists in real-world cameras, where light is projected through a pinhole or lens onto the thing recording the image. In computer graphics it's not really necessary to model it unless you're doing crazy non-thin-lens effects. I imagine it like this:

[figure: the view frustum drawn alongside the real-camera parts, which are circled "imaginary"; original image not available]

Again, the parts circled "imaginary" don't have any purpose in OpenGL and rasterization. They're necessary for a real camera, but artificially computing an image can take some shortcuts. Firstly, the projection matrix is used to project geometry into image space, keeping depth values. Image space is quite separate from the concept of the image plane. We don't tend to think about the position and size of the sensor or film in a camera when looking at a photo. The geometry is then rasterized, producing fragments (packets of data for pixels). The depth buffer is used to keep only the front-most, visible fragments. This is just based on depth comparisons. In one sense a pixel isn't a surface on an explicit image plane, but represents a cut-out portion of the viewing volume.
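To make this concrete, here's a minimal Java sketch (using Android's android.opengl.Matrix utility class, since the question comes from Android; the class FrustumSketch and its method names are just made up for the example). Building the projection asks only for the near and far clip distances, and the depth value the depth test is ultimately based on comes from projecting a point and dividing by w; no image-plane position is ever specified anywhere.

    import android.opengl.Matrix;

    public class FrustumSketch {

        // Build a perspective projection. Only the field of view, aspect
        // ratio, and the near/far clip distances are specified; there is
        // no "image plane z" parameter.
        public static float[] buildProjection(float aspect) {
            float[] proj = new float[16];
            Matrix.perspectiveM(proj, 0,
                    45.0f,   // vertical field of view, in degrees
                    aspect,  // viewport width / height
                    0.1f,    // near plane distance (keep > 0; affects depth precision)
                    100.0f); // far plane distance
            return proj;
        }

        // Project a view-space point (the OpenGL camera looks down -Z) and
        // perform the homogeneous divide. The resulting z in [-1, 1] is what
        // the depth comparison is derived from; near/far only enter through
        // the matrix.
        public static float[] toNdc(float[] proj, float x, float y, float z) {
            float[] eye  = { x, y, z, 1.0f };
            float[] clip = new float[4];
            Matrix.multiplyMV(clip, 0, proj, 0, eye, 0); // clip space
            float w = clip[3];
            return new float[] { clip[0] / w, clip[1] / w, clip[2] / w };
        }
    }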

The wiki page for homogeneous coordinates discusses the mapping of points onto the plane z = 1, though arguing that this is the depth of the image plane doesn't seem to serve any purpose.
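For reference, the mapping that page describes is just a divide by the homogeneous coordinate:

    (x : y : z)  ->  (x/z, y/z, 1),    z ≠ 0

which places every point on the plane z = 1. It's the same kind of divide as the divide by w above; calling z = 1 an "image plane" doesn't change anything in the pipeline.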

There's a bunch of literature about camera models out there, but I don't know which I should point out. If others do, please comment/edit! Perhaps start with the red book? http://www.glprogramming.com/red/chapter03.html
