
Camera digital zoom in iOS 4.0 and later

How can I implement a digital zoom slider for the camera? I use the following APIs: AVCaptureVideoPreviewLayer, AVCaptureSession, AVCaptureVideoDataOutput, and AVCaptureDeviceInput.
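For reference, this is roughly how I have those pieces wired up. A trimmed sketch, assuming self is a view controller adopting AVCaptureVideoDataOutputSampleBufferDelegate; error handling and device availability checks are omitted:

    // #import <AVFoundation/AVFoundation.h>
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Camera input
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input) [session addInput:input];

    // Frame output, delivered as 32BGRA on a private serial queue
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = [NSDictionary dictionaryWithObject:
                                [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    dispatch_queue_t queue = dispatch_queue_create("videoQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    [session addOutput:output];

    // Preview layer inserted into the view hierarchy
    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    previewLayer.frame = self.view.bounds;
    [self.view.layer addSublayer:previewLayer];

    [session startRunning];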

I would like to have the same slider that is available in the iPhone 4 camera app.
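Concretely, I imagine the slider action would look something like the sketch below. As far as I can tell there is no public zoom property on AVCaptureDevice in iOS 4 (videoZoomFactor only arrived in iOS 7), so the preview presumably has to be faked by scaling the layer; zoomSliderChanged: and the previewLayer property are my own hypothetical names:

    // #import <QuartzCore/QuartzCore.h>
    // Slider range, e.g. 1.0 to 4.0
    - (IBAction)zoomSliderChanged:(UISlider *)slider
    {
        CGFloat scale = slider.value;
        [CATransaction begin];
        [CATransaction setDisableActions:YES]; // no implicit animation while dragging
        // Scales only the on-screen preview; the output frames are untouched.
        self.previewLayer.transform = CATransform3DMakeScale(scale, scale, 1.0);
        [CATransaction commit];
    }

The captured frames would still need to be cropped by the same factor so the saved image matches what the preview shows.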

Thanks in advance for any tips and examples!

I'm a newbie, and I have tried doing a zoom with the AVFoundation framework only, using an AVCaptureVideoPreviewLayer, and I can't make it work either. I think it's because that layer has its own AVCaptureSession which controls its own output; even though I added it as a sublayer to a UIScrollView, it still runs on its own and the scroll view can't affect the preview layer.
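One variation I have not verified: rather than adding the preview layer straight to the scroll view, host it in an ordinary UIView and return that view from the zooming delegate method, so the scroll view's pinch gesture scales the host view (and the layer with it). previewHostView is a hypothetical property:

    // Host the preview layer in a plain view that UIScrollView can zoom.
    self.previewHostView = [[UIView alloc] initWithFrame:scrollView.bounds];
    previewLayer.frame = self.previewHostView.bounds;
    [self.previewHostView.layer addSublayer:previewLayer];
    [scrollView addSubview:self.previewHostView];
    scrollView.minimumZoomScale = 1.0;
    scrollView.maximumZoomScale = 4.0;
    scrollView.delegate = self;

    // UIScrollViewDelegate
    - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
    {
        return self.previewHostView;
    }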

From WWDC session 419, "Capture from camera using AVFoundation in iOS 5", Brad Ford said: "AVCaptureVideoPreviewLayer does NOT inherit from AVCaptureOutput (like AVCaptureVideoDataOutput does). It inherits from CALayer and can be inserted into a Core Animation tree (like other layers). In AVFoundation, the session owns its outputs, but does NOT own its layers; the layers own the session. So if you want to insert a layer into a view hierarchy, you attach a session to it and forget about it. Then when the layer tree disposes of itself, it will clean up the session as well."
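In code, that ownership rule comes down to something like this sketch; the layer retains the session, so nothing else has to keep it alive once the layer is in the tree:

    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    [self.view.layer addSublayer:previewLayer];
    // "Attach a session to it and forget about it": when the layer tree
    // is disposed of, the session is cleaned up along with it.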

I have seen Brad Larson using a combination of OpenGL ES and the AVFoundation framework at http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios, where he uses an AVCaptureVideoPreviewLayer and can adjust the raw data from the camera, so I assume that's the place to start. Check out his ColorTrackingCamera app. It uses shaders, which you (and I) don't need in order to zoom, but I think a similar mechanism can be used to zoom.

Oh, I forgot to mention that Brad Larson does NOT attach the AVCaptureInput to the AVCaptureSession. I can also see that he uses the main thread for his queue instead of creating his own queue on another thread. His OpenGL ES drawFrame method is also how he renders the image; the capture session itself is not doing that. So, if you understand more, or my assumptions are wrong, please let me know too.

Hope this helps, but since I am new to all of this, and to OpenGL ES, I am assuming that library can be used to zoom if we can capture each frame and turn it into a UIImage with a different resolution and/or frame size.
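To make that per-frame idea concrete, here is a hedged sketch using Core Image (available from iOS 5) rather than OpenGL ES. It crops the center of each captured frame by a zoom factor and turns it into a UIImage, assuming the video output is configured for kCVPixelFormatType_32BGRA as above; zoomedImageFromSampleBuffer:factor: is a made-up helper name:

    // #import <CoreImage/CoreImage.h>
    // Call from captureOutput:didOutputSampleBuffer:fromConnection:.
    // Keeps the middle 1/zoom of the frame, so zoom = 2.0 keeps the center half.
    // In practice, create the CIContext once and reuse it; it is expensive.
    - (UIImage *)zoomedImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
                                      factor:(CGFloat)zoom
    {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];

        CGRect extent = [frame extent];
        CGFloat w = extent.size.width / zoom;
        CGFloat h = extent.size.height / zoom;
        CGRect crop = CGRectMake(CGRectGetMidX(extent) - w / 2.0,
                                 CGRectGetMidY(extent) - h / 2.0,
                                 w, h);

        CIContext *context = [CIContext contextWithOptions:nil];
        CGImageRef cgImage = [context createCGImage:[frame imageByCroppingToRect:crop]
                                           fromRect:crop];
        UIImage *image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        return image;
    }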

Jeff W.
