What Techniques Are Best To Live Stream iPhone Video Camera Data To a Computer?

I would like to stream video from an iPhone camera to an app running on a Mac. Think sorta like video chat, but only one way: from the device to a receiver app (and it's not video chat).

My basic understanding so far:

  1. You can use AVFoundation to get 'live' video camera data without saving to a file, but it is uncompressed data and thus I'd have to handle compression on my own (see the sketch after this list).
  2. There's no built-in AVCaptureOutput support for sending to a network location; I'd have to work this bit out on my own.
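
For reference, a minimal Swift sketch of the capture side in point 1. The class name, queue label, preset, and pixel format are placeholder choices of mine, not anything AVFoundation prescribes:

```swift
import AVFoundation

// Pull live frames from the camera with AVFoundation instead of
// recording to a file. Frames arrive as uncompressed pixel buffers.
final class CameraCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "capture.queue")

    func start() throws {
        session.sessionPreset = .medium  // lower presets mean less data to compress and send

        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        // Ask for a planar YUV format the hardware produces cheaply.
        output.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as String:
                kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
        ]
        output.alwaysDiscardsLateVideoFrames = true
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()
    }

    // Called roughly once per frame (e.g. 30 times a second).
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Point 2: compression and network transport are up to you from here.
    }
}
```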

Am I right about the above, or am I already off-track?

Apple Tech Q&A 1702 provides some info on saving off individual frames as images. Is this the best way to go about it? Just save off 30 frames per second and then use something like ffmpeg to compress them?
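
For scale, the per-frame approach amounts to something like the following Swift sketch, which converts one captured CMSampleBuffer to JPEG data. The Core Image route here is my substitution for the Q&A's original Objective-C/Core Graphics sample:

```swift
import AVFoundation
import CoreImage

// Turn one captured sample buffer into JPEG data. Doing this 30 times a
// second on the CPU (plus a second pass through ffmpeg) is costly, which
// is why a real video encoder is usually the better route.
func jpegData(from sampleBuffer: CMSampleBuffer, context: CIContext) -> Data? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return nil
    }
    let image = CIImage(cvPixelBuffer: pixelBuffer)
    return context.jpegRepresentation(of: image,
                                      colorSpace: CGColorSpaceCreateDeviceRGB())
}
```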

There's a lot of discussion of live streaming to the iPhone, but far less info from people who are sending live video out. I'm hoping for some broad strokes to point me in the right direction.

You can use AVCaptureVideoDataOutput and a sampleBufferDelegate to capture raw compressed frames; then you just need to stream them over the network. AVFoundation provides an API to encode frames to local video files, but doesn't provide any for streaming to the network. Your best bet is to find a library that streams raw frames over the network. I'd start with ffmpeg; I believe libavformat supports RTSP, so look at the ffserver code.
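
As a placeholder for what such a library would do, here is a deliberately naive Swift sketch that length-prefixes each encoded frame and pushes it over TCP with the Network framework. The FrameSender name and the 4-byte framing are assumptions of mine; a real pipeline would speak RTSP/RTP (e.g. via libavformat) instead:

```swift
import Foundation
import Network

// Naive stand-in for a real streaming library: length-prefix each
// encoded frame and push it over TCP to the receiver app on the Mac.
final class FrameSender {
    private let connection: NWConnection

    init(host: String, port: UInt16) {
        connection = NWConnection(host: NWEndpoint.Host(host),
                                  port: NWEndpoint.Port(rawValue: port)!,
                                  using: .tcp)
        connection.start(queue: .global())
    }

    func send(frame: Data) {
        // 4-byte big-endian length header so the Mac side can re-frame the stream.
        var length = UInt32(frame.count).bigEndian
        var packet = Data(bytes: &length, count: 4)
        packet.append(frame)
        connection.send(content: packet, completion: .contentProcessed({ error in
            if let error = error { print("send failed: \(error)") }
        }))
    }
}
```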

Note that you should configure AVCaptureVideoDataOutput to give you compressed frames, so you avoid having to compress raw video frames without the benefit of hardware encoding.
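
One caveat on that last point: on iOS, AVCaptureVideoDataOutput only vends uncompressed pixel buffers, so the usual way to get hardware compression is to feed them to VideoToolbox. A sketch, assuming H.264 at 640x480; the class name and property choices are mine:

```swift
import VideoToolbox
import CoreMedia

// Hand each captured pixel buffer to the hardware H.264 encoder.
final class HardwareEncoder {
    private var session: VTCompressionSession?

    init?(width: Int32 = 640, height: Int32 = 480) {
        let status = VTCompressionSessionCreate(
            allocator: nil, width: width, height: height,
            codecType: kCMVideoCodecType_H264,
            encoderSpecification: nil, imageBufferAttributes: nil,
            compressedDataAllocator: nil, outputCallback: nil, refcon: nil,
            compressionSessionOut: &session)
        guard status == noErr, let session = session else { return nil }
        // Favor throughput over quality, as a live stream wants.
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime,
                             value: kCFBooleanTrue)
    }

    func encode(_ pixelBuffer: CVPixelBuffer, pts: CMTime) {
        guard let session = session else { return }
        VTCompressionSessionEncodeFrame(
            session, imageBuffer: pixelBuffer,
            presentationTimeStamp: pts, duration: .invalid,
            frameProperties: nil, infoFlagsOut: nil) { _, _, sampleBuffer in
            // sampleBuffer now holds an H.264-compressed frame,
            // ready to be handed to the network code.
            _ = sampleBuffer
        }
    }
}
```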

This depends a lot on your target resolution and the frame rate performance you are aiming for.

From an abstract point of view, I would probably have a capture thread that fills a buffer directly from AVCaptureOutput, and a communications thread that sends the buffer (padded if need be) to a previously specified host every x milliseconds and then rezeroes it.
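
A sketch of that two-thread arrangement, reusing the hypothetical FrameSender from the earlier sketch; the lock, the 100 ms default interval, and clearing the buffer as the "rezero" step are my assumptions:

```swift
import Foundation

// Capture callback appends into a shared buffer; a timer on a second
// queue ships the buffer to the host every `interval` and rezeroes it.
final class BufferedUplink {
    private var buffer = Data()
    private let lock = NSLock()
    private let timer: DispatchSourceTimer

    init(sender: FrameSender, interval: DispatchTimeInterval = .milliseconds(100)) {
        timer = DispatchSource.makeTimerSource(queue: DispatchQueue(label: "uplink"))
        timer.schedule(deadline: .now() + interval, repeating: interval)
        timer.setEventHandler { [weak self] in
            guard let self = self else { return }
            self.lock.lock()
            let outgoing = self.buffer
            self.buffer.removeAll(keepingCapacity: true)  // "rezero" for the next window
            self.lock.unlock()
            if !outgoing.isEmpty { sender.send(frame: outgoing) }
        }
        timer.resume()
    }

    // Called from the capture thread with each frame's bytes.
    func append(_ frameBytes: Data) {
        lock.lock()
        buffer.append(frameBytes)
        lock.unlock()
    }
}
```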

After you accomplish the initial data transfer, I would work on achieving 15 fps at the lowest resolution, then work my way up until the buffer overflows before the communication thread can transmit. That would require balancing image resolution, buffer size (probably dependent on GSM, and soon-to-be CDMA, frame sizes), and finally the maximum rate at which you can transmit that buffer.
