
Peer-to-peer video in iOS

I am trying to transmit real-time video buffers from one iPhone to another iPhone (the client iPhone) for preview display, and also to accept commands from the client iPhone. I am looking for a standard way to achieve this. The closest thing I found is AVCaptureMultipeerVideoDataOutput on GitHub.

However, that still uses the Multipeer Connectivity framework, and I think it still requires some setup on both iPhones. Ideally there should be no setup required on either iPhone: as long as Wi-Fi (or, if possible, Bluetooth) is enabled on both iPhones, the peers should recognize each other within the app and prompt the user about device discovery. What are the standard ways to achieve this, and are there any links to sample code?

EDIT: I got it working through Multipeer Connectivity after writing the code from scratch. As of now, I am sending the pixel buffers to the peer device by downscaling and compressing the data as JPEG. On the remote device, I have a UIImage set up that I display every frame. However, I think UIKit may not be the best way to display the data, even though the images are small. How do I display this data using OpenGL ES? Is direct decoding of JPEG possible in OpenGL ES?
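For reference, discovery in Multipeer Connectivity needs no manual pairing: one device advertises a service type over Wi-Fi and Bluetooth while the other browses for it, and MCBrowserViewController prompts the user with the peers it finds. Below is a minimal sketch of that setup; the service type, the use of MCAdvertiserAssistant/MCBrowserViewController, and the surrounding view-controller code are placeholders rather than my actual implementation.

#import <MultipeerConnectivity/MultipeerConnectivity.h>

// One peer/session per device; the session delegate later receives the video data.
MCPeerID *peerID = [[MCPeerID alloc] initWithDisplayName:[[UIDevice currentDevice] name]];
MCSession *session = [[MCSession alloc] initWithPeer:peerID
                                    securityIdentity:nil
                                encryptionPreference:MCEncryptionNone];
session.delegate = self;

// Camera side: advertise a service type so nearby clients can find this device.
MCAdvertiserAssistant *assistant = [[MCAdvertiserAssistant alloc] initWithServiceType:@"my-video"
                                                                        discoveryInfo:nil
                                                                              session:session];
[assistant start];

// Client side: present the built-in browser, which lists nearby peers and asks the user to connect.
MCBrowserViewController *browser = [[MCBrowserViewController alloc] initWithServiceType:@"my-video"
                                                                                 session:session];
browser.delegate = self;
[self presentViewController:browser animated:YES completion:nil];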

Answer:

As of now, I am sending the pixel buffers to the peer device by downscaling and compressing the data as JPEG. On the remote device, I have a UIImage set up that I display every frame. However, I think UIKit may not be the best way to display the data, even though the images are small.

Turns out, this is the best way to transmit an image via the Multipeer Connectivity framework. I have tried all the alternatives:

  1. I've compressed frames using VideoToolbox. Too slow. (A rough sketch of this approach appears right after this list.)
  2. I've compressed frames using the Compression framework. Too slow, but better.
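For context, here is roughly what attempt #1 looks like with VideoToolbox. This is a reconstruction rather than the original code: it skips the SPS/PPS parameter-set handling a real H.264 decoder on the receiving side would need, compressionSession is an assumed property, and the callback assumes its refcon is the view controller that owns the MCSession.

#import <VideoToolbox/VideoToolbox.h>

// Output callback: VideoToolbox calls this with each encoded frame.
static void compressionOutputCallback(void *outputCallbackRefCon,
                                      void *sourceFrameRefCon,
                                      OSStatus status,
                                      VTEncodeInfoFlags infoFlags,
                                      CMSampleBufferRef sampleBuffer)
{
    if (status != noErr || sampleBuffer == NULL) {
        return;
    }

    // Pull the encoded bytes out of the sample buffer and send them to the connected peers.
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t totalLength = 0;
    char *dataPointer = NULL;
    if (CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &totalLength, &dataPointer) == kCMBlockBufferNoErr) {
        NSData *encoded = [NSData dataWithBytes:dataPointer length:totalLength];
        ViewController *viewController = (__bridge ViewController *)outputCallbackRefCon;
        [viewController.session sendData:encoded
                                 toPeers:viewController.session.connectedPeers
                                withMode:MCSessionSendDataReliable
                                   error:nil];
    }
}

// Create the hardware H.264 encoder once, e.g. when capture starts.
- (void)setUpCompressionSession
{
    VTCompressionSessionRef session = NULL;
    VTCompressionSessionCreate(kCFAllocatorDefault, 640, 480, kCMVideoCodecType_H264,
                               NULL, NULL, NULL,
                               compressionOutputCallback, (__bridge void *)self, &session);
    VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    VTCompressionSessionPrepareToEncodeFrames(session);
    self.compressionSession = session;   // assumed VTCompressionSessionRef property
}

// Feed each captured frame to the encoder.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CMTime presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    VTCompressionSessionEncodeFrame(self.compressionSession, imageBuffer, presentationTimeStamp,
                                    kCMTimeInvalid, NULL, NULL, NULL);
}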

Let me provide some code for #2:

On the iOS device transmitting image data:

// Requires the Compression framework: #import <compression.h>

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bufferSize = CVPixelBufferGetBytesPerRow(imageBuffer) * CVPixelBufferGetHeight(imageBuffer);

    // Copy the pixels before unlocking; the pixel buffer must not be touched
    // from the compression queue after this method returns.
    NSData *pixelData = [NSData dataWithBytes:baseAddress length:bufferSize];
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    dispatch_async(self.compressionQueue, ^{
        // zlib-compress the raw BGRA frame and send it to every connected peer.
        uint8_t *compressed = malloc(bufferSize);
        size_t compressedSize = compression_encode_buffer(compressed, bufferSize, pixelData.bytes, bufferSize, NULL, COMPRESSION_ZLIB);
        NSData *data = [NSData dataWithBytes:compressed length:compressedSize];
        free(compressed);
        NSLog(@"Sending size: %lu", (unsigned long)[data length]);
        dispatch_async(dispatch_get_main_queue(), ^{
            __autoreleasing NSError *err;
            [((ViewController *)self.parentViewController).session sendData:data toPeers:((ViewController *)self.parentViewController).session.connectedPeers withMode:MCSessionSendDataReliable error:&err];
        });
    });
}

On the iOS device displaying image data:

// Carries one received frame between the cache queue and the display queue
// in the second receiving example below.
typedef struct {
    size_t length;
    void *data;
} ImageCacheDataStruct;

- (void)session:(nonnull MCSession *)session didReceiveData:(nonnull NSData *)data fromPeer:(nonnull MCPeerID *)peerID
{
    NSLog(@"Receiving size: %lu", (unsigned long)[data length]);

    // Decompress back into a raw 640 x 480 BGRA frame (640 * 480 * 4 bytes, 2560 bytes per row).
    size_t bufferSize = 640 * 480 * 4;
    uint8_t *original = malloc(bufferSize);
    size_t originalSize = compression_decode_buffer(original, bufferSize, [data bytes], [data length], NULL, COMPRESSION_ZLIB);
    if (originalSize == 0) {
        free(original);
        return;
    }

    // Wrap the raw pixels in a CGImage; UIImage retains the CGImage, so the
    // intermediate objects and the pixel buffer can be released right away.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(original, 640, 480, 8, 2560, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    UIImage *image = [[UIImage alloc] initWithCGImage:newImage scale:1 orientation:UIImageOrientationUp];

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(newImage);
    free(original);

    if (image) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [((ViewerViewController *)self.childViewControllers.lastObject).view.layer setContents:(__bridge id)image.CGImage];
        });
    }
}

Although this code produces original-quality images on the receiving end, you'll find this far too slow for real-time playback.

Here's the best way to do it:

On the iOS device sending the image data:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Wrap the BGRA pixel buffer in a CGImage while it is locked.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    UIImage *image = [[UIImage alloc] initWithCGImage:newImage scale:1 orientation:UIImageOrientationUp];
    CGImageRelease(newImage);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // JPEG-encode the frame (quality 0.7) and send it to every connected peer.
    if (image) {
        NSData *data = UIImageJPEGRepresentation(image, 0.7);
        NSError *err;
        [((ViewController *)self.parentViewController).session sendData:data toPeers:((ViewController *)self.parentViewController).session.connectedPeers withMode:MCSessionSendDataReliable error:&err];
    }
}
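For the bitmap-context flags above (32-bit little-endian, premultiplied first) to match the incoming pixel data, the capture output has to be configured to deliver 32BGRA pixel buffers. A minimal capture-setup sketch, with an arbitrary queue name and the rest of the AVCaptureSession setup omitted:

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];

// Ask the camera for BGRA frames so the CGBitmapContext settings above are valid.
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

// Deliver frames on a serial background queue to the delegate implementing
// captureOutput:didOutputSampleBuffer:fromConnection:.
dispatch_queue_t cameraQueue = dispatch_queue_create("camera.frames", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:cameraQueue];

if ([self.captureSession canAddOutput:videoOutput]) {   // assumes an AVCaptureSession property named captureSession
    [self.captureSession addOutput:videoOutput];
}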

On the iOS device receiving the image data:

- (void)session:(nonnull MCSession *)session didReceiveData:(nonnull NSData *)data fromPeer:(nonnull MCPeerID *)peerID
{
    // Key for the display queue's queue-specific frame slot.
    static char kMyKey;

    dispatch_async(self.imageCacheDataQueue, ^{
        // Block until the previous frame has actually been displayed.
        dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);

        // Copy the incoming bytes into a struct that the display queue owns and frees.
        ImageCacheDataStruct *imageCacheDataStruct = calloc(1, sizeof(ImageCacheDataStruct));
        imageCacheDataStruct->length = [data length];
        imageCacheDataStruct->data = malloc(imageCacheDataStruct->length);
        memcpy(imageCacheDataStruct->data, [data bytes], imageCacheDataStruct->length);

        // Hand the frame to the display queue through its queue-specific storage.
        dispatch_queue_set_specific(self.imageDisplayQueue, &kMyKey, imageCacheDataStruct, NULL);

        dispatch_sync(self.imageDisplayQueue, ^{
            ImageCacheDataStruct *queuedFrame = dispatch_queue_get_specific(self.imageDisplayQueue, &kMyKey);
            NSData *imageData = [NSData dataWithBytes:queuedFrame->data length:queuedFrame->length];
            free(queuedFrame->data);
            free(queuedFrame);

            UIImage *image = [UIImage imageWithData:imageData];
            if (image) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    [((ViewerViewController *)self.childViewControllers.lastObject).view.layer setContents:(__bridge id)image.CGImage];
                    // Let the next frame through only once this one is on screen.
                    dispatch_semaphore_signal(self.semaphore);
                });
            } else {
                dispatch_semaphore_signal(self.semaphore);
            }
        });
    });
}

The reason for the semaphore and the separate GCD queues is simple: you want the frames to be displayed at equal time intervals. Otherwise the video will appear to slow down at times and then speed up well past normal speed to catch up. This scheme ensures that each frame is displayed one after another at the same pace, regardless of network bandwidth bottlenecks.
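The snippets above assume self.semaphore, self.imageCacheDataQueue and self.imageDisplayQueue already exist; their creation isn't shown. A minimal setup sketch (for example in viewDidLoad), assuming the semaphore starts at 1 so exactly one frame is in flight at a time and both queues are serial:

// Pacing semaphore: didReceiveData waits on it, the main-queue display block signals it.
self.semaphore = dispatch_semaphore_create(1);

// Two serial queues: one buffers incoming frames, one hands them to the display.
self.imageCacheDataQueue = dispatch_queue_create("image.cache", DISPATCH_QUEUE_SERIAL);
self.imageDisplayQueue = dispatch_queue_create("image.display", DISPATCH_QUEUE_SERIAL);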
