
Quickblox video chat saving

I am using the QuickBlox iOS SDK for video chat in my app. It works fine. Now I want to record the chat video and save it to the camera roll. How can I do that? I have read their documentation and implemented this:

-(IBAction)record:(id)sender {

    // Create video Chat
    videoChat = [[QBChat instance] createAndRegisterVideoChatInstance];
    [videoChat setIsUseCustomVideoChatCaptureSession:YES];

    // Create capture session
    captureSession = [[AVCaptureSession alloc] init];

    // ... setup capture session here

    /*We create a serial queue to handle the processing of our frames*/
    dispatch_queue_t callbackQueue = dispatch_queue_create("cameraQueue", NULL);
    [videoCaptureOutput setSampleBufferDelegate:self queue:callbackQueue];

    /*We start the capture*/
    [captureSession startRunning];
}

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // Do something with samples
    // ...

    // forward video samples to SDK
    [videoChat processVideoChatCaptureVideoSample:sampleBuffer];
}

But I am not sure where to go from here. How do I get the video data?
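One way to get at the video data (a general AVFoundation approach, not something the QuickBlox SDK provides) is to tee the sample buffers that already pass through the delegate callback into an AVAssetWriter. Below is a minimal sketch, assuming hypothetical assetWriter and writerInput properties declared on the class; the 192x144 output size assumes AVCaptureSessionPresetLow, which the QuickBlox sample below uses.

#import <AVFoundation/AVFoundation.h>

// Hypothetical properties, declared alongside the capture session:
// @property (nonatomic, retain) AVAssetWriter *assetWriter;
// @property (nonatomic, retain) AVAssetWriterInput *writerInput;

- (void)setupRecorder {
    // Record into a temporary .mov file; AVAssetWriter fails if the file already exists
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"chat_recording.mov"];
    [[NSFileManager defaultManager] removeItemAtPath:path error:nil];

    NSError *error = nil;
    self.assetWriter = [AVAssetWriter assetWriterWithURL:[NSURL fileURLWithPath:path]
                                                fileType:AVFileTypeQuickTimeMovie
                                                   error:&error];

    // Assumed output size: AVCaptureSessionPresetLow is 192x144 on most devices
    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @192,
                                AVVideoHeightKey : @144 };
    self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:settings];
    self.writerInput.expectsMediaDataInRealTime = YES;
    [self.assetWriter addInput:self.writerInput];
}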

From the quickblox documentation:

To set up a custom video capture session you just need to follow a few simple steps:

1. Create an AVCaptureSession instance and set up its input and output
2. Implement the frames callback and forward all frames to the QuickBlox iOS SDK
3. Tell the QuickBlox SDK that you will use your own capture session

To set up the custom video capture session, configure the input and output:

-(void) setupVideoCapture{
self.captureSession = [[AVCaptureSession alloc] init];

__block NSError *error = nil;

// set preset
[self.captureSession setSessionPreset:AVCaptureSessionPresetLow];


// Setup the Video input
AVCaptureDevice *videoDevice = [self frontFacingCamera];
//
AVCaptureDeviceInput *captureVideoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if(error){
    QBDLogEx(@"deviceInputWithDevice Video error: %@", error);
}else{
    if ([self.captureSession  canAddInput:captureVideoInput]){
        [self.captureSession addInput:captureVideoInput];
    }else{
        QBDLogEx(@"cantAddInput Video");
    }
}

// Setup Video output
AVCaptureVideoDataOutput *videoCaptureOutput = [[AVCaptureVideoDataOutput alloc] init];
videoCaptureOutput.alwaysDiscardsLateVideoFrames = YES;
//
// Set the video output to store frame in BGRA (It is supposed to be faster)
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[videoCaptureOutput setVideoSettings:videoSettings];
/*Add the output to the capture session*/
if([self.captureSession canAddOutput:videoCaptureOutput]){
    [self.captureSession addOutput:videoCaptureOutput];
}else{
    QBDLogEx(@"cantAddOutput");
}
[videoCaptureOutput release];


// set FPS
int framesPerSecond = 3;
AVCaptureConnection *conn = [videoCaptureOutput connectionWithMediaType:AVMediaTypeVideo];
if (conn.isVideoMinFrameDurationSupported){
    conn.videoMinFrameDuration = CMTimeMake(1, framesPerSecond);
}
if (conn.isVideoMaxFrameDurationSupported){
    conn.videoMaxFrameDuration = CMTimeMake(1, framesPerSecond);
}

/*We create a serial queue to handle the processing of our frames*/
dispatch_queue_t callbackQueue= dispatch_queue_create("cameraQueue", NULL);
[videoCaptureOutput setSampleBufferDelegate:self queue:callbackQueue];
dispatch_release(callbackQueue);

// Add preview layer
AVCaptureVideoPreviewLayer *prewLayer = [[[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession] autorelease];
[prewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CGRect layerRect = [[myVideoView layer] bounds];
[prewLayer setBounds:layerRect];
[prewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect),CGRectGetMidY(layerRect))];
myVideoView.hidden = NO;
[myVideoView.layer addSublayer:prewLayer];


/*We start the capture*/
[self.captureSession startRunning];
}

- (AVCaptureDevice *) cameraWithPosition:(AVCaptureDevicePosition) position{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if ([device position] == position) {
            return device;
        }
    }
    return nil;
}


- (AVCaptureDevice *) backFacingCamera{
    return [self cameraWithPosition:AVCaptureDevicePositionBack];
}

- (AVCaptureDevice *) frontFacingCamera{
    return [self cameraWithPosition:AVCaptureDevicePositionFront];
}

Implement the frames callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput  didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // Usually we just forward camera frames to QuickBlox SDK
    // But we also can do something with them before, for example - apply some video filters or so  
    [self.videoChat processVideoChatCaptureVideoSample:sampleBuffer];
}
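Since this delegate callback already receives every camera frame, the same method can append each buffer to the movie file before handing it to the SDK. A sketch, continuing with the hypothetical assetWriter/writerInput properties from above:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // Start the writer session lazily, stamped with the first frame's timestamp
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (self.assetWriter.status == AVAssetWriterStatusUnknown) {
        [self.assetWriter startWriting];
        [self.assetWriter startSessionAtSourceTime:timestamp];
    }

    // Append the frame to the movie file when the input can accept it
    if (self.assetWriter.status == AVAssetWriterStatusWriting &&
        self.writerInput.isReadyForMoreMediaData) {
        [self.writerInput appendSampleBuffer:sampleBuffer];
    }

    // Still forward the frame to QuickBlox so the video chat keeps working
    [self.videoChat processVideoChatCaptureVideoSample:sampleBuffer];
}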

Tell the QuickBlox iOS SDK that we use our own video capture session:

self.videoChat = [[QBChat instance] createAndRegisterVideoChatInstance];
self.videoChat.viewToRenderOpponentVideoStream = opponentVideoView;
//
// we use own video capture session
self.videoChat.isUseCustomVideoChatCaptureSession = YES;
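
When the recording should stop, the writer has to be finalized before the file is playable; the finished movie can then be copied to the camera roll with UIKit's UISaveVideoAtPathToSavedPhotosAlbum. Again a sketch built on the hypothetical properties above (the method name stopRecordingAndSave is made up for illustration):

- (void)stopRecordingAndSave {
    [self.writerInput markAsFinished];
    [self.assetWriter finishWritingWithCompletionHandler:^{
        NSString *path = self.assetWriter.outputURL.path;
        // Copy the finished movie into the camera roll, if the format is accepted
        if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path)) {
            UISaveVideoAtPathToSavedPhotosAlbum(path, nil, NULL, NULL);
        }
    }];
}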
