How to add a camera preview view to three custom UIViews in iOS Swift

I need to create an app with video processing.

My requirement is to create three views fed by the camera preview: the first view should show the raw captured video, the second a flipped copy of it, and the third the same video with inverted colors.

I started developing against this requirement. First I created the three views and the properties required for camera capture:

    @IBOutlet weak var captureView: UIView!
    @IBOutlet weak var flipView: UIView!
    @IBOutlet weak var InvertView: UIView!

    // Camera capture required properties
    var videoDataOutput: AVCaptureVideoDataOutput!
    var videoDataOutputQueue: DispatchQueue!
    var previewLayer:AVCaptureVideoPreviewLayer!
    var captureDevice : AVCaptureDevice!
    let session = AVCaptureSession()
    var replicationLayer: CAReplicatorLayer!

[image]

Then I set up and started the camera session in an AVCaptureVideoDataOutputSampleBufferDelegate extension (this is the Swift 3-era AVFoundation API):

extension ViewController:  AVCaptureVideoDataOutputSampleBufferDelegate{
    func setupAVCapture(){
        session.sessionPreset = AVCaptureSessionPreset640x480
        guard let device = AVCaptureDevice
            .defaultDevice(withDeviceType: .builtInWideAngleCamera,
                           mediaType: AVMediaTypeVideo,
                           position: .back) else{
                            return
        }
        captureDevice = device
        beginSession()
    }

    func beginSession(){
        let deviceInput: AVCaptureDeviceInput
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        } catch {
            print("error: \(error.localizedDescription)")
            return
        }
        if self.session.canAddInput(deviceInput){
            self.session.addInput(deviceInput)
        }

        videoDataOutput = AVCaptureVideoDataOutput()
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
        videoDataOutput.setSampleBufferDelegate(self, queue:self.videoDataOutputQueue)
        if session.canAddOutput(self.videoDataOutput){
            session.addOutput(self.videoDataOutput)
        }
        videoDataOutput.connection(withMediaType: AVMediaTypeVideo).isEnabled = true

        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
        self.previewLayer.frame = self.captureView.bounds
        self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspect

        self.replicationLayer = CAReplicatorLayer()
        self.replicationLayer.frame = self.captureView.bounds
        self.replicationLayer.instanceCount = 1
        self.replicationLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.captureView.bounds.size.height, 0.0)

        self.replicationLayer.addSublayer(self.previewLayer)
        self.captureView.layer.addSublayer(self.replicationLayer)
        self.flipView.layer.addSublayer(self.replicationLayer)
        self.InvertView.layer.addSublayer(self.replicationLayer)
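        // Note: a CALayer can have only one superlayer, so after these three
        // addSublayer calls the replicator layer lives in InvertView alone.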

        session.startRunning()
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        // do stuff here
    }

    // clean up AVCapture
    func stopCamera(){
        session.stopRunning()
    }

}

Here I used a CAReplicatorLayer to show the video in the three views. With self.replicationLayer.instanceCount set to 1 (the original layer only, no copies), I got this output:

[image]

If I set self.replicationLayer.instanceCount to 3, I got this output:

[image]

So please guide me on how to show the captured video in three different views, and give me some ideas for turning the original captured video into the flipped and color-inverted versions. Thanks in advance.
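For the plain previews, one workaround is to give each view its own AVCaptureVideoPreviewLayer, all sharing the same session, since the replicator layer can only end up under one view. A minimal sketch, assuming the same Swift 3-era API and the outlets declared above:

    for view in [captureView, flipView, InvertView] as [UIView] {
        // Each view gets its own preview layer; all three share `session`.
        let preview: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
        preview.frame = view.bounds
        preview.videoGravity = AVLayerVideoGravityResizeAspect
        view.layer.addSublayer(preview)
    }

A layer transform could flip one of these previews, but a preview layer cannot invert colors, which is why the approach below renders the frames itself with Core Image.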

Finally, I found the answer with the help of the JohnnySlagle/Multiple-Camera-Feeds code.

I created three views:

@property (weak, nonatomic) IBOutlet UIView *video1;
@property (weak, nonatomic) IBOutlet UIView *video2;
@property (weak, nonatomic) IBOutlet UIView *video3;

Then I slightly changed setupFeedViews:

- (void)setupFeedViews {
    NSUInteger numberOfFeedViews = 3;

    for (NSUInteger i = 0; i < numberOfFeedViews; i++) {
        VideoFeedView *feedView = [self setupFeedViewWithFrame:CGRectMake(0, 0, self.video1.frame.size.width, self.video1.frame.size.height)];
        feedView.tag = i+1;
        switch (i) {
            case 0:
                [self.video1 addSubview:feedView];
                break;
            case 1:
                [self.video2 addSubview:feedView];
                break;
            case 2:
                [self.video3 addSubview:feedView];
                break;
            default:
                break;
        }
        [self.feedViews addObject:feedView];
    }
}
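Here setupFeedViewWithFrame: and VideoFeedView come from the Multiple-Camera-Feeds project; judging by the bindDrawable and display calls below, VideoFeedView is a GLKView subclass that Core Image draws into through an EAGL context, with viewBounds as its drawable bounds.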

Then I applied the filters in the AVCaptureVideoDataOutputSampleBufferDelegate callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    _currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];

    CGRect sourceExtent = sourceImage.extent;

    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;


    for (VideoFeedView *feedView in self.feedViews) {
        CGFloat previewAspect = feedView.viewBounds.size.width  / feedView.viewBounds.size.height;
        // we want to maintain the aspect ratio of the screen size, so we clip the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }
        [feedView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // This is necessary for non-power-of-two textures
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        if (feedView.tag == 1) {
            if (sourceImage) {
                [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        } else if (feedView.tag == 2) {
            sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeScale(1, -1)];
            sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeTranslation(0, sourceExtent.size.height)];
            if (sourceImage) {
                [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        } else {
            CIFilter *effectFilter = [CIFilter filterWithName:@"CIColorInvert"];
            [effectFilter setValue:sourceImage forKey:kCIInputImageKey];
            CIImage *invertImage = [effectFilter outputImage];
            if (invertImage) {
                [_ciContext drawImage:invertImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        }
        [feedView display];
    }
}
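The two per-view effects reduce to two Core Image operations: a vertical flip expressed as affine transforms, and the built-in CIColorInvert filter. For reference, a minimal Swift sketch of the same two steps (the helper names flipped and inverted are mine), assuming a CIImage input:

    import CoreImage

    // Flip a CIImage vertically: mirror across the x-axis, then shift back up.
    func flipped(_ image: CIImage) -> CIImage {
        return image
            .applying(CGAffineTransform(scaleX: 1, y: -1))
            .applying(CGAffineTransform(translationX: 0, y: image.extent.height))
    }

    // Invert colors with the built-in CIColorInvert filter.
    func inverted(_ image: CIImage) -> CIImage? {
        guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
        filter.setValue(image, forKey: kCIInputImageKey)
        return filter.outputImage
    }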

That's it. It meets my requirement successfully.
