
Image capture is distorted when using it for a CALayer

I'm developing a photo-taking app. With this code, the app's preview layer is set to take up exactly half of the screen:

[_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];

This looks perfect, and there is no distortion at all while the user is looking at the camera "preview" / what they see while taking the picture.

However, once they actually take a photo, I create a sublayer, set its frame property to the preview layer's frame, and set the photo as the sublayer's contents.

Technically this works. Once the user takes a photo, it shows up on the top half of the screen the way it should.

The only problem is that the photo is distorted.

It looks stretched out, as if I had been taking a landscape photo.

Any help is hugely appreciated - I'm pretty desperate over this and haven't been able to fix it all day.

Here is all of my view controller's code:

#import "MediaCaptureVC.h"

@interface MediaCaptureVC ()

@end

@implementation MediaCaptureVC

- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [session setSessionPreset:AVCaptureSessionPresetPhoto];

    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Pass a nil NSError by reference; allocating one up front is unnecessary.
    NSError *error = nil;
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];

    if ([session canAddInput:deviceInput])
        [session addInput:deviceInput];

    // The preview layer fills the top half of the screen, cropping to fill (aspect-fill).
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

    CALayer *rootLayer = [[self view] layer];
    [rootLayer setMasksToBounds:YES];

    [_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2)];
    [rootLayer insertSublayer:_previewLayer atIndex:0];

    _stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    [session addOutput:_stillImageOutput];

    [session startRunning];
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}


- (UIImage *)rotate:(UIImage *)src andOrientation:(UIImageOrientation)orientation
{
    UIGraphicsBeginImageContext(src.size);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Note: the original code used 90/180*M_PI, which is integer division and
    // evaluates to 0 - i.e. no rotation at all. Use M_PI_2 (90° in radians).
    if (orientation == UIImageOrientationRight) {
        CGContextRotateCTM(context, M_PI_2);
    } else if (orientation == UIImageOrientationLeft) {
        CGContextRotateCTM(context, -M_PI_2);
    } else if (orientation == UIImageOrientationDown) {
        // NOTHING
    } else if (orientation == UIImageOrientationUp) {
        CGContextRotateCTM(context, M_PI_2);
    }

    [src drawAtPoint:CGPointMake(0, 0)];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}



- (IBAction)stillImageCapture
{
    // Find the video connection on the still image output.
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in _stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }

    NSLog(@"about to request a capture from: %@", _stillImageOutput);

    [_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {

        if (imageDataSampleBuffer) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
            UIImage *image = [[UIImage alloc] initWithData:imageData];

            image = [self rotate:image andOrientation:image.imageOrientation];

            // Show the captured photo in a sublayer over the preview area.
            CALayer *subLayer = [CALayer layer];
            CGImageRef imageRef = image.CGImage;
            subLayer.contents = (id)[UIImage imageWithCGImage:imageRef].CGImage;
            subLayer.frame = _previewLayer.frame;

            CALayer *rootLayer = [[self view] layer];
            [rootLayer setMasksToBounds:YES];

            [subLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2)];
            [_previewLayer addSublayer:subLayer];

            NSLog(@"%@", subLayer.contents);
            // imageOrientation is an NSInteger-backed enum, so cast for the format specifier.
            NSLog(@"Orientation: %ld", (long)image.imageOrientation);
        }
    }];
}

@end

Hi, I hope this is of help to you -

The code seems more complex than it should be, because most of it is done at the CALayer level rather than the imageView/view level. That said, I think the problem is that the proportions of the frame differ between the original capture and the mini viewport, and that is what distorts the UIImage in this statement:

  [subLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height/2)];
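For comparison: the preview itself does not look distorted because AVLayerVideoGravityResizeAspectFill crops the video to fill its frame, while a plain CALayer's contentsGravity defaults to kCAGravityResize, which stretches the image to whatever frame it is given. A minimal sketch of a quick fix along those lines, assuming it is acceptable to crop the photo the same way the preview does:

    // Sketch (an assumption, not part of the original answer): make the photo
    // sublayer crop like the preview layer instead of stretching its contents.
    subLayer.contentsGravity = kCAGravityResizeAspectFill;
    subLayer.masksToBounds = YES;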

What needs to be done is to capture the proportions of sublayer.frame and work out the best size that fits within the root layer, or the image view associated with it.

I have some code from before that does this: a subroutine written to work out the proportions (note that you will need to adjust the frame's origin to get exactly what you want!):

    ... CGRect newbounds = [self figure_proportion:image to_fit_rect:rootLayer.frame];
    if (newbounds.size.height < rootLayer.frame.size.height) {
        rootLayer ..... (code to adjust the origin of the image view's frame)

- (CGRect)figure_proportion:(UIImage *)image2 to_fit_rect:(CGRect)rect
{
    CGSize image_size = image2.size;
    CGRect newrect = rect;
    float wfactor = image_size.width / image_size.height;
    float hfactor = image_size.height / image_size.width;

    // Scale the image's longer side to the target rect, preserving aspect ratio.
    if (image2.size.width > image2.size.height) {
        newrect.size.width = rect.size.width;
        newrect.size.height = rect.size.width * hfactor;
    }
    else if (image2.size.height > image2.size.width) {
        newrect.size.height = rect.size.height;
        newrect.size.width = rect.size.height * wfactor;
    }
    else {
        newrect.size.width = rect.size.width;
        newrect.size.height = newrect.size.width;
    }

    // Clamp back down if the scaled size overflows the target rect.
    if (newrect.size.height > rect.size.height) {
        newrect.size.height = rect.size.height;
        newrect.size.width = newrect.size.height * wfactor;
    }
    if (newrect.size.width > rect.size.width) {
        newrect.size.width = rect.size.width;
        newrect.size.height = newrect.size.width * hfactor;
    }

    return newrect;
}
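A hedged sketch of how this helper might be wired into the capture completion handler above, using the names from the question (the horizontal centering is one way of doing the origin adjustment mentioned earlier, not something the helper does for you):

    // Hypothetical usage inside the captureStillImageAsynchronouslyFromConnection:
    // completion handler, replacing the hard-coded half-screen frame:
    CGRect fitted = [self figure_proportion:image to_fit_rect:_previewLayer.bounds];
    // Center the fitted rect horizontally in the preview area (the origin
    // adjustment left to the reader above):
    fitted.origin.x = (_previewLayer.bounds.size.width - fitted.size.width) / 2.0;
    subLayer.frame = fitted;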
