How to set AVCaptureVideoDataOutput capture frame size?
I have a layerRect in which I display the camera image, like this:
CGRect layerRect = [[videoStreamView layer] bounds];
[[[self captureManager] previewLayer] setBounds:layerRect];
[[[self captureManager] previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect),
CGRectGetMidY(layerRect))];
[[videoStreamView layer] addSublayer:[[self captureManager] previewLayer]];
videoStreamView is the view in which I display the video, and it is 150x150. However, the video frames I receive from AVCaptureVideoDataOutput via setSampleBufferDelegate are the full camera resolution (1280x720). How can I change this? Thanks.
I believe this is controlled by the sessionPreset property of AVCaptureSession. I am still trying to figure out what each preset value means in terms of image size, etc.
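For example, a minimal sketch of selecting a smaller fixed-size preset (the exact preset to use, and whether the device supports it, are assumptions; check with canSetSessionPreset: first):

AVCaptureSession *session = [[self captureManager] session];
// AVCaptureSessionPreset352x288 produces small frames; availability varies by device.
if ([session canSetSessionPreset:AVCaptureSessionPreset352x288]) {
    session.sessionPreset = AVCaptureSessionPreset352x288;
}

Note that the preset controls the capture resolution delivered to AVCaptureVideoDataOutput; the preview layer's bounds only control how the image is displayed, not the size of the sample buffers.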
Perhaps this solves it?
CGRect bounds = view.layer.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.bounds = bounds;
previewLayer.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
Try this code:
AVCaptureVideoPreviewLayer *newCaptureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:[self.captureManager session]];
newCaptureVideoPreviewLayer.frame = bounds;//CGRectMake(bounds.origin.x, bounds.origin.y, bounds.size.height, bounds.size.width);
@property (nonatomic, retain) AVCaptureVideoPreviewLayer *prevLayer;
Then:
self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession: self.captureSession];
self.prevLayer.frame = yourRect;
[self.view.layer addSublayer: self.prevLayer];