Convert CMSampleBuffer to UIImage
I am trying to convert a sampleBuffer to a UIImage and display it in an image view using a grayscale color space. But it shows up like the image below. I think there is a problem with the conversion. How do I convert the CMSampleBuffer correctly?
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    print("buffered")
    let imageBuffer: CVImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let width: Int = CVPixelBufferGetWidth(imageBuffer)
    let height: Int = CVPixelBufferGetHeight(imageBuffer)
    let bytesPerRow: Int = CVPixelBufferGetBytesPerRow(imageBuffer)
    let lumaBuffer = CVPixelBufferGetBaseAddress(imageBuffer)
    //let planeCount: Int = CVPixelBufferGetPlaneCount(imageBuffer)
    let grayColorSpace: CGColorSpace = CGColorSpaceCreateDeviceGray()
    let context: CGContext = CGContext(data: lumaBuffer, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: grayColorSpace, bitmapInfo: CGImageAlphaInfo.none.rawValue)!
    let dstImageFilter: CGImage = context.makeImage()!
    let imageRect: CGRect = CGRect(x: 0, y: 0, width: width, height: height)
    context.draw(dstImageFilter, in: imageRect)
    let image = UIImage(cgImage: dstImageFilter)
    DispatchQueue.main.sync(execute: { () -> Void in
        self.imageTest.image = image
    })
}
The conversion is simple:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    let ciimage = CIImage(cvPixelBuffer: imageBuffer)
    let image = self.convert(cmage: ciimage)
}

// Convert CIImage to UIImage
func convert(cmage: CIImage) -> UIImage {
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(cmage, from: cmage.extent)!
    let image = UIImage(cgImage: cgImage)
    return image
}
It looks like the CMSampleBuffer is giving you RGBA data, and you are building a grayscale image directly from it. You either need to build a new buffer in which, for each pixel, you compute something like gray = (pixel.red + pixel.green + pixel.blue) / 3, or you need to create a normal RGBA image from the data you received and then convert it to grayscale.
But in your code there is no such transition at all. You take a raw pointer to the buffer with CVPixelBufferGetBaseAddress, regardless of what kind of data is actually in it, and then you pass that same pointer when creating the image, assuming the data you received is grayscale.
The solution above can now be improved further using a newer convenience initializer on UIImage. Below I outline a modern solution that includes image orientation correction. This solution avoids the CGImage conversion, which improves runtime performance.
func orientation() -> UIImage.Orientation {
    let curDeviceOrientation = UIDevice.current.orientation
    var exifOrientation: UIImage.Orientation
    switch curDeviceOrientation {
    case UIDeviceOrientation.portraitUpsideDown: // Device oriented vertically, Home button on the top
        exifOrientation = .left
    case UIDeviceOrientation.landscapeLeft: // Device oriented horizontally, Home button on the right
        exifOrientation = .upMirrored
    case UIDeviceOrientation.landscapeRight: // Device oriented horizontally, Home button on the left
        exifOrientation = .down
    case UIDeviceOrientation.portrait: // Device oriented vertically, Home button on the bottom
        exifOrientation = .up
    default:
        exifOrientation = .up
    }
    return exifOrientation
}
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    guard let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let ciimage: CIImage = CIImage(cvPixelBuffer: imageBuffer)
    let image = UIImage(ciImage: ciimage, scale: 1.0, orientation: orientation())
}