I am using MediaPipe to develop an iOS application. I need to pass image data into MediaPipe, but MediaPipe only accepts a 32BGRA CVPixelBuffer. ...
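A common way to produce the 32BGRA CVPixelBuffer that MediaPipe expects is to render a CGImage into a buffer-backed CGContext. This is a minimal sketch; the helper name is hypothetical, but the Core Video and Core Graphics calls are standard API:

```swift
import UIKit
import CoreVideo

// Hypothetical helper: renders a CGImage into a newly allocated
// 32BGRA CVPixelBuffer (the format MediaPipe accepts).
func makeBGRAPixelBuffer(from cgImage: CGImage) -> CVPixelBuffer? {
    let width = cgImage.width
    let height = cgImage.height
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32BGRA,
                                     attrs as CFDictionary, &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    // 32BGRA in Core Graphics terms: little-endian, premultiplied-first alpha.
    guard let context = CGContext(
        data: CVPixelBufferGetBaseAddress(buffer),
        width: width, height: height, bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
            | CGBitmapInfo.byteOrder32Little.rawValue
    ) else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return buffer
}
```

Note the bytesPerRow must come from the buffer itself, since Core Video may pad rows beyond `width * 4`.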
I'm wondering if it's possible to achieve better performance when converting a UIView into a CVPixelBuffer. My app converts a sequence of UIViews fir ...
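One way to speed this up is to skip the UIImage/CGImage intermediate entirely and render the view's layer straight into a CVPixelBuffer-backed CGContext, drawing buffers from a pool. A hedged sketch (the function name and pool ownership are assumptions):

```swift
import UIKit
import CoreVideo

// Hypothetical sketch: render a UIView's layer directly into a pooled
// 32BGRA CVPixelBuffer, avoiding a UIGraphicsImageRenderer round trip.
func render(view: UIView, using pool: CVPixelBufferPool) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    guard CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
            == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    guard let context = CGContext(
        data: CVPixelBufferGetBaseAddress(buffer),
        width: CVPixelBufferGetWidth(buffer),
        height: CVPixelBufferGetHeight(buffer),
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
            | CGBitmapInfo.byteOrder32Little.rawValue
    ) else { return nil }

    // CGContext's origin is bottom-left; flip to match UIKit coordinates.
    context.translateBy(x: 0, y: CGFloat(CVPixelBufferGetHeight(buffer)))
    context.scaleBy(x: 1, y: -1)
    view.layer.render(in: context)
    return buffer
}
```

Using a CVPixelBufferPool sized to the view avoids per-frame allocations, which is usually the dominant cost for a long sequence of frames.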
I have a stream of video in IYUV (4:2:0) format and am trying to convert it into a CVPixelBufferRef, then into a CMSampleBufferRef, and play it in AVSa ...
I have an image that should be preprocessed before passing to CoreML: resized to a 640x640 square while preserving the aspect ratio (check the image). I foun ...
I'm trying to get 5 main/dominant colors from a CVPixelBuffer, and I need it to be as quick and efficient as possible. I've tried Pixelating with CIF ...
Is there a standard performant way to edit/draw on a CVImageBuffer/CVPixelBuffer in swift? All the video editing demos I've found online overlay the ...
Right now I'm working with the depth camera on iOS, since I want to measure the distance from the camera to certain points in the frame. I did all the necessary set ...
I'm using the method below to add drawings to a pixel buffer, then append it to an AVAssetWriterInputPixelBufferAdaptor. It works on my Mac mini (mac ...
My objective is to extract a 300x300 pixel frame from a CVImageBuffer (camera stream) and convert it into a UInt byte array. Technically the array size ...
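Extracting raw bytes from a region of a BGRA CVImageBuffer comes down to locking the buffer and copying row by row, honoring `bytesPerRow` (which may include padding past `width * 4`). A minimal sketch, assuming a 32BGRA buffer and a crop rect that lies fully inside it:

```swift
import CoreVideo

// Hypothetical helper: copy a rectangular region of a 32BGRA CVImageBuffer
// into a flat [UInt8] array (4 bytes per pixel, row-major).
func extractBytes(from buffer: CVImageBuffer,
                  x: Int, y: Int, width: Int, height: Int) -> [UInt8]? {
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    let bytesPerPixel = 4 // 32BGRA

    var out = [UInt8]()
    out.reserveCapacity(width * height * bytesPerPixel)
    let src = base.assumingMemoryBound(to: UInt8.self)
    for row in y ..< y + height {
        let rowStart = row * bytesPerRow + x * bytesPerPixel
        out.append(contentsOf: UnsafeBufferPointer(start: src + rowStart,
                                                   count: width * bytesPerPixel))
    }
    return out
}
```

For a 300x300 crop this yields 300 * 300 * 4 = 360,000 bytes; copying per row rather than per pixel keeps the loop cheap.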
I want to use Vision 2D Hand Tracking input coupled with ARKit > People Occlusion > Body Segmentation With Depth, which leverage LiDAR, to get 3 ...
EDIT: I worked around this issue but to rephrase my question - is it possible to create the CVPixelBuffer to match the CGImage? For example I would li ...
As the title says, I am having some trouble with AVAssetWriter and memory. Some notes about my environment/requirements: I am NOT using ARC, but ...
I am using ARKit 4 (+MetalKit) with the new iPhone device, and I am trying to access the depth data (from the LiDAR) and save it as the depth map alon ...
I'm trying to save the camera image from ARFrame to file. The image is given as a CVPixelBuffer. I have the following code, which produces an image wi ...
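Saving an ARFrame's capturedImage usually means converting its CVPixelBuffer (typically 420f YCbCr) through Core Image, which handles the pixel-format conversion for you. A hedged sketch; the function name is an assumption, the API calls are standard:

```swift
import CoreImage
import UIKit

// Hypothetical helper: write a camera CVPixelBuffer to disk as PNG
// by round-tripping through Core Image.
func save(pixelBuffer: CVPixelBuffer, to url: URL) throws {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        throw NSError(domain: "PixelBufferSave", code: -1)
    }
    // Note: the camera image is sensor-oriented (landscape); rotate the
    // UIImage if you need it upright for the current interface orientation.
    guard let data = UIImage(cgImage: cgImage).pngData() else {
        throw NSError(domain: "PixelBufferSave", code: -2)
    }
    try data.write(to: url)
}
```

A wrong-looking result here is often an orientation or color-space issue rather than a conversion failure, so checking the CIImage extent and applied orientation first is worthwhile.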
I am using MPSImageLanczosScale to scale an image texture (initialized from a CVPixelBufferRef) using the Metal framework. The issue is that MPSImageLanczosScale is o ...
I'm getting the depth data from the TrueDepth camera, and converting it to a grayscale image. (I realize I could pass the AVDepthData to a CIImage con ...
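Doing the grayscale conversion manually (instead of handing the AVDepthData to Core Image) means walking the Float32 depth map and normalizing it. A sketch under the assumption that NaN/infinite samples should map to 0:

```swift
import AVFoundation
import CoreVideo

// Hypothetical helper: convert AVDepthData to DepthFloat32, then return
// per-pixel values normalized to 0...1 for use as grayscale intensities.
func normalizedDepthValues(from depthData: AVDepthData) -> [Float] {
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let buffer = converted.depthDataMap
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    let rowStride = CVPixelBufferGetBytesPerRow(buffer) / MemoryLayout<Float32>.stride
    let base = CVPixelBufferGetBaseAddress(buffer)!.assumingMemoryBound(to: Float32.self)

    var values = [Float]()
    values.reserveCapacity(width * height)
    var minV = Float.greatestFiniteMagnitude
    var maxV = -Float.greatestFiniteMagnitude
    for y in 0 ..< height {
        for x in 0 ..< width {
            let v = base[y * rowStride + x]
            values.append(v)
            if v.isFinite { minV = min(minV, v); maxV = max(maxV, v) }
        }
    }
    let range = max(maxV - minV, .leastNormalMagnitude)
    return values.map { $0.isFinite ? ($0 - minV) / range : 0 }
}
```

Note the row stride is derived from `bytesPerRow`, not `width`, since the depth map's rows may be padded.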
I have an IOSurface-backed CVPixelBuffer that is getting updated from an outside source at 30fps. I want to render a preview of the image data in an N ...
I am trying to use VNDetectRectangleRequest from Apple's Vision framework to automatically grab a picture of a card. However when I convert the points ...
I have an AVCaptureVideoDataOutput producing CMSampleBuffer instances passed into my AVCaptureVideoDataOutputSampleBufferDelegate function. I want to e ...
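Getting the pixel data out of each delivered CMSampleBuffer is a one-liner via `CMSampleBufferGetImageBuffer`. A minimal delegate sketch (class and method names beyond the delegate requirement are hypothetical):

```swift
import AVFoundation

// Hypothetical delegate sketch: pull the CVPixelBuffer out of each
// CMSampleBuffer delivered by AVCaptureVideoDataOutput.
final class FrameHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // The buffer is recycled by the capture pipeline; retain it
        // explicitly before handing it to asynchronous work.
        process(pixelBuffer)
    }

    private func process(_ buffer: CVPixelBuffer) {
        // ... downstream work (Vision, Metal, encoding, etc.)
    }
}
```

Holding too many sample buffers without releasing them will stall the capture session, so any async processing should copy or promptly release them.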
I have a tflite model and I want to run the model on the image captured by an ARKit session. It reports that the source pixel format is invalid. I was able to run tflit ...