Metal Framework on macOS

I am creating a simple texture display that essentially renders video frames (in BGRA format) to the screen through Metal. I follow the same steps as shown in the Metal WWDC session, but I have problems creating the render encoder. My code is:

id <MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLCommandQueue> commandQueue = [device newCommandQueue];

id<MTLLibrary> library = [device newDefaultLibrary];

// Create the render pipeline descriptor.
MTLRenderPipelineDescriptor* renderPipelineDesc = [MTLRenderPipelineDescriptor new];
renderPipelineDesc.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
renderPipelineDesc.vertexFunction = [library newFunctionWithName:@"basic_vertex"];
renderPipelineDesc.fragmentFunction = [library newFunctionWithName:@"basic_fragment"];

NSError* error = nil;
id<MTLRenderPipelineState> renderPipelineState = [device newRenderPipelineStateWithDescriptor:renderPipelineDesc
                                                               error:&error];

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

MTLRenderPassDescriptor* renderPassDesc = [MTLRenderPassDescriptor renderPassDescriptor];

id<CAMetalDrawable> drawable = [_metalLayer nextDrawable];

MTLRenderPassColorAttachmentDescriptor* colorAttachmentDesc = [MTLRenderPassColorAttachmentDescriptor new];
colorAttachmentDesc.texture = drawable.texture;
colorAttachmentDesc.loadAction = MTLLoadActionLoad;
colorAttachmentDesc.storeAction = MTLStoreActionStore;
colorAttachmentDesc.clearColor = MTLClearColorMake(0, 0, 0, 1);

[renderPassDesc.colorAttachments setObject:colorAttachmentDesc atIndexedSubscript:0];

[inTexture replaceRegion:region
         mipmapLevel:0
           withBytes:imageBytes
         bytesPerRow:CVPixelBufferGetBytesPerRow(_image)];

id<MTLRenderCommandEncoder> renderCmdEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDesc];

[renderCmdEncoder setRenderPipelineState:renderPipelineState];
[renderCmdEncoder endEncoding];

This code crashes with the error "No Render Targets Found" on the line

id<MTLRenderCommandEncoder> renderCmdEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDesc];

I am not able to figure out where and how to set the render target.

This will work perfectly; if you need help implementing it, let me know:

@import UIKit;
@import AVFoundation;
@import CoreMedia;
#import <MetalKit/MetalKit.h>
#import <Metal/Metal.h>
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

@interface ViewController : UIViewController <MTKViewDelegate, AVCaptureVideoDataOutputSampleBufferDelegate> {
    NSString *_displayName;
    NSString *serviceType;
}

@property (retain, nonatomic) AVCaptureSession *avSession;

@end

#import "ViewController.h"

@interface ViewController () {
    MTKView *_metalView;

    id<MTLDevice> _device;
    id<MTLCommandQueue> _commandQueue;
    id<MTLTexture> _texture;

    CVMetalTextureCacheRef _textureCache;
}

@property (strong, nonatomic) AVCaptureDevice *videoDevice;
@property (nonatomic) dispatch_queue_t sessionQueue;

@end

@implementation ViewController

- (void)viewDidLoad {
    NSLog(@"%s", __PRETTY_FUNCTION__);
    [super viewDidLoad];

    _device = MTLCreateSystemDefaultDevice();
    // Create the command queue once here, rather than once per frame.
    _commandQueue = [_device newCommandQueue];
    _metalView = [[MTKView alloc] initWithFrame:self.view.bounds];
    [_metalView setContentMode:UIViewContentModeScaleAspectFit];
    _metalView.device = _device;
    _metalView.delegate = self;
    _metalView.clearColor = MTLClearColorMake(1, 1, 1, 1);
    _metalView.colorPixelFormat = MTLPixelFormatBGRA8Unorm;
    _metalView.framebufferOnly = NO;
    _metalView.autoResizeDrawable = NO;

    CVMetalTextureCacheCreate(NULL, NULL, _device, NULL, &_textureCache);

    [self.view addSubview:_metalView];

    self.sessionQueue = dispatch_queue_create( "session queue", DISPATCH_QUEUE_SERIAL );

    if ([self setupCamera]) {
        [_avSession startRunning];
    }
}

- (BOOL)setupCamera {
    NSLog(@"%s", __PRETTY_FUNCTION__);
    @try {
        NSError *error;

        _avSession = [[AVCaptureSession alloc] init];
        [_avSession beginConfiguration];
        [_avSession setSessionPreset:AVCaptureSessionPreset640x480];

        // Get the default video capture device.
        self.videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (self.videoDevice == nil) return FALSE;

        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:self.videoDevice error:&error];
        [_avSession addInput:input];

        dispatch_queue_t sampleBufferQueue = dispatch_queue_create("CameraMulticaster", DISPATCH_QUEUE_SERIAL);

        AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
        [dataOutput setAlwaysDiscardsLateVideoFrames:YES];
        [dataOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];
        [dataOutput setSampleBufferDelegate:self queue:sampleBufferQueue];

        [_avSession addOutput:dataOutput];
        [_avSession commitConfiguration];
        return TRUE;
    } @catch (NSException *exception) {
        NSLog(@"%s - %@", __PRETTY_FUNCTION__, exception.description);
        return FALSE;
    }
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    // Wrap the BGRA pixel buffer in a Metal texture via the texture cache.
    CVMetalTextureRef texture = NULL;
    CVReturn status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, pixelBuffer, NULL, MTLPixelFormatBGRA8Unorm, width, height, 0, &texture);
    if (status == kCVReturnSuccess)
    {
        _metalView.drawableSize = CGSizeMake(width, height);
        _texture = CVMetalTextureGetTexture(texture);
        CFRelease(texture);
    }
}

- (void)drawInMTKView:(MTKView *)view {
    // encode the blur filter into the current drawable and present it
    if (_texture) {
        id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
        id<MTLTexture> drawingTexture = view.currentDrawable.texture;

        // set up and encode the filter
        MPSImageGaussianBlur *filter = [[MPSImageGaussianBlur alloc] initWithDevice:_device sigma:5];

        [filter encodeToCommandBuffer:commandBuffer sourceTexture:_texture destinationTexture:drawingTexture];

        // committing the drawing
        [commandBuffer presentDrawable:view.currentDrawable];
        [commandBuffer commit];
        _texture = nil;
    }
}

- (void)mtkView:(MTKView *)view drawableSizeWillChange:(CGSize)size {

}

@end

You should try one of the following approaches:

1. Instead of creating a new render pass descriptor, use the current render pass descriptor object from the MTKView. That descriptor is already configured for the view's drawable, so you do not need to set anything on it. Try the sample code given below:

if let currentPassDesc = view.currentRenderPassDescriptor,
   let currentDrawable = view.currentDrawable,
   let renderCommandEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: currentPassDesc) {

    renderCommandEncoder.setRenderPipelineState(renderPipeline)

    // set vertex buffers and call draw APIs
    // ...

    renderCommandEncoder.endEncoding()
    commandBuffer.present(currentDrawable)
}

2. You are creating a new render pass descriptor and then setting its color attachment to the drawable's texture. Instead, you can create a new texture and set its usage to render target. The pass will then render into your new texture, but that content will not be displayed on screen; to display it, you have to copy the contents of your texture into the drawable's texture and then present the drawable (see the blit sketch after the code below).

Below is the code for setting up the render target:

renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store

renderPassDescriptor.depthAttachment.clearDepth = 1.0
renderPassDescriptor.depthAttachment.loadAction = .clear
renderPassDescriptor.depthAttachment.storeAction = .dontCare

let view = self.view as! MTKView
let textDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: Int(view.frame.width),
                                                        height: Int(view.frame.height),
                                                        mipmapped: false)
textDesc.depth = 1
// The usage must include .renderTarget so this texture can be used as a color attachment.
textDesc.usage = [MTLTextureUsage.renderTarget, MTLTextureUsage.shaderRead]
textDesc.storageMode = .private
mainPassFrameBuffer = device.makeTexture(descriptor: textDesc)
renderPassDescriptor.colorAttachments[0].texture = mainPassFrameBuffer

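For the copy-to-drawable step described in point 2, a blit command encoder can be used. Below is a minimal sketch, assuming the mainPassFrameBuffer texture and commandBuffer from the snippet above, an MTKView named view with framebufferOnly set to false, and an offscreen texture sized to match the drawable; the names are illustrative rather than taken from the original code:

if let currentDrawable = view.currentDrawable,
   let blitEncoder = commandBuffer.makeBlitCommandEncoder() {
    // Copy the offscreen render target into the drawable's texture.
    // Both textures must have matching dimensions for a full-size copy, and the
    // view's framebufferOnly must be false for the drawable to be a blit destination.
    blitEncoder.copy(from: mainPassFrameBuffer,
                     sourceSlice: 0,
                     sourceLevel: 0,
                     sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                     sourceSize: MTLSize(width: mainPassFrameBuffer.width,
                                         height: mainPassFrameBuffer.height,
                                         depth: 1),
                     to: currentDrawable.texture,
                     destinationSlice: 0,
                     destinationLevel: 0,
                     destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blitEncoder.endEncoding()
    commandBuffer.present(currentDrawable)
    commandBuffer.commit()
}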