Using MPSImageConvolution kernel with Metal compute shaders

I am using the MetalVideoCapture example located at https://github.com/FlexMonkey/MetalVideoCapture . The only thing I altered in my version was using MPSImageConvolution (instead of MPSImageGaussianBlur) with the kernel values:

    [-2.0, -1.0, 0.0,
     -1.0,  1.0, 1.0,
      0.0,  1.0, 2.0]
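
Roughly, I create the filter like this (a sketch using the Swift 2-era API the sample project uses; `device` is the project's MTLDevice):

    import MetalPerformanceShaders

    // 3x3 emboss-style kernel, passed flat in row-major order as the docs describe.
    let weights: [Float] = [-2.0, -1.0, 0.0,
                            -1.0,  1.0, 1.0,
                             0.0,  1.0, 2.0]

    let convolution = MPSImageConvolution(device: device!,
        kernelWidth: 3,
        kernelHeight: 3,
        weights: weights)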

Using the values above failed to alter the output in any visible way. But an edge-enhance kernel, e.g.

    [0.0, -1.0, 0.0,
     0.0,  1.0, 0.0,
     0.0,  0.0, 0.0]

works, though only in column-major order; it does not work in row-major order, even though that's what MPSImageConvolution expects. I'm really stumped by this. I don't know of any obvious reason why a convolution kernel could not work in a compute pipeline (as opposed to a render pipeline), but I couldn't find any information on this online.
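
For reference, the encode in my version looks roughly like this (`convolution` being the MPSImageConvolution instance created above):

    // In-place encode, as in the sample project: source and destination
    // are the same drawable texture, with no fallback allocator supplied.
    let inPlaceTexture = UnsafeMutablePointer<MTLTexture?>.alloc(1)
    inPlaceTexture.initialize(drawable.texture)

    convolution.encodeToCommandBuffer(commandBuffer,
        inPlaceTexture: inPlaceTexture,
        fallbackCopyAllocator: nil)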

I also modified the codebase to apply the kernel to a static image instead of a live video feed; however, this yielded the same results.

I also want to point out that I posted the same question on the example project's issue tracker ( https://github.com/FlexMonkey/MetalVideoCapture/issues/1#issuecomment-217609500 ). The author of the example was equally stumped, which led me to believe it is either some sort of bug, or a gap in my conceptual knowledge of why this isn't even supposed to work.

I do have a workaround: avoid using an in-place texture. Try this: create a separate destination texture:

    // Describe a texture matching the drawable's pixel format and dimensions...
    let descriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(
        drawable.texture.pixelFormat,
        width: drawable.texture.width,
        height: drawable.texture.height,
        mipmapped: false)

    // ...and create it to act as the intermediate target.
    let destination: MTLTexture = device!.newTextureWithDescriptor(descriptor)

Have the YCbCrColorConversion shader target the destination:

    commandEncoder.setTexture(destination, atIndex: 2) // out texture

...and then use the alternative encodeToCommandBuffer that takes separate source and destination textures:

    // Called on the MPSImageConvolution instance:
    encodeToCommandBuffer(commandBuffer, sourceTexture: destination, destinationTexture: drawable.texture)

This stuff can be removed:

    // let inPlaceTexture = UnsafeMutablePointer<MTLTexture?>.alloc(1)
    // inPlaceTexture.initialize(drawable.texture)

Simon

With thanks to Warren!

Generally speaking, no convolution filter will work in place unless it is implemented as a multi-pass filter. Since the filter reads adjacent pixels, writing out a result to one pixel would change the inputs for the pixels next to it, causing an error. Small convolutions like this one are generally implemented as single-pass filters in MPS. You should be using the in-place MPS encode methods, which allow the framework to swap out the destination texture as needed. Another way to save memory would be to make use of MPSTemporaryImage (iOS 10 and later).
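
For example, a minimal sketch of the in-place variant with a fallback allocator (Swift 2-era names to match the code above; `convolution` is the MPSImageConvolution instance):

    // Supply a fallback allocator so the framework can swap in a fresh
    // destination texture when the kernel cannot actually run in place.
    let copyAllocator: MPSCopyAllocator = { kernel, commandBuffer, sourceTexture in
        let descriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(
            sourceTexture.pixelFormat,
            width: sourceTexture.width,
            height: sourceTexture.height,
            mipmapped: false)
        return commandBuffer.device.newTextureWithDescriptor(descriptor)
    }

    let inPlaceTexture = UnsafeMutablePointer<MTLTexture?>.alloc(1)
    inPlaceTexture.initialize(drawable.texture)

    // On return, inPlaceTexture may point at a different texture than the one
    // passed in; read the result from there. Returns false only if even the
    // fallback path failed.
    convolution.encodeToCommandBuffer(commandBuffer,
        inPlaceTexture: inPlaceTexture,
        fallbackCopyAllocator: copyAllocator)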

MPSImageConvolution should be detecting the usage-mode failure and asserting. Make sure the Metal debug layer is turned on in Xcode. If there is still no assert, the failure to detect the problem is worth a bug report: http://bugreporter.apple.com
