
How to add black and white filter on ARKit (Swift 4)

All I want to do is take the basic ARKit view and turn it into a black and white view. Right now the basic view is just normal, and I have no idea how to add the filter. Ideally, the black and white filter would also be applied to screenshots taken of the view.

import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    @IBAction func changeTextColour(){
        let snapShot = self.sceneView.snapshot() // snapshot the ARSCNView outlet declared above
        UIImageWriteToSavedPhotosAlbum(snapShot, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
    }
}

If you want to apply the filter in real time, the best way to achieve that is to use SCNTechnique. Techniques are used for postprocessing and allow us to render the SCNView content in several passes – exactly what we need (first render the scene, then apply an effect to it).

Here's the example project.


Plist setup

First, we need to describe a technique in a .plist file.

Here's a screenshot of a plist that I've come up with (for better visualization):

[Screenshot: the plist describing the SCNTechnique]

And here's its source:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>sequence</key>
    <array>
        <string>apply_filter</string>
    </array>
    <key>passes</key>
    <dict>
        <key>apply_filter</key>
        <dict>
            <key>metalVertexShader</key>
            <string>scene_filter_vertex</string>
            <key>metalFragmentShader</key>
            <string>scene_filter_fragment</string>
            <key>draw</key>
            <string>DRAW_QUAD</string>
            <key>inputs</key>
            <dict>
                <key>scene</key>
                <string>COLOR</string>
            </dict>
            <key>outputs</key>
            <dict>
                <key>color</key>
                <string>COLOR</string>
            </dict>
        </dict>
    </dict>
</dict>
</plist>

The topic of SCNTechniques is quite broad and I will only quickly cover the things we need for the case at hand. To get a real understanding of what they are capable of, I recommend reading Apple's comprehensive documentation on techniques.

Technique description

passes is a dictionary containing descriptions of the passes that you want an SCNTechnique to perform.

sequence is an array that specifies, by their keys, the order in which these passes are going to be performed.

You do not specify the main render pass here (meaning whatever is rendered without applying SCNTechniques) – it is implied, and its resulting color can be accessed using the COLOR constant (more on that in a bit).

So the only "extra" pass (besides the main one) that we are going to perform will be apply_filter, which converts colors into black and white (it can be named whatever you want, just make sure it has the same key in passes and sequence).

Now to the description of the apply_filter pass itself.

Render pass description

metalVertexShader and metalFragmentShader – the names of the Metal shader functions that are going to be used for drawing.

draw defines what the pass is going to render. DRAW_QUAD stands for:

Render only a rectangle covering the entire bounds of the view. Use this option for drawing passes that process image buffers output by earlier passes.

which means, roughly speaking, that we are going to be rendering a plain "image" with our render pass.

inputs specifies the input resources that we will be able to use in shaders. As previously said, COLOR refers to the color data provided by the main render pass.

outputs specifies the outputs. These can be color, depth or stencil, but we only need a color output. The COLOR value means that, simply put, we are going to be rendering "directly" to the screen (as opposed to rendering into intermediate targets, for example).
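As an aside, SCNTechnique(dictionary:) accepts the same structure as the plist, so a technique can also be described in code. Here's a hedged sketch (not part of the original project) of what a two-pass technique rendering through an intermediate target could look like; the pass names, target name and blur-style fragment shaders are hypothetical:

// A sketch only: "halfway", "pass_one"/"pass_two" and the *_fragment
// shader names are made up for illustration.
let twoPassDescription: [String: Any] = [
    "targets": [
        "halfway": ["type": "color"]                 // intermediate colour target
    ],
    "passes": [
        "pass_one": [
            "draw": "DRAW_QUAD",
            "metalVertexShader": "scene_filter_vertex",
            "metalFragmentShader": "pass_one_fragment",  // assumed to exist
            "inputs": ["scene": "COLOR"],            // main render pass colour
            "outputs": ["color": "halfway"]          // write into the target
        ],
        "pass_two": [
            "draw": "DRAW_QUAD",
            "metalVertexShader": "scene_filter_vertex",
            "metalFragmentShader": "pass_two_fragment",  // assumed to exist
            "inputs": ["scene": "halfway"],          // read the target back
            "outputs": ["color": "COLOR"]            // write to the screen
        ]
    ],
    "sequence": ["pass_one", "pass_two"]
]
let twoPassTechnique = SCNTechnique(dictionary: twoPassDescription)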


Metal shader

Create a .metal file with the following contents:

#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct VertexInput {
    float4 position [[ attribute(SCNVertexSemanticPosition) ]];
    float2 texcoord [[ attribute(SCNVertexSemanticTexcoord0) ]];
};

struct VertexOut {
    float4 position [[position]];
    float2 texcoord;
};

// metalVertexShader
vertex VertexOut scene_filter_vertex(VertexInput in [[stage_in]])
{
    VertexOut out;
    out.position = in.position;
    out.texcoord = float2((in.position.x + 1.0) * 0.5 , (in.position.y + 1.0) * -0.5);
    return out;
}

// metalFragmentShader
fragment half4 scene_filter_fragment(VertexOut vert [[stage_in]],
                                    texture2d<half, access::sample> scene [[texture(0)]])
{
    constexpr sampler samp = sampler(coord::normalized, address::repeat, filter::nearest);
    constexpr half3 weights = half3(0.2126, 0.7152, 0.0722);

    half4 color = scene.sample(samp, vert.texcoord);
    color.rgb = half3(dot(color.rgb, weights));

    return color;
}

Notice that the function names for the fragment and vertex shaders must be the same names that are specified in the pass descriptor in the plist file.

To get a better understanding of what the VertexInput and VertexOut structures mean, refer to the SCNProgram documentation.

The given vertex function can be used in pretty much any DRAW_QUAD render pass. It basically gives us the normalized coordinates of the screen space (which are accessed via vert.texcoord in the fragment shader).

The fragment function is where all the "magic" happens. There, you can manipulate the texture that you've got from the main pass. Using this setup you can potentially implement a ton of filters/effects and more.

In our case, I used a basic desaturation (zero saturation) formula to get the black and white colors.
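For reference, those weights (0.2126, 0.7152, 0.0722) are the Rec. 709 luma coefficients; the fragment shader's dot product is just the weighted sum below (a minimal Swift illustration, not part of the project):

func luma(r: Float, g: Float, b: Float) -> Float {
    // Rec. 709 luma: the same value the shader computes with dot(color.rgb, weights)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
}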


Swift setup

Now we can finally use all of this with ARKit/SceneKit.

let plistName = "SceneFilterTechnique" // the name of the plist you've created

guard let url = Bundle.main.url(forResource: plistName, withExtension: "plist") else {
    fatalError("\(plistName).plist does not exist in the main bundle")
}

guard let dictionary = NSDictionary(contentsOf: url) as? [String: Any] else {
    fatalError("Failed to parse \(plistName).plist as a dictionary")
}

guard let technique = SCNTechnique(dictionary: dictionary) else {
    fatalError("Failed to initialize a technique using \(plistName).plist")
}

and just set it as the technique of the ARSCNView:

sceneView.technique = technique

That's it. Now the whole scene is going to be rendered in grayscale, including when taking snapshots.

[Screenshot: the AR scene rendered in grayscale]
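Since technique is an ordinary settable property, the effect can also be switched off again at runtime. A minimal sketch, assuming you keep a reference to the technique created above in a hypothetical grayscaleTechnique property:

var grayscaleTechnique: SCNTechnique? // set this to the technique loaded above

func setGrayscaleEnabled(_ enabled: Bool) {
    // Assigning nil removes the technique and restores normal rendering
    sceneView.technique = enabled ? grayscaleTechnique : nil
}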

Filter ARSCNView Snapshot: If you want to create a black and white screenshot of your ARSCNView, you can do something like this, which returns a UIImage in grayscale, and whereby augmentedRealityView refers to an ARSCNView:

/// Converts A UIImage To A High Contrast GrayScaleImage
///
/// - Returns: UIImage
func highContrastBlackAndWhiteFilter() -> UIImage?
{
    //1. Convert It To A CIImage
    guard let convertedImage = CIImage(image: self) else { return nil }

    //2. Set The Filter Parameters
    let filterParameters = [kCIInputBrightnessKey: 0.0,
                            kCIInputContrastKey:   1.1,
                            kCIInputSaturationKey: 0.0]

    //3. Apply The Basic Filter To The Image
    let imageToFilter = convertedImage.applyingFilter("CIColorControls", parameters: filterParameters)

    //4. Set The Exposure
    let exposure =  [kCIInputEVKey: NSNumber(value: 0.7)]

    //5. Process The Image With The Exposure Setting
    let processedImage = imageToFilter.applyingFilter("CIExposureAdjust", parameters: exposure)

    //6. Create A CG GrayScale Image
    guard let grayScaleImage = CIContext().createCGImage(processedImage, from: processedImage.extent) else { return nil }

    return UIImage(cgImage: grayScaleImage, scale: self.scale, orientation: self.imageOrientation)
}

An example of using this could therefore be like so:

 override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {

    //1. Create A UIImageView Dynamically
    let imageViewResult = UIImageView(frame: CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height))
    self.view.addSubview(imageViewResult)

    //2. Create The Snapshot & Get The Black & White Image
    guard let snapShotImage = self.augmentedRealityView.snapshot().highContrastBlackAndWhiteFilter() else { return }
    imageViewResult.image = snapShotImage

    //3. Remove The ImageView After A Delay Of 5 Seconds
    DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
        imageViewResult.removeFromSuperview()
    }

}

Which will yield a result something like this:

[Screenshot: the grayscale snapshot displayed over the AR view]

In order to make your code reusable, you could also create an extension of UIImage:

//------------------------
//MARK: UIImage Extensions
//------------------------

extension UIImage
{

    /// Converts A UIImage To A High Contrast GrayScaleImage
    ///
    /// - Returns: UIImage
    func highContrastBlackAndWhiteFilter() -> UIImage?
    {
        //1. Convert It To A CIImage
        guard let convertedImage = CIImage(image: self) else { return nil }

        //2. Set The Filter Parameters
        let filterParameters = [kCIInputBrightnessKey: 0.0,
                                kCIInputContrastKey:   1.1,
                                kCIInputSaturationKey: 0.0]

        //3. Apply The Basic Filter To The Image
        let imageToFilter = convertedImage.applyingFilter("CIColorControls", parameters: filterParameters)

        //4. Set The Exposure
        let exposure =  [kCIInputEVKey: NSNumber(value: 0.7)]

        //5. Process The Image With The Exposure Setting
        let processedImage = imageToFilter.applyingFilter("CIExposureAdjust", parameters: exposure)

        //6. Create A CG GrayScale Image
        guard let grayScaleImage = CIContext().createCGImage(processedImage, from: processedImage.extent) else { return nil }

        return UIImage(cgImage: grayScaleImage, scale: self.scale, orientation: self.imageOrientation)
    }

}

Which you can then use easily like so:

guard let snapShotImage = self.augmentedRealityView.snapshot().highContrastBlackAndWhiteFilter() else { return }

Remembering that you should place your extension outside of your class declaration, e.g.:

extension UIImage{

}

class ViewController: UIViewController, ARSCNViewDelegate {

}

So, based on the code provided in your question, you would have something like this:

/// Creates A Black & White ScreenShot & Saves It To The Photo Album
@IBAction func changeTextColour(){

    //1. Create A Snapshot
    guard let snapShotImage = self.augmentedRealityView.snapshot().highContrastBlackAndWhiteFilter() else { return }

    //2. Save It The Photos Album
    UIImageWriteToSavedPhotosAlbum(snapShotImage, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)

}

/// Callback To Check Whether The Image Has Been Saved
@objc func image(_ image: UIImage, didFinishSavingWithError error: Error?, contextInfo: UnsafeRawPointer) {

    if let error = error {
        print("Error Saving ARKit Scene \(error)")
    } else {
        print("ARKit Scene Successfully Saved")
    }
}

Live Rendering In Black & White: Using this brilliant answer by diviaki, I was also able to get the entire camera feed to render in black and white using the following methods:

First, register for the ARSessionDelegate like so:

 augmentedRealitySession.delegate = self

Second, in the following delegate callback, add the following:

 //-----------------------
 //MARK: ARSessionDelegate
 //-----------------------

 extension ViewController: ARSessionDelegate{

 func session(_ session: ARSession, didUpdate frame: ARFrame) {

        /*
        Full Credit To https://stackoverflow.com/questions/45919745/reliable-access-and-modify-captured-camera-frames-under-scenekit
        */

        //1. Get The Current Frame's Pixel Buffer & Lock It For CPU Access
        let pixelBuffer = frame.capturedImage
        CVPixelBufferLockBaseAddress(pixelBuffer, [])

        //2. Overwrite The Chroma (CbCr) Plane With The Neutral Value 128,
        //   Which Strips All Colour From The YCbCr Camera Image
        if let chromaPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1) {
            let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1)
            let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
            memset(chromaPlane, 128, bytesPerRow * height)
        }

        //3. Unlock The Buffer Again
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

      }

 }

Which successfully renders the camera feed in black and white: the captured image is a bi-planar YCbCr pixel buffer, and overwriting its chroma (CbCr) plane with the neutral value 128 removes the colour while leaving the luma plane untouched.

[Screenshot: the camera feed rendered in black and white]

Filtering Elements Of An SCNScene In Black & White:

As @Confused rightly said, if you decided that you wanted the camera feed to be in colour, but the contents of your AR experience to be in black and white, you can apply a filter directly to an SCNNode using its filters property, which is simply:

An array of Core Image filters to be applied to the rendered contents of the node.

Let's say, for example, that we dynamically create 3 SCNNodes with a sphere geometry; we can apply a Core Image filter to these directly like so:

/// Creates 3 Objects And Adds Them To The Scene (Rendering Them In GrayScale)
func createObjects(){

    //1. Create An Array Of UIColors To Set As The Geometry Colours
    let colours = [UIColor.red, UIColor.green, UIColor.yellow]

    //2. Create An Array Of The X Positions Of The Nodes
    let xPositions: [CGFloat] = [-0.3, 0, 0.3]

    //3. Create The Nodes & Add Them To The Scene
    for i in 0 ..< 3{

        let sphereNode = SCNNode()
        let sphereGeometry = SCNSphere(radius: 0.1)
        sphereGeometry.firstMaterial?.diffuse.contents = colours[i]
        sphereNode.geometry = sphereGeometry
        sphereNode.position = SCNVector3( xPositions[i], 0, -1.5)
        augmentedRealityView.scene.rootNode.addChildNode(sphereNode)

        //a. Create A Black & White Filter
        guard let blackAndWhiteFilter = CIFilter(name: "CIColorControls", withInputParameters: [kCIInputSaturationKey: 0.0]) else { return }

        //b. Name The Filter So Its Parameters Can Be Accessed Later Via
        //   The Key Path filters.bw.<inputKey>, Then Apply It To The Node
        blackAndWhiteFilter.name = "bw"
        sphereNode.filters = [blackAndWhiteFilter]
    }

}
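Naming the filter is what makes it adjustable after the fact: SceneKit exposes named filters through key paths of the form filters.<name>.<inputKey>. A hedged sketch of what that allows (assuming the "bw" filter set up above):

// Fade the node back to full colour by raising the filter's saturation
sphereNode.setValue(1.0, forKeyPath: "filters.bw.inputSaturation")

// Or remove the effect from the node entirely
sphereNode.filters = nil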

Which will yield a result something like the following:

[Screenshot: three spheres rendered in grayscale over the colour camera feed]

For a full list of these filters, you can refer to the CoreImage Filter Reference.

Example Project: Here is a complete example project which you can download and explore for yourself.

Hope it helps...

The snapshot object should be a UIImage. Apply filters to this UIImage object by importing the CoreImage framework and then applying Core Image filters to it. You should adjust the exposure and control values on the image. For more implementation details, check this answer. From iOS 6 you can also use the CIColorMonochrome filter to achieve the same effect.

Here is the Apple documentation for all the available filters. Click on each filter to see the visual effect it has on an image when applied.

Here is the Swift 4 code.

func imageBlackAndWhite() -> UIImage?
{
    if let beginImage = CoreImage.CIImage(image: self)
    {
        let paramsColor: [String : Double] = [kCIInputBrightnessKey: 0.0,
                                              kCIInputContrastKey:   1.1,
                                              kCIInputSaturationKey: 0.0]
        let blackAndWhite = beginImage.applyingFilter("CIColorControls", parameters: paramsColor)

        let paramsExposure: [String : AnyObject] = [kCIInputEVKey: NSNumber(value: 0.7)]
        let output = blackAndWhite.applyingFilter("CIExposureAdjust", parameters: paramsExposure)

        guard let processedCGImage = CIContext().createCGImage(output, from: output.extent) else {
            return nil
        }

        return UIImage(cgImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
    }
    return nil
}
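This function references self, so, like the earlier answer's helper, it is presumably meant to live in a UIImage extension. Assuming that, and assuming sceneView is your ARSCNView, usage could look like this sketch:

if let blackAndWhite = sceneView.snapshot().imageBlackAndWhite() {
    // Save the filtered snapshot without a completion callback
    UIImageWriteToSavedPhotosAlbum(blackAndWhite, nil, nil, nil)
}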

This might be the easiest and fastest way to do this:

Apply a Core Image filter to the scene:

https://developer.apple.com/documentation/scenekit/scnnode/1407949-filters

This filter gives a very good impression of a black and white photograph, with good transitions through grays: https://developer.apple.com/library/content/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CIPhotoEffectMono

You could also use this one, whose results are easy to shift in hue, too:

https://developer.apple.com/library/content/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CIColorMonochrome

And here, in Japanese, is proof of Core Image filters and SceneKit/ARKit working together: http://appleengine.hatenablog.com/entry/advent20171215
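As a hedged sketch of this approach (names and structure mine, not from the linked post), CIPhotoEffectMono could be applied to a snapshot like so:

import CoreImage
import UIKit

// Run CIPhotoEffectMono over a UIImage (e.g. an ARSCNView snapshot)
func monoVersion(of image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIPhotoEffectMono") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}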
