
ARKit - how many tracking images can it track?

So I understand that in order to track images, we need to create an AR Resources folder and place all the images we intend to track there, as well as configure their real-world size properties through the inspector.

Then we assign those ARReferenceImages to the session's world-tracking configuration.
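For context, the standard setup I'm describing looks roughly like this (assuming an asset catalog group named "AR Resources" and an ARSCNView outlet called sceneView):

import ARKit

// Load the reference images from the asset catalog group and hand them
// to the world-tracking configuration. The group name "AR Resources" and
// the sceneView outlet are assumptions for illustration.
let configuration = ARWorldTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                          bundle: nil) {
    configuration.detectionImages = referenceImages
}
sceneView.session.run(configuration)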

All good with that. But HOW MANY can we track? 10? 100? 1,000,000? And would it be possible to download those images and create ARReferenceImages on the fly, instead of having them in the bundle from the very beginning?

Having a look at the Apple docs, they don't seem to specify a limit. As such, it's reasonable to assume it depends on memory management etc.

Regarding creating images on the fly, this is definitely possible.

According to the docs, this can be done in one of two ways:

  1. Creating a new reference image from a Core Graphics image object:

     init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat) 
  2. Creating a new reference image from a Core Video pixel buffer (a sketch of this variant follows the list):

     init(CVPixelBuffer, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat) 
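As an aside, the pixel-buffer initializer means you could, in principle, build a target straight from a live camera frame. A minimal sketch, where the 0.2m physical width is a placeholder for the real target's size:

import ARKit

// Sketch: build a reference image from the current ARFrame's camera buffer.
// The 0.2m physical width is a placeholder and must match the real-world target.
func referenceImage(from frame: ARFrame) -> ARReferenceImage {
    let arImage = ARReferenceImage(frame.capturedImage,
                                   orientation: .up,
                                   physicalWidth: 0.2)
    arImage.name = "Captured Frame"
    return arImage
}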

Here is an example of creating a referenceImage on the fly using an image from the standard Assets bundle, although this can easily be adapted to parse an image from a URL etc. (a sketch of the URL variant follows the conversion helper below):

// Create ARReferenceImages From Somewhere Other Than The Default Folder
func loadDynamicImageReferences() {

    //1. Get The Image From The Bundle
    guard let imageFromBundle = UIImage(named: "moonTarget"),
        //2. Convert It To A CIImage
        let imageToCIImage = CIImage(image: imageFromBundle),
        //3. Then Convert The CIImage To A CGImage
        let cgImage = convertCIImageToCGImage(inputImage: imageToCIImage) else { return }

    //4. Create An ARReferenceImage (Remembering Physical Width Is In Metres)
    let arImage = ARReferenceImage(cgImage, orientation: CGImagePropertyOrientation.up, physicalWidth: 0.2)

    //5. Name The Image
    arImage.name = "CGImage Test"

    //6. Set The ARWorldTrackingConfiguration Detection Images (Assuming A Configuration Is Running)
    configuration.detectionImages = [arImage]

}


/// Converts A CIImage To A CGImage
///
/// - Parameter inputImage: CIImage
/// - Returns: CGImage
func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {

    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }

    return nil
}
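And here is a rough sketch of the URL variant mentioned earlier, assuming the server returns valid image data; the 0.2m physical width is again a placeholder you would replace with the printed target's real size:

import ARKit
import UIKit

// Rough sketch: download the image data, convert it to a CGImage, and
// build an ARReferenceImage from it. Error handling is deliberately minimal.
func loadReferenceImage(from url: URL, completion: @escaping (ARReferenceImage?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data,
            let downloadedImage = UIImage(data: data),
            let cgImage = downloadedImage.cgImage else {
                completion(nil)
                return
        }
        //Physical width is in metres; 0.2 is a placeholder
        let referenceImage = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.2)
        referenceImage.name = url.lastPathComponent
        completion(referenceImage)
    }.resume()
}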

We can then test this within the ARSCNViewDelegate, e.g.:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    let x = currentImageAnchor.transform
    print(x.columns.3.x, x.columns.3.y, x.columns.3.z)

    //2. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!

    //3. Get The Target's Width & Height In Metres
    let width = currentImageAnchor.referenceImage.physicalSize.width
    let height = currentImageAnchor.referenceImage.physicalSize.height

    print("""
    Image Name = \(name)
    Image Width = \(width)
    Image Height = \(height)
    """)

    //4. Create A Plane Geometry To Cover The ARImageAnchor
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: width, height: height)
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
    planeNode.opacity = 0.25
    planeNode.geometry = planeGeometry

    //5. Rotate The PlaneNode To Horizontal
    planeNode.eulerAngles.x = -.pi / 2

    //The Node Is Centered In The Anchor (0,0,0)
    node.addChildNode(planeNode)

    //6. Create An SCNBox
    let boxNode = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)

    //7. Create A Different Colour For Each Face
    let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
    var faceMaterials = [SCNMaterial]()

    //8. Apply One To Each Of The Six Faces
    for face in 0 ..< 6 {
        let material = SCNMaterial()
        material.diffuse.contents = faceColours[face]
        faceMaterials.append(material)
    }
    boxGeometry.materials = faceMaterials
    boxNode.geometry = boxGeometry

    //9. Set The Box's Position So It Sits On The Plane (y = box.height / 2)
    boxNode.position = SCNVector3(0, 0.05, 0)

    //10. Add The Box To The Node
    node.addChildNode(boxNode)

}

As you can see, the process is fairly easy. So in your case you are probably more interested in the conversion function above, which uses this initializer to create the dynamic images:

init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)

Paraphrasing the Human Interface Guidelines for AR... image detection performance/accuracy deteriorates as the number of images increases. So there's no hard limit in the API, but if you try to put more than around 25 images in the current detection set, it'll start getting too slow/inaccurate to be useful.

There are lots of other factors affecting performance/accuracy, too, so consider that a guideline, not a hard limit. Depending on scene conditions in the place where you're running the app, how much you're stressing the CPU with other tasks, how distinct your reference images are from one another, etc., you might manage a few more than 25... or start having detection problems with a few less than 25.
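A practical consequence: if you have a large library of potential targets, you can keep a master set and only activate a small subset at a time, re-running the session whenever the relevant subset changes. A hedged sketch (the 25-image cap simply mirrors the guideline above; it is not an API constant):

import ARKit

// Sketch: only hand ARKit a small subset of a larger target library.
// The 25-image cap mirrors the guideline above, not an API-enforced limit.
func activate(_ subset: Set<ARReferenceImage>, in session: ARSession) {
    assert(subset.count <= 25, "Large detection sets degrade speed/accuracy")
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = subset
    //Remove anchors from the previous subset so stale targets disappear
    session.run(configuration, options: [.removeExistingAnchors])
}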
