
Stream Leap Motion Actual Camera Feed

Is there a way to stream (preferably in JS, but any language would do) the actual infrared camera video feed from the Leap Motion? The demo seen at 0:52 here seems to show that the device can provide more data than just a skeleton of points, and I'd love to be able to display the actual "Leap-View" data in one of my projects, which I assume would essentially be a grayscale image.

Thanks!

My name's Edwin with the Leap Motion Community Team. Unfortunately, the "point clouds" featured in our early videos are visualizations from some of our debugging tools. Because they're not temporally or spatially consistent, they are not usable as methods of interaction. There is currently no point cloud to be had. It may be something we can reconstruct from the 3D information we do have, but it is probably not a feature we will add in the short term.

I think what you want is this: https://github.com/meyburgh/forirony/blob/master/misc/leap.cpp

It is a very simple demo that shows the grayscale infrared video from each of the Leap Motion's cameras.

The video looks a bit strange, so if you want it to look 'normal' you need to rectify it. Leap provides image.rectify(), but that runs on the CPU, so for performance it's best to use a shader instead of the image.rectify() function.

To get the 'point cloud,' if that's what you're interested in, you could do per-pixel disparity mapping (which OpenCV supports on both CPU and GPU), or you can check out NVIDIA's CUDA Toolkit, which includes a disparity-map demo in its samples. Link to OpenCV's stereo correspondence (aka disparity mapping) docs: http://docs.opencv.org/3.0-beta/modules/cudastereo/doc/stereo.html

I can appreciate that the quality of a point cloud obtained through disparity mapping would be quite coarse and noisy, and thus not useful for 'interaction' as Edwin put it in his post. But if you are interested in studying statistical techniques to make sense of the information hidden in the noise, or would like the point cloud for 'artistic' reasons, then I would say this is the way to go.


 