How to print a Kinect frame in OpenCV using OpenNI bindings
I'm trying to use OpenCV to process depth images from a Kinect. I'm using Python and primesense's bindings ( https://pypi.org/project/primesense/ ), but I'm having a lot of trouble just showing the images I get from OpenNI. I'm using:
import numpy as np
import cv2
from primesense import openni2

openni2.initialize("./Redist")    # can also accept the path of the OpenNI redistribution
dev = openni2.Device.open_any()
depth_stream = dev.create_color_stream()
depth_stream.start()

while(True):
    frame = depth_stream.read_frame()
    print(type(frame))    # prints <class 'primesense.openni2.VideoFrame'>
    frame_data = frame.get_buffer_as_uint8()
    print(frame_data)     # prints <primesense.openni2.c_ubyte_Array_921600 object at 0x000002B3AF5F8848>
    image = np.array(frame_data, dtype=np.uint8)
    print(type(image))    # prints <class 'numpy.ndarray'>
    print(image)          # prints [12 24 3 ... 1 3 12], I guess this is the array that makes the image
    cv2.imshow('image', image)

depth_stream.stop()
openni2.unload()
This is the output I'm getting, just a window with no image:
There is no documentation at all on how to use these bindings, so I'm kind of in a blind spot here. I thought that frame.get_buffer_as_uint8() was giving me the array ready to print, but it just returns <primesense.openni2.c_ubyte_Array_921600 object at 0x000002B3AF5F8848>.
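For context, the object printed above is a plain ctypes array, which NumPy can wrap directly via the buffer protocol. A minimal sketch, using a simulated empty buffer in place of a real frame (the 921600 in the class name is 640*480*3, one byte per RGB channel):

```python
import ctypes
import numpy as np

# Simulate the ctypes buffer the bindings return: 640*480*3 bytes,
# matching the c_ubyte_Array_921600 printed above.
BUF_LEN = 640 * 480 * 3
buf = (ctypes.c_uint8 * BUF_LEN)()

# A ctypes array supports the buffer protocol, so NumPy can wrap it
# without copying element by element:
arr = np.frombuffer(buf, dtype=np.uint8)
print(arr.shape)  # (921600,) -- still flat; it must be reshaped before imshow
```

The flat shape is the real problem: cv2.imshow needs a 2-D (or 3-D) array, not a 921600-element vector.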
Actually, I looked at the bindings' code, and found this:
def get_buffer_as_uint8(self):
    return self.get_buffer_as(ctypes.c_uint8)

def get_buffer_as_uint16(self):
    return self.get_buffer_as(ctypes.c_uint16)

def get_buffer_as_triplet(self):
    return self.get_buffer_as(ctypes.c_uint8 * 3)
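These accessors all reinterpret the same raw frame bytes, just with different element types. For a depth stream the sensor emits one 16-bit value per pixel, so reading the buffer as uint8 splits every pixel into two meaningless bytes. A sketch of the difference, using a synthetic buffer instead of real sensor data:

```python
import numpy as np

# Synthetic stand-in for one 640x480 depth frame: each pixel is a 16-bit
# depth value, so the raw buffer holds 640*480*2 bytes.
raw = np.zeros(640 * 480, dtype=np.uint16)
raw[0] = 0x1234                    # one sample depth value
raw_bytes = raw.tobytes()

as_uint8 = np.frombuffer(raw_bytes, dtype=np.uint8)    # wrong element size for depth
as_uint16 = np.frombuffer(raw_bytes, dtype=np.uint16)  # matches the sensor format

print(len(as_uint8))      # 614400 -- every pixel torn into two bytes
print(len(as_uint16))     # 307200 -- one element per pixel
print(hex(as_uint16[0]))  # 0x1234 -- the depth value survives intact
```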
Has anyone used these bindings? Any idea of how to make them work? Thank you in advance.
I found the solution:
Instead of using image = np.array(frame_data, dtype=np.uint8) for getting the image, you have to use frame_data = frame.get_buffer_as_uint16(). Also, I was failing to set the image shape correctly.
FOR FUTURE REFERENCE
To take an image from a depth camera (the Kinect is not the only one) using the OpenNI bindings for Python, and process that image with OpenCV, the following code will do the trick:
import numpy as np
import cv2
from primesense import openni2
from primesense import _openni2 as c_api

openni2.initialize("./Redist")    # can also accept the path of the OpenNI redistribution
dev = openni2.Device.open_any()
depth_stream = dev.create_depth_stream()
depth_stream.start()

while(True):
    frame = depth_stream.read_frame()
    frame_data = frame.get_buffer_as_uint16()
    img = np.frombuffer(frame_data, dtype=np.uint16)
    img.shape = (1, 480, 640)
    img = np.concatenate((img, img, img), axis=0)
    img = np.swapaxes(img, 0, 2)
    img = np.swapaxes(img, 0, 1)
    cv2.imshow("image", img)
    cv2.waitKey(34)

depth_stream.stop()
openni2.unload()
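One caveat: a raw 16-bit depth frame often looks nearly black in cv2.imshow, because depth values (typically millimetres) occupy only a small slice of the 0-65535 range. A hedged sketch of rescaling for display, using a simulated frame in place of the real read_frame() output:

```python
import numpy as np

# Simulated 480x640 depth frame in millimetres (a real frame would come
# from depth_stream.read_frame() as above); indoor depths span ~0.5-4 m.
depth = np.random.randint(500, 4000, size=(480, 640)).astype(np.uint16)

# Stretch the occupied depth range to the full 8-bit range so imshow
# shows visible contrast instead of a near-black image.
d_min, d_max = int(depth.min()), int(depth.max())
vis = ((depth.astype(np.float32) - d_min) / (d_max - d_min) * 255).astype(np.uint8)

print(vis.dtype, vis.min(), vis.max())  # e.g. uint8 0 255
```

Passing vis to cv2.imshow in place of the raw uint16 frame should make the depth structure much easier to see.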
To use the color camera, you can use dev.create_color_stream() instead.
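With the color stream the buffer is 8-bit RGB triplets rather than 16-bit depths, so the reshaping step changes. A sketch, assuming a 640x480 stream and using a simulated buffer in place of get_buffer_as_uint8():

```python
import numpy as np

# Simulated color frame buffer: a real one would come from
# color_stream.read_frame().get_buffer_as_uint8() and hold
# 640*480*3 bytes of RGB data.
frame_data = bytes(640 * 480 * 3)

img = np.frombuffer(frame_data, dtype=np.uint8)
img = img.reshape(480, 640, 3)    # rows, cols, RGB channels

# OpenNI delivers RGB while OpenCV expects BGR, so flip the channel axis
# (equivalent to cv2.cvtColor(img, cv2.COLOR_RGB2BGR)):
img = img[:, :, ::-1]
print(img.shape)  # (480, 640, 3)
```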