How to connect an Xbox Kinect to OpenCV in Ubuntu?
I want to use the Kinect with OpenCV in Ubuntu (14.04) from C++. I have installed OpenNI and libfreenect.
When I type lsusb in a terminal, the system prints the following:
Bus 003 Device 005: ID 045e:02ae Microsoft Corp. Xbox NUI Camera
Bus 003 Device 003: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 003 Device 004: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
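(Aside: the three USB IDs above identify the Kinect v1 sub-devices — vendor 045e is Microsoft; 02ae is the camera, 02b0 the motor, 02ad the audio. The helper below is a hypothetical sketch, not part of any Kinect library; it only parses text in the lsusb format shown above, which can be handy for a scripted sanity check before blaming OpenCV.)

```python
import re

# Hypothetical helper: scan `lsusb` output for the three Kinect v1
# sub-devices (vendor 045e = Microsoft; 02ae camera, 02b0 motor, 02ad audio).
KINECT_IDS = {"045e:02ae": "camera", "045e:02b0": "motor", "045e:02ad": "audio"}

def find_kinect_devices(lsusb_output):
    found = {}
    for line in lsusb_output.splitlines():
        m = re.search(r"ID ([0-9a-f]{4}:[0-9a-f]{4})", line)
        if m and m.group(1) in KINECT_IDS:
            found[KINECT_IDS[m.group(1)]] = line.strip()
    return found

sample = """\
Bus 003 Device 005: ID 045e:02ae Microsoft Corp. Xbox NUI Camera
Bus 003 Device 003: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 003 Device 004: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
"""
print(sorted(find_kinect_devices(sample)))  # ['audio', 'camera', 'motor']
```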
When I type freenect-glview in a terminal, the system prints the following:
Kinect camera test
Number of devices found: 1
and the system shows the RGB and depth streams.
I also enabled OpenNI when running cmake for OpenCV (-D WITH_OPENNI=ON), after which the system showed:
openni: yes
prime-sensor-kinect : yes
I am compiling the code with:
g++ -o test1 test1.cpp `pkg-config opencv --cflags --libs`
but when I run the code, the system reports this error:
CvCapture_OpenNI::CvCapture_OpenNI : Failed to enumerate production trees: Can't create any node of the requested type!
Code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "opencv2/opencv.hpp"
#include <vector>
#include <stdio.h>

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(CV_CAP_OPENNI); // open the Kinect via the OpenNI backend
    if(!cap.isOpened())              // check if we succeeded
        return -1;

    Mat edges;
    namedWindow("edges", 1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from the camera
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
        if(waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in the VideoCapture destructor
    return 0;
}
and when running the Python code, the system reports this error:
CvCapture_OpenNI::CvCapture_OpenNI : Failed to enumerate production trees: Can't create any node of the requested type!
0.0
Unable to Retrieve Disparity Map from camera
Python code:
import cv2
import cv2.cv as cv

capture = cv2.VideoCapture(cv.CV_CAP_OPENNI)
capture.set(cv.CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE, cv.CV_CAP_OPENNI_VGA_30HZ)
print capture.get(cv.CV_CAP_PROP_OPENNI_REGISTRATION)

while True:
    if not capture.grab():
        print "Unable to Grab Frames from camera"
        break
    okay1, depth_map = capture.retrieve(0, cv.CV_CAP_OPENNI_DEPTH_MAP)
    if not okay1:
        print "Unable to Retrieve Disparity Map from camera"
        break
    okay2, gray_image = capture.retrieve(0, cv.CV_CAP_OPENNI_GRAY_IMAGE)
    if not okay2:
        print "Unable to retrieve Gray Image from device"
        break
    cv2.imshow("depth camera", depth_map)
    cv2.imshow("rgb camera", gray_image)
    if cv2.waitKey(10) == 27:
        break

cv2.destroyAllWindows()
capture.release()
OpenCV doesn't recognize the Kinect as an input device. How can I solve this problem?
I am sorry for the bad writing, because my English is bad.
I stumbled upon this thread while finding myself in a similar situation. I could only manage to get the sensor data from OpenCV after installing the PrimeSense modules for OpenNI, which you can find here. After following the instructions listed in the README for my system (Ubuntu 14.04.5), I managed to get this code running:
#include <cstdio>
#include <opencv2/opencv.hpp>

int main(int argc, char **argv){
    cv::VideoCapture capture(CV_CAP_OPENNI);
    cv::Mat image;
    cv::Mat bgrImage;
    while(true){
        capture.grab();
        capture.retrieve(image, CV_CAP_OPENNI_DEPTH_MAP);
        capture.retrieve(bgrImage, CV_CAP_OPENNI_BGR_IMAGE);
        cv::imshow("Image", image);
        cv::imshow("Color", bgrImage);
        if(cv::waitKey(30) >= 0) break;
    }
    return 0;
}
If you've installed libfreenect and OpenCV, you should be able to run the following Python script:
import freenect
import cv2
import numpy as np

def pretty_depth(depth):
    # Clamp the raw Kinect depth to 10 bits, then drop the two low bits
    # so the values fit in an unsigned 8-bit image for display
    np.clip(depth, 0, 2**10 - 1, depth)
    depth >>= 2
    depth = depth.astype(np.uint8)
    return depth

while 1:
    orig = freenect.sync_get_video()[0]
    orig = cv2.cvtColor(orig, cv2.COLOR_BGR2RGB)
    dst = pretty_depth(freenect.sync_get_depth()[0])  # depth input from the Kinect
    cv2.imshow('Disparity', dst)
    cv2.imshow('RGB', orig)
    if cv2.waitKey(1) & 0xFF == ord('b'):
        break
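To see what pretty_depth does without a Kinect attached: it clamps the raw depth values to 10 bits and right-shifts by 2, mapping them into the 0-255 range that cv2.imshow expects for 8-bit images. A small demonstration on made-up sample values (assumes only numpy, no camera):

```python
import numpy as np

def pretty_depth(depth):
    # Same conversion as in the script above: clamp to [0, 1023],
    # then drop the two low bits so values fit in uint8.
    np.clip(depth, 0, 2**10 - 1, depth)
    depth >>= 2
    depth = depth.astype(np.uint8)
    return depth

raw = np.array([0, 4, 512, 1023, 2047], dtype=np.uint16)  # made-up raw readings
print(pretty_depth(raw).tolist())  # [0, 1, 128, 255, 255]
```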