Reading Image Stream from RCCC Bayer Camera Sensor in Ubuntu
I am working with a LI-AR0820 GMSL2 camera, which uses the On-Semi AR0820 sensor that captures images in a 12-bit RCCC Bayer format. I want to read the real-time image stream from the camera, convert it to a grayscale image (using this demosaicing algorithm), and then feed it into an object detection algorithm. However, since OpenCV does not support the RCCC format, I can't use the VideoCapture class to get image data from the camera. I am looking for a similar way to get the streamed image data in an array-like format so that I can manipulate it further. Any ideas?

I'm running Ubuntu 18.04 with OpenCV 3.2.0 and Python 3.7.1.

EDIT: I am using the following code.
#include <vector>
#include <iostream>
#include <stdio.h>
#include <string.h>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main() {
    // Each pixel is made up of 16 bits, with the high 4 bits always equal to 0
    unsigned char bytes[2];
    // Hold the data in a vector
    std::vector<unsigned short int> data;
    // Read the camera data
    FILE *fp = fopen("test.raw", "rb");
    if (fp == NULL) {
        std::cerr << "Could not open test.raw" << std::endl;
        return 1;
    }
    while (fread(bytes, 2, 1, fp) != 0) {
        // The data comes in little-endian, so shift the second byte left and OR in the first byte
        data.push_back(bytes[0] | (bytes[1] << 8));
    }
    fclose(fp);
    // Make a 1280x720 matrix of 16-bit unsigned integers
    cv::Mat imBayer = cv::Mat(720, 1280, CV_16U);
    // Make a matrix to hold RGB data
    cv::Mat imRGB;
    // Copy the data in the vector into the matrix
    memmove(imBayer.data, data.data(), data.size() * 2);
    // Convert the GR Bayer pattern into RGB, putting it into the RGB matrix
    cv::cvtColor(imBayer, imRGB, CV_BayerGR2RGB);
    cv::namedWindow("Display window", cv::WINDOW_AUTOSIZE);
    // *15 because the image is dark
    cv::imshow("Display window", 15 * imRGB);
    cv::waitKey(0);
    return 0;
}
There are two problems with this code. First, I have to get a raw image file using fswebcam and then use the code above to read the raw file and display the image. I want to be able to access the /dev/video1 node and read the raw data from there directly, instead of having to save it first and then read it separately. Second, OpenCV does not support the RCCC Bayer format, so I have to come up with a demosaicing method.
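A note on the first problem: if the V4L2 driver for the deserializer supports the read() I/O method, the device node can be read like an ordinary file, one frame's worth of bytes at a time. This is only a sketch under that assumption (many V4L2 drivers support only mmap streaming, in which case this fails); the 1280x720, 2-bytes-per-pixel geometry is taken from the code above:

```python
import numpy as np

def read_raw_frame(dev, width=1280, height=720):
    """Read one raw frame from a file-like object and return it as a
    16-bit NumPy array (works only if the driver supports read())."""
    nbytes = width * height * 2
    buf = dev.read(nbytes)
    if len(buf) < nbytes:
        raise IOError("short read: incomplete frame")
    # 12-bit pixels stored in 16-bit little-endian words
    return np.frombuffer(buf, dtype='<u2').reshape(height, width)

# usage (hypothetical device node):
# with open('/dev/video1', 'rb') as dev:
#     frame = read_raw_frame(dev)
```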
The camera outputs serialized data over a coax cable, so I use a deserializer board with a USB 3.0 connection to attach the camera to my laptop. The setup can be seen here.
If your camera supports the CAP_PROP_CONVERT_RGB property, you might be able to get raw RCCC data from VideoCapture. By setting this property to False, you can disable the conversion to RGB. So, you can capture raw frames using code like this (no error checking, for simplicity):
import cv2

cap = cv2.VideoCapture(0)
# disable converting images to RGB
cap.set(cv2.CAP_PROP_CONVERT_RGB, False)
while True:
    ret, frame = cap.read()
    # other processing ...
cap.release()
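With the conversion disabled, read() typically hands back the undecoded buffer as a flat 8-bit array whose exact shape depends on the driver. The 16-bit little-endian pixels can then be reassembled with NumPy, mirroring the byte-shifting loop in your C++ code. A sketch, assuming the 1280x720 geometry from the question:

```python
import numpy as np

def frame_to_gray16(frame, width=1280, height=720):
    """Reinterpret a raw uint8 frame buffer as little-endian 16-bit
    pixels (12 significant bits) at the sensor resolution."""
    flat = np.asarray(frame, dtype=np.uint8).reshape(-1)
    # equivalent to bytes[0] | (bytes[1] << 8) in the C++ loop
    return flat.view('<u2').reshape(height, width)

# synthetic example: bytes 0x34, 0x12 become the pixel 0x1234
demo = np.array([0x34, 0x12, 0xFF, 0x0F], dtype=np.uint8)
pixels = frame_to_gray16(demo, width=2, height=1)
```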
I don't know if this works for your camera.
If you can get the raw images somehow, you can apply the de-mosaicing method with the optimal filter described in the ANALOG DEVICES app note. I wrote the following Python code, as described in the app note, to test the RCCC -> GRAY conversion.
import cv2
import numpy as np

rgb = cv2.cvtColor(cv2.imread('RGB.png'), cv2.COLOR_BGR2RGB)
c = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
r = rgb[:, :, 0]

# no error checking. c shape must be a multiple of 2
rmask = np.tile([[1, 0], [0, 0]], [c.shape[0]//2, c.shape[1]//2])
cmask = np.tile([[0, 1], [1, 1]], [c.shape[0]//2, c.shape[1]//2])

# create the RCCC image by replacing 1 pixel out of each 2x2 region
# in the monochrome image (c) with a red pixel
rccc = (rmask*r + cmask*c).astype(np.uint8)

# RCCC -> GRAY conversion
def rccc_demosaic(rccc, rmask, cmask, filt):
    # use border type REFLECT_101 to give correct results for border pixels
    filtered = cv2.filter2D(src=rccc, ddepth=-1, kernel=filt,
                            anchor=(-1, -1), borderType=cv2.BORDER_REFLECT_101)
    demos = (rmask*filtered + cmask*rccc).astype(np.uint8)
    return demos

# demo of the optimal filter
zeta = 0.5

kernel_4neighbor = np.array([[0, 0, 0, 0, 0],
                             [0, 0, 1, 0, 0],
                             [0, 1, 0, 1, 0],
                             [0, 0, 1, 0, 0],
                             [0, 0, 0, 0, 0]])/4.0

kernel_optimal = np.array([[0, 0, -1, 0, 0],
                           [0, 0, 2, 0, 0],
                           [-1, 2, 4, 2, -1],
                           [0, 0, 2, 0, 0],
                           [0, 0, -1, 0, 0]])/8.0

kernel_param = np.array([[0, 0, -1./4, 0, 0],
                         [0, 0, 0, 0, 0],
                         [-1./4, 0, 1., 0, -1./4],
                         [0, 0, 0, 0, 0],
                         [0, 0, -1./4, 0, 0]])

# apply the optimal filter (Figure 7)
opt1 = rccc_demosaic(rccc, rmask, cmask, kernel_optimal)

# parametric filter with zeta = 0.5 (Figure 5)
opt2 = rccc_demosaic(rccc, rmask, cmask, kernel_4neighbor + zeta * kernel_param)

# PSNR (cast to float to avoid uint8 wrap-around in the difference)
print(10 * np.log10(255**2 / ((c.astype(np.float64) - opt1)**2).mean()))
print(10 * np.log10(255**2 / ((c.astype(np.float64) - opt2)**2).mean()))
Simulated RCCC image:

Gray image from the de-mosaicing algorithm:
One more thing: if your camera vendor provides an SDK for Linux, it may have an API to do the RCCC -> GRAY conversion, or at least to get the raw image. If the RCCC -> GRAY conversion is not in the SDK, their C# sample code should have it, so I suggest you take a look at their code.