sws_scale on raw yuv420p data
I am getting camera data from my Raspberry Pi in yuv420p format. I am trying to use sws_scale to convert it into RGBA format. This is how I initialize my context:
_sws_context = sws_getContext(CAMERA_WIDTH, CAMERA_HEIGHT, AV_PIX_FMT_YUV420P,
CAMERA_WIDTH, CAMERA_HEIGHT, AV_PIX_FMT_RGBA, 0, nullptr, nullptr, nullptr);
I am now a bit confused about how to set the data and line sizes for sws_scale. From the camera I just get a plain array of bytes without further structure. I assume I have to subdivide it into the planes somehow. My first approach was not to separate it at all, essentially like this:
const uint8_t *src_data[] = {data.data()};
const int src_strides[] = {(int) std::ceil((CAMERA_WIDTH * 6) / 8)};
This was based on the fact that YUV420p averages 12 bits per pixel (each 2x2 grid of pixels shares one U and one V sample), so I assumed one line would use half of this. But that causes a segmentation fault. So I think I somehow have to split src_data and src_strides into the respective YUV planes, but I am not sure how to do this, especially since one pixel of YUV420 data uses less than one byte per plane...
Turns out it is simpler than I thought: the planes are stored one after the other, which also makes the strides pretty obvious:
const auto y = data.data();
const auto u = y + CAMERA_WIDTH * CAMERA_HEIGHT;
const auto v = u + (CAMERA_WIDTH * CAMERA_HEIGHT) / 4;
const auto stride_y = CAMERA_WIDTH;
const auto stride_u = CAMERA_WIDTH / 2;
const auto stride_v = CAMERA_WIDTH / 2;