
Convert individual pixel values from RGB to YUV420 and save the frame - C++

I have been working with RGB->YUV420 conversion for some time using the FFmpeg library. I have already tried the sws_scale functionality, but it is not working well. Now I have decided to convert each pixel individually, using the colorspace conversion formulae. The following code gets me a few frames and lets me access the individual R, G, B values of each pixel:

// Read frames and save first five frames to disk
    i=0;
    while((av_read_frame(pFormatCtx, &packet)>=0) && (i<5)) 
    {
        // Is this a packet from the video stream?
        if(packet.stream_index==videoStreamIdx) 
        {   
            /// Decode video frame            
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

            // Did we get a video frame?
            if(frameFinished) 
            {
                i++;
                sws_scale(img_convert_ctx, (const uint8_t * const *)pFrame->data,
                          pFrame->linesize, 0, pCodecCtx->height,
                          pFrameRGB->data, pFrameRGB->linesize);

                int x, y, R, G, B;
                for(y = 0; y < pCodecCtx->height; y++)
                {
                    // Each row starts at data[0] + y*linesize[0]; linesize can
                    // be larger than width*3 because of padding
                    uint8_t *p = pFrameRGB->data[0] + y * pFrameRGB->linesize[0];
                    for(x = 0; x < pCodecCtx->width; x++)
                    {
                        R = *p++;
                        G = *p++;
                        B = *p++;
                        printf(" %d-%d-%d ", R, G, B);
                    }
                }

                SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);
            }
        }

        // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
    }

I read online that to convert RGB->YUV420 (or vice versa), one should first convert to YUV444, so the path is RGB -> YUV444 -> YUV420. How do I implement this in C++?
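For illustration, here is a minimal sketch of that two-step path operating on a packed RGB24 buffer. The helper name, the integer BT.601 "studio-swing" coefficients, and the assumption of even width/height are all assumptions, not taken from the code above; the YUV444 stage is simply the full-resolution Y/U/V computed per pixel, and the 4:2:0 stage averages U and V over each 2x2 block.

#include <stdint.h>

// Hypothetical helper: convert a packed RGB24 buffer to planar YUV420P.
// BT.601 "studio swing" integer coefficients are assumed; pick the matrix
// that matches your target. Width and height are assumed to be even.
static void rgb24_to_yuv420p(const uint8_t *rgb, int width, int height,
                             int rgbStride,
                             uint8_t *yPlane, uint8_t *uPlane, uint8_t *vPlane)
{
    for (int y = 0; y < height; y += 2)
    {
        for (int x = 0; x < width; x += 2)
        {
            int uSum = 0, vSum = 0;

            // Visit the 2x2 block of pixels that shares one U and one V sample
            for (int dy = 0; dy < 2; dy++)
            {
                for (int dx = 0; dx < 2; dx++)
                {
                    const uint8_t *p = rgb + (y + dy) * rgbStride + (x + dx) * 3;
                    int R = p[0], G = p[1], B = p[2];

                    // RGB -> YUV444 (BT.601, integer approximation)
                    int Y = (( 66 * R + 129 * G +  25 * B + 128) >> 8) + 16;
                    int U = ((-38 * R -  74 * G + 112 * B + 128) >> 8) + 128;
                    int V = ((112 * R -  94 * G -  18 * B + 128) >> 8) + 128;

                    yPlane[(y + dy) * width + (x + dx)] = (uint8_t)Y;
                    uSum += U;
                    vSum += V;
                }
            }

            // YUV444 -> YUV420: one chroma sample per 2x2 block (simple average)
            uPlane[(y / 2) * (width / 2) + (x / 2)] = (uint8_t)(uSum / 4);
            vPlane[(y / 2) * (width / 2) + (x / 2)] = (uint8_t)(vSum / 4);
        }
    }
}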

Also, here is the SaveFrame() function used above. I guess it will also have to change a little, since YUV420 stores the data differently. How do I take care of that?

void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame)
{
    FILE *pFile;
    char szFilename[32];
    int  y;

    // Open file
    sprintf(szFilename, "frame%d.ppm", iFrame);
    pFile=fopen(szFilename, "wb");
    if(pFile==NULL)
        return;

    // Write header
    fprintf(pFile, "P6\n%d %d\n255\n", width, height);

    // Write pixel data
    for(y=0; y<height; y++)
        fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

    // Close file
    fclose(pFile);
}

Can somebody please suggest? Many thanks!!!

void SaveFrameYUV420P(AVFrame *pFrame, int width, int height, int iFrame)
{
    FILE *pFile;
    char szFilename[32];
    int  y;

    // Open file
    sprintf(szFilename, "frame%d.yuv", iFrame);
    pFile=fopen(szFilename, "wb");
    if(pFile==NULL)
        return;

    // Write pixel data row by row so padded linesizes are handled;
    // the U and V planes are half the width and half the height of Y
    for(y=0; y<height; y++)
        fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width, pFile);
    for(y=0; y<height/2; y++)
        fwrite(pFrame->data[1]+y*pFrame->linesize[1], 1, width/2, pFile);
    for(y=0; y<height/2; y++)
        fwrite(pFrame->data[2]+y*pFrame->linesize[2], 1, width/2, pFile);

    // Close file
    fclose(pFile);
}
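To tie the pieces together, a hedged usage sketch: it feeds the hypothetical rgb24_to_yuv420p() helper sketched in the question above into SaveFrameYUV420P(). The plane buffers are plain std::vectors, the AVFrame fields are filled in by hand only so SaveFrameYUV420P() can find the three planes, and av_frame_alloc() is assumed to be available (older FFmpeg builds used avcodec_alloc_frame() instead).

#include <vector>

// Assumes the FFmpeg headers already included above, plus the
// rgb24_to_yuv420p() sketch and SaveFrameYUV420P() from this page.
void ConvertAndSaveYUV420P(AVFrame *pFrameRGB, int width, int height, int iFrame)
{
    // Hand-allocated planes: Y is full resolution, U and V are quarter size
    std::vector<uint8_t> yPlane(width * height);
    std::vector<uint8_t> uPlane(width * height / 4);
    std::vector<uint8_t> vPlane(width * height / 4);

    rgb24_to_yuv420p(pFrameRGB->data[0], width, height, pFrameRGB->linesize[0],
                     yPlane.data(), uPlane.data(), vPlane.data());

    // Wrap the planes in an AVFrame only so SaveFrameYUV420P() can read them
    AVFrame *yuv = av_frame_alloc();
    yuv->data[0] = yPlane.data();
    yuv->data[1] = uPlane.data();
    yuv->data[2] = vPlane.data();
    yuv->linesize[0] = width;       // no padding in these hand-made planes
    yuv->linesize[1] = width / 2;
    yuv->linesize[2] = width / 2;

    SaveFrameYUV420P(yuv, width, height, iFrame);

    av_frame_free(&yuv);            // the vectors own the pixel memory
}

The resulting frame%d.yuv file is exactly width*height*3/2 bytes (one full-size Y plane plus two quarter-size chroma planes), which is what a raw YUV420 viewer expects.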

On Windows, you can use IrfanView to view frames saved this way: open the file as RAW, 24bpp format, provide the width and height, and check the "yuv420" box.
