
Frame size difference before and after creating a video

I am creating a video from images using OpenCV:

import cv2

dim = (width, height)                      # frame size must match the images being written
fourcc = cv2.VideoWriter_fourcc(*'X264')   # H.264 codec
out_d = cv2.VideoWriter(save_path_depth, fourcc, fps, dim)

After creating the video, I read it back and extract frames from it:

cap = cv2.VideoCapture(save_path_depth)   # open the video that was just written
i = 0
while cap.isOpened():
    ret, frame = cap.read()

    if not ret:
        break
    print(frame)
    cv2.imwrite(output + "/" + str(i).zfill(1) + ".png", frame)
    i += 1

cap.release()

The extracted frames are almost double the file size of the frames I originally used to create the video. Also, when I do a frame-to-frame comparison, some frames are completely different from their original counterparts. Can somebody explain the reason behind this?

It's not a fair comparison.

Your original input images may have been well suited to PNG and therefore compressed efficiently.

Your actual encoding options are not shown, but you are most likely experiencing generation loss from using a lossy format. The images are permanently altered with encoding artifacts, which come from the methods that keep video file sizes small; they are designed to be hard to notice while watching the video. However, re-encoding from H.264 back to PNG bakes these noisy artifacts into the new images, which increases their complexity, makes compression harder, and therefore increases the file size. PNG doesn't do well with noise.
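As a rough way to see this, you could compare one original input image with the corresponding frame decoded from the video, both as a pixel difference and as PNG file sizes. This is only a sketch; the paths "original.png", "video.mp4", and "decoded.png" are placeholders, not names from your code.

import os
import cv2

# Placeholders: point these at one of your source images and the video you wrote.
original = cv2.imread("original.png")
cap = cv2.VideoCapture("video.mp4")
ret, decoded = cap.read()                  # first decoded frame, matching the first image
cap.release()

if ret and original is not None and original.shape == decoded.shape:
    # Pixel-level damage introduced by the lossy H.264 round trip
    psnr = cv2.PSNR(original, decoded)      # higher means closer to the original
    max_err = cv2.absdiff(original, decoded).max()
    print("PSNR:", round(psnr, 2), "dB, max per-pixel error:", max_err)

    # File-size difference: the noisier decoded frame usually compresses worse as PNG
    cv2.imwrite("decoded.png", decoded)
    print("original.png:", os.path.getsize("original.png"), "bytes")
    print("decoded.png: ", os.path.getsize("decoded.png"), "bytes")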

Secondly, an RGB to YUV colorspace conversion is occurring, which can also cause differences.
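Even without any compression, a round trip through YUV on 8-bit data can change pixel values slightly because of rounding, and most encoders additionally subsample the chroma channels, which loses even more information. A minimal sketch of the rounding effect alone, using OpenCV's full-resolution YUV conversion on a random image standing in for one of your frames:

import cv2
import numpy as np

# Random 8-bit BGR image as a stand-in for one of your frames (assumption).
img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

# Full-resolution BGR -> YUV -> BGR round trip; real encoders also subsample chroma.
yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
back = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)

diff = cv2.absdiff(img, back)
print("max per-pixel difference after round trip:", diff.max())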
