
ffpyplayer Image to QPixmap

I am looking for a way to stream a video file to a QPixmap in PyQt using FFPyPlayer. The docs give an example of streaming Image objects out of the player, but I am not sure how to convert such an Image to a QPixmap. I do see that there is a way to convert it to a bytearray, and QPixmap does have a QPixmap.loadFromData() function, but I was not able to tie the two together successfully.

Here's the example from the docs:

from ffpyplayer.player import MediaPlayer
import time

player = MediaPlayer(filename)
val = ''
while val != 'eof':
    frame, val = player.get_frame()
    if val != 'eof' and frame is not None:
        img, t = frame
        # Now what?!
    else:
        time.sleep(0.01)  # no frame ready yet

EDIT: Well, I did come up with a solution by converting the FFPyPlayer Image to a PIL Image and then to a QPixmap via Pillow's ImageQt.toqpixmap() method. However, I hesitate to mark this as the answer because it seems so inefficient that I would like to know if anyone has a better solution.

from ffpyplayer.player import MediaPlayer
from PIL import ImageQt, Image

player = MediaPlayer(filename)
val = ''
while val != 'eof':
    frame, val = player.get_frame()
    if val != 'eof' and frame is not None:
        img, t = frame

        # first (and, for rgb24, only) plane of the raw frame data
        data = img.to_bytearray()[0]
        img2 = Image.frombytes("RGB", img.get_size(), bytes(data))

        pixmap = ImageQt.toqpixmap(img2)  # <- returns a QPixmap

The problem is that QPixmap.loadFromData() expects actual image *files* (PNG, JPEG, etc.), while what ffpyplayer (like ffmpeg) provides are raw images: the uncompressed rasterization of each frame, exactly as it will eventually appear.

For various reasons, QPixmap cannot deal directly with raw data, as it is intended for showing images on screen; QImage can, since it is hardware independent.

QImage can load raw image data (meaning that, for example, if the source is compressed, the raw data is the representation of the *uncompressed* image), but it needs to know the pixel format and, obviously, the image width and height.
When you use QPixmap.loadFromData(), what actually happens is that Qt "guesses" the image file format, decompresses it (if necessary) to get the data for each pixel, reads both the size and the pixel format, builds an internal QImage with all that information, and finally returns it as a QPixmap.
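To make the "raw data" point concrete: an rgb24 frame is nothing but a tightly packed buffer of width * height * 3 bytes, so the size and pixel format are the only extra pieces of information needed to interpret it. A minimal sketch with made-up dimensions:

```python
# A raw rgb24 "frame": 3 bytes (R, G, B) per pixel, tightly packed, row by row.
# Hypothetical 4x2 frame: one red pixel at (0, 0), the rest black.
width, height = 4, 2
bytes_per_pixel = 3  # rgb24: one byte each for R, G, B

frame = bytearray(width * height * bytes_per_pixel)
frame[0:3] = b'\xff\x00\x00'  # pixel (0, 0) is pure red

# The buffer length is fully determined by size and pixel format --
# exactly the information QImage asks for alongside the data itself.
assert len(frame) == width * height * bytes_per_pixel

def pixel_offset(x, y):
    """Byte offset of pixel (x, y) in the tightly packed buffer."""
    return (y * width + x) * bytes_per_pixel

print(pixel_offset(1, 1))  # -> 15
```

There is no header, no compression, no metadata in the buffer itself, which is why QPixmap.loadFromData() (a file decoder) cannot consume it but the raw QImage constructor can.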

In this case, you already have all of that data from ffpyplayer, so you can skip the "decoding" step entirely, create a QImage directly, and then get a QPixmap from it.

from ffpyplayer.player import MediaPlayer
from PyQt5 import QtGui

player = MediaPlayer(filename)
val = ''
while val != 'eof':
    frame, val = player.get_frame()
    if val != 'eof' and frame is not None:
        img, t = frame

        data = img.to_bytearray()[0]
        width, height = img.get_size()

        # the technical name for the 'rgb24' default pixel format is RGB888,
        # which is QImage.Format_RGB888 in the QImage format enum;
        # pass the bytes-per-line explicitly, because without it QImage assumes
        # 32-bit aligned scanlines, which only holds when width * 3 is a
        # multiple of 4
        qimage = QtGui.QImage(data, width, height, width * 3,
            QtGui.QImage.Format_RGB888)
        pixmap = QtGui.QPixmap.fromImage(qimage)

Note that the default out_fmt of MediaPlayer is 'rgb24'. I doubt you'll need to change it, but if you do, avoid any YUV-based pixel format, as QImage doesn't support them. It's still possible to convert them (there are some methods), but I wouldn't suggest going that way at all.
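If you want to be explicit about the output format anyway, it can be requested through the ff_opts dict passed to MediaPlayer; 'out_fmt' is the documented key for the output pixel format. A sketch (the player construction itself requires ffpyplayer, so it is shown as a comment):

```python
# Explicitly request rgb24 frames so each frame maps directly onto
# QImage.Format_RGB888, with no YUV conversion needed on the Qt side.
ff_opts = {'out_fmt': 'rgb24'}

# With ffpyplayer installed, the player would then be created as:
# from ffpyplayer.player import MediaPlayer
# player = MediaPlayer(filename, ff_opts=ff_opts)

print(ff_opts['out_fmt'])  # -> rgb24
```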

Finally, consider the aspect ratio, which might differ between the source frame and its intended representation (for example, with anamorphic formats). You can get it from player.get_metadata(), and if it doesn't match the image's own aspect ratio, you can get the correctly proportioned image with this:

pixmap = pixmap.scaled(targetWidth, targetHeight, 
    QtCore.Qt.IgnoreAspectRatio, QtCore.Qt.SmoothTransformation)
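Computing the target size from the metadata boils down to simple arithmetic. A sketch, assuming the display aspect ratio arrives as a (numerator, denominator) tuple (the exact shape of the metadata value is an assumption; check what your ffpyplayer version returns):

```python
# Given the coded frame size and a display aspect ratio as a num/den tuple,
# compute the size the frame should be scaled to for correct display.
def display_size(width, height, aspect_ratio):
    num, den = aspect_ratio
    if den == 0:  # unknown/invalid aspect ratio: keep the coded size
        return width, height
    target_width = round(height * num / den)
    return target_width, height

# A 1440x1080 anamorphic frame meant to be shown at 16:9:
print(display_size(1440, 1080, (16, 9)))  # -> (1920, 1080)
```

The resulting pair is what you would feed to pixmap.scaled() as targetWidth and targetHeight in the snippet above.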
