PySide/python video player issue

I am trying to write a simple YUV video player in Python. After some initial study, I thought I could use PySide and started with it. As a first step, I have taken the following approach, without consideration for real-time performance: read a YUV buffer (420 planar) -> convert the YUV image to RGB (32-bit format) -> call PySide utilities for display. The basic problem with my simple program is that only the first frame is displayed and the rest are not, even though the paint event seems to be happening according to the counter in the code below. I would appreciate any comments to help me understand (i) any mistakes or lack of understanding on my side regarding painting/repainting at regular intervals on a QLabel/QWidget, and (ii) any pointers to Python-based video players/displays working from a YUV or RGB source.

#!/usr/bin/python

import sys
from PySide.QtCore import *
from PySide.QtGui import *
import array
import numpy as np

class VideoWin(QWidget):
    def __init__(self, width, height, f_yuv):
        QWidget.__init__(self)
        self.width = width
        self.height = height
        self.f_yuv = f_yuv
        self.setWindowTitle('Video Window')
        self.setGeometry(10, 10, width, height)
        self.display_counter = 0
        self.img = QImage(width, height, QImage.Format_ARGB32)
        #qApp.processEvents()

    def getImageBuf(self):
        return self.img.bits()

    def paintEvent(self, e):
        painter = QPainter(self)
        self.display_counter += 1
        painter.drawImage(QPoint(0, 0), self.img)
    def timerSlot(self):
        print "In timer"
        yuv = array.array('B')
        pix = np.ndarray(shape=(height, width), dtype=np.uint32, buffer=self.getImageBuf())

        for i in range(0,self.height):
            for j in range(0, self.width):
                pix[i, j] = 0

        for k in range (0, 10):
            #qApp.processEvents()
            yuv.fromfile(self.f_yuv, 3*self.width*self.height/2)
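            # YUV420 planar layout: width*height Y bytes, then width*height/4
            # U bytes, then width*height/4 V bytes (one U/V sample per 2x2 block)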
            for i in range(0, self.height):
                for j in range(0, self.width):
                    Y_val = yuv[(i*self.width)+j]
                    U_val = yuv[self.width*self.height + ((i/2)*(self.width/2))+(j/2)]
                    V_val = yuv[self.width*self.height + self.width*self.height/4 + ((i/2)*(self.width/2))+(j/2)]
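                    # Integer approximation of the BT.601 (studio-swing) YUV->RGB conversion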
                    C = Y_val - 16
                    D = U_val - 128
                    E = V_val - 128
                    R = (( 298 * C           + 409 * E + 128) >> 8)
                    G = (( 298 * C - 100 * D - 208 * E + 128) >> 8)
                    B = (( 298 * C + 516 * D           + 128) >> 8)
                    if R > 255:
                        R = 255
                    if G > 255:
                        G = 255
                    if B > 255:
                        B = 255

                    assert(int(R) < 256)
                    pix[i, j] = (255 << 24 | ((int(R) % 256 )<< 16) | ((int(G) % 256 ) << 8) | (int(B) % 256))

            self.repaint()
            print "videowin.display_counter = %d" % videowin.display_counter


if __name__ == "__main__":
    try:
        yuv_file_name = sys.argv[1]
        width = int(sys.argv[2])
        height = int(sys.argv[3])
        f_yuv = open(yuv_file_name, "rb")

        videoApp = QApplication(sys.argv)

        videowin = VideoWin(width, height, f_yuv)

        timer = QTimer()
        timer.singleShot(100, videowin.timerSlot)

        videowin.show()
        videoApp.exec_()


        sys.exit(0)
    except NameError:
        print("Name Error : ", sys.exc_info()[1])
    except SystemExit:
        print("Closing Window...")
    except Exception:
        print(sys.exc_info()[1])

I have tried a second approach, in which I create a Signal object that "emits" each decoded RGB image (converted from YUV) as a signal, which is caught by the "updateFrame" method of the displaying class; that method displays the received RGB buffer/frame using the QPainter.drawImage(...) method: YUV-to-RGB decode ---> Signal(image buffer) ---> updateFrame ---> QPainter.drawImage(...). This also displays only the first image, although the slot that catches the signal (and gets the image) shows that it is called as many times as the signal is sent by the YUV->RGB converter/decoder. I have also tried running the YUV->RGB converter and the video display (calling drawImage) in separate threads, but the result is the same.
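In essence, the wiring I tried looks like this minimal sketch (the class and attribute names here are only illustrative, not my exact code):

    from PySide.QtCore import QObject, Signal
    from PySide.QtGui import QImage, QLabel, QPixmap

    class Decoder(QObject):
        # Emitted once per decoded RGB frame
        frame_ready = Signal(QImage)

    class Viewer(QLabel):
        # Slot: repaint the label with the newly received frame
        def updateFrame(self, image):
            self.setPixmap(QPixmap.fromImage(image))

    # decoder = Decoder(); viewer = Viewer()
    # decoder.frame_ready.connect(viewer.updateFrame)
    # decoder.frame_ready.emit(some_qimage)   # triggers Viewer.updateFrame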

Please note that in both cases I am writing the RGB pixel values directly into the bit buffer of the QImage object that is part of the VideoWin class in the code shown (note the code line pix = np.ndarray(shape=(height, width), dtype=np.uint32, buffer=videowin.getImageBuf()), which wraps the img.bits() buffer of the QImage). Also, for this test I am decoding and displaying only the first 10 frames of the video file. Versions: Python 2.7, Qt 4.8.5 with PySide.
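For clarity, this is roughly how the QImage bit buffer is wrapped and written to (a small sketch with arbitrary example dimensions, not my exact code):

    import numpy as np
    from PySide.QtGui import QImage

    width, height = 176, 144                          # example QCIF dimensions
    img = QImage(width, height, QImage.Format_ARGB32)

    # One uint32 per pixel; Format_ARGB32 packs the channels as 0xAARRGGBB
    pix = np.ndarray(shape=(height, width), dtype=np.uint32, buffer=img.bits())
    pix[0, 0] = (255 << 24) | (200 << 16) | (100 << 8) | 50   # opaque pixel, R=200 G=100 B=50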

From the docs for array.fromfile():

Read n items (as machine values) from the file object f and append them to the end of the array. [emphasis added]

The example code does not include an offset into the array, and so the first frame is read over and over again. A simple fix would be to clear the array before reading the next frame:

    for k in range (0, 100):
        del yuv[:]
        yuv.fromfile(self.f_yuv, 3*self.width*self.height/2)
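
To see why the clearing matters, here is a small standalone check (using a hypothetical test.yuv and QCIF dimensions) showing that fromfile() keeps appending, so index 0 always points into the first frame that was read:

    import array

    frame_size = 176 * 144 * 3 / 2        # bytes per YUV420 frame (QCIF example)
    yuv = array.array('B')
    with open("test.yuv", "rb") as f:     # hypothetical input file
        for k in range(3):
            yuv.fromfile(f, frame_size)
            # Grows to 1x, 2x, 3x frame_size instead of staying at frame_size
            print k, len(yuv)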

And note that, to see a difference, you will need to read at least sixty frames of the test file you linked to, because the first fifty or so are all the same (i.e. a plain green background).

I have got this working based on some modifications (and extensions) to the program suggested in Displaying a video stream in QLabel with PySide. I have added a double-buffering mechanism between processing and display, used an array to read in the YUV file, and finally run the Yuv2Rgb conversion as a separate thread. This works for me, i.e. it displays all the frames in the file sequentially. Here is the program, open to any suggestions and improvements. Thanks for all your pointers so far! Please note that this is not running in real time!

#!/usr/bin/python

import sys
import time
from threading import Thread
from PySide.QtCore import *
from PySide.QtGui import *
from PIL import Image
import array
import struct
import numpy as np


class VideoDisplay(QLabel):
    def __init__(self):
        super(VideoDisplay, self).__init__()
        self.disp_counter = 0

    def updateFrame(self, image):
        self.disp_counter += 1
        self.setPixmap(QPixmap.fromImage(image))


class YuvVideoPlayer(QWidget):
    video_signal = Signal(QImage)
    video_display = None

    def __init__(self, f_yuv, width, height):
        super(YuvVideoPlayer, self).__init__()
        print "Setting up YuvVideoPlayer params"
        self.img = {}
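        # Two QImage buffers, alternated via k % 2 in Yuv2Rgb(), so one frame
        # can be displayed while the next one is being written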
        self.img[0] = QImage(width, height, QImage.Format_ARGB32)
        self.img[1] = QImage(width, height, QImage.Format_ARGB32)
        self.video_display = VideoDisplay()
        self.video_signal.connect(self.video_display.updateFrame)
        grid = QGridLayout()
        grid.setSpacing(10)
        grid.addWidget(self.video_display, 0, 0)
        self.setLayout(grid)
        self.setGeometry(0, 0, width, height)
        self.setMinimumSize(width, height)
        self.setMaximumSize(width, height)
        self.setWindowTitle('Control Center')
        print "Creating display thread"
        thYuv2Rgb = Thread(target=self.Yuv2Rgb, args=(f_yuv, width, height))
        print "Starting display thread"
        thYuv2Rgb.start()
        self.show()


    def Yuv2Rgb(self, f_yuv, width, height):
        '''This function gets called by an external thread'''
        try:
            yuv = array.array('B')
            pix = {}
            pix[0] = np.ndarray(shape=(height, width), dtype=np.uint32, buffer=self.img[0].bits())
            pix[1] = np.ndarray(shape=(height, width), dtype=np.uint32, buffer=self.img[1].bits())
            for i in range(0,height):
                for j in range(0, width):
                    pix[0][i, j] = 0
                    pix[1][i, j] = 0

            for k in range (0, 10):
                yuv.fromfile(f_yuv, 3*width*height/2)
                #y = yuv[0:width*height]
                for i in range(0, height):
                    for j in range(0, width):
                        Y_val = yuv[(i*width)+j]
                        U_val = yuv[width*height + ((i/2)*(width/2))+(j/2)]
                        V_val = yuv[width*height + width*height/4 + ((i/2)*(width/2))+(j/2)]

                        C = Y_val - 16
                        D = U_val - 128
                        E = V_val - 128
                        R = (( 298 * C           + 409 * E + 128) >> 8)
                        G = (( 298 * C - 100 * D - 208 * E + 128) >> 8)
                        B = (( 298 * C + 516 * D           + 128) >> 8)
                        if R > 255:
                            R = 255
                        if G > 255:
                            G = 255
                        if B > 255:
                            B = 255

                        pix[k % 2][i, j] = (255 << 24 | ((int(R) % 256 )<< 16) | ((int(G) % 256 ) << 8) | (int(B) % 256))
                self.video_signal.emit(self.img[k % 2])
                print "Complted pic num %r, disp_counter = %r" % (k, self.video_display.disp_counter)
                del yuv[:]


        except Exception, e:
            print(e)

if __name__ == "__main__":
    print "In Main"
    yuv_file_name = sys.argv[1]
    width = int(sys.argv[2])
    height = int(sys.argv[3])
    f_yuv = open(yuv_file_name, "rb")

    app = QApplication(sys.argv)
    print "Creating YuvVideoPlayer object"
    ex = YuvVideoPlayer(f_yuv, width, height)
    #ex.up_Video_callback(f_yuv, width, height)
    app.exec_()

    sys.exit(0)
