Raspberry Pi 3: Increasing FPS with a 720p USB Camera
I wrote some code in Python to open a USB camera and grab frames from it. I use the code for an HTTP stream, and the libturbojpeg library for JPEG encoding. I am running a 64-bit OS.
product: Raspberry Pi 3 Model B Rev 1.2
serial: 00000000f9307746
width: 64 bits
capabilities: smp cp15_barrier setend swp
I ran some tests with different resolutions.
Resolution    FPS    Encode time (s)
640 x 480     ~35    ~0.01
1280 x 720    ~17    ~0.028
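A quick back-of-the-envelope check (plain Python, using only the numbers from the table above) shows that encoding alone does not account for the measured frame time; the remainder goes to capture, decode, and Python overhead:

```python
# Per-frame time budget implied by the observed FPS vs. the measured encode time.
measurements = [
    ("640 x 480", 35, 0.010),   # (resolution, observed FPS, encode seconds)
    ("1280 x 720", 17, 0.028),
]

for res, fps, t_encode in measurements:
    t_frame = 1.0 / fps            # total wall time spent per frame
    t_other = t_frame - t_encode   # everything that is not JPEG encoding
    print("%s: frame %.3fs, encode %.3fs, other %.3fs" % (res, t_frame, t_encode, t_other))
```

At 1280 x 720 the "other" share (~0.031 s) is larger than the encode time itself, which hints the bottleneck is elsewhere in the pipeline.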
And this is my code:
import time
import os
import re
import uvc
import numpy as np
from threading import Thread
from turbojpeg import TurboJPEG, TJPF_GRAY, TJSAMP_GRAY

jpeg = TurboJPEG("/opt/libjpeg-turbo/lib64/libturbojpeg.so")
camera = None

class ProcessJPG(Thread):
    def __init__(self, data):
        self.jpeg_data = None
        self.data = data
        super(ProcessJPG, self).__init__()

    def run(self):
        self.jpeg_data = jpeg.encode(self.data)

class GetFrame(Thread):
    def __init__(self):
        self.frame = None
        super(GetFrame, self).__init__()

    def run(self):
        self.frame = camera.get_frame()

dev_list = uvc.device_list()
print("devices: ", dev_list)
camera = uvc.Capture(dev_list[1]['uid'])
camera.frame_size = camera.frame_sizes[2]  # set 1280 x 720
camera.frame_rate = camera.frame_rates[0]  # set 30 fps

_fps = -1
count_to_fps = 0
_real_fps = ""
cfps_time = time.time()

while True:
    if camera:
        t = GetFrame()
        t.start()
        t.join()
        img = t.frame
        timestamp = img.timestamp
        img = img.img

        t_start = time.time()
        t = ProcessJPG(img)
        t.start()
        t.join()
        jpg = t.jpeg_data
        t_end = time.time()
        print(t_end - t_start)

        count_to_fps += 1
        if count_to_fps >= _fps:
            t_to_fps = time.time() - cfps_time
            _real_fps = 1.0 / t_to_fps
            cfps_time = time.time()
            count_to_fps = 0
            print("FPS, ", _real_fps)
The encoding line is: jpeg.encode(self.data)
My question is: is it possible to increase the FPS at 1280 x 720 (e.g. to 30 fps), or should I use a more powerful device? When I watch htop during the computation, the CPU is not at 100%.
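One detail worth noting about the loop above (this is a standalone sketch with a dummy workload, not the original code): starting a thread and immediately calling `join()` gives no concurrency at all, because the main thread blocks until the worker finishes. `GetFrame` and `ProcessJPG` therefore still run strictly one after the other, plus thread start-up overhead, which is consistent with htop never showing full CPU utilisation across cores:

```python
import time
from threading import Thread

def busy_work(n):
    # CPU-bound stand-in for a step such as JPEG encoding.
    total = 0
    for i in range(n):
        total += i * i
    return total

class Worker(Thread):
    def __init__(self, n):
        super().__init__()
        self.n = n
        self.result = None

    def run(self):
        self.result = busy_work(self.n)

# Direct call:
t0 = time.time()
expected = busy_work(200_000)
t_direct = time.time() - t0

# start() immediately followed by join(): same wall time, no overlap.
t0 = time.time()
w = Worker(200_000)
w.start()
w.join()  # blocks here until run() returns
t_threaded = time.time() - t0

print("direct %.4fs, threaded %.4fs" % (t_direct, t_threaded))
```

To actually overlap capture and encode, the threads would need to run concurrently (e.g. a producer/consumer queue) instead of being joined immediately after start.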
EDIT: Camera formats:
[video4linux2,v4l2 @ 0xa705c0] Raw : yuyv422 : YUYV 4:2:2 : 640x480 1280x720 960x544 800x448 640x360 424x240 352x288 320x240 800x600 176x144 160x120 1280x800
[video4linux2,v4l2 @ 0xa705c0] Compressed: mjpeg : Motion-JPEG : 640x480 1280x720 960x544 800x448 640x360 800x600 416x240 352x288 176x144 320x240 160x120
It is possible, and you don't need more powerful hardware.
From the pyuvc README.md:

* Capture instance will always grab mjpeg compressed frames from cameras.

When your code accesses the .img property, that invokes jpeg2yuv (see here and here). Then you are re-encoding with jpeg_encode(). Try using frame.jpeg_buffer after the capture and don't touch .img at all.
I took a look at pyuvc on an RPi 2 with a Logitech C310 and made a simplified example:
import uvc
import time

def main():
    dev_list = uvc.device_list()
    cap = uvc.Capture(dev_list[0]["uid"])
    cap.frame_mode = (1280, 720, 30)
    tlast = time.time()
    for x in range(100):
        frame = cap.get_frame_robust()
        jpeg = frame.jpeg_buffer
        print("%s (%d bytes)" % (type(jpeg), len(jpeg)))
        #img = frame.img
        tnow = time.time()
        print("%.3f" % (tnow - tlast))
        tlast = tnow
    cap = None

main()
I get ~0.033 s per frame, which works out to ~30 fps at ~8% CPU. If I uncomment the #img = frame.img line, it goes up to ~0.054 s/frame, or ~18 fps at 99% CPU (the decode time limits the capture rate).
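Since the original code feeds an HTTP stream, the MJPEG bytes from frame.jpeg_buffer can be pushed to clients directly, with no decode or re-encode at all. A minimal sketch (the boundary name and the mjpeg_part helper are my own illustration, not part of pyuvc) of wrapping each frame as one part of a multipart/x-mixed-replace response:

```python
BOUNDARY = b"frame"

def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG frame as a single part of a multipart/x-mixed-replace stream."""
    return (
        b"--" + BOUNDARY + b"\r\n"
        + b"Content-Type: image/jpeg\r\n"
        + b"Content-Length: " + str(len(jpeg_bytes)).encode("ascii") + b"\r\n"
        + b"\r\n"
        + jpeg_bytes
        + b"\r\n"
    )

# With pyuvc this would be, roughly (needs a camera, so not run here):
#   frame = cap.get_frame_robust()
#   conn.sendall(mjpeg_part(bytes(frame.jpeg_buffer)))

# Demonstrate with a dummy payload (JPEG start/end markers only):
part = mjpeg_part(b"\xff\xd8\xff\xd9")
print(len(part))
```

The HTTP response itself would declare `Content-Type: multipart/x-mixed-replace; boundary=frame` once, then send one such part per captured frame.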