
Python OpenCV and Sockets - Streaming video encoded in h264

So I'm trying to make a streamer that transmits video from one computer to another (or, for now, the same one) over my LAN. I need it to use as little bandwidth as possible, so I'm trying to encode in h264. I'm having trouble doing this and don't really know where to start. Right now it encodes in jpg and sends frame by frame, but I know this is very inefficient and consumes a lot of bandwidth. Here is my current receiver code:

import cv2
import socket
import _pickle
import time

host = "192.168.1.196"
port = 25565
boo = True

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # create a TCP/IPv4 socket
s.bind((host, port))  # bind the socket to this host & port
s.listen(10)  # start listening; 10 is the backlog of queued connections, not a data size

conn, addr = s.accept()
buf = ''
while boo:
    pictures = conn.recv(128000)  # read up to 128000 bytes from the socket (may not be exactly one frame)
    decoded = _pickle.loads(pictures)  # unpickle back into an encoded-image buffer
    frame = cv2.imdecode(decoded, cv2.IMREAD_COLOR)  # decode the buffer into a displayable frame
    cv2.imshow("recv", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # quit when the q key is pressed
        break

Here is my current client (sender) code:

import cv2
import numpy as np
import socket
import _pickle

host = "192.168.1.196"
port = 25565

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # declares s object with two parameters
s.connect((host, port))  # connects to the host & port
cap = cv2.VideoCapture(1)
fourcc = cv2.VideoWriter_fourcc(*"H264")  # modern replacement for cv2.cv.CV_FOURCC; only used by VideoWriter, not imencode
while cap.isOpened(): # while camera is being used
    ret, frame = cap.read()  # reads each frame from webcam
    cv2.imshow("client", frame)
    if ret:
        encoded = _pickle.dumps(cv2.imencode(".jpg", frame)[1])  # pickle each JPEG-encoded frame; this sends pictures one by one, not a real video stream
        s.sendall(encoded)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # quit when the q key is pressed
        break
cap.release()
cv2.destroyAllWindows()

I just need some help with how to encode the video in h264 and decode it on the other side.

You can do this with pyzmq and a publish/subscribe pattern, encoding/decoding the frames as base64 strings. On the server side, the idea is:

  • Grab a frame from the camera stream
  • Encode the frame into an in-memory buffer with cv2.imencode
  • Convert the ndarray to a str with base64 and send it over the socket

On the client side, we simply reverse the process:

  • Read the image string from the socket
  • Convert the str to bytes with base64
  • Convert the bytes back to an ndarray with np.frombuffer + cv2.imdecode
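The conversion steps above can be sanity-checked without a camera or sockets. A small sketch (needs only numpy; the final `cv2.imdecode` step is left as a comment so the example stays self-contained — `fake_jpeg` is a stand-in buffer, not real JPEG data):

```python
import base64
import numpy as np

# Stand-in for the buffer that cv2.imencode would produce on the server.
fake_jpeg = np.arange(16, dtype=np.uint8)

# Server side: ndarray -> base64 bytes sent over the socket.
wire = base64.b64encode(fake_jpeg.tobytes())

# Client side: base64 string -> raw bytes -> uint8 ndarray.
raw = base64.b64decode(wire)
restored = np.frombuffer(raw, dtype=np.uint8)

assert np.array_equal(restored, fake_jpeg)
# For a real JPEG buffer, cv2.imdecode(restored, 1) would then
# turn this array back into a displayable frame.
```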

This method shouldn't use too much bandwidth, since it only sends strings across the socket.
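One caveat worth measuring: base64 produces 4 output bytes for every 3 input bytes, so it inflates each frame by about a third compared to sending the raw JPEG buffer. A quick standard-library check (the 30 000-byte size is just an illustrative small-JPEG figure):

```python
import base64

jpeg_size = 30_000  # illustrative size of one JPEG-encoded frame
encoded_size = len(base64.b64encode(b"\x00" * jpeg_size))

print(encoded_size)              # 40000 -> 4 bytes out per 3 bytes in
print(encoded_size / jpeg_size)  # overhead factor of about 1.33
```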


Server

import base64
import cv2
import zmq

context = zmq.Context()
socket = context.socket(zmq.PUB)  # publisher side
socket.connect('tcp://localhost:7777')  # the subscriber binds, so the publisher connects

camera = cv2.VideoCapture(0)

while True:
    try:
        ret, frame = camera.read()
        frame = cv2.resize(frame, (640, 480))
        encoded, buf = cv2.imencode('.jpg', frame)
        image = base64.b64encode(buf)
        socket.send(image)
    except KeyboardInterrupt:
        camera.release()
        cv2.destroyAllWindows()
        break

Client

import cv2
import zmq
import base64
import numpy as np

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.bind('tcp://*:7777')
socket.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to every topic; np.unicode was removed in NumPy 2.0

while True:
    try:
        image_string = socket.recv_string()
        raw_image = base64.b64decode(image_string)
        image = np.frombuffer(raw_image, dtype=np.uint8)
        frame = cv2.imdecode(image, 1)
        cv2.imshow("frame", frame)
        cv2.waitKey(1)
    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        break
