
How to play streaming audio using pyglet?

The goal of this question is to figure out how to play streaming audio using pyglet. The first step is to make sure you can play mp3 files with pyglet at all, which is the purpose of this first snippet:

import sys
import inspect
import requests

import pyglet
from pyglet.media import *

pyglet.lib.load_library('avbin')
pyglet.have_avbin = True


def url_to_filename(url):
    return url.split('/')[-1]


def download_file(url, filename=None):
    filename = filename or url_to_filename(url)

    with open(filename, "wb") as f:
        print("Downloading %s" % filename)
        response = requests.get(url, stream=True)
        total_length = response.headers.get('content-length')

        if total_length is None:
            f.write(response.content)
        else:
            dl = 0
            total_length = int(total_length)
            for data in response.iter_content(chunk_size=4096):
                dl += len(data)
                f.write(data)
                done = int(50 * dl / total_length)
                sys.stdout.write("\r[%s%s]" % ('=' * done, ' ' * (50 - done)))
                sys.stdout.flush()


url = "https://freemusicarchive.org/file/music/ccCommunity/DASK/Abiogenesis/DASK_-_08_-_Protocell.mp3"
filename = "mcve.mp3"
download_file(url, filename)

music = pyglet.media.load(filename)
music.play()
pyglet.app.run()

If you've got the libraries installed (pip install pyglet requests) and have also installed AVBin at this point, you should be able to listen to the mp3 once it has been downloaded.

Once we've got that far, I'd like to figure out how to play and buffer the file in a way similar to most existing web video/audio players, using pyglet + requests. That means playing the file without having to wait for it to be fully downloaded.

Reading the pyglet media documentation, you can see that these classes are available:

media
    sources
        base
            AudioData
            AudioFormat
            Source
            SourceGroup
            SourceInfo
            StaticSource
            StreamingSource
            VideoFormat
    player
        Player
        PlayerGroup

I've seen that there are other similar SO questions, but they haven't been properly addressed and their content doesn't provide many relevant details.

That's why I've created this new question. How do you play streaming audio using pyglet? Could you provide some examples using the MCVE above as a basis?

Assuming you don't want to import a new package to do this for you, it can be done, with some effort.

First, let's head over to the pyglet source code and look at media.load in media/__init__.py:

"""Load a Source from a file.

All decoders that are registered for the filename extension are tried.
If none succeed, the exception from the first decoder is raised.
You can also specifically pass a decoder to use.

:Parameters:
    `filename` : str
        Used to guess the media format, and to load the file if `file` is
        unspecified.
    `file` : file-like object or None
        Source of media data in any supported format.
    `streaming` : bool
        If `False`, a :class:`StaticSource` will be returned; otherwise
        (default) a :class:`~pyglet.media.StreamingSource` is created.
    `decoder` : MediaDecoder or None
        A specific decoder you wish to use, rather than relying on
        automatic detection. If specified, no other decoders are tried.

:rtype: StreamingSource or Source
"""
if decoder:
    return decoder.decode(file, filename, streaming)
else:
    first_exception = None
    for decoder in get_decoders(filename):
        try:
            loaded_source = decoder.decode(file, filename, streaming)
            return loaded_source
        except MediaDecodeException as e:
            if not first_exception or first_exception.exception_priority < e.exception_priority:
                first_exception = e

    # TODO: Review this:
    # The FFmpeg codec attempts to decode anything, so this codepath won't be reached.
    if not first_exception:
        raise MediaDecodeException('No decoders are available for this media format.')
    raise first_exception


add_default_media_codecs()

The key line here is loaded_source = decoder.decode(...). Essentially, to load audio pyglet takes a file and hands it over to a media decoder (e.g. FFmpeg), which returns a list of 'frames', or packets, that pyglet can play with the built-in Player class. If the audio format is compressed (e.g. mp3 or aac), pyglet will use an external library (currently only AVBin is supported) to convert it to raw, decompressed audio. You probably already know some of this.
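
To make that flow concrete, here is a minimal sketch (using the mcve.mp3 downloaded by the snippet above) of what music.play() in the question expands to: load() picks a decoder from the extension and returns a StreamingSource, and a Player queues and plays it.

import pyglet

# load() chooses a decoder based on the filename extension and returns a
# StreamingSource by default (streaming=True).
source = pyglet.media.load("mcve.mp3", streaming=True)

# source.play() is shorthand for creating a Player, queueing the source on it
# and starting playback; spelled out explicitly it looks like this:
player = pyglet.media.Player()
player.queue(source)
player.play()

pyglet.app.run()  # keep the event loop alive so the audio keeps playing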

So, if we want to see how to feed a stream of bytes into pyglet's audio engine instead of a file, we need to look at one of the decoders. For this example, let's use FFmpeg, as it's the easiest one to get at.
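
As a quick aside, you can ask the codec registry which decoders would be tried for a given filename. This is a small sketch assuming the pyglet version quoted above, where get_decoders lives in pyglet.media.codecs; the import path may differ in other versions.

import pyglet.media  # importing the module registers the default codecs
from pyglet.media.codecs import get_decoders

# Decoders registered for the .mp3 extension, in the order load() tries them.
for decoder in get_decoders("mcve.mp3"):
    print(decoder, decoder.get_file_extensions())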

media/codecs/ffmpeg.py

class FFmpegDecoder(object):

    def get_file_extensions(self):
        return ['.mp3', '.ogg']

    def decode(self, file, filename, streaming):
        if streaming:
            return FFmpegSource(filename, file)
        else:
            return StaticSource(FFmpegSource(filename, file))

The 'object' it inherits from is MediaDecoder, found in media/codecs/__init__.py. Going back to the load function in media/__init__.py, you'll see that pyglet picks a MediaDecoder based on the file extension and then returns the result of its decode function, with the file as an argument, to get the audio in the form of a packet stream. That packet stream is a Source object; each decoder has its own flavour, in the form of either a StaticSource or a StreamingSource. The former is used to hold audio in memory, the latter to play it immediately. FFmpeg's decoder only provides a StreamingSource natively (as the snippet above shows, the StaticSource branch just decodes one of those fully into memory).
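
Here is a short sketch of that difference, again with the mcve.mp3 from the question:

import pyglet

# StreamingSource (the default): decoded on the fly while it plays, and can
# only be queued on one player at a time.
streamed = pyglet.media.load("mcve.mp3")  # streaming=True by default

# StaticSource: fully decoded into memory up front, so it can be played many
# times and by several players at once (what you'd use for short sound effects).
static = pyglet.media.load("mcve.mp3", streaming=False)

streamed.play()
pyglet.app.run()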

We can see that FFmpeg's is FFmpegSource, also located in media/codecs/ffmpeg.py. There we find this goliath of a class:

class FFmpegSource(StreamingSource):
# Max increase/decrease of original sample size
SAMPLE_CORRECTION_PERCENT_MAX = 10

def __init__(self, filename, file=None):
    if file is not None:
        raise NotImplementedError('Loading from file stream is not supported')

    self._file = ffmpeg_open_filename(asbytes_filename(filename))
    if not self._file:
        raise FFmpegException('Could not open "{0}"'.format(filename))

    self._video_stream = None
    self._video_stream_index = None
    self._audio_stream = None
    self._audio_stream_index = None
    self._audio_format = None

    self.img_convert_ctx = POINTER(SwsContext)()
    self.audio_convert_ctx = POINTER(SwrContext)()

    file_info = ffmpeg_file_info(self._file)

    self.info = SourceInfo()
    self.info.title = file_info.title
    self.info.author = file_info.author
    self.info.copyright = file_info.copyright
    self.info.comment = file_info.comment
    self.info.album = file_info.album
    self.info.year = file_info.year
    self.info.track = file_info.track
    self.info.genre = file_info.genre

    # Pick the first video and audio streams found, ignore others.
    for i in range(file_info.n_streams):
        info = ffmpeg_stream_info(self._file, i)

        if isinstance(info, StreamVideoInfo) and self._video_stream is None:

            stream = ffmpeg_open_stream(self._file, i)

            self.video_format = VideoFormat(
                width=info.width,
                height=info.height)
            if info.sample_aspect_num != 0:
                self.video_format.sample_aspect = (
                    float(info.sample_aspect_num) /
                    info.sample_aspect_den)
            self.video_format.frame_rate = (
                float(info.frame_rate_num) /
                info.frame_rate_den)
            self._video_stream = stream
            self._video_stream_index = i

        elif (isinstance(info, StreamAudioInfo) and
                      info.sample_bits in (8, 16) and
                      self._audio_stream is None):

            stream = ffmpeg_open_stream(self._file, i)

            self.audio_format = AudioFormat(
                channels=min(2, info.channels),
                sample_size=info.sample_bits,
                sample_rate=info.sample_rate)
            self._audio_stream = stream
            self._audio_stream_index = i

            channel_input = avutil.av_get_default_channel_layout(info.channels)
            channels_out = min(2, info.channels)
            channel_output = avutil.av_get_default_channel_layout(channels_out)

            sample_rate = stream.codec_context.contents.sample_rate
            sample_format = stream.codec_context.contents.sample_fmt
            if sample_format in (AV_SAMPLE_FMT_U8, AV_SAMPLE_FMT_U8P):
                self.tgt_format = AV_SAMPLE_FMT_U8
            elif sample_format in (AV_SAMPLE_FMT_S16, AV_SAMPLE_FMT_S16P):
                self.tgt_format = AV_SAMPLE_FMT_S16
            elif sample_format in (AV_SAMPLE_FMT_S32, AV_SAMPLE_FMT_S32P):
                self.tgt_format = AV_SAMPLE_FMT_S32
            elif sample_format in (AV_SAMPLE_FMT_FLT, AV_SAMPLE_FMT_FLTP):
                self.tgt_format = AV_SAMPLE_FMT_S16
            else:
                raise FFmpegException('Audio format not supported.')

            self.audio_convert_ctx = swresample.swr_alloc_set_opts(None,
                                                                   channel_output,
                                                                   self.tgt_format, sample_rate,
                                                                   channel_input, sample_format,
                                                                   sample_rate,
                                                                   0, None)
            if (not self.audio_convert_ctx or
                        swresample.swr_init(self.audio_convert_ctx) < 0):
                swresample.swr_free(self.audio_convert_ctx)
                raise FFmpegException('Cannot create sample rate converter.')

    self._packet = ffmpeg_init_packet()
    self._events = []  # They don't seem to be used!

    self.audioq = deque()
    # Make queue big enough to accomodate 1.2 sec?
    self._max_len_audioq = 50  # Need to figure out a correct amount
    if self.audio_format:
        # Buffer 1 sec worth of audio
        self._audio_buffer = \
            (c_uint8 * ffmpeg_get_audio_buffer_size(self.audio_format))()

    self.videoq = deque()
    self._max_len_videoq = 50  # Need to figure out a correct amount

    self.start_time = self._get_start_time()
    self._duration = timestamp_from_ffmpeg(file_info.duration)
    self._duration -= self.start_time

    # Flag to determine if the _fillq method was already scheduled
    self._fillq_scheduled = False
    self._fillq()
    # Don't understand why, but some files show that seeking without
    # reading the first few packets results in a seeking where we lose
    # many packets at the beginning. 
    # We only seek back to 0 for media which have a start_time > 0
    if self.start_time > 0:
        self.seek(0.0)
---
[A few hundred lines more...]
---

def get_next_video_timestamp(self):
    if not self.video_format:
        return

    if self.videoq:
        while True:
            # We skip video packets which are not video frames
            # This happens in mkv files for the first few frames.
            video_packet = self.videoq[0]
            if video_packet.image == 0:
                self._decode_video_packet(video_packet)
            if video_packet.image is not None:
                break
            self._get_video_packet()

        ts = video_packet.timestamp
    else:
        ts = None

    if _debug:
        print('Next video timestamp is', ts)
    return ts

def get_next_video_frame(self, skip_empty_frame=True):
    if not self.video_format:
        return

    while True:
        # We skip video packets which are not video frames
        # This happens in mkv files for the first few frames.
        video_packet = self._get_video_packet()
        if video_packet.image == 0:
            self._decode_video_packet(video_packet)
        if video_packet.image is not None or not skip_empty_frame:
            break

    if _debug:
        print('Returning', video_packet)

    return video_packet.image

def _get_start_time(self):
    def streams():
        format_context = self._file.context
        for idx in (self._video_stream_index, self._audio_stream_index):
            if idx is None:
                continue
            stream = format_context.contents.streams[idx].contents
            yield stream

    def start_times(streams):
        yield 0
        for stream in streams:
            start = stream.start_time
            if start == AV_NOPTS_VALUE:
                yield 0
            start_time = avutil.av_rescale_q(start,
                                             stream.time_base,
                                             AV_TIME_BASE_Q)
            start_time = timestamp_from_ffmpeg(start_time)
            yield start_time

    return max(start_times(streams()))

@property
def audio_format(self):
    return self._audio_format

@audio_format.setter
def audio_format(self, value):
    self._audio_format = value
    if value is None:
        self.audioq.clear()

The line you're interested in here is self._file = ffmpeg_open_filename(asbytes_filename(filename)). This, once again in media/codecs/ffmpeg.py, brings us here:

def ffmpeg_open_filename(filename):
    """Open the media file.

    :rtype: FFmpegFile
    :return: The structure containing all the information for the media.
    """
    file = FFmpegFile()  # TODO: delete this structure and use directly AVFormatContext
    result = avformat.avformat_open_input(byref(file.context),
                                          filename,
                                          None,
                                          None)
    if result != 0:
        raise FFmpegException('Error opening file ' + filename.decode("utf8"))

    result = avformat.avformat_find_stream_info(file.context, None)
    if result < 0:
        raise FFmpegException('Could not find stream info')

    return file

This is where things get messy: it calls a ctypes function (avformat_open_input) which, given a filename, fetches the file's details and fills in all the information the FFmpegSource class needs. With some work, you should be able to get avformat_open_input to take a bytes object instead of the path of a file that it would open to get the same information. I'd love to do that and include a working example, but I don't have the time right now. You'd then need to make a new ffmpeg_open_filename function that uses the new avformat_open_input function, and then a new FFmpegSource class that uses the new ffmpeg_open_filename function. All you'd need after that is a new FFmpegDecoder class that uses the new FFmpegSource class.
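
To see concretely why that work is needed, here is a small sketch (assuming the FFmpeg-backed pyglet build quoted above and the mcve.mp3 downloaded earlier) of what happens today if you try the obvious shortcut of handing load() a file-like object instead of a filename:

import io
import pyglet

# Pretend these bytes arrived over the network instead of from disk.
buf = io.BytesIO(open("mcve.mp3", "rb").read())

try:
    # The filename here is only used to guess the format. load() accepts a
    # file-like object, but FFmpegSource.__init__ rejects it (see the snippet
    # above), so this route is a dead end for mp3 right now.
    source = pyglet.media.load("stream.mp3", file=buf)
except NotImplementedError as exc:
    print("Not supported by the FFmpeg source:", exc)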

You could then implement it by adding it directly into your pyglet package. After that, you'd want to add support for a bytes-object argument in the load() function (located in media/__init__.py) and override the decoder with the new one. With that in place, you could stream the audio without ever saving it.


Alternatively, you can simply use a package that already supports this. Python-vlc does: you can use one of its examples to play whatever audio you want from a link. If you're not doing this just for the challenge, I'd strongly suggest you use the other package. Otherwise: good luck.
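
For completeness, here is a minimal python-vlc sketch (pip install python-vlc, with VLC itself installed); the URL is the one from the question, and VLC does the buffering for you:

import time
import vlc

url = "https://freemusicarchive.org/file/music/ccCommunity/DASK/Abiogenesis/DASK_-_08_-_Protocell.mp3"

player = vlc.MediaPlayer(url)  # accepts a local path or any MRL, including http(s)
player.play()                  # playback starts asynchronously

# Keep the script alive while VLC streams and plays the file.
while player.get_state() not in (vlc.State.Ended, vlc.State.Error):
    time.sleep(0.5)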
