
FFMPEG Reading audio from memory doesn't work

My program crashes when I try to instantiate this struct:

struct MemoryAVFormat {
    MemoryAVFormat(const MemoryAVFormat &) = delete;

    AVFormatContext *ctx;
    AVIOContext *ioCtx;

    MemoryAVFormat(char *audio, size_t audio_length) :
            ctx(avformat_alloc_context()),
            ioCtx(create_audio_buffer_io_context(audio, audio_length)) {

        if (ctx == nullptr)
            throw audio_processing_exception("Failed to allocate context");

        if (ioCtx == nullptr)
            throw audio_processing_exception("Failed to allocate IO context for audio buffer");

        ctx->pb = ioCtx;
        ctx->flags |= AVFMT_FLAG_CUSTOM_IO;

        int err = avformat_open_input(&ctx, "nullptr", NULL, NULL);
        if (err != 0)
            throwAvError("Error configuring context from audio buffer", err);
    }

    AVIOContext *create_audio_buffer_io_context(char *audio, size_t audio_length) const {
        return avio_alloc_context(reinterpret_cast<unsigned char *>(audio),
                                  audio_length,
                                  0,
                                  audio,
                                  [](void *, uint8_t *, int buf_size) { return buf_size; },
                                  NULL,
                                  NULL);
    }

    ~MemoryAVFormat() {
        av_free(ioCtx);
        avformat_close_input(&ctx);
    }
};

I have read and tried every tutorial on doing this, and none of them work.

Has anyone gotten this working before?

It crashes on the line: int err = avformat_open_input(&ctx, "nullptr", NULL, NULL);

The avio_alloc_context() documentation specifies that the buffer parameter should be allocated with av_malloc(), that it will be freed on AVIOContext destruction, and that it may be reallocated at any time:

 * @param buffer Memory block for input/output operations via AVIOContext.
 *        The buffer must be allocated with av_malloc() and friends.
 *        It may be freed and replaced with a new buffer by libavformat.
 *        AVIOContext.buffer holds the buffer currently in use,
 *        which must be later freed with av_free().

In your code sample you omit the details of how the audio buffer is allocated, but I assume it does not satisfy these requirements, so the crash occurs when FFmpeg tries to free or reallocate the audio buffer.
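
To illustrate just that ownership rule in isolation (a minimal sketch, not taken from your code; the helper name is made up), the working buffer handed to avio_alloc_context() has to come from av_malloc(), because libavformat may later free or replace it and the final AVIOContext.buffer must be released with av_free():

extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/mem.h>
}

// Minimal sketch of the documented ownership rule: the working buffer passed
// to avio_alloc_context() must come from av_malloc(), since libavformat may
// free or replace it behind your back.
AVIOContext *alloc_io_context_sketch(void *opaque,
                                     int (*read_cb)(void *, uint8_t *, int)) {
    const int buf_size = 4096;  // arbitrary working-buffer size for this sketch
    unsigned char *buf = static_cast<unsigned char *>(av_malloc(buf_size));
    if (buf == nullptr)
        return nullptr;
    // From here on libavformat owns `buf`; whatever ends up in
    // AVIOContext.buffer must eventually be released with av_free().
    return avio_alloc_context(buf, buf_size, /*write_flag=*/0,
                              opaque, read_cb, /*write_packet=*/NULL, /*seek=*/NULL);
}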

My guess is that passing the entire audio file content as an externally allocated buffer is not how AVIOContext is supposed to be used - this API is really meant to work with a temporary working buffer for streaming data from somewhere else (a file, the web, or another memory buffer).

I don't have a complete example at hand to check whether it works as expected, but the code might look something like this (you may need to adjust the read() function and consider implementing the seek procedure):

// Headers assumed by this sketch: <algorithm> for std::min, <cstring> for
// memcpy, and the FFmpeg C headers wrapped in extern "C".
#include <algorithm>
#include <cstring>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

struct MemoryAVFormat {
    MemoryAVFormat(const MemoryAVFormat &) = delete;

    AVFormatContext *ctx;
    AVIOContext *ioCtx;

    char *audio;
    size_t audio_length;
    size_t audio_offset;

    MemoryAVFormat(char *theAudio, size_t theAudioLength)
    : ctx(avformat_alloc_context()),
      ioCtx(nullptr),
      audio(theAudio),
      audio_length(theAudioLength),
      audio_offset(0) {
        ioCtx = create_audio_buffer_io_context();
        if (ctx == nullptr)
            throw audio_processing_exception("Failed to allocate context");

        if (ioCtx == nullptr)
            throw audio_processing_exception("Failed to allocate IO context for audio buffer");

        ctx->pb = ioCtx;
        ctx->flags |= AVFMT_FLAG_CUSTOM_IO;

        int err = avformat_open_input(&ctx, "nullptr", NULL, NULL);
        if (err != 0)
            throwAvError("Error configuring context from audio buffer", err);
    }

    int read (uint8_t* theBuf, int theBufSize) {
        int aNbRead = std::min (int(audio_length - audio_offset), theBufSize);
        if(aNbRead == 0) { return AVERROR_EOF; }
        memcpy(theBuf, audio + audio_offset, aNbRead);
        audio_offset += aNbRead;
        return aNbRead;
    }

    int64_t seek(int64_t offset, int whence) {
         if (whence == AVSEEK_SIZE) { return audio_length; }
         if (audio == NULL || audio_length == 0) { return -1; }

         if     (whence == SEEK_SET) { audio_offset = offset; }
         else if(whence == SEEK_CUR) { audio_offset += offset; }
         else if(whence == SEEK_END) { audio_offset = audio_length + offset; }

         // keep the offset inside the buffer so read() never overruns it
         if(audio_offset > audio_length) { audio_offset = audio_length; }
         return audio_offset;
    }

    AVIOContext *create_audio_buffer_io_context() {
        const int aBufferSize = 4096;
        unsigned char* aBufferIO = (unsigned char* )av_malloc(aBufferSize + AV_INPUT_BUFFER_PADDING_SIZE);
        return avio_alloc_context(aBufferIO,
                                  aBufferSize,
                                  0,
                                  this,
                                  [](void* opaque, uint8_t* buf, int bufSize)
                                  { return ((MemoryAVFormat* )opaque)->read(buf, bufSize); },
                                  NULL,
                                  [](void* opaque, int64_t offset, int whence)
                                  { return ((MemoryAVFormat* )opaque)->seek(offset, whence); });
    }

    ~MemoryAVFormat() {
        avformat_close_input(&ctx);
        // With AVFMT_FLAG_CUSTOM_IO, avformat_close_input() does not release the
        // custom IO context, so free its (possibly replaced) buffer and the
        // context itself explicitly.
        if (ioCtx != nullptr) {
            av_freep(&ioCtx->buffer);
            avio_context_free(&ioCtx);
        }
    }
};
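
A quick usage sketch (assuming the headers from the block above plus <vector> and the audio_processing_exception type from your code; load_file_somehow() and the function name are made-up placeholders):

#include <vector>

// Hypothetical usage: hand the struct a buffer holding the whole encoded file,
// then probe the streams through the regular libavformat calls.
void probe_in_memory_audio(std::vector<char> &bytes) {          // bytes = encoded file content
    MemoryAVFormat fmt(bytes.data(), bytes.size());

    if (avformat_find_stream_info(fmt.ctx, NULL) < 0)
        throw audio_processing_exception("Failed to read stream info");

    av_dump_format(fmt.ctx, 0, "in-memory audio", 0);           // log detected streams
}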

If you know in advance which audio format the stream is in (so that, for example, creating the AVFormatContext can be skipped entirely), an alternative to implementing the AVIOContext interface and using avformat_open_input() is to pass the audio buffer directly to the decoder as the payload of a custom AVPacket. I have done this for decoding image pixmaps, but I don't know whether it can be (easily) applied to audio.
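
I don't have a ready-made audio example of that path either, but roughly (and only as a sketch: the codec ID and the assumption that the buffer already holds a complete packet of that codec are hypothetical, and real container formats would still need demuxing), the decoder-only route could look like this:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>
}
#include <cstring>

// Hypothetical sketch: decode the first frame from a memory buffer that is
// already known to contain packets of one specific codec (MP3 is only an
// example), bypassing AVFormatContext/AVIOContext entirely.
AVFrame *decode_first_frame_sketch(const uint8_t *data, int size) {
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_MP3); // assumed codec
    if (codec == nullptr)
        return nullptr;
    AVCodecContext *dec = avcodec_alloc_context3(codec);
    if (dec == nullptr)
        return nullptr;
    if (avcodec_open2(dec, codec, NULL) < 0) {
        avcodec_free_context(&dec);
        return nullptr;
    }

    // Packet payloads must be av_malloc'd and padded.
    uint8_t *payload = static_cast<uint8_t *>(av_malloc(size + AV_INPUT_BUFFER_PADDING_SIZE));
    if (payload == nullptr) {
        avcodec_free_context(&dec);
        return nullptr;
    }
    memcpy(payload, data, size);
    memset(payload + size, 0, AV_INPUT_BUFFER_PADDING_SIZE);

    AVPacket *pkt = av_packet_alloc();
    av_packet_from_data(pkt, payload, size);        // pkt takes ownership of payload

    AVFrame *frame = av_frame_alloc();
    int err = avcodec_send_packet(dec, pkt);
    if (err >= 0)
        err = avcodec_receive_frame(dec, frame);    // 0 on success
    av_packet_free(&pkt);
    avcodec_free_context(&dec);
    if (err < 0) {
        av_frame_free(&frame);
        return nullptr;
    }
    return frame;                                   // caller releases with av_frame_free()
}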
