Playback duration of an MF SinkWriter mp4 file is half as long when adding audio samples, and the images also play back twice as fast

I created a managed C++ library for my C# project to encode images and audio into an mp4 container, based on the MSDN SinkWriter tutorial. To test whether the result is correct, I created a method that supplies 600 frames. These frames represent a 10-second video at 60 frames per second.

The image I supply changes once per second, and my audio file contains a voice counting up to 10.

The problem I am facing is that the output video is actually only 5 seconds long. The video's metadata says 10 seconds, but it is not. Also, the counting voice barely gets past 5.

If I write only the image samples, without the audio part, the video has the expected duration of 10 seconds.

What am I missing here?

Here are some parts of my application.

This is the C# code I use to create the 600 frames; the PushFrame method is also called from the C# side.

var videoFrameCount = 10 * FPS;
SetBinaryImage();

for (int i = 0; i <= videoFrameCount; i++)
{
    // New picture every second
    if (i > 0 &&  i % FPS == 0)
    {
        SetBinaryImage();
    }

    PushFrame();
}

The PushFrame method copies the image and audio data to the pointers provided by the SinkWriter. Then I call the SinkWriter's PushFrame method.

private void PushFrame()
{
    try
    {
        encodeStopwatch.Reset();
        encodeStopwatch.Start();

        // Video
        var frameBufferHandler = GCHandle.Alloc(frameBuffer, GCHandleType.Pinned);
        frameBufferPtr = frameBufferHandler.AddrOfPinnedObject();
        CopyImageDataToPointer(BinaryImage, ScreenWidth, ScreenHeight, frameBufferPtr);

        // Audio
        var audioBufferHandler = GCHandle.Alloc(audioBuffer, GCHandleType.Pinned);
        audioBufferPtr = audioBufferHandler.AddrOfPinnedObject();
        var readLength = audioBuffer.Length;

        if (BinaryAudio.Length - (audioOffset + audioBuffer.Length) < 0)
        {
            readLength = BinaryAudio.Length - audioOffset;
        }

        if (!EndOfFile)
        {
            Marshal.Copy(BinaryAudio, audioOffset, (IntPtr)audioBufferPtr, readLength);
            audioOffset += audioBuffer.Length;

        }

        if (readLength < audioBuffer.Length && !EndOfFile)
        {
            EndOfFile = true;
        }

        unsafe
        {
            // Copy video data
            var yuv = SinkWriter.VideoCapturerBuffer();
            SinkWriter.Encode((byte*)frameBufferPtr, ScreenWidth, ScreenHeight, (int)SWPF.SWPF_RGB, yuv);

            // Copy audio data
            var audioDestPtr = SinkWriter.AudioCapturerBuffer();
            SinkWriter.EncodeAudio((byte*)audioBufferPtr, audioDestPtr);

            SinkWriter.PushFrame();
        }

        // Release the pinned handles so the buffers are not leaked
        frameBufferHandler.Free();
        audioBufferHandler.Free();

        encodeStopwatch.Stop();
        Console.WriteLine($"YUV frame generated in: {encodeStopwatch.Elapsed.TotalMilliseconds} ms");
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}

Here are some of the parts I added to the SinkWriter in C++. I guess the MediaType for the audio part is fine, because the audio playback works.

rtStart and rtDuration are defined like this:

LONGLONG rtStart = 0;
UINT64 rtDuration;
MFFrameRateToAverageTimePerFrame(fps, 1, &rtDuration);

The two buffers of the encoder are used like this:

int SinkWriter::Encode(Byte * rgbBuf, int w, int h, int pxFormat, Byte * yufBuf)
{
    const LONG cbWidth = 4 * VIDEO_WIDTH;
    const DWORD cbBuffer = cbWidth * VIDEO_HEIGHT;

    // Create a new memory buffer.
    HRESULT hr = MFCreateMemoryBuffer(cbBuffer, &pFrameBuffer);

    // Lock the buffer and copy the video frame to the buffer.
    if (SUCCEEDED(hr))
    {
        hr = pFrameBuffer->Lock(&yufBuf, NULL, NULL);
    }

    if (SUCCEEDED(hr))
    {
        // Calculate the stride
        DWORD bitsPerPixel = GetBitsPerPixel(pxFormat);
        DWORD bytesPerPixel = bitsPerPixel / 8;
        DWORD stride = w * bytesPerPixel;

        // Copy image in yuv pointer
        hr = MFCopyImage(
            yufBuf,                      // Destination buffer.
            stride,                    // Destination stride.
            rgbBuf,     // First row in source image.
            stride,                    // Source stride.
            stride,                    // Image width in bytes.
            h                // Image height in pixels.
        );
    }

    if (pFrameBuffer)
    {
        pFrameBuffer->Unlock();
    }

    // Set the data length of the buffer.
    if (SUCCEEDED(hr))
    {
        hr = pFrameBuffer->SetCurrentLength(cbBuffer);
    }

    if (SUCCEEDED(hr))
    {
        return 0;
    }
    else
    {
        return -1;
    }
}

int SinkWriter::EncodeAudio(Byte * src, Byte * dest)
{
    DWORD samplePerSecond = AUDIO_SAMPLES_PER_SECOND * AUDIO_BITS_PER_SAMPLE * AUDIO_NUM_CHANNELS;
    DWORD cbBuffer = samplePerSecond / 1000;

    // Create a new memory buffer.
    HRESULT hr = MFCreateMemoryBuffer(cbBuffer, &pAudioBuffer);

    // Lock the buffer and copy the audio data to the buffer.
    if (SUCCEEDED(hr))
    {
        hr = pAudioBuffer->Lock(&dest, NULL, NULL);
    }

    if (SUCCEEDED(hr))
    {
        CopyMemory(dest, src, cbBuffer);
    }

    if (pAudioBuffer)
    {
        pAudioBuffer->Unlock();
    }

    // Set the data length of the buffer.
    if (SUCCEEDED(hr))
    {
        hr = pAudioBuffer->SetCurrentLength(cbBuffer);
    }

    if (SUCCEEDED(hr))
    {
        return 0;
    }
    else
    {
        return -1;
    }
}

This is the SinkWriter's PushFrame method, which passes the SinkWriter, streamIndex, audioIndex, rtStart, and rtDuration to the WriteFrame method.

int SinkWriter::PushFrame()
{
    if (initialized)
    {
        HRESULT hr = WriteFrame(ptrSinkWriter, stream, audio, rtStart, rtDuration);
        if (FAILED(hr))
        {
            return -1;
        }

        rtStart += rtDuration;

        return 0;
    }

    return -1;
}

This is the WriteFrame method, which combines the video and audio samples.

HRESULT SinkWriter::WriteFrame(IMFSinkWriter *pWriter, DWORD streamIndex, DWORD audioStreamIndex, const LONGLONG& rtStart, const LONGLONG& rtDuration)
{
    IMFSample *pVideoSample = NULL;

    // Create a media sample and add the buffer to the sample.
    HRESULT hr = MFCreateSample(&pVideoSample);

    if (SUCCEEDED(hr))
    {
        hr = pVideoSample->AddBuffer(pFrameBuffer);
    }
    if (SUCCEEDED(hr))
    {
        pVideoSample->SetUINT32(MFSampleExtension_Discontinuity, FALSE);
    }
    // Set the time stamp and the duration.
    if (SUCCEEDED(hr))
    {
        hr = pVideoSample->SetSampleTime(rtStart);
    }
    if (SUCCEEDED(hr))
    {
        hr = pVideoSample->SetSampleDuration(rtDuration);
    }

    // Send the sample to the Sink Writer.
    if (SUCCEEDED(hr))
    {
        hr = pWriter->WriteSample(streamIndex, pVideoSample);
    }

    // Audio
    IMFSample *pAudioSample = NULL;

    if (SUCCEEDED(hr))
    {
        hr = MFCreateSample(&pAudioSample);
    }

    if (SUCCEEDED(hr))
    {
        hr = pAudioSample->AddBuffer(pAudioBuffer);
    }

    // Set the time stamp and the duration.
    if (SUCCEEDED(hr))
    {
        hr = pAudioSample->SetSampleTime(rtStart);
    }
    if (SUCCEEDED(hr))
    {
        hr = pAudioSample->SetSampleDuration(rtDuration);
    }
    // Send the sample to the Sink Writer.
    if (SUCCEEDED(hr))
    {
        hr = pWriter->WriteSample(audioStreamIndex, pAudioSample);
    }


    SafeRelease(&pVideoSample);
    SafeRelease(&pFrameBuffer);
    SafeRelease(&pAudioSample);
    SafeRelease(&pAudioBuffer);
    return hr;
}

The problem was the wrong calculation of the audio buffer size. This is the correct calculation:

var avgBytesPerSecond = sampleRate * 2 * channels;      // 2 bytes per 16-bit sample
var avgBytesPerMillisecond = avgBytesPerSecond / 1000;
var bufferSize = avgBytesPerMillisecond * (1000 / 60);  // audio bytes per video frame at 60 fps
audioBuffer = new byte[bufferSize];

In my question, my buffer only had the size of one millisecond of audio. Because of that, the MF framework apparently sped up the images so that the audio sounded right. After fixing the buffer size, the video has the duration I expect and the audio plays without errors.
