
Android MediaMuxer Audio Issue

I am trying to use MediaMuxer to add an audio track to a video. The following code works, but the audio stops halfway through the video. Both the video and audio files have only one track. The playback speed of the audio and video seems to be fine. The audio file is longer than the video, so I don't think that is the issue. I have been at this for a while now and can't figure it out.

private void createFinalVideo(){

    String outputFile = "";

     try {

        File file = new File(Environment.getExternalStorageDirectory() + File.separator + "final.mp4");
        file.createNewFile();
        outputFile = file.getAbsolutePath();

        MediaExtractor videoExtractor = new MediaExtractor();
        videoExtractor.setDataSource(OUTPUT);

        MediaExtractor audioExtractor = new MediaExtractor();
        final AssetFileDescriptor afd = context.getAssets().openFd("audio.m4a");
        audioExtractor.setDataSource(afd.getFileDescriptor(),afd.getStartOffset(),afd.getLength());

        Log.d(TAG, "Video Extractor Track Count " + videoExtractor.getTrackCount() );
        Log.d(TAG, "Audio Extractor Track Count " + audioExtractor.getTrackCount() );

        MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        videoExtractor.selectTrack(0);
        MediaFormat videoFormat = videoExtractor.getTrackFormat(0);
        int videoTrack = muxer.addTrack(videoFormat);

        audioExtractor.selectTrack(0);
        MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
        int audioTrack = muxer.addTrack(audioFormat);

        Log.d(TAG, "Video Format " + videoFormat.toString() );
        Log.d(TAG, "Audio Format " + audioFormat.toString() );

        boolean sawEOS = false;
        int frameCount = 0;
        int offset = 100;
        int sampleSize = 256 * 1024;
        ByteBuffer videoBuf = ByteBuffer.allocate(sampleSize);
        ByteBuffer audioBuf = ByteBuffer.allocate(sampleSize);
        BufferInfo videoBufferInfo = new BufferInfo();
        BufferInfo audioBufferInfo = new BufferInfo();

        videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
        audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);

        muxer.start();

        while (!sawEOS) 
        {
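            // One video sample and one audio sample are read per iteration,
            // so the two extractors advance sample-for-sample regardless of
            // how much playback time each sample covers.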
            videoBufferInfo.offset = offset;
            audioBufferInfo.offset = offset;

            videoBufferInfo.size = videoExtractor.readSampleData(videoBuf, offset);
            audioBufferInfo.size = audioExtractor.readSampleData(audioBuf, offset);

            if (videoBufferInfo.size < 0 || audioBufferInfo.size < 0) 
            {
                Log.d(TAG, "saw input EOS.");
                sawEOS = true;
                videoBufferInfo.size = 0;
                audioBufferInfo.size = 0;
            } 
            else 
            {
                videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
                videoBufferInfo.flags = videoExtractor.getSampleFlags();
                muxer.writeSampleData(videoTrack, videoBuf, videoBufferInfo);
                videoExtractor.advance();

                audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime();
                audioBufferInfo.flags = audioExtractor.getSampleFlags();
                muxer.writeSampleData(audioTrack, audioBuf, audioBufferInfo);
                audioExtractor.advance();

                frameCount++;

                Log.d(TAG, "Frame (" + frameCount + ") Video PresentationTimeUs:" + videoBufferInfo.presentationTimeUs +" Flags:" + videoBufferInfo.flags +" Size(KB) " + videoBufferInfo.size / 1024);
                Log.d(TAG, "Frame (" + frameCount + ") Audio PresentationTimeUs:" + audioBufferInfo.presentationTimeUs +" Flags:" + audioBufferInfo.flags +" Size(KB) " + audioBufferInfo.size / 1024);

            }
        }
        muxer.stop();
        muxer.release();


     } catch (IOException e) {
         Log.d(TAG, "Mixer Error 1 " + e.getMessage());
     } catch (Exception e) {
         Log.d(TAG, "Mixer Error 2 " + e.getMessage());
     }

    return;
}

The media format printout is the following:

Video Format {max-input-size=1572864, frame-rate=28, height=1920, csd-0=java.nio.ByteArrayBuffer[position=0,limit=18,capacity=18], width=1072, durationUs=2968688, csd-1=java.nio.ByteArrayBuffer[position=0,limit=8,capacity=8], mime=video/avc, isDMCMMExtractor=1}

Audio Format {max-input-size=1572864, encoder-padding=708, aac-profile=2, csd-0=java.nio.ByteArrayBuffer[position=0,limit=2,capacity=2], sample-rate=44100, durationUs=19783401, channel-count=2, encoder-delay=2112, mime=audio/mp4a-latm, isDMCMMExtractor=1}

Any guidance would be highly appreciated.

Thanks

user346443: The issue was that the video extractor was pulling data out quicker than the audio extractor. Splitting the audio and video into separate loops fixed the issue.
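Based on that answer, a minimal sketch of the two-loop approach could look like the following (the copyTrack helper and its parameter names are my own, for illustration): each extractor is drained to its own end of stream, so the shorter video track no longer determines when audio copying stops.

// A minimal sketch of the two-loop approach, assuming the same extractors,
// muxer and track indices created in the question's code. The copyTrack
// helper and its parameter names are illustrative, not the poster's code.
// (Uses android.media.MediaExtractor, android.media.MediaMuxer,
//  android.media.MediaCodec and java.nio.ByteBuffer.)
private static void copyTrack(MediaExtractor extractor, MediaMuxer muxer,
                              int muxerTrack, int maxSampleSize) {
    ByteBuffer buffer = ByteBuffer.allocate(maxSampleSize);
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    while (true) {
        info.offset = 0;
        info.size = extractor.readSampleData(buffer, 0);
        if (info.size < 0) {
            break; // this track has reached its end of stream
        }
        info.presentationTimeUs = extractor.getSampleTime();
        info.flags = extractor.getSampleFlags();
        muxer.writeSampleData(muxerTrack, buffer, info);
        extractor.advance();
    }
}

// After muxer.start(), drain each track completely before stopping:
muxer.start();
copyTrack(videoExtractor, muxer, videoTrack, 256 * 1024);
copyTrack(audioExtractor, muxer, audioTrack, 256 * 1024);
muxer.stop();
muxer.release();

MP4 timing comes from the presentationTimeUs recorded for each sample, so writing the whole video track and then the whole audio track should still yield a correctly synchronized file.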
