
Merge/Mux multiple mp4 video files on Android

I have a series of mp4 files saved on the device that need to be merged together to make a single mp4 file.

video_p1.mp4 video_p2.mp4 video_p3.mp4 > video.mp4

The solutions I have researched, such as the mp4parser framework, use deprecated code.

The best solution I could find uses MediaMuxer and MediaExtractor.

The code runs, but my videos are not merged (only the content of video_p1.mp4 is displayed, and it is in landscape orientation, not portrait).

Can anyone help me sort this out?

    public static boolean concatenateFiles(File dst, File... sources) {
        if ((sources == null) || (sources.length == 0)) {
            return false;
        }

        boolean result;
        MediaExtractor extractor = null;
        MediaMuxer muxer = null;
        try {
            // Set up MediaMuxer for the destination.
            muxer = new MediaMuxer(dst.getPath(), MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

            // Copy the samples from MediaExtractor to MediaMuxer.
            boolean sawEOS = false;
            //int bufferSize = MAX_SAMPLE_SIZE;
            int bufferSize = 1 * 1024 * 1024;
            int frameCount = 0;
            int offset = 100;

            ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
            MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();

            long timeOffsetUs = 0;
            int dstTrackIndex = -1;

            for (int fileIndex = 0; fileIndex < sources.length; fileIndex++) {
                int numberOfSamplesInSource = getNumberOfSamples(sources[fileIndex]);

                // Set up MediaExtractor to read from the source.
                extractor = new MediaExtractor();
                extractor.setDataSource(sources[fileIndex].getPath());

                // Set up the tracks.
                SparseIntArray indexMap = new SparseIntArray(extractor.getTrackCount());
                for (int i = 0; i < extractor.getTrackCount(); i++) {
                    extractor.selectTrack(i);
                    MediaFormat format = extractor.getTrackFormat(i);
                    if (dstTrackIndex < 0) {
                        dstTrackIndex = muxer.addTrack(format);
                        muxer.start();
                    }
                    indexMap.put(i, dstTrackIndex);
                }

                long lastPresentationTimeUs = 0;
                int currentSample = 0;

                while (!sawEOS) {
                    bufferInfo.offset = offset;
                    bufferInfo.size = extractor.readSampleData(dstBuf, offset);

                    if (bufferInfo.size < 0) {
                        sawEOS = true;
                        bufferInfo.size = 0;
                        timeOffsetUs += (lastPresentationTimeUs + 0);
                    }
                    else {
                        lastPresentationTimeUs = extractor.getSampleTime();
                        bufferInfo.presentationTimeUs = extractor.getSampleTime() + timeOffsetUs;
                        bufferInfo.flags = extractor.getSampleFlags();
                        int trackIndex = extractor.getSampleTrackIndex();

                        if ((currentSample < numberOfSamplesInSource) || (fileIndex == sources.length - 1)) {
                            muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
                        }
                        extractor.advance();

                        frameCount++;
                        currentSample++;
                        Log.d("tag2", "Frame (" + frameCount + ") " +
                                "PresentationTimeUs:" + bufferInfo.presentationTimeUs +
                                " Flags:" + bufferInfo.flags +
                                " TrackIndex:" + trackIndex +
                                " Size(KB) " + bufferInfo.size / 1024);
                    }
                }
                extractor.release();
                extractor = null;
            }

            result = true;
        }
        catch (IOException e) {
            result = false;
        }
        finally {
            if (extractor != null) {
                extractor.release();
            }
            if (muxer != null) {
                muxer.stop();
                muxer.release();
            }
        }
        return result;
    }

    public static int getNumberOfSamples(File src) {
        MediaExtractor extractor = new MediaExtractor();
        int result;
        try {
            extractor.setDataSource(src.getPath());
            extractor.selectTrack(0);

            result = 0;
            while (extractor.advance()) {
                result++;
            }
        }
        catch (IOException e) {
            result = -1;
        }
        finally {
            extractor.release();
        }
        return result;
    }
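
Two details in the code above line up with the reported symptoms: sawEOS is never reset between source files, so the copy loop exits immediately for every file after the first, and only a single destination track is ever added, so sources with both audio and video tracks cannot be mapped correctly. The landscape output is consistent with the rotation metadata never reaching the muxer. Below is a minimal sketch of those fixes, assuming all sources share the same track layout and codec parameters; the method name and error handling are illustrative, not from the original post, and MediaFormat.KEY_ROTATION requires API 23+:

    import android.media.MediaCodec;
    import android.media.MediaExtractor;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import android.util.SparseIntArray;

    import java.io.File;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    public static void concatenate(File dst, File... sources) throws IOException {
        MediaMuxer muxer = new MediaMuxer(dst.getPath(),
                MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        SparseIntArray trackMap = new SparseIntArray();
        boolean muxerStarted = false;
        long timeOffsetUs = 0;

        ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();

        for (File source : sources) {
            MediaExtractor extractor = new MediaExtractor();
            extractor.setDataSource(source.getPath());

            for (int i = 0; i < extractor.getTrackCount(); i++) {
                extractor.selectTrack(i);
                if (!muxerStarted) {
                    MediaFormat format = extractor.getTrackFormat(i);
                    // Add one destination track per source track (video AND audio),
                    // taken from the first file only.
                    trackMap.put(i, muxer.addTrack(format));
                    // Copy the rotation hint (before start()) so portrait stays portrait.
                    if (format.containsKey(MediaFormat.KEY_ROTATION)) {
                        muxer.setOrientationHint(format.getInteger(MediaFormat.KEY_ROTATION));
                    }
                }
            }
            if (!muxerStarted) {
                muxer.start();
                muxerStarted = true;
            }

            // Reset per FILE, not once for the whole run -- this is why only
            // video_p1.mp4 made it into the output.
            boolean sawEOS = false;
            long lastPresentationTimeUs = 0;

            while (!sawEOS) {
                info.offset = 0;
                info.size = extractor.readSampleData(buffer, 0);
                if (info.size < 0) {
                    sawEOS = true;
                } else {
                    lastPresentationTimeUs = extractor.getSampleTime();
                    info.presentationTimeUs = lastPresentationTimeUs + timeOffsetUs;
                    info.flags = extractor.getSampleFlags();
                    muxer.writeSampleData(trackMap.get(extractor.getSampleTrackIndex()),
                            buffer, info);
                    extractor.advance();
                }
            }
            // Shift the next file's timestamps past the end of this one.
            // (Adding one extra frame duration here avoids duplicate timestamps.)
            timeOffsetUs += lastPresentationTimeUs;
            extractor.release();
        }

        muxer.stop();
        muxer.release();
    }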

I'm using this library for muxing videos: ffmpeg-android-java

Gradle dependency:

    implementation 'com.writingminds:FFmpegAndroid:0.3.2'

Here's how I use it in my project to mux video and audio in Kotlin: VideoAudioMuxer. Basically it works like ffmpeg in a terminal, but you pass your command to a method as an array of strings, along with a listener.

    ffmpeg.execute(arrayOf("-i", videoPath, "-i", audioPath, "$targetPath.mp4"),
            object : ExecuteBinaryResponseHandler() {
                // override onSuccess/onFailure/onProgress as needed
            })

You'll have to search for how to merge videos in ffmpeg and convert the command into the array-of-strings argument you need.
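
For concatenation specifically, ffmpeg's concat demuxer is the usual approach when all parts share the same codec parameters: it reads the input paths from a small text file and stream-copies them into one output. A sketch using this library's Java API follows; the listFilePath and targetPath parameters and the handler bodies are illustrative placeholders, not from the answer above:

    import android.content.Context;

    import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
    import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
    import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegCommandAlreadyRunningException;

    void concatWithFfmpeg(Context context, String listFilePath, String targetPath) {
        // listFilePath points at a text file with one entry per part, in order, e.g.:
        //   file '/sdcard/Movies/video_p1.mp4'
        //   file '/sdcard/Movies/video_p2.mp4'
        //   file '/sdcard/Movies/video_p3.mp4'
        // -safe 0 is required because the list uses absolute paths.
        String[] cmd = {"-f", "concat", "-safe", "0", "-i", listFilePath,
                "-c", "copy", targetPath + ".mp4"};
        try {
            FFmpeg ffmpeg = FFmpeg.getInstance(context);
            ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
                @Override
                public void onSuccess(String message) {
                    // Output written without re-encoding.
                }

                @Override
                public void onFailure(String message) {
                    // -c copy fails if the parts differ in codec parameters;
                    // fall back to re-encoding in that case.
                }
            });
        } catch (FFmpegCommandAlreadyRunningException e) {
            // The library runs only one ffmpeg command at a time.
        }
    }

As with any use of this library, the ffmpeg binary must have been loaded first via ffmpeg.loadBinary(...) before execute() is called.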

You could probably do almost anything, since ffmpeg is a very powerful tool.
