
How to add Audio to Video while Recording [ContinuousCaptureActivity] [Grafika]

I implemented video recording using ContinuousCaptureActivity.java, and it works perfectly.

Now I want to add audio to this video.

I know that MediaMuxer makes it possible to add an audio track to a video.

The problem is that I don't know how to use MediaMuxer.

Also, if you have any other solution that doesn't use MediaMuxer, please share a link or documentation.

I also have the AudioVideoRecordingSample demo, but I don't understand how to merge it with my code.
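For what it's worth, the hard part of merging AudioVideoRecordingSample into a Grafika-style recorder is the coordination: the video and audio encoders run on separate threads but share one MediaMuxer, and MediaMuxer.start() may only be called after both tracks have been added. A minimal sketch of that gating logic, with the actual Android MediaMuxer calls replaced by comments (class and method names here are my own, not from either sample):

```java
// Sketch of the track-gating pattern: each encoder thread registers its
// track, and only the last one to arrive "starts" the shared muxer.
class MuxerGate {
    private final int expectedTracks;   // 2: one video, one audio
    private int addedTracks = 0;
    private boolean started = false;

    MuxerGate(int expectedTracks) {
        this.expectedTracks = expectedTracks;
    }

    // Called by each encoder thread with its output MediaFormat;
    // returns the track index that thread should use for writeSampleData().
    public synchronized int addTrack(/* MediaFormat format */) {
        if (started) throw new IllegalStateException("muxer already started");
        int trackIndex = addedTracks++;   // real code: muxer.addTrack(format)
        if (addedTracks == expectedTracks) {
            // real code: muxer.start();
            started = true;
            notifyAll();                  // wake threads waiting to write samples
        }
        return trackIndex;
    }

    // Encoded frames must not be written before start(); block until then.
    public synchronized void awaitStarted() throws InterruptedException {
        while (!started) wait();
    }

    public synchronized boolean isStarted() {
        return started;
    }
}
```

Each encoder thread calls addTrack() once its MediaCodec reports INFO_OUTPUT_FORMAT_CHANGED, then awaitStarted() before its first writeSampleData().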

Please explain if anyone knows how.

Thanks in advance.

Merging an Audio File and a Video File

private void muxing() {

    String outputFile = "";

    try {
        // Output file on external storage.
        File file = new File(Environment.getExternalStorageDirectory() + File.separator + "final2.mp4");
        file.createNewFile();
        outputFile = file.getAbsolutePath();

        // One extractor per source: video from assets, audio from a recorded file.
        MediaExtractor videoExtractor = new MediaExtractor();
        AssetFileDescriptor afdd = getAssets().openFd("Produce.MP4");
        videoExtractor.setDataSource(afdd.getFileDescriptor(), afdd.getStartOffset(), afdd.getLength());

        MediaExtractor audioExtractor = new MediaExtractor();
        audioExtractor.setDataSource(audioFilePath);

        Log.d(TAG, "Video Extractor Track Count " + videoExtractor.getTrackCount());
        Log.d(TAG, "Audio Extractor Track Count " + audioExtractor.getTrackCount());

        MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        // Register both tracks with the muxer before calling start().
        videoExtractor.selectTrack(0);
        MediaFormat videoFormat = videoExtractor.getTrackFormat(0);
        int videoTrack = muxer.addTrack(videoFormat);

        audioExtractor.selectTrack(0);
        MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
        int audioTrack = muxer.addTrack(audioFormat);

        Log.d(TAG, "Video Format " + videoFormat.toString());
        Log.d(TAG, "Audio Format " + audioFormat.toString());

        boolean sawEOS = false;
        int frameCount = 0;
        int offset = 0;                       // read each sample at the start of the buffer
        int sampleSize = 256 * 1024;
        ByteBuffer videoBuf = ByteBuffer.allocate(sampleSize);
        ByteBuffer audioBuf = ByteBuffer.allocate(sampleSize);
        MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
        MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();

        videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
        audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);

        muxer.start();

        // Copy every video sample into the muxer.
        while (!sawEOS) {
            videoBufferInfo.offset = offset;
            videoBufferInfo.size = videoExtractor.readSampleData(videoBuf, offset);

            if (videoBufferInfo.size < 0) {   // readSampleData() returns -1 at end of stream
                Log.d(TAG, "saw video input EOS.");
                sawEOS = true;
                videoBufferInfo.size = 0;
            } else {
                videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
                videoBufferInfo.flags = videoExtractor.getSampleFlags();
                muxer.writeSampleData(videoTrack, videoBuf, videoBufferInfo);
                videoExtractor.advance();

                frameCount++;
                Log.d(TAG, "Frame (" + frameCount + ") Video PresentationTimeUs:" + videoBufferInfo.presentationTimeUs + " Flags:" + videoBufferInfo.flags + " Size(KB) " + videoBufferInfo.size / 1024);
            }
        }

        Toast.makeText(getApplicationContext(), "frame:" + frameCount, Toast.LENGTH_SHORT).show();

        // Copy every audio sample into the muxer.
        boolean sawEOS2 = false;
        int frameCount2 = 0;
        while (!sawEOS2) {
            audioBufferInfo.offset = offset;
            audioBufferInfo.size = audioExtractor.readSampleData(audioBuf, offset);

            if (audioBufferInfo.size < 0) {   // end of the audio stream
                Log.d(TAG, "saw audio input EOS.");
                sawEOS2 = true;
                audioBufferInfo.size = 0;
            } else {
                audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime();
                audioBufferInfo.flags = audioExtractor.getSampleFlags();
                muxer.writeSampleData(audioTrack, audioBuf, audioBufferInfo);
                audioExtractor.advance();

                frameCount2++;
                Log.d(TAG, "Frame (" + frameCount2 + ") Audio PresentationTimeUs:" + audioBufferInfo.presentationTimeUs + " Flags:" + audioBufferInfo.flags + " Size(KB) " + audioBufferInfo.size / 1024);
            }
        }

        Toast.makeText(getApplicationContext(), "frame:" + frameCount2, Toast.LENGTH_SHORT).show();

        muxer.stop();
        muxer.release();
        videoExtractor.release();
        audioExtractor.release();
        afdd.close();

    } catch (IOException e) {
        Log.d(TAG, "Muxer Error 1 " + e.getMessage());
    } catch (Exception e) {
        Log.d(TAG, "Muxer Error 2 " + e.getMessage());
    }
}

For samples, visit here.

Sorry, I am late. This is what you want:

https://github.com/Kickflip/kickflip-android-sdk

It also implements encoding with MediaCodec and uploads the video stream with ffmpeg.
