Android MediaCodec backward seeking

I'm trying to implement precise seeking for video using MediaCodec and MediaExtractor. By following Grafika's MoviePlayer, I've managed to implement forward seeking. However, I'm still having problems with backward seeking. The relevant bit of code is here:

public void seekBackward(long position){
    final int TIMEOUT_USEC = 10000;
    int inputChunk = 0;
    long firstInputTimeNsec = -1;

    boolean outputDone = false;
    boolean inputDone = false;

    mExtractor.seekTo(position, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
    Log.d("TEST_MEDIA", "sampleTime: " + mExtractor.getSampleTime()/1000 + " -- position: " + position/1000 + " ----- BACKWARD");

    while (mExtractor.getSampleTime() < position && position >= 0) {

        if (VERBOSE) Log.d(TAG, "loop");
        if (mIsStopRequested) {
            Log.d(TAG, "Stop requested");
            return;
        }

        // Feed more data to the decoder.
        if (!inputDone) {
            int inputBufIndex = mDecoder.dequeueInputBuffer(TIMEOUT_USEC);
            if (inputBufIndex >= 0) {
                if (firstInputTimeNsec == -1) {
                    firstInputTimeNsec = System.nanoTime();
                }
                ByteBuffer inputBuf = mDecoderInputBuffers[inputBufIndex];
                // Read the sample data into the ByteBuffer.  This neither respects nor
                // updates inputBuf's position, limit, etc.
                int chunkSize = mExtractor.readSampleData(inputBuf, 0);
                if (chunkSize < 0) {
                    // End of stream -- send empty frame with EOS flag set.
                    mDecoder.queueInputBuffer(inputBufIndex, 0, 0, 0L,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                    if (VERBOSE) Log.d(TAG, "sent input EOS");
                } else {
                    if (mExtractor.getSampleTrackIndex() != mTrackIndex) {
                        Log.w(TAG, "WEIRD: got sample from track " +
                                mExtractor.getSampleTrackIndex() + ", expected " + mTrackIndex);
                    }
                    long presentationTimeUs = mExtractor.getSampleTime();
                    mDecoder.queueInputBuffer(inputBufIndex, 0, chunkSize,
                            presentationTimeUs, 0 /*flags*/);
                    if (VERBOSE) {
                        Log.d(TAG, "submitted frame " + inputChunk + " to dec, size=" + chunkSize);
                    }
                    inputChunk++;
                    mExtractor.advance();
                }
            } else {
                if (VERBOSE) Log.d(TAG, "input buffer not available");
            }
        }

        if (!outputDone) {
            int decoderStatus = mDecoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
            if (decoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                // no output available yet
                if (VERBOSE) Log.d(TAG, "no output from decoder available");
            } else if (decoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                // not important for us, since we're using Surface
                if (VERBOSE) Log.d(TAG, "decoder output buffers changed");
            } else if (decoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                MediaFormat newFormat = mDecoder.getOutputFormat();
                if (VERBOSE) Log.d(TAG, "decoder output format changed: " + newFormat);
            } else if (decoderStatus < 0) {
                throw new RuntimeException(
                        "unexpected result from decoder.dequeueOutputBuffer: " +
                                decoderStatus);
            } else { // decoderStatus >= 0
                if (firstInputTimeNsec != 0) {
                    // Log the delay from the first buffer of input to the first buffer
                    // of output.
                    long nowNsec = System.nanoTime();
                    Log.d(TAG, "startup lag " + ((nowNsec-firstInputTimeNsec) / 1000000.0) + " ms");
                    firstInputTimeNsec = 0;
                }
                boolean doLoop = false;
                if (VERBOSE) Log.d(TAG, "surface decoder given buffer " + decoderStatus +
                        " (size=" + mBufferInfo.size + ")");
                if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    if (VERBOSE) Log.d(TAG, "output EOS");
                    if (mLoop) {
                        doLoop = true;
                    } else {
                        outputDone = true;
                    }
                }

                boolean doRender = (mBufferInfo.size != 0);

                // As soon as we call releaseOutputBuffer, the buffer will be forwarded
                // to SurfaceTexture to convert to a texture.  We can't control when it
                // appears on-screen, but we can manage the pace at which we release
                // the buffers.
                if (doRender && mFrameCallback != null) {
                    mFrameCallback.preRender(mBufferInfo.presentationTimeUs);
                }
                mDecoder.releaseOutputBuffer(decoderStatus, doRender);
                doRender = false;
                if (doRender && mFrameCallback != null) {
                    mFrameCallback.postRender();
                }

                if (doLoop) {
                    Log.d(TAG, "Reached EOS, looping");
                    mExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
                    inputDone = false;
                    mDecoder.flush();    // reset decoder state
                    mFrameCallback.loopReset();
                }
            }
        }
    }
}

Basically, it's the same as MoviePlayer's doExtract method. I just added a slight modification to seek back to the previous keyframe and then decode forward to the position I want. I've also followed fadden's suggestion here, with little success.

Another side question: to my understanding, ExoPlayer is built on top of MediaCodec, so how come it can play videos recorded by iOS just fine, while MoviePlayer's pure MediaCodec implementation can't?

OK, so this is how I solved my problem. Basically, I had misunderstood fadden's comment on the render flag. The problem is not with the decoding, but with displaying only the last buffer, the one closest to the seeking position. Here is how I do it:

// Render the output buffer only once the extractor has advanced to within
// 10 ms (10,000 us) of the target position; drop everything before that.
if (Math.abs(position - mExtractor.getSampleTime()) < 10000) {
    mDecoder.releaseOutputBuffer(decoderStatus, true);
} else {
    mDecoder.releaseOutputBuffer(decoderStatus, false);
}

This is quite a hackish way to go about it. The more elegant way would be to save the last output buffer and display it outside the while loop, but I don't really know how to access the output buffer so that I can save it to a temporary one.
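One untested way to approximate that, without copying any buffer data, would be to hold on to the index of the newest output buffer instead: each time a newer buffer arrives, release the previously held one without rendering, and only after the loop ends release the held buffer with the render flag set to true. This is only a rough sketch, and it assumes the decoder can keep decoding while a single output buffer is withheld (usually the case, since decoders allocate several output buffers):

int pendingIndex = -1; // index of the newest output buffer we are holding back

while (mExtractor.getSampleTime() < position && position >= 0) {
    // ... feed input and dequeue output exactly as in the loop above ...
    int decoderStatus = mDecoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
    if (decoderStatus >= 0) {
        if (pendingIndex >= 0) {
            // A newer frame is available; drop the older one without rendering it.
            mDecoder.releaseOutputBuffer(pendingIndex, false);
        }
        pendingIndex = decoderStatus;
    }
}

// Render only the buffer we are still holding, i.e. the frame closest to the target.
if (pendingIndex >= 0) {
    mDecoder.releaseOutputBuffer(pendingIndex, true);
}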

EDIT:

This is a somewhat less hackish way to do it. Basically, we only need to calculate the number of frames between the keyframe and the seeking position, and then display the 1 or 2 frames closest to the seeking position. Something like this:

    mExtractor.seekTo(position, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
    int stopPosition = getStopPosition(mExtractor.getSampleTime(), position);
    int count = 0;

    while (mExtractor.getSampleTime() < position && mExtractor.getSampleTime() != -1 && position >= 0) {
    ....

        if(stopPosition - count < 2) { // render the last frame or two, so we are sure to display something; see the getStopPosition comment
           mDecoder.releaseOutputBuffer(decoderStatus, true);
        }else{
           mDecoder.releaseOutputBuffer(decoderStatus, false);
        }
        count++;
     ...
    }

/**
 * Calculate how many frames lie between the key frame and the seeking position,
 * so we know how many iterations of the while loop will execute; we then stop
 * rendering 2 or 3 frames before the end of the loop to make sure we actually
 * display something.
 */
private int getStopPosition(long start, long end){
    long delta = end - start;
    // Use a float literal so the division does not truncate to 0 if mFPS is an int.
    float framePerMicroSecond = mFPS / 1000000f;

    return (int)(delta * framePerMicroSecond);
}
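
The mFPS field above is assumed to hold the video frame rate. If it is not already known, it can usually be read from the video track's MediaFormat; KEY_FRAME_RATE is optional, though, so a fallback is needed. A possible helper along those lines (the 30 fps default is just an assumption):

// Read the frame rate of the selected video track; MediaFormat.KEY_FRAME_RATE is
// optional, so fall back to an assumed default when the container does not report it.
private float getTrackFrameRate(MediaExtractor extractor, int trackIndex) {
    MediaFormat format = extractor.getTrackFormat(trackIndex);
    if (format.containsKey(MediaFormat.KEY_FRAME_RATE)) {
        return format.getInteger(MediaFormat.KEY_FRAME_RATE);
    }
    return 30f;
}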
