
Video steganography using MediaCodec

I need to create videos with data hidden in them. I managed to extract video frames as NV21 buffers using a MediaCodec decoder and save them, then create an MP4 file from those frames using a MediaCodec encoder. The class below saves the frame files during the encode process, or checks the embedded value when we want to extract data from the stego video.
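For reference, the index arithmetic in both classes assumes the NV21 layout: a full width × height Y plane followed by a half-vertical-resolution plane of interleaved chroma pairs (each chroma row still spans `width` bytes). Strictly, NV21 stores V before U in each pair, while the code below labels the first byte `u`; since the embed and extract paths use the same convention, the marker still round-trips. A standalone sketch of the index math, with helper names that are mine, not from the code below:

```java
// Index helpers for the NV21 layout assumed by the extraction loop:
// a width*height Y plane, then rows of interleaved chroma pairs at
// half vertical resolution. Names are illustrative only.
public final class Nv21Layout {
    // Byte index of the luma sample at pixel (x, y).
    public static int lumaIndex(int width, int x, int y) {
        return y * width + x;
    }

    // Byte index of the first chroma byte covering the 2x2 luma block at (x, y).
    public static int chromaPairIndex(int width, int height, int x, int y) {
        return width * height + (y / 2) * width + (x / 2) * 2;
    }
}
```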

public class ExtractMpegFramesBufferDecoder {
private static final String TAG = "ExtractMpegFramesDec";
private static final boolean VERBOSE = true;           // lots of logging

// where to find files (note: requires WRITE_EXTERNAL_STORAGE permission)
private File STORE_FRAME_DIRECTORY;
private String INPUT_FILE;
private int frameRate;       // input video frames per second
private int saveWidth;
private int saveHeight;
private int decodeCount;
private Handler _progressBarHandler;
private int duration;
//
private int MAX_FRAMES;      // stop extracting after this many frames
private boolean fromDecode;
//

public ExtractMpegFramesBufferDecoder(File storeFrameDirectory, String inputVideoPath, int frameRate
        , int saveWidth, int saveHeight
        , double duration, int rotation
        , Handler _progressBarHandler) {
    this.STORE_FRAME_DIRECTORY = storeFrameDirectory;
    this.INPUT_FILE = inputVideoPath;
    this.frameRate = frameRate;
    this.saveWidth = saveWidth;
    this.saveHeight = saveHeight;
    this._progressBarHandler = _progressBarHandler;
    this.duration = (int) duration;
}


/**
 * Extracts frames from the input MP4. In the encode path the raw NV21
 * buffers are saved to files; in the decode path the embedded marker
 * value is checked instead.
 */
public void extractMpegFrames(int maxFrame, boolean fromDecode) throws IOException {
    MediaCodec decoder = null;
    MediaExtractor extractor = null;
    MAX_FRAMES = maxFrame;
    this.fromDecode = fromDecode;

    try {
        File inputFile = new File(INPUT_FILE);   // must be an absolute path
        // The MediaExtractor error messages aren't very useful.  Check to see if the input
        // file exists so we can throw a better one if it's not there.
        if (!inputFile.canRead()) {
            throw new FileNotFoundException("Unable to read " + inputFile);
        }

        extractor = new MediaExtractor();
        extractor.setDataSource(inputFile.toString());
        int trackIndex = selectTrack(extractor);
        if (trackIndex < 0) {
            throw new RuntimeException("No video track found in " + inputFile);
        }
        extractor.selectTrack(trackIndex);

        MediaFormat format = extractor.getTrackFormat(trackIndex);
        if (VERBOSE) {
            Log.d(TAG, "Video size is " + format.getInteger(MediaFormat.KEY_WIDTH) + "x" +
                    format.getInteger(MediaFormat.KEY_HEIGHT));
        }

        // Create a MediaCodec decoder, and configure it with the MediaFormat from the
        // extractor.  It's very important to use the format from the extractor because
        // it contains a copy of the CSD-0/CSD-1 codec-specific data chunks.
        String mime = format.getString(MediaFormat.KEY_MIME);
        decoder = MediaCodec.createDecoderByType(mime);
        decoder.configure(format, null, null, 0);
        decoder.start();

        doExtract(extractor, trackIndex, decoder);
    } finally {
        if (decoder != null) {
            decoder.stop();
            decoder.release();
            decoder = null;
        }
        if (extractor != null) {
            extractor.release();
            extractor = null;
        }
    }
}

/**
 * Selects the video track, if any.
 *
 * @return the track index, or -1 if no video track is found.
 */
private int selectTrack(MediaExtractor extractor) {
    // Select the first video track we find, ignore the rest.
    int numTracks = extractor.getTrackCount();
    for (int i = 0; i < numTracks; i++) {
        MediaFormat format = extractor.getTrackFormat(i);
        String mime = format.getString(MediaFormat.KEY_MIME);
        if (mime.startsWith("video/")) {
            if (VERBOSE) {
                Log.d(TAG, "Extractor selected track " + i + " (" + mime + "): " + format);
            }
            return i;
        }
    }

    return -1;
}

/**
 * Work loop.
 */
public void doExtract(MediaExtractor extractor, int trackIndex, MediaCodec decoder) throws IOException {
    final int TIMEOUT_USEC = 10000;
    ByteBuffer[] decoderInputBuffers = decoder.getInputBuffers();
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    int inputChunk = 0;
    decodeCount = 0;
    long frameSaveTime = 0;

    boolean outputDone = false;
    boolean inputDone = false;


    ByteBuffer[] decoderOutputBuffers = decoder.getOutputBuffers();
    MediaFormat decoderOutputFormat = null;
    long rawSize = 0;


    while (!outputDone) {
        if (VERBOSE) Log.d(TAG, "loop");

        // Feed more data to the decoder.
        if (!inputDone) {
            int inputBufIndex = decoder.dequeueInputBuffer(TIMEOUT_USEC);
            if (inputBufIndex >= 0) {
                ByteBuffer inputBuf = decoderInputBuffers[inputBufIndex];
                // Read the sample data into the ByteBuffer.  This neither respects nor
                // updates inputBuf's position, limit, etc.
                int chunkSize = extractor.readSampleData(inputBuf, 0);
                if (chunkSize < 0) {
                    // End of stream -- send empty frame with EOS flag set.
                    decoder.queueInputBuffer(inputBufIndex, 0, 0, 0L,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                    if (VERBOSE) Log.d(TAG, "sent input EOS");
                } else {
                    if (extractor.getSampleTrackIndex() != trackIndex) {
                        Log.w(TAG, "WEIRD: got sample from track " +
                                extractor.getSampleTrackIndex() + ", expected " + trackIndex);
                    }
                    long presentationTimeUs = extractor.getSampleTime();
                    decoder.queueInputBuffer(inputBufIndex, 0, chunkSize,
                            presentationTimeUs, 0 /*flags*/);
                    if (VERBOSE) {
                        Log.d(TAG, "submitted frame " + inputChunk + " to dec, size=" +
                                chunkSize);
                    }
                    inputChunk++;
                    extractor.advance();
                }
            } else {
                if (VERBOSE) Log.d(TAG, "input buffer not available");
            }
        }

        if (!outputDone) {
            int decoderStatus = decoder.dequeueOutputBuffer(info, TIMEOUT_USEC);
            if (decoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                // no output available yet
                if (VERBOSE) Log.d(TAG, "no output from decoder available");
            } else if (decoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                // refresh the cached output buffer array (pre-API-21 ByteBuffer path)
                if (VERBOSE) Log.d(TAG, "decoder output buffers changed");
                decoderOutputBuffers = decoder.getOutputBuffers();
            } else if (decoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                MediaFormat newFormat = decoder.getOutputFormat();
                decoderOutputFormat = newFormat;
                if (VERBOSE) Log.d(TAG, "decoder output format changed: " + newFormat);
            } else if (decoderStatus < 0) {
                Log.e(TAG, "unexpected result from decoder.dequeueOutputBuffer: " + decoderStatus);
            } else { // decoderStatus >= 0
                if (VERBOSE) Log.d(TAG, "decoder given buffer " + decoderStatus +
                        " (size=" + info.size + ")");
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    if (VERBOSE) Log.d(TAG, "output EOS");
                    outputDone = true;
                }
                ByteBuffer outputFrame = decoderOutputBuffers[decoderStatus];
                outputFrame.position(info.offset);
                outputFrame.limit(info.offset + info.size);
                rawSize += info.size;

                if (info.size == 0) {
                    if (VERBOSE) Log.d(TAG, "got empty frame");
                } else {
                    // if it's decode then check the altered value
                    // else save the frames
                    if (fromDecode) {
                        outputFrame.rewind();
                        byte[] data = new byte[outputFrame.remaining()];
                        outputFrame.get(data);
                        int size = saveWidth * saveHeight;
                        int offset = size;
                        int[] pixels = new int[size];
                        int u, v, y1, y2, y3, y4;
                        int uvIndex = 0;

                        if (decodeCount == 1) {
                            // i walks the Y plane; k walks the interleaved chroma samples
                            for (int i = 0, k = 0; i < size; i += 2, k += 2) {
                                y1 = data[i] & 0xff;
                                y2 = data[i + 1] & 0xff;
                                y3 = data[saveWidth + i] & 0xff;
                                y4 = data[saveWidth + i + 1] & 0xff;

                                u = data[offset + k] & 0xff;
                                v = data[offset + k + 1] & 0xff;

                                // getting size
                                if (uvIndex == 0) {
                                    int specialByte1P1 = u & 15;
                                    int specialByte1P2 = v & 15;
                                    int specialCharacter1 = (specialByte1P1 << 4) | specialByte1P2;
                                    if (specialCharacter1 != 17) {
                                        throw new IllegalArgumentException("value has changed");
                                    }
                                }

                                uvIndex++;
                                if (i != 0 && (i + 2) % saveWidth == 0)
                                    i += saveWidth;
                            }
                        }
                    } else {
                        outputFrame.rewind();
                        byte[] data = new byte[outputFrame.remaining()];
                        outputFrame.get(data);
                        // try-with-resources so the stream is always closed
                        try (FileOutputStream stream = new FileOutputStream(
                                new File(STORE_FRAME_DIRECTORY,
                                        String.format(Locale.US, "frame_%d.frame", decodeCount)))) {
                            stream.write(data);
                        } catch (IOException e1) {
                            e1.printStackTrace();
                        }
                    }

                    decodeCount++;
                }

                decoder.releaseOutputBuffer(decoderStatus, false);

            }
        }
    }

    int numSaved = Math.min(frameRate, decodeCount);
    if (VERBOSE) Log.d(TAG, "Saved " + numSaved + " frames");
}

public int getDecodeCount() {
    return decodeCount;
}
}

In the class below I encode the frames, alter one U/V pair of frame 1 by storing the value 17 in the low nibbles of the first U and V samples, and build the MP4 with a MediaCodec encoder.

public class YUVFrameBufferToVideoEncoder {
private static final String TAG = YUVFrameBufferToVideoEncoder.class.getSimpleName();
private static final int ERROR_IN_PROCESS = 0;

private IBitmapToVideoEncoderCallback mCallback;
private File mOutputFile;
private Queue<File> mEncodeQueue = new ConcurrentLinkedQueue();
private MediaCodec mediaCodec;
private MediaMuxer mediaMuxer;

private Object mFrameSync = new Object();
private CountDownLatch mNewFrameLatch;

private static final String MIME_TYPE = "video/avc"; // H.264 Advanced Video Coding
private static int mWidth;
private static int mHeight;
private static int BIT_RATE;
private static int FRAME_RATE; // Frames per second
private int frameCount;
private Handler _progressBarHandler;
private Handler _processHandler;

private static final int I_FRAME_INTERVAL = 1;

private int mGenerateIndex = 0;
private int mTrackIndex;
private boolean mNoMoreFrames = false;
private boolean mAbort = false;
//
private byte[] dataToHide;

public interface IBitmapToVideoEncoderCallback {
    void onEncodingComplete(File outputFile);
}

public YUVFrameBufferToVideoEncoder(IBitmapToVideoEncoderCallback callback) {
    mCallback = callback;
}

public boolean isEncodingStarted() {
    return (mediaCodec != null) && (mediaMuxer != null) && !mNoMoreFrames && !mAbort;
}

public int getActiveBitmaps() {
    return mEncodeQueue.size();
}

public boolean startEncoding(int width, int height, int fps, int bitrate, int frameCount
        , byte[] dataToHide, Handler _progressBarHandler, Handler _processHandler
        , File outputFile) {
    mWidth = width;
    mHeight = height;
    FRAME_RATE = fps;
    BIT_RATE = bitrate;
    this.frameCount = frameCount;
    this._progressBarHandler = _progressBarHandler;
    this._processHandler = _processHandler;
    mOutputFile = outputFile;
    this.dataToHide = dataToHide;

    String outputFileString;
    try {
        outputFileString = outputFile.getCanonicalPath();
    } catch (IOException e) {
        Log.e(TAG, "Unable to get path for " + outputFile);
        ErrorManager.getInstance().addErrorMessage("Unable to get path for " + outputFile);
        return false;
    }

    MediaCodecInfo codecInfo = selectCodec(MIME_TYPE);
    if (codecInfo == null) {
        Log.e(TAG, "Unable to find an appropriate codec for " + MIME_TYPE);
        ErrorManager.getInstance().addErrorMessage("Unable to find an appropriate codec for " + MIME_TYPE);
        return false;
    }
    Log.d(TAG, "found codec: " + codecInfo.getName());
    // COLOR_FormatYUV420SemiPlanar is NV12 (U/V interleaved); the saved frames
    // are NV21 (V/U), so chroma planes may appear swapped on some devices.
    int colorFormat = MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar;

    try {
        mediaCodec = MediaCodec.createByCodecName(codecInfo.getName());
    } catch (IOException e) {
        Log.e(TAG, "Unable to create MediaCodec " + e.getMessage());
        ErrorManager.getInstance().addErrorMessage("Unable to create MediaCodec " + e.getMessage());
        return false;
    }

    MediaFormat mediaFormat = MediaFormat.createVideoFormat(MIME_TYPE, mWidth, mHeight);
    mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, BIT_RATE);
    mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
    mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat);
    mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, I_FRAME_INTERVAL);
    mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    mediaCodec.start();
    try {
        mediaMuxer = new MediaMuxer(outputFileString, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    } catch (IOException e) {
        Log.e(TAG, "MediaMuxer creation failed. " + e.getMessage());
        ErrorManager.getInstance().addErrorMessage("MediaMuxer creation failed. " + e.getMessage());
        return false;
    }

    Log.d(TAG, "Initialization complete. Starting encoder...");

    Completable.fromAction(this::encode)
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe();
    return true;
}

public void stopEncoding() {
    if (mediaCodec == null || mediaMuxer == null) {
        Log.d(TAG, "Failed to stop encoding since it never started");
        return;
    }
    Log.d(TAG, "Stopping encoding");

    mNoMoreFrames = true;

    synchronized (mFrameSync) {
        if ((mNewFrameLatch != null) && (mNewFrameLatch.getCount() > 0)) {
            mNewFrameLatch.countDown();
        }
    }
}

public void abortEncoding() {
    if (mediaCodec == null || mediaMuxer == null) {
        Log.d(TAG, "Failed to abort encoding since it never started");
        return;
    }
    Log.d(TAG, "Aborting encoding");

    mNoMoreFrames = true;
    mAbort = true;
    mEncodeQueue = new ConcurrentLinkedQueue(); // Drop all frames

    synchronized (mFrameSync) {
        if ((mNewFrameLatch != null) && (mNewFrameLatch.getCount() > 0)) {
            mNewFrameLatch.countDown();
        }
    }
}

public void queueFrame(File frame) {
    if (mediaCodec == null || mediaMuxer == null) {
        Log.d(TAG, "Failed to queue frame. Encoding not started");
        return;
    }

    Log.d(TAG, "Queueing frame");
    mEncodeQueue.add(frame);

    synchronized (mFrameSync) {
        if ((mNewFrameLatch != null) && (mNewFrameLatch.getCount() > 0)) {
            mNewFrameLatch.countDown();
        }
    }
}

private void encode() {

    Log.d(TAG, "Encoder started");

    while (true) {
        if (mNoMoreFrames && (mEncodeQueue.size() == 0)) break;

        File frame = mEncodeQueue.poll();
        if (frame == null) {
            synchronized (mFrameSync) {
                mNewFrameLatch = new CountDownLatch(1);
            }

            try {
                mNewFrameLatch.await();
            } catch (InterruptedException e) {
            }

            frame = mEncodeQueue.poll();
        }

        if (frame == null) continue;

        int size = (int) frame.length();
        byte[] bytesNV21 = new byte[size];

        // readFully() keeps reading until the buffer is full,
        // unlike a single read(), which may return fewer bytes
        try (DataInputStream in = new DataInputStream(new FileInputStream(frame))) {
            in.readFully(bytesNV21);
        } catch (IOException e) {
            e.printStackTrace();
        }

        int offsetSize = mWidth * mHeight;
        int byteNV21Offset = offsetSize;
        int u, v, y1, y2, y3, y4;

        //
        int dataToHideLength = 0;
        if (dataToHide != null)
            dataToHideLength = dataToHide.length;

        boolean isLastIndexInserted1 = false;
        boolean isLastIndexInserted2 = false;
        boolean isLastIndexInserted3 = false;

        int uvIndex = 0;
        int frameByteCapacity = ((mWidth * mHeight) / 4) / 20;
        Log.e(TAG, "encode: dataToHideLength: " + dataToHideLength);
        Log.e(TAG, "encode: frameByteCapacity: " + frameByteCapacity);
        //

        // i walks the Y plane; k walks the interleaved chroma samples
        for (int i = 0, k = 0; i < offsetSize; i += 2, k += 2) {
            y1 = bytesNV21[i] & 0xff;
            y2 = bytesNV21[i + 1] & 0xff;
            y3 = bytesNV21[mWidth + i] & 0xff;
            y4 = bytesNV21[mWidth + i + 1] & 0xff;

            u = bytesNV21[byteNV21Offset + k] & 0xff;
            v = bytesNV21[byteNV21Offset + k + 1] & 0xff;


            // frame 1
            // altering u and v for test
            if (mGenerateIndex == 1) {
                int Unew = u & 240;
                int Vnew = v & 240;

                if (uvIndex == 0) {
                    // used in start and end of stego bytes
                    int specialByte1Integer = 17;
                    int specialByte1P1 = specialByte1Integer & 240;
                    int specialByte1P2 = specialByte1Integer & 15;

                    // shift p1 right 4 position
                    specialByte1P1 = specialByte1P1 >> 4;

                    u = Unew | specialByte1P1;
                    v = Vnew | specialByte1P2;

                }

                bytesNV21[byteNV21Offset + k] = (byte) u;
                bytesNV21[byteNV21Offset + k + 1] = (byte) v;
            }


            uvIndex++;

            if (i != 0 && (i + 2) % mWidth == 0)
                i += mWidth;
        }

        long TIMEOUT_USEC = 500000;
        int inputBufIndex = mediaCodec.dequeueInputBuffer(TIMEOUT_USEC);
        long ptsUsec = computePresentationTime(mGenerateIndex, FRAME_RATE);
        if (inputBufIndex >= 0) {
            final ByteBuffer inputBuffer = mediaCodec.getInputBuffers()[inputBufIndex];
            inputBuffer.clear();
            inputBuffer.put(bytesNV21);
            mediaCodec.queueInputBuffer(inputBufIndex, 0, bytesNV21.length, ptsUsec, 0);
            mGenerateIndex++;

            int percentComplete = 70 + (int) ((((double) mGenerateIndex) / (frameCount)) * 30);
            if (_progressBarHandler != null) {
                _progressBarHandler.sendMessage(_progressBarHandler.obtainMessage(percentComplete));
            }
            Log.w("creatingVideo: ", "is:" + percentComplete);
        }
        MediaCodec.BufferInfo mBufferInfo = new MediaCodec.BufferInfo();
        int encoderStatus = mediaCodec.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
        if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
            // no output available yet
            Log.e(TAG, "No output from encoder available");
        } else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            // expected once before any output buffers; start the muxer here
            MediaFormat newFormat = mediaCodec.getOutputFormat();
            mTrackIndex = mediaMuxer.addTrack(newFormat);
            mediaMuxer.start();
        } else if (encoderStatus < 0) {
            Log.e(TAG, "unexpected result from encoder.dequeueOutputBuffer: " + encoderStatus);
        } else if (mBufferInfo.size != 0) {
            ByteBuffer encodedData = mediaCodec.getOutputBuffers()[encoderStatus];
            if (encodedData == null) {
                Log.e(TAG, "encoderOutputBuffer " + encoderStatus + " was null");
            } else {
                encodedData.position(mBufferInfo.offset);
                encodedData.limit(mBufferInfo.offset + mBufferInfo.size);
                mediaMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
                mediaCodec.releaseOutputBuffer(encoderStatus, false);
            }
        }
    }

    release();

    if (mAbort) {
        mOutputFile.delete();
    } else {
        mCallback.onEncodingComplete(mOutputFile);
    }
}

private void release() {
    try {
        if (mediaCodec != null) {
            mediaCodec.stop();
            mediaCodec.release();
            mediaCodec = null;
            Log.d(TAG, "RELEASE CODEC");
        }
        if (mediaMuxer != null) {
            mediaMuxer.stop();
            mediaMuxer.release();
            mediaMuxer = null;
            Log.d(TAG, "RELEASE MUXER");
        }
    } catch (Exception ignored) {
        ErrorManager.getInstance().addErrorMessage("unsupported video file");
        Message res = _processHandler.obtainMessage(ERROR_IN_PROCESS);
        _processHandler.sendMessage(res);
    }
}

private static MediaCodecInfo selectCodec(String mimeType) {
    int numCodecs = MediaCodecList.getCodecCount();
    for (int i = 0; i < numCodecs; i++) {
        MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
        if (!codecInfo.isEncoder()) {
            continue;
        }
        String[] types = codecInfo.getSupportedTypes();
        for (int j = 0; j < types.length; j++) {
            if (types[j].equalsIgnoreCase(mimeType)) {
                return codecInfo;
            }
        }
    }
    return null;
}

private static int selectColorFormat(MediaCodecInfo codecInfo,
                                     String mimeType) {
    MediaCodecInfo.CodecCapabilities capabilities = codecInfo
            .getCapabilitiesForType(mimeType);
    for (int i = 0; i < capabilities.colorFormats.length; i++) {
        int colorFormat = capabilities.colorFormats[i];
        if (isRecognizedFormat(colorFormat)) {
            return colorFormat;
        }
    }
    return 0; // no supported color format found
}

private static boolean isRecognizedFormat(int colorFormat) {
    switch (colorFormat) {
        // these are the formats we know how to handle
        case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar:
        case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedPlanar:
        case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar:
        case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedSemiPlanar:
        case MediaCodecInfo.CodecCapabilities.COLOR_TI_FormatYUV420PackedSemiPlanar:
            return true;
        default:
            return false;
    }
}

private long computePresentationTime(long frameIndex, int framerate) {
    return 132 + frameIndex * 1000000 / framerate;
}
}

The output video is created without any problem, but MediaCodec changes the altered test value and I cannot retrieve it.

Here is my question: is this the right approach for doing video steganography on Android? If it is not, can you please make a suggestion?

Steganography comes with a prerequisite: lossless encoding.

None of the video codecs available on Android supports lossless encoding, as of now.

So I'm afraid your LSBs will never survive the encode/decode cycle.

Suggestion: if you don't have too many frames, use a lossless format instead. You could encode your frames as a sequence of PNG images.
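A minimal sketch of what the PNG route looks like, using desktop `java.awt.image.BufferedImage` types for illustration; on Android the equivalent would be `android.graphics.Bitmap` together with `Bitmap.compress(Bitmap.CompressFormat.PNG, ...)`. Because PNG is lossless, the LSBs survive a save/load cycle, unlike H.264:

```java
import java.awt.image.BufferedImage;

// Sketch: embed one byte in the blue-channel LSBs of the first 8 pixels
// of an image. With a lossless container such as PNG, these bits are
// preserved exactly through a save/load cycle.
public final class PngLsb {
    public static void embedByte(BufferedImage img, int value) {
        for (int bit = 0; bit < 8; bit++) {
            int rgb = img.getRGB(bit, 0);
            int b = (rgb & 0xFE) | ((value >> bit) & 1); // replace blue LSB
            img.setRGB(bit, 0, (rgb & 0xFFFFFF00) | b);
        }
    }

    public static int extractByte(BufferedImage img) {
        int value = 0;
        for (int bit = 0; bit < 8; bit++) {
            value |= (img.getRGB(bit, 0) & 1) << bit;
        }
        return value;
    }
}
```

To persist a frame you would write the image with `ImageIO.write(img, "png", file)` and read it back with `ImageIO.read(file)`; the embedded byte extracts unchanged because no quantization happens.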
