
DJI mobile SDK Mavic 2 Pro - Live Stream 4k

I'm new here on Stack Overflow and I hope someone can help me.

I developed an application with the DJI Mobile SDK and implemented a live stream. The problem is that the resolution of the live stream is not 4K, and I need 4K. I think the drone provides a secondary stream for the live preview. Is it possible to switch from the secondary stream to the primary stream, which has 4K resolution? And if it is possible, how can I do that? Or is it simply possible to increase the resolution of the live stream / secondary stream?

Here is my current implementation:

Initialization of the surface texture element for the live stream preview:

SurfaceTextureListener surfaceTextureListener = new SurfaceTextureListener(getApplicationContext());
this.videoStreamPreviewTtView.setSurfaceTextureListener(surfaceTextureListener);

This is my listener:

public class SurfaceTextureListener implements TextureView.SurfaceTextureListener {

    private final Context context;

    public SurfaceTextureListener(Context context) {
        this.context = context;
    }

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        if (DroneControl.getCodecManager() == null) {
            // Attach the DJI decoder to this surface and request raw YUV frames.
            DroneControl.setCodecManager(new DJICodecManager(this.context, surface, width, height));
            DroneControl.getCodecManager().resetKeyFrame();
            DroneControl.getCodecManager().enabledYuvData(true);
            DroneControl.getCodecManager().setYuvDataCallback(new LiveStreamDataCallback(this.context));
        }
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        if (DroneControl.getCodecManager() != null) {
            DroneControl.getCodecManager().cleanSurface();
            DroneControl.setCodecManager(null);
        }
        return false;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    }
}

And here is my callback function:

public class LiveStreamDataCallback implements Base, DJICodecManager.YuvDataCallback {

    private final Context context;
    private long lastUpdate;

    public LiveStreamDataCallback(Context context) {
        this.context = context;
        this.lastUpdate = System.currentTimeMillis();
    }

    @Override
    public void onYuvDataReceived(MediaFormat format, final ByteBuffer yuvFrame, int dataSize, final int width, final int height) {
        long differenceInMillis = System.currentTimeMillis() - this.lastUpdate;

        // Throttle captures to one frame per SCREEN_SHOT_PERIOD; without
        // resetting lastUpdate here, every frame after the first period
        // would be captured.
        if (differenceInMillis > SCREEN_SHOT_PERIOD && yuvFrame != null) {
            this.lastUpdate = System.currentTimeMillis();
            final byte[] bytes = new byte[dataSize];
            yuvFrame.get(bytes);
            newSaveYuvDataToJPEG(bytes, width, height);
        }
    }

    private void newSaveYuvDataToJPEG(byte[] yuvFrame, int width, int height) {
        // A full YUV420 frame is width * height * 3 / 2 bytes (Y plane plus
        // two quarter-size chroma planes); anything shorter is incomplete.
        if (yuvFrame.length < width * height * 3 / 2) {
            return;
        }
        int length = width * height;

        // The buffer arrives as planar YUV420 (Y plane, then U plane, then
        // V plane); YuvImage needs NV21, i.e. chroma interleaved as V,U pairs.
        byte[] u = new byte[width * height / 4];
        byte[] v = new byte[width * height / 4];

        for (int i = 0; i < u.length; i++) {
            u[i] = yuvFrame[length + i];
            v[i] = yuvFrame[length + u.length + i];
        }
        for (int i = 0; i < u.length; i++) {
            yuvFrame[length + 2 * i] = v[i];
            yuvFrame[length + 2 * i + 1] = u[i];
        }
        screenShot(yuvFrame, width, height);
    }

    private void screenShot(byte[] buf, int width, int height) {
        ByteArrayOutputStream bOutput = new ByteArrayOutputStream();

        YuvImage yuvImage = new YuvImage(buf,
            ImageFormat.NV21,
            width,
            height,
            null);

        yuvImage.compressToJpeg(new Rect(0,
            0,
            width,
            height), 100, bOutput);

        insertIntoDB(Base64.getEncoder().encodeToString(bOutput.toByteArray()));
    }

    private void insertIntoDB(String base64EncodedContent) {
        //only a limit of images will be saved inside the DB to avoid using too much space!
        DatabaseUtil.reduceTableContentToMaxContentIfNecessary(this.context, ScreenShotModel.ScreenShotEntry.TABLE_NAME, MAX_KEEP_COUNT_FOR_LIVE_STREAM_SCREEN_SHOTS);

        Date now = new Date();
        SimpleDateFormat dateFormat = new SimpleDateFormat(DATE_FORMAT_FOR_LOGGING, Locale.GERMANY);
        SQLiteDatabase db = DroneControl.getDbWriteAccess(this.context);

        //create a new map of values, where column names are the keys
        ContentValues values = new ContentValues();
        values.put(ScreenShotModel.ScreenShotEntry.COLUMN_NAME_DATA, base64EncodedContent);
        values.put(ScreenShotModel.ScreenShotEntry.COLUMN_NAME_CREATED, dateFormat.format(now));

        db.insert(ScreenShotModel.ScreenShotEntry.TABLE_NAME, null, values);
    }
}
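To make the intent of newSaveYuvDataToJPEG clearer, here is a standalone sketch of the same chroma reordering (plain Java, no Android dependencies; the class and method names YuvReorder / i420ToNv21 are only illustrative): it takes a planar YUV420 buffer (Y plane, then the full U plane, then the full V plane) and rewrites the chroma section as interleaved V,U pairs, which is the NV21 layout that android.graphics.YuvImage expects.

```java
// Standalone sketch of the chroma reordering done in newSaveYuvDataToJPEG.
public class YuvReorder {

    public static byte[] i420ToNv21(byte[] frame, int width, int height) {
        int lumaLength = width * height;      // size of the Y plane
        int chromaLength = lumaLength / 4;    // each chroma plane is 1/4 of Y

        byte[] u = new byte[chromaLength];
        byte[] v = new byte[chromaLength];
        System.arraycopy(frame, lumaLength, u, 0, chromaLength);
        System.arraycopy(frame, lumaLength + chromaLength, v, 0, chromaLength);

        // NV21 interleaves the chroma as V,U,V,U,...
        for (int i = 0; i < chromaLength; i++) {
            frame[lumaLength + 2 * i] = v[i];
            frame[lumaLength + 2 * i + 1] = u[i];
        }
        return frame;
    }

    public static void main(String[] args) {
        // Tiny 2x2 frame: 4 luma bytes, then 1 U byte, then 1 V byte.
        byte[] frame = {10, 20, 30, 40, 50, 60};
        byte[] nv21 = i420ToNv21(frame, 2, 2);
        // The chroma section is now interleaved as V,U.
        System.out.println(nv21[4] + "," + nv21[5]); // prints 60,50
    }
}
```

In the real callback the reordering is done in place on the frame buffer, which avoids allocating a second full-size array per frame.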

I hope someone has a solution for my problem.

Thank you very much!

Best regards

Can't be done. 1080p is the maximum; OcuSync can't go any higher than that. The bandwidth isn't high enough, and the hardware in the drone doesn't support it.

I don't know any way to do what you ask. The only thing you can do is take a still image and download it. That will be slow, of course, but it can be used for image recognition, for example. You don't say what you are going to use it for, but since you seem to be looking at frames, that may be a (slow) solution.

From the DJI web site: "OcuSync: Part of the Lightbridge family, DJI's newly developed OcuSync transmission system performs far better than Wi-Fi transmission at all transmission speeds. OcuSync also uses more effective digital compression and channel transmission technologies, allowing it to transmit HD video reliably even in environments with strong radio interference. Compared to traditional analog transmission, OcuSync can transmit video at 720p and 1080p - equivalent to a 4-10 times better quality, without a color cast, static interference, flickering or other problems associated with analog transmission. Even when using the same amount of radio transmission power, OcuSync transmits further than analog at 4.1 mi (7 km)."

Thank you very much for your answer and your help!

Actually, I tried capturing 4K images first and transferring them. As you said, I am trying to use the frames for object detection, and I need the best performance I can get. Capturing and transferring images is very time-consuming: I need approximately 1.5 seconds to capture and save the image, 3 more seconds to transfer it, and, the biggest surprise for me, reading the SD card to find the newest image takes almost 11 seconds (I tried different SD cards, up to class 10). In sum, the whole process for one image takes 15.5 seconds, which is way too long for my purpose…

Then I thought I could do it with the live stream. The whole process with the live stream takes 500 milliseconds, which is an acceptable value for my project. It is very sobering that this seems to be impossible…

Maybe there is still another way to transfer images from the drone very fast?

I get stills from the live stream, but in another way. I use the FPV widget (the live stream is shown on the device) and read the bitmap directly from the widget. This way you don't have to handle so much data in Java.

I even do it from Python, and with some quirks I got it into OpenCV (cv2) without any marshalling. I can read out 100 frames/second in Python, so Java should be at least that fast. You might reconsider your way of doing it: try to avoid repacking the image data. It's still only 1080p, but the quality is very good, I must say.

<dji.ux.widget.FPVWidget
    android:id="@+id/fpv_widget"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_centerInParent="true"
    custom:sourceCameraNameVisibility="true" />

public Bitmap getFrameBitmap() {
    fpvWidget = findViewById(R.id.fpv_widget);
    return fpvWidget.getBitmap();
}
