
How to take a screenshot of desktop fast with Java in Windows (ffmpeg, etc.)?

I would like to take a screenshot of my machine from Java, using FFmpeg or some other solution. I know this works on Linux with ffmpeg and without JNI, but the same approach does not work on Windows and may require JNI. Is there a sample of a simple Java class (plus anything else necessary) that captures a screenshot and runs in a Windows environment? Is there an alternative to FFmpeg? I want to capture at a higher rate than the Java Robot API allows; Robot does work for taking screenshots, but it is slower than I would like.

I know that in Linux this works very fast:

import com.googlecode.javacv.*;

public class ScreenGrabber {
    public static void main(String[] args) throws Exception {
        int x = 0, y = 0, w = 1024, h = 768;
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(":0.0+" + x + "," + y);
        grabber.setFormat("x11grab");
        grabber.setImageWidth(w);
        grabber.setImageHeight(h);
        grabber.start();

        CanvasFrame frame = new CanvasFrame("Screen Capture");
        while (frame.isVisible()) {
            frame.showImage(grabber.grab());
        }
        frame.dispose();
        grabber.stop();
    }
}

This does not work in a Windows environment. I am not sure whether there is some way to keep this same code but use javacpp to actually get it working without having to change much of the code above.

The goal is to take screenshots of the screen fast, but then stop after taking a screenshot that is "different", i.e. the screen changed because of some event, such as a window being closed.
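A frame-difference check like the one described above can be sketched independently of the capture backend. This is a minimal, illustrative version (the class name, method names, and the 1% threshold are my own choices, not from any library):

```java
import java.awt.image.BufferedImage;

public class ScreenChange {

    // Fraction of pixels (in [0, 1]) that differ between two equally sized frames.
    public static double fractionChanged(BufferedImage a, BufferedImage b) {
        int w = a.getWidth(), h = a.getHeight();
        long changed = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    changed++;
                }
            }
        }
        return changed / (double) ((long) w * h);
    }

    // True once the new frame differs "enough" from the previous one.
    // The 1% threshold is illustrative; tune it to the events you care about.
    public static boolean screenChanged(BufferedImage previous, BufferedImage current) {
        return fractionChanged(previous, current) > 0.01;
    }
}
```

In the capture loop, keep the last frame and break as soon as screenChanged(previous, current) returns true; this works the same whether the frames come from Robot, JNA, or ffmpeg.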

Using the built-in Robot class is much easier than the other Java libraries and should probably fit your needs.

If you need smooth video at >= 30 fps (more than 30 screenshots per second), first try the Robot approach combined with the performance improvement of storing the screenshots asynchronously.

If that does not work for you, try JNA; even though it is more complex, it is almost guaranteed to be fast enough for smooth screen capturing.

Approach with Robot

The Robot class is indeed capable of doing what you want; the problem most Robot-based screen-capturing approaches have is the saving of the screenshots. An approach could look like this: loop over the createScreenCapture() method, grabbing the screen into a BufferedImage; convert it to a byte array; then save it to the target file with an asynchronous file writer, adding the Future reference for each image to an ArrayList so the loop can keep capturing while the image data is being stored.

// Pseudo code
while (capturing)
{
    grab a BufferedImage (screen capture) from the screen
    convert the BufferedImage to a byte array
    start an asynchronous file channel to write to the output file
      and add the future reference (return value) to the ArrayList
}
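Filled in as actual Java, that loop might look like the sketch below. The frame count and file names are illustrative; the Robot capture in main needs a display, while writeFrameAsync itself works with any BufferedImage:

```java
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import javax.imageio.ImageIO;

public class AsyncScreenRecorder {

    // Encodes the frame and starts an asynchronous write, returning the
    // Future so the capture loop can continue without waiting for the disk.
    public static Future<Integer> writeFrameAsync(BufferedImage frame, Path target)
            throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ImageIO.write(frame, "png", bytes); // convert BufferedImage -> byte[]
        AsynchronousFileChannel channel = AsynchronousFileChannel.open(
                target, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        return channel.write(ByteBuffer.wrap(bytes.toByteArray()), 0);
    }

    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        List<Future<Integer>> pendingWrites = new ArrayList<>();
        int frames = 100; // illustrative frame count
        for (int i = 0; i < frames; i++) {
            BufferedImage capture = robot.createScreenCapture(screen);
            pendingWrites.add(writeFrameAsync(capture, Paths.get("frame-" + i + ".png")));
        }
        // Wait for the outstanding writes before exiting (channels are left
        // for the JVM to close in this sketch).
        for (Future<Integer> write : pendingWrites) {
            write.get();
        }
    }
}
```

Note that PNG encoding still happens on the capture thread here; only the disk write is asynchronous. If encoding becomes the bottleneck, it can be pushed onto an ExecutorService as well.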

Approach with JNA

Original question: How to take screenshots fast in Java?

Since it is bad practice to just link, I will post the example here:

import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferInt;
import java.awt.image.DataBufferUShort;
import java.awt.image.DirectColorModel;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

import com.sun.jna.Native;
import com.sun.jna.platform.win32.W32API;
import com.sun.jna.win32.W32APIOptions;

public class JNAScreenShot
{

    public static BufferedImage getScreenshot(Rectangle bounds)
    {
        W32API.HDC windowDC = GDI.GetDC(USER.GetDesktopWindow());
        W32API.HBITMAP outputBitmap = GDI.CreateCompatibleBitmap(windowDC, bounds.width, bounds.height);
        try
        {
            W32API.HDC blitDC = GDI.CreateCompatibleDC(windowDC);
            try
            {
                W32API.HANDLE oldBitmap = GDI.SelectObject(blitDC, outputBitmap);
                try
                {
                    GDI.BitBlt(blitDC, 0, 0, bounds.width, bounds.height, windowDC, bounds.x, bounds.y, GDI32.SRCCOPY);
                }
                finally
                {
                    GDI.SelectObject(blitDC, oldBitmap);
                }
                GDI32.BITMAPINFO bi = new GDI32.BITMAPINFO(40);
                bi.bmiHeader.biSize = 40;
                boolean ok = GDI.GetDIBits(blitDC, outputBitmap, 0, bounds.height, (byte[]) null, bi, GDI32.DIB_RGB_COLORS);
                if (ok)
                {
                    GDI32.BITMAPINFOHEADER bih = bi.bmiHeader;
                    bih.biHeight = -Math.abs(bih.biHeight);
                    bi.bmiHeader.biCompression = 0;
                    return bufferedImageFromBitmap(blitDC, outputBitmap, bi);
                }
                else
                {
                    return null;
                }
            }
            finally
            {
                GDI.DeleteObject(blitDC);
            }
        }
        finally
        {
            GDI.DeleteObject(outputBitmap);
        }
    }

    private static BufferedImage bufferedImageFromBitmap(GDI32.HDC blitDC, GDI32.HBITMAP outputBitmap, GDI32.BITMAPINFO bi)
    {
        GDI32.BITMAPINFOHEADER bih = bi.bmiHeader;
        int height = Math.abs(bih.biHeight);
        final ColorModel cm;
        final DataBuffer buffer;
        final WritableRaster raster;
        int strideBits = (bih.biWidth * bih.biBitCount);
        int strideBytesAligned = (((strideBits - 1) | 0x1F) + 1) >> 3;
        final int strideElementsAligned;
        switch (bih.biBitCount)
        {
            case 16:
                strideElementsAligned = strideBytesAligned / 2;
                cm = new DirectColorModel(16, 0x7C00, 0x3E0, 0x1F);
                buffer = new DataBufferUShort(strideElementsAligned * height);
                raster = Raster.createPackedRaster(buffer, bih.biWidth, height, strideElementsAligned, ((DirectColorModel) cm).getMasks(), null);
                break;
            case 32:
                strideElementsAligned = strideBytesAligned / 4;
                cm = new DirectColorModel(32, 0xFF0000, 0xFF00, 0xFF);
                buffer = new DataBufferInt(strideElementsAligned * height);
                raster = Raster.createPackedRaster(buffer, bih.biWidth, height, strideElementsAligned, ((DirectColorModel) cm).getMasks(), null);
                break;
            default:
                throw new IllegalArgumentException("Unsupported bit count: " + bih.biBitCount);
        }
        final boolean ok;
        switch (buffer.getDataType())
        {
            case DataBuffer.TYPE_INT:
            {
                int[] pixels = ((DataBufferInt) buffer).getData();
                ok = GDI.GetDIBits(blitDC, outputBitmap, 0, raster.getHeight(), pixels, bi, 0);
            }
                break;
            case DataBuffer.TYPE_USHORT:
            {
                short[] pixels = ((DataBufferUShort) buffer).getData();
                ok = GDI.GetDIBits(blitDC, outputBitmap, 0, raster.getHeight(), pixels, bi, 0);
            }
                break;
            default:
                throw new AssertionError("Unexpected buffer element type: " + buffer.getDataType());
        }
        if (ok)
        {
            return new BufferedImage(cm, raster, false, null);
        }
        else
        {
            return null;
        }
    }

    private static final User32 USER = User32.INSTANCE;

    private static final GDI32 GDI = GDI32.INSTANCE;

}

interface GDI32 extends com.sun.jna.platform.win32.GDI32
{
    GDI32 INSTANCE = (GDI32) Native.loadLibrary(GDI32.class);

    boolean BitBlt(HDC hdcDest, int nXDest, int nYDest, int nWidth, int nHeight, HDC hdcSrc, int nXSrc, int nYSrc, int dwRop);

    HDC GetDC(HWND hWnd);

    boolean GetDIBits(HDC dc, HBITMAP bmp, int startScan, int scanLines, byte[] pixels, BITMAPINFO bi, int usage);

    boolean GetDIBits(HDC dc, HBITMAP bmp, int startScan, int scanLines, short[] pixels, BITMAPINFO bi, int usage);

    boolean GetDIBits(HDC dc, HBITMAP bmp, int startScan, int scanLines, int[] pixels, BITMAPINFO bi, int usage);

    int SRCCOPY = 0xCC0020;
}

interface User32 extends com.sun.jna.platform.win32.User32
{
    User32 INSTANCE = (User32) Native.loadLibrary(User32.class, W32APIOptions.UNICODE_OPTIONS);

    HWND GetDesktopWindow();
}

More information and approaches

See also

You will need to use JNI or JNA to call some combination of CreateCompatibleBitmap, XGetImage, DirectX, or OpenGL to grab a screenshot and then copy the raw bitmap data back to Java. My profiling showed a speed-up of about 400% over the Robot class when accessing raw bitmap data on X11. I have not tested other platforms at this time. Some very early code is available here, but I haven't had much time to work on it recently.

According to the official ffmpeg documentation, you should be able to keep this fairly cross-platform if you make the file parameter passed to FFmpegFrameGrabber (which is really an input parameter that gets passed down as the -i option to ffmpeg) adhere to the format each device expects.

That is:

For Windows: dshow expects -i video="screen-capture-recorder"

For OSX: avfoundation expects -i "<screen device index>":

For Linux: x11grab expects -i :<display id>+<x>,<y>.

So just passing those values (the arguments to -i) to the constructor and setting the format accordingly (via setFormat) should do the trick.
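As a sketch, that per-OS selection can be isolated in a small helper. The class and method names here are my own, the Windows device name assumes the third-party screen-capture-recorder filter is installed, and the OSX screen index and X11 display values depend on the local setup:

```java
public class ScreenDevice {

    // ffmpeg input format for the given OS name
    // (as reported by System.getProperty("os.name")).
    public static String formatFor(String osName) {
        String os = osName.toLowerCase();
        if (os.contains("win")) return "dshow";
        if (os.contains("mac")) return "avfoundation";
        return "x11grab";
    }

    // The -i argument for the given OS. The Windows device name requires
    // the third-party "screen-capture-recorder" DirectShow filter.
    public static String inputFor(String osName, int screenIndex, String display, int x, int y) {
        String os = osName.toLowerCase();
        if (os.contains("win")) return "video=screen-capture-recorder";
        if (os.contains("mac")) return screenIndex + ":";
        return display + "+" + x + "," + y;
    }
}
```

With this helper, constructing the grabber as new FFmpegFrameGrabber(ScreenDevice.inputFor(...)) followed by grabber.setFormat(ScreenDevice.formatFor(...)) leaves the rest of the Linux example above unchanged.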

Are you familiar with Xuggler? It uses FFmpeg for encoding and decoding. I got to know it a few months ago when I had to extract frames from a video, and it worked smoothly.

On the official website you can find some examples, including one called "CaptureScreenToFile.java". For more information follow these links:

http://www.xuggle.com/xuggler/

https://github.com/artclarke/xuggle-xuggler/tree/master/src/com/xuggle/xuggler/demos
