How to take screenshot (high fps) in Linux (programming)

First of all, I want to say that I've been reading a lot about this and I've learned several ways to do it, but I haven't been able to get it working in Linux.

My project is an Ambilight-style setup with an Arduino, so I need to take a screenshot of the desktop and analyze its colour.

At the beginning I used Processing 2.0 with the 'Robot' class from 'java.awt'. Initially I could only capture 5 frames per second; later I got it up to 13 fps. This works, but I want more performance, so I started reading.

On Windows or Mac there are libraries that let you access the framebuffer directly, so you can take screenshots really 'easily' and really fast.

On Ubuntu I have tried Python with GTK, PIL, Qt... and the fastest is GTK, but I still only get about 15 fps.
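For reference, this is roughly the kind of GTK grab I've been timing (a minimal sketch, assuming GTK 3 via PyGObject):

# Minimal sketch of the GTK-based grab, assuming GTK 3 / PyGObject.
import gi
gi.require_version('Gdk', '3.0')
from gi.repository import Gdk

root = Gdk.get_default_root_window()           # the whole desktop
w, h = root.get_width(), root.get_height()
pixbuf = Gdk.pixbuf_get_from_window(root, 0, 0, w, h)
pixels = pixbuf.get_pixels()                   # raw RGB bytes, row by row
print(len(pixels), 'bytes grabbed')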

My problem is: I want to do this cross-platform, but I'd prefer my program to work on Linux first and then on Windows (which I don't like too much :P).

So, the first question: is Python able to offer that performance? Because I think C++ might be a better option.

And the second question: what do I need to do it? I've read about Xlib (X11), but I can't find documentation that shows me how to take a screenshot. I also know about FFmpeg, for example, which is a powerful tool, but I don't know how to use it for this.
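From the scraps I've found, I think the Xlib route would look roughly like this (a rough python-xlib sketch; untested, so I don't know how fast it is):

# Rough sketch with python-xlib: grab the raw pixels of the root window.
from Xlib import display, X

dsp = display.Display()
root = dsp.screen().root
geom = root.get_geometry()
# ZPixmap returns the packed pixel data for the requested area
raw = root.get_image(0, 0, geom.width, geom.height, X.ZPixmap, 0xffffffff)
print(len(raw.data), 'bytes of pixel data')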

I hope you can help me (and excuse me if I've made any mistakes).

Making this work cross-platform is likely to be quite a bit of work. If your final target is Windows, then why not use the Amblone project, which seems to do exactly what you want?

http://amblone.com/guide

At any rate, here is a solution with ffmpeg & GraphicsMagick that is pretty fast (on my i7, 8 GB laptop). ffmpeg captures exactly one frame of the screen, scales it down to a small square (32x32 here), and pipes the output to GraphicsMagick's gm convert, where it is resized to a 1x1 pixel and the image's RGB values are reported.

#!/bin/bash

mkfifo /tmp/screencap.fifo

while true
    do
        # this version will send the info to a fifo
        # ffmpeg -y -loglevel error -f x11grab -s 1920x1080 -i :0.0 -s 32x32 \
        # -vframes 1 -f image2 -threads 2 - |  gm convert - -resize 1x1 \
        # txt:- > /tmp/screencap.fifo

        # this version will write out the info to the command line
        # and will show you what is going on.
        ffmpeg -y -loglevel error -f x11grab -s 1920x1080 -i :0.0 -s 32x32 \
         -vframes 1 -f image2 -threads 2 - |  gm convert - -resize 1x1 txt:-
    done
exit
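If you go with the fifo version (the commented-out block), a minimal sketch of a consumer would look something like this (assuming Python 3; each pass of the shell loop closes the fifo, so the reader simply reopens it):

# Minimal sketch: read the capture results back out of the fifo.
while True:
    with open('/tmp/screencap.fifo') as fifo:
        for line in fifo:
            line = line.strip()
            if line and not line.startswith('#'):
                print('got:', line)   # e.g. "0,0: ( 62, 63, 63) #3E3F3F"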

Either version will give you something like the following:

0,0: ( 62, 63, 63) #3E3F3F
0,0: (204,205,203) #CCCDCB
0,0: ( 77, 78, 76) #4D4E4C

The 0,0 is the location of the pixel being read. The numbers in parentheses are the respective R,G,B values, and the number at the end is your typical HTML-esque hex value. In the case above there is only 1 pixel, but if you wanted the cardinal directions as generalized RGB values, you could simply change the -resize 1x1 part above to -resize 3x3 and you'd get something like:

0,0: ( 62, 63, 65) #3E3F41
1,0: ( 90, 90, 91) #5A5A5B
2,0: (104,105,106) #68696A
0,1: ( 52, 51, 52) #343334
1,1: ( 60, 60, 59) #3C3C3B
2,1: ( 64, 64, 64) #404040
0,2: ( 49, 49, 50) #313132
1,2: ( 60, 60, 60) #3C3C3C
2,2: ( 65, 65, 65) #414141

I'll leave it to you to pass that information to your Arduino.
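As a starting point, here is a rough sketch of how you might parse those lines and forward the raw RGB bytes over serial (assuming the script above is saved as screencap.sh and pyserial is installed; the port name and baud rate are placeholders you will need to adapt):

# Rough sketch: parse gm's txt lines from stdin and push RGB bytes to the Arduino.
# Run it as:  ./screencap.sh | python3 to_arduino.py
import re
import sys
import serial   # pyserial

PIXEL = re.compile(r'(\d+),(\d+):\s*\(\s*(\d+),\s*(\d+),\s*(\d+)\)')

port = serial.Serial('/dev/ttyUSB0', 115200)   # placeholder port/baud
for line in sys.stdin:
    m = PIXEL.match(line)
    if m:
        x, y, r, g, b = map(int, m.groups())
        port.write(bytes([r, g, b]))           # the framing/protocol is up to you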

ffmpeg is great, but you'll have to remember to swap out the screen-capture bit (here in my example, -f x11grab) for whatever your Windows system uses. Here is a SO link that goes into a bit more detail.

If you really insist on making something cross-platform, then I would recommend diving into OpenCV with Python bindings, using the framebuffer device as a video input, scaling the result down to 1x1 pixel, and using the resulting colour average to drive your PWM through some type of UDP broadcast.
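To sketch what I mean (very roughly; OpenCV itself has no portable screen-grab call, so this example substitutes the mss package for the capture step, and the broadcast address and port are just placeholders):

# Very rough sketch: grab the screen, average it to one pixel, broadcast it over UDP.
import socket
import cv2
import numpy as np
import mss

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

with mss.mss() as sct:
    monitor = sct.monitors[1]                  # primary screen
    while True:
        frame = np.array(sct.grab(monitor))    # BGRA pixel array
        avg = cv2.resize(frame, (1, 1), interpolation=cv2.INTER_AREA)
        b, g, r = int(avg[0, 0, 0]), int(avg[0, 0, 1]), int(avg[0, 0, 2])
        sock.sendto(bytes([r, g, b]), ('255.255.255.255', 4210))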
