How do I limit resources for ffmpeg, called from a python-script, running in a docker container?

I deployed a service that periodically does video encoding on my server, and every time it runs, all other services slow down significantly. The encoding is hidden under multiple layers of abstraction, so limiting any of those layers would be fine (e.g. limiting the docker container would work just as well as limiting the ffmpeg subprocess).

My Stack:

  1. VPS (ubuntu:zesty)
  2. docker-compose
  3. docker-container (ubuntu:zesty)
  4. python
  5. ffmpeg (via subprocess.check_call() in python)

What I want to limit:

  • CPU: single core
  • RAM: max 2 GB
  • HDD: max 4 GB

It would be possible to recompile ffmpeg if needed.

What would be the right place in this stack to put these limits?

You can do it easily with your docker compose file :)

https://docs.docker.com/compose/compose-file/#resources

Just use the limits keyword and set your CPU usage!

Your best bet is to write a small set of scripts around cgroups, either on standalone Linux or alongside Docker containers.

For the former, this is basically done by creating a new cgroup, specifying resources for it, and then moving your main process PID into the created cgroup. Detailed instructions are at https://www.cloudsigma.com/manage-docker-resources-with-cgroups/ .

For the latter, see https://www.cloudsigma.com/manage-docker-resources-with-cgroups/
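For the standalone-cgroup approach, a rough sketch using the libcgroup tools (cgcreate / cgset / cgexec) could look like the following; the group name ffmpeglimit, the quota values and the file names are placeholders rather than anything from the question, and this assumes a cgroup-v1 host with libcgroup installed:

# create a cgroup with CPU and memory controllers
sudo cgcreate -g cpu,memory:/ffmpeglimit
# quota equal to the period limits the group to roughly one core
sudo cgset -r cpu.cfs_period_us=100000 ffmpeglimit
sudo cgset -r cpu.cfs_quota_us=100000 ffmpeglimit
# cap memory at 2 GB
sudo cgset -r memory.limit_in_bytes=2G ffmpeglimit
# run the encode inside the cgroup
sudo cgexec -g cpu,memory:ffmpeglimit ffmpeg -i input.mp4 output.mp4

If the ffmpeg process is spawned deeper in the stack, you can instead create the group up front and write the PID of the already-running process into its cgroup.procs file.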

In plain docker you can achieve each of these limits with command line options:

A container can be limited to a single CPU core (or a hyperthread on current Intel hardware):

docker run \
  --cpus 1 \
  image

or limited by Docker's CPU shares, which default to 1024. This will only help if most of the tasks being slowed down are also in Docker containers, so that they are allocated Docker shares as well.

docker run \
  --cpu-shares 512 \
  image

Limiting memory is a bit finicky as your process will just crash if it hits the limit.

docker run \
  --memory-reservation 2000m \
  --memory 2048m \
  --memory-swap 2048m \
  image

Block or device IO matters more for performance than total disk space. It can be limited per device, so if you keep the data for your conversion on a specific device:

docker run \
  --volume /something/on/sda:/conversion \
  --device-read-bps /dev/sda:2mb \
  --device-read-iops /dev/sda:1024 \
  --device-write-bps /dev/sda:2mb \
  --device-write-iops /dev/sda:1024 \
  image 

If you want to limit total disk usage as well, you will need the correct storage setup. Quotas are supported on the devicemapper, btrfs and zfs storage drivers, and also with the overlay2 driver when it is used on an xfs file system mounted with the pquota option.

docker run \
  --storage-opt size=120G \
  image

Compose/Service

Docker Compose v3 has abstracted some of these concepts away into what can be applied to a service/swarm, so you don't get the same fine-grained control.

For a v3 file, use the resources object to configure limits and reservations for CPU and memory:

services:
  blah:
    image: blah
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 2048M
        reservations:
          memory: 2000M

Disk based limits might need a volume driver that supports setting limits.

If you can go back to a v2.2 Compose file, you can use the full range of constraints on a container at the base level of the service, which are analogous to the docker run options:

cpu_count, cpu_percent, cpu_shares, cpu_quota, cpus, cpuset, mem_limit, memswap_limit, mem_swappiness, mem_reservation, oom_score_adj, shm_size
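A rough sketch of how the question's limits might map onto a version 2.2 file (the service and image names are placeholders, not from the question):

version: '2.2'
services:
  encoder:
    image: my-encoder-image   # placeholder image
    cpus: 1                   # roughly one core
    mem_limit: 2048m          # hard memory cap
    mem_reservation: 2000m    # soft memory reservation
    memswap_limit: 2048m      # memory + swap; equal to mem_limit means no extra swap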

What I want to limit:

CPU: single core

RAM: max 2 GB

HDD: max 4 GB

Other answers have tackled this from the perspective of Docker, which may actually be your best approach in this situation, but here is a little more insight on ffmpeg itself:

General

There is no ffmpeg option for specifically limiting CPU, RAM or HDD. You have to know quite a lot about transcoding to hit metrics as specific as the ones you're requesting, and without any information on the input and output files it's impossible to give specific advice. Encoding and decoding take varying resources depending on what they are converting from and to.

CPU

The closest thing you have here is the -threads option, which limits the total number of threads (not CPU cores) used; you can also supply 0 to allow the maximum number of threads. Again, different encoders/decoders/codecs have different limitations here.
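As an illustration (the file names and codec are placeholders, not from the question), forcing a single encoder thread could look like:

ffmpeg -i input.mp4 -threads 1 -c:v libx264 output.mp4

Note that -threads placed before -i applies to decoding the input, while placed among the output options it applies to encoding.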

RAM

No luck here either; again, it depends on your media and codec choices.

HDD

I haven't done this before, but take a look at this article. If that doesn't work, you need to work out your overall output bitrate and compare it to the input video duration. The -t option can be used to limit an output based on duration (or to limit reading from an input).
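Not mentioned in the answer, but possibly useful here: ffmpeg also has a -fs output option that stops writing once the output file reaches a given size in bytes. A hypothetical invocation capping the output near 4 GB (file names are placeholders) would be:

ffmpeg -i input.mp4 -fs 4000000000 output.mp4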

Lastly

... all other services slow down significantly

This is expected; ffmpeg tries to take up as much of your machine's resources as the transcode will allow. The best bet is to move transcodes to a separate server, especially considering it is already in a docker container.
