

Increase Paho-MQTT publishers running on Azure Container Instances (Locust Load Test)

We are trying to run a distributed Locust MQTT test using Azure Container Instances (ACI) and the Python Paho-MQTT library. We can't run more than 340 clients per worker before hitting:

OSError: [Errno 24] Too many open files.

The problem is related to the following issues:

With Docker, the soft and hard limits can be changed using --ulimit, but ACI does not accept Docker arguments.

We changed the ACI entry point to increase the open-files soft limit by running the following bash script:

#!/bin/bash
# Raise the soft open-files limit before starting Locust.
ulimit -Sn 10000

locust

We added the following to locustfile.py:

import resource

resource.setrlimit(resource.RLIMIT_NOFILE, (200000, 200000))
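For reference, a slightly more defensive variant of that call (a minimal sketch, not from the original post): an unprivileged process can only raise its soft limit up to the current hard limit, so the requested value has to be capped there or setrlimit fails.

import resource

# Current soft/hard limits on open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# An unprivileged process may raise the soft limit only up to the existing
# hard limit; raising the hard limit itself requires CAP_SYS_RESOURCE.
target = 200_000 if hard == resource.RLIM_INFINITY else min(200_000, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))

print(f"RLIMIT_NOFILE soft limit: {soft} -> {target} (hard: {hard})")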

We also tried to run the following commands:

sudo sysctl -w fs.file-max=500000

sysctl -p

But it returns a permission-denied error: fs.file-max is a host-level kernel parameter and ACI containers are not privileged, so it cannot be changed from inside the container.

Any ideas?

It is not an ACI problem but a consequence of how the Paho-MQTT client is built. Paho uses select(), which cannot wait on file descriptors numbered 1024 or higher (FD_SETSIZE).

Each MQTT client accounts for 3 open file descriptors (the TCP socket plus the two ends of Paho's internal wake-up socketpair): 3 * 340 = 1020. Beyond 340 client connections we cross the 1024 descriptor boundary that select() can handle.
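To make the limit concrete, here is a small self-contained sketch (not part of the original post) showing that select.select() rejects any descriptor numbered 1024 or above regardless of the ulimit; it assumes the process's hard NOFILE limit is finite and above 1024:

import os
import resource
import select

# Raise the soft limit to the hard limit so the process can actually
# reach a descriptor numbered >= 1024 without hitting EMFILE first.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

fds = []
try:
    # Open /dev/null repeatedly until we hold a descriptor numbered >= 1024.
    while True:
        fd = os.open("/dev/null", os.O_RDONLY)
        fds.append(fd)
        if fd >= 1024:
            break

    # select() uses a fixed-size fd_set (FD_SETSIZE, 1024 on Linux), so it
    # rejects this perfectly valid descriptor.
    select.select([fds[-1]], [], [], 0)
except ValueError as exc:
    print("select() failed:", exc)  # "filedescriptor out of range in select()"
finally:
    for fd in fds:
        os.close(fd)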

We use an MQTT user class which inherits from the Paho Client. We overrode the following methods to use the eventfd package, so that the internal wake-up socketpair (two descriptors per client) is replaced by a single eventfd:

from contextlib import suppress
from typing import Any, Literal, Optional

from eventfd import EventFD

    [...]
    def loop_start(self) -> Optional[int]:
        [...]
        # self._sockpairR, self._sockpairW = _socketpair_compat()
        # A single eventfd replaces the two socketpair descriptors as the
        # wake-up object for the network loop.
        self._wake_event = EventFD()
        [...]

    def _reset_sockets(self, sockpair_only: bool = False) -> None:
        [...]
        # if self._sockpairR:
        #   self._sockpairR.close()
        #   self._sockpairR = None
        # if self._sockpairW:
        #   self._sockpairW.close()
        #   self._sockpairW = None
        if self._wake_event:
            self._wake_event = None

    def _packet_queue(
        self,
        command: Literal[48],
        packet: bytearray,
        mid: int,
        qos: int,
        info: Optional[Any] = None,
    ) -> int:
        [...]
        # if self._sockpairW is not None:
        #   try:
        #     self._sockpairW.send(sockpair_data)
        #   except BlockingIOError:
        #     pass
        if self._wake_event is not None:
            with suppress(BlockingIOError):
                self._wake_event.set()
        [...]

    def _loop(self, timeout: float = 1.0) -> int:
        [...]
        # if self._sockpairR is None:
        #   rlist = [self._sock]
        # else:
        #   rlist = [self._sock, self._sockpairR]
        # EventFD exposes fileno(), so it can sit in select()'s read list
        # just like the read end of the old socketpair.
        if self._wake_event is None:
            rlist = [self._sock]
        else:
            rlist = [self._sock, self._wake_event]
        [...]
        # if self._sockpairR and self._sockpairR in socklist[0]:
        #   socklist[1].insert(0, self._sock)
        #   try:
        #       self._sockpairR.recv(10000)
        #   except BlockingIOError:
        #       pass
        if self._wake_event and self._wake_event.is_set():
            self._wake_event.clear()
        [...]
    [...]
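For context, a hypothetical usage sketch of how such a patched client plugs into a publisher (the class name, broker host and topic are placeholders, and it assumes paho-mqtt 1.x, where Client() does not require a callback-API-version argument):

import paho.mqtt.client as mqtt

class EventFDClient(mqtt.Client):
    """Paho client with the wake-up socketpair replaced by a single eventfd,
    i.e. with the loop_start/_reset_sockets/_packet_queue/_loop overrides
    shown above."""
    # Overrides omitted here; see the snippet above.

client = EventFDClient(client_id="locust-worker-0")
client.connect("broker.example.com", 1883)
client.loop_start()                                   # wakes via the eventfd
client.publish("load/test", b"payload", qos=0)

The design point of the change is that an eventfd is a single descriptor, whereas the socketpair it replaces consumes two.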
