
PyZMQ Dockerized pub sub - sub won't receive messages

I want to build a modularized system with modules communicating over ZeroMQ. To improve usability, I want to Dockerize some of these modules, so that users don't have to set up an environment. However, I cannot get a dockerized publisher to have its messages received by a non-dockerized subscriber.

System

  • Ubuntu 18.04
  • Python 3.7
  • libzmq version 4.2.5
  • pyzmq version 17.1.2
  • Docker version 18.09.0, build 4d60db4

Minimal test case

zmq_sub.py

# CC0

import zmq


def main():
    # ZMQ connection
    url = "tcp://127.0.0.1:5550"
    ctx = zmq.Context()
    socket = ctx.socket(zmq.SUB)
    socket.bind(url)  # subscriber creates ZeroMQ socket
    socket.setsockopt(zmq.SUBSCRIBE, ''.encode('ascii'))  # any topic
    print("Sub bound to: {}\nWaiting for data...".format(url))

    while True:
        # wait for publisher data
        topic, msg = socket.recv_multipart()
        print("On topic {}, received data: {}".format(topic, msg))


if __name__ == "__main__":
    main()

zmq_pub.py

# CC0

import zmq
import time


def main():
    # ZMQ connection
    url = "tcp://127.0.0.1:5550"
    ctx = zmq.Context()
    socket = ctx.socket(zmq.PUB)
    socket.connect(url)  # publisher connects to subscriber
    print("Pub connected to: {}\nSending data...".format(url))

    i = 0

    while True:
        topic = 'foo'.encode('ascii')
        msg = 'test {}'.format(i).encode('ascii')
        # publish data
        socket.send_multipart([topic, msg])
        print("On topic {}, sent data: {}".format(topic, msg))
        time.sleep(.5)

        i += 1


if __name__ == "__main__":
    main()

When I open 2 terminals and run:

  • python zmq_sub.py
  • python zmq_pub.py

The subscriber receives the data without error (On topic b'foo', received data: b'test 1').

Dockerfile

I've created the following Dockerfile :

FROM python:3.7.1-slim

MAINTAINER foo bar <foo@spam.eggs>

RUN apt-get update && \
  apt-get install -y --no-install-recommends \
  gcc

WORKDIR /app
COPY requirements.txt /app
RUN pip install -r requirements.txt

COPY zmq_pub.py /app/zmq_pub.py

EXPOSE 5550

CMD ["python", "zmq_pub.py"]

and then I successfully build a Dockerized publisher with the command: sudo docker build . -t foo/bar

Attempts

Attempt 1

Now that I have my Docker container with the publisher, I try to have my non-dockerized subscriber receive its data. I run the following 2 commands:

  1. python zmq_sub.py
  2. sudo docker run -it foo/bar

I see my publisher inside the container publishing data, but my subscriber receives nothing.

Attempt 2

With the idea that I have to map the internal port of my dockerized publisher to a port on my machine, I run the following 2 commands:

  1. python zmq_sub.py
  2. sudo docker run -p 5550:5550 -it foo/bar

However, then I receive the following error: docker: Error response from daemon: driver failed programming external connectivity on endpoint objective_shaw (09b5226d89a815ce5d29842df775836766471aba90b95f2e593cf5ceae0cf174): Error starting userland proxy: listen tcp 0.0.0.0:5550: bind: address already in use.

It seems to me that my subscriber has already bound to 127.0.0.1:5550, and therefore Docker cannot bind that port again when I try to map it. If I change it to -p 5549:5550, Docker doesn't give an error, but then it's the same situation as in Attempt 1.
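A quick way to confirm which process is holding the port (assuming the ss tool from iproute2 is available on the host) is: sudo ss -ltnp | grep 5550 . This lists the listening socket together with the owning process, which should show whether the subscriber is the one already bound there.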

Question

How do I get my dockerized publisher to publish data to my non-dockerized subscriber?

Code

Edit 1: Code updated to also give an example of how to use docker-compose for automatic IP inference (a rough sketch of such a setup is shown below the repository link).

GitHub: https://github.com/NumesSanguis/pyzmq-docker
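As a hypothetical sketch (the service names, file layout, and commands here are assumptions for illustration, not necessarily the contents of the linked repository, and they presuppose an image that contains both scripts), a docker-compose file for two containerized modules could look like this; on the compose network each service can reach another by its service name, so no IP address has to be hard-coded:

version: "3"
services:
  sub:
    build: .
    command: python zmq_sub.py    # binds tcp://0.0.0.0:5550 inside its own container
  pub:
    build: .
    command: python zmq_pub.py    # would connect to tcp://sub:5550 via compose DNS
    depends_on:
      - sub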

This is mostly a docker networking question, and isn't specific to pyzmq or zeromq. You would have the same issues with anything trying to connect from a container to the host.

To be clear, in this example, you have a server running on the host (zmq_sub.py, which calls bind), and you want to connect to it from a client running inside a docker container (zmq_pub.py).

Since the docker container is the one connecting, you don't need to do any docker port exposing or forwarding. EXPOSE and forwarding ports are only for making it possible to connect to a container (i.e. bind is called in the container), not to make outbound connections from a container, which is what's going on here.

The main thing here is that when it comes to networking with docker, you should think of each container as a separate machine on a local network. In order to be connectable from other containers or the host, a service should bind onto all interfaces (or at least an accessible interface). Binding on localhost in a container means that only other processes in that container should be able to talk to it. Similarly, binding localhost on the host means that docker containers shouldn't be able to connect.

So the first change is that your bind url should be:

url = 'tcp://0.0.0.0:5550'
...
socket.bind(url)

or pick the ip address that corresponds to your docker virtual network.

Then, your connect url needs to be the IP of your host as seen from the container. This can be found via ifconfig. Typically any IP address will do, but if you have a docker0 interface, that would be the logical choice.
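For example, assuming the common default address of the docker0 bridge, 172.17.0.1 (an assumption; verify it with ifconfig or ip addr show docker0 on your host), the publisher's connect url would become:

url = 'tcp://172.17.0.1:5550'  # host's docker0 address as seen from the container
...
socket.connect(url)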

I had the same problem in this post.

Your problem is that localhost ( 127.0.0.1 ) inside the container is not the same as localhost in other containers or on the host machine.

So to overcome that, use tcp://*:5550 in .bind() instead of 127.0.0.1 or the machine's IP.

Then, you should expose the port and map it between the container and the host machine (I used docker-compose to do that in the SO post mentioned above). I think in your case it will be as follows, as you already tried:

EXPOSE 5550

and

sudo docker run -p 5550:5550 -it foo/bar
