
Container running on Docker Swarm not accessible from outside

I am running my containers on Docker Swarm. The asset-frontend service is my frontend application, which runs Nginx inside the container and exposes port 80. Now if I do

curl http://10.255.8.21:80

or

curl http://127.0.0.1:80

from the host where I am running these containers, I can see my asset-frontend application, but it is not accessible outside of the host. I am not able to access it from another machine; my host machine's operating system is CentOS 8.

This is my docker-compose file:

version: "3.3"
networks:
  basic:
services:
  asset-backend:
    image: asset/asset-management-backend
    env_file: .env
    deploy:
      replicas: 1
    depends_on:
      - asset-mongodb
      - asset-postgres
    networks:
      - basic
  asset-mongodb:
    image: mongo
    restart: always
    env_file: .env
    ports:
      - "27017:27017"
    volumes:
      - $HOME/asset/mongodb:/data/db
    networks:
      - basic
  asset-postgres:
    image: asset/postgresql
    restart: always
    env_file: .env
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=asset-management
    volumes:
      - $HOME/asset/postgres:/var/lib/postgresql/data
    networks:
      - basic
  asset-frontend:
    image: asset/asset-management-frontend
    restart: always
    ports:
      - "80:80"
    environment:
      - ENV=dev
    depends_on:
      - asset-backend
    deploy:
      replicas: 1
    networks:
      - basic
  asset-autodiscovery-cron:
    image: asset/auto-discovery-cron
    restart: always
    env_file: .env
    deploy:
      replicas: 1
    depends_on:
      - asset-mongodb
      - asset-postgres
    networks:
      - basic

This is the output of docker service ls:

ID                  NAME                                       MODE                REPLICAS            IMAGE                                         PORTS
auz640zl60bx        asset_asset-autodiscovery-cron   replicated          1/1                 asset/auto-discovery-cron:latest         
g6poofhvmoal        asset_asset-backend              replicated          1/1                 asset/asset-management-backend:latest    
brhq4g4mz7cf        asset_asset-frontend             replicated          1/1                 asset/asset-management-frontend:latest   *:80->80/tcp
rmkncnsm2pjn        asset_asset-mongodb              replicated          1/1                 mongo:latest                                  *:27017->27017/tcp
rmlmdpa5fz69        asset_asset-postgres             replicated          1/1                 asset/postgresql:latest                  *:5432->5432/tcp

Port 80 is open in the firewall; the following is the output of firewall-cmd --list-all:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources: 
  services: cockpit dhcpv6-client ssh
  ports: 22/tcp 2376/tcp 2377/tcp 7946/tcp 7946/udp 4789/udp 80/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

If I inspect my created network, the output is the following:

[
    {
        "Name": "asset_basic",
        "Id": "zw73vr9xigfx7hy16u1myw5gc",
        "Created": "2019-11-26T02:36:38.241352385-05:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.3.0/24",
                    "Gateway": "10.0.3.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9348f4fc6bfc1b14b84570e205c88a67aba46f295a5e61bda301fdb3e55f3576": {
                "Name": "asset_asset-frontend.1.zew1obp21ozmg8r1tzmi5h8g8",
                "EndpointID": "27624fe2a7b282cef1762c4328ce0239dc70ebccba8e00d7a61595a7a1da2066",
                "MacAddress": "02:42:0a:00:03:08",
                "IPv4Address": "10.0.3.8/24",
                "IPv6Address": ""
            },
            "943895f12de86d85fd03d0ce77567ef88555cf4766fa50b2a8088e220fe1eafe": {
                "Name": "asset_asset-mongodb.1.ygswft1l34o5vfaxbzmnf0hrr",
                "EndpointID": "98fd1ce6e16ade2b165b11c8f2875a0bdd3bc326c807ba6a1eb3c92f4417feed",
                "MacAddress": "02:42:0a:00:03:04",
                "IPv4Address": "10.0.3.4/24",
                "IPv6Address": ""
            },
            "afab468aefab0689aa3488ee7f85dbc2cebe0202669ab4a58d570c12ee2bde21": {
                "Name": "asset_asset-autodiscovery-cron.1.5k23u87w7224mpuasiyakgbdx",
                "EndpointID": "d3d4c303e1bc665969ad9e4c9672e65a625fb71ed76e2423dca444a89779e4ee",
                "MacAddress": "02:42:0a:00:03:0a",
                "IPv4Address": "10.0.3.10/24",
                "IPv6Address": ""
            },
            "f0a768e5cb2f1f700ee39d94e380aeb4bab5fe477bd136fd0abfa776917e90c1": {
                "Name": "asset_asset-backend.1.8ql9t3qqt512etekjuntkft4q",
                "EndpointID": "41587022c339023f15c57a5efc5e5adf6e57dc173286753216f90a976741d292",
                "MacAddress": "02:42:0a:00:03:0c",
                "IPv4Address": "10.0.3.12/24",
                "IPv6Address": ""
            },
            "f577c539bbc3c06a501612d747f0d28d8a7994b843c6a37e18eeccb77717539e": {
                "Name": "asset_asset-postgres.1.ynrqbzvba9kvfdkek3hurs7hl",
                "EndpointID": "272d642a9e20e45f661ba01e8731f5256cef87898de7976f19577e16082c5854",
                "MacAddress": "02:42:0a:00:03:06",
                "IPv4Address": "10.0.3.6/24",
                "IPv6Address": ""
            },
            "lb-asset_basic": {
                "Name": "asset_basic-endpoint",
                "EndpointID": "142373fd9c0d56d5a633b640d1ec9e4248bac22fa383ba2f754c1ff567a3502e",
                "MacAddress": "02:42:0a:00:03:02",
                "IPv4Address": "10.0.3.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4100"
        },
        "Labels": {
            "com.docker.stack.namespace": "asset"
        },
        "Peers": [
            {
                "Name": "8170c4487a4b",
                "IP": "10.255.8.21"
            }
        ]
    }
]

Ran into this same issue, and it turns out it was a clash between my local network's subnet and the subnet of the automatically created ingress network. This can be verified using docker network inspect ingress and checking whether the IPAM.Config.Subnet value overlaps with your local network.
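As an illustrative aside (not from the original answer), the overlap check itself can be done by hand: mask both networks with the shorter of the two prefixes and compare. A minimal POSIX-shell sketch of a hypothetical `cidr_overlaps` helper, using the old ingress default 10.255.0.0/16 and a /24 containing the asker's 10.255.8.21 address as example inputs:

```shell
#!/bin/sh
# Hypothetical helper, not part of any docker tooling: decide whether two
# IPv4 CIDR ranges overlap by masking both with the shorter prefix.
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidr_overlaps() {
    i1=$(ip_to_int "${1%/*}"); p1=${1#*/}
    i2=$(ip_to_int "${2%/*}"); p2=${2#*/}
    p=$(( p1 < p2 ? p1 : p2 ))                  # compare at the shorter prefix
    m=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
    [ $(( i1 & m )) -eq $(( i2 & m )) ]
}

# Example: an ingress subnet of 10.255.0.0/16 clashes with a LAN on 10.255.8.0/24
cidr_overlaps 10.255.0.0/16 10.255.8.0/24 && echo "overlap" || echo "no overlap"
# prints "overlap"
```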

To fix this, you can update the configuration of the ingress network as specified in Customize the default ingress network; in summary:

  1. Remove services that publish ports
  2. Remove the existing network: docker network rm ingress
  3. Recreate it using a non-conflicting subnet (172.16.0.0/16 here is just an example):

     docker network create \
       --driver overlay \
       --ingress \
       --subnet 172.16.0.0/16 \
       --gateway 172.16.0.1 \
       ingress

  4. Restart the services

You can avoid a clash to begin with by specifying the default subnet pool when initializing the swarm, using the --default-addr-pool option.
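For example (a sketch only; the flags are real `docker swarm init` options, but the 172.16.0.0/16 pool is just an illustration, and this requires a live Docker daemon on a node not already in a swarm):

```shell
# Initialize the swarm so that overlay networks (including ingress) are
# allocated from 172.16.0.0/16 instead of the default 10.0.0.0/8 range;
# the mask length sets the per-network subnet size (/24 here).
docker swarm init \
  --default-addr-pool 172.16.0.0/16 \
  --default-addr-pool-mask-length 24
```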

You can publish the port by updating the service:

docker service update your-service --publish-add 80:80

Can you try this URL instead of the IP address? host.docker.internal, so something like http://host.docker.internal:80

I suggest you verify the "right" behavior using docker-compose first. Then, try to use docker swarm without a network specification, just to verify there are no network interface problems.

Also, you could use the command below to verify your LISTEN ports:

netstat -tulpn
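If netstat is unavailable (CentOS 8 no longer ships net-tools by default), ss from iproute2 gives the same view; this is an equivalent command, not from the original answer:

```shell
# List listening TCP and UDP sockets (-l via -tul) with numeric ports (-n)
# and the owning process (-p, needs root); equivalent to netstat -tulpn.
ss -tulpn
```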

EDIT: I faced this same issue, but I was able to access my services through 127.0.0.1.

While running docker, provide a port mapping, like

docker run -p 8081:8081 your-docker-image

Or, provide the port mapping in Docker Desktop while starting the container.

I ran into this same issue. It turns out that my iptables filter was causing external connections not to work.

In docker swarm mode, docker creates a virtual network bridge device, docker_gwbridge, to access the overlay network. My iptables had the following line to drop packet forwards:

:FORWARD DROP

That means network packets from the physical NIC can't reach the docker ingress network, so my docker service only works on localhost.

Change the iptables rule to

:FORWARD ACCEPT

And the problem was solved without touching docker.
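The same policy change can also be made at runtime, rather than editing the saved rules file (a sketch; requires root and does not persist across an iptables-restore or reboot):

```shell
# Runtime equivalent of changing the saved ":FORWARD DROP" policy line:
# set the default policy of the FORWARD chain to ACCEPT.
iptables -P FORWARD ACCEPT
```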
