
OpenStack Ansible deployment fails due to lxc containers not having network connection

I'm trying to deploy OpenStack Ansible. When running the first playbook, openstack-ansible setup-hosts.yml, every container fails during the task [openstack_hosts : Remove the blacklisted packages] (see below) and the playbook aborts.

fatal: [infra1_repo_container-1f1565cd]: FAILED! => {"changed": false, "cmd": "apt-get update", "rc": 100}

stderr:
E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer has a Release file.
E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release' no longer has a Release file.
E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release' no longer has a Release file.
E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release' no longer has a Release file.

stdout:
Ign:1 http://ubuntu.mirror.lrz.de/ubuntu bionic InRelease
Ign:2 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates InRelease
Ign:3 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports InRelease
Ign:4 http://ubuntu.mirror.lrz.de/ubuntu bionic-security InRelease
Err:5 http://ubuntu.mirror.lrz.de/ubuntu bionic Release
  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)
Err:6 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release
  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)
Err:7 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release
  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)
Err:8 http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release
  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)
Reading package lists...

When I attach to any container and run ping 192.168.100.6 (local DNS), I get the same error (connect: Network is unreachable). However, when I pin the interface with ping -I eth1 192.168.100.6, the connection succeeds. Running ip r in the infra_cinder container yields:

10.0.3.0/24 dev eth2 proto kernel scope link src 10.0.3.5 
192.168.110.0/24 dev eth1 proto kernel scope link src 192.168.110.232

So there is no default route, which is why the connection fails (the same applies to the other infra containers). Shouldn't OSA configure this automatically? I couldn't find anything in the docs about a default route inside the containers.
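A quick way to confirm the diagnosis on each container is to grep the routing table for a default entry. A minimal sketch, using the ip r output quoted above as sample data (on a real container one would pipe ip r directly):

```shell
# Sample routing table, as printed by `ip r` inside the infra_cinder container.
routes='10.0.3.0/24 dev eth2 proto kernel scope link src 10.0.3.5
192.168.110.0/24 dev eth1 proto kernel scope link src 192.168.110.232'

# A healthy table should contain a line starting with "default".
if printf '%s\n' "$routes" | grep -q '^default'; then
  echo "default route present"
else
  echo "no default route"
fi
```

As a temporary workaround (not the OSA-intended fix), one could add the route by hand inside a container, e.g. ip route add default via <mgmt-gateway> dev eth1, where the gateway address depends on your management network.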

Here's my openstack_user_config.yml:

cidr_networks:
  container: 192.168.110.0/24
  tunnel: 192.168.32.0/24
  storage: 10.0.3.0/24

used_ips:
  - "192.168.110.1,192.168.110.2" 
  - "192.168.110.111" 
  - "192.168.110.115" 
  - "192.168.110.117,192.168.110.118" 
  - "192.168.110.131,192.168.110.140" 
  - "192.168.110.201,192.168.110.207" 
  - "192.168.32.1,192.168.32.2" 
  - "192.168.32.201,192.168.32.207" 
  - "10.0.3.1" 
  - "10.0.3.11,10.0.3.14" 
  - "10.0.3.21,10.0.3.24" 
  - "10.0.3.31,10.0.3.42" 
  - "10.0.3.201,10.0.3.207" 

global_overrides:
  # The internal and external VIP should be different IPs, however they
  # do not need to be on separate networks.
  external_lb_vip_address: 192.168.100.168
  internal_lb_vip_address: 192.168.110.201
  management_bridge: "br-mgmt" 
  provider_networks:
    - network:
        container_bridge: "br-mgmt" 
        container_type: "veth" 
        container_interface: "eth1" 
        ip_from_q: "container" 
        type: "raw" 
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
    - network:
        container_bridge: "br-vxlan" 
        container_type: "veth" 
        container_interface: "eth10" 
        ip_from_q: "tunnel" 
        type: "vxlan" 
        range: "1:1000" 
        net_name: "vxlan" 
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-ext1" 
        container_type: "veth" 
        container_interface: "eth12" 
        host_bind_override: "eth12" 
        type: "flat" 
        net_name: "ext_net" 
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage" 
        container_type: "veth" 
        container_interface: "eth2" 
        ip_from_q: "storage" 
        type: "raw" 
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute
          - swift_proxy

###
### Infrastructure
###

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
  infra1:
    ip: 192.168.110.201

# repository (apt cache, python packages, etc)
repo-infra_hosts:
  infra1:
    ip: 192.168.110.201

# load balancer
haproxy_hosts:
  infra1:
    ip: 192.168.110.201

###
### OpenStack
###

os-infra_hosts:
   infra1:
     ip: 192.168.110.201

identity_hosts:
   infra1:
     ip: 192.168.110.201

network_hosts:
   infra1:
     ip: 192.168.110.201

compute_hosts:
   compute1:
     ip: 192.168.110.204
   compute2:
     ip: 192.168.110.205
   compute3:
     ip: 192.168.110.206
   compute4:
     ip: 192.168.110.207

storage-infra_hosts:
   infra1:
     ip: 192.168.110.201

storage_hosts:
   lvm-storage1:
     ip: 192.168.110.202
     container_vars:
       cinder_backends:
         lvm:
           volume_backend_name: LVM_iSCSI
           volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
           volume_group: cinder_volumes
           iscsi_ip_address: "{{ cinder_storage_address }}" 
         limit_container_types: cinder_volume

I tried to backtrack from my configuration to the AIO, but the same error kept showing up. It finally disappeared after rebooting the servers, so there doesn't seem to have been a problem with the configuration after all...
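For what it's worth, a full host reboot is a heavy hammer; restarting just the containers may be enough to re-apply the rendered network configuration. A minimal sketch, assuming plain LXC tooling on the host (this is not an OSA-provided command):

```shell
# Sketch (assumption: the lxc-* CLI tools are installed on the host):
# restart every container so its network config is re-applied,
# instead of rebooting the whole host.
restart_all_containers() {
  for c in $(lxc-ls -1); do
    lxc-stop -n "$c" && lxc-start -n "$c"
  done
}

# Only attempt the restart where the LXC tools actually exist.
if command -v lxc-ls >/dev/null 2>&1; then
  restart_all_containers
fi
```

After the containers come back up, re-checking ip r inside one of them would show whether the default route is now rendered.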
