
OpenStack-Ansible deployment fails because LXC containers have no network connectivity

I'm trying to deploy OpenStack-Ansible (OSA). When running the first playbook, openstack-ansible setup-hosts.yml, the task [openstack_hosts : Remove the blacklisted packages] fails for all containers (see the error below) and the playbook aborts.

fatal: [infra1_repo_container-1f1565cd]: FAILED! => {"changed": false, "cmd": "apt-get update", "msg": "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer has a Release file.\nE: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release' no longer has a Release file.\nE: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release' no longer has a Release file.\nE: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release' no longer has a Release file.", "rc": 100, "stderr": "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer has a Release file.\nE: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release' no longer has a Release file.\nE: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release' no longer has a Release file.\nE: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release' no longer has a Release file.\n", "stderr_lines": ["E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic Release' no longer has a Release file.", "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release' no longer has a Release file.", "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release' no longer has a Release file.", "E: The repository 'http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release' no longer has a Release file."], "stdout": "Ign:1 http://ubuntu.mirror.lrz.de/ubuntu bionic InRelease\nIgn:2 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates InRelease\nIgn:3 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports InRelease\nIgn:4 http://ubuntu.mirror.lrz.de/ubuntu bionic-security InRelease\nErr:5 http://ubuntu.mirror.lrz.de/ubuntu bionic Release\n  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)\nErr:6 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release\n  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)\nErr:7 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release\n  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)\nErr:8 http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release\n  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)\nReading package lists...\n", "stdout_lines": ["Ign:1 http://ubuntu.mirror.lrz.de/ubuntu bionic InRelease", "Ign:2 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates InRelease", "Ign:3 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports InRelease", "Ign:4 http://ubuntu.mirror.lrz.de/ubuntu bionic-security InRelease", "Err:5 http://ubuntu.mirror.lrz.de/ubuntu bionic Release", "  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)", "Err:6 http://ubuntu.mirror.lrz.de/ubuntu bionic-updates Release", "  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)", "Err:7 http://ubuntu.mirror.lrz.de/ubuntu bionic-backports Release", "  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)", "Err:8 http://ubuntu.mirror.lrz.de/ubuntu bionic-security Release", "  Cannot initiate the connection to 192.168.100.6:8000 (192.168.100.6). - connect (101: Network is unreachable)", "Reading package lists..."]}
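To iterate on this faster, the failing play can be limited to a single container (a minimal sketch using Ansible's standard --limit flag; the path assumes the usual OSA checkout under /opt/openstack-ansible):

cd /opt/openstack-ansible/playbooks
# Re-run the play only against the failing container; -vv prints full task output.
openstack-ansible setup-hosts.yml --limit infra1_repo_container-1f1565cd -vv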

When I attach to any container and run ping 192.168.100.6 (our local DNS server), I get the same error (connect: Network is unreachable). However, when I specify an interface with ping -I eth1 192.168.100.6, the connection succeeds. Running ip r in the infra_cinder container yields:

10.0.3.0/24 dev eth2 proto kernel scope link src 10.0.3.5 
192.168.110.0/24 dev eth1 proto kernel scope link src 192.168.110.232

so there seems to be no default route, which is why the connection fails (the same holds for the other infra containers). Shouldn't OSA configure this automatically? I didn't find anything about default routes for containers in the docs.
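To confirm the diagnosis, a default route can be added manually inside a container (a rough sketch; I'm assuming 192.168.110.1 is the gateway on br-mgmt, since 192.168.110.1/.2 are reserved in used_ips below):

# Show the container's routing table (no default route expected).
lxc-attach -n infra1_repo_container-1f1565cd -- ip route show
# Add a default route via the assumed br-mgmt gateway on eth1.
lxc-attach -n infra1_repo_container-1f1565cd -- ip route add default via 192.168.110.1 dev eth1
# The mirror/DNS host should now be reachable without -I eth1.
lxc-attach -n infra1_repo_container-1f1565cd -- ping -c 3 192.168.100.6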

Here's my openstack_user_config.yml:

cidr_networks:
  container: 192.168.110.0/24
  tunnel: 192.168.32.0/24
  storage: 10.0.3.0/24

used_ips:
  - "192.168.110.1,192.168.110.2" 
  - "192.168.110.111" 
  - "192.168.110.115" 
  - "192.168.110.117,192.168.110.118" 
  - "192.168.110.131,192.168.110.140" 
  - "192.168.110.201,192.168.110.207" 
  - "192.168.32.1,192.168.32.2" 
  - "192.168.32.201,192.168.32.207" 
  - "10.0.3.1" 
  - "10.0.3.11,10.0.3.14" 
  - "10.0.3.21,10.0.3.24" 
  - "10.0.3.31,10.0.3.42" 
  - "10.0.3.201,10.0.3.207" 

global_overrides:
  # The internal and external VIP should be different IPs, however they
  # do not need to be on separate networks.
  external_lb_vip_address: 192.168.100.168
  internal_lb_vip_address: 192.168.110.201
  management_bridge: "br-mgmt" 
  provider_networks:
    - network:
        container_bridge: "br-mgmt" 
        container_type: "veth" 
        container_interface: "eth1" 
        ip_from_q: "container" 
        type: "raw" 
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
    - network:
        container_bridge: "br-vxlan" 
        container_type: "veth" 
        container_interface: "eth10" 
        ip_from_q: "tunnel" 
        type: "vxlan" 
        range: "1:1000" 
        net_name: "vxlan" 
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-ext1" 
        container_type: "veth" 
        container_interface: "eth12" 
        host_bind_override: "eth12" 
        type: "flat" 
        net_name: "ext_net" 
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage" 
        container_type: "veth" 
        container_interface: "eth2" 
        ip_from_q: "storage" 
        type: "raw" 
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute
          - swift_proxy

###
### Infrastructure
###

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
  infra1:
    ip: 192.168.110.201

# repository (apt cache, python packages, etc)
repo-infra_hosts:
  infra1:
    ip: 192.168.110.201

# load balancer
haproxy_hosts:
  infra1:
    ip: 192.168.110.201

###
### OpenStack
###

os-infra_hosts:
   infra1:
     ip: 192.168.110.201

identity_hosts:
   infra1:
     ip: 192.168.110.201

network_hosts:
   infra1:
     ip: 192.168.110.201

compute_hosts:
   compute1:
     ip: 192.168.110.204
   compute2:
     ip: 192.168.110.205
   compute3:
     ip: 192.168.110.206
   compute4:
     ip: 192.168.110.207

storage-infra_hosts:
   infra1:
     ip: 192.168.110.201

storage_hosts:
   lvm-storage1:
     ip: 192.168.110.202
     container_vars:
       cinder_backends:
         lvm:
           volume_backend_name: LVM_iSCSI
           volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
           volume_group: cinder_volumes
           iscsi_ip_address: "{{ cinder_storage_address }}" 
         limit_container_types: cinder_volume
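For completeness, this is how the inventory generated from the config above can be inspected (paths assume a standard deployment; inventory-manage.py ships with OSA):

cd /opt/openstack-ansible
# List the generated containers with their assigned management addresses.
./scripts/inventory-manage.py -l
# The raw inventory can also be inspected directly:
less /etc/openstack_deploy/openstack_inventory.json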

I tried to backtrack from my configuration towards the AIO defaults, but the same error kept showing up. It finally disappeared after rebooting the servers, so the configuration doesn't seem to have been the problem after all...
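In case someone hits this and wants to avoid a full host reboot, restarting the affected container might be enough to re-apply the veth wiring and routes (an untested sketch using standard LXC tooling):

# Stop and start a single container so its network setup is rebuilt.
lxc-stop -n infra1_repo_container-1f1565cd
lxc-start -n infra1_repo_container-1f1565cd -d
# Verify the default route came back.
lxc-attach -n infra1_repo_container-1f1565cd -- ip r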
