
Octavia: Trying to delete immutable loadbalancer

I have a loadbalancer (see status below) that I want to delete. I already deleted the instances in its pool. Full disclosure: this is on a Devstack which I rebooted, and where I recreated the lb-mgmt-network routing manually. I may have overlooked a detail after the reboot. The loadbalancer worked before the reboot.

The first step to delete the loadbalancer is to delete its pool members. This fails as follows:

$ alias olb='openstack loadbalancer'
$ olb member delete website-pool 08f55..
Load Balancer 1ff... is immutable and cannot be updated. (HTTP 409)

What can I do to make it mutable?

Below, see the loadbalancer's status after recreating the o-hm0 route and restarting the amphora. Its provisioning status is ERROR, but according to the API, this should enable me to delete it:

$ olb status show kubelb
{
    "loadbalancer": {
        "id": "1ff7682b-3989-444d-a1a8-6c91aac69c45",
        "name": "kubelb",
        "operating_status": "ONLINE",
        "provisioning_status": "ERROR",
        "listeners": [
            {
                "id": "d3c3eb7f-345f-4ded-a7f8-7d97e3af0fd4",
                "name": "weblistener",
                "operating_status": "ONLINE",
                "provisioning_status": "ACTIVE",
                "pools": [
                    {
                        "id": "9b0875e0-7d16-4ebc-9e8d-d1b90d4264a6",
                        "name": "website-pool",
                        "provisioning_status": "ACTIVE",
                        "operating_status": "ONLINE",
                        "members": [
                            {
                                "id": "08f55bba-260a-4b83-ad6d-f9d6b44f0e2c",
                                "name": "",
                                "operating_status": "NO_MONITOR",
                                "provisioning_status": "ACTIVE",
                                "address": "172.16.0.21",
                                "protocol_port": 80
                            },
                            {
                                "id": "f7665e90-dad0-480e-8ef4-65e0a042b9fa",
                                "name": "",
                                "operating_status": "NO_MONITOR",
                                "provisioning_status": "ACTIVE",
                                "address": "172.16.0.22",
                                "protocol_port": 80
                            }
                        ]
                    }
                ]
            }
        ]
    }
}

When you have a load balancer in ERROR state you have two options:

  1. Delete the load balancer using the cascade delete option (--cascade on the CLI).
  2. Use the failover API to tell Octavia to repair the load balancer once your cloud is fixed.
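
For reference, both options map to a single CLI call. Using the load balancer name kubelb from the status output above, a cascade delete would be:

$ openstack loadbalancer delete --cascade kubelb

and a failover, once the cloud is healthy again:

$ openstack loadbalancer failover kubelb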

In Octavia, operating status is a measured/observed status. If it does not go ONLINE, there is likely a network configuration issue with the lb-mgmt-net, and the health heartbeat messages (UDP 5555) are not making it back to the health manager controller.
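
A quick way to verify this is to watch for heartbeat packets on the controller's o-hm0 interface (assuming the single-node Devstack layout described in the question, where the health manager listens there):

$ sudo tcpdump -ni o-hm0 udp port 5555

If no packets show up (by default each amphora sends a heartbeat roughly every 10 seconds), the lb-mgmt-net plumbing is the problem.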

That said, Devstack is not set up to work after a reboot. Specifically, Neutron and the network interfaces will be in an improper state. As you have found, you can manually reconfigure those and usually get things working again.

If I understand the documentation and source code right, a loadbalancer in provisioning status ERROR can be deleted but not modified. Unfortunately, it can only be deleted after its pools and listeners have been deleted, which would modify the loadbalancer. Looks like a chicken-and-egg problem to me. I "solved" this by recreating the cloud from scratch. I guess I could also have cleaned up the database.

An analysis of the stack.sh log file revealed that a few additional steps were needed to make the Devstack cloud reboot-proof. To make Octavia ready:

  • Create /var/run/octavia, owned by the stack user
  • Ensure o-hm0 is up
  • Give o-hm0 the correct MAC and IP addresses, both found in the details of the Neutron port octavia-health-manager-standalone-listen-port
  • Add netfilter rules for traffic coming from o-hm0

At this point, I feel I can reboot Devstack and still have functioning load balancers. Strangely, the operating_status of all load balancers (as well as that of their listeners and pools) is OFFLINE. However, that doesn't prevent them from working. I have not found out how to bring it back to ONLINE.
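
To narrow down why operating_status stays OFFLINE, it may help to check whether the amphorae are still reported as healthy and whether the health manager is processing heartbeats. A possible check, assuming admin credentials are loaded and that the health manager runs under Devstack's usual devstack@o-hm unit (the unit name is an assumption):

$ openstack loadbalancer amphora list
$ sudo journalctl -u devstack@o-hm --since '10 minutes ago'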

In case anybody is interested, below is the script I use after rebooting Devstack. In addition, I also changed the Netplan configuration so that br-ex gets the server's IP address (further below).

restore-devstack script:

$ cat restore-devstack

source ~/devstack/openrc admin admin

if losetup -a | grep -q /opt/stack/data/stack-volumes
then echo loop devices are already set up
else
    sudo losetup -f --show --direct-io=on /opt/stack/data/stack-volumes-default-backing-file
    sudo losetup -f --show --direct-io=on /opt/stack/data/stack-volumes-lvmdriver-1-backing-file
    echo restarting Cinder Volume service
    sudo systemctl restart devstack@c-vol
fi
sudo lvs
openstack volume service list
echo
echo recreating /var/run/octavia
sudo mkdir /var/run/octavia
sudo chown stack /var/run/octavia
echo
echo setting up the o-hm0 interface
if ip l show o-hm0 | grep -q 'state DOWN'
then sudo ip l set o-hm0 up
else echo o-hm0 interface is not DOWN
fi

HEALTH_IP=$(openstack port show octavia-health-manager-standalone-listen-port -c fixed_ips -f yaml | grep ip_address | cut -d' ' -f3)
echo health monitor IP is $HEALTH_IP
if ip a show dev o-hm0 | grep -q $HEALTH_IP
then echo o-hm0 interface has IP address
else sudo ip a add ${HEALTH_IP}/24 dev o-hm0
fi
HEALTH_MAC=$(openstack port show octavia-health-manager-standalone-listen-port -c mac_address -f value)
echo health monitor MAC is $HEALTH_MAC
sudo ip link set dev o-hm0 address $HEALTH_MAC
echo o-hm0 MAC address set to $HEALTH_MAC
echo route to loadbalancer network:
ip r show 192.168.0.0/24
echo
echo fix netfilter for Octavia
sudo iptables -A INPUT -i o-hm0 -p udp -m udp --dport 20514 -j ACCEPT
sudo iptables -A INPUT -i o-hm0 -p udp -m udp --dport 10514 -j ACCEPT
sudo iptables -A INPUT -i o-hm0 -p udp -m udp --dport 5555 -j ACCEPT
echo fix netfilter for Magnum
sudo iptables -A INPUT -d 192.168.1.200/32 -p tcp -m tcp --dport 443 -j ACCEPT
sudo iptables -A INPUT -d 192.168.1.200/32 -p tcp -m tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -d 192.168.1.200/32 -p tcp -m tcp --dport 9511 -j ACCEPT

Netplan config:

$ cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp1s0:
      dhcp4: no
    br-ex:
      addresses: [192.168.1.200/24]
      nameservers: { addresses: [192.168.1.16,1.1.1.1] }
      gateway4: 192.168.1.1
  version: 2
