Can't delete Kubernetes cluster deployed with kops on AWS

I can't delete/update a cluster. I'm getting:

    I0107 19:54:02.618454 8397 request_logger.go:45] AWS request: autoscaling/DescribeAutoScalingGroups
    I0107 19:54:02.812764 8397 request_logger.go:45] AWS request: ec2/DescribeNatGateways
    W0107 19:54:03.032646 8397 executor.go:130] error running task "ElasticIP/us-east-1a.my.domain" (9m56s remaining to succeed): error finding AssociatedNatGatewayRouteTable: error listing NatGateway %!q(*string=0xc42169eb08): NatGatewayNotFound: NAT gateway nat-083300682d9a0fa74 was not found
    status code: 400, request id: 8408a79d-1f8f-4886-83d9-ae0a26c1cc47
    I0107 19:54:03.032738 8397 executor.go:103] Tasks: 98 done / 101 total; 1 can run
    I0107 19:54:03.032828 8397 executor.go:178] Executing task "ElasticIP/us-east-1a.my.domain": *awstasks.ElasticIP {"Name":"us-east-1a.my.domain","Lifecycle":"Sync","ID":null,"PublicIP":null,"TagOnSubnet":null,"Tags":{"KubernetesCluster":"my.domain","Name":"us-east-1a.my.domain","kubernetes.io/cluster/my.domain":"owned"},"AssociatedNatGatewayRouteTable":{"Name":"private-us-east-1a.my.domain","Lifecycle":"Sync","ID":"rtb-089bd4ffc062a3b15","VPC":{"Name":"my.domain","Lifecycle":"Sync","ID":"vpc-0b638e55c11fc9021","CIDR":"172.10.0.0/16","EnableDNSHostnames":null,"EnableDNSSupport":true,"Shared":true,"Tags":null},"Shared":false,"Tags":{"KubernetesCluster":"my.domain","Name":"private-us-east-1a.my.domain","kubernetes.io/cluster/my.domain":"owned","kubernetes.io/kops/role":"private-us-east-1a"}}}
    I0107 19:54:03.033039 8397 natgateway.go:205] trying to match NatGateway via RouteTable rtb-089bd4ffc062a3b15
    I0107 19:54:03.033304 8397 request_logger.go:45] AWS request: ec2/DescribeRouteTables
    I0107 19:54:03.741980 8397 request_logger.go:45] AWS request: ec2/DescribeNatGateways
    W0107 19:54:03.981744 8397 executor.go:130] error running task "ElasticIP/us-east-1a.my.domain" (9m55s remaining to succeed): error finding AssociatedNatGatewayRouteTable: error listing NatGateway %!q(*string=0xc4217e8da8): NatGatewayNotFound: NAT gateway nat-083300682d9a0fa74 was not found
    status code: 400, request id: 3be6843a-38e2-4584-b2cd-b29f6a132d2d
    I0107 19:54:03.981881 8397 executor.go:145] No progress made, sleeping before retrying 1 failed task(s)
    I0107 19:54:13.982261 8397 executor.go:103] Tasks: 98 done / 101 total; 1 can run

I changed my kubectl version to do some tasks on other clusters and then switched back to the latest. I've been testing new clusters (deleting, creating, updating) with no issues... until now. I have this one cluster that I can't modify, and it's costing money. I could revoke the kops IAM permissions, but I use them for other environments in the same account.

At the very least, is there a file where I can edit what kops expects to find in AWS, so I can remove this object? I couldn't find it in the config/spec files in S3.

I have a deployed cluster that I can't use because of this. Sure, I could deny kops its permissions and delete the resources so kops can't recreate them, but I have other clusters in the same account as well.

kops version: Version 1.10.0 (git-8b52ea6d1)

I deleted the bucket and then all resources manually.
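If you end up cleaning up by hand, the `KubernetesCluster` tag that kops puts on everything it creates (visible in the logs above as `KubernetesCluster=my.domain`) makes leftovers easy to enumerate. A minimal sketch with the AWS CLI and jq; the here-doc sample stands in for a real `describe-instances` response, and the cluster name `my.domain` is taken from the logs:

```shell
#!/bin/sh
# Real call (requires AWS credentials):
#   aws ec2 describe-instances \
#     --filters "Name=tag:KubernetesCluster,Values=my.domain" > instances.json
cat > instances.json <<'EOF'
{"Reservations":[{"Instances":[
  {"InstanceId":"i-0abc123","Tags":[{"Key":"KubernetesCluster","Value":"my.domain"}]},
  {"InstanceId":"i-0def456","Tags":[{"Key":"KubernetesCluster","Value":"my.domain"}]}]}]}
EOF

# Print the instance ids still carrying the cluster tag:
jq -r '.Reservations[].Instances[].InstanceId' instances.json

# Then terminate them, e.g.:
#   aws ec2 terminate-instances --instance-ids i-0abc123 i-0def456
# The same tag filter works for volumes, security groups, NAT gateways, etc.
```

The same `--filters "Name=tag:KubernetesCluster,Values=<name>"` pattern applies to `describe-volumes`, `describe-security-groups`, and most other `describe-*` calls, so you can walk resource types one by one until nothing is left.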

For future readers: enable versioning on the bucket where you export the cluster config.
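With versioning enabled, a corrupted kops state file can be rolled back to an earlier object version instead of deleting the whole bucket. A sketch assuming jq and a hypothetical bucket name `my-kops-state`; the here-doc stands in for a real `list-object-versions` response:

```shell
#!/bin/sh
# Enable versioning on the state bucket (run once, requires credentials):
#   aws s3api put-bucket-versioning --bucket my-kops-state \
#     --versioning-configuration Status=Enabled
#
# Later, list versions of a state object. Real call:
#   aws s3api list-object-versions --bucket my-kops-state \
#     --prefix my.domain/config > versions.json
cat > versions.json <<'EOF'
{"Versions":[
  {"Key":"my.domain/config","VersionId":"v2-current","IsLatest":true},
  {"Key":"my.domain/config","VersionId":"v1-good","IsLatest":false}]}
EOF

# The most recent non-current version is the candidate to restore:
jq -r '[.Versions[] | select(.IsLatest | not)][0].VersionId' versions.json

# Restore it by copying that version back over the current object:
#   aws s3api copy-object --bucket my-kops-state --key my.domain/config \
#     --copy-source "my-kops-state/my.domain/config?versionId=v1-good"
```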

We ran into the same issue a few minutes ago. We were able to fix it by searching for VPC route table entries that pointed to the missing NAT gateway (their state was "blackhole"). After deleting those routes, we were finally able to delete the cluster without any further issues.
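The search for dead routes can be scripted. A minimal sketch with the AWS CLI and jq; the here-doc sample stands in for a real `describe-route-tables` response, and the VPC and route-table ids are taken from the logs above:

```shell
#!/bin/sh
# Real call (requires AWS credentials):
#   aws ec2 describe-route-tables \
#     --filters "Name=vpc-id,Values=vpc-0b638e55c11fc9021" > route-tables.json
cat > route-tables.json <<'EOF'
{"RouteTables":[
  {"RouteTableId":"rtb-089bd4ffc062a3b15","Routes":[
    {"DestinationCidrBlock":"172.10.0.0/16","State":"active"},
    {"DestinationCidrBlock":"0.0.0.0/0","NatGatewayId":"nat-083300682d9a0fa74","State":"blackhole"}]}]}
EOF

# Print "route-table-id destination-cidr" for every blackhole route:
jq -r '.RouteTables[]
       | .RouteTableId as $rtb
       | .Routes[]
       | select(.State == "blackhole")
       | "\($rtb) \(.DestinationCidrBlock)"' route-tables.json

# Delete each dead route it prints, e.g.:
#   aws ec2 delete-route --route-table-id rtb-089bd4ffc062a3b15 \
#     --destination-cidr-block 0.0.0.0/0
```

Once the blackhole routes are gone, `kops update cluster` / `kops delete cluster` should stop failing on the missing NAT gateway.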

We were pointed in the right direction by this issue comment.

First, ensure that you are connected to the cluster using the correct credentials:

export KUBECONFIG=<kubeconfig_location> 
AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<Secret_KEY> kops validate cluster --wait 10m --state="<S3-bucket>" --name=<CLUSTER_NAME>

If the validation succeeds, you can delete the cluster with the following command:

kops delete cluster --name=<CLUSTER_NAME> --state="<bucket_name>" --yes

You might find some resources pending deletion. This means they were created externally (perhaps manually). For example, if you created a DB subnet group in the same VPC and a DB instance is running in one of those subnets, kops cannot delete the VPC until you delete the DB instance and the DB subnet group.
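When the VPC refuses to die, listing its network interfaces usually reveals what is holding it open (RDS, load balancers, and most managed services leave an ENI behind). A sketch with jq; the here-doc stands in for a real `describe-network-interfaces` response, with the VPC id taken from the logs above:

```shell
#!/bin/sh
# Real call (requires AWS credentials):
#   aws ec2 describe-network-interfaces \
#     --filters "Name=vpc-id,Values=vpc-0b638e55c11fc9021" > enis.json
cat > enis.json <<'EOF'
{"NetworkInterfaces":[
  {"NetworkInterfaceId":"eni-0aa1","Description":"RDSNetworkInterface","Status":"in-use"}]}
EOF

# Each line names a resource still attached to the VPC; the Description
# field usually identifies the owning service:
jq -r '.NetworkInterfaces[]
       | "\(.NetworkInterfaceId) \(.Description) \(.Status)"' enis.json
```

Delete the owning resources (through their own service's console or CLI), then rerun `kops delete cluster`.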

Deleting just the master node kills the cluster. I had a similar issue while I was testing kops, and it resulted in a small bill. When I deleted a worker node, a new one was created immediately, which is understandable. So I deleted the master node and the cluster died.
