
Dynamic deployment of stateful applications in GKE

I'm trying to figure out which tools from the GKE stack I should apply to my use case, which is dynamic deployment of a stateful application with dynamic HTTP endpoints.

Stateful in my case means that I don't want any replicas or load balancing (because the app doesn't scale horizontally at all). I understand, though, that in k8s/GKE nomenclature I'm still going to be using a 'load balancer', even though it'll act as a reverse proxy and not actually balance any load.

The use case is as follows. I have a web app where I can request a 'new instance' and in return get a dynamically generated URL (e.g. http://random-uuid-1.acme.io). This domain should point to a newly spawned, single instance of a container (Pod) hosting some web application. If I request another 'new instance', I'll get http://random-uuid-2.acme.io, which will point to another (separate), newly spawned instance of the same app.

So far I have figured out the following setup. Every time I request a 'new instance' I do the following:

  • create a new Pod with a dynamic name app-${uuid} that exposes an HTTP port
  • create a new Service of type NodePort that "exposes" the Pod's HTTP port to the cluster
  • create or update (if it already exists) the Ingress by adding a new HTTP rule specifying that domain X should point at NodePort X

The Ingress mentioned above uses a LoadBalancer as its controller, which is an automated process in GKE.
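For illustration, this is roughly what that per-instance flow looks like in code, sketched with the official Kubernetes Python client (the namespace, image, port and the shared Ingress name acme-ingress are placeholders, not fixed values):

```python
import uuid
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

namespace = "default"                      # placeholder namespace
instance = f"app-{uuid.uuid4()}"
core = client.CoreV1Api()
net = client.NetworkingV1Api()

# 1. A single Pod for the new instance
core.create_namespaced_pod(namespace, client.V1Pod(
    api_version="v1", kind="Pod",
    metadata=client.V1ObjectMeta(name=instance, labels={"app": instance}),
    spec=client.V1PodSpec(containers=[client.V1Container(
        name="web", image="acme/web-app:latest",          # placeholder image
        ports=[client.V1ContainerPort(container_port=8080)])])))

# 2. A NodePort Service selecting that Pod
core.create_namespaced_service(namespace, client.V1Service(
    api_version="v1", kind="Service",
    metadata=client.V1ObjectMeta(name=instance),
    spec=client.V1ServiceSpec(type="NodePort", selector={"app": instance},
                              ports=[client.V1ServicePort(port=80, target_port=8080)])))

# 3. Append a host rule to the shared Ingress and write it back
ingress = net.read_namespaced_ingress("acme-ingress", namespace)
ingress.spec.rules = (ingress.spec.rules or []) + [client.V1IngressRule(
    host=f"{instance}.acme.io",
    http=client.V1HTTPIngressRuleValue(paths=[client.V1HTTPIngressPath(
        path="/", path_type="Prefix",
        backend=client.V1IngressBackend(service=client.V1IngressServiceBackend(
            name=instance, port=client.V1ServiceBackendPort(number=80))))]))]
net.replace_namespaced_ingress("acme-ingress", namespace, ingress)
```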

A few issues I've already encountered that you might be able to help me out with:

  1. While the Pod and NodePort Service are separate resources per app, the Ingress is shared. I'm thus not able to just create/delete a resource; I'm also forced to keep track of what has been added to the Ingress so I can later append to or delete from its YAML, which is definitely not the way to do it (i.e. editing YAMLs by hand). Instead, I'd want something like an Ingress that monitors a specific namespace and creates rules automatically based on Pod labels. Say I have three Pods labelled app-1, app-2 and app-3; I want the Ingress to watch all Pods in my namespace and create rules from those labels (i.e. app-1.acme.io -> reverse proxy to Pod app-1; see the sketch after this list).
  2. After updating the Ingress with a new HTTP rule, it takes around a minute before traffic is allowed into the Pod; until then I keep getting 404s even though both the Ingress and the LoadBalancer look 'ready'. I can't figure out what I should watch/wait for to get a clear signal that the Ingress controller is ready to accept traffic for the newly spawned app.
  3. What would be good practice for managing such a cluster, where you can't statically define Pod/Service manifests because you create them dynamically (with different names, endpoints or rules)? You surely don't want to create and maintain a bunch of YAMLs for every application you spawn. I imagine something similar to consul-template in the Consul world, but for k8s?
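To make the first point concrete, this is roughly what I imagine such a label-driven watcher doing, again sketched with the Python client (the label key acme.io/instance, the domain suffix and the Ingress name are placeholders, and a Service with the same name as each Pod is assumed to exist):

```python
from kubernetes import client, config, watch

config.load_kube_config()
namespace = "default"                      # placeholder namespace
core = client.CoreV1Api()
net = client.NetworkingV1Api()

def rebuild_rules():
    """Rebuild the shared Ingress rules from the currently labelled Pods."""
    pods = core.list_namespaced_pod(namespace, label_selector="acme.io/instance")
    rules = []
    for pod in pods.items:
        name = pod.metadata.labels["acme.io/instance"]     # e.g. "app-1"
        rules.append(client.V1IngressRule(
            host=f"{name}.acme.io",
            http=client.V1HTTPIngressRuleValue(paths=[client.V1HTTPIngressPath(
                path="/", path_type="Prefix",
                backend=client.V1IngressBackend(service=client.V1IngressServiceBackend(
                    name=name, port=client.V1ServiceBackendPort(number=80))))])))
    ingress = net.read_namespaced_ingress("acme-ingress", namespace)
    ingress.spec.rules = rules
    net.replace_namespaced_ingress("acme-ingress", namespace, ingress)

# Reconcile whenever a labelled Pod appears or disappears in the namespace
w = watch.Watch()
for event in w.stream(core.list_namespaced_pod, namespace=namespace,
                      label_selector="acme.io/instance"):
    if event["type"] in ("ADDED", "DELETED"):
        rebuild_rules()
```

(That's only the idea; a real controller would also need resync and error handling.)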

I participated in a similar project, and our decision was to use a Kubernetes client library to spawn instances. The instances were managed by a simple web application, which took some customisation parameters, saved them into its database, and then created an instance. Thanks to the database, there was no problem keeping track of what had been created so far: by querying it we could tell whether a given deployment already existed, and update or delete any associated resources.

Each instance consisted of:

  • a Deployment (single or multi-replica, depending on the instance);
  • a ClusterIP Service (no reason to reserve a machine port with NodePort);
  • an Ingress object for the shared ingress controller;
  • and some shared ConfigMaps.

We also used external-dns and cert-manager, one to manage DNS records and the other to issue SSL certificates for the Ingress. With this setup it took about 10 minutes to deploy a new instance. The Pod and the ingress controller were ready in seconds, but we had to wait for the certificate, and its readiness depended on whether the issuer's DNS had picked up our new record. This problem might be avoided by using a wildcard domain, but we had to use many different domains, so that wasn't an option in our case.
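As a rough illustration of how the per-instance Ingress ties into those two tools, here is a minimal sketch with the Python client (the issuer, domain and secret names are placeholders, not our actual configuration): external-dns creates the DNS record from the rule's host, and cert-manager issues a certificate based on the annotation and the tls section.

```python
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

name = "random-uuid-1"                         # placeholder instance name
host = f"{name}.acme.io"

ingress = client.V1Ingress(
    api_version="networking.k8s.io/v1", kind="Ingress",
    metadata=client.V1ObjectMeta(
        name=name,
        annotations={
            # cert-manager picks this up and issues a certificate for the spec.tls hosts
            "cert-manager.io/cluster-issuer": "letsencrypt-prod",   # placeholder issuer
        }),
    spec=client.V1IngressSpec(
        # external-dns creates the DNS record from this host
        rules=[client.V1IngressRule(
            host=host,
            http=client.V1HTTPIngressRuleValue(paths=[client.V1HTTPIngressPath(
                path="/", path_type="Prefix",
                backend=client.V1IngressBackend(service=client.V1IngressServiceBackend(
                    name=name, port=client.V1ServiceBackendPort(number=80))))]))],
        # the issued certificate ends up in this Secret
        tls=[client.V1IngressTLS(hosts=[host], secret_name=f"{name}-tls")]))

net.create_namespaced_ingress("default", ingress)
```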

Other than that, you might consider writing a Helm chart and using the helm list command to find and manage existing instances, though this is a rather 'manual' solution. If you want this functionality to be part of your application, you're better off using a Kubernetes client library.
