
How to track stateful pods created in a k8s cluster?

Setup

I have a k8s cluster setup in the following manner:

  • 1 Master Node
  • 2 Worker Nodes

The cluster is set up using kubeadm and Flannel.

I have two different pod types:

  • Java Proxy Server
  • Java TCP Server

The Java Proxy Server pods are what I initially created as a StatefulSet. Each Java Proxy Server has its own state (the clients currently connected to it); however, they are all expected to share a common state.

This common state is an up-to-date list of Java TCP Server pods and their associated IP addresses. My objective here is to ensure every proxy server has a current list of TCP servers it can proxy connections towards.

Each instance of the Java TCP server has its own unique state and is also deployed as a StatefulSet. The only commonality between the TCP server pods is that they can receive connections from the proxy servers.

The Proxy Servers must be notified whenever a TCP server pod comes up or goes down, so they always know which pods are available to proxy connections towards.

Connections are delegated to specific TCP servers by the proxy servers; a TCP server is never handed a connection at random, and the TCP servers are not load balanced.

Attempt

I have tried using the Java Kubernetes Client, implementing a watch in my Proxy Servers like so:

ApiClient apiClient = Config.defaultClient();
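// A read timeout of 0 keeps the long-lived watch connection from being closed by the HTTP client.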
apiClient.setReadTimeout(0);
System.out.println(apiClient.getBasePath());
Configuration.setDefaultApiClient(apiClient);

CoreV1Api api = new CoreV1Api();
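// List all pods once to learn the current state and obtain a resourceVersion to start the watch from.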
V1PodList pods = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);
V1ListMeta podsMeta = pods.getMetadata();
if (podsMeta != null) {
    String resourceVersion = podsMeta.getResourceVersion();

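    // Open a streaming watch on pods, resuming from the resourceVersion returned by the list call.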
    Watch<V1Pod> watch = Watch.createWatch(
            apiClient,
            api.listPodForAllNamespacesCall(null, null, null, null, null, null, resourceVersion, null, true, null),
            new TypeToken<Watch.Response<V1Pod>>(){}.getType());

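    // Each streamed response carries one pod event; the pod's status includes its IP.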
    while (watch.hasNext()) {
        Watch.Response<V1Pod> response = watch.next();
        V1Pod pod = response.object;
        V1PodStatus status = pod.getStatus();
        if (status != null) {
            System.out.printf("Pod IP: %s\n", status.getPodIP());
            System.out.printf("Pod Reason: %s\n", status.getReason());
        }
    }

    watch.close();
}

This works relatively well. The big problem for me is that, for such a simple task, the client library adds a massive 40 MB to my final JAR file.

I know that 40 MB might not be much to some people; I just feel there must be a more lightweight way to implement what I'm trying to do.

Is there a better way, that I am overlooking, to track the pods being created and destroyed within the cluster?

I have come up with my own solution (for now), albeit not a very pretty one. I've decided to use a sidecar pattern and ship another container alongside my proxy server pods. It's written in Go, and the binary is stripped and run on Alpine Linux.

For now, I'm just using a simple UDP connection to the Java Proxy Server to let it know when a pod has been removed or added.
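
Below is a minimal sketch of what that notification could look like, assuming the sidecar and the Java proxy run in the same pod (so localhost is reachable), that the proxy listens on an arbitrarily chosen UDP port 9999, and that it accepts one small JSON object per datagram. The port, message format, and example values are assumptions, not something the proxy actually requires:

package main

import (
    "encoding/json"
    "log"
    "net"
)

// PodEvent is a hypothetical message format; the real payload is whatever the Java proxy expects.
type PodEvent struct {
    Type string `json:"type"` // e.g. "ADDED" or "DELETED"
    Name string `json:"name"`
    IP   string `json:"ip"`
}

// notifyProxy sends one event as a single UDP datagram.
func notifyProxy(conn net.Conn, event PodEvent) error {
    payload, err := json.Marshal(event)
    if err != nil {
        return err
    }
    _, err = conn.Write(payload)
    return err
}

func main() {
    // The sidecar and the Java proxy share the pod's network namespace, so localhost works.
    conn, err := net.Dial("udp", "127.0.0.1:9999") // 9999 is an assumed port
    if err != nil {
        log.Fatalln(err)
    }
    defer conn.Close()

    // Example event with made-up values, just to exercise the helper.
    if err := notifyProxy(conn, PodEvent{Type: "ADDED", Name: "tcp-server-0", IP: "10.244.1.23"}); err != nil {
        log.Println(err)
    }
}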

I referenced the Kubernetes docs, which describe how to reach the kube-apiserver through its in-cluster service. In my case, I'm just using the default service account.

I'll attach some example code to give an idea of the implementation. Basically, Kubernetes supplies the API credentials through files mounted into the pod by default:

/var/run/secrets/kubernetes.io/serviceaccount/token
/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

We can then use these credentials to access the Kubernetes API through the cluster's default DNS name:

https://kubernetes.default.svc/api/v1/

Here is, in essence, how I am able to track pods within the cluster:

package main

import (
    "context"
    "crypto/tls"
    "crypto/x509"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
)

func main() {
    backGroundContext := context.Background()

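    // By default, Kubernetes mounts the service account's credentials into the pod at these paths.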
    accessTokenData, accessTokenFileError := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
    if accessTokenFileError != nil {
        log.Fatalln(accessTokenFileError)
    }

    accessToken := string(accessTokenData)

    k8sCertificate, certificateFileError := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")
    if certificateFileError != nil {
        log.Fatalln(certificateFileError)
    }

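    // Trust the cluster's CA so the TLS connection to the API server can be verified.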
    certificateAuthorityPool := x509.NewCertPool()
    certificateAuthorityPool.AppendCertsFromPEM(k8sCertificate)

    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{
                RootCAs: certificateAuthorityPool,
            },
        },
        Timeout: 0, // disable timeout for the watch request
    }

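    // Initial list: records the pods that already exist and yields a resourceVersion to start the watch from.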
    request, requestError := http.NewRequestWithContext(backGroundContext, "GET", "https://kubernetes.default.svc/api/v1/namespaces/default/pods", nil)
    if requestError != nil {
        log.Fatalln(requestError)
    }
    request.Header.Add("Authorization", fmt.Sprintf("Bearer %s", accessToken))

    response, responseError := client.Do(request)
    if responseError != nil {
        log.Fatalln(responseError)
    }
    defer response.Body.Close()

    type PodListMetaData struct {
        ResourceVersion string `json:"resourceVersion"`
    }

    type PodStatus struct {
        IP string `json:"podIP"`
    }

    type PodMetaData struct {
        Name string `json:"name"`
    }

    type PodResult struct {
        MetaData PodMetaData `json:"metadata"`
        Status   PodStatus   `json:"status"`
    }

    type PodListResult struct {
        MetaData PodListMetaData `json:"metadata"`
        Items    []PodResult     `json:"items"`
    }

    var list PodListResult

    decoder := json.NewDecoder(response.Body)
    decodeError := decoder.Decode(&list)
    if decodeError != nil {
        log.Fatalln(decodeError)
    }

    resourceVersion := list.MetaData.ResourceVersion
    log.Printf("Resource Version: %s\n", resourceVersion)

    for _, item := range list.Items {
        log.Printf("Found Pod: %s with IP of %s\n", item.MetaData.Name, item.Status.IP)
    }

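    // Watch from the listed resourceVersion so no events are missed between the list and the watch.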
    watchRequest, watchRequestError := http.NewRequestWithContext(backGroundContext, "GET", fmt.Sprintf("https://kubernetes.default.svc/api/v1/namespaces/default/pods?watch=1&resourceVersion=%s&allowWatchBookmarks=true", resourceVersion), nil)
    if watchRequestError != nil {
        log.Fatalln(watchRequestError)
    }
    watchRequest.Header.Add("Authorization", fmt.Sprintf("Bearer %s", accessToken))

    response1, response1Error := client.Do(watchRequest)
    if response1Error != nil {
        log.Fatalln(response1Error)
    }
    defer response1.Body.Close()

    type PodListWatchResult struct {
        Type   string    `json:"type"`
        Object PodResult `json:"object"`
    }

    decoder1 := json.NewDecoder(response1.Body)

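    // The watch response is a stream of JSON objects, one per event (ADDED, MODIFIED, DELETED, BOOKMARK).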
    for decoder1.More() {
        var podResult PodListWatchResult
        decodeError1 := decoder1.Decode(&podResult)
        if decodeError1 != nil {
            log.Fatalln(decodeError1)
        }

        log.Printf("Found Pod: %s with IP of %s\n", podResult.Object.MetaData.Name, podResult.Object.Status.IP)
    }
}

There is certainly room for improvement here, but this is the gist of my solution. Every time a new JSON object is streamed, I check whether it is an event I'm interested in and, if so, forward it over UDP to the Java Proxy Server.
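
For reference, here is a sketch of that filtering step, with the watch types mirrored locally so it compiles on its own. The "tcp-server-" name prefix is an assumption about how the TCP server StatefulSet names its pods; in the actual sidecar, an accepted event would then be sent over UDP using something like the notifier sketched earlier:

package main

import (
    "log"
    "strings"
)

// Minimal mirrors of the watch types used in the sidecar above, so this sketch stands alone.
type podMetaData struct{ Name string }
type podStatus struct{ IP string }
type podResult struct {
    MetaData podMetaData
    Status   podStatus
}
type podWatchEvent struct {
    Type   string
    Object podResult
}

// shouldForward decides whether a streamed watch event is worth sending to the Java proxy.
func shouldForward(e podWatchEvent) bool {
    // Bookmark events only refresh the resourceVersion; there is no pod to report.
    if e.Type == "BOOKMARK" {
        return false
    }
    // Only pods from the TCP server StatefulSet matter to the proxy ("tcp-server-" is an assumed prefix).
    if !strings.HasPrefix(e.Object.MetaData.Name, "tcp-server-") {
        return false
    }
    return e.Type == "ADDED" || e.Type == "DELETED"
}

func main() {
    // Example event with made-up values, standing in for one decoded object from the watch stream.
    event := podWatchEvent{Type: "ADDED", Object: podResult{MetaData: podMetaData{Name: "tcp-server-0"}, Status: podStatus{IP: "10.244.1.23"}}}
    if shouldForward(event) {
        // In the sidecar this is where the event would be forwarded over UDP to the Java proxy.
        log.Printf("forward %s %s %s", event.Type, event.Object.MetaData.Name, event.Object.Status.IP)
    }
}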

The binary built on Alpine came out to be about 10 MB, which is quite light!

I suspect that my only option for a 'real' solution would be to use the official Kubernetes Go client or Java client. That would make my container much larger, but my current solution doesn't offer many guarantees and seems like a hack at best.

I'm still hoping there's something I have overlooked that simplifies all this and doesn't require a big client library.
