
Go: get K8s API server health status

I have a Golang program to which I need to add a new call to the K8s API server's status (/livez) API to get its health status.

https://kubernetes.io/docs/reference/using-api/health-checks/

The program runs on the same cluster as the API server and needs to read the /livez status. I tried to find this API in the client-go library but could not find a way to do it:

https://github.com/kubernetes/client-go

Is there a way to do this from a Go program running on the same cluster as the API server?

Update (final answer)

Appended

The OP asked me to modify my answer to show configs for a "fine-tuned" or "specific" service account, without using cluster-admin.

As far as I can tell, each pod has permission to read /healthz by default. For example, the following CronJob works just fine without using a ServiceAccount at all:

# cronjob
apiVersion: batch/v1beta1 # batch/v1 on Kubernetes 1.21 and later
kind: CronJob
metadata:
  name: is-healthz-ok-no-svc
spec:
  schedule: "*/5 * * * *" # at every fifth minute
  jobTemplate:
    spec:
      template:
        spec:
######### serviceAccountName: health-reader-sa
          containers:
            - name: is-healthz-ok-no-svc
              image: oze4/is-healthz-ok:latest
          restartPolicy: OnFailure
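This works because, on most clusters, the health endpoints are exposed even to unauthenticated clients (via the built-in `system:public-info-viewer` ClusterRole). As a sketch of that idea, you can skip client-go entirely and probe /healthz over plain HTTPS using the environment variables and CA certificate that Kubernetes injects into every pod (the file paths below are the standard in-cluster mounts; adjust if your cluster differs):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

// healthzURL builds the API server health URL from the environment
// variables the kubelet injects into every pod.
func healthzURL(host, port string) string {
	return fmt.Sprintf("https://%s:%s/healthz", host, port)
}

func main() {
	url := healthzURL(
		os.Getenv("KUBERNETES_SERVICE_HOST"),
		os.Getenv("KUBERNETES_SERVICE_PORT"),
	)

	// Trust the cluster CA certificate mounted into every pod.
	caCert, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")
	if err != nil {
		fmt.Printf("not running in a cluster? %v\n", err)
		return
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caCert)

	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("request failed: %v\n", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body))
}
```

If your cluster disables anonymous access (`--anonymous-auth=false`), you would additionally send the pod's bearer token from `/var/run/secrets/kubernetes.io/serviceaccount/token` in an `Authorization` header.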


Original

I went ahead and wrote a proof of concept for this. You can find the full repo here, but the code is below.

main.go

package main

import (
    "os"
    "errors"
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    client, err := newInClusterClient()
    if err != nil {
        panic(err.Error())
    }

    path := "/healthz"
    content, err := client.Discovery().RESTClient().Get().AbsPath(path).DoRaw()
    if err != nil {
        fmt.Printf("ErrorBadRequest : %s\n", err.Error())
        os.Exit(1)
    }

    contentStr := string(content)
    if contentStr != "ok" {
        fmt.Printf("ErrorNotOk : response != 'ok' : %s\n", contentStr)
        os.Exit(1)
    }

    fmt.Printf("Success : ok!\n")
    os.Exit(0)
}

func newInClusterClient() (*kubernetes.Clientset, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return &kubernetes.Clientset{}, errors.New("Failed loading client config")
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return &kubernetes.Clientset{}, errors.New("Failed getting clientset")
    }
    return clientset, nil
}

Dockerfile

FROM golang:latest
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]

deploy.yaml

(as a CronJob)

# cronjob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: is-healthz-ok
spec:
  schedule: "*/5 * * * *" # at every fifth minute
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: is-healthz-ok
          containers:
            - name: is-healthz-ok
              image: oze4/is-healthz-ok:latest
          restartPolicy: OnFailure
---
# service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: is-healthz-ok
  namespace: default
---
# cluster role binding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: is-healthz-ok
subjects:
  - kind: ServiceAccount
    name: is-healthz-ok
    namespace: default
roleRef:
  kind: ClusterRole
  ##########################################################################
  # Instead of assigning cluster-admin you can create your own ClusterRole #
  # I used cluster-admin because this is a homelab                         #
  ##########################################################################
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---

Screenshot of a successful CronJob run


Update 1

The OP asked how to deploy an "in-cluster-client-config", so I am providing an example deployment (one that I am using).

You can find the repo here

Example deployment (I am using a CronJob, but it could be anything):

cronjob.yaml

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: remove-terminating-namespaces-cronjob
spec:
  schedule: "0 */1 * * *" # at minute 0 of each hour aka once per hour
  #successfulJobsHistoryLimit: 0
  #failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: svc-remove-terminating-namespaces
          containers:
          - name: remove-terminating-namespaces
            image: oze4/service.remove-terminating-namespaces:latest
          restartPolicy: OnFailure

rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: svc-remove-terminating-namespaces
  namespace: default
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: crb-namespace-reader-writer
subjects:
- kind: ServiceAccount
  name: svc-remove-terminating-namespaces
  namespace: default
roleRef:
  kind: ClusterRole
  ##########################################################################
  # Instead of assigning cluster-admin you can create your own ClusterRole #
  # I used cluster-admin because this is a homelab                         #
  ##########################################################################
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---

Original Answer

It sounds like what you are looking for is the "in-cluster-client-config" from client-go.

It is important to remember that when using an "in-cluster-client-config", the API calls in your Go code run under the service account of that pod. Just make sure you are testing with an account that has permission to read /livez.

I tested the following code and was able to get the livez status.

package main

import (
    "errors"
    "flag"
    "fmt"
    "path/filepath"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // I find it easiest to use "out-of-cluster" for testing
    // client, err := newOutOfClusterClient()

    client, err := newInClusterClient()
    if err != nil {
        panic(err.Error())
    }

    livez := "/livez"
    content, err := client.Discovery().RESTClient().Get().AbsPath(livez).DoRaw()
    if err != nil {
        panic(err.Error())
    }

    fmt.Println(string(content))
}

func newInClusterClient() (*kubernetes.Clientset, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return &kubernetes.Clientset{}, errors.New("Failed loading client config")
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return &kubernetes.Clientset{}, errors.New("Failed getting clientset")
    }
    return clientset, nil
}

// I find it easiest to use "out-of-cluster" for testing
func newOutOfClusterClient() (*kubernetes.Clientset, error) {
    var kubeconfig *string
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    flag.Parse()

    // use the current context in kubeconfig
    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        return nil, err
    }

    // create the clientset
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, err
    }

    return client, nil
}

Statement: the technical posts on this site follow the CC BY-SA 4.0 license; if you repost, please credit the original source.

 