
Installing k0s using the getting started guide shows no nodes

I tried to install k0s on my VPS using the getting started guide.

After running:

k0s status

It shows:

Role: controller

Whereas in the getting started guide it shows:

Role: controller+worker

I suspect this is why, when I then try to list the nodes:

k0s kubectl get nodes

I see:

No resources found

How do I tell it to run as controller+worker?

EDIT (logs):

root@n132:~# sudo systemctl status k0scontroller
● k0scontroller.service - k0s - Zero Friction Kubernetes
     Loaded: loaded (/etc/systemd/system/k0scontroller.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2023-01-08 12:45:14 UTC; 47s ago
       Docs: https://docs.k0sproject.io
   Main PID: 7047 (k0s)
      Tasks: 75
     Memory: 351.5M
        CPU: 7.253s
     CGroup: /system.slice/k0scontroller.service
             ├─7047 /usr/local/bin/k0s controller --single=true
             ├─7053 /var/lib/k0s/bin/kine --endpoint=sqlite:///var/lib/k0s/db/state.db?mode=rwc&_journal=WAL&cache=shared --listen-address=unix:///run/k0s/kine/kine.sock:2379
             ├─7061 /var/lib/k0s/bin/kube-apiserver --proxy-client-key-file=/var/lib/k0s/pki/front-proxy-client.key --requestheader-client-ca-file=/var/lib/k0s/pki/front-proxy-ca.crt --service-account-key-file=/var/lib/k0s/pki/sa.pub --tls-cert->
             ├─7100 /var/lib/k0s/bin/containerd --root=/var/lib/k0s/containerd --state=/run/k0s/containerd --address=/run/k0s/containerd.sock --log-level=info --config=/etc/k0s/containerd.toml
             ├─7108 /var/lib/k0s/bin/kube-scheduler --bind-address=127.0.0.1 --leader-elect=false --profiling=false --authentication-kubeconfig=/var/lib/k0s/pki/scheduler.conf --authorization-kubeconfig=/var/lib/k0s/pki/scheduler.conf --kubeconf>
             └─7112 /var/lib/k0s/bin/kube-controller-manager --cluster-signing-cert-file=/var/lib/k0s/pki/ca.crt --cluster-signing-key-file=/var/lib/k0s/pki/ca.key --service-account-private-key-file=/var/lib/k0s/pki/sa.key --v=1 --client-ca-file>
sty 08 12:45:58 n132 k0s[7047]: time="2023-01-08 12:45:58" level=info msg="I0108 12:45:58.125902    7211 state_mem.go:36] \"Initialized new in-memory state store\"" component=kubelet
sty 08 12:45:58 n132 k0s[7047]: time="2023-01-08 12:45:58" level=info msg="I0108 12:45:58.326296    7211 server.go:764] \"Failed to ApplyOOMScoreAdj\" err=\"write /proc/self/oom_score_adj: permission denied\"" component=kubelet
sty 08 12:45:58 n132 k0s[7047]: time="2023-01-08 12:45:58" level=info msg="I0108 12:45:58.328617    7211 kubelet.go:381] \"Attempting to sync node with API server\"" component=kubelet
sty 08 12:45:58 n132 k0s[7047]: time="2023-01-08 12:45:58" level=info msg="I0108 12:45:58.328629    7211 kubelet.go:281] \"Adding apiserver pod source\"" component=kubelet
sty 08 12:45:58 n132 k0s[7047]: time="2023-01-08 12:45:58" level=info msg="I0108 12:45:58.328637    7211 apiserver.go:42] \"Waiting for node sync before watching apiserver pods\"" component=kubelet
sty 08 12:45:58 n132 k0s[7047]: time="2023-01-08 12:45:58" level=info msg="E0108 12:45:58.328931    7211 kubelet.go:461] \"Failed to create an oomWatcher (running in UserNS, Hint: enable KubeletInUserNamespace feature flag to ignore the error)\">
sty 08 12:45:58 n132 k0s[7047]: time="2023-01-08 12:45:58" level=info msg="E0108 12:45:58.328946    7211 run.go:74] \"command failed\" err=\"failed to run Kubelet: failed to create kubelet: open /dev/kmsg: no such file or directory\"" component=>
sty 08 12:45:58 n132 k0s[7047]: time="2023-01-08 12:45:58" level=warning msg="exit status 1" component=kubelet
sty 08 12:45:58 n132 k0s[7047]: time="2023-01-08 12:45:58" level=info msg="respawning in 5s" component=kubelet
sty 08 12:46:00 n132 k0s[7047]: time="2023-01-08 12:46:00" level=info msg="current cfg matches existing, not gonna do anything" component=coredns

EDIT 2: The entire VPS is an LXC container.
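For context, from what I have read, the kubelet error in the log above (open /dev/kmsg: no such file or directory) is common when Kubernetes runs inside an LXC container, because /dev/kmsg is normally not exposed to containers. A workaround that is often suggested, assuming a privileged container on a plain LXC host (syntax may differ on Proxmox and the like), is to bind-mount it via the container's config on the host:

# On the LXC host, in the container's config file (assumption: plain LXC syntax)
lxc.mount.entry = /dev/kmsg dev/kmsg none defaults,bind,create=file

After restarting the container, the kubelet should at least get past this particular failure.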

I did some research and tested k0s, and then I realized something.

If you run a node as a controller only, the output of k0s status is

Version: v1.25.4+k0s.0
Process ID: 1562
Role: controller
Workloads: false
SingleNode: false

and the output of k0s kubectl get nodes is

No resources found
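(For reference, this controller-only state is what you get from a plain install with no worker-related flags, e.g. something like:

$ sudo k0s install controller
$ sudo k0s start

The controller then runs only the control plane and no kubelet, so there are no node objects to list.)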

If you run a node as a controller+worker, the output of k0s status is

Version: v1.25.4+k0s.0
Process ID: 2918
Role: controller
Workloads: true
SingleNode: true
Kube-api probing successful: true
Kube-api probing last error:   

and the output of k0s kubectl get nodes is

NAME     STATUS   ROLES           AGE     VERSION
ubuntu   Ready    control-plane   4m56s   v1.25.4+k0s

So, in either case, Role will never show controller+worker; the Workloads and SingleNode fields are what actually distinguish a controller-only node from a controller+worker one.
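(As a side note: if you want the controller to run workloads but still be able to join more workers later, k0s also has an --enable-worker flag; assuming an otherwise default setup, something like:

$ sudo k0s install controller --enable-worker --force

With --single, by contrast, the cluster is limited to exactly one node.)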

It seems that you did not install it correctly. Run the following commands to fix the problem:

$ sudo k0s install controller --single --force
$ sudo systemctl daemon-reload

$ sudo k0s stop
$ sudo k0s start
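
Once the service is back up, you can confirm that the change took effect:

$ sudo k0s status
$ sudo k0s kubectl get nodes

k0s status should now report Workloads: true and SingleNode: true, and the node should show up as Ready.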
