How to use a local cluster with Skaffold while using kubeadm for Kubernetes?
I am trying to deploy my NodeJS application on a local Kubernetes cluster using skaffold, but I get the following result:
DEBU[0018] Pod "expiration-depl-7989dc5ff4-lkpvw" scheduled but not ready: checking container statuses subtask=-1 task=DevLoop
DEBU[0018] marking resource failed due to error code STATUSCHECK_IMAGE_PULL_ERR subtask=-1 task=Deploy
- deployment/expiration-depl: container expiration is waiting to start: learnertester/expiration:8c6b05f89e0abe8e6a33da266355cf79713e6bd22d1abda0da5541f24d5d8d9e can't be pulled
- pod/expiration-depl-7989dc5ff4-lkpvw: container expiration is waiting to start: learnertester/expiration:8c6b05f89e0abe8e6a33da266355cf79713e6bd22d1abda0da5541f24d5d8d9e can't be pulled
- deployment/expiration-depl failed. Error: container expiration is waiting to start: learnertester/expiration:8c6b05f89e0abe8e6a33da266355cf79713e6bd22d1abda0da5541f24d5d8d9e can't be pulled.
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED subtask=-1 task=Deploy
DEBU[0018] pod statuses could not be fetched this time due to following errors occurred context canceled subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED subtask=-1 task=Deploy
DEBU[0018] pod statuses could not be fetched this time due to following errors occurred context canceled subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED subtask=-1 task=Deploy
DEBU[0018] setting skaffold deploy status to STATUSCHECK_IMAGE_PULL_ERR. subtask=-1 task=Deploy
Cleaning up...
DEBU[0018] Running command: [kubectl --context kubernetes-admin@kubernetes delete --ignore-not-found=true --wait=false -f -] subtask=-1 task=DevLoop
- deployment.apps "auth-depl" deleted
- service "auth-srv" deleted
- deployment.apps "auth-mongo-depl" deleted
- service "auth-mongo-srv" deleted
- deployment.apps "client-depl" deleted
- service "client-srv" deleted
- deployment.apps "expiration-depl" deleted
- deployment.apps "expiration-redis-depl" deleted
- service "expiration-redis-srv" deleted
- ingress.networking.k8s.io "ingress-service" deleted
- deployment.apps "nats-depl" deleted
- service "nats-srv" deleted
- deployment.apps "orders-depl" deleted
- service "orders-srv" deleted
- deployment.apps "orders-mongo-depl" deleted
- service "orders-mongo-srv" deleted
- deployment.apps "payments-depl" deleted
- service "payments-srv" deleted
- deployment.apps "payments-mongo-depl" deleted
- service "payments-mongo-srv" deleted
- deployment.apps "tickets-depl" deleted
- service "tickets-srv" deleted
- deployment.apps "tickets-mongo-depl" deleted
- service "tickets-mongo-srv" deleted
INFO[0054] Cleanup completed in 35.7 seconds subtask=-1 task=DevLoop
DEBU[0054] Running command: [tput colors] subtask=-1 task=DevLoop
DEBU[0054] Command output: [256
] subtask=-1 task=DevLoop
1/12 deployment(s) failed
This is the expiration-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: expiration-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: expiration
  template:
    metadata:
      labels:
        app: expiration
    spec:
      containers:
        - name: expiration
          image: learnertester/expiration
          env:
            - name: NATS_CLIENT_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NATS_URL
              value: 'http://nats-srv:4222'
            - name: NATS_CLUSTER_ID
              value: ticketing
            - name: REDIS_HOST
              value: expiration-redis-srv
And this is the expiration-redis-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: expiration-redis-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: expiration-redis
  template:
    metadata:
      labels:
        app: expiration-redis
    spec:
      containers:
        - name: expiration-redis
          image: redis
---
apiVersion: v1
kind: Service
metadata:
  name: expiration-redis-srv
spec:
  selector:
    app: expiration-redis
  ports:
    - name: db
      protocol: TCP
      port: 6379
      targetPort: 6379
And this is the skaffold.yaml:

apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: learnertester/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/ticketing-client
      context: client
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: '**/*.js'
            dest: .
    - image: learnertester/tickets
      context: tickets
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/orders
      context: orders
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/expiration
      context: expiration
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/payments
      context: payments
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
First of all, what 'local Kubernetes' are you using?
In the deploy section of skaffold.yaml, you need to specify which k8s context you want to use for the deployments, like so:
deploy:
  kubeContext: minikube
To check the available k8s contexts on your machine, type:
kubectl config get-contexts
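For a kubeadm-based cluster, the context name will not be minikube. As a sketch of how the question's own skaffold.yaml could be adapted, assuming the context name kubernetes-admin@kubernetes that appears in the question's skaffold logs (substitute whatever name `kubectl config get-contexts` actually reports on your machine):

```yaml
# deploy section of skaffold.yaml only; the rest stays as in the question.
# kubernetes-admin@kubernetes is an assumption taken from the question's logs --
# use the NAME column from `kubectl config get-contexts` on your machine.
deploy:
  kubeContext: kubernetes-admin@kubernetes
  kubectl:
    manifests:
      - ./infra/k8s/*
```

Note that with `build.local.push: false`, skaffold only skips pushing when it detects a local cluster (such as minikube or docker-desktop); with a kubeadm cluster the nodes cannot see images that exist only in your local Docker daemon, which is consistent with the STATUSCHECK_IMAGE_PULL_ERR in the question's output.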