
Kubernetes: how to scale my pods

I'm new to Kubernetes. I'm trying to scale my pods. First I started 3 pods:

./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80

Three pods were started. First I tried to scale up/down by using a replication controller, but no replication controller existed. It seems to be a ReplicaSet now.

./cluster/kubectl.sh get rs
NAME                  DESIRED   CURRENT   AGE
my-nginx-2494149703   3         3         9h

I tried to change the number of replicas described in my ReplicaSet:

./cluster/kubectl.sh scale --replicas=5 rs/my-nginx-2494149703
replicaset "my-nginx-2494149703" scaled

But I still see my 3 original pods:

./cluster/kubectl.sh get pods
NAME                        READY     STATUS    RESTARTS   AGE
my-nginx-2494149703-04xrd   1/1       Running   0          9h
my-nginx-2494149703-h3krk   1/1       Running   0          9h
my-nginx-2494149703-hnayu   1/1       Running   0          9h

I would expect to see 5 pods.

./cluster/kubectl.sh describe rs/my-nginx-2494149703
Name:       my-nginx-2494149703
Namespace:  default
Image(s):   nginx
Selector:   pod-template-hash=2494149703,run=my-nginx
Labels:     pod-template-hash=2494149703
        run=my-nginx
Replicas:   3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed

Why isn't it scaling up? Do I also have to change something in the deployment?

I see something like this when I describe my rs after scaling up (here I try to scale from one running pod to 3 running pods), but it remains one running pod. The other 2 are started and killed immediately:

  34s       34s     1   {replicaset-controller }            Normal      SuccessfulCreate    Created pod: my-nginx-1908062973-lylsz
  34s       34s     1   {replicaset-controller }            Normal      SuccessfulCreate    Created pod: my-nginx-1908062973-5rv8u
  34s       34s     1   {replicaset-controller }            Normal      SuccessfulDelete    Deleted pod: my-nginx-1908062973-lylsz
  34s       34s     1   {replicaset-controller }            Normal      SuccessfulDelete    Deleted pod: my-nginx-1908062973-5rv8u

This is working for me:

kubectl scale --replicas=<expected_replica_num> deployment <deployment_label_name>

Example:

# kubectl scale --replicas=3 deployment xyz

TL;DR: You need to scale your deployment instead of the replica set directly.

If you try to scale the replica set, then it will (for a very short time) have a new count of 5. But the deployment controller will see that the current count of the replica set is 5, and since it knows that it is supposed to be 3, it will reset it back to 3. By manually modifying the replica set that was created for you, you are fighting the system controller (which is untiring and will pretty much always outlast you).

kubectl run my-nginx --image=nginx --replicas=3 --port=80 — here kubectl run creates a deployment or job to manage the created container(s).
Deployment --> ReplicaSet --> Pod: this is how Kubernetes works.
If you change the bottom-level object, its higher-level object will undo your change. You have to change the top-level object.
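Putting this together, the fix for the original question is to scale the Deployment that kubectl run created, and let the deployment controller adjust the ReplicaSet and Pods underneath it (a sketch using the deployment name my-nginx from the question; this requires a running cluster):

```shell
# Scale the top-level Deployment, not the ReplicaSet it manages
kubectl scale deployment my-nginx --replicas=5

# The deployment controller updates the ReplicaSet, which creates the pods
kubectl get rs      # DESIRED/CURRENT should now show 5
kubectl get pods    # five my-nginx-* pods
```

Scaling the Deployment this way survives controller reconciliation, because you changed the desired state at the top of the chain instead of fighting it from below.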


Scale it down to zero and then up to the number of pods you require (I guess that equals 3):

kubectl scale deployment <deployment-name> --replicas=0 -n <namespace>
kubectl scale deployment <deployment-name> --replicas=3 -n <namespace>

The example below shows how to scale your deployments up and down:

k8smaster@k8smaster:~/debashish$ more createdeb_deployment1.yaml 


--- 
apiVersion: apps/v1beta2
kind: Deployment
metadata: 
  name: debdeploy-webserver
spec: 
  replicas: 1
  selector: 
    matchLabels: 
      app: debdeploy1webserver
  template: 
    metadata: 
      labels: 
        app: debdeploy1webserver
    spec: 
      containers: 
        - 
          image: "docker.io/debu3645/debapachewebserver:v1"
          name: deb-deploy1-container 
          ports: 
            - 
              containerPort: 6060

Create the deployment:

kubectl -n debns1 create -f createdeb_deployment1.yaml
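A note on the manifest above: the apps/v1beta2 API group was removed in Kubernetes 1.16, so on current clusters the same deployment would be written with apiVersion apps/v1 (only the apiVersion line changes; everything else is as in the original manifest):

```yaml
# Same deployment, updated for Kubernetes >= 1.16
apiVersion: apps/v1
kind: Deployment
metadata:
  name: debdeploy-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debdeploy1webserver
  template:
    metadata:
      labels:
        app: debdeploy1webserver
    spec:
      containers:
        - image: "docker.io/debu3645/debapachewebserver:v1"
          name: deb-deploy1-container
          ports:
            - containerPort: 6060
```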




k8smaster@k8smaster:~/debashish$ kubectl scale --replicas=5 deployment/debdeploy-webserver -n debns1

(Scale up to 5 replicas)

k8smaster@k8smaster:~/debashish$ kubectl get pods -n debns1


NAME                                   READY   STATUS    RESTARTS   AGE
debdeploy-webserver-7cf4fb74c5-8wvzx   1/1     Running   0          16s
debdeploy-webserver-7cf4fb74c5-jrf6v   1/1     Running   0          16s
debdeploy-webserver-7cf4fb74c5-m9fpw   1/1     Running   0          16s
debdeploy-webserver-7cf4fb74c5-q9n7r   1/1     Running   0          16s
debdeploy-webserver-7cf4fb74c5-ttw6p   1/1     Running   1          19h
resourcepod-deb1                       1/1     Running   5          6d18h




k8smaster@k8smaster:~/debashish$ kubectl get ep -n debns1



NAME                ENDPOINTS                                                     AGE
frontend-svc-deb    192.168.1.10:80,192.168.1.11:80,192.168.1.12:80 + 2 more...   18h
frontend-svc1-deb   192.168.1.8:80                                                14d
frontend-svc2-deb   192.168.1.8:80                                                5d19h




k8smaster@k8smaster:~/debashish$ kubectl scale --replicas=2 deployment/debdeploy-webserver -n debns1

(Scale down from 5 to 2)

deployment.extensions/debdeploy-webserver scaled

k8smaster@k8smaster:~/debashish$ kubectl get pods -n debns1


NAME                                   READY   STATUS        RESTARTS   AGE
debdeploy-webserver-7cf4fb74c5-8wvzx   1/1     Terminating   0          35m
debdeploy-webserver-7cf4fb74c5-jrf6v   1/1     Terminating   0          35m
debdeploy-webserver-7cf4fb74c5-m9fpw   1/1     Terminating   0          35m
debdeploy-webserver-7cf4fb74c5-q9n7r   1/1     Running       0          35m
debdeploy-webserver-7cf4fb74c5-ttw6p   1/1     Running       1          19h
resourcepod-deb1                       1/1     Running       5          6d19h


k8smaster@k8smaster:~/debashish$ kubectl get pods -n debns1


NAME                                   READY   STATUS    RESTARTS   AGE
debdeploy-webserver-7cf4fb74c5-q9n7r   1/1     Running   0          37m
debdeploy-webserver-7cf4fb74c5-ttw6p   1/1     Running   1          19h
resourcepod-deb1                       1/1     Running   5          6d19h

k8smaster@k8smaster:~/debashish$ kubectl scale --current-replicas=4 --replicas=2 deployment/debdeploy-webserver -n debns1

(--current-replicas is a precondition: scale down to 2 only if the current replica count is exactly 4; otherwise don't do anything.)


error: Expected replicas to be 4, was 2


k8smaster@k8smaster:~/debashish$ kubectl scale --current-replicas=3 --replicas=10 deployment/debdeploy-webserver -n debns1


error: Expected replicas to be 3, was 2


k8smaster@k8smaster:~/debashish$ kubectl scale --current-replicas=2 --replicas=10 deployment/debdeploy-webserver -n debns1

deployment.extensions/debdeploy-webserver scaled

k8smaster@k8smaster:~/debashish$ kubectl get pods -n debns1


    NAME                                   READY   STATUS              RESTARTS   AGE
    debdeploy-webserver-7cf4fb74c5-46bxg   1/1     Running             0          6s
    debdeploy-webserver-7cf4fb74c5-d6qsx   0/1     ContainerCreating   0          6s
    debdeploy-webserver-7cf4fb74c5-fdq6v   1/1     Running             0          6s
    debdeploy-webserver-7cf4fb74c5-gd87t   1/1     Running             0          6s
    debdeploy-webserver-7cf4fb74c5-kqdbj   0/1     ContainerCreating   0          6s
    debdeploy-webserver-7cf4fb74c5-q9n7r   1/1     Running             0          47m
    debdeploy-webserver-7cf4fb74c5-qjvm6   1/1     Running             0          6s
    debdeploy-webserver-7cf4fb74c5-skxq4   0/1     ContainerCreating   0          6s
    debdeploy-webserver-7cf4fb74c5-ttw6p   1/1     Running             1          19h
    debdeploy-webserver-7cf4fb74c5-wlc7q   0/1     ContainerCreating   0          6s
    resourcepod-deb1                       1/1     Running             5          6d19h

Not sure if this is the best way, as I'm just starting out with Kubernetes, but I did this by updating my yaml file:

# app.yaml
apiVersion: apps/v1
...
spec:
  replicas: <new value>

and running $ kubectl scale -f app.yaml --replicas=<new value>

You can verify your new number of replicas by running $ kubectl get pods.

In my case I was also interested in scaling down my VMs on Google Cloud. I did this with $ gcloud container clusters resize appName --size=1 --zone "my-zone".
