How to Autoscale a CouchDB Cluster on Kubernetes
I am deploying a CouchDB cluster on Kubernetes. It works, but I run into errors when I try to scale it.
I tried to scale my StatefulSet, and when I describe couchdb-3 I get this error:
0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
When I describe the hpa I get this error:
invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: missing request for cpu
failed to get cpu utilization: missing request for cpu
I ran "kubectl get pod -o wide" and got the following result:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
couchdb-0 1/1 Running 0 101m 10.244.2.13 node2 <none> <none>
couchdb-1 1/1 Running 0 101m 10.244.2.14 node2 <none> <none>
couchdb-2 1/1 Running 0 100m 10.244.2.15 node2 <none> <none>
couchdb-3 0/1 Pending 0 15m <none> <none> <none> <none>
How can I fix this?
My hpa file:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-couchdb
spec:
  maxReplicas: 16
  minReplicas: 6
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: couchdb
  targetCPUUtilizationPercentage: 50
pv.yaml:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-0
  labels:
    volume: couch-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-1
  labels:
    volume: couch-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-2
  labels:
    volume: couch-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-2"
I have set up NFS in /etc/exports: /var/couchnfs 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
statefulset.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: couchdb
  labels:
    app: couch
spec:
  replicas: 3
  serviceName: "couch-service"
  selector:
    matchLabels:
      app: couch
  template:
    metadata:
      labels:
        app: couch # pod label
    spec:
      containers:
        - name: couchdb
          image: couchdb:2.3.1
          imagePullPolicy: "Always"
          env:
            - name: NODE_NETBIOS_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NODENAME
              value: $(NODE_NETBIOS_NAME).couch-service # FQDN in vm.args
            - name: COUCHDB_USER
              value: admin
            - name: COUCHDB_PASSWORD
              value: admin
            - name: COUCHDB_SECRET
              value: b1709267
            - name: ERL_FLAGS
              # Note: env names must be unique; with two ERL_FLAGS entries the last
              # one wins, so both flags are combined here. -setcookie is the
              # "password" used when nodes connect to each other.
              value: "-name couchdb@$(NODENAME) -setcookie b1709267"
          ports:
            - name: couchdb
              containerPort: 5984
            - name: epmd
              containerPort: 4369
            - containerPort: 9100
          volumeMounts:
            - name: couch-pvc
              mountPath: /opt/couchdb/data
  volumeClaimTemplates:
    - metadata:
        name: couch-pvc
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
        selector:
          matchLabels:
            volume: couch-volume
service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: couch-service
  namespace: default
  labels:
    app: couch
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: 5984
      protocol: TCP
      targetPort: 5984
  selector:
    app: couch # label selector
---
kind: Service
apiVersion: v1
metadata:
  name: couch-nodep-svc
  labels:
    app: couch
spec:
  type: NodePort # NodePort service
  ports:
    - port: 5984
      nodePort: 30984 # external port
      protocol: TCP
  selector:
    app: couch # label selector
You have 3 persistent volumes and 3 pods, each claiming one. A PV can be bound by only one PersistentVolumeClaim, and the StatefulSet creates a new claim for every replica, so the fourth pod (couchdb-3) has no PV left to bind and stays Pending. Since you are using NFS as the backend, you can use dynamic provisioning of persistent volumes instead of creating them by hand.
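A minimal sketch of what dynamic provisioning could look like, assuming an NFS provisioner such as nfs-subdir-external-provisioner has been installed and pointed at your NFS server (the StorageClass name `nfs-client` and the provisioner string are assumptions, not something from your setup):

```yaml
# StorageClass backed by an NFS provisioner (installed separately,
# configured to use 192.168.1.100:/var/couchnfs).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client   # hypothetical name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
---
# In the StatefulSet, drop the selector and reference the StorageClass
# instead, so a PV is created automatically for every new replica:
volumeClaimTemplates:
  - metadata:
      name: couch-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: nfs-client
      resources:
        requests:
          storage: 10Gi
```

With this in place, scaling up creates a fresh PVC per replica and the provisioner carves out a matching directory on the NFS share, so you no longer need one hand-written PV per pod.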
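For the HPA error: "missing request for cpu" means the container declares no CPU request, so the autoscaler has nothing to compute a utilization percentage against. Adding a resources block to the couchdb container should fix it (the exact values below are illustrative, not tuned for CouchDB):

```yaml
# Inside the couchdb container spec of the StatefulSet;
# the HPA divides observed usage by requests.cpu to get utilization.
resources:
  requests:
    cpu: "250m"      # illustrative value
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```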