How to run Dgraph on a bare-metal Kubernetes cluster
I am trying to set up Dgraph in an HA cluster, but it will not deploy unless the volumes already exist. Applying the provided config directly on a bare-metal cluster does not work:
$ kubectl get pod --namespace dgraph
NAME                            READY   STATUS    RESTARTS   AGE
dgraph-alpha-0                  0/1     Pending   0          112s
dgraph-ratel-7459974489-ggnql   1/1     Running   0          112s
dgraph-zero-0                   0/1     Pending   0          112s
$ kubectl describe pod/dgraph-alpha-0 --namespace dgraph
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling <unknown> default-scheduler error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims
Has anyone else run into this? I have been stuck on it for days and cannot find a solution. How can I get Dgraph to use the cluster's local storage?
Thanks
Found a working solution myself.
I had to create the PersistentVolumes (pv) and PersistentVolumeClaims (pvc) manually; Dgraph can then use them during deployment.
Here is the config I used to create the required storageclass, pv and pvc objects:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
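The PV and PVC definitions above differ only in name and host path, so they can also be generated with a short shell loop instead of being written out by hand. This is only a sketch that reproduces the same twelve manifests; the names and paths match the config above:

```shell
#!/bin/sh
# gen_manifests: emit one PersistentVolume and one PersistentVolumeClaim
# per Dgraph pod. Names and hostPath directories match the manifests above.
gen_manifests() {
  for member in alpha-0 alpha-1 alpha-2 zero-0 zero-1 zero-2; do
    cat <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-${member}
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/${member}"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-${member}
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
EOF
  done
}

# Print all twelve manifests; to create them, pipe into kubectl, e.g.:
#   gen_manifests | kubectl apply --namespace dgraph -f -
gen_manifests
```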
When Dgraph is deployed it locks onto the pvc. Note that which pv backs which claim is arbitrary, since any available pv of the storage class satisfies any claim:
$ kubectl get pvc -n dgraph -o wide
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
datadir-dgraph-dgraph-alpha-0 Bound datadir-dgraph-dgraph-zero-2 8Gi RWO local 6h40m Filesystem
datadir-dgraph-dgraph-alpha-1 Bound datadir-dgraph-dgraph-alpha-0 8Gi RWO local 6h40m Filesystem
datadir-dgraph-dgraph-alpha-2 Bound datadir-dgraph-dgraph-zero-0 8Gi RWO local 6h40m Filesystem
datadir-dgraph-dgraph-zero-0 Bound datadir-dgraph-dgraph-alpha-1 8Gi RWO local 6h40m Filesystem
datadir-dgraph-dgraph-zero-1 Bound datadir-dgraph-dgraph-alpha-2 8Gi RWO local 6h40m Filesystem
datadir-dgraph-dgraph-zero-2 Bound datadir-dgraph-dgraph-zero-1 8Gi RWO local 6h40m Filesystem
Dgraph's config assumes a Kubernetes cluster with a working volume plugin (provisioner). On managed Kubernetes offerings (AWS, GKE, DO, etc.) this step has already been taken care of by the provider.
In my view the goal should be parity with the cloud providers, i.e. provisioning must be dynamic (as opposed to the OP's own answer, which is correct but statically provisioned — k8s docs).
When running on bare metal you have to configure a volume plugin manually before volumes can be provisioned dynamically (k8s docs) and StatefulSets, PersistentVolumeClaims etc. can be used. Fortunately there are many provisioners available (k8s docs); for out-of-the-box support for dynamic provisioning, any item in that list with "Internal Provisioner" checked will do.
So while the problem has many solutions, I ended up using NFS. To get dynamic provisioning I had to use an external provisioner. Luckily this is as simple as installing a Helm chart.
SSH into the server and run the following from a terminal:
sudo apt update
sudo apt install nfs-kernel-server nfs-common
sudo mkdir /var/nfs/kubernetes -p
sudo chown nobody:nogroup /var/nfs/kubernetes
Open the file /etc/exports:
sudo nano /etc/exports
Add the following line at the bottom:
/var/nfs/kubernetes client_ip(rw,sync,no_subtree_check)
Replace client_ip with your master node's IP. In my case this was the DHCP lease my router assigned to the machine running the master node (192.168.1.7).
sudo systemctl restart nfs-kernel-server
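For scripting this step, the export line can be composed from a variable instead of edited by hand. This is only a sketch; 192.168.1.7 stands in for whatever address your master node actually has:

```shell
#!/bin/sh
# Compose the /etc/exports entry for the Kubernetes master node.
CLIENT_IP=192.168.1.7
EXPORT_LINE="/var/nfs/kubernetes ${CLIENT_IP}(rw,sync,no_subtree_check)"

# Append it to /etc/exports (needs root) and reload the export table:
#   echo "$EXPORT_LINE" | sudo tee -a /etc/exports
#   sudo exportfs -ra
echo "$EXPORT_LINE"
```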
helm install nfs-provisioner --set nfs.server=XXX.XXX.XXX.XXX --set nfs.path=/var/nfs/kubernetes --set storageClass.defaultClass=true stable/nfs-client-provisioner
Replace the nfs.server flag with the appropriate IP or hostname of your master node / NFS server.
Note that the storageClass.defaultClass flag must be true so that Kubernetes uses the plugin (provisioner) to create volumes by default.
The nfs.path flag is the same path that was created in step 2.
If Helm complains that it cannot find the chart, run:
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
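To confirm that dynamic provisioning works before installing Dgraph, you can create a throwaway claim and check that it binds. This is a sketch; the name test-claim is arbitrary:

```yaml
# test-pvc.yaml -- a scratch claim; with storageClass.defaultClass=true the
# NFS provisioner should create and bind a volume for it automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
```

Apply it with kubectl apply -f test-pvc.yaml; kubectl get pvc test-claim should then show STATUS Bound. Clean up afterwards with kubectl delete -f test-pvc.yaml.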
Single server:
kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single/dgraph-single.yaml
HA cluster:
kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml