
How to run Dgraph on a bare-metal Kubernetes cluster

I am trying to set up Dgraph as an HA cluster, but it won't deploy if no volumes are present.

Directly applying the provided config on a bare-metal cluster doesn't work:

$ kubectl get pod --namespace dgraph
dgraph-alpha-0                      0/1     Pending     0          112s
dgraph-ratel-7459974489-ggnql       1/1     Running     0          112s
dgraph-zero-0                       0/1     Pending     0          112s


$ kubectl describe pod/dgraph-alpha-0 --namespace dgraph
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims
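
The claims themselves never bind (presumably because there is no PersistentVolume and no default StorageClass on the cluster), which can be seen with:

kubectl get pvc --namespace dgraph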

Has anyone else had this problem? I've been experiencing this issue for several days now and cannot find a way around it. How can I have Dgraph use the cluster's local storage?

Thanks

I found a working solution myself.

I had to manually create the pv and pvc objects; Dgraph can then use them during deployment.

Here is the config I used to create the needed storageclass, pv and pvc:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
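
A minimal way to apply the above - assuming it is saved as dgraph-storage.yaml (a filename used here only for illustration) and that the /mnt/dgraph/... directories exist on the node - is to target the dgraph namespace from the question, since the PVCs are namespaced:

kubectl apply --namespace dgraph --filename dgraph-storage.yaml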

When Dgraph is deployed it latches onto the pvc:

$ kubectl get pvc -n dgraph -o wide
NAME                            STATUS   VOLUME                          CAPACITY   ACCESS MODES   STORAGECLASS   AGE     VOLUMEMODE
datadir-dgraph-dgraph-alpha-0   Bound    datadir-dgraph-dgraph-zero-2    8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-alpha-1   Bound    datadir-dgraph-dgraph-alpha-0   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-alpha-2   Bound    datadir-dgraph-dgraph-zero-0    8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-0    Bound    datadir-dgraph-dgraph-alpha-1   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-1    Bound    datadir-dgraph-dgraph-alpha-2   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-2    Bound    datadir-dgraph-dgraph-zero-1    8Gi        RWO            local          6h40m   Filesystem
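
Note that the claim and volume names do not line up one-to-one: Kubernetes binds a claim to any available PersistentVolume that satisfies its storage class, size and access modes, not by name. If a strict mapping is wanted (for example to keep alpha-0's data under /mnt/dgraph/alpha-0), one option - sketched here, not part of my original setup - is to pin each claim with spec.volumeName:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-0
spec:
  storageClassName: local
  volumeName: datadir-dgraph-dgraph-alpha-0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi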

Dgraph's configs assume a Kubernetes cluster with a working volume plugin (provisioner). In managed Kubernetes offerings (AWS, GKE, DO, etc.) this step is already taken care of by the provider.

I think the goal should be to achieve parity with cloud providers, that is, provisioning must be dynamic (in contrast, for example, with the OP's own answer, which is correct but statically provisioned - k8s docs).

When running bare-metal you have to manually configure a volume plugin before you can dynamically provision volumes (k8s docs) and thus use StatefulSets, PersistentVolumeClaims, etc. Thankfully there are many provisioners available (k8s docs). For out-of-the-box support for dynamic provisioning, any item in the list that has 'Internal Provisioner' checked will do.

So while the problem has many solutions, I ended up using NFS. To achieve dynamic provisioning I had to use an external provisioner. Fortunately this is as simple as installing a Helm chart.

  1. Install NFS (original guide) on the master node.

SSH in via a terminal and run:

sudo apt update
sudo apt install nfs-kernel-server nfs-common
  2. Create the directory Kubernetes is going to use and change its ownership:
sudo mkdir /var/nfs/kubernetes -p
sudo chown nobody:nogroup /var/nfs/kubernetes
  3. Configure NFS.

Open the file /etc/exports:

sudo nano /etc/exports

Add the following line at the bottom:

/var/nfs/kubernetes  client_ip(rw,sync,no_subtree_check)

Replace client_ip with your master node's IP. In my case this was the IP leased by my router's DHCP server to the machine running the master node (192.168.1.7).
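
If worker nodes also need to mount the share (they will, since pods can land on any node), one option is to export to the whole node subnet instead of a single client IP, for example:

/var/nfs/kubernetes  192.168.1.0/24(rw,sync,no_subtree_check)

The 192.168.1.0/24 range is only an example; use whatever subnet your nodes are on.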

  4. Restart NFS to apply the changes:
sudo systemctl restart nfs-kernel-server
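
Optionally, confirm the export is active; the directory should show up in the output of:

sudo exportfs -v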
  5. After setting up NFS on the master node, and assuming Helm is present, installing the provisioner is as simple as running:
helm install  nfs-provisioner --set nfs.server=XXX.XXX.XXX.XXX --set nfs.path=/var/nfs/kubernetes --set storageClass.defaultClass=true stable/nfs-client-provisioner

Replace the nfs.server flag with the appropriate IP/hostname of the master node/NFS server.

Note that the storageClass.defaultClass flag has to be true in order for Kubernetes to use this plugin (provisioner) by default for volume creation.
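
If the flag was omitted, the class can also be marked as default afterwards with the standard annotation from the k8s docs (the chart's class is usually named nfs-client, but verify the name first):

kubectl patch storageclass nfs-client -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'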

The nfs.path flag is the same path as the one created in step 2.

In case Helm complains that it cannot find the chart, run helm repo add stable https://kubernetes-charts.storage.googleapis.com/
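
A quick sanity check after the install is to list the storage classes and confirm the NFS one is marked as (default):

kubectl get storageclass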

  6. After successfully completing the previous steps, proceed to install the Dgraph configs as described in their docs and enjoy your bare-metal, dynamically provisioned cluster with an out-of-the-box working Dgraph deployment.

Single server

kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single/dgraph-single.yaml

HA Cluster

kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml
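
Either way, once the Dgraph pods come up the claims should bind automatically to dynamically provisioned volumes, which can be confirmed (in whatever namespace the manifests were applied to) with:

kubectl get pods,pvc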
