
Redis sharded cluster on Azure Kubernetes

I have followed these links

  1. https://medium.com/zero-to/setup-persistence-redis-cluster-in-kubertenes-7d5b7ffdbd98
  2. https://github.com/sanderploegsma/redis-cluster

to build a Redis sharded cluster on AKS. The nodes join the cluster using their Pod IPs, but I need to connect to the cluster from "Python" to feed data into it. Since the nodes are connected internally via Pod IPs, I am not able to connect from Python. Alternatively, instead of 6 replicas of one StatefulSet, I have created 6 different StatefulSets with 6 services, all exposed externally as type "LoadBalancer". However, this is the command the guides use to create the cluster:

kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 \
$(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')

I don't know how to edit it to use the LoadBalancer IPs instead of the Pod IPs, so with these 6 external IPs I am not able to create a cluster. I need a Redis sharded cluster on Azure Kubernetes Service that is accessible externally from the Python library "redis-py-cluster".
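For reference, redis-py-cluster expects a list of startup nodes rather than a single address. A minimal sketch of turning the space-separated "IP:port" list printed by the kubectl jsonpath command into that format (the IPs below are placeholders, not real cluster addresses):

```python
def parse_startup_nodes(ip_list: str):
    """Turn kubectl jsonpath output ("ip:port ip:port ...") into
    the startup_nodes structure redis-py-cluster expects."""
    nodes = []
    for entry in ip_list.split():
        host, _, port = entry.rpartition(":")
        nodes.append({"host": host, "port": int(port)})
    return nodes

# Placeholder addresses for illustration only:
startup_nodes = parse_startup_nodes("10.12.0.22:6379 10.12.1.17:6379")
print(startup_nodes)
# → [{'host': '10.12.0.22', 'port': 6379}, {'host': '10.12.1.17', 'port': 6379}]

# Passing these to the client would then look roughly like:
#   from rediscluster import RedisCluster   # pip install redis-py-cluster
#   rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True)
```

Note that the client must be able to reach every address the cluster advertises, which is exactly where Pod-internal IPs become a problem.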

Thanks in advance

Theory

It is not very clear from your question, as there is no answer to my comment under the initial post, but it looks like you have been following this guide: Setup Persistence Redis Cluster in Kubernetes.

Let's decompose the command you are referring to:

kubectl exec -it redis-cluster-0 -- redis-trib create --replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')

The kubectl syntax is:

kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] [options]

That is why kubectl exec -it redis-cluster-0 stands for "Execute a command in a container" with the "Pass stdin to the container" and "Stdin is a TTY" options, on the redis-cluster-0 pod.

The second part is redis-trib create --replicas 1 <IPs>, and

$(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')

merely lists the IPs of the pods in your StatefulSet(s) created with redis-cluster-deployment.yaml.

Since the app=redis-cluster label used in that command is taken from spec/template/metadata/labels, you can adjust the command as you like, or simply list the IPs manually instead of getting them with kubectl get pods -l:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
...
spec:
  serviceName: redis-cluster
  replicas: 6
  template:
    metadata:
      labels:
        app: redis-cluster    # This is the label we are using in that command

My answer:

You can either use the IPs directly or adjust the labels used in the kubectl get pods -l ... part of the command. I can't provide a more precise answer, as I didn't receive the steps to reproduce.

My attempt:

I'm not sure how exactly you've been creating your StatefulSets; however, you can still create a Redis cluster on them.

In my case I used the following YAMLs: redis-cluster-deployment-1.yaml and redis-cluster-deployment-2.yaml.

$ kubectl create -f redis-cluster-deployment-1.yaml 
statefulset.apps/redis-cluster-set-1 created

$ kubectl create -f redis-cluster-deployment-2.yaml 
statefulset.apps/redis-cluster-set-2 created

$ kubectl get pods -o wide 
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE                                            
redis-cluster-set-1-0   1/1     Running   0          52m   10.12.0.22   gke-6v3n
redis-cluster-set-1-1   1/1     Running   0          51m   10.12.1.17   gke-m7z8
redis-cluster-set-1-2   1/1     Running   0          50m   10.12.1.18   gke-m7z8
redis-cluster-set-2-0   1/1     Running   0          51m   10.12.0.23   gke-6v3n
redis-cluster-set-2-1   1/1     Running   0          50m   10.12.1.19   gke-m7z8
redis-cluster-set-2-2   1/1     Running   0          14m   10.12.0.24   gke-6v3n


$ kubectl exec -it redis-cluster-set-1-0 -- redis-trib create --replicas 1 $(kubectl get pods -l app=redis-cluster-set-app -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
10.12.0.22:6379
10.12.1.17:6379
10.12.1.18:6379
Adding replica 10.12.1.19:6379 to 10.12.0.22:6379
Adding replica 10.12.0.24:6379 to 10.12.1.17:6379
Adding replica 10.12.0.23:6379 to 10.12.1.18:6379
....

The important part here is to specify the correct pod name and app= label.

In my case, the $(kubectl get pods -l app=redis-cluster-set-app -o jsonpath='{range.items[*]}{.status.podIP}:6379 ') command results in the following list of IPs (you can compare them with the output above):

$ kubectl get pods -l app=redis-cluster-set-app -o jsonpath='{range.items[*]}{.status.podIP}:6379 '
10.12.0.22:6379 10.12.1.17:6379 10.12.1.18:6379 10.12.0.23:6379 10.12.1.19:6379 10.12.0.24:6379

Hope that helps.

Update (06-Dec-2019):

I have created a service of type "LoadBalancer" and I am pretty sure I have given a correct "serviceName" under "spec". That LoadBalancer gave me an external IP. After I created a Redis sharded cluster, when I try to connect to that external IP using Python, it is able to connect to only one Redis node, asks for the other 5 nodes, and eventually fails with a "Timeout" error.

I did the same and tested a Redis cluster on 3 nodes:

$ kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           
redis-cluster-set-1-0   1/1     Running   0          20h   10.12.1.30
redis-cluster-set-1-1   1/1     Running   0          20h   10.12.0.29
redis-cluster-set-1-2   1/1     Running   0          20h   10.12.1.31

I created some data and tried to get it back out of the Redis cluster.

$ kubectl exec -it redis-cluster-set-1-0 -- redis-cli SET Australia Sydney
OK
$ kubectl exec -it redis-cluster-set-1-0 -- redis-cli GET Australia
"Sydney"

It worked, so I tried to query the same data from another node in the cluster:

$ kubectl exec -it redis-cluster-set-1-2 -- redis-cli GET Australia
(error) MOVED 1738 10.12.1.30:6379

The Redis cluster replied with an internal IP in the MOVED response.
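The slot number in that reply is not arbitrary: per the Redis cluster specification, a key's hash slot is CRC16(key) mod 16384, using the CRC16-CCITT (XMODEM) variant, and "Australia" lands in slot 1738 as the MOVED reply shows. A minimal sketch of that computation (ignoring {hash tag} handling):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant the Redis cluster spec uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Hash slot a Redis cluster assigns to a key (no {hash tag} support)."""
    return crc16_xmodem(key.encode()) % 16384

print(keyslot("Australia"))
# → 1738, matching the MOVED reply above
```

Since the key hashes to a slot served by another master, the node cannot answer and instead redirects the client.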

I think that internal IP address is the reason for that "Timeout" error.

At the same time, it is possible to get the data when logged into that pod directly:

$ kubectl exec -it redis-cluster-set-1-2 -- bash
root@redis-cluster-set-1-2:/data# redis-cli GET Australia
(error) MOVED 1738 10.12.1.30:6379

root@redis-cluster-set-1-2:/data# redis-cli -h 10.12.1.30  GET Australia
"Sydney"

From this GitHub issue, it seems that Redis doesn't proxy the connection through the cluster to the correct node, but rather simply passes the IP address of the appropriate server back to the client, which then initiates the connection directly.
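This is why a cluster-aware client such as redis-py-cluster is needed: it must parse each MOVED reply and re-issue the command to the node named in it. A simplified sketch of that parsing step (not the library's actual code) looks like:

```python
def parse_moved(error: str):
    """Split a "MOVED <slot> <host>:<port>" redirect into its parts,
    the way a cluster-aware client must before reconnecting."""
    kind, slot, address = error.split()
    if kind != "MOVED":
        raise ValueError("not a MOVED redirect: " + error)
    host, _, port = address.rpartition(":")
    return int(slot), host, int(port)

print(parse_moved("MOVED 1738 10.12.1.30:6379"))
# → (1738, '10.12.1.30', 6379)
```

And since the host in the redirect is a Pod-internal IP, an external client following it will hang until it times out, which matches the behaviour described in the update above.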

(Slightly off-topic, but still worth mentioning.) While checking on that, I found another document ( https://cloud.google.com/memorystore/docs/redis/connect-redis-instance-gke ) that explains how to connect to a Redis instance from a Google Kubernetes Engine cluster (with a LoadBalancer).
