
Redis-ha helm chart error - NOREPLICAS Not enough good replicas to write

I am trying to set up the redis-ha helm chart on my local Kubernetes cluster (Docker for Windows).

The helm values file I am using is:

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
  repository: redis
  tag: 5.0.3-alpine
  pullPolicy: IfNotPresent
## replicas number for each component
replicas: 3

## Custom labels for the redis pod
labels: {}

## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: false
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the redis-ha.fullname template
  # name:

## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##

rbac:
  create: false

## Redis specific configuration options
redis:
  port: 6379
  masterGroupName: mymaster
  config:
    ## Additional redis conf options can be added below
    ## For all available options see http://download.redis.io/redis-stable/redis.conf
    min-slaves-to-write: 1
    min-slaves-max-lag: 5   # Value in seconds
    maxmemory: "0"       # Max memory to use for each redis instance. Default is unlimited.
    maxmemory-policy: "volatile-lru"  # Max memory policy to use for each redis instance. Default is volatile-lru.
    # Determines if scheduled RDB backups are created. Default is false.
    # Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
    save: "900 1"
    # When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
    repl-diskless-sync: "yes"
    rdbcompression: "yes"
    rdbchecksum: "yes"

  ## Custom redis.conf files used to override default settings. If this file is
  ## specified then the redis.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: 
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 700Mi
      cpu: 250m

## Sentinel specific configuration options
sentinel:
  port: 26379
  quorum: 2
  config:
    ## Additional sentinel conf options can be added below. Only options that
    ## are expressed in a format similar to 'sentinel xxx mymaster xxx' will
    ## be properly templated.
    ## For available options see http://download.redis.io/redis-stable/sentinel.conf
    down-after-milliseconds: 10000
    ## Failover timeout value in milliseconds
    failover-timeout: 180000
    parallel-syncs: 5

  ## Custom sentinel.conf files used to override default settings. If this file is
  ## specified then the sentinel.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: 
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 200Mi
      cpu: 250m

securityContext:
  runAsUser: 1000
  fsGroup: 1000
  runAsNonRoot: true

## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

# Prometheus exporter specific configuration options
exporter:
  enabled: false
  image: oliver006/redis_exporter
  tag: v0.31.0
  pullPolicy: IfNotPresent

  # prometheus port & scrape path
  port: 9121
  scrapePath: /metrics

  # cpu/memory resource limits/requests
  resources: {}

  # Additional args for redis exporter
  extraArgs: {}

podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 1

## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:

## Use existing secret containing "auth" key (ignores redisPassword)
# existingSecret:

persistentVolume:
  enabled: true
  ## redis-ha data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}
init:
  resources: {}

# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
  ## path is evaluated as template so placeholders are replaced
  # path: "/data/{{ .Release.Name }}"

  # if chown is true, an init-container with root permissions is launched to
  # change the owner of the hostPath folder to the user defined in the
  # security context
  chown: true

redis-ha deploys correctly, and when I run kubectl get all:

NAME                       READY     STATUS    RESTARTS   AGE
pod/rc-redis-ha-server-0   2/2       Running   0          1h
pod/rc-redis-ha-server-1   2/2       Running   0          1h
pod/rc-redis-ha-server-2   2/2       Running   0          1h

NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
service/kubernetes               ClusterIP   10.96.0.1        <none>        443/TCP              23d
service/rc-redis-ha              ClusterIP   None             <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-0   ClusterIP   10.105.187.154   <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-1   ClusterIP   10.107.36.58     <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-2   ClusterIP   10.98.38.214     <none>        6379/TCP,26379/TCP   1h

NAME                                  DESIRED   CURRENT   AGE
statefulset.apps/rc-redis-ha-server   3         3         1h

I try to access redis-ha from a Java application that uses the Lettuce driver to connect to Redis. Sample Java code to access Redis:

package io.c12.bala.lettuce;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

import java.util.logging.Logger;


public class RedisClusterConnect {

    private static final Logger logger = Logger.getLogger(RedisClusterConnect.class.getName());
    public static void main(String[] args) {
        logger.info("Starting test");

        // Syntax: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId
        RedisClient redisClient = RedisClient.create("redis-sentinel://rc-redis-ha:26379/0#mymaster");
        StatefulRedisConnection<String, String> connection = redisClient.connect();


        RedisCommands<String, String> command = connection.sync();
        command.set("Hello", "World");
        logger.info("Ran set command successfully");
        logger.info("Value from Redis - " + command.get("Hello"));

        connection.close();
        redisClient.shutdown();
    }
}

I packaged the application as a runnable jar, built a container image, and pushed it to the same Kubernetes cluster where Redis is running. The application now throws an error:

Exception in thread "main" io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
        at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:122)
        at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
        at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
        at com.sun.proxy.$Proxy0.set(Unknown Source)
        at io.c12.bala.lettuce.RedisClusterConnect.main(RedisClusterConnect.java:22)
Caused by: io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:108)
        at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:120)
        at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:111)
        at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:646)
        at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:604)
        at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:556)

I tried the Jedis driver as well, and a Spring Boot application, and got the same error from the redis-ha cluster.

** UPDATE ** When I run the info command inside redis-cli, I get:

connected_slaves:2
min_slaves_good_slaves:0

It seems the slaves are not behaving properly. After switching to min-slaves-to-write: 0, I am able to read from and write to the Redis cluster.
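To confirm this, the setting can also be inspected (and temporarily overridden) at runtime with redis-cli CONFIG commands — a sketch, assuming the rc-redis-ha service name from the kubectl output above:

```shell
# Check the replica-safety settings currently in effect on the master
redis-cli -h rc-redis-ha -p 6379 CONFIG GET min-slaves-to-write
redis-cli -h rc-redis-ha -p 6379 CONFIG GET min-slaves-max-lag

# Temporarily relax the requirement; this is lost on pod restart,
# so the ConfigMap remains the persistent source of truth
redis-cli -h rc-redis-ha -p 6379 CONFIG SET min-slaves-to-write 0
```

If min_slaves_good_slaves stays at 0 while connected_slaves is 2, the master is counting its replicas as lagging, which is what triggers the NOREPLICAS error on writes.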

Any help on this is appreciated.

It seems you have to edit the redis-ha-configmap ConfigMap and set min-slaves-to-write 0.

After deleting all the Redis pods (so the change is applied), it works like a charm. So:

helm install stable/redis-ha
kubectl edit cm redis-ha-configmap # change min-slaves-to-write from 1 to 0
kubectl delete pod redis-ha-0
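After the kubectl edit step, the relevant fragment of the ConfigMap would look roughly like this (a sketch; the exact key layout depends on the chart version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-ha-configmap
data:
  redis.conf: |
    # changed from: min-slaves-to-write 1
    min-slaves-to-write 0
```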

When I deployed the helm chart with the same values to a Kubernetes cluster running on AWS, it worked fine.

This seems to be an issue with Kubernetes on Docker for Windows.

If you deploy this Helm chart locally on your computer, you only have one node available. If you install the Helm chart with --set hardAntiAffinity=false, it will schedule the required replica pods on the same node, so they will start up correctly and you will not get that error. The hardAntiAffinity value has a documented default of true:

Whether the Redis server pods should be forced to run on separate nodes.
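On a single-node cluster such as Docker for Windows, that means either passing the flag at install time or setting it in the values file — for example:

```shell
# Option 1: override on the command line
helm install stable/redis-ha --set hardAntiAffinity=false

# Option 2: add to the values file used above, then install with -f
#   hardAntiAffinity: false
```

With hard anti-affinity relaxed, all three replicas can be scheduled on the single node, so min-slaves-to-write: 1 can be satisfied without changing the Redis configuration.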
