
Redis-ha helm chart error - NOREPLICAS Not enough good replicas to write

I am trying to set up the redis-ha helm chart on my local Kubernetes cluster (Docker for Windows).

The helm values file I am using is:

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
  repository: redis
  tag: 5.0.3-alpine
  pullPolicy: IfNotPresent
## replicas number for each component
replicas: 3

## Custom labels for the redis pod
labels: {}

## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: false
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the redis-ha.fullname template
  # name:

## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##

rbac:
  create: false

## Redis specific configuration options
redis:
  port: 6379
  masterGroupName: mymaster
  config:
    ## Additional redis conf options can be added below
    ## For all available options see http://download.redis.io/redis-stable/redis.conf
    min-slaves-to-write: 1
    min-slaves-max-lag: 5   # Value in seconds
    maxmemory: "0"       # Max memory to use for each redis instance. Default is unlimited.
    maxmemory-policy: "volatile-lru"  # Max memory policy to use for each redis instance. Default is volatile-lru.
    # Determines if scheduled RDB backups are created. Default is false.
    # Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
    save: "900 1"
    # When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
    repl-diskless-sync: "yes"
    rdbcompression: "yes"
    rdbchecksum: "yes"

  ## Custom redis.conf files used to override default settings. If this file is
  ## specified then the redis.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: 
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 700Mi
      cpu: 250m

## Sentinel specific configuration options
sentinel:
  port: 26379
  quorum: 2
  config:
    ## Additional sentinel conf options can be added below. Only options that
    ## are expressed in the format similar to 'sentinel xxx mymaster xxx' will
    ## be properly templated.
    ## For available options see http://download.redis.io/redis-stable/sentinel.conf
    down-after-milliseconds: 10000
    ## Failover timeout value in milliseconds
    failover-timeout: 180000
    parallel-syncs: 5

  ## Custom sentinel.conf files used to override default settings. If this file is
  ## specified then the sentinel.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: 
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 200Mi
      cpu: 250m

securityContext:
  runAsUser: 1000
  fsGroup: 1000
  runAsNonRoot: true

## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

# Prometheus exporter specific configuration options
exporter:
  enabled: false
  image: oliver006/redis_exporter
  tag: v0.31.0
  pullPolicy: IfNotPresent

  # prometheus port & scrape path
  port: 9121
  scrapePath: /metrics

  # cpu/memory resource limits/requests
  resources: {}

  # Additional args for redis exporter
  extraArgs: {}

podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 1

## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:

## Use existing secret containing "auth" key (ignores redisPassword)
# existingSecret:

persistentVolume:
  enabled: true
  ## redis-ha data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}
init:
  resources: {}

# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
  ## path is evaluated as template so placeholders are replaced
  # path: "/data/{{ .Release.Name }}"

  # if chown is true, an init-container with root permissions is launched to
  # change the owner of the hostPath folder to the user defined in the
  # security context
  chown: true

redis-ha deploys correctly. When I run kubectl get all:

NAME                       READY     STATUS    RESTARTS   AGE
pod/rc-redis-ha-server-0   2/2       Running   0          1h
pod/rc-redis-ha-server-1   2/2       Running   0          1h
pod/rc-redis-ha-server-2   2/2       Running   0          1h

NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
service/kubernetes               ClusterIP   10.96.0.1        <none>        443/TCP              23d
service/rc-redis-ha              ClusterIP   None             <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-0   ClusterIP   10.105.187.154   <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-1   ClusterIP   10.107.36.58     <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-2   ClusterIP   10.98.38.214     <none>        6379/TCP,26379/TCP   1h

NAME                                  DESIRED   CURRENT   AGE
statefulset.apps/rc-redis-ha-server   3         3         1h

I tried to access redis-ha from a Java application that connects to Redis using the Lettuce driver. Sample Java code to access Redis:

package io.c12.bala.lettuce;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

import java.util.logging.Logger;


public class RedisClusterConnect {

    private static final Logger logger = Logger.getLogger(RedisClusterConnect.class.getName());
    public static void main(String[] args) {
        logger.info("Starting test");

        // Syntax: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId
        RedisClient redisClient = RedisClient.create("redis-sentinel://rc-redis-ha:26379/0#mymaster");
        StatefulRedisConnection<String, String> connection = redisClient.connect();


        RedisCommands<String, String> command = connection.sync();
        command.set("Hello", "World");
        logger.info("Ran set command successfully");
        logger.info("Value from Redis - " + command.get("Hello"));

        connection.close();
        redisClient.shutdown();
    }
}

I packaged the application as a runnable jar, built a container, and pushed it to the same Kubernetes cluster where Redis is running. The application now throws this error:

Exception in thread "main" io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
        at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:122)
        at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
        at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
        at com.sun.proxy.$Proxy0.set(Unknown Source)
        at io.c12.bala.lettuce.RedisClusterConnect.main(RedisClusterConnect.java:22)
Caused by: io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:108)
        at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:120)
        at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:111)
        at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:646)
        at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:604)
        at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:556)

I also tried the Jedis driver and a Spring Boot application, and got the same error from the redis-ha cluster.

** UPDATE ** When I run the info command in redis-cli, I get:

connected_slaves:2
min_slaves_good_slaves:0

It seems the replicas are not behaving correctly. After switching to min-slaves-to-write: 0, I was able to read from and write to the Redis cluster.
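For context on why the error appears, these two settings interact: with min-slaves-to-write 1, the master rejects every write with NOREPLICAS whenever fewer than one replica has acknowledged within min-slaves-max-lag seconds. A minimal redis.conf fragment showing the combination the chart renders from the values file above:

```yaml
# Master refuses writes unless at least 1 replica is connected
# and its last ACK is no older than 5 seconds.
min-slaves-to-write 1
min-slaves-max-lag 5
```

The info output above (min_slaves_good_slaves:0) means that, from the master's point of view, neither of the two connected replicas currently satisfies that lag condition.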

Any help on this is appreciated.

It seems you have to edit the redis-ha-configmap ConfigMap and set min-slaves-to-write 0.

After deleting the Redis pods (to apply the change), it works like a charm.

So:

helm install stable/redis-ha
kubectl edit cm redis-ha-configmap # change min-slaves-to-write from 1 to 0
kubectl delete pod redis-ha-0
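Instead of editing the ConfigMap by hand, the same fix can be applied declaratively through the chart values, so it survives chart upgrades. A sketch of an override file (the file name values-override.yaml and the release name are assumptions, not from the question):

```yaml
# values-override.yaml -- hypothetical override file for stable/redis-ha
redis:
  config:
    # Allow writes even when no replica currently passes the lag check
    min-slaves-to-write: 0
```

Then something like `helm upgrade <release> stable/redis-ha -f values-override.yaml` would re-render the ConfigMap with the relaxed setting.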

When I deployed the helm chart with the same values to a Kubernetes cluster running on AWS, it worked fine.

There seems to be an issue with Kubernetes on Docker for Windows.

If you deploy this Helm chart locally on your machine, there is only one node available. If you install the Helm chart with --set hardAntiAffinity=false, it places the required replica pods on the same node, so the chart starts correctly and the error does not appear. The hardAntiAffinity value is documented with a default of true:

Whether the Redis server pods should be forced to run on different nodes.
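For a single-node cluster such as Docker for Windows, the equivalent values-file override would be (a sketch; hardAntiAffinity is the documented chart value quoted above):

```yaml
# Allow all Redis server pods to schedule onto the same node,
# so replicas come up healthy on a one-node cluster.
hardAntiAffinity: false
```

Passing --set hardAntiAffinity=false at install time, as described above, has the same effect.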
