
Which Kubernetes pattern is appropriate for the peer to peer scenario where peers have slightly different configuration?

I'm trying to run private Stellar blockchain infrastructure on Kubernetes (not joining an existing public or test Stellar network), but my question can be generalized to the scenario of running any peer-to-peer service on Kubernetes. Therefore, I will try to explain my problem in a generalized way, hoping it can yield answers that are applicable to any similar topology running on Kubernetes.

Here is the scenario:

I want to run 3 peers (in Kubernetes terms: pods) that are able to communicate with each other in a decentralized way, but the problem is that each of these peers has a slightly different configuration. In general, the configuration looks like this (this is an example for pod0):

NETWORK_PASSPHRASE="my private network"

NODE_SEED=<pod0_private_key>

KNOWN_PEERS=[
    "stellar-0",
    "stellar-1",
    "stellar-2"]

[QUORUM_SET]
VALIDATORS=[ <pod1_pub_key>, <pod2_pub_key> ]

The problem is that each pod would have a different:

  • NODE_SEED
  • VALIDATORS list

My first idea (before realizing this problem) was to:

  • Create a ConfigMap for this configuration
  • Create a StatefulSet (3 replicas) with a headless service to enable stable reachability between the pods (stellar-0, stellar-1, stellar-2, etc.)

Another idea (after realizing this problem) would be to:

  • Create a separate ConfigMap for each peer
  • Create a StatefulSet (1 replica) with its own Service for each peer

I'm wondering whether there is a better solution/pattern that could be used for this purpose, rather than running essentially the same service with slightly different configuration as separate entities (StatefulSets, Deployments, ...), each with its own Service through which the peers would be reachable (which somewhat defeats the purpose of using Kubernetes' high-level resources that enable replication)?

Thanks

You can have a single ConfigMap with multiple keys, each one uniquely meant for one of your replicas. You can then deploy your pods using a StatefulSet with an initContainer that sets up the config. This is just an example (you'll have to tweak it to your needs):

ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: stellar
  labels:
    app: stellar
data:
  stellar0.cnf: |
    NETWORK_PASSPHRASE="my private network"    
    NODE_SEED=<stellar0_private_key>    
    KNOWN_PEERS=[
        "stellar-0",
        "stellar-1",
        "stellar-2"]    
    [QUORUM_SET]
    VALIDATORS=[ <stellar1_pub_key>, <stellar2_pub_key> ]

  stellar1.cnf: |

    NETWORK_PASSPHRASE="my private network"
    NODE_SEED=<stellar1_private_key>
    KNOWN_PEERS=[
        "stellar-0",
        "stellar-1",
        "stellar-2"]

    [QUORUM_SET]
    VALIDATORS=[ <stellar0_pub_key>, <stellar2_pub_key> ]

  stellar2.cnf: |

    NETWORK_PASSPHRASE="my private network"
    NODE_SEED=<stellar2_private_key>
    KNOWN_PEERS=[
        "stellar-0",
        "stellar-1",
        "stellar-2"]

    [QUORUM_SET]
    VALIDATORS=[ <stellar0_pub_key>, <stellar1_pub_key> ]

StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stellar
spec:
  selector:
    matchLabels:
      app: stellar
  serviceName: stellar
  replicas: 3
  template:
    metadata:
      labels:
        app: stellar
    spec:
      initContainers:
      - name: init-stellar
        image: stellar-image:version
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate config from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/stellar0.cnf /mnt/conf.d/
          elif [[ $ordinal -eq 1 ]]; then
            cp /mnt/config-map/stellar1.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/stellar2.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map

      containers:
      - name: stellar
        image: stellar-image:version
        ports:
        - name: stellar
          containerPort: <whatever port you need here>
        volumeMounts:
        - name: conf
          mountPath: /etc/stellar/conf.d  # wherever your config for stellar needs to be

      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: stellar
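
One detail worth noting: with the script above, each replica ends up with a differently named file (stellar0.cnf, stellar1.cnf, stellar2.cnf), so the main container has to know which filename to load. A minimal variation of the initContainer script, assuming the stellar image reads one fixed path, copies the per-ordinal key to a common filename instead:

set -ex
# Derive this replica's ordinal from the StatefulSet pod name (e.g. stellar-1 -> 1).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
# Copy the matching ConfigMap key to a fixed filename, so every replica
# can read the same path: /etc/stellar/conf.d/stellar.cnf
cp /mnt/config-map/stellar${ordinal}.cnf /mnt/conf.d/stellar.cnf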

Service (if you need to expose it):

apiVersion: v1
kind: Service
metadata:
  name: stellar
  labels:
    app: stellar
spec:
  ports:
  - name: stellar
    port: <stellar-port>
  clusterIP: None
  selector:
    app: stellar
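
Because the Service is headless (clusterIP: None) and is referenced by the StatefulSet's serviceName, each pod gets a stable per-pod DNS record of the form <pod-name>.<service-name>, e.g. stellar-0.stellar. A quick sanity check from any pod in the cluster (the default namespace is assumed here):

# Stable per-pod DNS names provided by the headless Service:
nslookup stellar-0.stellar
# Fully qualified form:
nslookup stellar-0.stellar.default.svc.cluster.local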

Hope it helps!

It is worth stating: Kube's main strength is managing scalable workloads of identical Pods. That's why the ReplicaSet exists in the Kube API.

Blockchain validator nodes are not identical Pods. They are not anonymous; they are identified by their public addresses, which require unique private keys.

Blockchain nodes which serve as RPC nodes are simpler in this sense; they can be replicated, and RPC requests can be round-robined between the nodes.
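
As a contrast to the StatefulSet approach above, here is a minimal sketch of replicated RPC nodes: identical replicas behind an ordinary (non-headless) Service, which spreads requests across them. The name stellar-rpc and port 8000 are placeholders, not taken from the original setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: stellar-rpc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stellar-rpc
  template:
    metadata:
      labels:
        app: stellar-rpc
    spec:
      containers:
      - name: stellar-rpc
        image: stellar-image:version   # all replicas share identical config
        ports:
        - containerPort: 8000          # placeholder RPC port
---
apiVersion: v1
kind: Service
metadata:
  name: stellar-rpc
spec:
  selector:
    app: stellar-rpc
  ports:
  - port: 8000
    targetPort: 8000
  # Not headless: this Service gets a ClusterIP, so requests to stellar-rpc
  # are load-balanced across the identical replicas.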

There is value in using Kube for blockchain networks; but if deploying validators (and boot nodes) feels like it goes against the grain, that's because it doesn't fit neatly into the ReplicaSet model.
