Kubernetes Load Balancing With Authentication

How do you architect a Kubernetes application so that a logged-in user is always served back the session information stored inside the correct Redis replica?

I've got a working Apollo/GraphQL application written in TypeScript which logs users in and stores their session information in Redis. I'm not sure how to architect the application for production, when I'll have multiple Redis instances running via Kubernetes. The Kubernetes configuration files that I've currently written (for Redis and the application) are here.

Presumably I'll need to have some sort of Load Balancer service sitting in front of my application in order to distribute traffic. But here's where I'm a little confused:

When a user makes a request to my application (via a Kubernetes LoadBalancer service, for instance), how do I ensure that my application checks the "right" Redis replica? It's my understanding that this would be necessary to ensure that their credential information is retrieved, for instance to check their logged-in status. If my application is checking a different Redis replica every time for the user's details (via a cookie/session), then I'm not sure how the logged-in functionality would work... unless I'm mistaken and somehow Kubernetes knows how to search across all the replicas?

Here's how my current application connects to Redis (this works after starting up Redis and exposing it via a ClusterIP), if that's relevant:

import Redis from "ioredis";
import session from "express-session";
import connectRedis from "connect-redis";

// Running Redis with docker-compose
let tries = 5;
const connectionOpts: Redis.RedisOptions = {
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT as string) || 6379,
  retryStrategy: (time) => {
    if (tries === 0) {
      throw new Error("Could not connect to Redis.");
    } else {
      setTimeout(() => {
        tries--;
      }, time);
      return 2000;
    }
  },
};

// Connect to Redis
export const redis = new Redis(connectionOpts);

// Configure Redis to store session information
const RedisStore = connectRedis(session);

// Initialize session parameters and cookie name, etc.
export const mySession = session({
  store: new RedisStore({
    client: redis,
  }),
  name: "qid",
  secret: process.env.SECRET || "wiuy10b1la",
  resave: false,
  saveUninitialized: false,
  cookie: {
    httpOnly: true,
    secure: process.env.ENV === "production",
    maxAge: 1000 * 60 * 60 * 24 * 7 * 365,
  },
});

It looks like you are running a single master node Redis. If you are running multiple Redis replicas, then they must be running in sync with each other, or I think in cluster mode.

In cluster mode, Redis will clone the data across the multiple replicas.

You can read more at: https://redis.io/topics/cluster-tutorial#redis-cluster-101

You can also read more about Redis replication concepts here: https://redislabs.com/redis-enterprise/technology/highly-available-redis/

Regarding load balancing, the requests will be distributed by the Kubernetes Service, but your application won't know which one is the "read" (slave) replica and which one is the "read/write" (master) replica of Redis. That is where another component comes into the picture, known as Sentinel.

Sentinel continuously checks the master and slave nodes and tries to keep the Redis cluster stable if any failure occurs, without human intervention.

If you are running an HA Redis cluster, it will replicate your data across multiple replicas. Using a client library you can first query Sentinel, which will give you the master IP for write operations, while the other IPs can be used for read operations.

A simple Python example using redis-py's Sentinel client:

from redis.sentinel import Sentinel

# Connect to a Sentinel instance and ask it for the current topology
sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)

sentinel.discover_master('mymaster')   # ('127.0.0.1', 6379)
sentinel.discover_slaves('mymaster')   # [('127.0.0.1', 6380)]
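
In your TypeScript application you don't have to do this discovery step yourself, because ioredis has built-in Sentinel support. Here is a minimal sketch, assuming a Sentinel service reachable inside the cluster at redis-sentinel:26379 and a master group named mymaster (both names are assumptions, replace them with whatever your deployment uses):

import Redis from "ioredis";

// ioredis asks the Sentinels which node is currently the master of "mymaster"
// and transparently reconnects if a failover promotes a different node.
export const redis = new Redis({
  sentinels: [{ host: "redis-sentinel", port: 26379 }],
  name: "mymaster",
});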

On the master node you will be able to write data, and on the slaves you can only read data; replication keeps the data consistent across all replicas.
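
If you also want to route reads to a replica, ioredis can hand you a separate read-only connection through the same Sentinels. A sketch under the same assumptions, using the role option to pick a slave instead of the master:

import Redis from "ioredis";

// A second connection that Sentinel points at one of the replicas (read-only).
export const redisRead = new Redis({
  sentinels: [{ host: "redis-sentinel", port: 26379 }],
  name: "mymaster",
  role: "slave", // connect to a replica rather than the master
});

For the session store itself, keep connect-redis on the master connection, since express-session has to write session data on every login.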

You should check the replication setup inside Redis first; since I am not sure how you have set up Redis, I can't suggest much more than that.
