
Can I share a k8s cluster securely between many DevOps product teams?

Is there a secure way to work with different product DevOps teams on the same k8s cluster? How can I isolate workloads between the teams? I know k8s RBAC and namespaces are available, but is that secure enough to run different prod workloads? I know about Istio, but as I understand it, it gives no direct answer to my question. How can we handle different ingress configurations from different teams in the same cluster? If it is not possible to isolate workloads securely, how do you orchestrate k8s clusters to reduce maintenance?

Thanks a lot!

The answer is: it depends. First, Kubernetes is not insecure by default, and containers provide a base layer of abstraction. The better questions are:

  • How much isolation do you need?
  • What about user management?
  • Do you need to encrypt traffic between your workloads?

Isolation Levels

If you need strong isolation between your workloads (and I mean really strong), do yourself a favor and use different clusters. There may be business cases where you need a guarantee that certain workloads are never allowed to run on the same (virtual) machine. You could also try to achieve this by adding nodes that are dedicated to one of your sub-projects and using affinities and anti-affinities to handle the scheduling. But if you need this level of isolation, you'll probably run into problems when thinking about log aggregation, metrics, or in general any component that is shared across all of your services.
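The dedicated-node approach can be sketched roughly like this (the names `team=payments`, `payments-api`, and the node names are placeholders, not from the original answer): taint the team's nodes so nothing else schedules there, label them, then give the team's pods a matching toleration and node selector.

```yaml
# Hypothetical sketch: reserve nodes for one team via taints + labels.
# One-time admin step on each dedicated node:
#   kubectl taint nodes node-1 team=payments:NoSchedule
#   kubectl label nodes node-1 team=payments
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
  namespace: payments
spec:
  # Only schedule onto nodes labeled for this team ...
  nodeSelector:
    team: payments
  # ... and tolerate the taint that keeps everyone else's pods off them.
  tolerations:
    - key: "team"
      operator: "Equal"
      value: "payments"
      effect: "NoSchedule"
  containers:
    - name: api
      image: payments-api:1.0
```

Note that the taint only repels other pods; the nodeSelector is still needed so this team's pods don't land on shared nodes.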

For any other use case: build one cluster and divide it by namespaces. You could even create a couple of ingress controllers, each belonging to just one of your teams.
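A minimal sketch of the namespace approach, assuming a group called `team-a` (e.g. coming from your identity provider) and an ingress controller configured to watch a class named `team-a-nginx` (both names are hypothetical): bind the built-in `edit` ClusterRole to the group inside its namespace only, and pin the team's Ingress to its own controller via the ingress class.

```yaml
# Hypothetical: give group "team-a" edit rights in its own namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a
subjects:
  - kind: Group
    name: team-a              # group name as reported by your authenticator
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                  # built-in ClusterRole, scoped here to one namespace
  apiGroup: rbac.authorization.k8s.io
---
# Hypothetical: an Ingress routed through the team's own controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-a-app
  namespace: team-a
spec:
  ingressClassName: team-a-nginx   # assumes a controller watching this class
  rules:
    - host: app.team-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```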

User Management

Managing RBAC and users by hand can be a little tricky. Kubernetes itself supports OIDC tokens. If you already use OIDC for SSO or similar, you can re-use your tokens to authenticate users against Kubernetes. I've never used this myself, so I can't speak to role mapping with OIDC.
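For reference, OIDC authentication is enabled through kube-apiserver flags; a sketch, where the issuer URL and client ID are placeholders. With `--oidc-groups-claim` set, the groups carried in the token can then be referenced as `Group` subjects in RBAC bindings.

```shell
# Hypothetical kube-apiserver flags enabling OIDC authentication:
kube-apiserver \
  --oidc-issuer-url=https://sso.example.com/realms/k8s \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups
```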

Another solution would be Rancher or another cluster-orchestration tool. I can't speak for the others, but Rancher comes with built-in user management. You can also create projects to group several namespaces for one of your audiences.

Traffic Encryption

By using a service mesh like Istio or Linkerd you can encrypt traffic between your pods. Even if it sounds tempting to encrypt everything, make sure you really need it. Service meshes come with some downsides, e.g. resource usage. You also gain one more component that needs to be managed and updated.
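As an illustration of what mesh-level encryption looks like in practice, here is a sketch for Istio (it assumes Istio is already installed and uses its default root namespace, `istio-system`): a mesh-wide PeerAuthentication policy enforcing strict mutual TLS between sidecar-injected pods.

```yaml
# Hypothetical: require mTLS for all sidecar-to-sidecar traffic in the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # placed in the root namespace => mesh-wide scope
spec:
  mtls:
    mode: STRICT            # reject plaintext traffic between sidecars
```

In STRICT mode, pods without a sidecar can no longer reach meshed services in plaintext, which is worth testing before rolling it out cluster-wide.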

