Kubernetes pod-level restricted access to other EC2 instances from AWS EKS nodes

I have an Elasticsearch DB running on an EC2 instance. The backend services that connect to the Elasticsearch DB run on AWS EKS nodes.

To let the backend Kubernetes pods access the Elasticsearch DB, I added the allowed security groups to the EKS nodes, and it is working fine.

But my question is that all other pods (not just the backend ones) running on the same node can also reach the Elasticsearch DB, because the security groups are attached to the underlying node. Is there a more secure way to handle this?

In this situation you could additionally use Kubernetes NetworkPolicies to define rules that specify which pods are allowed to send traffic to the Elasticsearch DB. Note that NetworkPolicies are only enforced if the cluster's CNI plugin supports them; on EKS this typically means running a network policy engine such as Calico.

For instance, start by creating a default policy that denies all egress traffic for all pods in the namespace, like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}    # an empty podSelector selects every pod in the namespace
  policyTypes:
  - Egress
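
One caveat: with a default-deny egress policy in place, pods in the namespace can no longer resolve DNS names either. A commonly used companion policy is sketched below; it assumes cluster DNS runs in the kube-system namespace behind the standard k8s-app: kube-dns label, and that the cluster is recent enough for namespaces to carry the kubernetes.io/metadata.name label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns   # assumed label of the cluster DNS pods
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53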

Next, allow outgoing traffic from specific pods (those labeled role: db) to the CIDR 10.0.0.0/24 on TCP port 5978:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
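
Applied to the scenario in the question, the same pattern can restrict egress to just the Elasticsearch instance. A minimal sketch, assuming the backend pods are labeled app: backend, the EC2 instance's private IP is 10.0.1.50 (both hypothetical), and Elasticsearch listens on its default HTTP port 9200:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-elasticsearch
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend          # only the backend pods get this egress allowance;
                            # all other pods stay covered by the default-deny policy
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.1.50/32  # hypothetical private IP of the Elasticsearch EC2 instance
    ports:
    - protocol: TCP
      port: 9200            # default Elasticsearch HTTP port; adjust if yours differs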

Please consult the official documentation for more information on NetworkPolicies.
