
Kubernetes Network Policy: allow communication within a namespace

On an Azure AKS cluster with the Calico network policies plugin enabled, I want to:

  1. By default, block all incoming traffic.
  2. Allow all traffic within a namespace (from a pod in the namespace to another pod in the same namespace).

I tried something like:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny.all
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Ingress

---

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow.same.namespace
  namespace: test
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  policyTypes:
  - Ingress

But it seems to block traffic between two deployments/pods in the same namespace. What am I doing wrong? Am I misreading the documentation?

It is perhaps worth mentioning that the above setup does seem to work on an AWS EKS-based Kubernetes cluster.
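One quick way to check whether the allow policy is matching at all is to run a throwaway probe pod in the same namespace and see if it can reach another pod. A minimal sketch, assuming the namespace test and a hypothetical target Service named web:

```yaml
# Throwaway probe pod in the "test" namespace; the Service name "web"
# is a placeholder for whatever workload you are testing against.
apiVersion: v1
kind: Pod
metadata:
  name: np-probe
  namespace: test
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: busybox:1.36
    command: ["wget", "-qO-", "--timeout=3", "http://web.test.svc.cluster.local"]
```

With the same-namespace allow policy in effect the wget should succeed; under the deny-all policy alone it should time out (check with kubectl logs np-probe -n test).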

You can label the Namespace first:

kubectl label ns <Namespace name> env=test

and then apply a policy like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-labeled-namespace
spec:
  podSelector: {} 
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: test
    ports:
    - protocol: TCP
      port: 80

This network policy will only allow traffic from namespaces that carry the label env: test.
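The same label can also be set declaratively in the Namespace manifest instead of via kubectl (a sketch for a namespace named test):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    env: test  # the label the policy's namespaceSelector matches on
```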

If you are using Calico, you can apply a single GlobalNetworkPolicy to deny ingress; it applies to all existing and future Namespaces:

apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
  name: default-global-deny-all-ingress
spec:
  namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system", "tigera-operator"}
  order: 3000 # higher than the order (1000) Calico assigns to regular NetworkPolicies, so those are evaluated first
  types:
    - Ingress
  ingress:
    # allow ingress to the Kubernetes Metrics Server
    - action: Allow
      protocol: TCP
      destination:
        selector: 'k8s-app == "metrics-server"'
        ports:
          - 443
    # Deny all ingress
    - action: Deny
      source:
        nets:
          - 0.0.0.0/0

It is important that the order is above 1000, since that is the default order for NetworkPolicies. You can now use the default Namespace labels introduced in Kubernetes 1.21 for a per-Namespace policy (example for ingress-nginx):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-allow-ingress-nginx
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
          podSelector: {}

After investigation it turned out that:

  1. I used Terraform to create a Kubernetes cluster with two node pools: system and worker. (Note: this is not (yet) possible in the GUI.)
  2. The two node pools are in different subnets (a system subnet and a worker subnet).
  3. AKS configures kube-proxy to masquerade traffic that goes outside the system subnet.
  4. Pods are deployed on the worker nodes and therefore use the worker subnet. All traffic they send outside the node they are running on is masqueraded.
  5. Calico-managed iptables rules drop the masqueraded traffic. I did not look into more detail here.
  6. However, if I change the kube-proxy masquerade setting to either a larger CIDR range, or remove it altogether, it works. Azure, however, resets this setting after a while.
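For reference, the masquerade behaviour described in points 3 and 6 is controlled through kube-proxy's cluster CIDR setting. A sketch of the relevant configuration excerpt; the CIDR value below is an assumption, and on AKS this setting is managed by Azure and may be reset:

```yaml
# kube-proxy configuration excerpt (KubeProxyConfiguration).
# Widening clusterCIDR so it covers both node-pool subnets (placeholder
# value below) stops the cross-subnet traffic from being masqueraded.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.240.0.0/16"
```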

In conclusion: I tried to use something that is not yet supported by Azure. I now use a single (larger) subnet for both node pools.
