
kubernetes: NetworkPolicy deny-all not denying

I am deploying an application on a Kubernetes cluster on AWS, with Weave as the networking plugin.

I have one additional namespace (besides the default one): my-staging.

I want to apply and test the following deny-all policy, which is suggested in the Kubernetes docs:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: staging-default-deny-all
  namespace: my-staging
spec:
  podSelector: {}
  policyTypes:
  - Ingress
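
As a sanity check (standard kubectl commands, not part of the original report), the policy object can be confirmed to exist in the namespace with:

kubectl get networkpolicy -n my-staging
kubectl describe networkpolicy staging-default-deny-all -n my-staging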

I then spin up a busybox for testing purposes in the default namespace:

kubectl run busybox --rm -ti --image=busybox --namespace=default -- /bin/sh

...and my ui service (supposed to be listening on port 80) in the my-staging namespace is reachable!

/ # wget --spider ui.staging-els.svc.cluster.local
Connecting to ui.staging-els.svc.cluster.local (100.68.222.37:80)

Why is this happening?

PS: I am applying the NetworkPolicy after the app has already been deployed, if this is of any significance.

Update: this must be a Weave issue, because when I deleted my cluster and re-created it with --networking calico, everything worked without any issue whatsoever.
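
The re-creation step would look roughly like the following sketch; it assumes the cluster is managed with kops (which is what provides the --networking flag), and the cluster name, state store, and zone below are placeholders:

# assumed kops workflow; names and zone are placeholders
kops delete cluster --name my-cluster.example.com --state s3://my-kops-state --yes
kops create cluster --name my-cluster.example.com --state s3://my-kops-state --zones us-east-1a --networking calico --yes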

In order for your namespace to have a default-deny policy, you need to annotate it:

kind: Namespace
apiVersion: v1
metadata:
  name: my-staging
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }
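
The same annotation can also be added to an existing namespace imperatively with kubectl annotate; this is just an equivalent sketch of the manifest above, not from the original answer:

kubectl annotate namespace my-staging 'net.beta.kubernetes.io/network-policy={"ingress": {"isolation": "DefaultDeny"}}'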

You will then only need policies for explicitly enabling connectivity between pods.
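
For example, a policy along these lines would re-open ingress to the UI pods on port 80 (a sketch: the app: ui pod label is an assumption, since the question only shows the ui service):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: staging-allow-ui-ingress
  namespace: my-staging
spec:
  podSelector:
    matchLabels:
      app: ui          # assumed label on the ui pods
  ingress:
  - ports:
    - protocol: TCP
      port: 80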

For more information, refer to the docs: https://kubernetes.io/docs/tasks/administer-cluster/weave-network-policy/
