
EKS + Calico netpol issues (only daemonset?)

I've got the following situation:

  • EKS 1.21 (installed via eksctl)
  • 2 managed node groups (1x spot, currently m-type instances; 1x on-demand, t-type instances)
  • tigera-operator v3.23.1
  • elasticsearch deployed via the elastic-operator (in the logging ns)
  • filebeat running as daemonset (also in logging ns)

Now I want to isolate the logging namespace with the following NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: logging-default-netpol
  namespace: logging
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: logging
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: elastic-system
---

After applying the policy everything seems to work fine. However, if I "restart" a filebeat pod by deleting it, it is no longer able to reach elasticsearch. Oddly, the filebeat pod running on the same node as es is not affected and can still reach es after being restarted.

In addition to that, a random test pod created via the kubectl run command also works as expected.

I know it should not make any difference whether the pod was created via a DaemonSet or a Deployment. What's going on there?

After further investigation, I've found what caused the issue.

The actual problem was in the filebeat manifest:

hostNetwork: true

The filebeat daemonset had hostNetwork enabled.
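
For context, this is roughly where the setting sat in the daemonset's pod template. A minimal sketch, assuming a typical filebeat manifest; the name, labels and image tag are illustrative, not my actual manifest:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      hostNetwork: true                    # the culprit: pods share the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # usually set alongside hostNetwork
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.2.0   # illustrative tag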

I can't explain why this causes the issue, or whether this is expected/intended behaviour, but after removing it from the filebeat daemonset everything works as expected.
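
My best guess at the mechanism: with hostNetwork: true, filebeat shares the node's network namespace, so its packets arrive at elasticsearch with the node's IP as the source address. The namespaceSelector rules only match traffic that can be attributed to a pod, so the host-sourced traffic gets dropped. That would also explain why the filebeat on the same node as es kept working: Calico by default permits traffic from the local host to its own workload endpoints, so that kubelet probes keep functioning. If hostNetwork has to stay enabled, a possible workaround is to additionally allow the node addresses via an ipBlock rule. A sketch, assuming the nodes live in 10.0.0.0/16, which is a hypothetical CIDR; replace it with your actual VPC/subnet range:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: logging-allow-from-nodes
  namespace: logging
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16   # hypothetical node/VPC CIDR, adjust for your cluster

Since ingress rules from multiple policies are additive, this would complement the policy above rather than replace it.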
