How do I pipe live pcap records from a Kubernetes pod to Wireshark running locally?

I'm attempting to get a view in Wireshark of live network traffic in one of my Kubernetes pods. In plain old Docker, I was able to run this:

docker run --rm --net=container:app_service_1 crccheck/tcpdump -i any --immediate-mode -w - | wireshark -k -i -

This spins up a simple container that runs tcpdump with the arguments shown and pipes the packet captures, in pcap format, to stdout (the -w - argument). This output is then piped to Wireshark running on my host machine, which displays the packets as they arrive.

How do I do something similar in Kubernetes?

I've tried applying a patch as follows:

  template:
    spec:
      containers:
        - name: tcpdumper
          image: crccheck/tcpdump
          args: ["-i", "any", "--immediate-mode", "-w", "-"]
          tty: true
          stdin: true

And I apply this by running k attach -it app-service-7bdb7798c5-2lr6q | wireshark -k -i -

But this doesn't seem to work; Wireshark starts up but immediately shows an error:

Data written to the pipe is neither in a supported pcap format nor in pcapng format

I haven't used k8s a lot, but docker run gives you the entire clean stdout, while I get the impression that k attach doesn't.

I don't think kubectl has an equivalent of docker run that gives you clean stdout, but you might be able to do something with kubectl exec.
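For instance, a sketch (untested, reusing the pod name from the question; the -c tcpdumper container name is an assumption based on the patch shown above) that runs tcpdump via kubectl exec and pipes its stdout to a local Wireshark:

```shell
# kubectl exec, unlike kubectl attach with a TTY, should give you the
# container process's stdout unmodified, so the binary pcap stream
# survives the trip to Wireshark.
kubectl exec app-service-7bdb7798c5-2lr6q -c tcpdumper -- \
  tcpdump -i any --immediate-mode -w - | wireshark -k -i -
```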

A possible test would be to redirect the output to a file, and check that it's valid output for the command you're running and that there's nothing unexpected in it.
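A sketch of that test (pod and file names are hypothetical): redirect a short capture to a file, then check that it starts with the pcap magic number (d4 c3 b2 a1 little-endian or a1 b2 c3 d4 big-endian; pcapng files start with 0a 0d 0d 0a). Anything else means something polluted the stream:

```shell
# First, capture for a few seconds and stop with Ctrl-C, e.g.:
#   kubectl exec app-service-7bdb7798c5-2lr6q -c tcpdumper -- \
#     tcpdump -i any --immediate-mode -w - > capture.pcap

# Then inspect the first four bytes of the file.
check_pcap_magic() {
  magic=$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')
  case "$magic" in
    d4c3b2a1|a1b2c3d4|0a0d0d0a) echo "looks like pcap/pcapng" ;;
    *) echo "not a pcap stream: $magic" ;;
  esac
}

# usage: check_pcap_magic capture.pcap
```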

I highly suggest you read Using sidecars to analyze and debug network traffic in OpenShift and Kubernetes pods.

This article explains why you can't read traffic data directly from a pod and offers an alternative using a sidecar.

In short, the containers most likely run on an internal container-platform network that is not directly accessible from your machine.

A sidecar container is a container that runs in the same pod as the actual service/application and can provide additional functionality to it.

Running tcpdump effectively in Kubernetes is a bit tricky and requires you to create a sidecar for your pod. What you are facing is actually the expected behavior.

Running good old tools like tcpdump or ngrep would not yield much interesting information, because in a default scenario you attach directly to the bridge or overlay network.

The good news is that you can link your tcpdump container to the host network or, even better, to the container's network stack. Source: How to TCPdump effectively in Docker

The thing is that you have two entry points: one is nodeIP:NodePort, the second is ClusterIP:Port. Both point to the same set of randomization rules for endpoints set up in the Kubernetes iptables.

Since this can happen on any node, it's hard to configure tcpdump to catch all the interesting traffic in just one place.

The best tool I know for this kind of analysis is Istio, but it works mostly for HTTP traffic.

Considering this, the best solution is to use a tcpdumper sidecar for each pod behind the service.

Let's go through an example of how to achieve this:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web-app
        image: nginx
        imagePullPolicy: Always        
        ports:
        - containerPort: 80
          protocol: TCP
      - name: tcpdumper
        image: docker.io/dockersec/tcpdump
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: default
spec:
  ports:
  - nodePort: 30002
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  type: NodePort

In this manifest we can notice three important things: we have an nginx container, a tcpdumper container as a sidecar, and a service defined as NodePort.

To access the sidecar, you have to run the following command:

$ kubectl attach -it web-app-db7f7c59-d4xm6 -c tcpdumper

Example:

$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        13d
web-svc      NodePort    10.108.142.180   <none>        80:30002/TCP   9d
$ curl localhost:30002
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
$ kubectl attach -it web-app-db7f7c59-d4xm6 -c tcpdumper
Unable to use a TTY - container tcpdumper did not allocate one
If you don't see a command prompt, try pressing enter.
> web-app-db7f7c59-d4xm6.80: Flags [P.], seq 1:78, ack 1, win 222, options [nop,nop,TS val 300957902 ecr 300958061], length 77: HTTP: GET / HTTP/1.1
12:03:16.884512 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.1336: Flags [.], ack 78, win 217, options [nop,nop,TS val 300958061 ecr 300957902], length 0
12:03:16.884651 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.1336: Flags [P.], seq 1:240, ack 78, win 217, options [nop,nop,TS val 300958061 ecr 300957902], length 239: HTTP: HTTP/1.1 200 OK
12:03:16.884705 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.1336: Flags [P.], seq 240:852, ack 78, win 217, options [nop,nop,TS val 300958061 ecr 300957902], length 612: HTTP
12:03:16.884743 IP 192.168.250.64.1336 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 240, win 231, options [nop,nop,TS val 300957902 ecr 300958061], length 0
12:03:16.884785 IP 192.168.250.64.1336 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 852, win 240, options [nop,nop,TS val 300957902 ecr 300958061], length 0
12:03:16.889312 IP 192.168.250.64.1336 > web-app-db7f7c59-d4xm6.80: Flags [F.], seq 78, ack 852, win 240, options [nop,nop,TS val 300957903 ecr 300958061], length 0
12:03:16.889351 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.1336: Flags [F.], seq 852, ack 79, win 217, options [nop,nop,TS val 300958062 ecr 300957903], length 0
12:03:16.889535 IP 192.168.250.64.1336 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 853, win 240, options [nop,nop,TS val 300957903 ecr 300958062], length 0
12:08:10.336319 IP6 fe80::ecee:eeff:feee:eeee > ff02::2: ICMP6, router solicitation, length 16
12:15:47.717966 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [S], seq 3314747302, win 28400, options [mss 1420,sackOK,TS val 301145611 ecr 0,nop,wscale 7], length 0
12:15:47.717993 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.2856: Flags [S.], seq 2539474977, ack 3314747303, win 27760, options [mss 1400,sackOK,TS val 301145769 ecr 301145611,nop,wscale 7], length 0
12:15:47.718162 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 1, win 222, options [nop,nop,TS val 301145611 ecr 301145769], length 0
12:15:47.718164 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [P.], seq 1:78, ack 1, win 222, options [nop,nop,TS val 301145611 ecr 301145769], length 77: HTTP: GET / HTTP/1.1
12:15:47.718191 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.2856: Flags [.], ack 78, win 217, options [nop,nop,TS val 301145769 ecr 301145611], length 0
12:15:47.718339 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.2856: Flags [P.], seq 1:240, ack 78, win 217, options [nop,nop,TS val 301145769 ecr 301145611], length 239: HTTP: HTTP/1.1 200 OK
12:15:47.718403 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.2856: Flags [P.], seq 240:852, ack 78, win 217, options [nop,nop,TS val 301145769 ecr 301145611], length 612: HTTP
12:15:47.718451 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 240, win 231, options [nop,nop,TS val 301145611 ecr 301145769], length 0
12:15:47.718489 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 852, win 240, options [nop,nop,TS val 301145611 ecr 301145769], length 0
12:15:47.723049 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [F.], seq 78, ack 852, win 240, options [nop,nop,TS val 301145612 ecr 301145769], length 0
12:15:47.723093 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.2856: Flags [F.], seq 852, ack 79, win 217, options [nop,nop,TS val 301145770 ecr 301145612], length 0
12:15:47.723243 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 853, win 240, options [nop,nop,TS val 301145612 ecr 301145770], length 0
12:15:50.493995 IP 192.168.250.64.31340 > web-app-db7f7c59-d4xm6.80: Flags [S], seq 124258064, win 28400, options [mss 1420,sackOK,TS val 301146305 ecr 0,nop,wscale 7], length 0
12:15:50.494022 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.31340: Flags [S.], seq 3544403648, ack 124258065, win 27760, options [mss 1400,sackOK,TS val 301146463 ecr 301146305,nop,wscale 7], length 0
12:15:50.494189 IP 192.168.250.64.31340 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 1, win 222, options 

You can also take a look at the ksniff tool, a kubectl plugin that uses tcpdump and Wireshark to start a remote capture on any pod in your Kubernetes cluster.
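A minimal sketch of ksniff usage (assumes the krew plugin manager is installed; the pod and container names are reused from the example above):

```shell
# Install the plugin once via krew
kubectl krew install sniff

# Start a remote capture on the pod; by default ksniff streams the
# capture straight into a local Wireshark window
kubectl sniff web-app-db7f7c59-d4xm6 -c web-app

# Or write the capture to a file instead of opening Wireshark
kubectl sniff web-app-db7f7c59-d4xm6 -c web-app -o capture.pcap
```

This avoids baking a tcpdump sidecar into every deployment, at the cost of requiring the plugin on each operator's machine.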
