Communication Between Two Services in Kubernetes Cluster Using Ingress as API Gateway

I am having problems trying to get communication between two services in a Kubernetes cluster. We are using a Kong ingress object as an 'API gateway' to reroute HTTP calls from a simple Angular frontend to a .NET Core 3.1 API controller backend.

In front of these two ClusterIP services sits an ingress controller that takes external HTTP(S) calls from outside our Kubernetes cluster and launches the frontend service. This ingress is shown here:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: app.***.*******.com     # obfuscated
      http:
        paths:
          - path: /
            backend:
              serviceName: frontend-service
              servicePort: 80

The first service is called 'frontend-service', a simple Angular 9 frontend that allows me to type in HTTP strings and submit them to the backend.
The manifest yaml file for this is shown below. Note that the image name is obfuscated for various reasons.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: kong
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: frontend
        image: ***********/*******************:****   # obfuscated
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: kong
  name: frontend-service
spec:
  type: ClusterIP  
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP

The second service is a simple .NET Core 3.1 API that prints back some text when the controller is reached. The backend service is called 'dataapi' and contains one simple controller, called ValuesController.

The manifest yaml file for this is shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dataapi
  namespace: kong
  labels:
    app: dataapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dataapi
  template:
    metadata:
      labels:
        app: dataapi
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: dataapi
        image: ***********/*******************:****   # obfuscated
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: dataapi
  namespace: kong
  labels:
    app: dataapi
spec:
  ports:
  - port: 80
    name: http
    targetPort: 80
  selector:
    app: dataapi

We are using a Kong ingress as a proxy to route incoming HTTP calls to the dataapi service. This manifest file is shown below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-gateway
  namespace: kong
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /dataapi
        pathType: Prefix
        backend:
          service:
            name: dataapi
            port:
              number: 80
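One detail worth checking with this route: whether Kong strips the matched /dataapi prefix before forwarding, since that determines whether the backend controller must answer on /dataapi/api/values or on /api/values (a question Eric also raises below). If the konghq.com/strip-path annotation supported by the Kong ingress controller is available in your version, the behaviour can be made explicit; a minimal sketch under that assumption:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-gateway
  namespace: kong
  annotations:
    # Assumes the Kong ingress controller honors this annotation:
    # "true" forwards /dataapi/api/values upstream as /api/values,
    # "false" passes the full original path through unchanged.
    konghq.com/strip-path: "true"
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /dataapi
        pathType: Prefix
        backend:
          service:
            name: dataapi
            port:
              number: 80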

Performing a 'kubectl get all' produces the following output:

kubectl get all

NAME                                READY   STATUS    RESTARTS   AGE
pod/dataapi-dbc8bbb69-mzmdc         1/1     Running   0          2d2h
pod/frontend-5d5ffcdfb7-kqxq9       1/1     Running   0          65m
pod/ingress-kong-56f8f44fd5-rwr9j   2/2     Running   0          6d

NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
service/dataapi                   ClusterIP      10.128.72.137    <none>         80/TCP,443/TCP               2d2h
service/frontend-service          ClusterIP      10.128.44.109    <none>         80/TCP                       2d
service/kong-proxy                LoadBalancer   10.128.246.165   XX.XX.XX.XX    80:31289/TCP,443:31202/TCP   6d
service/kong-validation-webhook   ClusterIP      10.128.138.44    <none>         443/TCP                      6d

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dataapi        1/1     1            1           2d2h
deployment.apps/frontend       1/1     1            1           2d
deployment.apps/ingress-kong   1/1     1            1           6d

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/dataapi-dbc8bbb69         1         1         1       2d2h
replicaset.apps/frontend-59bf9c75dc       0         0         0       25h
replicaset.apps/ingress-kong-56f8f44fd5   1         1         1       6d

and 'kubectl get ingresses' gives:

NAME            CLASS    HOSTS (obfuscated)                                                      ADDRESS        PORTS   AGE
ingress-nginx   <none>   ***.******.com,**.********.com,**.****.com,**.******.com + 1 more...    xx.xx.xxx.xx   80      6d
kong-gateway    kong     *                                                                       xx.xx.xxx.xx   80      2d2h

From the frontend, the expectation is that constructing the HTTP string:

http://kong-proxy/dataapi/api/values

will enter our 'values' controller in the backend and return the text string from that controller.

Both services are running on the same Kubernetes cluster, here on Linode. Our thinking is that this is 'within cluster' communication between two services, both of type ClusterIP.

The error reported in the Chrome console is:

zone-evergreen.js:2828 GET http://kong-proxy/dataapi/api/values net::ERR_NAME_NOT_RESOLVED

Note that we had found a similar StackOverflow issue to ours, and the suggestion there was to add 'default.svc.cluster.local' to the HTTP string as follows:

http://kong-proxy.default.svc.cluster.local/dataapi/api/values

This did not work. We also substituted kong, the namespace of the service, for default, like this:

http://kong-proxy.kong.svc.cluster.local/dataapi/api/values

yielding the same errors as above.
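One way to check whether these names resolve at all inside the cluster (as opposed to in the browser, where the ERR_NAME_NOT_RESOLVED above was reported) is a one-shot lookup pod; a minimal sketch, assuming a stock busybox image is acceptable:

apiVersion: v1
kind: Pod
metadata:
  name: dns-check          # hypothetical throwaway pod; delete after use
  namespace: kong
spec:
  restartPolicy: Never     # run the lookup once and stop
  containers:
  - name: dns-check
    image: busybox:1.28    # 1.28's nslookup behaves predictably with cluster DNS
    command: ["nslookup", "kong-proxy.kong.svc.cluster.local"]

kubectl logs -n kong dns-check then shows whether cluster DNS answers for the name; if it does, the name resolution failure is happening outside the cluster.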

Is there a critical step I am missing? Any advice is greatly appreciated!

*************** UPDATE From Eric Gagnon's Response(s) ***************

Again, thank you Eric for responding. Here is what my colleague and I have tried per your suggestions.

  1. Pod DNS misconfiguration: check whether the pod's first nameserver equals the 'kube-dns' svc IP and whether the search path starts with kong.svc.cluster.local:
kubectl exec -i -t -n kong frontend-simple-deployment-7b8b9cfb44-f2shk -- cat /etc/resolv.conf

nameserver 10.128.0.10
search kong.svc.cluster.local svc.cluster.local cluster.local members.linode.com
options ndots:5

kubectl get -n kube-system svc 

NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.128.0.10   <none>        53/UDP,53/TCP,9153/TCP   55d

kubectl describe -n kube-system svc kube-dns

Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
Annotations:       lke.linode.com/caplke-version: v1.19.9-001
                   prometheus.io/port: 9153
                   prometheus.io/scrape: true
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                10.128.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.2.4.10:53,10.2.4.14:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.2.4.10:53,10.2.4.14:53
Port:              metrics  9153/TCP
TargetPort:        9153/TCP
Endpoints:         10.2.4.10:9153,10.2.4.14:9153
Session Affinity:  None
Events:            <none>    
  2. App not using pod DNS: in Node, output dns.getServers() to the console.
I do not understand where and how to do this. We tried to add DNS resolution directly inside our Angular frontend app, but we found that this is not possible.
  3. Kong-proxy doesn't like something: set logging to debug, hit the app a bunch of times, and grep the logs.

We have tried two tests here. First, our kong-proxy service is reachable from an ingress controller. Note that this is not our simple frontend app; it is nothing more than a proxy that passes an HTTP string to a public gateway we have set up. This does work. We have exposed it as:

http://gateway.cwg.stratbore.com/test/api/test

["Successfully pinged Test controller!!"]

kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test 

10.2.4.11 - - [16/Apr/2021:16:03:42 +0000] "GET /test/api/test HTTP/1.1" 200 52 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"

So this works.

But when we try to do it from a simple frontend interface running in the same cluster as our backend:

(screenshot: the frontend with the HTTP string entered in its text box)

it does not work with the text shown in the text box, and this command does not show anything new:

kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test 

The frontend comes back with an error.

But if we do add this HTTP text:

(screenshot: the frontend with a different HTTP string entered)

The kong-ingress pod is hit:

kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test 

10.2.4.11 - - [16/Apr/2021:16:03:42 +0000] "GET /test/api/test HTTP/1.1" 200 52 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
10.2.4.11 - - [17/Apr/2021:16:55:50 +0000] "GET /test/api/test HTTP/1.1" 200 52 "http://app-basic.cwg.stratbore.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"

but the frontend gets an error back.

So at this point, we have tried a lot of things to get our frontend app to successfully send an HTTP request to our backend and get a response back, without success. I have also tried various configurations of the nginx.conf file that is packaged with our frontend app, but no luck there either.

I am about to package all of this up in a GitHub project. Thanks.

Chris,

I haven't used Linode or Kong and don't know what your frontend actually does, so I'll just point out what I can see:

  • The simplest DNS check is to curl (or ping, dig, etc.):

    • http://[dataapi's pod ip]:80 from a host node
    • http://[kong-proxy svc's internal ip]/dataapi/api/values from a host node (or another pod - see below)
  • Default path matching on the nginx ingress controller is pathPrefix, so your nginx ingress with path: / and nginx.ingress.kubernetes.io/rewrite-target: / actually matches everything and rewrites it to /. This may not be an issue if you properly specify all your ingresses so they take priority over "/".

  • You said you're 'using a kong ingress as a proxy to redirect incoming'; just want to make sure you're proxying (not redirecting the client).

  • Is Chrome just relaying its upstream error from frontend-service? An external client shouldn't be able to resolve the cluster's URLs (unless you've joined your local machine to the cluster's network or done some other fancy trick). By default, DNS only works within the cluster.

  • Cluster DNS generally follows [service name].[namespace name].svc.cluster.local. If cluster DNS is working, then using curl, ping, wget, etc. from a pod in the cluster and pointing it at that svc will send it to the cluster svc IP, not an external IP.

  • Is your dataapi service configured to respond to /dataapi/api/values, or does it not care what the URI is?

If you don't have any network policies restricting traffic within a namespace, you should be able to create a test pod in the same namespace and curl the service DNS name and the pod IPs directly:

apiVersion: v1
kind: Pod
metadata:
  name: curl-test
  namespace: kong
spec:
  restartPolicy: Never   # run curl once; don't restart the container when it exits
  containers:
  - name: curl-test
    image: buildpack-deps
    imagePullPolicy: Always
    command:
    - "curl"
    - "-v"
    - "http://dataapi:80/dataapi/api/values"
  #nodeSelector:
  #  kubernetes.io/hostname: [a more different node's hostname]

The pod should attempt DNS resolution from the cluster. So it should find dataapi's svc IP and curl port 80 at path /dataapi/api/values. Service IPs are virtual, so they aren't actually 'reachable'. Instead, iptables routes them to the pod IP, which has an actual network endpoint and IS addressable.

Once it completes, just check the logs (kubectl logs curl-test), and then delete the pod.

If this fails, the nature of the failure in the logs should tell you whether it's a DNS or link issue. If it works, then you probably don't have a cluster DNS issue, but you may still have an inter-node communication issue. To test this, you can run the same manifest as above, but uncomment the nodeSelector field to force it to run on a different node than your kong-proxy pod. It's a manual method, but it's quick for troubleshooting. Just rinse and repeat as needed for other nodes.

Of course, it may not be any of this, but hopefully it helps the troubleshooting.

After a lot of help from Eric G (thank you!) and reading this previous StackOverflow question, I finally solved the issue. As the answer in that link illustrates, our frontend pod was serving up our application in a web browser, which knows NOTHING about Kubernetes clusters.

As the link suggests, we added another rule to our nginx ingress to successfully route our HTTP requests to the proper service:

    - host: gateway.*******.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway-service
                port:
                  number: 80

Then from our Angular frontend, we sent our HTTP requests as follows:

...
http.get<string>('http://gateway.*******.com/api/name_of_controller');
...

And we were finally able to communicate with our backend service the way we wanted, with both frontend and backend in the same Kubernetes cluster.
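For reference, a consolidated sketch of what the nginx ingress might look like with both rules in place. This is a hypothetical reconstruction on the networking.k8s.io/v1 API to match the added rule's syntax; hosts remain obfuscated, gateway-service is the service fronting our backend gateway, and the rewrite-target annotation from the original manifest is omitted per Eric's note that it rewrites everything to /:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: app.***.*******.com      # obfuscated; serves the Angular frontend
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
    - host: gateway.*******.com      # obfuscated; browser-facing entry point for API calls
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway-service
                port:
                  number: 80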
