
Kubernetes - Calling Microservice by Service Name

I have two microservices deployed on a K8s cluster (locally, on 3 VMs: 1 master and 2 worker nodes):
1- currency-exchange Microservice
2- currency-conversion Microservice

I am trying to call the currency-exchange microservice from currency-conversion by using the service name:
http://currency-exchange:8000

It returns the following error:
{"timestamp":"2021-02-17T08:38:25.590+0000","status":500,"error":"Internal Server Error","message":"currency-exchange executing GET http://currency-exchange:8000/currency-exchange/from/EUR/to/INR","path":"/currency-conversion/from/EUR/to/INR/quantity/10"}

I am using Kubernetes on CentOS 8 with the Calico CNI, with FELIX_IPTABLESBACKEND=NFT set (based on this link) to facilitate pod-to-pod communication.
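For reference, a common way to set that Felix option (a sketch, assuming the standard calico-node DaemonSet in kube-system; adjust the name to your install):

# switch Felix to the nftables backend, commonly recommended on CentOS 8 / nftables hosts
kubectl -n kube-system set env daemonset/calico-node FELIX_IPTABLESBACKEND=NFT

# confirm the env var landed on the DaemonSet
kubectl -n kube-system get daemonset calico-node -o jsonpath='{.spec.template.spec.containers[0].env}'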
Current services available:

[root@k8s-master ~]# kubectl get svc
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
currency-conversion                  NodePort    10.106.70.108    <none>        8100:32470/TCP               3h40m
currency-exchange                    NodePort    10.110.232.189   <none>        8000:31776/TCP               3h41m

Pods:

[root@k8s-master ~]# kubectl get pods -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP                NODE            NOMINATED NODE   READINESS GATES
currency-conversion-86d9bc4698-rxdkh        1/1     Running   0          5h45m   192.168.212.125   worker-node-1   <none>           <none>
currency-exchange-c79ff888b-c8sdd           1/1     Running   0          5h44m   192.168.19.160    worker-node-2   <none>           <none>
currency-exchange-c79ff888b-nfqpx           1/1     Running   0          5h44m   192.168.212.65    worker-node-1   <none>           <none>

List of CoreDNS Pods available:

[root@k8s-master ~]# kubectl get pods -o wide -n kube-system | grep coredns
coredns-74ff55c5b-9x5qm                    1/1     Running   8          25d   192.168.235.218   k8s-master      <none>           <none>
coredns-74ff55c5b-zkkn7                    1/1     Running   8          25d   192.168.235.220   k8s-master      <none>           <none>
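A quick related check (a sketch; it only verifies that the kube-dns Service in fact fronts these CoreDNS pods):

# the Endpoints of kube-dns should list the two CoreDNS pod IPs above on port 53
kubectl -n kube-system get svc,endpoints kube-dns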

Environment variables in the currency-conversion pod:

[root@k8s-master ~]# kubectl exec -it currency-conversion-86d9bc4698-rxdkh -- printenv

HOSTNAME=currency-conversion-86d9bc4698-rxdkh
CURRENCY_EXCHANGE_SERVICE_HOST=http://currency-exchange
KUBERNETES_SERVICE_HOST=10.96.0.1
CURRENCY_EXCHANGE_SERVICE_PORT=8000

........
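Another thing worth checking from inside the pod is its /etc/resolv.conf (a sketch; the values in the comments are what a default kubeadm cluster usually injects, so treat them as an assumption):

kubectl exec -it currency-conversion-86d9bc4698-rxdkh -- cat /etc/resolv.conf
# typically expected on a default kubeadm cluster:
#   nameserver 10.96.0.10          <- ClusterIP of the kube-dns Service
#   search default.svc.cluster.local svc.cluster.local cluster.local
#   options ndots:5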

Running nslookup kubernetes.default inside the pod:

[root@k8s-master ~]# kubectl exec -it currency-conversion-86d9bc4698-rxdkh -- nslookup kubernetes.default
    nslookup: can't resolve '(null)': Name does not resolve

nslookup: can't resolve 'kubernetes.default': Try again
command terminated with exit code 1
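To separate "the pod cannot reach the DNS Service" from "CoreDNS cannot answer", it can help to query the DNS Service IP explicitly (a sketch; 10.96.0.10 is only assumed to be the kube-dns ClusterIP, verify it with kubectl -n kube-system get svc kube-dns):

# resolve the target service via the cluster DNS Service IP
kubectl exec -it currency-conversion-86d9bc4698-rxdkh -- nslookup currency-exchange.default.svc.cluster.local 10.96.0.10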

How do people solve such a problem? Do they configure/tweak DNS so it works properly as a service registry?

Thanks in advance

EDITED:

[root@k8s-master ~]# kubectl describe service currency-conversion
Name:                     currency-conversion
Namespace:                default
Labels:                   app=currency-conversion
Annotations:              <none>
Selector:                 app=currency-conversion
Type:                     NodePort
IP Families:              <none>
IP:                       10.106.70.108
IPs:                      10.106.70.108
Port:                     <unset>  8100/TCP
TargetPort:               8100/TCP
NodePort:                 <unset>  32470/TCP
Endpoints:                192.168.212.125:8100
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

[root@k8s-master ~]# kubectl describe service currency-exchange
Name:                     currency-exchange
Namespace:                default
Labels:                   app=currency-exchange
Annotations:              <none>
Selector:                 app=currency-exchange
Type:                     NodePort
IP Families:              <none>
IP:                       10.110.232.189
IPs:                      10.110.232.189
Port:                     <unset>  8000/TCP
TargetPort:               8000/TCP
NodePort:                 <unset>  31776/TCP
Endpoints:                192.168.19.160:8000,192.168.212.65:8000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
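One useful experiment at this point is to bypass both DNS and the Service and hit the currency-exchange endpoint pod IPs directly (a sketch, using the endpoint IPs shown above):

# endpoint on worker-node-2 (cross-node traffic)
kubectl exec -it currency-conversion-86d9bc4698-rxdkh -- wget -qO- http://192.168.19.160:8000/currency-exchange/from/EUR/to/INR

# endpoint on worker-node-1 (same node as the caller)
kubectl exec -it currency-conversion-86d9bc4698-rxdkh -- wget -qO- http://192.168.212.65:8000/currency-exchange/from/EUR/to/INR

If the same-node endpoint answers but the cross-node one times out, the problem is in the overlay network rather than in DNS.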

I just discovered that all CoreDNS pods are misbehaving, with a lot of timeouts:

[root@k8s-master ~]# kubectl logs coredns-74ff55c5b-zkkn7 -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:59744->192.168.100.1:53: i/o timeout
[ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:53400->192.168.100.1:53: i/o timeout
[ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:58465->192.168.100.1:53: i/o timeout
[ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:58197->192.168.100.1:53: i/o timeout
[ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:57794->192.168.100.1:53: i/o timeout
[ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:43345->192.168.100.1:53: i/o timeout
[ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:57361->192.168.100.1:53: i/o timeout
[ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:51716->192.168.100.1:53: i/o timeout

How can I start tracing the problem?
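One place to start (a sketch): the log lines above show CoreDNS timing out while talking to 192.168.100.1, which is the upstream resolver it forwards to (by default the node's /etc/resolv.conf via the forward plugin), so inspecting the CoreDNS config and watching its logs is a reasonable first step:

# CoreDNS configuration; look at the 'forward . /etc/resolv.conf' line
kubectl -n kube-system get configmap coredns -o yaml

# follow the CoreDNS logs while re-running the failing nslookup from the app pod
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20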

Extra Details:

[root@k8s-master ~]# kubectl exec -i -t currency-conversion-86d9bc4698-rxdkh -- sh
/ # wget http://currency-exchange:8000/currency-exchange/from/EUR/to/INR
wget: bad address 'currency-exchange:8000'
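Two follow-up requests from the same shell can narrow this down (a sketch; 10.110.232.189 is the ClusterIP shown for currency-exchange above):

/ # wget -qO- http://currency-exchange.default.svc.cluster.local:8000/currency-exchange/from/EUR/to/INR
/ # wget -qO- http://10.110.232.189:8000/currency-exchange/from/EUR/to/INR

If the fully qualified name also gives "bad address" but the ClusterIP works, only DNS is broken; if the ClusterIP fails as well, the kube-proxy/overlay path is broken too.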

It looks to me like you have incorrectly set up your CNI overlay network. I checked your previous question to verify the node IP addresses, and it looks like your pod network overlaps with your host network:

The Kubernetes pod-network-cidr is the IP prefix for all pods in the Kubernetes cluster. This range must not clash with other networks in your VPC

The Kubernetes pod network documentation describes this as well:

Take care that your Pod network must not overlap with any of the host networks: you are likely to see problems if there is any overlap. (If you find a collision between your network plugin's preferred Pod network and some of your host networks, you should think of a suitable CIDR block to use instead, then use that during kubeadm init with --pod-network-cidr and as a replacement in your network plugin's YAML).

This is also mentioned in the Calico instructions for creating a cluster:

Note: If 192.168.0.0/16 is already in use within your network, you must select a different pod network CIDR, replacing 192.168.0.0/16 in the above command.
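A way to check for such an overlap, and the shape of the fix (a sketch only; 10.244.0.0/16 below is just an example and must be chosen so it does not collide with your host or service networks):

# compare the node addresses (host network) with the pod CIDR the cluster was created with
kubectl get nodes -o wide                          # INTERNAL-IP column
kubectl cluster-info dump | grep -m 1 cluster-cidr

# if they overlap, the cluster normally has to be re-created with a different pod CIDR:
#   kubeadm reset                                  (on every node)
#   kubeadm init --pod-network-cidr=10.244.0.0/16  (on the master)
# and CALICO_IPV4POOL_CIDR in calico.yaml set to the same range before applying it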

PS. You can always wget curl from here.
