

istio routing between two pods

Trying to get into Istio on Kubernetes, but it seems I am missing either some fundamentals, or I am doing things back to front. I am quite experienced with Kubernetes, but Istio and its VirtualService confuse me a bit.

I created 2 deployments (helloworld-v1/helloworld-v2). Both use the same image; the only difference is an environment variable, which makes the app output either version: "v1" or version: "v2". I am using a little test container I wrote which basically returns the headers the application received. A Kubernetes service named "helloworld" can reach both.
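
For reference, a minimal sketch of what one of these deployments might look like (the image, port, and env var name are placeholders I made up; the version label is what the DestinationRule below selects on):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
      version: v1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - name: helloworld
        image: example/header-echo:latest   # placeholder for the test container
        ports:
        - containerPort: 3000
        env:
        - name: VERSION                     # hypothetical; whatever the app reads
          value: "v1"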

I created a VirtualService and a DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 90
    - destination:
        host: helloworld
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
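
Both objects can be checked after applying, e.g.:

kubectl -n demo get virtualservice,destinationrule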

According to the docs, omitting the gateways field means the internal "mesh" gateway is used. Sidecar containers are successfully attached:

kubectl -n demo get all
NAME                                 READY     STATUS    RESTARTS   AGE
pod/curl-6657486bc6-w9x7d            2/2       Running   0          3h
pod/helloworld-v1-d4dbb89bd-mjw64    2/2       Running   0          6h
pod/helloworld-v2-6c86dfd5b6-ggkfk   2/2       Running   0          6h

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/helloworld   ClusterIP   10.43.184.153   <none>        80/TCP     6h

NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/curl            1         1         1            1           3h
deployment.apps/helloworld-v1   1         1         1            1           6h
deployment.apps/helloworld-v2   1         1         1            1           6h

NAME                                       DESIRED   CURRENT   READY     AGE
replicaset.apps/curl-6657486bc6            1         1         1         3h
replicaset.apps/helloworld-v1-d4dbb89bd    1         1         1         6h
replicaset.apps/helloworld-v2-6c86dfd5b6   1         1         1         6h
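
(For completeness: the 2/2 READY above comes from automatic sidecar injection, which was enabled on the demo namespace roughly like this:)

kubectl label namespace demo istio-injection=enabled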

Everything works quite fine when I access the application from "outside" (istio-ingressgateway); v2 is called one time, v1 nine times:

curl --silent -H 'host: helloworld' http://localhost
{"host":"helloworld","user-agent":"curl/7.47.0","accept":"*/*","x-forwarded-for":"10.42.0.0","x-forwarded-proto":"http","x-envoy-internal":"true","x-request-id":"a6a2d903-360f-91a0-b96e-6458d9b00c28","x-envoy-decorator-operation":"helloworld:80/*","x-b3-traceid":"e36ef1ba2229177e","x-b3-spanid":"e36ef1ba2229177e","x-b3-sampled":"1","x-istio-attributes":"Cj0KF2Rlc3RpbmF0aW9uLnNlcnZpY2UudWlkEiISIGlzdGlvOi8vZGVtby9zZXJ2aWNlcy9oZWxsb3dvcmxkCj8KGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBIjEiFoZWxsb3dvcmxkLmRlbW8uc3ZjLmNsdXN0ZXIubG9jYWwKJwodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USBhIEZGVtbwooChhkZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWUSDBIKaGVsbG93b3JsZAo6ChNkZXN0aW5hdGlvbi5zZXJ2aWNlEiMSIWhlbGxvd29ybGQuZGVtby5zdmMuY2x1c3Rlci5sb2NhbApPCgpzb3VyY2UudWlkEkESP2t1YmVybmV0ZXM6Ly9pc3Rpby1pbmdyZXNzZ2F0ZXdheS01Y2NiODc3NmRjLXRyeDhsLmlzdGlvLXN5c3RlbQ==","content-length":"0","version":"v1"}
"version": "v1",
"version": "v1",
"version": "v2",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",

But as soon as I do the curl from within a pod (in this case just byrnedo/alpine-curl) against the service, things start to get confusing:

curl --silent -H 'host: helloworld' http://helloworld.demo.svc.cluster.local
{"host":"helloworld","user-agent":"curl/7.61.0","accept":"*/*","version":"v1"}
"version":"v2"
"version":"v2"
"version":"v1"
"version":"v1"
"version":"v2"
"version":"v2"
"version":"v1"
"version":"v2“
"version":"v1"

Not only am I missing all the Istio attributes (which I understand for service-to-service communication, since as far as I know they are set when the request first enters the mesh via the gateway), but the distribution looks like the default 50:50 balancing of a plain Kubernetes service.

What do I have to do to achieve the same 1:9 balance in inter-service communication? Do I have to create a second, "internal" gateway to use instead of the service FQDN? Did I miss a definition? Should calling a service FQDN from within a pod respect a VirtualService routing?

The Istio version used is 1.0.1, the Kubernetes version is v1.11.1.

UPDATE: Deployed the sleep pod as suggested (this time not relying on the auto-injection of the demo namespace), but manually, as described in the sleep sample:
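
The manual injection step from that sample looks roughly like this (paths assume a local checkout of the Istio 1.0.1 release):

kubectl -n demo apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)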

kubectl -n demo get deployment sleep -o wide
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS          IMAGES                                     SELECTOR
sleep     1         1         1            1           2m        sleep,istio-proxy   tutum/curl,docker.io/istio/proxyv2:1.0.1   app=sleep

I also changed the VirtualService to 0/100 to see whether it works at first glance. Unfortunately, this did not change much:

export SLEEP_POD=$(kubectl get -n demo pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user- agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v1"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v1"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"

Found the solution: one of the prerequisites (which I forgot) is that proper routing requires named ports, see https://istio.io/docs/setup/kubernetes/spec-requirements/.

Wrong:

spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 3000

Right:

spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000

After naming the port http, everything works like a charm.
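
For anyone hitting the same issue on a live cluster, the fix can also be applied in place; a sketch using kubectl patch (assuming the port shown above is the first entry in the list):

kubectl -n demo patch service helloworld --type=json \
  -p '[{"op": "add", "path": "/spec/ports/0/name", "value": "http"}]'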

Routing rules are evaluated on the client side, so you need to make sure that the pod you are running curl from has an Istio sidecar attached to it. If it just calls the service directly, it can't evaluate the 90/10 rule that you set, but instead will just fall through to the default kube round-robin routing.
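
A quick way to check whether the client pod actually has the sidecar is to list its containers, for example (using the pod name variable exported above):

kubectl -n demo get pod $SLEEP_POD -o jsonpath='{.spec.containers[*].name}'

and confirm that istio-proxy shows up next to the application container.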

The Istio sleep sample is a good one to use as a test client pod.
