Linkerd, k8s and routing
I'm currently trying to get my head around k8s and linkerd. I used docker-compose and consul before.
I haven't fully figured out what I've been doing wrong, so I'd be glad if someone could check the logic and see where the mistake is.
I'm using minikube locally and would like to use GCE for deployments.
I'm basically trying to get a simple container running a node application working in k8s with linkerd, but for some reason I can't get the routing to work.
config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    routers:
    - protocol: http
      label: outgoing
      baseDtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /http/*/* => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
    - protocol: http
      label: incoming
      baseDtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /http/*/* => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
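To make the dtab above easier to follow, this is, as far as I understand it, how the outgoing router resolves a request carrying Host: world (the exact shape of the identified name depends on the linkerd version, so treat this as a sketch):

```
# linkerd 0.8.x identifies an HTTP request roughly as
# /http/<version>/<method>/<host>, e.g. for Host: world:
/http/1.1/GET/world
  => /host/world               # via /http/*/* => /host
  => /srv/world-v1             # via /host/world => /srv/world-v1
                               # (more specific entry wins over /host => /srv)
  => /#/io.l5d.k8s/default/http/world-v1
# i.e. the k8s namer resolves port "http" of service "world-v1"
# in the "default" namespace.
```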
I then deploy a daemonset, which, from what I understood, is the most sensible way to use linkerd:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:0.8.6
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
I then deploy a replication controller with a docker container I built:
apiVersion: v1
kind: ReplicationController
metadata:
  name: testservice
spec:
  replicas: 3
  selector:
    app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: eu.gcr.io/xxxx/testservice:1.0
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        command:
        - "pm2-docker"
        - "processes.json"
        ports:
        - name: service
          containerPort: 8080
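One note on the http_proxy entry above: Kubernetes expands $(NODE_NAME) when the container starts, so the app's proxy ends up pointing at the linkerd hostPort on its own node. A minimal sketch of what the expanded value looks like (the node name here is made up):

```shell
# Hypothetical node name, for illustration only.
NODE_NAME="gke-node-1"

# Kubernetes would expand $(NODE_NAME):4140 to this inside the container:
http_proxy="${NODE_NAME}:4140"
echo "$http_proxy"

# Any HTTP client that honours http_proxy then sends its requests
# through linkerd on that node, e.g.:
#   curl -H "Host: world" http://some-service/
```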
When I then run minikube service l5d, the service and linkerd are shown, but I don't get the default page that should be shown.
To test whether everything was working, I built another service that points directly to port 8080, and that works, but not via the linkerd proxy.
Could someone spot the error? Thanks a lot in advance.
We discussed this with some additional details in the linkerd Slack. The issue was not with the configs themselves, but with the fact that the Host header was not being set on the request.
The above configs route based on the Host header, so this header must correspond to a service name.
curl -H "Host: world" http://$IPADDRESS
(or whatever) would have worked.
(It's also possible to route based on other parts of the request, e.g. the URL path in the case of HTTP requests.)
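For example, a minimal sketch of a router that identifies requests by URL path instead of Host, assuming the io.l5d.path identifier that linkerd ships (option names should be checked against the docs for the release in use):

```yaml
routers:
- protocol: http
  label: path-routing
  identifier:
    kind: io.l5d.path
    segments: 1      # identify by the first path segment
    consume: true    # strip that segment before proxying
  baseDtab: |
    /http => /#/io.l5d.k8s/default/http;
  servers:
  - port: 4142
    ip: 0.0.0.0
```

With a router like this, a request for http://$IPADDRESS:4142/world/... would be routed to the k8s service named world, regardless of its Host header.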
Thanks to the linkerd Slack channel and some further trying, I managed to figure it out and build two services that talk to each other, posting and getting data. This was just to get the hang of linkerd. When I have some time I will write a tutorial about it so that others can learn from it.
I was missing a kubectl proxy in my replication controller:
      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - proxy
        - "-p"
        - "8001"