
Linkerd, k8s and routing

I'm currently trying to get my head around k8s and linkerd. I've used docker-compose and Consul before.

I haven't figured out what I'm doing wrong, so I'd be glad if someone could check my logic and spot the mistake.

I'm using minikube locally and would like to use GCE for deployments.

I'm basically trying to get a simple container running a Node application to work in k8s with linkerd, but for some reason I can't get the routing to work.

config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001

    routers:
    - protocol: http
      label: outgoing
      baseDtab: |
        /srv        => /#/io.l5d.k8s/default/http;
        /host       => /srv;
        /http/*/*   => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0

    - protocol: http
      label: incoming
      baseDtab: |
        /srv        => /#/io.l5d.k8s/default/http;
        /host       => /srv;
        /http/*/*   => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
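
As far as I understand the dtab, a request carrying the header Host: world should be resolved by the outgoing router roughly like this (my own sketch of the rewriting steps, assuming linkerd's default HTTP identifier):

/http/1.1/GET/world                    (name from the default identifier)
/host/world                            (/http/*/*   => /host)
/srv/world-v1                          (/host/world => /srv/world-v1)
/#/io.l5d.k8s/default/http/world-v1    (/srv        => /#/io.l5d.k8s/default/http)

So the final name should tell the io.l5d.k8s namer to look up the port named http on a Kubernetes Service called world-v1 in the default namespace. The same should hold for any other service I want to reach by Host header.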

I then deploy a DaemonSet, which I understood to be the most sensible way to run linkerd:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:0.8.6
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"

I then deploy a ReplicationController with a Docker container I built:

apiVersion: v1
kind: ReplicationController
metadata:
  name: testservice
spec:
  replicas: 3
  selector:
    app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: eu.gcr.io/xxxx/testservice:1.0
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        command:
        - "pm2-docker"
        - "processes.json"
        ports:
        - name: service
          containerPort: 8080

When I then run minikube service l5d, the service and linkerd are shown, but I don't get the default page that should be shown.

To test whether everything was working, I built another service that points directly to port 8080, and that works, but not via the linkerd proxy.

Could someone spot the error? Thanks a lot in advance.

We discussed this in some additional detail in the linkerd Slack. The issue was not with the configs themselves, but with the fact that the Host header was not being set on the request.

The above configs route based on the Host header, so that header must correspond to a service name. curl -H "Host: world" http://$IPADDRESS (or whatever) would have worked.

(It's also possible to route based on other parts of the request, e.g. the URL path in the case of HTTP requests.)

Thanks to the linkerd Slack channel and some further experimenting, I managed to figure it out and built two services that talk to each other, posting and getting data. This was just to get the hang of linkerd. When I have some time I'll write a tutorial about it so that others can learn from it.

I was missing a kubectl proxy in my replication controller:

- name: kubectl
  image: buoyantio/kubectl:1.2.3
  args:
  - "proxy"
  - "-p"
  - "8001"
