
Linkerd and k8s not working

I'm trying to get my head around linkerd in kubernetes. I'm using the linkerd daemonset example from their website in my local minikube.

It is all deployed in the production namespace. When I try to

http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs

Nothing happens. Where am I going wrong in my setup?
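One thing worth checking before digging into linkerd itself is what that jsonpath expression actually resolves to; on minikube a LoadBalancer Service usually never receives an ingress IP, so http_proxy collapses to just ":4140", which curl cannot use as a proxy. A sketch of that check (the fallback suggestion uses minikube's built-in service URL helper):

```shell
# Check what the jsonpath expands to first. On minikube, a LoadBalancer
# Service usually never receives an ingress IP, so the expansion is empty
# and http_proxy collapses to ":4140", which curl cannot use as a proxy.
INGRESS=$(kubectl --namespace=production get svc l5d \
  -o jsonpath="{.status.loadBalancer.ingress[0].*}" 2>/dev/null)
if [ -z "$INGRESS" ]; then
  echo "l5d has no external IP; try 'minikube service l5d -n production --url'"
else
  http_proxy="$INGRESS:4140" curl -s http://apiserver/readinezs
fi
```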

My Linkerd yaml:

# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25

    usage:
      orgId: linkerd-examples-daemonset

    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv        => /#/io.l5d.k8s/production/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: production
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      responseClassifier:
        kind: io.l5d.retryableRead5XX

    - protocol: http
      label: incoming
      dtab: |
        /srv        => /#/io.l5d.k8s/production/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:0.9.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990

Here's my deployment for an apiservice:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apiserver-production
spec:
  replicas: 1
  template:
    metadata:
      name: apiserver
      labels:
        app: apiserver
        role: gateway
        env: production
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: apiserver
        image: eu.gcr.io/xxxxx/apiservice:latest
        env:
        - name: MONGO_HOST
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: host
        - name: MONGO_PORT
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: port
        - name: MONGO_USR
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: username
        - name: MONGO_PWD
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: password
        - name: MONGO_DB
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: db
        - name: MONGO_PREFIX
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: prefix
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        resources:
          limits:
            memory: "300Mi"
            cpu: "50m"
        imagePullPolicy: Always
        command:
        - "pm2-docker"
        - "processes.json"
        ports:
        - name: apiserver
          containerPort: 8080
      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - proxy
        - "-p"
        - "8001"

Here's the service:

kind: Service
apiVersion: v1
metadata:
  name: apiserver
spec:
  selector:
    app: apiserver
    role: gateway
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
  - name: external
    port: 80
    targetPort: 8080

In my node application I'm using global tunnel:

const globalTunnel = require('global-tunnel-ng');

const server = app.listen(port);
server.on('listening', function(){

  // make sure all traffic goes over linkerd
  globalTunnel.initialize({
    host: 'localhost',
    port: 4140
  });

  console.log(`Feathers application started on ${app.get('host')}:${app.get('port')}`);
});
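One inconsistency worth noting: the Deployment above already sets http_proxy to $(NODE_NAME):4140, but the snippet hardcodes localhost:4140, and inside a pod localhost typically does not reach the linkerd hostPort bound on the node. A minimal sketch that derives the proxy target from the http_proxy env var instead (parseProxy is a hypothetical helper written for this illustration, not part of global-tunnel):

```javascript
// The Deployment sets http_proxy to "$(NODE_NAME):4140" via the downward
// API; parse that and pass it to globalTunnel.initialize() instead of
// hardcoding localhost (which, inside a pod, typically does not reach the
// node's hostPort). parseProxy is a hypothetical helper for illustration.
function parseProxy(value) {
  // Accepts "host:4140" or "http://host:4140".
  const stripped = value.replace(/^https?:\/\//, '');
  const idx = stripped.lastIndexOf(':');
  return {
    host: stripped.slice(0, idx),
    port: parseInt(stripped.slice(idx + 1), 10)
  };
}

const proxy = parseProxy(process.env.http_proxy || 'localhost:4140');
console.log(proxy);
// globalTunnel.initialize(proxy);
```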

Where is your curl command being run?

 http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs

The linkerd service in the example doesn't expose a public IP address. You can confirm this with kubectl get svc/l5d; I expect you'll see no external IP.

I think that you'll need to modify the service definition, or create an additional explicitly external service that exposes a ClusterIP, in order to receive ingress traffic.
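On minikube, one way to do that is an additional NodePort Service in front of the daemonset; a sketch, assuming the same app: l5d labels and ports as the l5d Service above (the name l5d-external and the nodePort value are illustrative):

```yaml
# Sketch: an explicitly external Service for the linkerd daemonset.
# On minikube, reach it at $(minikube ip):30140.
apiVersion: v1
kind: Service
metadata:
  name: l5d-external
  namespace: production
spec:
  selector:
    app: l5d
  type: NodePort
  ports:
  - name: outgoing
    port: 4140
    nodePort: 30140
```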

Deploying two of the same node applications and making them send requests to each other worked. Weirdly, the requests don't show up in the linkerd dashboard.
