

Canary rollouts with linkerd and argo rollouts

I'm trying to configure a canary rollout for a demo, but I'm having trouble getting the traffic splitting to work with linkerd. The funny part is that I was able to get this working with istio, and I find istio to be much more complicated than linkerd.

I have a basic Go service defined like this:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: fish
spec:
  [...]
  strategy:
    canary:
      canaryService: canary-svc
      stableService: stable-svc
      trafficRouting:
        smi: {}
      steps:
      - setWeight: 5
      - pause: {}
      - setWeight: 20
      - pause: {}
      - setWeight: 50
      - pause: {}
      - setWeight: 80
      - pause: {}
---
apiVersion: v1
kind: Service
metadata:
  name: canary-svc
spec:
  selector:
    app: fish
  ports:
    - name: http
      port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: stable-svc
spec:
  selector:
    app: fish
  ports:
    - name: http
      port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fish
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    cert-manager.io/cluster-issuer: letsencrypt-production
    cert-manager.io/acme-challenge-type: dns01
    external-dns.alpha.kubernetes.io/hostname: fish.local
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
  rules:
    - host: fish.local
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: stable-svc
              port:
                number: 8080

When I do the deploy (sync) via ArgoCD, I can see the traffic split is 50/50:

- apiVersion: split.smi-spec.io/v1alpha2
  kind: TrafficSplit
  metadata:
    [...]
    name: fish
    namespace: default
  spec:
    backends:
    - service: canary-svc
      weight: "50"
    - service: stable-svc
      weight: "50"
    service: stable-svc

However, when running a curl command in a while loop, I only get back responses from stable-svc. The only time I see a change is after I have completely moved the rollout to 100%.
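For reference, the check itself is nothing fancy; a bounded loop like the following sketch is what I mean (it assumes, as in my demo, that the service's response body identifies which version answered):

```shell
# Send 100 requests through the ingress and tally which backend answered.
# Assumes the demo service returns a body identifying its version/service,
# so stable vs. canary responses can be counted.
for i in $(seq 1 100); do
  curl -s https://fish.local/
  echo
done | sort | uniq -c
```

With a working 50/50 split the tally should show roughly half of the responses coming from each backend; in my case it shows 100% stable-svc.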

I tried to follow this guide: https://argoproj.github.io/argo-rollouts/getting-started/smi/

Any help would be greatly appreciated.

Thanks

After reading this: https://linkerd.io/2.10/tasks/using-ingress/ I discovered that you need to re-inject your ingress controller with the Linkerd proxy running in ingress mode:

$ kubectl get deployment <ingress-controller> -n <ingress-namespace> -o yaml | linkerd inject --ingress - | kubectl apply -f -

TL;DR: if you want Linkerd functionality like Service Profiles, Traffic Splits, etc., there is additional configuration required to make the Ingress controller's Linkerd proxy run in ingress mode.
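If you'd rather not pipe the Deployment through `linkerd inject` on the command line, the same behaviour can be expressed declaratively; per the Linkerd inject annotations, setting `linkerd.io/inject: ingress` on the controller's pod template enables ingress mode (the sketch below shows just the relevant fragment of the Deployment):

```yaml
# Fragment of the ingress controller Deployment's pod template.
# linkerd.io/inject: ingress injects the proxy in ingress mode,
# so it honours the l5d-dst-override header set by the ingress.
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: ingress
```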

So there's a bit more context in this issue, but the TL;DR is that ingresses tend to target individual pods instead of the service address. Putting Linkerd's proxy in ingress mode tells it to override that behaviour. NGINX does already have a setting that lets it hit services directly instead of endpoints; you can see that in their docs.
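The NGINX setting referred to is the `service-upstream` annotation; if I'm reading the ingress-nginx docs correctly, adding it to the Ingress makes NGINX proxy to the Service's cluster IP rather than the individual pod endpoints, leaving the mesh free to do the splitting:

```yaml
metadata:
  annotations:
    # Route via the Service's ClusterIP so the mesh, not NGINX's
    # own endpoint list, decides which backend pod gets the request.
    nginx.ingress.kubernetes.io/service-upstream: "true"
```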
