
How does an Argo Workflow carry out control flow with Kubernetes?

At a stretch, one can think of Argo Workflows as a programming language implemented in YAML, using Kubernetes as a back-end.

  • A procedure can be defined using steps:
  • Functions are Templates, with arguments coming in two flavours:
    • Parameters, which are strings
    • Artifacts, which are files shared via external storage such as S3 or NFS
  • There is flow control (illustrated in the sketch after this list):
    • Conditionals are implemented by when:
    • Iterators are implemented by withSequence: and withItems:
    • Recursion is possible, with Templates calling themselves
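
As a concrete illustration, here is a minimal Workflow sketch that uses steps, Parameters, a when: conditional, a withItems: iterator, and a recursive Template. The names (control-flow-demo, coinflip, echo) are invented for the example, loosely following the recursive coin-flip example from the Argo documentation:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: control-flow-demo-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        # Iterator: run the echo Template once per item, passing
        # each item in as a Parameter (a plain string).
        - - name: fan-out
            template: echo
            arguments:
              parameters:
                - name: msg
                  value: "{{item}}"
            withItems: [a, b, c]
        # Then run the recursive coin-flip sub-procedure.
        - - name: flip
            template: coinflip

    - name: coinflip
      steps:
        - - name: flip-coin
            template: flip-coin
        # Conditional: only one of the branches below runs,
        # depending on the previous step's output.
        - - name: heads
            template: echo
            arguments:
              parameters:
                - name: msg
                  value: "got heads"
            when: "{{steps.flip-coin.outputs.result}} == heads"
          # Recursion: the Template calls itself until it sees heads.
          - name: tails
            template: coinflip
            when: "{{steps.flip-coin.outputs.result}} == tails"

    - name: flip-coin
      script:
        image: python:alpine3.13
        command: [python]
        # The script's stdout becomes outputs.result for the step.
        source: |
          import random
          print(random.choice(["heads", "tails"]))

    - name: echo
      inputs:
        parameters:
          - name: msg
      container:
        image: alpine:3.13
        command: [echo, "{{inputs.parameters.msg}}"]
```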

The Templates map fairly directly onto Kubernetes YAML specs. Parameters appear to be shared via annotations, and artifacts are shared via native Kubernetes functionality.

How is the flow control implemented? What features of Kubernetes does Argo use to accomplish this? Does it have something to do with the Kubernetes Control Plane?

Argo Workflows is implemented with Kubernetes Custom Resources, i.e. its own YAML manifest types. For every custom resource type there is an associated Pod running a custom Kubernetes controller that contains the logic.
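
For example, the Workflow type itself is registered with the cluster through a CustomResourceDefinition. The sketch below is trimmed and illustrative, not Argo's actual manifest:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # <plural>.<group> is the required naming convention
  name: workflows.argoproj.io
spec:
  group: argoproj.io
  scope: Namespaced
  names:
    kind: Workflow
    plural: workflows
    singular: workflow
    shortNames: [wf]
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # Accept arbitrary spec/status content; the custom
          # controller, not the API server, interprets it.
          x-kubernetes-preserve-unknown-fields: true
```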

The custom controller may create other resources or Pods, watch the result of their execution in their status fields, and then implement its workflow logic accordingly, e.g. observe the results and follow the declared when: expressions that depend on them.
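
To make that concrete, here is a hypothetical excerpt of a Workflow's status: stanza after one step has finished. The field names follow Argo's status layout, but the node ID and values are invented; the controller records each node's outcome here and evaluates pending when: expressions against it:

```yaml
status:
  phase: Running
  nodes:
    control-flow-demo-x7k2p-1234567890:   # invented node ID
      displayName: flip-coin
      type: Pod
      phase: Succeeded
      outputs:
        # stdout of the script step; the controller compares this
        # against "{{steps.flip-coin.outputs.result}} == heads"
        result: tails
```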

I have more experience with Tekton Pipelines, but it works the same way as Argo Workflows. If you are interested in implementing something similar, I recommend starting with Kubebuilder and reading The Kubebuilder Book.
