
Is Apache NiFi ready to use with Kubernetes in production?

I am planning to set up Apache NiFi on Kubernetes and take it to production. During my research I didn't find anyone who is actually using this combination in a production setup.

Is it a good idea to choose this combination? Could you please share your thoughts/experience on this?

https://community.cloudera.com/t5/Support-Questions/NiFi-on-Kubernetes/td-p/203864

As mentioned in the comments, work has been done on running NiFi on Kubernetes, but currently this is not generally available.

It is good to know that there will be dataflow offerings where NiFi and Kubernetes meet in some shape or form during the coming year.* So I would recommend keeping an eye out for this and discussing with your local contacts before trying to build it from scratch.

*Disclaimer: Though I am an employee of Cloudera, the main driving force behind NiFi, I am not qualified to make promises and this is purely my own view.

I would like to invite you to try a Helm chart at https://github.com/Datenworks/apache-nifi-helm

We've been maintaining a 5-node NiFi cluster on GKE (Google Kubernetes Engine) in a production environment without major issues, and it performs pretty well. Please let me know if you find any issues running this chart in your environment.
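For reference, installing a chart like the one above typically looks something like the sketch below. The release name, namespace, and the `replicaCount` value name are illustrative assumptions; consult the repository's README for the authoritative instructions:

```shell
# Clone the chart repository (chart layout is an assumption;
# check the repo's README for the actual install steps and value names)
git clone https://github.com/Datenworks/apache-nifi-helm.git
cd apache-nifi-helm

# Install into a dedicated namespace; "nifi" is an example release name,
# and "replicaCount" is a hypothetical value key for a 5-node cluster
helm install nifi . \
  --namespace nifi --create-namespace \
  --set replicaCount=5

# Watch the pods come up
kubectl get pods -n nifi -w
```

This requires a working cluster context (`kubectl config current-context`) and Helm 3; nothing here is specific to GKE.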

Regarding any high-volume setup on k8s: be sure to tune your Linux kernel, primarily the settings related to the Linux connection-tracking (conntrack) service. You should also expect to see non-zero TCP timeouts, retries, out-of-window ACKs, et al. Depending on which container networking implementation is used, there may be additional configuration changes required.
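As a starting point, conntrack tuning is usually done via sysctl on the nodes. The keys below are real kernel parameters, but the values are illustrative assumptions only; size them against your observed connection counts (e.g. via `conntrack -S` and `conntrack -C`) rather than copying them:

```
# /etc/sysctl.d/99-conntrack.conf -- illustrative values, not a recommendation
net.netfilter.nf_conntrack_max = 1048576              # cap on tracked connections
net.netfilter.nf_conntrack_tcp_timeout_established = 86400   # seconds
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30
net.core.somaxconn = 4096                              # listen backlog for busy pods
```

Apply with `sysctl --system` on each node. On Kubernetes this is typically rolled out via the node image or a privileged DaemonSet, since netfilter sysctls on the host are not settable from ordinary pods.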

I will assume you are using containerd and NOT using Docker networking (except, obviously, for the container(s) within a pod).

The issue applies to ANY heavy-IO pod: Kafka, NiFi, MySQL, PostgreSQL, you name it.

The incidence increases when:

  • "high" volumes of cross pod (especially cross node) tcp connections occur发生“大量”交叉 pod(尤其是交叉节点)tcp 连接
  • additional errors if you have large (megabyte or larger) messages如果您有大(兆字节或更大)消息,则会出现其他错误

Be aware of any other components using either the pod or VM TCP stack (e.g. PVC software supporting NiFi persisted storage).
