How to reference local Kafka and Zookeeper config on Spring Cloud Dataflow "Cloudfoundry" server start
Here is what I have successfully done so far on the SCDF Local Server:
mymac$ java -jar spring-cloud-dataflow-server-local-1.3.0.RELEASE.jar --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=localhost:9092 --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=localhost:2181
I was able to create my streams:
ingest = producer-app > :broker1
filter = :broker1 > filter-app > :broker2
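For context, stream definitions like the two above are typically registered through the Dataflow shell; a sketch of the equivalent commands (app and destination names taken from the DSL above):

```shell
# Create and deploy the two streams from the Dataflow shell.
# ':broker1' and ':broker2' are named destinations (Kafka topics) shared between the streams.
dataflow:> stream create --name ingest --definition "producer-app > :broker1" --deploy
dataflow:> stream create --name filter --definition ":broker1 > filter-app > :broker2" --deploy
```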
Now I need help to do the exact same thing on PCF Dev.
1.1) cf push -f manifest-scdf.yml --no-start -p /XXX/XXX/XXX/spring-cloud-dataflow-server-cloudfoundry-1.3.0.BUILD-SNAPSHOT.jar -k 1500M
This runs fine, no problem. But step 1.2:
1.2) cf start dataflow-server --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=host.pcfdev.io:9092 --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=host.pcfdev.io:2181
gives me this error:
Incorrect Usage: unknown flag `spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers'
Below is my manifest-scdf.yml file:
---
instances: 1
memory: 2048M
applications:
- name: dataflow-server
  host: dataflow-server
  services:
  - redis
  - rabbit
  env:
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.local.pcfdev.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG: pcfdev-org
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: pcfdev-space
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: local.pcfdev.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME: admin
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD: admin
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SKIP_SSL_VALIDATION: true
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: rabbit
    MAVEN_REMOTE_REPOSITORIES_REPO1_URL: https://repo.spring.io/libs-snapshot
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DISK: 512
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_BUILDPACK: java_buildpack
    spring.cloud.deployer.cloudfoundry.stream.memory: 400
    spring.cloud.dataflow.features.tasks-enabled: true
    spring.cloud.dataflow.features.streams-enabled: true
Please help me. Thank you.
There are a few options for supplying Kafka credentials to stream apps in PCF.
One option is to create a CUPS (user-provided service instance) for an external Kafka service. While deploying the stream, you can then supply the coordinates to each application individually, as described in the docs, or supply them as global properties for all the stream apps deployed by the SCDF server.
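For reference, a user-provided service holding the Kafka coordinates can be created with the CF CLI roughly like this (the service name `kafkacups` and the credential keys are illustrative, not mandated by SCDF; replace `<HOST>` with a host the deployed apps can reach):

```shell
# Create a user-provided service instance carrying the external Kafka coordinates.
cf create-user-provided-service kafkacups -p '{"brokers":"<HOST>:9092","zkNodes":"<HOST>:2181"}'

# Have the SCDF server bind this service to every stream app it deploys
# (this replaces the 'rabbit' entry in the manifest's STREAM_SERVICES setting).
cf set-env dataflow-server SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES kafkacups
cf restage dataflow-server
```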
Instead of extracting them from a CUPS, you can also supply the HOST/PORT directly while deploying the stream. Again, this can be applied globally, too:
stream deploy myTest --properties "app.*.spring.cloud.stream.kafka.binder.brokers=<HOST>:9092,app.*.spring.cloud.stream.kafka.binder.zkNodes=<HOST>:2181"
Note: the HOST must be reachable by the stream apps; otherwise, they will continue to connect to localhost and potentially fail, since the apps are running inside a VM.
The error you're seeing is coming from the CF CLI: it's interpreting those (I'm assuming environment) variables you're providing as flags to the cf start command, and failing.
You could either provide them in your manifest.yml or set their values manually using the CLI's cf set-env command, by doing something like this:
cf set-env dataflow-server spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers host.pcfdev.io:9092
cf set-env dataflow-server spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes host.pcfdev.io:2181
After you've set them, they should be picked up when you run cf start dataflow-server.
Relevant CLI docs: http://cli.cloudfoundry.org/en-US/cf/set-env.html