
Use of Flume as a Kafka consumer

Is it possible to configure the Flume sink to write to my agent's local file system, or do I have to sink to HDFS/Hadoop?
I am working with Flume 1.6.0 and Kafka 10.1.1.
I can show you my Flume config and command-line arguments if you ask, but maybe I'm trying to do something that is just not meant to be done.
I am trying to do a proof of concept on the Kafka side without installing Hadoop or HDFS.
I see config for a file_roll sink, but maybe in these versions that concept is for HDFS only?

The File Roll Sink documentation says:

"Stores events on the local filesystem"
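
As a minimal sketch (the topic name, group id, and output directory are placeholders, and the Kafka source properties are written in the Flume 1.6.0 style), an agent that reads a Kafka topic through a memory channel and rolls files onto the local filesystem could look like:

    # Hypothetical names and paths; Kafka source in the Flume 1.6.0 style
    agent1.sources  = kafka-src
    agent1.channels = mem-ch
    agent1.sinks    = local-file

    # Kafka source: consumes a topic via ZooKeeper (the consumer API used by Flume 1.6.0)
    agent1.sources.kafka-src.type = org.apache.flume.source.kafka.KafkaSource
    agent1.sources.kafka-src.zookeeperConnect = localhost:2181
    agent1.sources.kafka-src.topic = my-topic
    agent1.sources.kafka-src.groupId = flume-poc
    agent1.sources.kafka-src.channels = mem-ch

    # In-memory channel
    agent1.channels.mem-ch.type = memory
    agent1.channels.mem-ch.capacity = 10000
    agent1.channels.mem-ch.transactionCapacity = 1000

    # File Roll Sink: writes events to the local filesystem, rolling every 30 s
    agent1.sinks.local-file.type = file_roll
    agent1.sinks.local-file.sink.directory = /tmp/flume-out
    agent1.sinks.local-file.sink.rollInterval = 30
    agent1.sinks.local-file.channel = mem-ch

Started with something like bin/flume-ng agent -n agent1 -c conf -f kafka-to-local.conf, the rolled files land in the sink.directory on local disk.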

However, I would suggest not using Flume as it requires you to install extra Hadoop libraries.

Kafka Connect is a native Kafka library, and with it you can write the consumed records to a file (or to HDFS).
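
For example, the FileStreamSink connector that ships with Kafka can be run in standalone mode with a properties file along these lines (the topic name and output path are placeholders):

    # file-sink.properties
    name=local-file-sink
    connector.class=FileStreamSink
    tasks.max=1
    topics=my-topic
    file=/tmp/kafka-events.txt

Run it with bin/connect-standalone.sh config/connect-standalone.properties file-sink.properties; each consumed record is appended as a line to the file, with no Hadoop installation required.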

If you want to use Flume, you have to create a Flume agent from Ambari or Cloudera Manager, whichever you are using. You will need HDFS to sink the data from Kafka: the source will be a Kafka topic, the channel can be memory, and the sink HDFS, as in the sketch below.
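
A hedged sketch of that sink section (the NameNode address and path are placeholders; the Kafka source and memory channel would be defined as in the earlier example):

    # HDFS sink for the same agent; hdfs.path is a placeholder
    agent1.sinks.hdfs-out.type = hdfs
    agent1.sinks.hdfs-out.hdfs.path = hdfs://namenode:8020/flume/kafka-events/%Y-%m-%d
    agent1.sinks.hdfs-out.hdfs.fileType = DataStream
    agent1.sinks.hdfs-out.hdfs.useLocalTimeStamp = true
    agent1.sinks.hdfs-out.hdfs.rollInterval = 300
    agent1.sinks.hdfs-out.hdfs.rollSize = 0
    agent1.sinks.hdfs-out.hdfs.rollCount = 0
    agent1.sinks.hdfs-out.channel = mem-ch

Setting rollSize and rollCount to 0 leaves only the time-based roll in effect, so files are closed every five minutes in this sketch.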

