Write a Flume configuration to upload an ever-growing file to HDFS

I'm a newbie to Flume and have some problems with its configuration.

I use Hortonworks Sandbox HDP 2.6.5 on Oracle VirtualBox (if this is important).

I have a text file input_data.txt in my VM:

(screenshot)

Content of input_data.txt looks like this:

(screenshot)

I use the following command to create and gradually grow the input:

cat input_data.txt | while read line ; do echo "$line" ; sleep 0.2 ; done > output.txt

(screenshot)

What I'm trying to achieve:

1) Write a Flume configuration to upload the ever-growing output.txt file to HDFS

2) If possible, the destination file in HDFS should be updated every time the source file (/usr/AUX/output.txt) changes.

For example: I open /usr/AUX/output.txt, append several lines at the end, and save it:

Nov 16 10:21:22 ephubudw3000 avahi-daemon[990]: Invalid response packet from host 10.0.9.31.
Nov 16 10:21:22 ephubudw3000 avahi-daemon[990]: Invalid response packet from host 10.0.12.143.
...
lolkek
hehehehe
azazaza322

and then this new data should appear in the destination file in HDFS at hdfs://sandbox.hortonworks.com:8020/user/tutorial/.

Here is what I've already tried:

I created this configuration (flume.conf file):

a1.sources = src1
a1.sinks =  sink1
a1.channels = memoryChannel

a1.sources.src1.type = exec
a1.sources.src1.shell = /bin/bash -c
a1.sources.src1.command = cat /usr/AUX/output.txt
a1.sources.src1.batchSize = 1
a1.sources.src1.channels = memoryChannel 

a1.channels.memoryChannel.type = memory
a1.channels.memoryChannel.capacity = 1000
a1.channels.memoryChannel.transactionCapacity = 100

a1.sinks.sink1.type = hdfs
a1.sinks.sink1.channels = memoryChannel
a1.sinks.sink1.hdfs.filetype = DataStream
a1.sinks.sink1.hdfs.writeFormat = Text
a1.sinks.sink1.hdfs.path = hdfs://sandbox.hortonworks.com:8020/user/tutorial/

Then I launch the Flume agent (a1) using this command:

/usr/hdp/current/flume-server/bin/flume-ng agent -c /etc/flume/conf -f /etc/flume/conf/flume.conf -n a1

[root@sandbox-hdp AUX]# /usr/hdp/current/flume-server/bin/flume-ng agent -c /etc/flume/conf -f /etc/flume/conf/flume.conf -n a1
Warning: JAVA_HOME is not set!
Info: Including Hadoop libraries found via (/bin/hadoop) for HDFS access
Info: Excluding /usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.6.5.0-292/tez/lib/slf4j-api-1.7.5.jar from classpath
Info: Including HBASE libraries found via (/bin/hbase) for HBASE access
Info: Excluding /usr/hdp/2.6.5.0-292/hbase/lib/slf4j-api-1.7.7.jar from classpath
Info: Excluding /usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.6.5.0-292/tez/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Including Hive libraries found via () for Hive access
+ exec /usr/bin/java -Xmx20m -cp '/etc/flume/conf:/usr/hdp/2.6.5.0-292/flume/lib/*:/usr/hdp/2.6.5.0-292/hadoop/conf:/usr/hdp/2.6.5.0-292/hadoop/lib/activation-1.1.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/asm-3.2.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/aws-java-sdk-core-1.10.6.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/aws-java-sdk-kms-1.10.6.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/aws-java-sdk-s3-1.10.6.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/azure-keyvault-core-0.8.0.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/azure-storage-5.4.0.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.6.5.0-292/hadoop/lib/commons-codec-1.4.jar:
...........................
2/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.5.0-292/hadoop/lib/native::/usr/hdp/2.6.5.0-292/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.5.0-292/hadoop/lib/native org.apache.flume.node.Application -f /etc/flume/conf/flume.conf -n a1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/flume/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/flume/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

After this I append several lines to the end of /usr/AUX/output.txt...

(screenshot)

and nothing happens. There are no files with the updates in HDFS:

(screenshot)

I would be grateful for any help. Is it possible to achieve the goal I mentioned (automatically updating a file in HDFS after the file in the VM is updated), and what is wrong with my Flume configuration?

Thank you!

If possible, the destination file in HDFS should be updated every time the source file (/usr/AUX/output.txt) changes

Well, the thing is that HDFS files are not meant to be "updated"; HDFS is a filesystem optimized for appends. The recommended pattern is therefore to create a new file each time. Almost all Hadoop processing engines can read entire directories, so this should not be a problem.

As far as Flume is concerned, you should use the Spooling Directory source, not the Exec source with cat or tail -f. The Flume agent is not designed to read "file updates", only "newly seen" files, which it then marks as complete and moves/ignores later on.

Therefore, you would want something like this, which generates a timestamped file every time your process runs. That is enough for Flume to consider the file new and read/process it.

some_process >> /flume_watcher/output_$(date +%s%3N).txt

See the Spooling Directory source documentation, and why the Exec source is discouraged (the red warning box).
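
For illustration, a spooling-directory variant of the agent could look roughly like the sketch below; the watched directory /flume_watcher is just a placeholder, and the channel and sink settings are reused from the configuration in the question:

a1.sources = src1
a1.sinks = sink1
a1.channels = memoryChannel

# Spooling Directory source: picks up each new, complete file dropped into /flume_watcher
a1.sources.src1.type = spooldir
a1.sources.src1.spoolDir = /flume_watcher
a1.sources.src1.fileHeader = false
a1.sources.src1.channels = memoryChannel

a1.channels.memoryChannel.type = memory
a1.channels.memoryChannel.capacity = 1000
a1.channels.memoryChannel.transactionCapacity = 100

# HDFS sink (same as in the question, using the documented property names channel and fileType)
a1.sinks.sink1.type = hdfs
a1.sinks.sink1.channel = memoryChannel
a1.sinks.sink1.hdfs.fileType = DataStream
a1.sinks.sink1.hdfs.writeFormat = Text
a1.sinks.sink1.hdfs.path = hdfs://sandbox.hortonworks.com:8020/user/tutorial/

Each file dropped into /flume_watcher is ingested once, shipped to HDFS, and then renamed with a .COMPLETED suffix (the source's default) so it won't be re-read.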


Additional note: HDP has deprecated Flume and recommends Hortonworks DataFlow (Apache NiFi) instead. In other words, in the HDP 3.0 Sandbox (if there is one), you wouldn't have Flume, so don't spend too much of your time on it.

Try your original configuration file with the following modification:

a1.sinks.sink1.channel = memoryChannel

Note that you had an extra 's': according to the Flume documentation, the correct property for a sink is channel (singular). I think an Exec source with an HDFS sink should work with that change.
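
For clarity, the sink section with that fix applied would read as follows (the rest of the original flume.conf stays the same). As a side note, the Flume documentation spells the file type property hdfs.fileType with a capital T, so it is worth matching that casing as well:

a1.sinks.sink1.type = hdfs
# "channel" (singular) is the documented property for wiring a sink to its channel
a1.sinks.sink1.channel = memoryChannel
a1.sinks.sink1.hdfs.fileType = DataStream
a1.sinks.sink1.hdfs.writeFormat = Text
a1.sinks.sink1.hdfs.path = hdfs://sandbox.hortonworks.com:8020/user/tutorial/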

You may also want to fix the warning message: JAVA_HOME is not set.
