
Writing data into Flume and then to HDFS

I am using Flume 1.5.0.1 and Hadoop 2.4.1, trying to send a string into Flume and save it to HDFS. The Flume configuration file is as follows:

agentMe.channels = memory-channel
agentMe.sources = my-source AvroSource
agentMe.sinks = log-sink hdfs-sink

agentMe.sources.AvroSource.channels = memory-channel
agentMe.sources.AvroSource.type = avro
agentMe.sources.AvroSource.bind = 0.0.0.0 # I tried the client IP as well
agentMe.sources.AvroSource.port = 41414

agentMe.channels.memory-channel.type = memory
agentMe.channels.memory-channel.capacity = 1000
agentMe.channels.memory-channel.transactionCapacity = 100

agentMe.sources.my-source.type = netcat
agentMe.sources.my-source.bind = 127.0.0.1 # If I use any other IP, such as that of the client machine the string comes from, I get an "unable to bind" exception.
agentMe.sources.my-source.port = 9876
agentMe.sources.my-source.channels = memory-channel


# Define a sink that outputs to hdfs.
agentMe.sinks.hdfs-sink.channel = memory-channel
agentMe.sinks.hdfs-sink.type = hdfs
agentMe.sinks.hdfs-sink.hdfs.path = hdfs://localhost:54310/user/netlog/flume.txt
agentMe.sinks.hdfs-sink.hdfs.fileType = DataStream
agentMe.sinks.hdfs-sink.hdfs.batchSize = 2
agentMe.sinks.hdfs-sink.hdfs.rollCount = 0
agentMe.sinks.hdfs-sink.hdfs.inUsePrefix = tcptest-
agentMe.sinks.hdfs-sink.hdfs.inUseSuffix = .txt
agentMe.sinks.hdfs-sink.hdfs.rollSize = 0
agentMe.sinks.hdfs-sink.hdfs.rollInterval = 3
agentMe.sinks.hdfs-sink.hdfs.writeFormat = Text
agentMe.sinks.hdfs-sink.hdfs.path = /user/name/%y-%m-%d/%H%M/%S

I have already asked the same question here.

client.sendDataToFlume("hello world")

I can see that the NettyAvroRpcClient is unable to connect to the server running Flume. But I am only sending a simple string, so what am I missing?
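For context, `sendDataToFlume` is presumably a small wrapper around Flume's client SDK; the original question does not show it. A minimal sketch of such a wrapper, assuming the `flume-ng-sdk` jar is on the classpath and the class/host names are illustrative:

```java
import java.nio.charset.StandardCharsets;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class FlumeStringClient {
    private final RpcClient client;

    // The hostname/port must match the *Avro* source in the agent config
    // (AvroSource.bind / AvroSource.port), not the netcat source.
    public FlumeStringClient(String hostname, int port) {
        this.client = RpcClientFactory.getDefaultInstance(hostname, port);
    }

    public void sendDataToFlume(String data) {
        Event event = EventBuilder.withBody(data, StandardCharsets.UTF_8);
        try {
            client.append(event); // blocks until the source acknowledges the event
        } catch (EventDeliveryException e) {
            client.close();       // clean up the broken connection
            throw new RuntimeException("Could not deliver event to Flume", e);
        }
    }

    public void close() {
        client.close();
    }
}
```

A "cannot connect" failure from NettyAvroRpcClient at construction or on `append` usually means the agent is not running, the Avro source is not configured, or the client is pointing at the wrong host/port.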

Expert advice

As I see it, you actually need to set up an Avro RPC source if you want to connect with a NettyAvroRpcClient. A sample configuration is as follows:

# Define an Avro source called AvroSource on SpoolAgent and tell it
# to bind to 0.0.0.0:41414. Connect it to channel MemChannel.
agentMe.sources.AvroSource.channels = MemChannel
agentMe.sources.AvroSource.type = avro
agentMe.sources.AvroSource.bind = 0.0.0.0
agentMe.sources.AvroSource.port = 41414

This creates an Avro RPC source on port 41414.

The configuration has to be correct, otherwise it will not solve the problem. So here is a configuration that reads data into Flume and then writes it to HDFS.

a1.sources = r1
a1.sinks =  k2
a1.channels = c1

a1.sources.r1.channels = c1
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 41414
a1.sources.r1.interceptors = a
a1.sources.r1.interceptors.a.type = org.apache.flume.interceptor.TimestampInterceptor$Builder

a1.sinks.k2.type = hdfs
a1.sinks.k2.channel = c1
a1.sinks.k2.hdfs.fileType = DataStream
a1.sinks.k2.hdfs.batchSize = 10
a1.sinks.k2.hdfs.rollCount = 10
a1.sinks.k2.hdfs.rollSize = 10
a1.sinks.k2.hdfs.rollInterval = 10
a1.sinks.k2.hdfs.writeFormat = Text
a1.sinks.k2.hdfs.path = /user/flume/%y-%m-%d/%H%M/

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

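With the configuration saved to a file (the file name `a1.conf` and the conf directory below are assumptions), the agent can be started with the standard `flume-ng` launcher; `--name` must match the agent prefix used in the file (`a1`):

```shell
# Start the agent defined above and log to the console for debugging
flume-ng agent \
  --conf /etc/flume/conf \
  --conf-file a1.conf \
  --name a1 \
  -Dflume.root.logger=INFO,console
```

Once the agent logs that the Avro source has started on 0.0.0.0:41414, the RPC client should be able to connect.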

This should help :)
