
Flume: Unexpected exception from downstream. java.io.IOException: Connection reset by peer

When trying to send multiple logs to a single port, I get an exception. Does anyone know whether something is wrong with my configuration, or should this be raised as a bug? I also tried configuring multiple ports and still hit the same exception. I have been stuck on this for a week, so any help would be greatly appreciated.

[WARN -   org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.exceptionCaught(NettyServer.java:201)    ] Unexpected exception from downstream.
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:225)
at sun.nio.ch.IOUtil.read(IOUtil.java:193)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:66)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:364)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:238)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)

Collector configuration

hdfs-agent.sources = avro-collect
hdfs-agent.sinks = hdfs-write
hdfs-agent.channels = fileChannel

hdfs-agent.sources.avro-collect.type = avro
hdfs-agent.sources.avro-collect.bind = <<System IP>>
hdfs-agent.sources.avro-collect.port = 41414
hdfs-agent.sources.avro-collect.channels = fileChannel

hdfs-agent.sinks.hdfs-write.type = hdfs
hdfs-agent.sinks.hdfs-write.hdfs.path = hdfs://hadoop:54310/flume/%{host}/%Y%m%d/%{logFileType}
hdfs-agent.sinks.hdfs-write.hdfs.rollSize = 209715200
hdfs-agent.sinks.hdfs-write.hdfs.rollCount = 6000
hdfs-agent.sinks.hdfs-write.hdfs.fileType = DataStream
hdfs-agent.sinks.hdfs-write.hdfs.writeFormat = Text
hdfs-agent.sinks.hdfs-write.hdfs.filePrefix = %{host}
hdfs-agent.sinks.hdfs-write.hdfs.maxOpenFiles = 100000
hdfs-agent.sinks.hdfs-write.hdfs.batchSize = 5000
hdfs-agent.sinks.hdfs-write.hdfs.rollInterval = 75
hdfs-agent.sinks.hdfs-write.hdfs.callTimeout = 5000000
hdfs-agent.sinks.hdfs-write.channel = fileChannel

hdfs-agent.channels.fileChannel.type = file
hdfs-agent.channels.fileChannel.dataDirs = /u01/Collector/flume_channel/dataDir13
hdfs-agent.channels.fileChannel.checkpointDir = /u01/Collector/flume_channel/checkpointDir13
hdfs-agent.channels.fileChannel.transactionCapacity = 50000
hdfs-agent.channels.fileChannel.capacity = 9000000
hdfs-agent.channels.fileChannel.write-timeout = 250000

Sender configuration

app-agent.sources = tail tailapache
app-agent.channels = fileChannel
app-agent.sinks = avro-forward-sink avro-forward-sink-apache

app-agent.sources.tail.type = exec
app-agent.sources.tail.command = tail -f /server/default/log/server.log
app-agent.sources.tail.channels = fileChannel

app-agent.sources.tailapache.type = exec
app-agent.sources.tailapache.command = tail -f /logs/access_log
app-agent.sources.tailapache.channels = fileChannel

app-agent.sources.tail.interceptors = ts st stt
app-agent.sources.tail.interceptors.ts.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
app-agent.sources.tail.interceptors.st.type = static
app-agent.sources.tail.interceptors.st.key = logFileType
app-agent.sources.tail.interceptors.st.value = jboss
app-agent.sources.tail.interceptors.stt.type = static
app-agent.sources.tail.interceptors.stt.key = host
app-agent.sources.tail.interceptors.stt.value = Mart

app-agent.sources.tailapache.interceptors = ts1 i1 st1
app-agent.sources.tailapache.interceptors.ts1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
app-agent.sources.tailapache.interceptors.i1.type = static
app-agent.sources.tailapache.interceptors.i1.key = logFileType
app-agent.sources.tailapache.interceptors.i1.value = apache
app-agent.sources.tailapache.interceptors.st1.type = static
app-agent.sources.tailapache.interceptors.st1.key = host
app-agent.sources.tailapache.interceptors.st1.value = Mart

app-agent.sinks.avro-forward-sink.type = avro
app-agent.sinks.avro-forward-sink.hostname = <<Host IP>>
app-agent.sinks.avro-forward-sink.port = 41414
app-agent.sinks.avro-forward-sink.channel = fileChannel

app-agent.sinks.avro-forward-sink-apache.type = avro
app-agent.sinks.avro-forward-sink-apache.hostname = <<Host IP>>
app-agent.sinks.avro-forward-sink-apache.port = 41414
app-agent.sinks.avro-forward-sink-apache.channel = fileChannel

app-agent.channels.fileChannel.type = file
app-agent.channels.fileChannel.dataDirs = /usr/local/lib/flume-ng/flume_channel/dataDir13
app-agent.channels.fileChannel.checkpointDir = /usr/local/lib/flume-ng/flume_channel/checkpointDir13
app-agent.channels.fileChannel.transactionCapacity = 50000
app-agent.channels.fileChannel.capacity = 9000000
app-agent.channels.fileChannel.write-timeout = 250000
app-agent.channels.fileChannel.keep-alive = 600
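
One way to narrow this down is to take the exec sources and interceptors out of the picture and push a single test event at the collector's avro source using Flume's client SDK (flume-ng-sdk). The following is only a minimal sketch, assuming that jar is on the classpath; the class name, the default host placeholder, and the payload are invented for illustration.

import java.nio.charset.StandardCharsets;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class AvroSourceSmokeTest {
    public static void main(String[] args) throws EventDeliveryException {
        // Collector address: first argument, or a placeholder default (replace with the <<System IP>> above).
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        int port = 41414; // matches hdfs-agent.sources.avro-collect.port

        // The default client speaks the same Avro RPC protocol as the collector's avro source.
        RpcClient client = RpcClientFactory.getDefaultInstance(host, port);
        try {
            Event event = EventBuilder.withBody("smoke-test event", StandardCharsets.UTF_8);
            client.append(event); // throws EventDeliveryException if delivery fails or times out
            System.out.println("Delivered one event to " + host + ":" + port);
        } finally {
            client.close();
        }
    }
}

If this standalone sender also fails, or the collector logs the same reset for it, the problem most likely sits between the sender host and port 41414 (network, firewall, or the source itself) rather than in the downstream HDFS sink.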

Starting from here: When is "java.io.IOException: Connection reset by peer" thrown?

BalusC's answer:

The other side has abruptly aborted the connection in the midst of a transaction. That can have many causes which are not controllable from the server side. For example, the end user decided to shut down the client or change the server abruptly while still interacting with your server, or the client program crashed, or the end user's internet connection went down, or the end user's machine crashed, and so on.
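
For reference, this exception can be reproduced outside Flume with plain sockets: a client that writes a few bytes and then aborts the connection makes the server's blocking read fail in exactly this way. The sketch below only illustrates that mechanism (enabling SO_LINGER with a zero timeout forces a TCP RST on close); the class name is made up, and the exact message varies between "Connection reset" and "Connection reset by peer" depending on JVM and OS.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ConnectionResetDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // stands in for the Avro source's listener
        int port = server.getLocalPort();

        // "Server": accept one connection and keep reading until the stream ends or the read fails.
        Thread reader = new Thread(() -> {
            try (Socket s = server.accept(); InputStream in = s.getInputStream()) {
                byte[] buf = new byte[1024];
                while (in.read(buf) != -1) {
                    // drain whatever the peer sends
                }
            } catch (IOException e) {
                // Typically "Connection reset" / "Connection reset by peer", depending on JVM and OS.
                System.out.println("Server-side read failed: " + e.getMessage());
            }
        });
        reader.start();

        // "Client": write a little data, then abort the connection. SO_LINGER with a
        // zero timeout makes close() send a TCP RST instead of an orderly FIN.
        Socket client = new Socket("127.0.0.1", port);
        OutputStream out = client.getOutputStream();
        out.write("partial payload".getBytes(StandardCharsets.UTF_8));
        out.flush();
        client.setSoLinger(true, 0);
        client.close();

        reader.join();
        server.close();
    }
}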
