
Flume: Unexpected exception from downstream. java.io.IOException: Connection reset by peer

I'm getting the exception below when I try to send multiple logs to a single port. Does anyone know whether this is a problem with my configuration, or whether I should raise it as a bug? I tried configuring multiple ports as well and still got the same exception. Any help would be greatly appreciated, as I've been stuck on this for a week.

[WARN -   org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.exceptionCaught(NettyServer.java:201)    ] Unexpected exception from downstream.
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:225)
at sun.nio.ch.IOUtil.read(IOUtil.java:193)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:66)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:364)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:238)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)

Collector Configuration

hdfs-agent.sources = avro-collect
hdfs-agent.sinks = hdfs-write
hdfs-agent.channels = fileChannel

hdfs-agent.sources.avro-collect.type = avro
hdfs-agent.sources.avro-collect.bind = <<System IP>>
hdfs-agent.sources.avro-collect.port = 41414
hdfs-agent.sources.avro-collect.channels = fileChannel

hdfs-agent.sinks.hdfs-write.type = hdfs
hdfs-agent.sinks.hdfs-write.hdfs.path = hdfs://hadoop:54310/flume/%{host}/%Y%m%d/%{logFileType}
hdfs-agent.sinks.hdfs-write.hdfs.rollSize = 209715200
hdfs-agent.sinks.hdfs-write.hdfs.rollCount = 6000
hdfs-agent.sinks.hdfs-write.hdfs.fileType = DataStream
hdfs-agent.sinks.hdfs-write.hdfs.writeFormat = Text
hdfs-agent.sinks.hdfs-write.hdfs.filePrefix = %{host}
hdfs-agent.sinks.hdfs-write.hdfs.maxOpenFiles = 100000
hdfs-agent.sinks.hdfs-write.hdfs.batchSize = 5000
hdfs-agent.sinks.hdfs-write.hdfs.rollInterval = 75
hdfs-agent.sinks.hdfs-write.hdfs.callTimeout = 5000000
hdfs-agent.sinks.hdfs-write.channel = fileChannel

hdfs-agent.channels.fileChannel.type = file
hdfs-agent.channels.fileChannel.dataDirs = /u01/Collector/flume_channel/dataDir13
hdfs-agent.channels.fileChannel.checkpointDir = /u01/Collector/flume_channel/checkpointDir13
hdfs-agent.channels.fileChannel.transactionCapacity = 50000
hdfs-agent.channels.fileChannel.capacity = 9000000
hdfs-agent.channels.fileChannel.write-timeout = 250000
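Since several senders share this one Avro source port, it may be worth capping the Netty worker pool the source spawns for incoming connections. This is a hedged sketch only: `threads` is a documented Flume 1.x Avro source property, but the value here is illustrative, not something the question confirms.

```properties
# Limit the number of worker threads handling incoming Avro connections
# (unbounded by default in Flume 1.x).
hdfs-agent.sources.avro-collect.threads = 8
```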

Sender Configuration

app-agent.sources = tail tailapache
app-agent.channels = fileChannel
app-agent.sinks = avro-forward-sink avro-forward-sink-apache

app-agent.sources.tail.type = exec
app-agent.sources.tail.command = tail -f /server/default/log/server.log
app-agent.sources.tail.channels = fileChannel

app-agent.sources.tailapache.type = exec
app-agent.sources.tailapache.command = tail -f /logs/access_log
app-agent.sources.tailapache.channels = fileChannel

app-agent.sources.tail.interceptors = ts st stt
app-agent.sources.tail.interceptors.ts.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
app-agent.sources.tail.interceptors.st.type = static
app-agent.sources.tail.interceptors.st.key = logFileType
app-agent.sources.tail.interceptors.st.value = jboss
app-agent.sources.tail.interceptors.stt.type = static
app-agent.sources.tail.interceptors.stt.key = host
app-agent.sources.tail.interceptors.stt.value = Mart

app-agent.sources.tailapache.interceptors = ts1 i1 st1
app-agent.sources.tailapache.interceptors.ts1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
app-agent.sources.tailapache.interceptors.i1.type = static
app-agent.sources.tailapache.interceptors.i1.key = logFileType
app-agent.sources.tailapache.interceptors.i1.value = apache
app-agent.sources.tailapache.interceptors.st1.type = static
app-agent.sources.tailapache.interceptors.st1.key = host
app-agent.sources.tailapache.interceptors.st1.value = Mart

app-agent.sinks.avro-forward-sink.type = avro
app-agent.sinks.avro-forward-sink.hostname = <<Host IP>>
app-agent.sinks.avro-forward-sink.port = 41414
app-agent.sinks.avro-forward-sink.channel = fileChannel

app-agent.sinks.avro-forward-sink-apache.type = avro
app-agent.sinks.avro-forward-sink-apache.hostname = <<Host IP>>
app-agent.sinks.avro-forward-sink-apache.port = 41414
app-agent.sinks.avro-forward-sink-apache.channel = fileChannel

app-agent.channels.fileChannel.type = file
app-agent.channels.fileChannel.dataDirs = /usr/local/lib/flume-ng/flume_channel/dataDir13
app-agent.channels.fileChannel.checkpointDir = /usr/local/lib/flume-ng/flume_channel/checkpointDir13
app-agent.channels.fileChannel.transactionCapacity = 50000
app-agent.channels.fileChannel.capacity = 9000000
app-agent.channels.fileChannel.write-timeout = 250000
app-agent.channels.fileChannel.keep-alive = 600
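One thing worth ruling out: both Avro sinks here point at the same collector host and port, and their client-side timeouts are left at the Flume default of 20 seconds, while the collector's HDFS sink uses a very large `callTimeout`. If the collector stalls past 20 s, the sender's Avro client gives up and reconnects, which the collector's NettyServer then logs as "Connection reset by peer". A hedged sketch of raising the sink timeouts (`connect-timeout` and `request-timeout` are documented Flume 1.x Avro sink properties; the 60 s values are illustrative):

```properties
# Give the collector more time before the Avro client aborts the call
# (defaults are 20000 ms each).
app-agent.sinks.avro-forward-sink.connect-timeout = 60000
app-agent.sinks.avro-forward-sink.request-timeout = 60000
app-agent.sinks.avro-forward-sink-apache.connect-timeout = 60000
app-agent.sinks.avro-forward-sink-apache.request-timeout = 60000
```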

From here: When is "java.io.IOException: Connection reset by peer" thrown?

Answer from BalusC:

The other side has abruptly aborted the connection in the midst of a transaction. That can have many causes, none of which are controllable from the server side. E.g. the end user decided to shut down the client or switch servers abruptly while still interacting with your server, the client program crashed, the end user's internet connection went down, the end user's machine crashed, etc.
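The condition BalusC describes can be reproduced in isolation, which makes it easier to see that the collector-side warning is about the client's behavior, not the server's. Below is a minimal, self-contained sketch (the `ResetDemo` class and method names are ours, not Flume code): a client sets `SO_LINGER` to 0 so that closing the socket sends a TCP RST instead of the normal FIN handshake, and the server's next `read()` then throws the same `IOException: Connection reset` that NettyServer is logging. The exact exception message can vary by platform.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ResetDemo {
    // Returns the IOException message the server sees after the client
    // aborts the connection with an RST, or null if no exception occurred.
    static String readAfterReset() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", port)) {
                    // SO_LINGER with timeout 0 makes close() send an RST
                    // instead of the normal FIN/ACK shutdown sequence.
                    s.setSoLinger(true, 0);
                } catch (IOException ignored) {
                }
            });
            client.start();
            try (Socket accepted = server.accept()) {
                client.join();
                Thread.sleep(200); // give the RST time to arrive
                InputStream in = accepted.getInputStream();
                try {
                    in.read(); // reading a reset connection throws
                    return null;
                } catch (IOException e) {
                    return e.getMessage();
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("server saw: " + readAfterReset());
    }
}
```

In other words, when a sender agent dies, times out, or reconnects mid-request, the collector has no way to prevent this warning; the fix lives on the client side.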
