
Flume: org.apache.avro.ipc.NettyServer: Unexpected exception from downstream. java.nio.channels.ClosedChannelException

How can I solve this problem? When I configured the Flume server, it reported the errors below.

2014-10-20 22:24:01,480 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 => /ip:34001] OPEN
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 => /ip:34001] BOUND: /ip:34001
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /10.182.4.70:57063 => /ip:34001] CONNECTED: /ip:57063
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 :> /ip:34001] DISCONNECTED
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /ip:57063 :> /10.182.4.79:34001] UNBOUND
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: [id: 0x2fe09f1a, /10.182.4.70:57063 :> /10.182.4.79:34001] CLOSED
2014-10-20 22:24:01,481 INFO org.apache.avro.ipc.NettyServer: Connection to /10.182.4.70:57063 disconnected.
2014-10-20 22:24:01,481 WARN org.apache.avro.ipc.NettyServer: Unexpected exception from downstream.
java.nio.channels.ClosedChannelException
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:673)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:400)
        at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:120)
        at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:59)
        at org.jboss.netty.channel.Channels.write(Channels.java:733)
        at org.jboss.netty.channel.Channels.write(Channels.java:694)
        at org.jboss.netty.handler.codec.compression.ZlibEncoder.finishEncode(ZlibEncoder.java:380)
        at org.jboss.netty.handler.codec.compression.ZlibEncoder.handleDownstream(ZlibEncoder.java:316)
        at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:55)
        at org.jboss.netty.channel.Channels.close(Channels.java:821)

The flume.conf is as follows:

instance_35001.channels.channel1.checkpointDir=editlog/checkpoint
instance_35001.channels.channel1.dataDirs=editlog/data
instance_35001.channels.channel1.capacity=200000000
instance_35001.channels.channel1.transactionCapacity=1000000
instance_35001.channels.channel1.checkpointInterval=10000

instance_35001.sources=source1
instance_35001.sources.source1.type=avro
instance_35001.sources.source1.bind=0.0.0.0
instance_35001.sources.source1.port=34001
instance_35001.sources.source1.compression-type=deflate
instance_35001.sources.source1.channels=channel1

instance_35001.sources.source1.interceptors = inter1
instance_35001.sources.source1.interceptors.inter1.type = host
instance_35001.sources.source1.interceptors.inter1.hostHeader = servername

instance_35001.sinks=sink1

instance_35001.sinks.sink1.type=hdfs
instance_35001.sinks.sink1.hdfs.path=hdfs://address:5000/user/admin/%{appname}/%Y/%m/%d/
instance_35001.sinks.sink1.hdfs.filePrefix=%{appname}-%{hostname}-%{servername}.34001
instance_35001.sinks.sink1.hdfs.rollInterval=0
instance_35001.sinks.sink1.hdfs.rollCount=0
instance_35001.sinks.sink1.hdfs.rollSize=21521880492

The environment is CDH5, and the sink writes to HDFS. The log usually looks normal, but the sink is very slow. Please help. Thanks.

One thing I can see here is that your roll size is significantly larger than the channel capacity. So before the file is rolled, everything is stored in the channel, which fills up after a point and starts throwing errors.

instance_35001.channels.channel1.capacity=200000000

instance_35001.sinks.sink1.hdfs.rollSize=21521880492

Keep your roll size close to the block size you have set for HDFS. Also, the HDFS sink has a default batch size of 100; change it to a larger value and see how it behaves.
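Following that advice, the sink section could be adjusted roughly like this (a sketch only — the 128 MB roll size and the batch size of 1000 are illustrative values I am assuming, not tested against this setup; `hdfs.rollSize` and `hdfs.batchSize` are standard Flume HDFS sink properties):

```properties
# Roll at ~128 MB (134217728 bytes), matching the default HDFS block size,
# instead of ~21.5 GB which exceeds what the channel can buffer
instance_35001.sinks.sink1.hdfs.rollSize=134217728

# Raise the batch size from the default of 100 so each transaction
# writes more events per flush to HDFS (value is an assumption; tune it)
instance_35001.sinks.sink1.hdfs.batchSize=1000
```

With `rollInterval=0` and `rollCount=0` kept as in the original config, rolling is then driven by size alone.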

capacity is measured in number of events, while rollSize is measured in actual bytes, so it is difficult to correlate the two directly. However, you want your roll size to be close to your HDFS block size (128 MB by default).

rollSize = 21521880492 bytes → about 21.5 GB
