
HBase LZO table scan causes RegionServer shutdown

I have a problem; some info follows:

Nodes: 3 nodes, but only 2 configured as RegionServers
OS: CentOS 6.3
Apache Hadoop 2.7.1
Apache HBase 0.98.12

My Hadoop and HBase support LZO compression, and Snappy compression was also set up successfully. I have one HBase table using LZO compression and another HBase table using Snappy compression. I inserted 50 records into the LZO table; the insert was no problem, but when I scan this table with the Java API, one of the RegionServers dies.
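
For reference, a minimal sketch of the kind of client code involved, written against the HBase 0.98 Java API. The table name lzo_test, family cf, and the row/value contents are made up here for illustration; they are not from the original setup:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class LzoTableScanDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Create a table whose single column family is LZO-compressed.
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("lzo_test"));
        HColumnDescriptor family = new HColumnDescriptor("cf");
        family.setCompressionType(Compression.Algorithm.LZO);
        desc.addFamily(family);
        if (!admin.tableExists("lzo_test")) {
            admin.createTable(desc);
        }
        admin.close();

        // Insert 50 rows, then scan them back.
        HTable table = new HTable(conf, "lzo_test");
        for (int i = 0; i < 50; i++) {
            Put put = new Put(Bytes.toBytes("row-" + i));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
            table.put(put);
        }
        table.flushCommits();

        ResultScanner scanner = table.getScanner(new Scan());
        for (Result r : scanner) {
            System.out.println(Bytes.toString(r.getRow()));
        }
        scanner.close();
        table.close();
    }
}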

I checked the HBase log, but there was no error or exception. However, when I checked the Hadoop log, I found this exception:

java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:849)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:804)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)


I don't know why this exception is thrown only when scanning the HBase table, because a MapReduce job that reads LZO files works normally. Thanks for your answer!

You are missing a return for the last line of your content. You have to use a condition like this to control the EOF:

while ((line = mycontent.readLine()) != null)
{
...
...
}
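
To make that concrete, here is a minimal, self-contained version of the loop, assuming mycontent is a java.io.BufferedReader over a local file (the variable name comes from the snippet above; the file name input.txt is only an example):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadUntilEof {
    public static void main(String[] args) throws IOException {
        // readLine() returns null at end of stream, so the loop stops cleanly at EOF.
        try (BufferedReader mycontent = new BufferedReader(new FileReader("input.txt"))) {
            String line;
            while ((line = mycontent.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}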

OK, I finally found the answer, and it is unbelievable. Through the HBase GC log I saw very long full GC pauses. My HBase heap size was the default 1 GB, so that is probably where the problem came from; after I increased it to a 4 GB heap, scanning the heavily compressed table works normally. So keep this in mind!
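
For anyone hitting the same thing: the heap is usually raised in conf/hbase-env.sh on each node and the RegionServers restarted afterwards. A minimal sketch, assuming the stock script where HBASE_HEAPSIZE is given in MB:

# conf/hbase-env.sh -- raise the HBase heap from the 1 GB default to 4 GB
export HBASE_HEAPSIZE=4096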
