
HBase: Failed to store data (org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException)

I am unable to store data using the table.put(p) method, which throws this exception:

Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException

Please see the complete exception details below:

Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family ColumnFamily1 
 does not exist in region hbasesample2,,1440880732948.e63f5e1b82327208a862a98b302b9c85. in table 'hbasesample2', {NAME => 'ColumnFamily1 ', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ColumnFamily10', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ColumnFamily2', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ColumnFamily3', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ColumnFamily4', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ColumnFamily5', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ColumnFamily6', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', 
KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ColumnFamily7', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ColumnFamily8', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ColumnFamily9', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:659)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:615)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:1896)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31451)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
: 1 time, 
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:227)
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:207)
    at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1658)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
    at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1482)
    at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1095)
    at Hbase.Hbase_Auto.main(Hbase_Auto.java:93)

Here is the code snippet:

p.addColumn(Bytes.toBytes(colfamily),Bytes.toBytes(column),Bytes.toBytes(temp));
table.put(p);
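One detail worth noting: in both the exception message and the `describe` output, the family appears as `'ColumnFamily1 '` with a trailing space, so if the `colfamily` string passed to `addColumn` is `"ColumnFamily1"` without that space, the region server will not find the family. HBase compares family names byte-for-byte, which the following minimal sketch illustrates (plain Java, no HBase dependency; the literal names are taken from the exception above):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class FamilyNameCheck {
    public static void main(String[] args) {
        // Family name as it exists on the server (note the trailing space)
        byte[] onServer = "ColumnFamily1 ".getBytes(StandardCharsets.UTF_8);
        // Family name as the client code presumably passes it to Put.addColumn
        byte[] inClient = "ColumnFamily1".getBytes(StandardCharsets.UTF_8);

        // Byte-for-byte comparison, as HBase does internally: these differ
        System.out.println(Arrays.equals(onServer, inClient)); // false

        // Trimming whitespace before converting to bytes makes them match
        byte[] trimmed = "ColumnFamily1 ".trim().getBytes(StandardCharsets.UTF_8);
        System.out.println(Arrays.equals(trimmed, inClient)); // true
    }
}
```

If this is the cause, either recreate the family without the trailing space or pass the exact name `"ColumnFamily1 "` (including the space) to `Bytes.toBytes`.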

Could somebody who knows about this please help me out?

EDIT #1

I have pasted below the description of the table 'hbasesample2'; the column families are the same as the ones that appear in the exception. The table has 10 column families in total; for now I have pasted the descriptions of 4 of them.


hbase(main):003:0> describe 'hbasesample2'
Table hbasesample2 is ENABLED                                                                                                                                           
hbasesample2                                                                                                                                                            
COLUMN FAMILIES DESCRIPTION                                                                                                                                             
{NAME => 'ColumnFamily1 ', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
{NAME => 'ColumnFamily10', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
{NAME => 'ColumnFamily2', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
{NAME => 'ColumnFamily3', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}

In my case the fix was to make sure all regions are online: compare your region count against the number of region data files in HDFS. If they differ, you need to try to repair the HBase metadata.
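As a sketch of that check-and-repair procedure, on HBase 1.x the bundled hbck tool can report and attempt to fix such inconsistencies (this assumes the stock `hbase` launcher is on your PATH; always review the read-only report before running any `-fix*` option, since those modify cluster state):

```shell
# Read-only: report inconsistencies between hbase:meta, ZooKeeper, and HDFS
hbase hbck

# Attempt to repair region assignments and hbase:meta entries
hbase hbck -fixAssignments -fixMeta
```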

