Using Pig to store data to HBase

I have a table with the schema:

TOP: {businessid: bytearray,best_review::review_weight: double,best_review::REVIEWS_DATA::businessid: bytearray,best_review::REVIEWS_DATA::reviewid: bytearray,best_review::REVIEWS_DATA::stars: bytearray}

I want to store it in an HBase table, and I am using the following command:

STORE TOP INTO 'hbase://master_data' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
    'master_info:best_review::review_weight, master_info:best_review::REVIEWS_DATA::reviewid, mastert_info:best_review::REVIEWS_DATA::stars ');

I get the following error:

java.lang.Exception: java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:479)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:442)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:422)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:269)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:1003)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:477)

Please let me know how to run the storage command. Thanks!

The argument to HBaseStorage takes a list of colFamily:column entries, space separated, something like:

raw = LOAD 'hbase://SampleTable' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('info:first_name info:last_name friends:* info:*');

Each column inside the column-list argument is space separated, and the arguments themselves are separated by commas.

I see you have :: in many places. Can you verify that?
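
For reference, here is a minimal sketch of how the STORE could look, assuming the 'master_data' table and the 'master_info' column family from your question already exist in HBase (the TOP_FLAT alias and the column names after the family are just illustrative). HBaseStorage uses the first field of each tuple as the HBase row key, so the column list should contain one space-separated entry for every remaining field:

-- Project the row key plus exactly the fields to be stored,
-- dropping the duplicate businessid nested inside REVIEWS_DATA.
TOP_FLAT = FOREACH TOP GENERATE
    $0 AS businessid,        -- becomes the HBase row key
    $1 AS review_weight,     -- best_review::review_weight
    $3 AS reviewid,          -- best_review::REVIEWS_DATA::reviewid
    $4 AS stars;             -- best_review::REVIEWS_DATA::stars

-- One column per non-key field, space separated, inside a single quoted argument.
STORE TOP_FLAT INTO 'hbase://master_data' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
    'master_info:review_weight master_info:reviewid master_info:stars');

With four fields in TOP_FLAT (row key plus three columns), the counts line up, which appears to be what the IndexOutOfBoundsException (Index: 3, Size: 3) is complaining about: TOP has five fields, but only three columns were listed.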
