
Unable to load the HFiles into HBase using mapreduce.LoadIncrementalHFiles

I want to insert the output of my MapReduce job into an HBase table using the HBase bulk loading API LoadIncrementalHFiles.doBulkLoad(new Path(), hTable).

I emit the KeyValue data type from my mapper and then use HFileOutputFormat with its default reducer to prepare the HFiles.
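
For context, a minimal sketch of what such a mapper can look like (the column family "cf", the qualifier "count", and the tab-separated input are placeholders, not my exact code):

public static class MyMap extends Mapper<LongWritable, Text, ImmutableBytesWritable, KeyValue> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // assume input lines like "word<TAB>count"
        String[] parts = value.toString().split("\t");
        byte[] rowKey = Bytes.toBytes(parts[0]);
        // one KeyValue per output cell: row, family, qualifier, value
        KeyValue kv = new KeyValue(rowKey, Bytes.toBytes("cf"),
                Bytes.toBytes("count"), Bytes.toBytes(parts[1]));
        context.write(new ImmutableBytesWritable(rowKey), kv);
    }
}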

When I run the MapReduce job, it completes without any errors and creates the output files; however, the final step of inserting the HFiles into HBase never happens. I get the error below after the MapReduce job completes:

13/09/08 03:39:51 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs://localhost:54310/user/xx.xx/output/_SUCCESS
13/09/08 03:39:51 WARN mapreduce.LoadIncrementalHFiles: Bulk load operation did not find any files to load in directory output/.  Does it contain files in subdirectories that correspond to column family names?

But I can see that the output directory contains:

1. _SUCCESS
2. _logs
3. _0/2aa96255f7f5446a8ea7f82aa2bd299e file (which contains my data)

I have no clue why the bulk loader is not picking up the files from the output directory.

Below is the code of my MapReduce driver class:

public static void main(String[] args) throws Exception{

    String inputFile = args[0];
    String tableName = args[1];
    String outFile = args[2];
    Path inputPath = new Path(inputFile);
    Path outPath = new Path(outFile);

    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    //set the configurations
    conf.set("mapred.job.tracker", "localhost:54311");

    //Input data to HTable using Map Reduce
    Job job = new Job(conf, "MapReduce - Word Frequency Count");
    job.setJarByClass(MapReduce.class);

    job.setInputFormatClass(TextInputFormat.class);

    FileInputFormat.addInputPath(job, inputPath);

    fs.delete(outPath, true);   // delete any existing output directory recursively
    FileOutputFormat.setOutputPath(job, outPath);

    job.setMapperClass(MapReduce.MyMap.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(KeyValue.class);

    HTable hTable = new HTable(conf, tableName.toUpperCase());

    // Auto configure partitioner and reducer
    HFileOutputFormat.configureIncrementalLoad(job, hTable);

    job.waitForCompletion(true);

    // Load generated HFiles into table
    LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
    loader.doBulkLoad(new Path(outFile), hTable);

}

How can I figure out what is going wrong here and preventing my data from being inserted into HBase?

Finally, I figured out why my HFiles were not getting loaded into HBase. Below are the details:

My CREATE TABLE DDL did not specify any column-family name, so my guess is that Phoenix created the default column family as "_0". I was able to see this column family in my HDFS /hbase directory.

However, when I used HBase's LoadIncrementalHFiles API to fetch the files from my output directory, it was not picking up the directory named after the column family ("_0") in my case. I debugged the LoadIncrementalHFiles code and found that it skips every directory in the output path whose name starts with "_" (for example "_logs").
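
To see this for yourself, here is a small sketch (assuming the same conf and outFile as in the driver above) that lists the bulk-load output directory and marks the entries a "_"-skipping loader would ignore:

FileSystem fs = FileSystem.get(conf);
for (FileStatus stat : fs.listStatus(new Path(outFile))) {
    String name = stat.getPath().getName();
    if (name.startsWith("_")) {
        // e.g. _SUCCESS, _logs, and Phoenix's default family "_0"
        System.out.println(name + " -> skipped by the bulk loader");
    } else {
        System.out.println(name + " -> treated as a column-family directory");
    }
}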

I retried the same thing, this time specifying a column family explicitly, and everything worked perfectly fine. I am able to query the data using Phoenix SQL.
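
For reference, a sketch of the kind of DDL that avoids the problem by naming the column family explicitly (the table name, column names, and connection string are illustrative; the statement is issued through the Phoenix JDBC driver):

Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
// "CF" becomes the HFile directory name instead of the default "_0"
conn.createStatement().execute(
        "CREATE TABLE IF NOT EXISTS WORD_COUNT (" +
        "  WORD VARCHAR NOT NULL PRIMARY KEY, " +
        "  CF.FREQUENCY UNSIGNED_LONG)");
conn.close();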
