
Set HBase properties for a Spark job using spark-submit

During an HBase data migration I encountered a java.lang.IllegalArgumentException: KeyValue size too large

In the long term:

I need to increase the property hbase.client.keyvalue.maxsize (from 1048576 to 10485760) in /etc/hbase/conf/hbase-site.xml, but I can't change this file right now (the change needs validation).
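For reference, the long-term fix would be an entry like this in /etc/hbase/conf/hbase-site.xml (a config fragment; the value is the 10 MB target mentioned above):

```xml
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <!-- raise the maximum client-side KeyValue size from the 1 MB default to 10 MB -->
  <value>10485760</value>
</property>
```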

In the short term:

I successfully imported the data using this command:

hbase org.apache.hadoop.hbase.mapreduce.Import \
  -Dhbase.client.keyvalue.maxsize=10485760 \
  myTable \
  myBackupFile

Now I need to run a Spark job using spark-submit.

Which is the better way:

  • Prefix the HBase properties with 'spark.' (I'm not sure whether this is possible or whether it works)
spark-submit \
  --conf spark.hbase.client.keyvalue.maxsize=10485760
  • Use 'spark.executor.extraJavaOptions' and 'spark.driver.extraJavaOptions' to pass the HBase properties explicitly as JVM system properties
spark-submit \
  --conf spark.executor.extraJavaOptions=-Dhbase.client.keyvalue.maxsize=10485760 \
  --conf spark.driver.extraJavaOptions=-Dhbase.client.keyvalue.maxsize=10485760

If you can change your code, you should be able to set these properties programmatically. I think something like this used to work for me in Java:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.*;
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.client.scanner.timeout.period", SCAN_TIMEOUT); // set BEFORE creating the Connection below; note conf.set takes a String value
Connection conn = ConnectionFactory.createConnection(conf);
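One caveat with the extraJavaOptions route: as far as I know, a plain -D flag only sets a JVM system property in the driver and executors. Unlike the MapReduce Import tool (where ToolRunner parses -D pairs into the job Configuration, which is why the hbase Import command above worked), the HBase client Configuration does not pick up arbitrary system properties automatically, so your Spark code would still have to copy the value over. A minimal, self-contained sketch of that hand-off (HBasePropDemo and the 1 MB fallback are illustrative; the actual HBase calls appear only as comments):

```java
public class HBasePropDemo {

    // Reads the value delivered via
    //   --conf spark.driver.extraJavaOptions=-Dhbase.client.keyvalue.maxsize=10485760
    // (and the executor equivalent), falling back to HBase's 1 MB default.
    static String resolveMaxKeyValueSize() {
        String fromJvm = System.getProperty("hbase.client.keyvalue.maxsize");
        return fromJvm != null ? fromJvm : "1048576";
    }

    public static void main(String[] args) {
        String maxSize = resolveMaxKeyValueSize();
        // With HBase on the classpath, copy it into the client Configuration
        // before creating the Connection:
        //   Configuration conf = HBaseConfiguration.create();
        //   conf.set("hbase.client.keyvalue.maxsize", maxSize);
        //   Connection conn = ConnectionFactory.createConnection(conf);
        System.out.println(maxSize);
    }
}
```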
