While migrating HBase data, I encountered a java.lang.IllegalArgumentException: KeyValue size too large.
In the long term:
I need to increase the property hbase.client.keyvalue.maxsize
(from 1048576 to 10485760) in /etc/hbase/conf/hbase-site.xml,
but I can't change this file right now (the change needs validation).
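For reference, the long-term change would be a property block like the following in hbase-site.xml (the value is in bytes; 10485760 = 10 MB):

```xml
<!-- /etc/hbase/conf/hbase-site.xml -->
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <!-- raise the per-KeyValue limit from 1048576 (1 MB) to 10485760 (10 MB) -->
  <value>10485760</value>
</property>
```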
In the short term:
I managed to import the data using this command:
hbase org.apache.hadoop.hbase.mapreduce.Import \
-Dhbase.client.keyvalue.maxsize=10485760 \
myTable \
myBackupFile
Now I need to run a Spark job via spark-submit.
Which of these is the better way:
spark-submit \
--conf spark.hbase.client.keyvalue.maxsize=10485760
spark-submit \
--conf spark.executor.extraJavaOptions=-Dhbase.client.keyvalue.maxsize=10485760 \
--conf spark.driver.extraJavaOptions=-Dhbase.client.keyvalue.maxsize=10485760
If you can change your code, you should be able to set these properties programmatically. Something like this used to work for me in Java:
Configuration conf = HBaseConfiguration.create();
// Set client properties BEFORE creating the connection object below
// (Configuration.set takes String values, so SCAN_TIMEOUT must be a String):
conf.set("hbase.client.scanner.timeout.period", SCAN_TIMEOUT);
Connection conn = ConnectionFactory.createConnection(conf);
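If you also want the job to honor a value passed via spark.executor.extraJavaOptions / spark.driver.extraJavaOptions, one option is to read the -D system property in your code and fall back to a default before building the HBase Configuration. A minimal sketch (the helper class, method name, and fallback value are my own, not from HBase or Spark):

```java
// Hypothetical helper: resolve the max KeyValue size, preferring a
// -Dhbase.client.keyvalue.maxsize system property (e.g. set through
// spark.executor.extraJavaOptions) and falling back to 10 MB.
public class KeyValueSizeConfig {
    static final String PROP = "hbase.client.keyvalue.maxsize";

    static String resolveMaxSize() {
        // System.getProperty returns the -D value if present, else the default.
        return System.getProperty(PROP, "10485760");
    }

    public static void main(String[] args) {
        System.out.println(resolveMaxSize());
        // In the real job you would then do, before creating the connection:
        // conf.set(PROP, resolveMaxSize());
        // Connection conn = ConnectionFactory.createConnection(conf);
    }
}
```

This way the same jar works whether or not the flag is supplied on the spark-submit command line.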