
Hadoop Key-Value store with remote deploy

My application is launched from a remote PC via spark-submit in yarn-cluster mode, using a Kerberos keytab and principal, following this guide: https://spark.apache.org/docs/latest/running-on-yarn.html . The advantage of this approach is that I can run my own version of Spark on any cluster.
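
For reference, the submission looks roughly like this (the principal, keytab path, application class, and jar name are placeholders; HADOOP_CONF_DIR points at the client-side copy of the cluster configuration):

    export HADOOP_CONF_DIR=/etc/hadoop/conf        # or YARN_CONF_DIR
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --principal myuser@EXAMPLE.COM \
      --keytab /path/to/myuser.keytab \
      --class com.example.MyApp \
      my-app.jar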

Is it possible to automatically deploy Ignite/Hazelcast/Accumulo/Kudu or another NoSQL DB with random-access reads/writes onto a Hadoop YARN cluster without sftp/ssh, just by running a bash script with the HADOOP_CONF_DIR/YARN_CONF_DIR configs?

Deploying Hazelcast on a YARN cluster is possible and easy; take a look at https://github.com/hazelcast/hazelcast-yarn
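
Like spark-submit, such YARN integrations typically only need the client-side Hadoop configuration: YARN itself starts the application master and the Hazelcast members on the cluster nodes, so no sftp/ssh access is required. A rough sketch of the submission step, assuming a yarn jar-style client (the jar name and invocation are illustrative; check the hazelcast-yarn README for the exact command and its properties file):

    export HADOOP_CONF_DIR=/etc/hadoop/conf        # or YARN_CONF_DIR
    # client jar name is illustrative; see the hazelcast-yarn README for the
    # actual arguments (cluster size, memory per member, hazelcast.xml, etc.)
    yarn jar hazelcast-yarn-<version>.jar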
