
Hadoop Key-Value store with remote deploy

My application is launched from a remote PC via spark-submit in yarn-cluster mode, with a Kerberos keytab and principal, following this guide: https://spark.apache.org/docs/latest/running-on-yarn.html . The advantage of this approach is that I can use my own version of Spark on any cluster.
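For reference, a minimal sketch of such a remote launch might look like the following; the application class, jar, principal, and keytab paths are placeholders and not taken from the question, while the flags themselves are the ones documented in the running-on-yarn guide.

    # Point Spark at the remote cluster by copying its *-site.xml files locally
    export HADOOP_CONF_DIR=/path/to/remote-cluster-conf

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --principal user@EXAMPLE.COM \
      --keytab /path/to/user.keytab \
      --class com.example.MyApp \
      my-app.jar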

Is it possible to automatically deploy Ignite/Hazelcast/Accumulo/Kudu or another NoSQL DB with random-access reads/writes into a Hadoop YARN cluster, without sftp/ssh, only by running a bash script with HADOOP_CONF_DIR/YARN_CONF_DIR configs?

Deploying Hazelcast on a YARN cluster is possible and easy; check out https://github.com/hazelcast/hazelcast-yarn
