
Hadoop on Google Compute Engine

I am trying to set up a Hadoop cluster on Google Compute Engine through the "Launch click-to-deploy software" feature. I created 1 master and 1 slave node and tried to start the cluster using the start-all.sh script from the master node, but I got the error "Permission denied (publickey)".

I have generated public and private keys on both the slave and master nodes.

Currently I am logged into the master with my own username. Is it mandatory to log into the master as the "hadoop" user? If so, what is the password for that user ID?

Please let me know how to overcome this problem.

The deployment creates a user hadoop which owns Hadoop-specific SSH keys that were generated dynamically at deployment time; since start-all.sh uses SSH under the hood, this means you must do the following:

sudo su hadoop                                # switch to the user that owns the deployment's SSH keys
/home/hadoop/hadoop-install/bin/start-all.sh  # start the Hadoop daemons as that user

Otherwise, your "normal" username doesn't have SSH keys properly set up, so you won't be able to launch the Hadoop daemons, as you saw.
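
If you want to double-check that the keys are wired up correctly, a quick sanity test is to SSH from the master to a worker as the hadoop user (the worker name hadoop-w-0 below is an assumption; substitute your actual instance name):

sudo su hadoop
ssh hadoop-w-0 'hostname'   # should print the worker's hostname with no password prompt
ssh localhost 'echo ok'     # the master must also accept its own key for the local daemons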

Another thing to note is that the deployment should have already started all the Hadoop daemons automatically, so you shouldn't need to run start-all.sh manually unless you're restarting the daemons after some manual configuration updates. If the daemons weren't running after the deployment ran, you may have encountered some unexpected error during initialization.
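
To check whether the daemons are already up, you can list the running Java processes as the hadoop user (jps ships with the JDK; the daemon names below assume a Hadoop 1.x deployment, which is what start-all.sh suggests):

sudo su hadoop -c jps   # expect e.g. NameNode/JobTracker on the master, DataNode/TaskTracker on a worker

If nothing is listed, the logs under /home/hadoop/hadoop-install/logs/ (assuming the standard layout relative to the install path above) are the first place to look for the initialization error.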
