
Error installing GitHub project Cascading/vagrant-cascading-hadoop-cluster

I started installing the vagrant-cascading-hadoop-cluster GitHub project, but some errors occurred and the installation could not finish.

When I run "vagrant up":

sina@linux:/media/sina/passport/vagrant-cascading-hadoop-cluster$ sudo vagrant up
Bringing machine 'hadoop1' up with 'virtualbox' provider...
Bringing machine 'hadoop2' up with 'virtualbox' provider...
Bringing machine 'hadoop3' up with 'virtualbox' provider...
Bringing machine 'master' up with 'virtualbox' provider...
==> hadoop1: Importing base box 'cascading-hadoop-base'...
==> hadoop1: Matching MAC address for NAT networking...
==> hadoop1: Setting the name of the VM: vagrant-cascading-hadoop-cluster_hadoop1_1409806559206_53275
==> hadoop1: Clearing any previously set network interfaces...
==> hadoop1: Preparing network interfaces based on configuration...
    hadoop1: Adapter 1: nat
    hadoop1: Adapter 2: hostonly
==> hadoop1: Forwarding ports...
    hadoop1: 22 => 2222 (adapter 1)
==> hadoop1: Running 'pre-boot' VM customizations...
==> hadoop1: Booting VM...
==> hadoop1: Waiting for machine to boot. This may take a few minutes...
    hadoop1: SSH address: 127.0.0.1:2222
    hadoop1: SSH username: vagrant
    hadoop1: SSH auth method: private key
==> hadoop1: Machine booted and ready!
==> hadoop1: Checking for guest additions in VM...
    hadoop1: The guest additions on this VM do not match the installed version of
    hadoop1: VirtualBox! In most cases this is fine, but in rare cases it can
    hadoop1: prevent things such as shared folders from working properly. If you see
    hadoop1: shared folder errors, please make sure the guest additions within the
    hadoop1: virtual machine match the version of VirtualBox you have installed on
    hadoop1: your host and reload your VM.
    hadoop1: 
    hadoop1: Guest Additions Version: 4.2.0
    hadoop1: VirtualBox Version: 4.3
==> hadoop1: Setting hostname...
==> hadoop1: Configuring and enabling network interfaces...
==> hadoop1: Mounting shared folders...
    hadoop1: /vagrant => /media/sina/passport/vagrant-cascading-hadoop-cluster
    hadoop1: /tmp/vagrant-puppet-1/manifests => /media/sina/passport/vagrant-cascading-hadoop-cluster/manifests
    hadoop1: /tmp/vagrant-puppet-1/modules-0 => /media/sina/passport/vagrant-cascading-hadoop-cluster/modules
==> hadoop1: Running provisioner: puppet...
==> hadoop1: Running Puppet with datanode.pp...
==> hadoop1: stdin: is not a tty
==> hadoop1: warning: Could not retrieve fact fqdn
==> hadoop1: notice: /Stage[main]/Base/File[/etc/motd]/ensure: defined content as '{md5}0c3e6f224eb6cf6fbff62de3067eaef9'
==> hadoop1: notice: /Stage[main]/Hbase/File[/srv/zookeeper]/ensure: created
==> hadoop1: notice: /Stage[main]/Base/File[/root/.ssh]/ensure: created
==> hadoop1: notice: /Stage[main]/Base/File[/root/.ssh/config]/ensure: defined content as '{md5}880efd788ff2d77bf3989a13a9e0344a'
==> hadoop1: notice: /Stage[main]/Base/File[/root/.ssh/id_rsa.pub]/ensure: defined content as '{md5}622c3becafba74b1f4f1267436cbd28b'
==> hadoop1: notice: /Stage[main]/Base/Ssh_authorized_key[ssh_key]/ensure: created
==> hadoop1: notice: /Stage[main]/Base/Exec[apt-get update]/returns: executed successfully
==> hadoop1: notice: /Stage[main]/Base/Package[openjdk-6-jdk]/ensure: ensure changed 'purged' to 'present'
==> hadoop1: notice: /Stage[main]/Base/File[/root/.ssh/id_rsa]/ensure: defined content as '{md5}a9e4aa776fe92555716b7963488838f6'
==> hadoop1: notice: /Stage[main]/Avahi/Package[avahi-daemon]/ensure: ensure changed 'purged' to 'present'
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/avahi/avahi-daemon.conf]/content: content changed '{md5}bd8d4eda789abe26c48c1f1f74d19551' to '{md5}e45468ec4a7369471c5101403f5b8f87'
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/avahi/avahi-daemon.conf]/mode: mode changed '0644' to '0600'
==> hadoop1: notice: /Stage[main]/Hbase/File[/etc/profile.d/hbase-path.sh]/ensure: defined content as '{md5}06cf529d2063f3060bfca646dd2d1a18'
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/avahi/hosts]/content: content changed '{md5}186990ae1edac95a88dbef6a36a07716' to '{md5}c90385145a2d6900d7d027bd87cd8ff0'
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/avahi/hosts]/mode: mode changed '0644' to '0600'
==> hadoop1: notice: /Stage[main]/Avahi/Service[avahi-daemon]: Triggered 'refresh' from 4 events
==> hadoop1: notice: /Stage[main]/Hadoop/File[/etc/profile.d/hadoop-path.sh]/ensure: defined content as '{md5}da4327f03f22df21251fece99b4fda68'
==> hadoop1: notice: /Stage[main]/Hadoop/File[/tmp/verifier]/ensure: defined content as '{md5}ee3850511912c0b432c98426be818253'
==> hadoop1: err: /Stage[main]/Hadoop/Exec[download_grrr]/returns: change from notrun to 0 failed: Command exceeded timeout at /tmp/vagrant-puppet-1/modules-0/hadoop/manifests/init.pp:37
==> hadoop1: notice: /Stage[main]/Hadoop/Exec[download_checksum]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/Exec[download_checksum]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/Exec[download_hadoop]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/Exec[download_hadoop]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hbase/Exec[download_hbase]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hbase/Exec[download_hbase]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/hosts]/content: content changed '{md5}28728fdc2cb16bf53da7ba1988a7e978' to '{md5}c90385145a2d6900d7d027bd87cd8ff0'
==> hadoop1: notice: /Stage[main]/Avahi/File[/etc/hosts]/mode: mode changed '0644' to '0600'
==> hadoop1: notice: /Stage[main]/Hadoop/Exec[verify_tarball]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/Exec[verify_tarball]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hbase/Exec[unpack_hbase]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hbase/Exec[unpack_hbase]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/hbase-site.xml]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/hbase-site.xml]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/Exec[unpack_hadoop]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/Exec[unpack_hadoop]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/regionservers]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/regionservers]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/slaves]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/slaves]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/hdfs-site.xml]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/hdfs-site.xml]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/core-site.xml]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/core-site.xml]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/yarn-site.xml]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/yarn-site.xml]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/hadoop-env.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/hadoop-env.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/stop-all.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/stop-all.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/Exec[hadoop_conf_permissions]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/Exec[hadoop_conf_permissions]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/mapred-site.xml]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/mapred-site.xml]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/masters]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/masters]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/yarn-env.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/etc/hadoop/yarn-env.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/start-all.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/start-all.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/hbase-env.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hbase/File[/opt/hbase-0.96.2-hadoop2/conf/hbase-env.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/prepare-cluster.sh]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/bin/prepare-cluster.sh]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/Group[hadoop]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/User[hdfs]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/File[/srv/hadoop/]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/File[/srv/hadoop/namenode]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/File[/srv/hadoop/datanode/]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/User[yarn]/ensure: created
==> hadoop1: notice: /Stage[main]/Hadoop/User[mapred]/ensure: created
==> hadoop1: 
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/mapred]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/mapred]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/yarn]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/yarn]: Skipping because of failed dependencies
==> hadoop1: notice: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/hadoop]: Dependency Exec[download_grrr] has failures: true
==> hadoop1: warning: /Stage[main]/Hadoop/File[/opt/hadoop-2.3.0/logs/hadoop]: Skipping because of failed dependencies
==> hadoop1: notice: Finished catalog run in 1838.19 seconds
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

It gives an error in the exec download_grrr:

==> hadoop1: err: /Stage[main]/Hadoop/Exec[download_grrr]/returns: change from notrun to 0 failed: Command exceeded timeout at /tmp/vagrant-puppet-1/modules-0/hadoop/manifests/init.pp:37

The exec command that the error refers to is in /modules/hadoop/manifests/init.pp:

  exec { "download_grrr":
    command => "wget --no-check-certificate http://raw.github.com/fs111/grrrr/master/grrr -O /tmp/grrr && chmod +x /tmp/grrr",
    path => $path,
    creates => "/tmp/grrr",
  }

I downloaded the grrr file myself and it worked, so downloading the file itself is not the problem.

The grrr file contains:

#!/bin/bash

# author: André Kelpe <efeshunderelf at googlemail.com>
# licencse: Apache v2

GRRR_WGET_OPTIONS="--user-agent grrr/1.0"

# find out our region and yes, you can get this as csv file. How cool is that?
GEOIP_REGION=$(wget -qO- freegeoip.net/csv/ | tr '[A-Z]' '[a-z]' | tr -d '"'| awk -F, '{print $2}')

# classic confusion between geoip db and apache mirror list
if [ $GEOIP_REGION == "gb" ]; then 
    GEOIP_REGION=uk
fi

MIRRORLIST_FILE_NAME=$(mktemp)

# download the latest mirror list from apache. we ignore the last
# sync times and hope for the best...
wget -qO- http://www.apache.org/mirrors/mirrors.list | grep -v '^$' \
    | grep http | grep -v ' 0$' | grep -v '^#' > $MIRRORLIST_FILE_NAME

# use US as the default region. apache does the same in their scripts...
REGION=us

# check if there is a mirror in our region
if grep -q " $GEOIP_REGION " $MIRRORLIST_FILE_NAME; then
    REGION=$GEOIP_REGION
fi

# finally download it all
wget $GRRR_WGET_OPTIONS $(grep " $REGION " $MIRRORLIST_FILE_NAME | shuf | head -1 | awk '{print $3}')/$*

retval=$?

# clean up after ourselves.
rm $MIRRORLIST_FILE_NAME

exit $retval

So, because several other exec commands require the download_grrr exec, they are skipped due to failed dependencies. How can I fix this error?

Usually a timeout means the file took too long to download from the server. You need to add timeout => 0 (or a sufficiently high value) to that command. Puppet's default exec timeout is 300 seconds.
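
For example, the exec quoted above from /modules/hadoop/manifests/init.pp could be adjusted along these lines (just a sketch; timeout => 0 disables the limit entirely, or use any value comfortably larger than the download normally takes):

  exec { "download_grrr":
    command => "wget --no-check-certificate http://raw.github.com/fs111/grrrr/master/grrr -O /tmp/grrr && chmod +x /tmp/grrr",
    path    => $path,
    creates => "/tmp/grrr",
    timeout => 0,    # Puppet's default is 300 seconds; 0 means no timeout
  }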

However, since it is only downloading a tiny shell script, there is probably a network problem with the URL it fetches. You may be getting rate-limited, or GitHub may have taken a long time to respond when the command was attempted.

The easiest way is to fix it manually by doing the following:

vagrant ssh
wget --no-check-certificate http://raw.github.com/fs111/grrrr/master/grrr -O /tmp/grrr && chmod +x /tmp/grrr
exit
vagrant provision
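
(Because the exec declares creates => "/tmp/grrr", Puppet will skip the download on the next provisioning run once the file is already in place. vagrant provision re-runs the provisioners on the running machines; if they have been halted in the meantime, vagrant reload --provision boots and provisions them again.)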

I just cloned the repo and ran vagrant up, and it worked fine for me. It was probably a temporary network glitch.
