Cannot connect to ec2 instance after rubber automatic reboot

I'm trying to deploy a Rails 3 app to EC2 using rubber for the first time. During the run of the command cap rubber:create_staging, the instance was asked to reboot and afterwards it won't accept a connection. This is how it looks:

 ** [out :: production.foo.com] Setting up grub2-common (1.99-21ubuntu3.1) ...
 ** [out :: production.foo.com] Setting up grub-pc-bin (1.99-21ubuntu3.1) ...
 ** [out :: production.foo.com] Setting up grub-pc (1.99-21ubuntu3.1) ...
 ** [out :: production.foo.com] Generating grub.cfg ...
 ** [out :: production.foo.com] /usr/sbin/grub-probe: warn:
 ** [out :: production.foo.com] 
 ** [out :: production.foo.com] disk does not exist, so falling back to partition device /dev/xvda1
 ** [out :: production.foo.com] .
 ** [out :: production.foo.com] Found linux image: /boot/vmlinuz-3.2.0-26-virtual
 ** [out :: production.foo.com] Found initrd image: /boot/initrd.img-3.2.0-26-virtual
 ** [out :: production.foo.com] /usr/sbin/grub-probe: warn:
 ** [out :: production.foo.com] 
 ** [out :: production.foo.com] disk does not exist, so falling back to partition device /dev/xvda1
 ** [out :: production.foo.com] .
 ** [out :: production.foo.com] /usr/sbin/grub-probe: warn:
 ** [out :: production.foo.com] 
 ** [out :: production.foo.com] disk does not exist, so falling back to partition device /dev/xvda1.
 ** [out :: production.foo.com] /usr/sbin/grub-probe: warn:
 ** [out :: production.foo.com] 
 ** [out :: production.foo.com] disk does not exist, so falling back to partition device /dev/xvda1
 ** [out :: production.foo.com] .
 ** [out :: production.foo.com] /usr/sbin/grub-probe: warn:
 ** [out :: production.foo.com] 
 ** [out :: production.foo.com] disk does not exist, so falling back to partition device /dev/xvda1
 ** [out :: production.foo.com] .
 ** [out :: production.foo.com] Found linux image: /boot/vmlinuz-3.2.0-23-virtual
 ** [out :: production.foo.com] Found initrd image: /boot/initrd.img-3.2.0-23-virtual
 ** [out :: production.foo.com] /usr/sbin/grub-probe: warn:
 ** [out :: production.foo.com] 
 ** [out :: production.foo.com] disk does not exist, so falling back to partition device /dev/xvda1
 ** [out :: production.foo.com] .
 ** [out :: production.foo.com] /usr/sbin/grub-probe: warn:
 ** [out :: production.foo.com] 
 ** [out :: production.foo.com] disk does not exist, so falling back to partition device /dev/xvda1
 ** [out :: production.foo.com] .
 ** [out :: production.foo.com] /usr/sbin/grub-probe: warn:
 ** [out :: production.foo.com] 
 ** [out :: production.foo.com] disk does not exist, so falling back to partition device /dev/xvda1
 ** [out :: production.foo.com] .
 ** [out :: production.foo.com] /usr/sbin/grub-probe: warn:
 ** [out :: production.foo.com] 
 ** [out :: production.foo.com] disk does not exist, so falling back to partition device /dev/xvda1
 ** [out :: production.foo.com] .
 ** [out :: production.foo.com] /usr/sbin/grub-probe: warn:
 ** [out :: production.foo.com] 
 ** [out :: production.foo.com] disk does not exist, so falling back to partition device /dev/xvda1
 ** [out :: production.foo.com] .
 ** [out :: production.foo.com] Found memtest86+ image: /boot/memtest86+.bin
 ** [out :: production.foo.com] done
 ** [out :: production.foo.com] Processing triggers for libc-bin ...
 ** [out :: production.foo.com] ldconfig deferred processing now taking place
 ** [out :: production.foo.com] Processing triggers for resolvconf ...
 ** [out :: production.foo.com] resolvconf: Error: /etc/resolv.conf isn't a symlink, not doing anything.
 ** [out :: production.foo.com] Processing triggers for initramfs-tools ...
 ** [out :: production.foo.com] update-initramfs: Generating /boot/initrd.img-3.2.0-26-virtual
    command finished in 131854ms
  * executing "echo $(ls /var/run/reboot-required 2> /dev/null)"
    servers: ["production.foo.com"]
    [production.foo.com] executing command
    command finished in 460ms
  * executing "echo $(ls /mnt/your_app_name-production 2> /dev/null)"
    servers: ["production.foo.com"]
    [production.foo.com] executing command
    command finished in 473ms
 ** Updates require a reboot on hosts ["production.foo.com"]
 ** Rebooting ...
  * executing "sudo -p 'sudo password: ' reboot"
    servers: ["production.foo.com"]
    [production.foo.com] executing command
    command finished in 479ms
  * executing `rubber:_direct_connection_production.foo.com_887'
  * executing "echo"
    servers: ["production.foo.com"]
 ** Failed to connect to production.foo.com, retrying

The problem is that when I run rubber:create (or any other rubber command), after the instance is created and initialized and the /etc/hosts file is written, I receive a "connection failed" error and everything stops there.

If I ssh to the address written in the hosts file, I can connect to the instance perfectly, so I don't understand where the problem lies…
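One way to rule out a networking problem while capistrano keeps retrying is to probe port 22 yourself. The helper below is a sketch (not rubber's actual code) of the kind of connectivity check rubber performs after a reboot; `ssh_port_open?` and `wait_for_ssh` are hypothetical names:

```ruby
require 'socket'
require 'timeout'

# Returns true if a TCP connection to host:port succeeds within timeout_secs.
def ssh_port_open?(host, port = 22, timeout_secs = 5)
  Timeout.timeout(timeout_secs) do
    TCPSocket.new(host, port).close
  end
  true
rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, Errno::ENETUNREACH,
       SocketError, Timeout::Error
  false
end

# Poll until the instance accepts connections again, or give up.
def wait_for_ssh(host, attempts: 30, delay: 10)
  attempts.times do
    return true if ssh_port_open?(host)
    sleep delay
  end
  false
end
```

If this reports the port open while rubber still fails to connect, the problem is more likely stale host keys or DNS/hosts-file resolution on the deploying machine than the instance itself.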

Make sure the EBS volume /dev/xvda1 is attached to the EC2 instance.

Go to EC2 -> Volumes and take a look at the EBS volumes you have. If you see that a volume is in the "available" state, try to attach it to the EC2 instance and reboot the instance.
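The same check can be done from the AWS CLI instead of the console. This is only a sketch; the volume and instance IDs below are placeholders you would replace with your own:

```shell
# List EBS volumes with their state and current attachment (if any)
aws ec2 describe-volumes \
  --query 'Volumes[].{ID:VolumeId,State:State,Attached:Attachments[0].InstanceId}'

# If the root volume shows as "available" (i.e. detached), attach it
# to the instance as /dev/xvda1, then reboot the instance.
# vol-0123456789abcdef0 and i-0123456789abcdef0 are placeholder IDs.
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/xvda1

aws ec2 reboot-instances --instance-ids i-0123456789abcdef0
```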
