
Locked myself out of SSH with UFW in EC2 AWS

I have an EC2 instance running Ubuntu. I ran sudo ufw enable and then allowed only the MongoDB port:

sudo ufw allow 27017

When the SSH connection dropped, I couldn't reconnect.
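For context: ufw enable sets the default policy for incoming traffic to deny, so with only 27017 allowed, port 22 is blocked and new SSH connections are refused. The ordering that avoids the lockout (a sketch, using Ubuntu's stock OpenSSH ufw application profile) is:

sudo ufw allow OpenSSH    # or: sudo ufw allow 22/tcp
sudo ufw allow 27017
sudo ufw enable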

# Update

The easiest way is to update the instance's user data:

  • Stop your instance

  • Right-click (Windows) or Ctrl+click (Mac) the instance to open the context menu, then go to Instance Settings -> Edit User Data, or select the instance and go to Actions -> Instance Settings -> Edit User Data

    If you're still on the old AWS console, select the instance and go to Actions -> Instance Settings -> View/Change User Data

Then paste this:

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
ufw disable
iptables -L
iptables -F
--//
  • Once added, start the instance and SSH should work. The user data disables UFW if it is enabled and also flushes any iptables rules blocking SSH access. The cloud-config part ([scripts-user, always]) is what makes the shell script run on this boot; by default, user data scripts run only on an instance's first boot.

Source here
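If you'd rather script this than click through the console, the same user-data swap can be done with the AWS CLI. A sketch, where the instance ID is a placeholder and userdata.txt holds the multipart content above; note that with AWS CLI v2's default cli_binary_format the value must already be base64-encoded:

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
base64 -w0 userdata.txt > userdata.b64    # encode without line wrapping
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --attribute userData --value file://userdata.b64
aws ec2 start-instances --instance-ids i-0123456789abcdef0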

# Old Answer

Detach and fix the volume of the problem instance using another instance

  • Launch a new instance (recovery instance).

  • Stop the original instance (DO NOT TERMINATE)

  • Detach the volume (problem volume) from the original instance

  • Attach it to the recovery instance as /dev/sdf.

  • Login to the recovery instance via ssh/putty

  • Run sudo lsblk to display the attached volumes and confirm the name of the problem volume. It usually begins with /dev/xvdf; mine is /dev/xvdf1.

  • Mount problem volume.

     $ sudo mount /dev/xvdf1 /mnt
     $ cd /mnt/etc/ufw
  • Open ufw configuration file

     $ sudo vim ufw.conf
  • Press i to edit the file.

  • Change ENABLED=yes to ENABLED=no

  • Press Esc, then type :wq to save the file and quit.

  • Display the contents of the ufw config file using the command below and ensure that ENABLED=yes has been changed to ENABLED=no (a non-interactive alternative using sed is sketched below)

     $ sudo cat ufw.conf
  • Unmount volume

     $ cd ~
     $ sudo umount /mnt
  • Detach problem volume from recovery instance and re-attach it to the original instance as /dev/sda1.

  • Start the original instance and you should be able to log back in.

Source: here
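If you want to skip the interactive vim editing, the same change can be made with a single sed command once the volume is mounted at /mnt as above:

sudo sed -i 's/^ENABLED=yes/ENABLED=no/' /mnt/etc/ufw/ufw.conf
grep ENABLED /mnt/etc/ufw/ufw.conf    # should now print ENABLED=no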

I had the same problem and found that these steps work:

1- Stop your instance

2- Go to Instance Settings -> View/Change User Data

UPDATE: PATH ON NEW AWS CONSOLE UI

Right-click on your stopped instance -> Instance Settings -> Edit User Data

3- Paste this into the Modify user data as text field and save

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
ufw disable
iptables -L
iptables -F
--//

4- Start your instance

Hope it works for you!

  • Launch another EC2 server instance. The best way to accomplish this is to use EC2's “Launch More Like This” feature. This ensures that the OS type, security group, and other attributes are the same, saving a bit of setup time.
  • Stop the problem instance
  • Detach volume from problem instance
  • Attach volume to new instance

Note: Newer Linux kernels may rename your devices to /dev/xvdf through /dev/xvdp internally, even when the device name entered is /dev/sdf through /dev/sdp.

  • Mount the volume
cd ~
mkdir lnx1
sudo mount /dev/xvdf ./lnx1
  • Disable UFW
 cd lnx1/etc/ufw
 sudo vim ufw.conf

Now find ENABLED=yes and change it to ENABLED=no.

  • Detach volume

Be sure to unmount the volume first:

sudo umount ./lnx1/
  • Reattach the volume to /dev/sda1 on our problem instance
  • Boot problem instance
  • Reassign elastic IP address if necessary
  • Delete the temporary instance and its associated volume

Hola!! You are good to go.
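The detach/re-attach dance above can also be scripted with the AWS CLI. A sketch, where every instance and volume ID is a placeholder:

aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaa1    # problem instance
aws ec2 detach-volume --volume-id vol-0bbbbbbbbbbbbbbb1
aws ec2 attach-volume --volume-id vol-0bbbbbbbbbbbbbbb1 \
    --instance-id i-0ccccccccccccccc1 --device /dev/sdf      # temporary instance
# ...mount the volume, set ENABLED=no, unmount (see above), then reverse:
aws ec2 detach-volume --volume-id vol-0bbbbbbbbbbbbbbb1
aws ec2 attach-volume --volume-id vol-0bbbbbbbbbbbbbbb1 \
    --instance-id i-0aaaaaaaaaaaaaaa1 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0aaaaaaaaaaaaaaa1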

Other approaches didn't work for me. My EC2 instance is based on a Bitnami image. Attaching the volume to another instance didn't work because of marketplace locks.

So instead, stop the problem instance and paste this script in Instance Settings > View/Change User Data.

This approach does not require detaching the volume, so it's more straightforward than the other ones.


Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
ufw disable
iptables -L
iptables -F
--//

You must stop the instance before pasting this; afterwards, start your instance and you should be able to SSH in.

I know this is an old question, but I fixed mine by adding a command in View/Change User Data using bootcmd.

I first stopped my instance

Then I added this in User Data

#cloud-config
bootcmd:
 - cloud-init-per always fix_broken_ufw_1 sh -xc "/usr/sbin/service ufw stop >> /var/tmp/svc_$INSTANCE_ID 2>&1 || true"
 - cloud-init-per always fix_broken_ufw_2 sh -xc "/usr/sbin/ufw disable >> /var/tmp/ufw_$INSTANCE_ID 2>&1 || true"

Note: My instance is Ubuntu.
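After the instance boots, the log files named in the bootcmd entries above can be used to confirm the commands actually ran:

ls /var/tmp/svc_* /var/tmp/ufw_*
cat /var/tmp/ufw_*    # should show ufw being disabled
sudo ufw status       # should report: Status: inactive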

Here's a slightly more extended version of the user-data script:

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
set -x
USERNAME="ubuntu"
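# inspect the current directory, the user's home, authorized_keys, and /etc/ssh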
ls -Al
ls -Al /home
ls -Al /home/${USERNAME}
ls -Al /home/${USERNAME}/.ssh
sudo cat /home/${USERNAME}/.ssh/authorized_keys
ls -Al /etc/ssh
ls -ld /etc/ssh

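# comment out all active lines in hosts.deny and hosts.allow (TCP wrappers can
# block sshd), then dump the effective sshd_config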
sudo grep -vE '^$|^#' /etc/hosts.*
sudo sed -i -e 's/^\([^#].*\)/# \1/g' /etc/hosts.deny
sudo sed -i -e 's/^\([^#].*\)/# \1/g' /etc/hosts.allow
sudo grep -vE '^$|^#' /etc/hosts.*
sed '/^$\|^#/d' /etc/ssh/sshd_config

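# reset ownership and permissions that sshd would otherwise reject (StrictModes)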
chown -v root:root /home
chmod -v 755 /home
chown -v ${USERNAME}:${USERNAME} /home/${USERNAME} -R
chmod -v 700 /home/${USERNAME}
chmod -v 700 /home/${USERNAME}/.ssh
chmod -v 600 /home/${USERNAME}/.ssh/authorized_keys

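# check recent auth failures, then disable ufw, flush iptables, and restart sshd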
sudo tail /var/log/auth.log
sudo ufw status numbered
sudo ufw disable
sudo iptables -F
sudo service iptables stop
sudo service sshd restart
sudo service sshd status -l
--//

After struggling for 2 days I found a few easy alternatives:

  • Use AWS Session Manager to connect without SSH (YouTube tutorial)
  • Use the EC2 Serial Console

Use either of these approaches to get into the machine; then you can fix the UFW configuration, SSH keys, etc. A sketch of the Session Manager route follows.
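For example, with the Session Manager plugin installed locally and the instance's IAM role allowing SSM (the instance ID is a placeholder), you can get a shell without touching port 22 and then repair the firewall:

aws ssm start-session --target i-0123456789abcdef0
# once inside:
sudo ufw allow OpenSSH    # re-open SSH before anything else
sudo ufw status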
