
AWS EC2 instance store disappeared after rebooting

I launched an EC2 instance (i3en.xlarge) from one of my AMIs and mounted two EBS volumes, one of which is the root device. After an accidental reboot of the instance, its instance store DISAPPEARED. I can't find it with lsblk or df -Th.

I understand that data in the instance store will be lost after an accidental reboot. However, the device itself has totally DISAPPEARED.

ubuntu@ip-10-0-0-10:~$ lsblk -p
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
/dev/loop0         7:0    0 28.1M  1 loop /snap/amazon-ssm-agent/2012
/dev/loop1         7:1    0   18M  1 loop /snap/amazon-ssm-agent/1566
/dev/loop2         7:2    0 97.1M  1 loop /snap/core/9993
/dev/loop3         7:3    0 96.6M  1 loop /snap/core/9804
/dev/nvme0n1     259:0    0  500G  0 disk /data-2
/dev/nvme1n1     259:1    0  500G  0 disk 
└─/dev/nvme1n1p1 259:2    0  500G  0 part /


ubuntu@ip-10-0-0-10:~$ df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs   16G     0   16G   0% /dev
tmpfs          tmpfs     3.1G  896K  3.1G   1% /run
/dev/nvme1n1p1 ext4      485G  358G  128G  74% /
tmpfs          tmpfs      16G   36K   16G   1% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs      16G     0   16G   0% /sys/fs/cgroup
/dev/loop0     squashfs   29M   29M     0 100% /snap/amazon-ssm-agent/2012
/dev/loop1     squashfs   18M   18M     0 100% /snap/amazon-ssm-agent/1566
/dev/loop2     squashfs   98M   98M     0 100% /snap/core/9993
/dev/loop3     squashfs   97M   97M     0 100% /snap/core/9804
/dev/nvme0n1   ext4      492G   62G  405G  14% /data-2
tmpfs          tmpfs     3.1G     0  3.1G   0% /run/user/111
tmpfs          tmpfs     3.1G     0  3.1G   0% /run/user/1001
tmpfs          tmpfs     3.1G     0  3.1G   0% /run/user/1000

I just tried to reproduce your situation with an i3en instance.

I mounted an instance store volume following the directions from Add instance store volumes to your EC2 instance - Amazon Elastic Compute Cloud:

sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir /data
sudo mount /dev/nvme1n1 /data

I then put a file in the /data directory and rebooted.

Guess what... it disappeared too!

But then I noticed that the volume was not mounted.

I ran this command again:

sudo mount /dev/nvme1n1 /data

and the volume reappeared.

If you want a mounted volume to remain mounted after a reboot, use fstab.

See: Making an Amazon EBS volume available for use on Linux - Amazon Elastic Compute Cloud
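
For example, a minimal sketch of an /etc/fstab entry for the instance store volume (the device name /dev/nvme1n1, the /data mount point and the xfs filesystem type are taken from the commands above and may differ on your instance; nofail prevents the boot from failing if the device is missing or not yet formatted):

/dev/nvme1n1  /data  xfs  defaults,nofail  0  2

Note that an fstab entry only helps across a plain reboot; after a stop/start the instance store comes back blank and has to be formatted again, which the next answer covers.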

(Not enough reputation to comment)

Note that John's answer will work if you simply reboot (as in the actual question). That is because the ephemeral disk will remain formatted but no longer be mounted.

However, if you stop and start the instance, you will get new hardware and the ephemeral disk will no longer be formatted. You will need to add a script to the User Data that formats and mounts the disk on every start.

See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html. By default, user data scripts run only at launch, but this one needs to run on every start, so also see https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/

Altogether you want to add something like this as User Data:

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
# Format the instance store as xfs; mkfs.xfs refuses to overwrite an existing
# filesystem, so after a plain reboot this fails harmlessly and the data survives
mkfs -t xfs /dev/nvme1n1
# Mount the instance store over /tmp and open up its permissions
mount /dev/nvme1n1 /tmp
chmod 777 /tmp

--//--
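
After the next stop/start, the same commands used earlier should show the instance store formatted and mounted on /tmp:

lsblk -p
df -Th /tmp

If it is not there, the cloud-init output log (typically /var/log/cloud-init-output.log on Ubuntu) shows whether the user data script actually ran.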
