
Attaching and mounting existing EBS volume to EC2 instance filesystem issue

I had some unknown issue with my old EC2 instance and can no longer SSH into it. Therefore I'm attempting to create a new EBS volume from a snapshot of the old volume and mount it on a new instance. Here is exactly what I did:

  1. Created a new volume from a snapshot of the old one.
  2. Created a new EC2 instance and attached the volume to it as /dev/xvdf (or /dev/sdf).
  3. SSHed into the instance and attempted to mount the old volume with:

    $ sudo mkdir -m 000 /vol
    $ sudo mount /dev/xvdf /vol

And the output was:

mount: block device /dev/xvdf is write-protected, mounting read-only
mount: you must specify the filesystem type

I know I should specify the filesystem as ext4, but the volume contains a lot of important data, so I cannot afford to format it with $ sudo mkfs -t ext4 /dev/xvdf. If I try sudo mount /dev/xvdf /vol -t ext4 (no formatting) I get:

mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

And dmesg | tail gives me:

[ 1433.217915] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.222107] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.226127] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.260752] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.265563] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.270477] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.274549] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.277632] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.306549] ISOFS: Unable to identify CD-ROM format.
[ 2373.694570] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem

By the way, the 'mounting read-only' message also worries me, but I haven't looked into it yet since I can't mount the volume at all.

Thanks in advance!

The One Liner


🥇 Mount the partition (if the disk is partitioned):

sudo mount /dev/xvdf1 /vol -t ext4

Mount the disk (if not partitioned):

sudo mount /dev/xvdf /vol -t ext4

where:

  • /dev/xvdf is changed to the EBS Volume device being mounted
  • /vol is changed to the folder you want to mount to
  • ext4 is the filesystem type of the volume being mounted

Common Mistakes How-To:


✳️ Attached Devices List

Check your mount command for the correct EBS Volume device name and filesystem type. The following will list them all:

sudo lsblk --output NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT,UUID,LABEL

If your EBS Volume displays with an attached partition, mount the partition, not the disk.


✳️ If your volume isn't listed

If it doesn't show, you didn't attach your EBS Volume in the AWS web console.
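If you prefer the command line over the web console, attaching can also be done with the AWS CLI; this is only a sketch, and the volume ID, instance ID, and device name below are placeholders to replace with your own:

aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf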


✳️ Auto Remounting on Reboot

These devices become unmounted again if the EC2 Instance ever reboots.

A way to make them mount again upon startup is to add the volume to the server's /etc/fstab file.

🔥 Caution: 🔥
If you corrupt the /etc/fstab file, it will make your system unbootable. Read AWS's short article so you know how to check that you did it correctly.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html#ebs-mount-after-reboot

First:
With the lsblk command above, find your volume's UUID & FSTYPE.
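Alternatively, blkid will report the same information for a single device; a sketch, assuming the partition is /dev/xvdf1:

sudo blkid /dev/xvdf1
# prints something like:  /dev/xvdf1: UUID="e4a4b1df-cf4a-469b-af45-89beceea5df7" TYPE="ext4"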

Second:
Keep a copy of your original fstab file.

sudo cp /etc/fstab /etc/fstab.original

Third:
Add a line for the volume in sudo nano /etc/fstab.

The fields of fstab are tab-separated, and each line has the following fields:

<UUID>  <MOUNTPOINT>    <FSTYPE>    defaults,discard,nofail 0   0

Here's an example to help you; my own fstab reads as follows:

LABEL=cloudimg-rootfs   /   ext4    defaults,discard,nofail 0   0
UUID=e4a4b1df-cf4a-469b-af45-89beceea5df7   /var/www-data   ext4    defaults,discard,nofail 0   0

That's it, you're done. Check for errors in your work by running:

sudo mount --all --verbose

You will see something like this if things are 👍:

/                   : ignored
/var/www-data       : already mounted

I encountered this problem too after adding a new 16GB volume and attaching it to an existing instance. First of all, you need to know what disks you have present. Run:

  sudo fdisk -l 

You'll have output like that shown below, detailing information about your disks (volumes):

 Disk /dev/xvda: 12.9 GB, 12884901888 bytes
 255 heads, 63 sectors/track, 1566 cylinders, total 25165824 sectors
 Units = sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disk identifier: 0x00000000

 Device Boot      Start         End      Blocks   Id  System
 /dev/xvda1   *       16065    25157789    12570862+  83  Linux

 Disk /dev/xvdf: 17.2 GB, 17179869184 bytes
 255 heads, 63 sectors/track, 2088 cylinders, total 33554432 sectors
 Units = sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disk identifier: 0x00000000

 Disk /dev/xvdf doesn't contain a valid partition table

As you can see, the newly added disk /dev/xvdf is present. To make it available you need to create a filesystem on it and mount it to a mount point. You can achieve that with the following commands:

 sudo mkfs -t ext4 /dev/xvdf

Making a new filesystem clears everything in the volume, so only do this on a fresh volume without important data.
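If you're not sure whether a volume is truly empty, one way to check for an existing filesystem before running mkfs is the file command; a sketch (an output of just "data" means no filesystem was detected):

 sudo file -s /dev/xvdf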

Then mount it, perhaps in a directory under the /mnt folder:

 sudo mount /dev/xvdf /mnt/dir/

Confirm that you have mounted the volume to the instance by running:

  df -h

This is what you should have:

 Filesystem      Size  Used Avail Use% Mounted on
 udev            486M   12K  486M   1% /dev
 tmpfs           100M  400K   99M   1% /run
 /dev/xvda1       12G  5.5G  5.7G  50% /
 none            4.0K     0  4.0K   0% /sys/fs/cgroup
 none            5.0M     0  5.0M   0% /run/lock
 none            497M     0  497M   0% /run/shm
 none            100M     0  100M   0% /run/user
 /dev/xvdf        16G   44M   15G   1% /mnt/ebs

And that's it, the volume is attached to your existing instance and ready for use.

I noticed that for some reason the volume was located at /dev/xvdf1, not /dev/xvdf.

Using

sudo mount /dev/xvdf1 /vol -t ext4

worked like a charm.

I encountered this problem too, and here is how I figured it out:

[ec2-user@ip-172-31-63-130 ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk
└─xvdf1 202:81   0   8G  0 part

You should mount the partition:

/dev/xvdf1 (whose type is part, i.e. a partition)

not the disk:

/dev/xvdf (whose type is disk)

I had a different issue. When I checked the dmesg logs, the problem was that the UUID of the existing root volume and the UUID of the root volume from the other EC2 instance were the same. To fix this, I mounted it on an EC2 instance running a different flavor of Linux. It worked.

For me it was a duplicate UUID error while mounting the volume, so I used the "-o nouuid" option.

For example: mount -o nouuid /dev/xvdf1 /mnt

I found the clue in the system logs (/var/log/messages on CentOS), which showed the error: kernel: XFS (xvdf1): Filesystem has duplicate UUID f41e390f-835b-4223-a9bb-9b45984ddf8d - can't mount
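If you want the volume to mount cleanly without passing nouuid every time, one option is to give the filesystem a new UUID. This is only a sketch, assuming an XFS filesystem on /dev/xvdf1 that is currently unmounted with a clean log (you may need to mount and unmount it once with -o nouuid first):

sudo xfs_admin -U generate /dev/xvdf1   # write a new random UUID to the filesystem
sudo blkid /dev/xvdf1                   # confirm the new UUID (update /etc/fstab if it references the old one)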

First, run the command below:

lsblk /dev/xvdf

The output will be something like below:

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf    202:80   0  10G  0 disk
├─xvdf1 202:81   0   1M  0 part
└─xvdf2 202:82   0  10G  0 part

Then check the sizes and mount the appropriate one. In the case above, mount it like below:

mount /dev/xvdf2 /foldername

You do not need to create a filesystem on a volume newly created from a snapshot. Simply attach the volume and mount it to the folder you want. I attached the new volume to the same location as the previously deleted volume and it worked fine.

[ec2-user@ip-x-x-x-x vol1]$ sudo lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk 
└─xvda1 202:1    0   8G  0 part /
xvdb    202:16   0  10G  0 disk /home/ec2-user/vol1
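For reference, a minimal sketch of the commands behind that lsblk output, assuming the snapshot-backed volume shows up as /dev/xvdb and already carries a filesystem:

sudo mkdir -p /home/ec2-user/vol1
sudo mount /dev/xvdb /home/ec2-user/vol1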

I usually make the mount persist by pre-defining the UUID when creating the ext4 filesystem. I add a script in the user data and launch the instance; it works just fine without any issues.

Example script:

#!/bin/bash
# Create the directory to be mounted
sudo mkdir -p /data
# Create the file system with a pre-defined UUID & label (edit the device name as needed)
sudo mkfs -U aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa -L DATA -t ext4 /dev/nvme1n1

# Mount
sudo mount /dev/nvme1n1 /data -t ext4

# Update the fstab to persist after reboot
sudo su -c "echo 'UUID=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa   /data  ext4    defaults,discard,nofail 0   0' >> /etc/fstab"

For me there was some mysterious file causing this issue.

I had to clear the volume by recreating the filesystem with the following command.

sudo mkfs -t ext3 /dev/sdf

Warning: this will delete the files saved on the volume, so run ls first to make sure you aren't losing anything important.

First check the filesystem type with the "lsblk -f" command; in my case it is XFS:

#lsblk -f
NAME    FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINT
xvda
├─xvda1
├─xvda2 vfat   FAT16 EFI   31C3-C85B                              17.1M    14% /boot/efi
└─xvda3 xfs          ROOT  6f6ccaeb-068f-4eb7-9228-afeb8e4d25df    7.6G    24% /
xvdf
├─xvdf1
├─xvdf2 vfat   FAT16 EFI   31C3-C85B
└─xvdf3 xfs          ROOT  6f6ccaeb-068f-4eb7-9228-afeb8e4d25df    5.4G    46% /mnt/da

Modify your command according to the filesystem type:

mount -t xfs -o nouuid /dev/xvdf3 /mnt/data/
