Is there a way to mount a dd-based file as a disk for a cStor pool?
I was trying to deploy an OpenEBS cStor-pool-based, dynamically provisioned StorageClass so that I could have 3 separate disks on 3 different machines. While doing this I realized that I do not have an external drive, and for capacity management I have to use a separate disk for pooling. To try the feature out, I created a 4 GB disk image with dd:
$ dd if=/dev/zero of=diskImage4 bs=1M count=4096
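As an aside, if writing 4 GB of zeros takes too long, a sparse file behaves the same for loop-device experiments. A sketch (the filename diskImage4 is taken from the question; truncate is part of GNU coreutils):

```shell
# Create a 4 GB sparse image: the full size is reported, but no
# blocks are allocated until they are actually written to.
truncate -s 4G diskImage4

# Apparent size is 4 GiB even though almost no disk space is used.
stat -c '%s' diskImage4
```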
When I mounted it, I saw that it was attached as a loop device, loop0, as shown in the lsblk output:
loop0 8:0 0 8K 1 loop mountPoint
What I was trying to achieve was:
sda 8:16 0 23.5G 0 disk
└─sda1 8:18 0 23.5G 0 part /
sdb 8:0 0 4.0G 0 disk
└─sdb1 8:1 0 4.0G 0 part
How can I mount the newly created file "diskImage4" as a disk partition?
I saw some mount parameters and the losetup command, but they all end up attaching the image as a loop device.
Or, if there is a way to use files as disks in cStor pools, I would love to learn about that.
If there is no common or straightforward way to achieve this, thanks anyway.
You haven't created a partition table on the virtual disk.
Run the dd command as above, then open the resulting image with gparted or fdisk and create a partition table.
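For a non-interactive version of that step, here is a sketch using sfdisk (shipped in util-linux alongside fdisk; it works directly on image files without root, and partition type 83 is plain Linux):

```shell
# A small throwaway image stands in for diskImage4 here.
dd if=/dev/zero of=diskImage4 bs=1M count=64 status=none

# Write an MBR (dos) label with a single Linux partition
# spanning the whole image; sfdisk fills in start and size.
echo 'type=83' | sfdisk diskImage4

# Dump the table back to confirm the partition exists.
sfdisk -d diskImage4
```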
Then attach it with losetup:

losetup -f diskImage4
Then read the partitions (substituting whatever loop device was created):

partx -a /dev/loop0

(With a recent util-linux, losetup -P diskImage4 attaches the image and scans its partitions in one step.)
Then run lsblk again.
loop0 and loop0p1 should be visible.