
Accelerating qemu-img convert of RBD volumes between different Ceph clusters

Is there an elegant way to copy an RBD volume to another Ceph cluster?

I measured the conversion time with qemu-img version 2.5 and version 6.0 by copying a volume (2.5T capacity, with only 18G used) to another Ceph cluster.

qemu-img [2.5 or 6.0] convert -p -f raw rbd:pool_1/volume-orig_id:id=cinder:conf=1_ceph.conf:keyring=1_ceph.client.cinder.keyring -O raw rbd:pool_2/volume-new_id:id=cinder:conf=2_ceph.conf:keyring=2_ceph.client.cinder.keyring [-n -m 16 -W -S 4k]
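Broken out, the command above maps roughly to the following (same placeholder pool names, volume IDs, and config files as in the question; the bracketed options are only accepted by newer qemu-img builds):

```shell
# Annotated form of the conversion command.
# -p        : show a progress bar
# -f / -O   : source / target format (both raw here)
# -n        : skip creating the target volume (it must already exist)
# -m 16     : use up to 16 parallel coroutines (newer qemu-img only)
# -W        : allow out-of-order (unordered) writes to the target
# -S 4k     : treat runs of at least 4k zero bytes as sparse
qemu-img convert -p -f raw -O raw -n -m 16 -W -S 4k \
    'rbd:pool_1/volume-orig_id:id=cinder:conf=1_ceph.conf:keyring=1_ceph.client.cinder.keyring' \
    'rbd:pool_2/volume-new_id:id=cinder:conf=2_ceph.conf:keyring=2_ceph.client.cinder.keyring'
```

Both sides are opened through librbd, so qemu-img itself talks to each cluster using the given client id, ceph.conf, and keyring.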

Test qemu-img convert result:

qemu-img 2.5 took 2 hours and 40 minutes with no extra options:


qemu-img 6.0 took 3 hours and 3 minutes with the options ( -m 16 -W -S 4k ):


Questions:

1. Why does version 2.5 write only the used disk capacity (18G), while version 6.0 writes the whole 2.5T disk?

2. How can qemu-img (version 2.5 or 6.0) be made to convert an RBD volume to another Ceph cluster faster, or is there some other way to approach this?

The key factor is the -n option of qemu-img convert.

With -n, which skips target volume creation (useful if the volume is created prior to running qemu-img), the convert writes the whole disk capacity to the destination RBD volume, because qemu-img cannot assume the pre-created target is already zeroed. Without -n, qemu-img creates the target itself, knows it reads as zeros, and therefore reads and writes only the used capacity of the source volume.
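If the destination must be pre-created (so -n is required), two hedged alternatives may restore the sparse behaviour; both are sketches using the question's placeholder names, and flag availability depends on your builds (check qemu-img --help and rbd help before relying on them):

```shell
# Option 1: tell qemu-img the pre-created target already reads as zeros.
# --target-is-zero requires -n and is only present in newer qemu-img
# releases (roughly 5.1+, so the 6.0 build above, not the 2.5 one).
qemu-img convert -p -f raw -O raw -n --target-is-zero -m 16 -W \
    'rbd:pool_1/volume-orig_id:id=cinder:conf=1_ceph.conf:keyring=1_ceph.client.cinder.keyring' \
    'rbd:pool_2/volume-new_id:id=cinder:conf=2_ceph.conf:keyring=2_ceph.client.cinder.keyring'

# Option 2: copy on the RBD layer instead of through qemu-img.
# export-diff/import-diff only transfer allocated extents, so only the
# ~18G actually in use crosses the wire. The destination volume must
# already exist (same size) for import-diff.
rbd -c 1_ceph.conf --id cinder export-diff pool_1/volume-orig_id - \
    | rbd -c 2_ceph.conf --id cinder import-diff - pool_2/volume-new_id
```

Option 2 also keeps the copy entirely inside the rbd tooling, which avoids qemu-img's full-image read when the image is mostly empty.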
