oVirt (KVM) Multiple disk images (LVM and VHDX disks) question
Eugen Block
eblock at nde.ag
Wed Jul 28 06:43:23 UTC 2021
Hi,
I'm not sure what you mean by "one [disk image] for OS image", but I'm
not aware of any quick method to accomplish what you're asking for.
The only way I can think of right now is to create a new (larger) disk
image for each VM, prepare the disk layout exactly as in the original
VM (e.g. with 'fdisk'), map the empty image and then use 'dd' (or
similar) to copy every block into the new (empty) image.
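A rough sketch of that workflow could look something like this (device
names, image paths, partition numbers and sizes are only placeholders
and will differ per VM):

  # create a new, larger target image (size is an assumption)
  qemu-img create -f raw /var/tmp/vm1-combined.img 120G

  # expose the empty image as a block device (needs the nbd kernel module)
  modprobe nbd
  qemu-nbd --connect=/dev/nbd0 -f raw /var/tmp/vm1-combined.img

  # recreate the original partition layout on /dev/nbd0 with 'fdisk',
  # then copy each source block device into its matching new partition:
  dd if=/dev/vg0/vm1-root of=/dev/nbd0p2 bs=4M conv=fsync status=progress

  # detach the image again when all partitions are copied
  qemu-nbd --disconnect /dev/nbd0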
That new image could then be uploaded to Glance (as a base image) or,
depending on your Cinder backend, imported directly as a managed volume
(e.g. into Ceph if you're using that).
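For the upload step, something along these lines should work (image and
volume names, size and disk format are placeholders):

  # upload the combined image to Glance as a base image
  openstack image create --disk-format raw --container-format bare \
      --file /var/tmp/vm1-combined.img vm1-combined

  # or create a bootable Cinder volume from that image; a Ceph backend
  # can typically clone this without a full copy
  openstack volume create --image vm1-combined --size 120 --bootable vm1-volume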
Be careful not to mix up the source and destination disks with 'dd'; I
would test it first with a backed-up VM image and see whether that works
at all.
Even if that works, it of course means a lot of manual work and time per
VM. Maybe someone else has a smoother and simpler approach, though.
Regards,
Eugen
Quoting KK CHN <kkchn.in at gmail.com>:
> Members,
>
> We have a set of VMs running in a KVM environment,
> both Windows and Linux VMs, with separate data storage disks attached
> to them.
>
> We are planning to migrate all these VMs to our new OpenStack
> setup (Ussuri, with the KVM virtualizer and Glance storage).
>
> The current service provider supplies three image files for each VM in
> question.
>
> For both Linux (CentOS and Red Hat) and Windows machines they are
> providing us three disk images per VM:
>
> 1. one for the OS image
> 2. a second for boot
> 3. a third as the attached disk for data volumes
>
> The third comes as LVM (Linux) and VHDX (Windows) image files.
>
> *Is there a way to combine all three files from our vendor* into a single
> qcow2 image (or any other format supported in OpenStack), so that we can
> directly populate a VM in our OpenStack setup (Ussuri, Glance, KVM)
> using the Horizon dashboard?
>
> Is this possible? If not, what's the best way to do this if you were
> me?
>
> Kindly share your expertise and thoughts on a solution.
>
> Thanks
> Kris