I want to thank all who replied; your input is valuable. It's obvious this isn't a trivial undertaking, so I'm going to consult with the team about the direction we decide to take.

On Wednesday, July 16, 2025 at 07:28:51 PM CDT, Joel McLean <joel.mclean@micron21.com> wrote:

Leroy,

I will preface this with the context that I am relatively new to OpenStack myself; I engage with this list to further my own OpenStack knowledge, so by researching your problems for potential solutions, I learn more about this platform than I would otherwise. So, take this with a grain of salt – I haven't tested any of this. Apologies in advance for the wall of text.

As Dmitriy said, Nova assumes it has exclusive access to libvirt, so you can't really set up an OpenStack control plane, adopt some hosts, and have it "just work". However, if the pain point is that moving VMs off a host to perform an upgrade is a nightmare exercise in downtime and people-hours, then you will want to work out how much time that will actually cost you, versus how much time it will cost you to set up OpenStack.

Setting up an OpenStack control plane to a production standard, for people who know what they're doing, will run you about 40-80 hours, depending on the exact requirements. I've assumed that your 15-20 hypervisors are managing between 4 and 21 VMs _each_, so you're probably looking at around 80-100 VMs? If each VM takes about 2 hours of engineer contact time over the lifecycle of the migration, plus having to book it in with a customer/stakeholder, the whole project might cost you about 250 hours of effort, give or take. Your mileage may vary.

If you set up an OpenStack control plane, you'll want to consider availability and redundancy; the company I work for has a control plane of 5 nodes spanning geographically redundant sites. Since you're looking at a single site, you could consider a 3-node control plane.
Typically the control plane does not run any virtual machine workload, but serves the API endpoints and things like the software-defined networking used by OpenStack. More on the control plane: https://docs.openstack.org/arch-design/design-control-plane.html

It goes without saying that you'll need the hardware to seed this, just lying around ready to go; if not, you're now looking at capital costs to buy hardware for a control plane, plus the engineering time to provision it.

If you set up a control plane, you could theoretically adopt hypervisors by:

* Creating an inventory of all VMs on each host (the virsh XML with all the VM details might be a good way to ingest that).
* Noting which images on local storage are used by which VM.
* Powering off the VMs, and having OpenStack adopt the hypervisor. There may be compatibility issues to consider here.
* Registering the QCOW2 or RAW disk images as images with OpenStack. If this can be done on the same host (sufficient free storage allowing), it should run as fast as you can read/write to disk. I do not think it is possible to have Glance or Cinder register a file already sitting on local storage; we do not use local storage in my OpenStack deployment, so this is not something I have tested, and I cannot find much about this topic in the official documentation. It might be possible with some very creative database commands, but be warned – beyond here, lie dragons.
* Re-creating the instances from your inventory, using the volumes/images as the source, via the OpenStack API. This should allow Nova to create the libvirt/QEMU VMs, with you specifying the IP addresses and other details that are part of each VM's identity.

This should theoretically not require transferring the VMs' disks across the network to another host, so the impact on your VMs should be smaller. Once you have adopted a hypervisor into OpenStack, migration between hosts, even for VMs using ephemeral/local storage, is possible.
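The adoption steps above can be sketched as a short script. This is a dry-run sketch, not a tested procedure: the VM name "web01", the disk path, the flavor "m1.medium", the network "prod-net", and the IP are all hypothetical, and the `run` wrapper only prints the commands it would execute.

```shell
#!/bin/sh
# Dry-run sketch of the adoption steps above. All names (web01, m1.medium,
# prod-net, the IP) are hypothetical placeholders. `run` prints each command
# instead of executing it; replace with `run() { "$@"; }` to execute for real.
run() { echo "+ $*"; }

VM=web01
DISK=/var/lib/libvirt/images/${VM}.qcow2
IP=192.0.2.10

# 1. Capture the VM's definition for your inventory (redirect to a file).
run virsh dumpxml "$VM"

# 2. Power the VM off before touching its disk.
run virsh shutdown "$VM"

# 3. Register the local QCOW2 disk as a Glance image.
run openstack image create --disk-format qcow2 --container-format bare \
    --file "$DISK" "${VM}-image"

# 4. Re-create the instance from that image, pinning the original IP.
run openstack server create --image "${VM}-image" --flavor m1.medium \
    --nic "net-id=prod-net,v4-fixed-ip=${IP}" "$VM"
```

Note that step 3 copies the disk contents into Glance rather than registering the file in place, which is exactly the caveat discussed above; whether that copy stays on the same host depends on your Glance backend.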
There are challenges though; adopting a compute host into OpenStack might require the hypervisor to meet minimum hardware or operating-system version requirements, and an OS upgrade is the actual catalyst for this workload in the first place.

Alternatively, instead of trying to adopt VMs in the above fashion, you could build the OpenStack environment and migrate the VMs into it. The process is broadly the same as what you're doing at the moment (power off the VM, migrate the virtual disk, re-create the VM from that disk). The main benefit is not time saved now, but next time – effectively you're carrying the technical debt of not having had any forward planning to start with, so by putting in the effort now, setting up OpenStack, and migrating your workload into an OpenStack environment, you remove that technical debt as you go. It will ultimately take longer this time – but next time? A relatively simple procedure for the future: https://wiki.openstack.org/wiki/OpsGuide-Maintenance-Compute

It might be worthwhile having a serious discussion with a professional consultant. Spending some cash and talking to a real expert who does this kind of thing for a living could be the difference between a successful deployment that ultimately saves you time and money, and falling into a pit of despair and sunk-cost fallacy. OpenStack is open source and community driven, but many members of this community work for companies offering commercial services; OpenInfra (the 'parent' of the OpenStack platform) has a list of partners, some of whom provide MSP services.
https://openinfra.org/members/

Kind Regards,
Joel McLean – Micron21 Pty Ltd

From: Dmitriy Rabotyagov <noonedeadpunk@gmail.com>
Sent: Thursday, 17 July 2025 7:10 AM
To: Leroy Tennison <leroy.tennison@verizon.net>
Cc: openstack-discuss <openstack-discuss@lists.openstack.org>
Subject: Re: [ops] Very basic newby questions

I think the main quirk to succeeding with such a migration would be a clear understanding of what you are doing, what OpenStack actually is, and what options you have on the table. That might not be a straightforward process, though at the end of the day it should be quite rewarding :)

On Wed, 16 Jul 2025, 23:06 Dmitriy Rabotyagov, <noonedeadpunk@gmail.com> wrote:
Generally it would be recommended to actually re-set up your hypervisors, as OpenStack Nova assumes it has exclusive access to libvirt/QEMU.
So you would pretty much have to re-create your VMs through the API to ensure they will be managed from it in the future.
I guess in general it might be possible to accomplish such a migration on a per-node basis, by doing something similar to an offline migration of VMs. It might not be the cleanest process, but it should be doable from my perspective.
Also, shared storage is not a requirement per se; you can still have an OpenStack setup with local storage. Moreover, most providers have local storage anyway to serve low-latency/high-throughput workloads (like databases), so having local storage is fully supported.
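For reference, local ephemeral storage on a Nova compute node is controlled by the `images_type` option in the `[libvirt]` section of nova.conf. A minimal sketch, assuming the common qcow2-on-local-disk layout rather than a Ceph or LVM backend:

```ini
# /etc/nova/nova.conf on a compute node (sketch, not a full config)
[libvirt]
# Store instance disks as qcow2 files on the hypervisor's local filesystem.
# Other values include raw, lvm, and rbd (Ceph).
images_type = qcow2
```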
On Wed, 16 Jul 2025, 23:00 Leroy Tennison, <leroy.tennison@verizon.net> wrote:
What I was hoping to do was get all our hypervisors managed by OpenStack, but no OpenStack environment exists yet. Is that possible without destroying (meaning tearing down and rebuilding) the hypervisors? The plan would be to get everything OpenStack-managed, then start moving VMs.
And right now we do not have shared storage. This environment started out with a few hypervisors, in some cases attached to Synology NAS units with NICs as slow as 100 Mbit. It has grown and morphed since then, but no SAN has been added. I realize this means a live migration will take time, and that's acceptable as long as the cutover is brief. I understand the effort will be significant, but for a potential of 140+ VMs, it's worth it.
On Wednesday, July 16, 2025 at 03:13:21 PM CDT, John van Ommen <john.vanommen@gmail.com> wrote:
Based on your description, it sounds like you are trying to do a live migration from an existing Ubuntu hypervisor to an OpenStack compute node that doesn't exist yet?
Unfortunately, that's a tremendous amount of work for twenty-five virtual machines.
Doing a live migration from one OpenStack compute node to another managed by the same OpenStack control plane is possible. It's possible because, with shared storage, the data itself doesn't have to move; you just tell the control plane to instantiate the VM on the new compute node and cease using the old one. (This is part of the process of upgrading OpenStack; we migrate off the old and onto the new.)
But once you introduce a hypervisor that's NOT managed by OpenStack, live migration becomes significantly more complex.
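As an aside, plain libvirt can live-migrate between two unmanaged KVM hosts, including copying local storage over the wire, with no OpenStack involved. A dry-run sketch, with hypothetical host/VM names; this assumes SSH access between the hosts, reasonably matched QEMU/libvirt versions, and identical bridge names on both sides, and the `run` wrapper only prints the command:

```shell
#!/bin/sh
# Dry-run sketch: live-migrate a VM between two plain libvirt/KVM hosts.
# "web01" and "hv-new" are hypothetical names. `run` prints the command
# instead of executing it; replace with `run() { "$@"; }` to execute.
run() { echo "+ $*"; }

VM=web01
TARGET=hv-new

# --copy-storage-all streams the local disk to the target, so no shared
# storage is needed; --persistent defines the VM on the target, and
# --undefinesource removes the definition from the source afterwards.
run virsh migrate --live --persistent --undefinesource \
    --copy-storage-all "$VM" "qemu+ssh://${TARGET}/system"
```

One caveat: with `--copy-storage-all` the destination generally needs a pre-created disk image of the same size and path before the migration starts, and the storage copy over a 1 Gbit link will take a while for large disks, though the VM stays running for most of it.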
Anecdotally, there were quite a few 'hybrid cloud solutions' that were popular ten years ago, which attempted to provide a single pane of glass for multiple clouds running competing technologies. Many of those products have 'gaps' in their functionality now, because the upstream APIs changed.
An example of this is Red Hat CloudForms. It supported OpenStack and Red Hat Enterprise Virtualization, but removed support for AWS because the upstream API on Amazon's end changed.
https://github.com/orgs/ManageIQ/discussions/22343
That last paragraph is based on my personal opinion and doesn't represent my employer. Folks from Red Hat certainly know more about the long term strategy for their products than I do, so please consider this when evaluating my statement.
On Wed, Jul 16, 2025 at 12:54 PM Leroy Tennison <leroy.tennison@verizon.net> wrote:
First of all, thank you so much for your response, I certainly appreciate it. I'm glad to answer your questions about our situation:
We are not currently using OpenStack or any other "multiple hypervisor management" solution.
Yes, we already have between 15-20 hypervisors (running various releases of Ubuntu LTS) with KVM/QEMU installed, managing from 4 (oldest hardware) to 21 (newest hardware) VMs.
The only issue we're facing is hypervisor OS upgrade - we've discovered that the upgrade process ("do-release-upgrade" for Ubuntu) has significant risks and therefore don't want any running VMs on the hypervisor while we're doing its OS upgrade.
Right now "moving a VM elsewhere" means shutting it down, copying the image (which uses local hypervisor storage) to a different hypervisor (which can take a significant part of an hour), defining the VM there, starting it up, and confirming functionality. With only a few VMs this is time- and labor-consuming but doable; with as many VMs as are running on the newest hardware it is becoming prohibitive.
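The manual process described above can be written down as a short script. This is a dry-run sketch with hypothetical names ("web01", "hv-target"); the `run` wrapper only prints the commands:

```shell
#!/bin/sh
# Dry-run sketch of the manual cold-move process described above.
# "web01" and "hv-target" are hypothetical names. `run` prints each
# command instead of executing it.
run() { echo "+ $*"; }

VM=web01
TARGET=hv-target
DISK=/var/lib/libvirt/images/${VM}.qcow2

run virsh shutdown "$VM"            # stop the VM cleanly
run virsh dumpxml "$VM"             # in practice, redirect this to ${VM}.xml

# Copy the disk image and the saved definition to the target hypervisor;
# this is the slow step on a 1 Gbit network.
run scp "$DISK" "${VM}.xml" "${TARGET}:/var/lib/libvirt/images/"

# Define and start the VM on the target, then confirm functionality.
run ssh "$TARGET" virsh define "/var/lib/libvirt/images/${VM}.xml"
run ssh "$TARGET" virsh start "$VM"
```

On a 1 Gbit link, a 200 GB image takes at least half an hour of pure transfer time, which matches the "significant part of an hour" above.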
We have the capacity to be able to move a single hypervisor's VMs elsewhere. The network is all local (no WAN links) and assume 1Gbit speed (but not higher). The VM images are between roughly 30GB to 200GB. We can determine if a target hypervisor has adequate CPU/RAM/DASD for an additional VM. It's just the effort to "execute" the move.
The fundamental problem is that a number of VMs are Internet-facing with potential 24x7 access, requiring notification/coordination with affected external parties before moving a VM to another hypervisor.
We are looking for a solution where we can keep our current infrastructure (and add a management solution to it) and a VM can remain running while it is being moved to a different hypervisor, with the cutover process being brief. The VM's (static) IP and MAC address need to remain the same. If the solution could remove the configuration/image of the VM from its former location, that would be desirable but not necessary. This is the only solution we are looking to implement.
And, again, thank you for any input. Feel free to ask additional questions.
On Wednesday, July 16, 2025 at 01:04:00 AM CDT, Joel McLean <joel.mclean@micron21.com> wrote:
G'day Leroy,
OpenStack isn't a hypervisor itself, but more a collection of projects that can manage hypervisors and the compute workload that runs on them, among other things.
If you have running hypervisors with workload already – as you mentioned, QEMU/KVM – migrating that workload to an OpenStack platform would require that platform to have hypervisors, which are generally also QEMU/KVM, although they do not have to be.
To give you the right information, we'll want to understand better what it is you're trying to do:

* It sounds like you have an existing workload on QEMU/KVM hypervisors. Why are you considering moving it?
* What aspects of OpenStack do you actually need? OpenStack is a pretty big platform with a lot of features and benefits, but you pay for that with the overhead of managing the platform itself.
* You mentioned that you had looked at Proxmox (PVE) but were worried about its requirement for centralised storage and fast networking; I can assure you that if you can run OpenStack, you're going to be more than capable of running PVE.
Alternatively, if I have misunderstood your question, and you already have a running OpenStack and just want to know how to live-migrate a VM from one hypervisor host to another within the aggregate/availability zone, you can achieve this with a command like:

openstack --os-compute-api-version 2.30 server migrate --live-migration --host {target-host-name} {instance-uuid}
Give us a full write up - what have you got, what are you trying to do, what have you tried, why isn't it working? Maybe then we'll be able to help!
Kind Regards,
Joel McLean – Micron21 Pty Ltd
-----Original Message-----
From: Leroy Tennison <leroy.tennison@verizon.net>
Sent: Wednesday, 16 July 2025 3:09 PM
To: openstack-discuss@lists.openstack.org
Subject: [ops] Very basic newby questions
I am looking for a solution to automatically move (upon request) running VMs from one hypervisor to another. The current hypervisors are KVM/QEMU. Just looking at the documentation I am becoming overwhelmed.
Questions (A URL which directly addresses the question is quite acceptable):
Does OpenStack work with existing hypervisors or would existing hypervisors have to be "converted" (somehow) to OpenStack?
If it can be used with existing hypervisors, is there a how-to for this specific task (I understand that OpenStack can do many things).
If OpenStack can't be used with existing hypervisors, are you aware of solutions for this need? I have looked at Proxmox, but it appears to want centralized storage and very fast networks, which doesn't match the environment.
Thank you for your help.