[nova][dev] Revisiting qemu emulation where guest arch != host arch

Sean Mooney smooney at redhat.com
Wed Jul 15 14:36:33 UTC 2020


On Wed, 2020-07-15 at 14:17 +0000, Apsey, Christopher wrote:
> All,
> 
> A few years ago I asked a question[1] about why nova, when given a hw_architecture property from glance for an image,
> would not end up using the correct qemu-system-xx binary when starting the guest process on a compute node if that
> compute node's architecture did not match the proposed guest architecture.  As an example, if we had all x86 hosts, but
> wanted to run an emulated ppc guest, we should be able to do that given that at least one compute node had qemu-
> system-ppc already installed and libvirt was successfully reporting that as a supported architecture to nova.  It
> seemed like a heavy lift at the time, so it was put on the back burner.
> 
> I am now in a position to fund a contract developer to make this happen, so the question is: would this be a useful
> blueprint that would potentially be accepted?
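for reference, the hw_architecture property mentioned above is ordinary glance image metadata, set with the standard client (the image name below is illustrative):

```shell
# Tag a guest image with the architecture the guest needs, so nova can
# pick the matching qemu-system-<arch> binary. 'ppc64-guest' is a
# made-up image name for this example.
openstack image set --property hw_architecture=ppc64 ppc64-guest

# Confirm the property landed on the image
openstack image show ppc64-guest -c properties
```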
this came up during the ptg and the overall feeling was that it should really work already, and if it does not, it's a bug.
so yes, if a blueprint was filed to support emulation based on the image hw_architecture property, i don't think you will
get objections, although we will probably also want scheduler support for this and to report it to placement, or have a
weigher of some kind, to make it a complete solution. i.e. enhance the virt driver to report all the architectures it
supports via traits, and add a weigher to prefer native execution over emulation. that way placement can tell us where the
guest can run, and the weigher can say where it will run best.


see line 467 https://etherpad.opendev.org/p/nova-victoria-ptg
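as a rough sketch of the weigher idea in plain python (made-up host data and function names, not actual nova scheduler API): placement-style filtering drops hosts that cannot run the guest at all, and the weigher orders the rest so native-arch hosts win over emulation-capable ones.

```python
def weigh_hosts(hosts, requested_arch):
    """Return capable hosts ordered so native-arch hosts come first.

    Each host is a dict with its native 'arch' and the set of
    architectures its installed qemu-system-* binaries support.
    """
    def score(host):
        if host["arch"] == requested_arch:
            return 2.0  # native execution: best performance
        if requested_arch in host["supported"]:
            return 1.0  # emulation via qemu-system-<arch>: works, but slower
        return 0.0      # cannot run the guest at all

    # Dropping incapable hosts is placement's job (via traits);
    # ordering the survivors is the weigher's job.
    capable = [h for h in hosts if score(h) > 0]
    return sorted(capable, key=score, reverse=True)
```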

>   Most of the time when people want to run an emulated guest they would just nest it inside of an already running
> guest of the native architecture, but that severely limits observability and the task of managing any more than a
> handful of instances in this manner quickly becomes a tangled nightmare of networking, etc.  I see real benefit in
> allowing this scenario to run natively so all of the tooling that exists for fleet management 'just works'.  This
> would also be a significant differentiator for OpenStack as a whole.
> 
> Thoughts?
> 
> [1]
> http://lists.openstack.org/pipermail/openstack-operators/2018-August/015653.html
> 
> Chris Apsey
> Director | Georgia Cyber Range
> GEORGIA CYBER CENTER
> 
> 100 Grace Hopper Lane | Augusta, Georgia | 30901
> https://www.gacybercenter.org
> 




More information about the openstack-discuss mailing list