[nova][dev] Revisiting qemu emulation where guest arch != host arch

Apsey, Christopher CAPSEY at augusta.edu
Tue Jan 26 13:09:11 UTC 2021


Resurrecting this old thread…

The first bits of work for this are starting to be submitted for review:

https://review.opendev.org/c/openstack/nova/+/772156

The developer is going to work through the rest of the TCG-supported guest architectures in QEMU and add them (only aarch64 is done right now), which will require some time and testing, but we hope to have it ready for potential inclusion in Wallaby.

Any comments from nova team on approach/implementation are welcome.
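For anyone skimming the review, the heart of the change is picking the emulator binary from the image's hw_architecture property rather than always using the host's native binary. A minimal illustrative sketch (hypothetical names and a hard-coded mapping; the real driver discovers supported architectures from libvirt capabilities):

```python
# Illustrative sketch only -- not the actual nova libvirt driver code.
# Maps an image's hw_architecture property to a qemu-system-xx binary,
# falling back to the host architecture when the property is unset.
import platform

# Hypothetical mapping for this sketch; real deployments would query
# libvirt for which qemu-system-* binaries are actually installed.
QEMU_BINARIES = {
    "aarch64": "qemu-system-aarch64",
    "ppc64le": "qemu-system-ppc64",
    "s390x": "qemu-system-s390x",
    "x86_64": "qemu-system-x86_64",
}

def pick_emulator(image_properties, host_arch=None):
    """Return the emulator binary for the guest arch the image requests."""
    host_arch = host_arch or platform.machine()
    guest_arch = image_properties.get("hw_architecture", host_arch)
    try:
        return QEMU_BINARIES[guest_arch]
    except KeyError:
        raise ValueError("unsupported guest architecture: %s" % guest_arch)

# e.g. an x86 compute node emulating an aarch64 guest:
print(pick_emulator({"hw_architecture": "aarch64"}, host_arch="x86_64"))
```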

Chris Apsey
GEORGIA CYBER CENTER

From: Sean Mooney <smooney at redhat.com>
Sent: Wednesday, July 15, 2020 10:37 AM
To: Apsey, Christopher <CAPSEY at augusta.edu>; openstack-discuss at lists.openstack.org
Cc: Belmiro Moreira <moreira.belmiro.email.lists at gmail.com>
Subject: [EXTERNAL] Re: [nova][dev] Revisiting qemu emulation where guest arch != host arch


On Wed, 2020-07-15 at 14:17 +0000, Apsey, Christopher wrote:
> All,
>
> A few years ago I asked a question[1] about why nova, when given a hw_architecture property from glance for an image,
> would not end up using the correct qemu-system-xx binary when starting the guest process on a compute node if that
> compute node's architecture did not match the proposed guest architecture. As an example, if we had all x86 hosts, but
> wanted to run an emulated ppc guest, we should be able to do that given that at least one compute node had qemu-
> system-ppc already installed and libvirt was successfully reporting that as a supported architecture to nova. It
> seemed like a heavy lift at the time, so it was put on the back burner.
>
> I am now in a position to fund a contract developer to make this happen, so the question is: would this be a useful
> blueprint that would potentially be accepted?
This came up during the PTG, and the overall feeling was that it should really work already; if it does not, it's a bug. So yes, if a blueprint was filed to support emulation based on the image hw_architecture property, I don't think you will get objections, although we will probably also want scheduler support for this and to report it to placement, or have a weigher of some kind, to make it a complete solution. I.e., enhance the virt driver to report all the architectures it supports via traits, and add a weigher to prefer native execution over emulation. Then placement can tell us where it can run, and the weigher can say where it will run best.
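The weigher idea described above could look roughly like the following. This is a simplified sketch with hypothetical names; nova's real weighers subclass BaseHostWeigher in nova.scheduler.weights, which is omitted here so the example is self-contained:

```python
# Sketch of a scheduler weigher that prefers hosts whose native
# architecture matches the requested guest architecture, so emulation
# is only chosen when no native host is available. Hypothetical,
# simplified interface -- not nova's actual weigher API.

def arch_weight(host_arch, guest_arch):
    """Higher weight for native execution, lower for emulation."""
    return 1.0 if host_arch == guest_arch else 0.0

def sort_hosts(hosts, guest_arch):
    """Order candidate (hostname, arch) pairs, native-arch hosts first."""
    return sorted(hosts,
                  key=lambda h: arch_weight(h[1], guest_arch),
                  reverse=True)

# An aarch64 guest request floats the one aarch64 host to the top;
# the x86 hosts remain available as emulation fallbacks.
hosts = [("cn1", "x86_64"), ("cn2", "aarch64"), ("cn3", "x86_64")]
print(sort_hosts(hosts, "aarch64"))
```

Placement (via traits) would first filter down to hosts that can run the guest at all; the weigher then just expresses the native-over-emulated preference among them.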


see line 467: https://etherpad.opendev.org/p/nova-victoria-ptg

> Most of the time when people want to run an emulated guest they would just nest it inside of an already running
> guest of the native architecture, but that severely limits observability and the task of managing any more than a
> handful of instances in this manner quickly becomes a tangled nightmare of networking, etc. I see real benefit in
> allowing this scenario to run natively so all of the tooling that exists for fleet management 'just works'. This
> would also be a significant differentiator for OpenStack as a whole.
>
> Thoughts?
>
> [1]
> http://lists.openstack.org/pipermail/openstack-operators/2018-August/015653.html
>
> Chris Apsey
> Director | Georgia Cyber Range
> GEORGIA CYBER CENTER
>
> 100 Grace Hopper Lane | Augusta, Georgia | 30901
> https://www.gacybercenter.org
>

