[openstack-dev] Blueprint for Nova native image building

Ian McLeod imcleod at redhat.com
Thu Aug 8 14:29:35 UTC 2013

On Wed, 2013-08-07 at 15:53 +0100, Daniel P. Berrange wrote:
> On Wed, Aug 07, 2013 at 10:34:57AM -0400, Russell Bryant wrote:
> > On 08/06/2013 06:06 PM, Ian McLeod wrote:
> > > On Tue, 2013-08-06 at 16:02 -0300, Monty Taylor wrote:
> > > The proof of concept approach is limited to full-virt hypervisors.  It's
> > > unclear to me if there's a way we can make this work for Xen-backed
> > > installs without the kind of lower-level access to the virt environment
> > > that we'll get if the code lives inside of Nova.
> > 
> > Can you write up some more detail on this point?
> > 
> > > More generally, it's likely that we'll have more flexibility to behave
> > > in a sane/optimized manner based on backing hypervisor if the code is
> > > inside of Nova.  For example, we've talked about improving our detection
> > > of a failed install by monitoring disk IO.
> > 
> > If we were to service-ify this, it would be interesting to look into
> > using Ceilometer for monitoring.
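The failure-detection idea mentioned above (watching disk I/O to spot a hung
installer) could be sketched roughly as below. This is just an illustration,
not part of the proof of concept; `samples` stands in for whatever stats
source ends up being used (libvirt block stats, Ceilometer, etc.), and the
function name and thresholds are hypothetical:

```python
def detect_stall(samples, max_idle_polls=20):
    """Decide whether an unattended install appears hung.

    samples: iterable of cumulative guest disk write-byte counters,
             one value per polling interval.
    Returns True if the counter fails to advance for max_idle_polls
    consecutive polls (install presumed stalled), False otherwise.
    """
    last = None
    idle = 0
    for current in samples:
        if last is not None and current <= last:
            # No write progress since the previous poll
            idle += 1
            if idle >= max_idle_polls:
                return True
        else:
            idle = 0
        last = current
    return False
```

The appeal of running this as part of Nova (or feeding it from Ceilometer) is
that the counters are already being collected per instance; the detector
itself is trivial.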
> > 
> > >>>>
> > >>>> It sounds like this is mostly an extension to nova that implements a
> > >>>> series of operations that can be done just as well outside of Nova.  Are
> > >>>> there enhancements you are making or scenarios that won't work at all
> > >>>> unless it lives inside of Nova?
> > > 
> > > Other than the Xen issue above, I'm not sure there's anything that
> > > simply won't work at all, though there are things that perhaps won't
> > > scale as well or won't run as quickly.
> > 
> > Why would it be slower?

I'm thinking specifically about the task of constructing the boot
environment used to launch the installer.  We had envisioned doing this
directly on the compute node, potentially avoiding adding a "hop"
through glance for the kernel and ramdisk (or the bootable install image
where appropriate).
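To make the "avoided hop" concrete: with lower-level access on the compute
node, the installer environment can be a direct kernel boot from local paths,
rather than registering the kernel/ramdisk in glance first. A rough sketch of
building such a libvirt domain definition follows; the function name, paths,
and sizing are all hypothetical, and this is not the actual proof-of-concept
code:

```python
import xml.etree.ElementTree as ET

def installer_domain_xml(name, kernel, initrd, disk_path,
                         cmdline="console=ttyS0", memory_kib=1048576):
    """Build libvirt domain XML that boots an installer kernel directly
    from files on the compute node (no bootloader, no glance round-trip)."""
    dom = ET.Element("domain", type="kvm")
    ET.SubElement(dom, "name").text = name
    ET.SubElement(dom, "memory", unit="KiB").text = str(memory_kib)
    os_el = ET.SubElement(dom, "os")
    ET.SubElement(os_el, "type").text = "hvm"
    # Direct kernel boot: kernel/initrd are local files on the compute node
    ET.SubElement(os_el, "kernel").text = kernel
    ET.SubElement(os_el, "initrd").text = initrd
    ET.SubElement(os_el, "cmdline").text = cmdline
    devices = ET.SubElement(dom, "devices")
    disk = ET.SubElement(devices, "disk", type="file", device="disk")
    ET.SubElement(disk, "source", file=disk_path)
    ET.SubElement(disk, "target", dev="vda", bus="virtio")
    return ET.tostring(dom, encoding="unicode")
```

Done outside Nova, the equivalent flow would first have to upload the kernel
and ramdisk to glance just so the compute node could download them again,
which is the extra hop being discussed.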

> I don't think there's any particular reason why Xen should be slower
> or less scalable from an architectural POV here. Any perf differences
> would be just those inherent to the hypervisor platform in question.
> > How about scale?  Just the increased API load?  I wouldn't expect this
> > to be something done frequently.  It's more API calls, but removes a
> > long running task from inside nova (longer than anything else that
> > exists in nova today).

Again, I'm thinking mainly of the load associated with preparing the
installer environment, which would be farmed out to the compute nodes
rather than happening on either the client or the node hosting the image
building service API.

> In terms of load, all the heavy I/O & CPU burn would be in the context
> of a VM running in nova. So I don't think this approach to image building
> would introduce any new architectural scalability problems. Indeed
> this is the main attraction of running the OS installer inside a VM managed
> by Nova - the image builder just takes advantage of all Nova's support for
> resource manager/VM scheduler placement etc.
> > >> Yes to everything Russell said. I'd like to see the tool be standalone.
> > >> Then, if there is a desire to provide the ability to run it via an api,
> > >> the tool could be consumed (similar discussions have happened around
> > >> putting diskimage-builder behind a service as well)
> > >>
> > >> That said - if we did service-ify the tool, wouldn't glance be a more
> > >> appropriate place for that sort of thing?
> > > 
> > > Possibly, though the proof of concept (and, we hope, our proposed
> > > nova-based re-implementation) can build both glance images and
> > > cinder-volume-backed images.
> > 
> > I like this idea (glance seems to make more sense conceptually).
> I've gone back & forth with thinking about whether it makes sense in
> glance or nova, and don't have a strong opinion either way really.
> From a technical POV I think it could be made to work in either without
> much bother.
> > It seems like the main sticking point is whether or not it can be made
> > to work for all (or most) hypervisors from outside of nova.  Can we dig
> > into this point a bit deeper?
> I think that it ought to be possible to make it work for any hypervisor
> that is doing full-machine virt (ie not container drivers like LXC). We
> may not have sufficient APIs, or we may not have enough features implemented
> in some virt drivers, but that's just a case of donkey work, rather than
> any architectural blocker.
> Daniel
