[openstack-dev] [Nova] virt driver architecture
Devananda van der Veen
devananda.vdv at gmail.com
Thu May 9 18:30:30 UTC 2013
On Thu, May 9, 2013 at 10:44 AM, Russell Bryant <rbryant at redhat.com> wrote:
> On 05/09/2013 12:30 PM, Devananda van der Veen wrote:
> >
> > I don't feel like a new project is needed here -- the ongoing discussion
> > about moving scheduling/orchestration logic out of nova-compute and into
> > conductor-or-something-else seems to frame this discussion, too.
> >
> > The biggest change to Nova that I recall around adding the Baremetal
> > code was the addition of the "node" aka "hypervisor_hostname" concept --
> > that a single nova compute host might control more than one discrete
> > thing, each of which thereby needs to be identified as (host, node). That
> > change
> > opened the door for other cluster drivers to fit under the virt
> > interface. I may be wrong, but I believe this is exactly what the vCenter
> > and Hyper-V folks are looking at. It's also my current plan for Ironic.
> > However, I also believe that this logic doesn't necessarily have to live
> > under the virt API layer; I think it's a really good fit for the
> > orchestration/conductor discussions....
>
> Yep, that was the change that had the most impact on the rest of Nova.
> I think there's a big difference between baremetal and these other
> drivers. In the case of baremetal, the Nova component is still in full
> control of all nodes. There's not another system that is also (or
> instead of Nova) in control of the individual nodes.
>
With Ironic, there will be another system in control of the individual
hardware nodes. It'll have an API. The operator will talk to that API (e.g.,
for enrollment, status, and management). Nova will talk to that API, and so
will some other OpenStack services. At least that's the plan ...
>
> > We were talking about this a few days ago in -nova, particularly how
> > moving some of the ComputeManager logic out to conductor might fit
> > together with simplifying the (host, node) complexities, and help make
> > nova-compute just a thin virt API layer. Here is a very poor summary of
> > what I recall...
> > * AMQP topic is based on "nodename", not "hostname"
> > * for local hypervisors (KVM, etc), the topic identifies the local host,
> > and the local nova-compute agent subscribes to it
> > * for clustered hypervisors (ironic, vCenter, etc), the topic identifies
> > the unique resource, and any nova-compute which can manage that resource
> > subscribes to the topic.
> >
> > This would also remove the SPoF that nova-compute currently has for any
> > cluster-of-discrete-things it manages today (eg, baremetal).
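[Editor's note: a minimal sketch of the topic-naming idea above, assuming a
hypothetical `compute_topic` helper -- this is illustrative, not actual Nova
code.]

```python
def compute_topic(hostname, nodename=None):
    """Return an AMQP topic for a compute resource.

    For local hypervisors (KVM, etc.) the node *is* the host, so the
    topic identifies the local host and the local nova-compute agent
    subscribes to it. For clustered hypervisors (Ironic, vCenter, etc.)
    the topic identifies the unique resource, and any nova-compute that
    can manage that resource subscribes to it.
    """
    base = "compute"
    if nodename is None or nodename == hostname:
        return "%s.%s" % (base, hostname)  # local hypervisor case
    return "%s.%s" % (base, nodename)      # clustered-resource case
```

Under this scheme, every nova-compute able to manage, say, node
"ironic-node-42" would subscribe to "compute.ironic-node-42", which is what
removes the single point of failure for clusters of discrete things.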
>
> Totally agreed with this. However, I'm not sure having clustered
> hypervisors expose individual resources is something they want to do.
> It's in conflict with what the underlying system we're talking to wants
> to be in control of.
>
I disagree. I think those systems are trying to pass control up the stack
to us -- to use the OpenStack API, and all the richness that adds (eg,
Heat), to interact with a bunch of compute resources. So it seems to me
that exposing the individual resources inside the clustered hypervisor is
specifically important to meeting that goal. Whether those resources are
_also_ managed by something else is orthogonal to whether they are exposed
as individual or aggregated resources to Nova.
But as I'm not working on vSphere or Hyper-V, perhaps I should be quiet and
let them answer :)
-Deva