[openstack-dev] [Nova] virt driver architecture

Nandavar, Divakar Padiyar (STSD) divakar.padiyar-nandavar at hp.com
Fri May 10 03:50:02 UTC 2013


>Totally agreed with this.  However, I'm not sure having clustered
>hypervisors expose individual resources is something they want to do.
>It's in conflict with what the underlying system we're talking to wants
>to be in control of.

>I disagree. I think those systems are trying to pass control up the stack to us -- to use the OpenStack API, and all the richness that adds (eg, Heat), to interact with a bunch of compute resources. So it seems to me that exposing the individual resources inside the clustered hypervisor is specifically important to meeting that goal. Whether those resources are _also_ managed by something else is orthogonal to whether they are exposed as individual or aggregated resources to Nova.

>But as I'm not working on vSphere or Hyper-V, perhaps I should be quiet and let them answer :)

It's a choice point whether or not to expose the individual resources of a cluster. For an "admin" user it is important to expose the underlying individual resources for monitoring, metering and performance purposes. From a provisioning perspective, however, the "nova-compute" view is good enough: it only needs to present the available capacity for making allocation and scheduling decisions. Looking closely at the clustering features some hypervisors provide, there is no point in directing the workload to an individual hypervisor host, since it is not guaranteed that the newly created instance will be powered on in that host -- due to cluster policies it may be powered on in another available hypervisor host in the cluster. So it makes sense to direct scheduling to the cluster itself and leave scheduling within the cluster to the cluster. Further, Healthnmon comes in handy for the monitoring, metering and performance side of this approach: it provides a drill-down for a nova-compute, with insight into the associated clusters, hypervisor hosts and resource pools.
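As a purely illustrative sketch of that model -- the cluster client and its methods are assumptions, and a real driver would subclass nova.virt.driver.ComputeDriver -- the provisioning view could be as simple as:

class ClusterDriver(object):
    """Sketch: present a whole cluster as one opaque capacity pool."""

    def __init__(self, cluster):
        # 'cluster' is a hypothetical client for the cluster manager's API.
        self.cluster = cluster

    def get_available_nodes(self):
        # Expose a single "node": the cluster itself.
        return [self.cluster.name]

    def get_available_resource(self, nodename):
        # Aggregate free capacity across the cluster's hosts, which is
        # all the scheduler needs for allocation decisions.
        hosts = self.cluster.list_hosts()
        return {
            'vcpus': sum(h['vcpus'] for h in hosts),
            'memory_mb': sum(h['memory_mb'] for h in hosts),
            'local_gb': sum(h['local_gb'] for h in hosts),
            'hypervisor_hostname': nodename,
        }

    def spawn(self, context, instance, *args, **kwargs):
        # Hand placement to the cluster: its own policies decide which
        # hypervisor host actually powers the instance on.
        self.cluster.create_vm(instance)

The point of the sketch is that spawn() never names a host; the cluster's scheduler owns that decision.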

Thanks,
Divakar

From: Devananda van der Veen [mailto:devananda.vdv at gmail.com]
Sent: Friday, May 10, 2013 12:01 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] virt driver architecture
Importance: High

On Thu, May 9, 2013 at 10:44 AM, Russell Bryant <rbryant at redhat.com> wrote:
On 05/09/2013 12:30 PM, Devananda van der Veen wrote:
>
> I don't feel like a new project is needed here -- the ongoing discussion
> about moving scheduling/orchestration logic out of nova-compute and into
> conductor-or-something-else seems to frame this discussion, too.
>
> The biggest change to Nova that I recall around adding the Baremetal
> code was the addition of the "node" aka "hypervisor_hostname" concept --
> that a single nova compute host might control more than one discrete
> thing, which thereby need to be identified as (host, node). That change
> opened the door for other cluster drivers to fit under the virt
> interface. IMBW, but I believe this is exactly what the vCenter and
> Hyper-V folks are looking at. It's also my current plan for Ironic.
> However, I also believe that this logic doesn't necessarily have to live
> under the virt API layer; I think it's a really good fit for the
> orchestration/conductor discussions....
Yep, that was the change that had the most impact on the rest of Nova.
I think there's a big difference between baremetal and these other
drivers.  In the case of baremetal, the Nova component is still in full
control of all nodes.  There's not another system that is also (or
instead of Nova) in control of the individual nodes.

With Ironic, there will be another system in control of the individual hardware nodes. It'll have an API. The operator will talk to that API (eg. for enrollment, status, and management). Nova will talk to that API, and so will some other OpenStack services. At least that's the plan ...
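In driver terms (a sketch only; the client class and its calls are placeholders, not the real python-ironicclient API), that separation might look like:

class ExternalNodeDriver(object):
    """Sketch of a virt driver fronting an external node-management API."""

    def __init__(self, client):
        # 'client' is a hypothetical client for the external service's
        # API -- the same API the operator and other services talk to.
        self.client = client

    def get_available_nodes(self):
        # One (host, node) entry per discrete hardware node, so each
        # node stays individually visible to the Nova scheduler.
        return [n['uuid'] for n in self.client.list_nodes()]

    def get_available_resource(self, nodename):
        # Resource data comes from the external service, not from
        # inspecting the local host.
        node = self.client.get_node(nodename)
        return {
            'vcpus': node['cpus'],
            'memory_mb': node['memory_mb'],
            'local_gb': node['local_gb'],
            'hypervisor_hostname': nodename,
        }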


> We were talking about this a few days ago in -nova, particularly how
> moving some of the ComputeManager logic out to conductor might fit
> together with simplifying the (host, node) complexities, and help make
> nova-compute just a thin virt API layer. Here is a very poor summary of
> what I recall...
> * AMQP topic is based on "nodename", not "hostname"
> * for local hypervisors (KVM, etc), the topic identifies the local host,
> and the local nova-compute agent subscribes to it
> * for clustered hypervisors (ironic, vCenter, etc), the topic identifies
> the unique resource, and any nova-compute which can manage that resource
> subscribes to the topic.
>
> This would also remove the SPoF that nova-compute currently has for any
> cluster-of-discrete-things it manages today (eg, baremetal).
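A rough sketch of that topic scheme (the helper and topic names are assumptions, not Nova's current wire format):

def compute_topic(hostname, nodename=None):
    """Pick the RPC topic a nova-compute should serve.

    Local hypervisors (KVM, etc.): the topic identifies the host.
    Clustered hypervisors (Ironic, vCenter, etc.): the topic identifies
    the discrete resource, so any nova-compute able to manage that
    resource can subscribe to it.
    """
    return 'compute.%s' % (nodename or hostname)

# Two nova-compute services managing the same baremetal node would both
# consume from compute.<node>, removing the single point of failure.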
Totally agreed with this.  However, I'm not sure having clustered
hypervisors expose individual resources is something they want to do.
It's in conflict with what the underlying system we're talking to wants
to be in control of.

I disagree. I think those systems are trying to pass control up the stack to us -- to use the OpenStack API, and all the richness that adds (eg, Heat), to interact with a bunch of compute resources. So it seems to me that exposing the individual resources inside the clustered hypervisor is specifically important to meeting that goal. Whether those resources are _also_ managed by something else is orthogonal to whether they are exposed as individual or aggregated resources to Nova.

But as I'm not working on vSphere or Hyper-V, perhaps I should be quiet and let them answer :)



-Deva