[openstack-dev] [Nova] virt driver architecture

Nandavar, Divakar Padiyar (STSD) divakar.padiyar-nandavar at hp.com
Thu May 9 17:06:59 UTC 2013


I agree with Devananda here.  Also, I would like to highlight the following with respect to the suggested blueprints on the proxy compute model:



*         All the enhancements being discussed are in line with Nova's constructs and work with all of Nova's features.

*         What we are trying to bring out with the blueprint is leveraging the existing logical OpenStack construct "nova-compute".

*         We are proposing to represent a Cluster or Resource Pool as a "nova-compute", keeping all of nova-compute's traits intact while at the same time exposing hypervisor-provided capabilities such as DRS and HA.

*         With this blueprint implemented, no change would be required to nova-scheduler; all the scheduler logic would work the same way as today, with the added advantage of leveraging hypervisor features.

*         More use cases become possible with "host aggregates" and a Cluster or Resource Pool represented as "nova-compute".  For example: adding tenant affinity to a particular cluster or resource pool (or a group of them), or adding the capability to deploy new instances to a group of DRS/HA-enabled clusters.

*         We have other blueprints covering the use of Glance, Cinder, etc. along with vCenter, which were demoed during the Havana Summit.  The following link provides additional details on the related blueprints:  Proxy Compute Service managing multiple VMware vCenter Clusters and Resource Pools<http://www.openstack.org/summit/portland-2013/session-videos/presentation/proxy-compute-service-managing-multiple-vmware-vcenter-clusters-and-resource-pools>
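To make the proxy model concrete: a nova-compute service of this kind would be pointed at a vCenter cluster via its driver configuration. The fragment below is only a rough illustration; the option names follow the Grizzly-era vmwareapi driver and should be treated as assumptions, not verified settings:

```ini
# Hypothetical nova.conf fragment: one nova-compute service acting as a
# proxy for an entire vCenter cluster. Option names are assumptions.
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver
vmwareapi_host_ip = vcenter.example.com
vmwareapi_host_username = administrator
vmwareapi_host_password = secret
vmwareapi_cluster_name = DRS-Cluster-1
```

With a configuration like this, the scheduler sees the whole cluster as one "nova-compute" and vCenter itself decides instance placement within the cluster.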



Hope this helps



Thanks,
Divakar


From: Devananda van der Veen [mailto:devananda.vdv at gmail.com]
Sent: Thursday, May 09, 2013 10:01 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] virt driver architecture
Importance: High

On Thu, May 9, 2013 at 8:45 AM, Sean Dague <sean at dague.net<mailto:sean at dague.net>> wrote:
On 05/09/2013 10:53 AM, Russell Bryant wrote:
Greetings,

I've been growing concerned with the evolution of Nova's architecture in
terms of the virt drivers and the impact they have on the rest of Nova.
I've heard these concerns from others in private conversation.  Another
thread on the list today pushed me to where I think it's time we talk
about it:

http://lists.openstack.org/pipermail/openstack-dev/2013-May/008801.html

At our last design summit, there was a discussion of adding a new virt
driver to support oVirt (RHEVM).  That seems inappropriate for Nova to
me.  oVirt is a full virt management system and uses libvirt+KVM
hypervisors.  We use libvirt+KVM directly.  Punting off to yet another
management system that wants to manage all of the same things as
OpenStack seems like a broken architecture.  In fact, oVirt has done a
lot of work to *consume* OpenStack resources (glance, quantum), which
seems completely appropriate.

+1

Things get more complicated if we take that argument and apply it to
other drivers that we already have in Nova.  In particular, I think this
applies to the VMware (vCenter mode, not ESX mode) and Hyper-V drivers.
I'm not necessarily proposing that those drivers work significantly
different.  I don't think that's practical if we want to support these
systems.

We now have two different types of drivers: those that manage individual
hypervisor nodes, and those that proxy to much more complex systems.

We need to be very aware of what's going on in all virt drivers, even
the ones we don't care about as much because we don't use them.  We also
need to continue to solidify the virt driver interface and be extremely
cautious when these drivers require changes to other parts of Nova.
Above all, let's make sure that evolution in this area is well thought
out and done by conscious decision.

Comments airing more specific concerns in this area would be appreciated.

I think we learned a really important lesson in baremetal: putting a different complex management system underneath the virt driver interface is a bad fit, requires nova to do unnatural things, and just doesn't make anyone happy at the end of the day. That's since resulted in baremetal spinning out to a new incubated project, Ironic, which I think is really the right long term approach.

I think we need to take that lesson for what it was, and realize these virt cluster drivers are very much the same kind of problem. They are better served living in some new incubated effort instead of force fitting into the nova-compute virt layer and driving a lot more complexity into nova.

I don't feel like a new project is needed here -- the ongoing discussion about moving scheduling/orchestration logic out of nova-compute and into conductor-or-something-else seems to frame this discussion, too.

The biggest change to Nova that I recall around adding the Baremetal code was the addition of the "node" aka "hypervisor_hostname" concept -- that a single nova compute host might control more than one discrete thing, which thereby need to be identified as (host, node). That change opened the door for other cluster drivers to fit under the virt interface. IMBW, but I believe this is exactly what the vCenter and Hyper-V folks are looking at. It's also my current plan for Ironic. However, I also believe that this logic doesn't necessarily have to live under the virt API layer; I think it's a really good fit for the orchestration/conductor discussions....
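As a rough sketch of the (host, node) idea above: a single nova-compute service can report several discrete nodes, each of which the scheduler then tracks as its own (host, node) pair. The class and method names below are illustrative assumptions, not Nova's actual driver code (though `get_available_nodes()` is a real method on Nova's ComputeDriver interface):

```python
# Illustrative sketch only: one nova-compute service fronting several
# discrete resources, each exposed as a separate (host, node) pair.
# Class and names are assumptions, not Nova's real driver code.

class ClusterDriver:
    """A virt driver managing more than one discrete thing."""

    def __init__(self, hostname, nodes):
        self.hostname = hostname      # the nova-compute service's host
        self.nodes = list(nodes)      # discrete resources it controls

    def get_available_nodes(self):
        # Each nodename becomes its own (host, node) entry in the
        # scheduler's view, as the baremetal driver introduced.
        return self.nodes

    def host_node_pairs(self):
        return [(self.hostname, node) for node in self.nodes]


driver = ClusterDriver("compute1", ["bm-node-01", "bm-node-02"])
print(driver.host_node_pairs())
# [('compute1', 'bm-node-01'), ('compute1', 'bm-node-02')]
```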

We were talking about this a few days ago in -nova, particularly how moving some of the ComputeManager logic out to conductor might fit together with simplifying the (host, node) complexities, and help make nova-compute just a thin virt API layer. Here is a very poor summary of what I recall...
* AMQP topic is based on "nodename", not "hostname"
* for local hypervisors (KVM, etc), the topic identifies the local host, and the local nova-compute agent subscribes to it
* for clustered hypervisors (ironic, vCenter, etc), the topic identifies the unique resource, and any nova-compute which can manage that resource subscribes to the topic.

This would also remove the SPoF that nova-compute currently has for any cluster-of-discrete-things it manages today (eg, baremetal).
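The node-keyed topic scheme above could look something like this minimal sketch; the topic format and helper name are assumptions for illustration only:

```python
# Minimal sketch of node-based AMQP topic routing, per the bullets
# above. Topic format and helper name are assumptions.

def compute_topic(nodename):
    # Key the topic on the managed resource (node), not the service host.
    return "compute.%s" % nodename

# Local hypervisor (e.g. KVM): the node is the host itself, so only
# that host's nova-compute agent subscribes.
assert compute_topic("kvm-host-07") == "compute.kvm-host-07"

# Clustered hypervisor (e.g. Ironic, vCenter): the node is the shared
# resource; any nova-compute able to manage it may subscribe, so losing
# one agent no longer makes the resource unreachable.
assert compute_topic("vcenter-cluster-a") == "compute.vcenter-cluster-a"
```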


Thoughts?

-Devananda
