[openstack-dev] [Nova] virt driver architecture

Dan Wendlandt dan at nicira.com
Tue May 14 21:57:48 UTC 2013


On Tue, May 14, 2013 at 1:49 PM, Russell Bryant <rbryant at redhat.com> wrote:

> On 05/13/2013 12:58 PM, Dan Wendlandt wrote:
> >
> >
> >
> > On Fri, May 10, 2013 at 11:36 AM, Russell Bryant <rbryant at redhat.com
> > <mailto:rbryant at redhat.com>> wrote:
>
> >
> >
> > I may be at fault here for introducing the word "proxy", but I think it
> > may be confusing things a bit, as many cases where you would want to use
> > something like vCenter are not really a proxy.
> >
> > The way I see it, there are two extremes:
> > 1) The current virt-driver approach, with one nova-compute per
> > "host", where "host" is a single unit of capacity in terms of
> > scheduling, etc.  In KVM-world a "host" is a hypervisor node.  In
> > current vCenter driver, this is a cluster, with vCenter exposing one
> > large capacity and spreading workloads evenly.  This approach leverages
> > all scheduling logic available within nova.scheduler, uses nova DB
> > model, etc.
>
> I would add that we assume that Nova is in complete control of the
> compute resources in this case, meaning that there is not another system
> making changes to instances.  That's where we start to run into problems
> with putting the cluster-based drivers at this level.
>

I think we agree here, or at least that anything done to the compute
resource should be transparent to Nova.  For example, if libvirt
auto-starts the VM on a host reboot, that is OK, as it is transparent to
Nova.  Your example below about volume support is an example of a change
that is not transparent to Nova/Cinder.  Is that in line with your
thinking?
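
To make approach (1) concrete, here is a minimal sketch of what it looks
like at the driver level.  This is illustrative only:
get_available_resource() is the real driver interface method, but the
helper and the values are made up, not the actual vCenter driver code.

from nova.virt import driver


class ClusterDriver(driver.ComputeDriver):
    """Sketch: report an entire cluster as a single unit of capacity."""

    def get_available_resource(self, nodename):
        # Sum capacity across the whole cluster and report it as if it
        # were one big hypervisor "host".  Whatever DRS does underneath
        # (vMotion, auto-restart, ...) stays invisible to Nova, which is
        # the "transparent" behavior described above.
        stats = self._get_cluster_stats()  # hypothetical helper
        return {'vcpus': stats['vcpus'],
                'memory_mb': stats['memory_mb'],
                'local_gb': stats['local_gb'],
                'vcpus_used': stats['vcpus_used'],
                'memory_mb_used': stats['memory_mb_used'],
                'local_gb_used': stats['local_gb_used'],
                'hypervisor_type': 'VMware vCenter Server',
                'hypervisor_version': 5001000,
                'hypervisor_hostname': self._cluster_name,
                'cpu_info': '{}'}

The scheduler then just sees one compute node record per cluster, and all
of the existing filter/weight machinery applies to it unchanged.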


>
> > 2) A true "API proxy" approach, possibly implemented using cells.  All
> > scheduling/placement, data modeling, etc. logic would be implemented by
> > a back-end system such as vCenter and one cannot leverage existing nova
> > scheduler logic or database models.   I think this would mean that the
> > nova-api, nova-scheduler, and nova-compute code used with the
> > virt-driver model would not be used, and the new cell driver would have
> > to create its own versions of this.
>
> I would actually break this up into two cases.
>
> 2.a) A true API proxy. You have an existing virt management solution
> (vCenter, oVirt, whatever), and you want to interact with it using the
> OpenStack APIs.  For this, I would propose not using Nova (or any
> OpenStack component) at all.  Instead, I would implement the API in the
> project/product itself, or use something built to be an API proxy, like
> deltacloud.


> 2.b) A cell-based nova deployment.  A cell may be a compute cell (what
> exists today) where Nova is managing all of the compute resources.
> (Here is where the proposal comes in) A cell could also be a different,
> existing virt management solution.  In that case, the other system is
> responsible for everything that a compute cell does today, but does it in
> its own way and is responsible for reporting state up to the API cell.
> Systems would of course be welcome to reuse Nova components if
> applicable, such as nova-scheduler.
>

Yeah, I agree.



>
> > The proposal [1] is actually something
> > in between these two extremes and in fact closer to the virt-driver
> > model.  I suspect the proposal sees the following benefits to a model
> > that is closer to the existing virt-driver (note: I am not working
> > closely with the author, so I am guessing here based on my own views):
> > -  the nova scheduler logic is actually very beneficial even when you
> > are using something like vCenter.  It lets you do a KVM + vmware
> > deployment, where workloads are directed to vmware vs. KVM by the
> > scheduler based on disk type, host aggregates, etc.  It also lets you
> > expose different vCenter clusters with different properties via host
> > aggregates (e.g., an HA cluster and a non-HA cluster).  According to the
> > docs I've read on cells (may be out of date), it seems like the current
> > cell scheduler is very simple (i.e., random), so doing this with cells
> > would require adding similar intelligence at the cell scheduler layer.
> > Additionally, I'm aware of people who would like to use nova's pluggable
> > scheduling to even do fine-grain per-hypervisor scheduling on a
> > "cluster" platform like vCenter (which for a cluster with DRS enabled
> > would make sense).
>
> You could re-use the host scheduler in your cell if you really wanted to.
>
> The cell scheduler will have filter/weight support just like the host
> scheduler very soon, so we should be able to have very intelligent cell
> scheduling, just like host scheduling.
>
> https://review.openstack.org/#/c/16221/


Ok, that makes sense.
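
To make the KVM + vmware routing point above concrete, here is the kind of
thing I had in mind.  This is purely a sketch of a custom host filter,
nothing that exists in the tree today; the extra-spec name and the
host_state attribute it checks are assumptions on my part.

from nova.scheduler import filters


class HypervisorTypeFilter(filters.BaseHostFilter):
    """Sketch: only pass hosts whose hypervisor matches the flavor."""

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type') or {}
        extra_specs = instance_type.get('extra_specs', {})
        # 'capabilities:hypervisor_type' is an invented extra-spec name
        # used only for this example.
        wanted = extra_specs.get('capabilities:hypervisor_type')
        if not wanted:
            return True  # flavor doesn't care, any host will do
        # Assumes the host state exposes the hypervisor type reported by
        # the driver.
        return getattr(host_state, 'hypervisor_type', None) == wanted

Host aggregates plus AggregateInstanceExtraSpecsFilter can get you much
the same effect without writing code; the point is just that all of this
machinery keeps working when the "host" is really a cluster.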


>
>
>
> Like I mentioned before, my intent is to look beyond this blueprint.  We
> need to identify where we want to go so that the line can be drawn
> appropriately in the virt driver layer for future proposals.
>

Yup, I think we agree that this is the key question.  Some back-ends will
require changes to the virt layer that are natural extensions (e.g.,
scheduling across (host, node) pairs as you mentioned) and some back-ends
will want to do something so fundamentally different that it doesn't make
sense to unnaturally contort the virt-driver API to match it, at which
point we need a different plan (e.g., cells).  In my last email I was just
trying to say that splitting host and node for scheduling struck me as a
reasonable extension of the existing virt-driver API, but it sounds like
my nova knowledge is out of date and that has happened already :)
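
(For anyone not familiar with the split, it looks roughly like this at the
driver interface.  get_available_nodes()/get_available_resource() are the
real methods; the rest is an illustrative sketch rather than code from any
real driver.)

from nova.virt import driver


class MultiNodeClusterDriver(driver.ComputeDriver):
    """Sketch: one nova-compute service exposing several nodes."""

    def get_available_nodes(self):
        # Each name returned here becomes a separate (host, node) pair
        # that the scheduler can target independently.
        return self._list_cluster_names()      # hypothetical helper

    def get_available_resource(self, nodename):
        # Capacity for just the node being asked about, rather than one
        # aggregate number for everything behind the service.
        return self._stats_for(nodename)       # hypothetical helper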



> There are other blueprints that perhaps do a better job at demonstrating
> the problems with putting these systems at the existing virt layer.  As
> an example, check out the vCenter blueprint related to volume support [2].
>
> To make things work, this blueprint proposes that Nova gets volume
> connection info *for every possible host* that the volume may end up
> getting connected to.  Presumably this is because the system wants to
> move things around on its own.  This feels very, very wrong to me.
>
> If another system wants to manage the VMs, that's fine, but I'd rather
> Nova not *also* think it's in charge, which is what we have right now.
>

I need to learn more about Nova/Cinder integration in general, but at a
high level I agree that such an approach seems to be closer to the
"unnatural contortion" side of things.

Dan



>
> [1]
>
> https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
> [2]
> https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver
>
> --
> Russell Bryant
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~