[openstack-dev] [Nova] virt driver architecture
Dan Wendlandt
dan at nicira.com
Mon May 13 16:58:25 UTC 2013
On Fri, May 10, 2013 at 11:36 AM, Russell Bryant <rbryant at redhat.com> wrote:
> On 05/10/2013 12:25 PM, Armando Migliaccio wrote:
> > Um...I wonder if we are saying the same thing: I am thinking that for
> > implementing a nova-api proxy one would need to provide their
> > compute_api_class, that defaults to nova.compute.api.API. Incidentally,
> > when using cells this becomes nova.compute.cells_api.ComputeCellsAPI
> > (for the top-level). By implementing the compute API class surely you
> > don't need necessarily to hook into how Nova works, no?
>
> We were talking about different things. I was talking about a REST API
> proxy ... OpenStack Compute API -> whatever API.
>
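(As an aside, my understanding is that the hook Armando mentions is just the
compute_api_class config option pointing Nova at a different class. A minimal,
purely hypothetical sketch of what plugging into it looks like -- the subclass
name and the overridden behaviour below are made up, not real code:

  # nova.conf would carry:  compute_api_class = mypkg.proxy.ProxyComputeAPI
  from nova.compute import api as compute_api

  class ProxyComputeAPI(compute_api.API):
      """Intercept selected compute API operations and hand them to an
      external back-end instead of the usual nova-compute path."""

      def create(self, context, instance_type, image_href, **kwargs):
          # A real proxy would translate the request for the external
          # system here; this sketch just falls through to the default.
          return super(ProxyComputeAPI, self).create(
              context, instance_type, image_href, **kwargs)

But as Russell points out, that still lives inside Nova's own API service,
which is different from a REST-level proxy sitting in front of the OpenStack
Compute API.)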
I may be at fault here for introducing the word "proxy", but I think it may
be confusing things a bit, as many cases where you would want to use
something like vCenter are not really a proxy.
The way I see it, there are two extremes:
1) The current virt-driver approach, with one nova-compute per "host",
where "host" is a single unit of capacity in terms of scheduling, etc. In
the KVM world a "host" is a hypervisor node. In the current vCenter driver,
it is a cluster, with vCenter exposing the cluster as one large pool of
capacity and spreading workloads evenly within it (there is a rough sketch
of what I mean by "one unit of capacity" after item 2 below). This approach
leverages all of the scheduling logic available within nova.scheduler, uses
the nova DB model, etc.
2) A true "API proxy" approach, possibly implemented using cells. All
scheduling/placement, data modeling, etc. logic would be implemented by a
back-end system such as vCenter and one cannot leverage existing nova
scheduler logic or database models. I think this would mean that the
nova-api, nova-scheduler, and nova-compute code used with the virt-driver
model would not be used, and the new cell driver would have to create its
own versions of this.
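To make extreme #1 a bit more concrete, here is roughly what I mean by a
cluster showing up as a single unit of capacity. This is an illustrative
sketch with made-up names and numbers, not the actual vCenter driver code:

  # Stand-in for a nova.virt.driver.ComputeDriver subclass.
  class SingleHostClusterDriver(object):

      def _list_cluster_hosts(self):
          # A real driver would query vCenter here; hard-coded for the sketch.
          return [{'vcpus': 16, 'memory_mb': 65536, 'local_gb': 1024},
                  {'vcpus': 16, 'memory_mb': 65536, 'local_gb': 1024}]

      def get_available_resource(self, nodename=None):
          # The whole cluster is summed into one capacity record, so the
          # scheduler sees one big "host" and vCenter/DRS handles placement
          # onto individual hypervisors within it.
          hosts = self._list_cluster_hosts()
          return {
              'hypervisor_hostname': 'cluster-1',
              'vcpus': sum(h['vcpus'] for h in hosts),
              'memory_mb': sum(h['memory_mb'] for h in hosts),
              'local_gb': sum(h['local_gb'] for h in hosts),
          }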
However, what is being proposed in the blueprint is actually something in
between these two extremes, and in fact closer to the virt-driver model. I
suspect the proposal sees the following benefits in a model that stays
closer to the existing virt-driver approach (note: I am not working closely
with the author, so I am guessing here based on my own views):
- the nova scheduler logic is actually very beneficial even when you are
using something like vCenter. It lets you do a KVM + VMware deployment,
where workloads are directed to VMware vs. KVM by the scheduler based on
disk type, host aggregates, etc. It also lets you expose different vCenter
clusters with different properties via host aggregates (e.g., an HA cluster
and a non-HA cluster); I've put a rough filter sketch after these two
points. According to the docs I've read on cells (which may be out of
date), the current cell scheduler is very simple (i.e., random), so doing
this with cells would require adding similar intelligence at the cell
scheduler layer. Additionally, I'm aware of people who would like to use
nova's pluggable scheduling to do fine-grained per-hypervisor scheduling
even on a "cluster" platform like vCenter (which would make sense for a
cluster with DRS enabled).
- there is a lot of nova code used in the virt-driver model that is still
needed when implementing Nova with a system like vCenter. This isn't just
the API WSGI + scheduling logic; it includes the code to talk to quantum,
glance, cinder, etc. There is also data modeled in the Nova DB that is
likely not modeled in the back-end system. Perhaps with significant
refactoring this shared functionality could be put into proper libraries
that could be re-used by cells of different types, but my guess is that it
would be a significant shake-up of the Nova codebase.
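Here is the kind of scheduler filter I am talking about for steering
workloads between, say, a KVM aggregate and a vCenter aggregate. It is a
simplified sketch (if I remember correctly, the in-tree extra-specs /
aggregate filters already do roughly this via a DB lookup); the extra_specs
key and the host_state attribute I read here are my own choices:

  from nova.scheduler import filters

  class HypervisorTypeFilter(filters.BaseHostFilter):
      """Only pass hosts whose hypervisor type matches the one requested
      in the flavor's extra_specs (e.g. 'hypervisor_type': 'vmware')."""

      def host_passes(self, host_state, filter_properties):
          instance_type = filter_properties.get('instance_type') or {}
          wanted = instance_type.get('extra_specs', {}).get('hypervisor_type')
          if not wanted:
              # Flavor expresses no preference, so any host will do.
              return True
          # Pretend each host advertises its hypervisor type; the real
          # filters pull this from host capabilities or aggregate metadata.
          return getattr(host_state, 'hypervisor_type', None) == wanted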
As I read the blueprint, it seems like the idea is to make some more
targeted changes. In particular:
1) remove the hard coupling in the scheduler logic between the host being
scheduled to and the queue that the scheduling message is placed into.
2) on the nova-compute side, do not limit a nova-compute to creating a
single "host" record, but allow it to dynamically update the set of
available hosts based on its own mechanism for discovering them (a rough
sketch of what that might look like follows).
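In other words, where the sketch I gave earlier collapsed a whole cluster
into one record, a driver under this blueprint could report several
schedulable "hosts" from one nova-compute, something like the following
(method and field names are my guesses, not the blueprint's):

  class MultiClusterDriver(object):

      def get_available_nodes(self):
          # Discovered dynamically from the back end (vCenter, etc.); the
          # set can change as clusters are added or removed.
          return ['cluster-prod-ha', 'cluster-prod-standard', 'cluster-dev']

      def get_available_resource(self, nodename):
          # One capacity record per discovered "host", so the scheduler
          # can place a workload on a specific cluster rather than on the
          # nova-compute service as a whole.
          caps = self._describe_cluster(nodename)
          return {
              'hypervisor_hostname': nodename,
              'vcpus': caps['vcpus'],
              'memory_mb': caps['memory_mb'],
              'local_gb': caps['local_gb'],
          }

      def _describe_cluster(self, nodename):
          # Placeholder numbers; a real driver would query the back end.
          return {'vcpus': 64, 'memory_mb': 262144, 'local_gb': 4096}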
To me this seems like a fairly clean separation of duties: the scheduler is
still in charge of deciding where workloads should run, and the
nova-compute services are still responsible for exposing the set of
available resources and for implementing requests to place a workload on a
particular host. Maybe it's more complicated than that, but at a high level
this strikes me as reasonable.
Dan
> --
> Russell Bryant
>
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~