<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, May 10, 2013 at 11:36 AM, Russell Bryant <span dir="ltr"><<a href="mailto:rbryant@redhat.com" target="_blank">rbryant@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On 05/10/2013 12:25 PM, Armando Migliaccio wrote:<br>
> Um...I wonder if we are saying the same thing: I am thinking that for<br>
> implementing a nova-api proxy one would need to provide their<br>
> compute_api_class, that defaults to nova.compute.api.API. Incidentally,<br>
> when using cells this becomes nova.compute.cells_api.ComputeCellsAPI<br>
> (for the top-level). By implementing the compute API class surely you<br>
> don't need necessarily to hook into how Nova works, no?<br>
<br>
> We were talking about different things. I was talking about a REST API
> proxy ... OpenStack Compute API -> whatever API.

I may be at fault here for introducing the word "proxy", but I think it
may be confusing things a bit, as many of the cases where you would want
to use something like vCenter are not really a proxy.

The way I see it, there are two extremes:

1) The current virt-driver approach, with one nova-compute per "host",
where a "host" is a single unit of capacity in terms of scheduling, etc.
In the KVM world a "host" is a hypervisor node. In the current vCenter
driver, it is a cluster, with vCenter exposing one large pool of capacity
and spreading workloads evenly across it. This approach leverages all of
the scheduling logic available within nova.scheduler, uses the nova DB
model, etc. (A rough sketch of how a driver might report this single unit
of capacity follows the list below.)

2) A true "API proxy" approach, possibly implemented using cells. All
scheduling/placement, data modeling, etc. logic would be implemented by a
back-end system such as vCenter, and one could not leverage the existing
nova scheduler logic or database models. I think this would mean that the
nova-api, nova-scheduler, and nova-compute code used with the virt-driver
model would not be used, and the new cell driver would have to provide
its own versions of them.
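
To make extreme 1 concrete: in the virt-driver model each nova-compute
reports the capacity of the one "host" it manages, and for the vCenter
driver that "host" is the whole cluster. The sketch below is purely
illustrative; it only mimics the shape of the resource dict a nova virt
driver reports, and the class name and numbers are made up, not nova's
actual code:

    # Illustrative stand-in for a virt driver that presents an entire
    # vCenter cluster as a single unit of capacity. Roughly the shape of
    # the dict nova virt drivers return when reporting resources; the
    # class and the numbers are hypothetical.

    class ClusterAsOneHostDriver(object):
        """Pretends a whole cluster is one schedulable "host"."""

        def get_available_resource(self, nodename):
            # A real driver would query vCenter and sum the capacity of
            # every hypervisor in the cluster.
            return {
                'hypervisor_hostname': nodename,   # e.g. the cluster name
                'vcpus': 256,                      # total across the cluster
                'memory_mb': 1048576,
                'local_gb': 20000,
                'vcpus_used': 0,
                'memory_mb_used': 0,
                'local_gb_used': 0,
            }


    if __name__ == '__main__':
        driver = ClusterAsOneHostDriver()
        print(driver.get_available_resource('cluster-1'))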

However, what is being proposed in the blueprint is actually something in
between these two extremes, and in fact closer to the virt-driver model.
I suspect the proposal sees the following benefits in a model that is
closer to the existing virt-driver (note: I am not working closely with
the author, so I am guessing here based on my own views):

- The nova scheduler logic is actually very beneficial even when you are
using something like vCenter. It lets you do a KVM + VMware deployment,
where workloads are directed to VMware vs. KVM by the scheduler based on
disk type, host aggregates, etc. It also lets you expose different
vCenter clusters with different properties via host aggregates (e.g., an
HA cluster and a non-HA cluster); a rough sketch of the kind of filter
involved follows this list. According to the docs I've read on cells
(which may be out of date), the current cell scheduler is very simple
(i.e., random), so doing this with cells would require adding similar
intelligence at the cell scheduler layer. Additionally, I'm aware of
people who would like to use nova's pluggable scheduling to do
fine-grained per-hypervisor scheduling on a "cluster" platform like
vCenter (which would make sense for a cluster with DRS enabled).

- There is a lot of nova code used in the virt-driver model that is still
needed when implementing Nova with a system like vCenter. This isn't just
the API WSGI + scheduling logic; it includes the code to talk to quantum,
glance, cinder, etc. There is also data modeled in the Nova DB that is
likely not modeled in the back-end system. Perhaps with significant
refactoring this shared functionality could be put into proper libraries
that could be re-used in cells of different types, but my guess is that
it would be a significant shake-up of the Nova codebase.
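
For illustration, here is a rough, self-contained sketch of the
host-aggregate style of filtering mentioned above: a request carries some
desired capabilities (e.g. "HA"), and only hosts whose aggregate metadata
matches pass. It only mirrors the shape of nova's pluggable host filters
(a host_passes(host, request) style hook); the names and metadata keys
are invented for the example, not nova's actual filter code:

    # Hypothetical, simplified stand-in for a scheduler filter that
    # matches requested extra specs against host-aggregate metadata.

    class AggregateMetadataFilter(object):
        """Pass only hosts whose aggregate metadata satisfies the request."""

        def host_passes(self, host, request):
            wanted = request.get('extra_specs', {})
            metadata = host.get('aggregate_metadata', {})
            return all(metadata.get(k) == v for k, v in wanted.items())


    if __name__ == '__main__':
        hosts = [
            {'name': 'vcenter-ha-cluster',
             'aggregate_metadata': {'ha': 'true'}},
            {'name': 'kvm-node-12',
             'aggregate_metadata': {'ha': 'false'}},
        ]
        request = {'extra_specs': {'ha': 'true'}}  # e.g. from the flavor
        f = AggregateMetadataFilter()
        print([h['name'] for h in hosts if f.host_passes(h, request)])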

As I read the blueprint, it seems like the idea is to make some more
targeted changes. In particular:

1) Remove the hard coupling in the scheduler logic between the host being
scheduled to and the queue that the scheduling message is placed into.

2) On the nova-compute side, do not limit a nova-compute to creating a
single "host" record, but allow it to dynamically update the set of
available hosts based on its own mechanism for discovering them. (A
combined sketch of both changes follows this list.)
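
Purely to illustrate those two changes together: below, one compute
service owns several discovered "hosts" (e.g. vCenter clusters), while
the scheduler still picks a specific host and then routes the request to
whichever service's queue owns it. This is not the blueprint's code or
nova's real RPC layer; the class names, the discover_hosts callback, and
the queue routing are assumptions made up for the example:

    # Hypothetical sketch: one compute service exposes many discovered
    # hosts, and the scheduler sends a request for a chosen host to the
    # queue of the service that owns it. All names are invented.

    class MultiHostComputeService(object):
        def __init__(self, queue, discover_hosts):
            self.queue = queue                    # queue this service listens on
            self.discover_hosts = discover_hosts  # backend-specific discovery

        def report_hosts(self):
            # e.g. ask vCenter for its clusters and report each as a host.
            return {host: self.queue for host in self.discover_hosts()}


    def schedule(host_to_queue, pick):
        host = pick(host_to_queue.keys())   # the scheduler still picks the host...
        queue = host_to_queue[host]         # ...but the message goes to the queue
        return host, queue                  # of the owning service.


    if __name__ == '__main__':
        svc = MultiHostComputeService(
            queue='compute.vcenter-1',
            discover_hosts=lambda: ['cluster-a', 'cluster-b'])
        mapping = svc.report_hosts()
        # A trivial "scheduler": pick the first host alphabetically.
        print(schedule(mapping, pick=lambda hosts: sorted(hosts)[0]))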

To me these seem like fairly clean separations of duty: the scheduler is
still in charge of deciding where workloads should run, and the set of
nova-computes is still responsible for exposing the set of available
resources and for implementing requests to place a workload on a
particular host resource. Maybe it's more complicated than that, but at a
high level this strikes me as reasonable.

Dan
<div class="HOEnZb"><div class="h5"><br>
--<br>
Russell Bryant<br>
<br>

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~