<div dir="ltr"><div><div><div><div><div><div><div>Thanks Jay Pipes.<br><br></div>If we go back to having a single nova-compute manage a single vCenter cluster, there may still be problems in a large-scale vCenter deployment. There are cases we cannot handle:<br>
1) The VCDriver can also manage multiple resource pools with a single nova compute. A resource pool is a separate concept: multiple resource pools can be created in one vCenter cluster, or even in one ESX host. In a large-scale cluster there can be thousands of resource pools, which would drive the admin crazy with configuration. ;-)<br>
</div>2) How do we manage an ESX host that does not belong to any cluster or resource pool? For example:<br></div>DC<br> |<br></div> |--- ESX host1<br> |<br></div> |--- ESX host2<br><br></div>3) There is another bp, <span class=""><a href="https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory">https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory</a></span><span class="">, filed by Shawn. This bp wants to report all resources, including clusters, resource pools, and ESX hosts, and it can be treated as the base for the VCDriver: if the VCDriver can see all resources, then it becomes very easy to do what we want.<br>
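To make point 1 concrete, here is a rough nova.conf sketch of one nova-compute pointed at a vCenter with many clusters and resource pools. The option names follow the vmwareapi driver options of this era (cluster_name is a multi-valued option); treat the exact names and the resource-pool syntax as assumptions, not a definitive reference.<br>

```ini
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmwareapi]
host_ip = vcenter.example.com
host_username = administrator
host_password = secret
# cluster_name is multi-valued: one line per cluster (or resource pool)
# this nova-compute should manage. With thousands of resource pools in a
# large deployment, maintaining this list by hand becomes unmanageable.
cluster_name = Cluster1
cluster_name = Cluster2
cluster_name = Cluster1/ResourcePool1
cluster_name = Cluster1/ResourcePool2
```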
<br></span></div><span class="">Thanks!<br></span></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-04-06 4:32 GMT+08:00 Jay Pipes <span dir="ltr"><<a href="mailto:jaypipes@gmail.com" target="_blank">jaypipes@gmail.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On Fri, 2014-04-04 at 13:30 +0800, Jay Lau wrote:<br>
><br>
><br>
><br>
> 2014-04-04 12:46 GMT+08:00 Jay Pipes <<a href="mailto:jaypipes@gmail.com">jaypipes@gmail.com</a>>:<br>
> On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:<br>
> > Thanks Jay and Chris for the comments!<br>
> ><br>
> > @Jay Pipes, I think that we still need to enable "one nova<br>
> compute<br>
> > live migration" as one nova compute can manage multiple<br>
> clusters and<br>
> > VMs can be migrated between those clusters managed by one<br>
> nova<br>
> > compute.<br>
><br>
><br>
> Why, though? That is what I am asking... seems to me like this<br>
> is an<br>
> anti-feature. What benefit does the user get from moving an<br>
> instance<br>
> from one VCenter cluster to another VCenter cluster if the two<br>
> clusters<br>
> are on the same physical machine?<br>
> @Jay Pipes, for VMWare, one physical machine (ESX server) can only<br>
> belong to one VCenter cluster, so we may have following scenarios.<br>
><br>
> DC<br>
> |<br>
><br>
> |---Cluster1<br>
> | |<br>
><br>
> | |---host1<br>
> |<br>
><br>
> |---Cluster2<br>
> |<br>
><br>
> |---host2<br>
><br>
><br>
> Then when using the VCDriver, I can use one nova compute to manage both<br>
> Cluster1 and Cluster2, which means I cannot migrate a VM from host2 to<br>
> host1 ;-(<br>
><br>
><br>
> The bp was introduced by<br>
> <a href="https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service" target="_blank">https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service</a><br>
<br>
</div></div>Well, it seems to me that the problem is the above blueprint and the<br>
code it introduced. This is an anti-feature IMO, and probably the best<br>
solution would be to remove the above code and go back to having a<br>
single nova-compute managing a single vCenter cluster, not multiple<br>
ones.<br>
<span class="HOEnZb"><font color="#888888"><br>
-jay<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
<br>
_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div dir="ltr"><div>Thanks,<br><br></div>Jay<br></div>
</div>