<div dir="ltr"><div>Hi Divakar,<br><br></div><div>Would you say that bare metal provisioning already uses a kind of "Parent - Child compute" mode? I was also thinking that we could use host:node to identify a "Parent-Child" or hierarchical compute relationship. Could you please explain how your "Parent - Child Compute Node" proposal differs from bare metal provisioning?<br>
<br></div><div>Thanks! </div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-04-06 14:59 GMT+08:00 Nandavar, Divakar Padiyar <span dir="ltr"><<a href="mailto:divakar.padiyar-nandavar@hp.com" target="_blank">divakar.padiyar-nandavar@hp.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">>> Well, it seems to me that the problem is the above blueprint and the code it introduced. This is an anti-feature IMO, and probably the best solution would be to remove the above code and go back to having a single >> nova-compute managing a single vCenter cluster, not multiple ones.<br>
<br>
</div>The problem is not introduced by managing multiple clusters from a single nova-compute proxy node. Internally, the proxy driver still presents a "compute-node" for each of the clusters it manages. What we need to think about is the applicability of the live migration use case when a "cluster" is modelled as a compute node. Since the "cluster" is modelled as a compute node, the assumption is that the typical live-move use case is taken care of by the underlying "cluster" itself. However, this leaves other use cases as no-ops today, such as host maintenance mode, live move, and setting instance affinity. To resolve this, I was thinking of<br>
"A way to expose operations on individual ESX hosts, such as putting a host in maintenance mode, live move, and instance affinity, by introducing a Parent - Child compute node concept. Scheduling can be restricted to the Parent compute node, while Child compute nodes provide more drill-down on compute resources and enable additional compute operations". Any thoughts on this?<br>
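To make the idea concrete, here is a minimal Python sketch of the hierarchy; all class and method names are purely illustrative assumptions, not the actual Nova data model:<br>

```python
# Hypothetical sketch: a vCenter cluster as a schedulable "parent" compute
# node, with the ESX hosts beneath it as "child" nodes that expose
# per-host operations (maintenance mode, etc.). Names are illustrative.

class ComputeNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    @property
    def schedulable(self):
        # Scheduling is restricted to parent (cluster-level) nodes.
        return self.parent is None

    def enter_maintenance(self):
        # A per-host operation, only meaningful on child (ESX host) nodes.
        if self.parent is None:
            raise ValueError("maintenance mode applies to child hosts")
        return f"{self.name} entering maintenance mode"


cluster = ComputeNode("Cluster1")      # parent: the scheduler's target
host1 = ComputeNode("host1", cluster)  # children: per-host drill-down
host2 = ComputeNode("host2", cluster)

print([c.name for c in cluster.children])  # ['host1', 'host2']
print(cluster.schedulable, host1.schedulable)
print(host1.enter_maintenance())
```

The point of the split is that the scheduler only ever sees Cluster1, while host-level operations become addressable again through the children.<br>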
<br>
Thanks,<br>
Divakar<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
-----Original Message-----<br>
From: Jay Pipes [mailto:<a href="mailto:jaypipes@gmail.com">jaypipes@gmail.com</a>]<br>
Sent: Sunday, April 06, 2014 2:02 AM<br>
To: <a href="mailto:openstack-dev@lists.openstack.org">openstack-dev@lists.openstack.org</a><br>
Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute<br>
Importance: High<br>
<br>
On Fri, 2014-04-04 at 13:30 +0800, Jay Lau wrote:<br>
><br>
><br>
><br>
> 2014-04-04 12:46 GMT+08:00 Jay Pipes <<a href="mailto:jaypipes@gmail.com">jaypipes@gmail.com</a>>:<br>
> On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:<br>
> > Thanks Jay and Chris for the comments!<br>
> ><br>
> > @Jay Pipes, I think that we still need to enable "one nova compute<br>
> > live migration" as one nova compute can manage multiple clusters<br>
> > and VMs can be migrated between those clusters managed by one<br>
> > nova compute.<br>
><br>
><br>
> Why, though? That is what I am asking... seems to me like this is an<br>
> anti-feature. What benefit does the user get from moving an instance<br>
> from one VCenter cluster to another VCenter cluster if the two<br>
> clusters are on the same physical machine?<br>
> @Jay Pipes, for VMware, one physical machine (ESX server) can only<br>
> belong to one vCenter cluster, so we may have the following scenario.<br>
><br>
> DC<br>
> |<br>
> |---Cluster1<br>
> |      |<br>
> |      |---host1<br>
> |<br>
> |---Cluster2<br>
> |      |<br>
> |      |---host2<br>
><br>
><br>
> Then when using the VCDriver, one nova compute can manage both<br>
> Cluster1 and Cluster2, but then I cannot migrate a VM from host2 to<br>
> host1 ;-(<br>
><br>
><br>
> The bp was introduced by<br>
> <a href="https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service" target="_blank">https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service</a><br>
<br>
Well, it seems to me that the problem is the above blueprint and the code it introduced. This is an anti-feature IMO, and probably the best solution would be to remove the above code and go back to having a single nova-compute managing a single vCenter cluster, not multiple ones.<br>
<br>
-jay<br>
<br>
<br>
<br>
_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div dir="ltr"><div>Thanks,<br><br></div>Jay<br></div>
</div>