<html><body>
<p><font size="2" face="sans-serif">We used to have one compute service corresponding to multiple hypervisors (the host-and-nodes concept).</font><br>
<font size="2" face="sans-serif">The major issue on our platform is that we cannot run the nova-compute service on the hypervisor itself, so we have to run nova-compute somewhere else and talk to the</font><br>
<font size="2" face="sans-serif">hypervisor's management API over REST.</font><br>
<font size="2" face="sans-serif">That means running multiple compute services outside of our hypervisors, and those compute services were hard for us to manage.</font><br>
<font size="2" face="sans-serif">We had no choice, though: Nova can only migrate an instance to another host, not to another node, so we implemented it accordingly.</font><br>
<font size="2" face="sans-serif">If we could support host + node as a migration target, it might also be helpful for hypervisors with different architectures.</font><br>
<br>
<font size="2" face="sans-serif">The point is whether we are able to expose the internals (that is, not only the host concept but also the node concept) to the outside.</font><br>
<font size="2" face="sans-serif">Since live migration is an admin-only feature, could we expose the node concept to administrators and let them decide?</font><br>
<br>
<font size="2" face="sans-serif">Best Regards! <br>
<br>
Kevin (Chen) Ji 纪 晨<br>
<br>
Engineer, zVM Development, CSTL<br>
Notes: Chen CH Ji/China/IBM@IBMCN Internet: jichenjc@cn.ibm.com<br>
Phone: +86-10-82454158<br>
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC </font><br>
<br>
<br>
<font size="1" color="#5F5F5F" face="sans-serif">From: </font><font size="1" face="sans-serif">Jay Lau <jay.lau.513@gmail.com></font><br>
<font size="1" color="#5F5F5F" face="sans-serif">To: </font><font size="1" face="sans-serif">"OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>, </font><br>
<font size="1" color="#5F5F5F" face="sans-serif">Date: </font><font size="1" face="sans-serif">04/06/2014 07:02 PM</font><br>
<font size="1" color="#5F5F5F" face="sans-serif">Subject: </font><font size="1" face="sans-serif">Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute</font><br>
<hr width="100%" size="2" align="left" noshade style="color:#8091A5; "><br>
<br>
<br>
<font size="3" face="serif">Hi Divakar,<br>
</font><br>
<font size="3" face="serif">Can I say that bare metal provisioning now uses a kind of "Parent - Child compute" mode? I was also thinking that we could use host:node to identify a kind of "Parent-Child" or hierarchical compute. Could you please describe the differences between your "Parent - Child Compute Node" and bare metal provisioning?<br>
</font><br>
<font size="3" face="serif">Thanks! </font><br>
<font size="3" face="serif"><br>
</font><br>
<font size="3" face="serif">2014-04-06 14:59 GMT+08:00 Nandavar, Divakar Padiyar <</font><a href="mailto:divakar.padiyar-nandavar@hp.com" target="_blank"><font size="3" color="#0000FF" face="serif"><u>divakar.padiyar-nandavar@hp.com</u></font></a><font size="3" face="serif">>:</font>
<ul style="padding-left: 9pt"><font size="3" face="serif">>> Well, it seems to me that the problem is the above blueprint and the code it introduced. This is an anti-feature IMO, and probably the best solution would be to remove the above code and go back to having a single nova-compute managing a single vCenter cluster, not multiple ones.<br>
</font><br>
<font size="3" face="serif">The problem is not introduced by managing multiple clusters from a single nova-compute proxy node. Internally, the proxy driver still presents a "compute node" for each cluster it manages. What we need to think about is the applicability of the live-migration use case when a "cluster" is modelled as a compute node: since the cluster is modelled as a compute node, the typical live-move use case is assumed to be handled by the underlying cluster itself. As a consequence, several other use cases are no-ops today, such as host maintenance mode, live move, and setting instance affinity. To resolve this, I was thinking of<br>
a way to expose operations on individual ESX hosts (putting a host into maintenance mode, live move, instance affinity, etc.) by introducing a "Parent - Child" compute node concept. Scheduling would be restricted to the parent compute node, while the child compute nodes would provide a more detailed drill-down into the compute and enable additional compute operations. Any thoughts on this?<br>
<br>
Thanks,<br>
Divakar</font><br>
<font size="3" face="serif"><br>
<br>
-----Original Message-----<br>
From: Jay Pipes [mailto:</font><a href="mailto:jaypipes@gmail.com"><font size="3" color="#0000FF" face="serif"><u>jaypipes@gmail.com</u></font></a><font size="3" face="serif">]<br>
Sent: Sunday, April 06, 2014 2:02 AM<br>
To: </font><a href="mailto:openstack-dev@lists.openstack.org"><font size="3" color="#0000FF" face="serif"><u>openstack-dev@lists.openstack.org</u></font></a><font size="3" face="serif"><br>
Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute<br>
Importance: High<br>
<br>
On Fri, 2014-04-04 at 13:30 +0800, Jay Lau wrote:<br>
> 2014-04-04 12:46 GMT+08:00 Jay Pipes &lt;<a href="mailto:jaypipes@gmail.com">jaypipes@gmail.com</a>&gt;:<br>
> On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:<br>
> > Thanks Jay and Chris for the comments!<br>
> ><br>
> > @Jay Pipes, I think that we still need to enable "one nova compute live migration", as one nova compute can manage multiple clusters and VMs can be migrated between those clusters managed by one nova compute.<br>
><br>
> Why, though? That is what I am asking... seems to me like this is an anti-feature. What benefit does the user get from moving an instance from one VCenter cluster to another VCenter cluster if the two clusters are on the same physical machine?<br>
><br>
> @Jay Pipes, for VMWare, one physical machine (ESX server) can only belong to one VCenter cluster, so we may have the following scenario:<br>
><br>
> DC<br>
> |---Cluster1<br>
> |&nbsp;&nbsp;&nbsp;&nbsp;|---host1<br>
> |---Cluster2<br>
> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|---host2<br>
><br>
> Then when using VCDriver, I can use one nova compute to manage both Cluster1 and Cluster2, but this means I cannot migrate a VM from host2 to host1 ;-(<br>
><br>
> The bp was introduced by<br>
> <a href="https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service" target="_blank">https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service</a><br>
<br>
Well, it seems to me that the problem is the above blueprint and the code it introduced. This is an anti-feature IMO, and probably the best solution would be to remove the above code and go back to having a single nova-compute managing a single vCenter cluster, not multiple ones.<br>
<br>
-jay<br>
<br>
<br>
<br>
_______________________________________________<br>
OpenStack-dev mailing list</font><font size="3" color="#0000FF" face="serif"><u><br>
</u></font><a href="mailto:OpenStack-dev@lists.openstack.org"><font size="3" color="#0000FF" face="serif"><u>OpenStack-dev@lists.openstack.org</u></font></a><font size="3" color="#0000FF" face="serif"><u><br>
</u></font><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank"><font size="3" color="#0000FF" face="serif"><u>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</u></font></a></ul>
<font size="3" face="serif"><br>
<br>
<br>
-- </font><br>
<font size="3" face="serif">Thanks,<br>
</font><br>
<font size="3" face="serif">Jay</font><br>
</body></html>