<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">2014-04-04 12:46 GMT+08:00 Jay Pipes <span dir="ltr"><<a href="mailto:jaypipes@gmail.com" target="_blank">jaypipes@gmail.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="">On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:<br>
> Thanks Jay and Chris for the comments!<br>
><br>
> @Jay Pipes, I think that we still need to enable "one nova compute<br>
> live migration", as one nova compute can manage multiple clusters,<br>
> and VMs should be able to migrate between the clusters managed by<br>
> that nova compute.<br>
<br>
</div>Why, though? That is what I am asking... seems to me like this is an<br>
anti-feature. What benefit does the user get from moving an instance<br>
from one VCenter cluster to another VCenter cluster if the two clusters<br>
are on the same physical machine?<br></blockquote><div>@Jay Pipes, for VMware, one physical machine (ESX server) can belong to only one vCenter cluster, so we may have the following scenario:<br></div><div>DC<br> |<br> |---Cluster1<br> |      |<br> |      |---host1<br> |<br> |---Cluster2<br>        |<br>        |---host2<br><br></div><div>Then when using the VCDriver, one nova compute can manage both Cluster1 and Cluster2, which means I cannot migrate a VM from host2 to host1 ;-(<br>
<br></div><div>The bp was introduced by <a href="https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service">https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service</a><br>
</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
Secondly, why is it that a single nova-compute manages multiple VCenter<br>
clusters? This seems like a hack to me... perhaps someone who wrote the<br>
code for this or knows the decision behind it could chime in here?<br>
<div class=""><br>
> For cells, IMHO, each "cell" can be treated as a small "cloud" rather<br>
> than a "compute"; each cell should be able to handle VM operations<br>
> within its own small cloud. Please correct me if I am wrong.<br>
<br>
</div>Yes, I agree with you that a cell is not a compute. Not sure if I said<br>
otherwise in my previous response. Sorry if it was confusing! :)<br>
<br>
Best,<br>
-jay<br>
<div class=""><div class="h5"><br>
> @Chris, "OS-EXT-SRV-ATTR:host" is the host where nova compute is<br>
> running, and "OS-EXT-SRV-ATTR:hypervisor_hostname" is the hypervisor<br>
> host where the VM is running. Live migration currently uses "host" as<br>
> its target. What I want to do is enable migration when one "host"<br>
> manages multiple "hypervisors".<br>
><br>
><br>
> I'm planning to draft a bp for review which depend on<br>
> <a href="https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory" target="_blank">https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory</a><br>
><br>
><br>
> Thanks!<br>
><br>
><br>
><br>
> 2014-04-04 8:03 GMT+08:00 Chris Friesen <<a href="mailto:chris.friesen@windriver.com">chris.friesen@windriver.com</a>>:<br>
> On 04/03/2014 05:48 PM, Jay Pipes wrote:<br>
> On Mon, 2014-03-31 at 17:11 +0800, Jay Lau wrote:<br>
> Hi,<br>
><br>
> Currently with the VMware VCDriver, one nova compute can<br>
> manage multiple clusters/RPs. This prevents the cluster<br>
> admin from doing live migration between clusters/RPs<br>
> managed by one nova compute, as the current live migration<br>
> logic requires at least two nova computes.<br>
><br>
><br>
> A bug [1] was also filed to track the VMware live<br>
> migration issue.<br>
><br>
> I'm now trying the following solution to see if it is<br>
> acceptable as a fix; the fix aims to enable live migration<br>
> with one nova compute:<br>
> 1) When live migration checks whether the hosts are the<br>
> same, check both host and node for the VM instance.<br>
> 2) When the nova scheduler selects a destination for live<br>
> migration, the live migration task should add (host, node)<br>
> to the attempted hosts.<br>
> 3) The nova scheduler needs to be enhanced to support<br>
> ignored_nodes.<br>
> 4) Nova compute needs to be enhanced to check both host<br>
> and node when doing live migration.<br>
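The steps above could be sketched roughly like this (hypothetical Python; select_destination and the data shapes are illustrative, not the actual nova scheduler API):

```python
# Hedged sketch of steps 2-3: track attempted destinations as
# (host, node) pairs so the scheduler can exclude only the source
# cluster, not every cluster behind the same nova-compute host.
# All names here are illustrative, not nova's actual API.

def select_destination(candidates, attempted):
    """candidates: list of (host, node) pairs; attempted: set of
    (host, node) pairs already tried, including the instance's
    current placement. Returns the first untried pair, or None."""
    for host, node in candidates:
        if (host, node) not in attempted:
            return host, node
    return None

candidates = [("compute-1", "Cluster1"), ("compute-1", "Cluster2")]
attempted = {("compute-1", "Cluster1")}  # the VM currently lives here
print(select_destination(candidates, attempted))  # ('compute-1', 'Cluster2')
```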
><br>
> What precisely is the point of "live migrating" an<br>
> instance to the exact<br>
> same host as it is already on? The failure domain is<br>
> the host, so moving<br>
> the instance from one "cluster" to another, but on the<br>
> same host is kind<br>
> of a silly use case IMO.<br>
><br>
><br>
> Here is where precise definitions of "compute node",<br>
> "OS-EXT-SRV-ATTR:host", "OS-EXT-SRV-ATTR:hypervisor_hostname",<br>
> and "host" as understood by novaclient would be nice.<br>
><br>
> Currently the "nova live-migration" command takes a "host"<br>
> argument. It's not clear which of the above this corresponds<br>
> to.<br>
><br>
> My understanding is that one nova-compute process can manage<br>
> multiple VMWare physical hosts. So it could make sense to<br>
> support live migration between separate VMWare hosts even if<br>
> they're managed by a single nova-compute process.<br>
><br>
> Chris<br>
><br>
><br>
> _______________________________________________<br>
> OpenStack-dev mailing list<br>
> <a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
><br>
><br>
><br>
><br>
> --<br>
> Thanks,<br>
><br>
><br>
> Jay<br>
><br>
<br>
<br>
<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div dir="ltr"><div>Thanks,<br><br></div>Jay<br></div>
</div></div>