[openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

Chen CH Ji jichenjc at cn.ibm.com
Wed Apr 9 06:07:08 UTC 2014


We used to have one compute service corresponding to multiple
hypervisors (similar to the host and nodes concept). The major issue on
our platform is that we cannot run the nova-compute service on the
hypervisor itself, so we have to find somewhere else to run nova-compute
in order to talk to the hypervisor management API over REST. That means
we run multiple compute services outside our hypervisors, and those
compute services are hard to manage. We had no choice, though, since
Nova can only migrate an instance to another host rather than to another
node, so we implemented it accordingly. If we could support host + node,
it might be helpful for hypervisors with different architectures.

The point is whether we can expose the internals (not only the host
concept but also the node concept) to the outside. Since live migration
is an admin-only feature, could we expose the node concept to the admin
and let the admin decide?
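The host/node split discussed above can be sketched roughly as follows. This is an illustrative Python sketch, not real nova code: the class and method names (ClusterDriver, migration_target) are hypothetical, though they mirror the idea that one compute service (host) can report several hypervisor nodes.

```python
# Illustrative sketch (not actual nova code): one compute "host" service
# reporting several hypervisor "nodes". All names here are hypothetical.

class ClusterDriver:
    """Pretend virt driver managing several clusters behind one service."""

    def __init__(self, host, clusters):
        self.host = host            # the single nova-compute service host
        self.clusters = clusters    # each cluster is modelled as a node

    def get_available_nodes(self):
        # One service, many nodes: migration targets would need to be
        # addressed as (host, node), not just host.
        return list(self.clusters)

    def migration_target(self, node):
        # A host-only target is ambiguous here; host + node pins the cluster.
        return (self.host, node)


driver = ClusterDriver("compute-1", ["cluster-a", "cluster-b"])
print(driver.get_available_nodes())          # ['cluster-a', 'cluster-b']
print(driver.migration_target("cluster-b"))  # ('compute-1', 'cluster-b')
```

Under this model, exposing the node name to the admin is what makes the second element of the target tuple selectable at all.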

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN   Internet: jichenjc at cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:	Jay Lau <jay.lau.513 at gmail.com>
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>,
Date:	04/06/2014 07:02 PM
Subject:	Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
            migration with one nova compute



Hi Divakar,

Can I say that bare metal provisioning now uses a kind of "Parent-Child
compute" mode? I was also thinking that we could use host:node to
identify a kind of "Parent-Child" or hierarchical compute. Could you
please explain the differences between your "Parent-Child compute node"
and bare metal provisioning?

Thanks!


2014-04-06 14:59 GMT+08:00 Nandavar, Divakar Padiyar <
divakar.padiyar-nandavar at hp.com>:
  >> Well, it seems to me that the problem is the above blueprint and the
  code it introduced. This is an anti-feature IMO, and probably the best
  solution would be to remove the above code and go back to having a single
  >> nova-compute managing a single vCenter cluster, not multiple ones.

  The problem is not introduced by managing multiple clusters from a
  single nova-compute proxy node. Internally, the proxy driver still
  presents a "compute-node" for each of the clusters it manages. What we
  need to think about is the applicability of the live migration use case
  when a "cluster" is modelled as a compute node. Since the "cluster" is
  modelled as a compute node, it is assumed that the typical live-move
  use case is handled by the underlying "cluster" itself. With this,
  there are other use cases which are no-ops today, like host maintenance
  mode, live move, setting instance affinity, etc. To resolve this, I was
  thinking of "a way to expose operations on individual ESX hosts (such
  as putting a host in maintenance mode, live move, or instance affinity)
  by introducing a Parent-Child compute node concept. Scheduling can be
  restricted to the Parent compute node, and the Child compute nodes can
  be used to provide more drill-down on compute and also to enable
  additional compute operations." Any thoughts on this?

  Thanks,
  Divakar
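The Parent-Child idea above can be sketched in a few lines. This is a hypothetical illustration of the proposal, not existing nova code: ParentNode, ChildNode, and the maintenance flag are all invented names standing in for "cluster", "ESX host", and host maintenance mode.

```python
# Hypothetical sketch of the "Parent - Child compute node" proposal:
# the scheduler only sees parent nodes (clusters); children (ESX hosts)
# are exposed for admin operations such as maintenance mode.

class ChildNode:
    def __init__(self, name):
        self.name = name
        self.maintenance = False

class ParentNode:
    def __init__(self, name, children):
        self.name = name
        self.children = {c.name: c for c in children}

    def schedulable(self):
        # Scheduling is restricted to the parent; it stays schedulable
        # while at least one child is not in maintenance mode.
        return any(not c.maintenance for c in self.children.values())

    def set_maintenance(self, child_name, flag):
        # Admin-level drill-down: operate on an individual ESX host.
        self.children[child_name].maintenance = flag


cluster = ParentNode("cluster1", [ChildNode("esx1"), ChildNode("esx2")])
cluster.set_maintenance("esx1", True)
print(cluster.schedulable())  # True: esx2 is still available
```

The design point is the split of concerns: placement decisions stay at the parent granularity, while per-host operations become first-class admin actions on the children.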


  -----Original Message-----
  From: Jay Pipes [mailto:jaypipes at gmail.com]
  Sent: Sunday, April 06, 2014 2:02 AM
  To: openstack-dev at lists.openstack.org
  Subject: Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live
  migration with one nova compute
  Importance: High

  On Fri, 2014-04-04 at 13:30 +0800, Jay Lau wrote:
  >
  >
  >
  > 2014-04-04 12:46 GMT+08:00 Jay Pipes <jaypipes at gmail.com>:
  >         On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:
  >         > Thanks Jay and Chris for the comments!
  >         >
  >         > @Jay Pipes, I think that we still need to enable "one nova
  >         > compute live migration", as one nova compute can manage
  >         > multiple clusters and VMs can be migrated between those
  >         > clusters managed by one nova compute.
  >
  >
  >         Why, though? That is what I am asking... seems to me like
  >         this is an anti-feature. What benefit does the user get from
  >         moving an instance from one vCenter cluster to another
  >         vCenter cluster if the two clusters are on the same physical
  >         machine?
  > @Jay Pipes, for VMware, one physical machine (ESX server) can only
  > belong to one vCenter cluster, so we may have the following scenario:
  >
  > DC
  >  |
  >  |---Cluster1
  >  |      |
  >  |      |---host1
  >  |
  >  |---Cluster2
  >         |
  >         |---host2
  >
  > Then when using the VCDriver, I can use one nova-compute to manage
  > both Cluster1 and Cluster2, but this means I cannot migrate a VM
  > from host2 to host1 ;-(
  >
  > The bp was introduced by
  > https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
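The failure mode described above can be illustrated with a toy check. This is a rough illustration, not actual nova logic: the node names and the can_live_migrate function are invented, and merely show how two clusters behind one proxy service collapse to the same service host when a target is validated at host granularity.

```python
# Rough illustration (not real nova code) of why the move is blocked:
# both clusters map to the same nova-compute service host, so a
# host-level comparison treats the migration as "same host".

nodes = {
    "cluster1": "compute-proxy",   # Cluster1, managed via the proxy service
    "cluster2": "compute-proxy",   # Cluster2, managed by the same service
}

def can_live_migrate(src_node, dst_node):
    # Host-granularity check: distinct clusters are indistinguishable
    # once both resolve to the identical service host.
    return nodes[src_node] != nodes[dst_node]

print(can_live_migrate("cluster2", "cluster1"))  # False
```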

  Well, it seems to me that the problem is the above blueprint and the code
  it introduced. This is an anti-feature IMO, and probably the best
  solution would be to remove the above code and go back to having a single
  nova-compute managing a single vCenter cluster, not multiple ones.

  -jay



  _______________________________________________
  OpenStack-dev mailing list
  OpenStack-dev at lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks,

Jay

