[openstack-dev] [nova][cinder][neutron] Cross-cell cold migration
Matt Riedemann
mriedemos at gmail.com
Fri Aug 24 20:01:07 UTC 2018
On 8/22/2018 9:14 PM, Sam Morrison wrote:
> I think in our case we’d only migrate between cells if we know the network and storage are accessible, and would never do it if not.
> Thinking of moving from old to new hardware at a cell level.
If it's done via the resize API at the top, initiated by a non-admin
user, how would you prevent it? We don't really know if we're going
across cell boundaries until the scheduler picks a host, and today we
restrict all move operations to within the same cell. But that's part of
the problem that needs addressing - how to tell the scheduler when it's
OK to get target hosts for a move from all cells rather than the cell
that the server is currently in.
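
To make that concrete, here is very roughly where that restriction lives
today and where a cross-cell switch could hook in. This is a sketch only,
not actual nova code, and the "allow_cross_cell_move" flag is a made-up
name:

    from nova import objects

    # Sketch: conductor pins the scheduler to the instance's current
    # cell for move operations unless cross-cell moves are allowed.
    def restrict_request_spec(request_spec, instance_mapping,
                              allow_cross_cell_move=False):
        if not allow_cross_cell_move:
            # This is effectively what happens for all moves today: the
            # scheduler only considers hosts in the source cell.
            request_spec.requested_destination = objects.Destination(
                cell=instance_mapping.cell_mapping)
        # Otherwise leave requested_destination alone so the scheduler
        # can return candidate hosts from any cell.
        return request_spec

Whether that flag would come from policy, a new API parameter, or
something else is exactly the open question above.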
>
> If storage and network aren’t available, ideally it would fail at the API request.
Not sure this is something we can really tell beforehand in the API, but
maybe possible depending on whatever we come up with regarding volumes
and ports. I expect this is a whole new orchestrated task in the
(super)conductor when it happens. So while I think about using
shelve/unshelve from a compute operation standpoint, I don't want to try
and shoehorn this into existing conductor tasks.
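
In other words, I'm picturing something shaped like a new task in the
superconductor, very roughly like the following (the class and the steps
are hypothetical, this isn't existing conductor code):

    from nova.conductor.tasks import base

    class CrossCellMigrationTask(base.TaskBase):
        """Hypothetical superconductor task for a cross-cell move."""

        def _execute(self):
            # 1. snapshot the instance in the source cell (shelve-like)
            # 2. ask the scheduler for a target host, allowing all cells
            # 3. re-create volume attachments and port bindings against
            #    the target cell/host
            # 4. spawn from the snapshot in the target cell (unshelve-like)
            # 5. clean up the source cell copy once the move is confirmed
            pass

        def rollback(self):
            # Undo any partial state (snapshot, bindings, attachments)
            # and leave the instance untouched in the source cell.
            pass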
>
> There are also ceph-backed instances, so that is something to take into account which nova would be responsible for.
Not everyone is using ceph and it's not really something the API is
aware of...at least not today - but long-term with shared storage
providers in placement we might be able to leverage this for
non-volume-backed instances, i.e. if we know the source and target hosts
are on the same shared storage, regardless of cell boundary, we could
just move rather than use snapshots (shelve). But I think phase1 is
easiest universally if we are using snapshots to get from cell 1 to cell 2.
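
Roughly the kind of shared-storage check I mean, purely hypothetical (no
such helper exists in nova today, and "placement_get" is just a stand-in
for an authenticated placement API caller):

    # Hypothetical helper: do both compute node providers get DISK_GB
    # from the same sharing provider in placement?
    def hosts_share_storage(placement_get, source_rp_uuid, dest_rp_uuid):
        def disk_sharing_providers(rp_uuid):
            # Aggregates the compute node provider is a member of.
            aggs = placement_get(
                '/resource_providers/%s/aggregates' % rp_uuid)['aggregates']
            if not aggs:
                return set()
            # Providers in those aggregates that expose DISK_GB and are
            # marked as sharing via the MISC_SHARES_VIA_AGGREGATE trait.
            rps = placement_get(
                '/resource_providers?member_of=in:%s'
                '&resources=DISK_GB:1'
                % ','.join(aggs))['resource_providers']
            return {rp['uuid'] for rp in rps
                    if 'MISC_SHARES_VIA_AGGREGATE' in placement_get(
                        '/resource_providers/%s/traits'
                        % rp['uuid'])['traits']}
        # The same sharing provider serving both hosts means we could
        # move the disk in place instead of going through a snapshot.
        return bool(disk_sharing_providers(source_rp_uuid) &
                    disk_sharing_providers(dest_rp_uuid))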
>
> I’ll be in Denver so we can discuss more there too.
Awesome.
--
Thanks,
Matt