[nova] Breaking up and migrating the nova-live-migration job to Zuulv3

Arkady.Kanevsky at dell.com
Tue Mar 10 17:04:19 UTC 2020


Thanks, Lee. Sound approach.
A few questions/comments.
1. I assume we have an unwritten assumption that all nova nodes have access to volumes on the backend,
and that we rely on it for everything except ephemeral storage.
2. What needs to be done for volumes that use FC rather than iSCSI?
3. You have one job for Ceph. Does that mean we need an analog for other cinder backends?
4. Do we need to do anything analogous for Manila?
5. How do we address multi-attach volumes and multipathing? I expect that if we have multipathing on the origin node, we also have multipathing at the destination at the end.


Thanks,
Arkady
 

-----Original Message-----
From: Lee Yarwood <lyarwood at redhat.com> 
Sent: Tuesday, March 10, 2020 10:22 AM
To: openstack-discuss at lists.openstack.org
Subject: [nova] Breaking up and migrating the nova-live-migration job to Zuulv3

Hello all,

I've started PoC'ing some ideas around $subject in the topic below and wanted to ask the wider team for feedback on the approach I'm taking:

https://review.opendev.org/#/q/topic:nova-live-migration-zuulv3

My initial idea is to break the job up into the following smaller multinode jobs that are hopefully easier to understand and maintain.

* nova-multinode-live-migration-py3

A simple live migration (LM) job using the qcow2 imagebackend and LVM/iSCSI c-vol.

* nova-multinode-live-migration-ceph-py3

A Ceph-based LM job using rbd for both the imagebackend and c-vol.

* nova-multinode-evacuate-py3

A separate evacuation job using the qcow2 imagebackend and LVM/iSCSI c-vol.
The existing script *could* then be ported to an Ansible role as part of the migration to Zuulv3; rough sketches of a possible job definition and such a role follow below.
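For anyone unfamiliar with Zuulv3 job definitions, here is a minimal sketch of how the first of these jobs might be declared. The parent job, test regex and devstack overrides are illustrative assumptions, not the actual proposed config:

- job:
    name: nova-multinode-live-migration-py3
    # Assumed parent; the real change may base the job elsewhere.
    parent: tempest-multinode-full-py3
    description: |
      Live migration tests using the qcow2 imagebackend and an
      LVM/iSCSI cinder volume backend.
    vars:
      # Illustrative test selection only.
      tempest_test_regex: ^tempest\.api\.compute\.admin\.test_live_migration
      devstack_local_conf:
        post-config:
          $NOVA_CONF:
            libvirt:
              images_type: qcow2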
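Similarly, the evacuate logic could move from a shell script into Ansible role tasks. A hypothetical sketch, assuming a standard devstack multinode layout (the role path, unit name, and variables are all made up for illustration):

# roles/nova-evacuate/tasks/main.yaml (hypothetical path)
- name: Stop nova-compute on the subnode to simulate a host failure
  become: true
  delegate_to: "{{ failed_host }}"
  service:
    name: devstack@n-cpu
    state: stopped

- name: Force the compute service down so evacuation is allowed
  shell: >
    openstack --os-cloud devstack-admin
    compute service set --down {{ failed_host }} nova-compute

- name: Evacuate the test server to a surviving host
  shell: nova evacuate {{ server_uuid }}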

Hopefully this is pretty straightforward, but I'd appreciate any feedback on this all the same.

Cheers,

-- 
Lee Yarwood                 A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76

