[nova] Breaking up and migrating the nova-live-migration job to Zuulv3
Sean Mooney
smooney at redhat.com
Wed Mar 11 14:37:35 UTC 2020
On Tue, 2020-03-10 at 17:04 +0000, Arkady.Kanevsky at dell.com wrote:
> Thanks Lee. Sounds like a sound approach.
> A few questions/comments.
> 1. I assume we have an unwritten assumption that all nova nodes have access to volumes on the backend,
> so we rely on that except for ephemeral storage.
Well, the job deploys the storage backend, so it's not an assumption; we deploy it that way intentionally.
We also set up ssh keys so we can rsync the qcow2 files between hosts when we do block migration.
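For what it's worth, a rough sketch of how that key setup could be ported to a Zuul pre-run playbook (the user
and task names here are illustrative, not the actual role):

    - hosts: all
      tasks:
        # generate a per-job keypair so nova can rsync qcow2 disks
        # between hosts during block migration
        - name: Generate migration ssh keypair
          command: ssh-keygen -t rsa -N '' -f /home/stack/.ssh/id_rsa
          args:
            creates: /home/stack/.ssh/id_rsa

        - name: Read back the public key
          slurp:
            src: /home/stack/.ssh/id_rsa.pub
          register: migration_pubkey

        # authorize every node's key on every other node
        - name: Authorize peer keys
          authorized_key:
            user: stack
            key: "{{ hostvars[item].migration_pubkey.content | b64decode }}"
          loop: "{{ ansible_play_hosts }}"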
> 2. What needs to be done for volumes that use FC, not iSCSI?
We don't test FC in the migration job currently, so I think that is out of scope for this refactor.
The goal is to move the job to Zuulv3 while testing all the existing cases, not to add more cases in this phase.
> 3. You have one for Ceph. Does that mean that we need an analog for other cinder back ends?
No. The ceph backend is tested separately, as there are basically three storage backends for the libvirt driver:
local file, which is tested as part of block migration with qcow2;
local block device, which is tested via cinder/lvm with the block device mounted on the host via iSCSI (FC would be
the same from a qemu point of view);
and finally ceph, which is used to test qemu's native network block device support.
So we are not trying to test different cinder backends, but rather the different image backends/qemu storage types
supported in nova.
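In job terms the distinction is mostly just the libvirt images_type each job sets via devstack; a sketch, with
illustrative job names (the real jobs may configure this differently):

    - job:
        name: nova-multinode-live-migration
        vars:
          devstack_local_conf:
            post-config:
              $NOVA_CONF:
                libvirt:
                  images_type: qcow2   # local file backend

    - job:
        name: nova-multinode-live-migration-ceph
        vars:
          devstack_local_conf:
            post-config:
              $NOVA_CONF:
                libvirt:
                  images_type: rbd     # qemu native network block device

The cinder/lvm (local block device) case is exercised through attached volumes rather than the image backend, so it
doesn't need its own images_type.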
> 4. Do we need to do anything analogous for Manila?
Maybe, but again that seems like it's out of scope, so initially I would say no.
> 5. How do we address multi-attach volumes and multipathing? Expect that if we have multipathing on the origin node
> we also have multipathing at the destination at the end.
Multi-attach is already tested in the job, I believe, so we would continue that; I think both cinder lvm and ceph
support multi-attach. I don't think we test multipath in the gate in the current jobs, so I would not immediately
assume we would add it as part of this refactor.
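If we do keep the multi-attach coverage, it should just be a matter of leaving the existing tempest feature flag
enabled in the new jobs, something like (a sketch, not the final job definition):

    - job:
        name: nova-multinode-live-migration
        vars:
          devstack_local_conf:
            test-config:
              $TEMPEST_CONFIG:
                compute-feature-enabled:
                  volume_multiattach: true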
>
>
> Thanks,
> Arkady
>
>
> -----Original Message-----
> From: Lee Yarwood <lyarwood at redhat.com>
> Sent: Tuesday, March 10, 2020 10:22 AM
> To: openstack-discuss at lists.openstack.org
> Subject: [nova] Breaking up and migrating the nova-live-migration job to Zuulv3
>
> Hello all,
>
> I've started PoC'ing some ideas around $subject in the topic below and wanted to ask the wider team for feedback on
> the approach I'm taking:
>
> https://review.opendev.org/#/q/topic:nova-live-migration-zuulv3
>
> My initial idea is to break the job up into the following smaller multinode jobs that are hopefully easier to
> understand and maintain.
>
> * nova-multinode-live-migration-py3
>
> A simple LM job using the qcow2 imagebackend and LVM/iSCSI c-vol.
>
> * nova-multinode-live-migration-ceph-py3
This would be replacing our existing devstack-plugin-ceph-tempest-py3 job,
running all the same tests but in a multinode config with live migration tests enabled in the tempest config.
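Something like this, as a sketch (the parent and nodeset names are assumptions, not final):

    - job:
        name: nova-multinode-live-migration-ceph-py3
        parent: devstack-plugin-ceph-tempest-py3
        nodeset: openstack-two-node-bionic
        vars:
          devstack_local_conf:
            test-config:
              $TEMPEST_CONFIG:
                compute-feature-enabled:
                  live_migration: true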
>
> A ceph based LM job using rbd for both imagebackend and c-vol.
>
> * nova-multinode-evacuate-py3
So this would be the only new job, although I am not sure it should be separated out.
We likely want to test evacuate with file, block and network storage, so I think it makes sense to do this as a post
playbook in the other two jobs.
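i.e. something along these lines (a sketch; the playbook path is hypothetical and would hold the existing evacuate
script ported to Ansible tasks):

    - job:
        name: nova-multinode-live-migration-py3
        # run the evacuate checks after the tempest run completes
        post-run: playbooks/nova-evacuate/post.yaml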
>
> A separate evacuation job using qcow2 imagebackend and LVM/iSCSI c-vol.
> The existing script *could* then be ported to an Ansible role as part of the migration to Zuulv3.
>
> Hopefully this is pretty straightforward but I'd appreciate any feedback on this all the same.
>
> Cheers,
>
> --
> Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76