[nova] Breaking up and migrating the nova-live-migration job to Zuulv3
Lee Yarwood
lyarwood at redhat.com
Wed Mar 11 18:34:53 UTC 2020
On 11-03-20 14:37:35, Sean Mooney wrote:
> On Tue, 2020-03-10 at 17:04 +0000, Arkady.Kanevsky at dell.com wrote:
> > Thanks Lee. Sound approach.
> > A few questions/comments.
> > 1. Assume we have an unwritten assumption that all nova nodes have
> > access to volumes on the backend, so we rely on that except for
> > ephemeral storage.
> well the job deploys the storage backend so it's not an assumption, we
> deploy it that way intentionally. we also set up ssh keys so we can
> rsync the qcow files between hosts when we do block migration.
Correct, the jobs are simple multinode deployments of one main
controller/compute and a smaller subnode compute.
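In Zuulv3 terms that topology is just a two node nodeset, something
along the lines of the below (node names and labels illustrative):

- nodeset:
    name: nova-two-node
    nodes:
      - name: controller
        label: ubuntu-bionic
      - name: compute1
        label: ubuntu-bionic
    groups:
      - name: subnode
        nodes:
          - compute1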
> > 2. What needs to be done for volumes that use FC instead of iSCSI?
> we don't test FC in the migration job currently so i think that is out
> of scope for this refactor. the goal is to move it to zuulv3 while
> testing all the existing cases, not to add more cases in this phase.
Yes, apologies if that wasn't clear from my initial post.
That said, I'd argue that FC testing of any kind would be out of scope
for our jobs in openstack/nova; specific backends and interconnects are
better tested by openstack/cinder and openstack/os-brick IMHO.
> > 3. You have one for Ceph. Does that mean that we need an analog for
> > other cinder back ends?
> no. the ceph backend is tested separately as there are basically 3
> storage backends to the libvirt driver: local file, which is tested as
> part of block migration with qcow2; local block device, which is
> tested via cinder/lvm with the block device mounted on the host via
> iSCSI (FC would be the same from a qemu point of view); and finally
> ceph, which is used to test the qemu native network block device
> support.
>
> so we are not trying to test different cinder backends but rather the
> different image backends/qemu storage types supported in nova
Correct.
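In the proposed jobs this boils down to flipping [libvirt]/images_type
in nova.conf on the computes, which with the Zuulv3 devstack jobs would
be something like the below sketch (assuming the usual
devstack_local_conf plumbing):

vars:
  devstack_local_conf:
    post-config:
      $NOVA_CONF:
        libvirt:
          # qcow2 for the local file job, lvm for the local block
          # device job and rbd for the ceph job
          images_type: qcow2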
> > 4. Do we need to anything analogous for Manila?
> maybe, but again that seems like it's out of scope so initially i
> would say no
Correct, we don't have any coverage for this at the moment.
> > 5. How do we address multi-attach volumes and multipathing? Expect
> > that if we have multipathing on the origin node we also have
> > multipathing at the destination at the end.
> multi-attach is already tested in the job i believe so we would
> continue that. i think both cinder lvm and ceph support
I'm actually not sure if we do have any multiattach LM coverage,
something to potentially add with this refactor.
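If we do add it then it should just be a case of flipping the existing
tempest feature flag in the job definition, something like (sketch):

vars:
  devstack_local_conf:
    test-config:
      $TEMPEST_CONFIG:
        compute-feature-enabled:
          volume_multiattach: true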
> multi-attach. i don't think we test multipath in the gate in the
> current jobs so i would not immediately assume we would add it as part
> of this refactor.
As with FC I don't think this should live in our jobs tbh.
> > -----Original Message-----
> > From: Lee Yarwood <lyarwood at redhat.com>
> > Sent: Tuesday, March 10, 2020 10:22 AM
> > To: openstack-discuss at lists.openstack.org
> > Subject: [nova] Breaking up and migrating the nova-live-migration job to Zuulv3
> >
> > Hello all,
> >
> > I've started PoC'ing some ideas around $subject in the topic below
> > and wanted to ask the wider team for feedback on the approach I'm
> > taking:
> >
> > https://review.opendev.org/#/q/topic:nova-live-migration-zuulv3
> >
> > My initial idea is to break the job up into the following smaller
> > multinode jobs that are hopefully easier to understand and maintain.
> >
> > * nova-multinode-live-migration-py3
> >
> > A simple LM job using the qcow2 imagebackend and LVM/iSCSI c-vol.
> >
> > * nova-multinode-live-migration-ceph-py3
>
> this would be replacing our existing devstack-plugin-ceph-tempest-py3
> job, running all the same tests but in a multinode config with live
> migration tests enabled in the tempest config.
If we want to merge the evacuation tests back into this, I was going to
limit it to live migration tests only and continue running
devstack-plugin-ceph-tempest-py3 for everything else.
FWIW devstack-plugin-ceph-tempest-py3 is still NV even though we've
been gating on the success of ceph live migration in the original
nova-live-migration job.
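Limiting the new job to the live migration tests should then just be
the usual tempest job vars, roughly (again only a sketch):

vars:
  tempest_test_regex: live_migration
  devstack_local_conf:
    test-config:
      $TEMPEST_CONFIG:
        compute-feature-enabled:
          live_migration: true
          volume_backed_live_migration: true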
> > A ceph based LM job using rbd for both imagebackend and c-vol.
> >
> > * nova-multinode-evacuate-py3
> so this would be the only new job, although i am not sure it should be
> separated out. we likely want to test evacuate with file, block and
> network storage so i think it makes sense to do this as a post-run
> playbook in the other two jobs.
Yeah that's fair, I might start with this broken out just to work on
that playbook/role before merging it back into the above jobs tbh.
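As a strawman that post-run playbook could end up looking something
like the below, with the logic from the current script moving into a
role (the role name here is hypothetical):

# post-run playbook run after the main tempest run has completed
- hosts: compute1
  become: true
  tasks:
    # simulate a failed compute host before evacuating from it
    - name: Stop the nova-compute service on the subnode
      service:
        name: devstack@n-cpu
        state: stopped

- hosts: controller
  roles:
    # hypothetical role wrapping the existing evacuate script: force
    # the subnode down, evacuate the test instances and assert they
    # end up ACTIVE on the remaining host
    - role: run-evacuate-tests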
> > A separate evacuation job using qcow2 imagebackend and LVM/iSCSI
> > c-vol. The existing script *could* then be ported to an Ansible
> > role as part of the migration to Zuulv3.
> >
> > Hopefully this is pretty straight forward but I'd appreciate any
> > feedback on this all the same.
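Finally, to give a rough idea of the overall shape, the first of the
jobs above could end up looking something like this as a Zuulv3 job
definition, parented to something like tempest-multinode-full-py3 (all
of this is illustrative, the real changes live in the topic linked
above):

- job:
    name: nova-multinode-live-migration-py3
    parent: tempest-multinode-full-py3
    description: |
      Live migration tests using the qcow2 imagebackend and a
      LVM/iSCSI c-vol backend.
    vars:
      tempest_test_regex: live_migration
      devstack_local_conf:
        test-config:
          $TEMPEST_CONFIG:
            compute-feature-enabled:
              live_migration: true
              block_migration_for_live_migration: true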
--
Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76