[victoria][cinder] Volume Delete In server live migrate

Ammad Syed syedammad83 at gmail.com
Thu Feb 18 10:38:31 UTC 2021


Hi,

The nexentastor5 driver that ships in the Cinder Victoria release is
version 1.4.3, which has some iSCSI target portal bugs with multipath. I
therefore downloaded the latest driver from
https://github.com/Nexenta/cinder/tree/master/cinder/volume/drivers/nexenta
which is version 1.5.3. The new version resolves the multiple target
portal issue, but live migration is still broken. Everything works
smoothly with the LVM driver.
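
For anyone hitting the same thing, a quick way to confirm which driver
version cinder-volume is actually loading (a sketch that assumes the
in-tree module path cinder/volume/drivers/nexenta/ns5/iscsi.py; adjust
the import if your package installs the driver elsewhere):

    # Run with the same python3 interpreter that cinder-volume uses.
    from cinder.volume.drivers.nexenta.ns5 import iscsi
    print(iscsi.NexentaISCSIDriver.VERSION)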

I'll raise a case with the vendor about it.

Ammad

On Thu, Feb 18, 2021 at 3:27 PM Lee Yarwood <lyarwood at redhat.com> wrote:

> On Thu, 18 Feb 2021 at 08:06, Ammad Syed <syedammad83 at gmail.com> wrote:
> >
> > Hi,
> >
> > Thanks, that was really helpful. I have two backend storages, LVM and
> > nexentastor5 over iSCSI. Migration back and forth works fine on
> > LVM-backed VMs.
> >
> > The problem looks specific to the nexentastor5 driver. The migration
> > completes successfully, but the VM then starts getting I/O errors,
> > and in the NexentaFusion UI the LUN of the migrated VM is no longer
> > there.
>
> Odd, neither Nova nor Cinder should delete the underlying LUN during
> live migration, just the host mappings.
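>
> To make that concrete: a driver's terminate_connection() is only
> expected to revoke the export to the host described by the connector,
> never to delete the backing volume. A minimal sketch of that split
> (hypothetical driver code, not the actual Nexenta driver; the
> _unmap_lun/_destroy_lun helpers are made up for illustration):
>
>     class SketchISCSIDriver(object):
>         """Hypothetical fragment, for illustration only."""
>
>         def terminate_connection(self, volume, connector, **kwargs):
>             # Revoke only the mapping that exposed this volume to the
>             # host's iSCSI initiator; the LUN itself must survive.
>             self._unmap_lun(volume, connector['initiator'])
>
>         def delete_volume(self, volume):
>             # Destroying the backing LUN happens only through this
>             # separate entry point, which live migration never calls.
>             self._destroy_lun(volume)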
>
> I'm not a Cinder dev, but if that's an in-tree cinder-volume driver
> then I would raise a bug; otherwise talk to your vendor.
>
> Cheers,
>
> Lee
>
> > On Wed, Feb 17, 2021 at 8:19 PM Lee Yarwood <lyarwood at redhat.com> wrote:
> >>
> >> On Wed, 17 Feb 2021 at 12:25, Ammad Syed <syedammad83 at gmail.com> wrote:
> >> >
> >> > Hi,
> >> >
> >> > I am using NexentaStor 5.3 community edition with Cinder Victoria.
> >> > When I try to live migrate a machine from one host to another, the
> >> > volume gets deleted from NexentaStor.
> >> >
> >> > I see the logs below in the cinder logs:
> >> >
> >> > 2021-02-17 12:14:10.881 79940 INFO cinder.volume.manager
> >> > [req-26344304-7180-4622-9915-b5f2f76ccd3e 2af528fdf3244e15b4f3f8fcfc0889c5
> >> > 890eb2b7d1b8488aa88de7c34d08817a - - -] attachment_update completed
> >> > successfully.
> >> > 2021-02-17 12:14:20.867 79940 INFO cinder.volume.manager
> >> > [req-37ff44a8-5bf9-4c4d-b114-68e6c0504859 2af528fdf3244e15b4f3f8fcfc0889c5
> >> > 890eb2b7d1b8488aa88de7c34d08817a - - -] Terminate volume connection
> >> > completed successfully.
> >> > 2021-02-17 12:14:20.917 79940 WARNING py.warnings
> >> > [req-37ff44a8-5bf9-4c4d-b114-68e6c0504859 2af528fdf3244e15b4f3f8fcfc0889c5
> >> > 890eb2b7d1b8488aa88de7c34d08817a - - -]
> >> > /usr/lib/python3/dist-packages/sqlalchemy/orm/evaluator.py:95: SAWarning:
> >> > Evaluating non-mapped column expression 'updated_at' onto ORM instances;
> >> > this is a deprecated use case.  Please make use of the actual mapped
> >> > columns in ORM-evaluated UPDATE / DELETE expressions.
> >> >   util.warn(
> >>
> >> The volume is not being deleted here, rather `Terminate volume
> >> connection completed successfully.` highlights that the volume
> >> attachments for the source compute host are being removed *after* the
> >> migration has completed so that the host can no longer access the
> >> volume. The `attachment_update completed successfully.` line prior to
> >> this is the destination compute host being granted access before the
> >> instance is migrated.
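> >>
> >> Roughly, the sequence Nova drives through the Cinder attachments
> >> API here looks like this (a sketch using python-cinderclient; the
> >> session, connector, and ID variables are placeholders, so treat
> >> this as an outline rather than the exact Nova code):
> >>
> >>     from cinderclient import client
> >>
> >>     # 3.44 is simply a microversion new enough for these calls.
> >>     cinder = client.Client('3.44', session=keystone_session)
> >>
> >>     # 1. Create an attachment for the destination host and update
> >>     #    it with that host's connector info. This produces the
> >>     #    "attachment_update completed successfully." line.
> >>     att = cinder.attachments.create(volume_id, None, instance_uuid)
> >>     cinder.attachments.update(att.id, dest_host_connector)
> >>
> >>     # 2. Once the guest is running on the destination, delete the
> >>     #    source host's attachment. In the driver this becomes
> >>     #    terminate_connection, the "Terminate volume connection
> >>     #    completed successfully." line: access is revoked but the
> >>     #    LUN is left intact.
> >>     cinder.attachments.delete(source_attachment_id)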
> >>
> >> Hope this helps,
> >>
> >> Lee
> >>
> >
> >
> > --
> > Regards,
> >
> >
> > Syed Ammad Ali
>
>

-- 
Regards,


Syed Ammad Ali