[Cinder][driver][ScaleIO]

Kulazhenkov, Yury Yury.Kulazhenkov at dell.com
Wed Feb 6 16:19:15 UTC 2019


Hi Martin,

Martin wrote:
> So if we get volumes that are still mapped to hypervisors after deleting the attached instances with sio_unmap_volume_before_deletion set to False, there's a good chance it's a bug?
Yes, volumes should be detached from the host even without setting sio_unmap_volume_before_deletion = True.

Yury

From: Martin Chlumsky <martin.chlumsky at gmail.com>
Sent: Wednesday, February 6, 2019 6:20 PM
To: Kulazhenkov, Yury
Cc: Kanevsky, Arkady; jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; Walsh, Helen; Belogrudov, Vladislav
Subject: Re: [Cinder][driver][ScaleIO]


Hi Yury,

Thank you for the clarification.
So if we get volumes that are still mapped to hypervisors after deleting the attached instances with sio_unmap_volume_before_deletion set to False, there's a good chance it's a bug? I will open a bug report in this case.

Cheers,

Martin

On Wed, Feb 6, 2019 at 9:35 AM Kulazhenkov, Yury <Yury.Kulazhenkov at dell.com> wrote:
Hi Martin,

Martin wrote:
> It seems you would always
> want to unmap the volume from the hypervisor before deleting it.
If you remove or shelve an instance on a hypervisor host, Nova will trigger ScaleIO to unmap the volume from that host.
No issues should occur during deletion at that point, because the volume is already unmapped (unmounted).
There is no need to change the sio_unmap_volume_before_deletion default value here.
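
For illustration, here is a minimal sketch of that ordering, assuming the standard Cinder volume driver interface (terminate_connection and delete_volume are the real driver hooks; the helper method names and the connector key used are made up):

    # Sketch only: _unmap_from_sdc and _remove_volume are hypothetical
    # stand-ins for the driver's internal calls.
    class ScaleIODriverSketch(object):

        def terminate_connection(self, volume, connector, **kwargs):
            # Nova (through Cinder) calls this when an instance is
            # deleted or shelved: unmap the volume from the single
            # host described by `connector`.
            self._unmap_from_sdc(volume, connector['ip'])

        def delete_volume(self, volume):
            # By the time deletion runs, terminate_connection has
            # already unmapped the volume from the hypervisor, so no
            # extra unmap is needed when
            # sio_unmap_volume_before_deletion is False (the default).
            self._remove_volume(volume)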

Martin wrote:
> What is the reasoning behind this option?
Setting the sio_unmap_volume_before_deletion option to True means that the Cinder driver will force-unmap the volume from ALL ScaleIO client nodes (not only OpenStack nodes) during volume deletion.
Enabling this option can be useful if you periodically detect compute nodes with unmanaged ScaleIO volume mappings (volume mappings that are not managed by OpenStack) in your environment. You can get such unmanaged mappings in some cases, for example after a hypervisor node power failure: if instances with mapped volumes were moved to another host during that failure, unmanaged mappings may appear on the failed node after its recovery.
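
For example, in cinder.conf (the backend section name below is just a placeholder for your actual ScaleIO backend section):

    [scaleio-backend]
    # Force-unmap volumes from ALL ScaleIO client nodes before deletion:
    sio_unmap_volume_before_deletion = True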

Martin wrote:
>Why would we ever set this
> to False and why is it False by default?
Force-unmapping volumes from ALL ScaleIO clients adds overhead, and it isn't required in most environments.


Best regards,
Yury

-----Original Message-----
From: Arkady.Kanevsky at dell.com <Arkady.Kanevsky at dell.com>
Sent: Wednesday, February 6, 2019 7:24 AM
To: jsbryant at electronicjungle.net; openstack-discuss at lists.openstack.org; Walsh, Helen; Belogrudov, Vladislav
Subject: RE: [Cinder][driver][ScaleIO]

Adding Vlad who is the right person for ScaleIO driver.

-----Original Message-----
From: Jay Bryant <jungleboyj at gmail.com>
Sent: Tuesday, February 5, 2019 5:30 PM
To: openstack-discuss at lists.openstack.org; Walsh, Helen
Subject: Re: [Cinder][driver][ScaleIO]

Adding Helen Walsh to this as she may be able to provide insight.

Jay

On 2/5/2019 12:16 PM, Martin Chlumsky wrote:
> Hello,
>
> We are using EMC ScaleIO as our backend to cinder.
> When we delete VMs that have attached volumes and then try deleting
> said volumes, the volumes will sometimes end up in the error_deleting state.
> That state is reached because, for some reason, the volumes are still
> mapped (in the ScaleIO sense of the word) to the hypervisor despite
> the VM being deleted.
> We fixed the issue by setting the following option to True in cinder.conf:
>
> # Unmap volume before deletion. (boolean value)
> sio_unmap_volume_before_deletion = True
>
>
> What is the reasoning behind this option? Why would we ever set this
> to False and why is it False by default? It seems you would always
> want to unmap the volume from the hypervisor before deleting it.
>
> Thank you,
>
> Martin