[nova][cinder][ops] question/confirmation of legacy vol attachment migration
Walter Boring
waboring at hemna.com
Fri Oct 4 16:03:40 UTC 2019
Looking into the cinder code, attachment_delete should be what we want to
call. But I think if we don't have a host connector passed in and the
attachment record doesn't have a connector saved, the volume manager
returns without ever calling the cinder driver's terminate_connection.
This also bypasses the driver's remove_export(), which is the last chance
for a driver to unexport a volume.
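
For reference, this is roughly the shape of the path I mean in the
volume manager (a paraphrased sketch, not the real cinder code):

    # Much-simplified sketch of the attachment_delete path in
    # cinder/volume/manager.py; names are illustrative only.
    def _attachment_delete(manager, context, vref, attachment):
        connector = attachment.connector
        if not connector:
            # No connector was passed in and none is saved on the
            # attachment record, so we return here without ever calling
            # the driver's terminate_connection() -- and remove_export()
            # is never reached either.
            return
        manager.driver.terminate_connection(vref, connector)
        manager.driver.remove_export(context, vref)
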
Walt
On Thu, Oct 3, 2019 at 7:27 PM Matt Riedemann <mriedemos at gmail.com> wrote:
> Hello Cinderinos,
>
> I've now got a working patch that migrates legacy volume attachments to
> new style v3 attachments [1]. The fun stuff showing it working is in
> this paste [2].
>
> We want to do this data migration in nova because we still carry a lot
> of compatibility code for pre-v3 style attachments dating back to Queens,
> and we can't ever remove that compatibility code unless we first provide
> a data migration routine for operators to roll through. So for example,
> if this lands in Ussuri we can enforce a nova-status upgrade check in V
> and rip out the compatibility code in X.
>
> Without digging into the patch, this is the flow (rough sketch after the
> list):
>
> 1. On nova-compute restart, query the nova DB for instances on the
> compute host with legacy volume attachments.
>
> 2. For each of those, create a new style attachment with the host
> connector and update the BlockDeviceMapping information in the nova DB
> (attachment_id and connection_info).
>
> 3. Delete the existing legacy attachment so when the server is deleted
> the volume status goes back to 'available' due to proper attachment
> reference counting in the Cinder DB.
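>
> In rough pseudo-code, the per-instance work in steps 2 and 3 looks
> something like this (an illustrative sketch with made-up names, not the
> actual patch; it assumes 'volume_api' is nova's cinder API wrapper,
> 'connector' comes from the virt driver on this host, and the legacy
> attachment record has already been looked up from the volume in cinder):
>
>     from oslo_serialization import jsonutils
>
>     def migrate_legacy_attachment(ctxt, volume_api, instance, bdm,
>                                   connector, legacy_attachment_id):
>         # 2. Create a new style (v3) attachment with the host connector
>         #    and store the new attachment_id/connection_info on the BDM.
>         attachment = volume_api.attachment_create(
>             ctxt, bdm.volume_id, instance.uuid, connector=connector)
>         bdm.attachment_id = attachment['id']
>         bdm.connection_info = jsonutils.dumps(
>             attachment.get('connection_info', {}))
>         bdm.save()
>         # 3. Delete the legacy attachment so cinder's attachment
>         #    reference counting works out when the server is deleted.
>         volume_api.attachment_delete(ctxt, legacy_attachment_id)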
>
> My main question is on #3. Right now I'm calling the v3 attachment
> delete API rather than the v2 os-terminate_connection API. Is that
> sufficient to clean up the legacy attachment on the storage backend even
> though the connection was created via os-initialize_connection
> originally? Looking at the cinder code, attachment_delete hits the
> connection terminate code under the covers [3]. So that looks OK. The
> only thing I can really think of is: if a host connector is not provided
> or tracked with the legacy attachment, is that going to cause problems?
> Note that I think volume drivers are already required to deal with that
> today anyway because of the "local delete" scenario in the compute API,
> where the compute host the server is on is down and thus we have no host
> connector to provide to Cinder when terminating the connection.
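>
> For illustration, the kind of defensive handling I mean on the driver
> side looks something like this (a generic sketch, not taken from any
> real driver; the helper names are made up):
>
>     def terminate_connection(self, volume, connector, **kwargs):
>         if connector is None:
>             # "Local delete" style force-detach: we don't know which
>             # host had the volume, so tear down every export/initiator
>             # mapping for it.
>             self._remove_all_exports(volume)
>             return
>         # Normal path: only remove the export for the given host.
>         self._remove_export_for_host(volume, connector['host'])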
>
> So Cinder people, are you OK with this flow?
>
> Hello Novaheads,
>
> Do you have any issues with the above? Note the migration routine is
> threaded out on compute start so it doesn't block, similar to the ironic
> flavor data migration introduced in Pike.
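>
> Concretely that just means something like the following in the compute
> manager's startup path (a sketch, assuming nova.utils.spawn_n is still
> how we kick off background work; the method name is illustrative, not
> the actual patch):
>
>     # In ComputeManager.init_host(), after the existing startup work,
>     # run the migration in the background so the restart isn't blocked.
>     ctxt = nova_context.get_admin_context()
>     utils.spawn_n(self._migrate_legacy_volume_attachments, ctxt)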
>
> One question I have is whether we should add a config option for this so
> operators can enable/disable it as needed. Note that this requires nova
> to be configured with a service user that has the admin role in cinder,
> since we don't have a user token, similar to how nova manages neutron
> ports without a user token. Testing this with devstack requires [4]. By
> default [cinder]/auth_type is None and not required, so by default this
> migration routine will not run; maybe that is sufficient?
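>
> In other words the gate would be roughly this inside the routine
> (a sketch, relying on the [cinder] auth options nova already registers):
>
>     # If no cinder service user is configured, skip the migration
>     # entirely instead of failing on unauthenticated requests.
>     if CONF.cinder.auth_type is None:
>         LOG.info('Skipping legacy volume attachment migration; no '
>                  '[cinder] service user (auth_type) is configured.')
>         return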
>
> Hello Operatorites,
>
> Do you have any issues with what's proposed above?
>
> [1] https://review.opendev.org/#/c/549130/
> [2] http://paste.openstack.org/show/781063/
> [3]
> https://github.com/openstack/cinder/blob/410791580ef60ddb03104bf20766859ed9d78932/cinder/volume/manager.py#L4650
> [4] https://review.opendev.org/#/c/685488/
>
> --
>
> Thanks,
>
> Matt
>
>