[nova] Support adding/changing delete_on_termination in volume attach api

Brin Zhang(张百林) zhangbailin at inspur.com
Tue Mar 5 02:34:15 UTC 2019


As Matt requested, here is a summary of the discussion results from the Forum in Berlin (https://etherpad.openstack.org/p/BER-bfv-improvements, line 52); you can also read them yourself on the etherpad:
        1. Allow updating delete_on_termination for existing attachments: https://review.openstack.org/#/c/580336/
                1.1 How to model this in the API? (see the sketch after this list)
                        -- PUT /servers/{server_id}/os-volume_attachments/{volume_id}  body={'volumeAttachment': {'delete_on_termination': True}}?
                        -- Note that the PUT route is already used for swap volume.
                1.2 This could break Heat: if Heat created the volume and the user then changed the flag, nova would delete the volume with the server while Heat still expected it to exist.
                1.3 Alternative: if you really love the pet, snapshot it.
                1.4 Alternative: if we do the spec above, detach the volume and then re-attach it with the delete_on_termination flag set to what you want.
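
        As a concrete sketch of item 1.1: the same PUT route would carry two different bodies. The microversion "2.XX" below is a placeholder, and the update body is still only a proposal from the spec, not a merged API:

                # Existing behavior of this route: swap the attached volume
                PUT /servers/{server_id}/os-volume_attachments/{volume_id}
                {"volumeAttachment": {"volumeId": "<new-volume-id>"}}

                # Proposed behavior (spec 1, hypothetical microversion 2.XX):
                # flip the flag on the existing attachment in place
                PUT /servers/{server_id}/os-volume_attachments/{volume_id}
                X-OpenStack-Nova-API-Version: 2.XX
                {"volumeAttachment": {"delete_on_termination": true}}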

        Regarding [1.2] above: we have verified this in our R&D environment and in customer production environments; no abnormalities occurred and everything is operating well.
        Has anyone encountered the Heat scenario mentioned here, and would it be significantly affected by this change?

        Changes to the current nova API:
        a. Add the change from [spec 1] to the "update a volume attachment" API, accepting "block_device_mapping_v2": [{ "volume_id": "763b894f-af35-4e44-bdb8-2ca45db9ecd8", "delete_on_termination": false }] in the request body.
        b. Add the change from [spec 2] to the "attach a volume to an instance" API, adding a "delete_on_termination" field to the request body.

        [spec 1] https://review.openstack.org/#/c/580336/  Support for changing delete_on_termination after boot
        [spec 2] https://review.openstack.org/#/c/612949/  Support delete_on_termination in server attach volume
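
        To make change b concrete, here is a sketch of the proposed attach request (the field name follows [spec 2]; the exact shape and microversion depend on review):

                POST /servers/{server_id}/os-volume_attachments
                {"volumeAttachment": {"volumeId": "763b894f-af35-4e44-bdb8-2ca45db9ecd8",
                                      "delete_on_termination": false}}

        Today the attach request body has no way to express this flag, so a volume attached after boot always ends up with delete_on_termination=False.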


Brin Zhang

-----Original Message-----
From: Matt Riedemann [mailto:mriedemos at gmail.com]
Sent: March 5, 2019 2:04
To: openstack-discuss at lists.openstack.org
Subject: Re: [nova] Support adding/changing delete_on_termination in volume attach api

On 3/3/2019 9:20 PM, Brin Zhang wrote:
>> Hi all:
>>
>>           Currently, you can set the "delete_on_termination" field on
>> the root disk when creating the server, so that the root disk is
>> deleted when you delete the server. However, this setting cannot be
>> updated afterwards, so we propose a scheme to modify the
>> "delete_on_termination" setting, as in [1].
>>
>> In addition, it should be possible to specify whether a data volume
>> attached to an instance is deleted when the instance is deleted, to
>> clean up the environment and free up storage space, as in [2].
>>
>> These are consistent and strongly requested requirements from our
>> various OpenStack users.
>>
>> Thank you, looking forward to your reply.
>>
>> [1] https://review.openstack.org/#/c/580336/
>>
>> [2] https://review.openstack.org/#/c/612949/
>>
>> Brin Zhang
>>

>As you know we discussed this at the Forum in Berlin (I know you were not there but you know about the notes in the etherpad). Can you summarize the output of those discussions and what, if anything, it changed about the proposed specs?
>
>The specs are targeted at Stein, and we are closed for new specs in Stein so those will have to be re-targeted to Train.
>
>Finally, I was thinking about this recently in regards to the root volume attach/detach change that is proposed because that does not change delete_on_termination when attaching a new root volume, so could be a case for [2] above. In other words, I could boot from volume and say delete_on_termination=True for the root volume (maybe because nova creates it and I do not care to preserve it when the server is gone), detach the root volume and attach a new volume, and now want that delete_on_termination flag to be False. So that might further justify this change.
>
>--
>
>Thanks,
>
>Matt
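
To make Matt's root volume scenario concrete, here is a sketch of the call sequence (the root volume detach/attach flow and the delete_on_termination field in the attach body are both still proposals, so these calls are illustrative only):

        # 1. Boot from volume, letting nova create the root volume and
        #    delete it with the server
        POST /servers
        {"server": {"name": "bfv-server", "flavorRef": "<flavor-id>",
            "block_device_mapping_v2": [
                {"boot_index": 0, "uuid": "<image-id>", "source_type": "image",
                 "destination_type": "volume", "volume_size": 10,
                 "delete_on_termination": true}]}}

        # 2. Detach the root volume (the proposed root volume detach work
        #    restricts this to certain server states)
        DELETE /servers/{server_id}/os-volume_attachments/{root_volume_id}

        # 3. Attach a replacement root volume; with [spec 2] the flag could
        #    be set to false so the new volume survives server deletion
        POST /servers/{server_id}/os-volume_attachments
        {"volumeAttachment": {"volumeId": "<new-root-volume-id>",
                              "delete_on_termination": false}}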


