[nova] Updates about Detaching/Attaching root volumes

Matt Riedemann mriedemos at gmail.com
Tue Feb 26 13:21:22 UTC 2019


On 2/26/2019 6:40 AM, Zhenyu Zheng wrote:
> I'm working on a blueprint to support detaching/attaching root volumes. The 
> blueprint has been proposed for quite a while, since Mitaka[1]; in that 
> version of the proposal, we only talked about instances in shelved_offloaded 
> status. In Stein[2] the stopped status was also added. But now we 
> realize that supporting detach/attach of the root volume on a stopped 
> instance could be problematic, since the underlying image could change, 
> which might invalidate the current host.[3]
> 
> So Matt and Sean suggested maybe we could just do it for 
> shelved_offloaded instances, and I have updated the patch according to 
> this comment. I will update the spec later, so if anyone has 
> thoughts on this, please let me know.

I mentioned this during the spec review but didn't push on it I guess, 
or must have talked myself out of it. We will also have to handle the 
image potentially changing when attaching a new root volume so that when 
we unshelve, the scheduler filters based on the new image metadata 
rather than the image metadata stored in the RequestSpec from when the 
server was originally created. But for a stopped instance, there is no 
run through the scheduler again so I don't think we can support that 
case. Also, there is no real good way for us (right now) to even compare 
the image ID from the new root volume to what was used to originally 
create the server, because for volume-backed servers 
RequestSpec.image.id is not set (I'm not sure why, but that's the way 
it's always been; the image.id is popped from the metadata [1]). And 
when we detach the root volume, we null out BDM.volume_id, so we 
can't go back and figure out what the previous root volume's image ID 
was to compare against, i.e. for a stopped instance we can't enforce that the 
underlying image is the same to support detach/attach root volume. We 
could probably hack stuff up by stashing the old volume_id/image_id in 
system_metadata, but I'd rather not play that game.
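To illustrate why the comparison is impossible, here is a rough sketch (function and field names are assumptions, not the actual nova code at [1]): for volume-backed servers the image metadata is built from the volume's volume_image_metadata, and the image id is popped off before the ImageMeta is created, so RequestSpec.image.id is never set.

```python
def image_meta_from_volume(volume):
    # Build image metadata from a volume-backed server's root volume
    # (sketch of the behavior referenced at [1], names illustrative).
    meta = dict(volume.get('volume_image_metadata', {}))
    meta.pop('id', None)  # the id is dropped here, so it can't be compared later
    return meta
```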

It also occurs to me that the root volume attach code is not 
verifying that the new root volume is bootable. So we really need to 
re-use this code on root volume attach [2].
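A hedged sketch of the kind of check the attach path would need (Cinder's API reports "bootable" as the string 'true'/'false'; the function name here is illustrative, not the actual nova helper at [2]):

```python
def check_bootable(volume):
    # Reject a volume that cannot be used as the new root volume.
    # Cinder exposes 'bootable' as a string flag in its API responses.
    if volume.get('bootable', 'false') != 'true':
        raise ValueError("volume %s is not bootable" % volume.get('id'))
```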

tl;dr when we attach a new root volume, we need to update the 
RequestSpec.image (ImageMeta) object based on the new root volume's 
underlying volume_image_metadata so that when we unshelve we use that 
image rather than the original image.
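A minimal sketch of that tl;dr, assuming a Cinder volume dict and a simple RequestSpec-like object (the real nova code would build a proper ImageMeta object; all names here are assumptions):

```python
class FakeRequestSpec:
    # Stand-in for nova's RequestSpec object (illustrative only).
    image = None

def update_request_spec_from_root_volume(request_spec, volume):
    # Use the new root volume's embedded image metadata so that at
    # unshelve time the scheduler filters on it instead of the image
    # metadata from when the server was originally created.
    request_spec.image = dict(volume.get('volume_image_metadata', {}))
    return request_spec
```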

> 
> Another thing I wanted to discuss is that in the proposal, we will reset 
> some fields in the root_bdm instead of deleting the whole record. Among 
> those fields, the tag field could be tricky. My idea was to reset it 
> too, but there could also be cases where users might expect it not to 
> change[4].

Yeah I am not sure what to do here. Here is a scenario:

User boots from volume with a tag "ubuntu1604vol" to indicate it's the 
root volume with the operating system. Then they shelve offload the 
server and detach the root volume. At this point, the GET 
/servers/{server_id}/os-volume_attachments API is going to show None for 
the volume_id on that BDM, but should it show the original tag, or None 
for that as well? Kevin currently has the tag field being reset to 
None when the root volume is detached.

When the user attaches a new root volume, they can provide a new tag, so 
even if we did not reset the tag, the user could overwrite it. As a user, 
would you expect the tag to be reset when the root volume is detached, or 
to persist but be overwritable?

If in this scenario the user then attaches a new root volume that is 
CentOS or Ubuntu 18.04 or something like that, but forgets to update the 
tag, then the old tag would be misleading.

So it is probably safest to just reset the tag like Kevin's proposed 
code is doing, but we could use some wider feedback here.
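The proposed behavior can be sketched like this (attribute names are illustrative, not the actual BDM object): both the volume id and the tag are reset on detach, and the user may supply a fresh tag on the subsequent attach.

```python
class BDM:
    # Minimal stand-in for a block device mapping record.
    def __init__(self, volume_id, tag):
        self.volume_id = volume_id
        self.tag = tag

def detach_root_volume(bdm):
    bdm.volume_id = None
    bdm.tag = None  # reset, per Kevin's proposed code
    return bdm

def attach_root_volume(bdm, volume_id, tag=None):
    bdm.volume_id = volume_id
    bdm.tag = tag  # user may supply a fresh tag, or leave it unset
    return bdm
```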

[1] 
https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/utils.py#L943
[2] 
https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/compute/api.py#L1091-L1101

-- 

Thanks,

Matt


