[nova] Updates about Detaching/Attaching root volumes

Zhenyu Zheng zhengzhenyulixi at gmail.com
Thu Feb 28 01:41:18 UTC 2019


As for this case, and what Matt mentioned in the patch review:
> restriction for attaching volumes with a tag because while it's
> true we don't know if the compute that the instance is unshelved on
> will support device tags, we also don't know that during initial
> server create but we still allow bdm tags in that case. In fact, in
> the server create case, if you try to create a server with bdm tags
> on a compute host that does not support them, it will fail the
> build (not even try to reschedule)

There is something I don't quite understand: what will be different between
the volumes that are newly attached and the existing volumes in the case you
mentioned? I mean, the existing volumes could also have tags, and when we
unshelve we still have to handle the tags in the bdms, no matter whether they
are existing bdms or ones newly attached while the instance is in
``shelved_offloaded`` status. What is the difference?
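
To illustrate what I mean, here is a minimal sketch (the helper and
capability names are made up for illustration, this is not the real
unshelve code path) of the uniform handling I have in mind:

    # Minimal sketch; helper and capability names are made up for
    # illustration and are not the real unshelve code path.

    def check_tagged_bdms_on_unshelve(driver, bdms):
        """Reject the unshelve if the target host cannot honour device tags."""
        tagged_bdms = [bdm for bdm in bdms if bdm.tag is not None]
        if tagged_bdms and not driver.capabilities.get(
                'supports_device_tagging', False):
            # Existing bdms and bdms attached while shelved_offloaded both
            # end up in the same list here, so the tag handling is identical
            # for both -- which is why I'm asking what the difference is.
            raise RuntimeError('Target host does not support device tags')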

BR,

On Thu, Feb 28, 2019 at 12:08 AM Artom Lifshitz <alifshit at redhat.com> wrote:

> On Tue, Feb 26, 2019 at 8:23 AM Matt Riedemann <mriedemos at gmail.com>
> wrote:
> >
> > On 2/26/2019 6:40 AM, Zhenyu Zheng wrote:
> > > I'm working on a blueprint to support detach/attach of root volumes. The
> > > blueprint has been proposed for quite a while, since Mitaka[1]; in that
> > > version of the proposal, we only talked about instances in
> > > shelved_offloaded status. And in Stein[2] the stopped status was also
> > > added. But now we realized that supporting detach/attach of the root
> > > volume on a stopped instance could be problematic since the underlying
> > > image could change, which might invalidate the current host.[3]
> > >
> > > So Matt and Sean suggested maybe we could just do it for
> > > shelved_offloaded instances, and I have updated the patch according to
> > > this comment. I will also update the spec later, so if anyone has
> > > thoughts on this, please let me know.
> >
> > I mentioned this during the spec review but didn't push on it I guess,
> > or must have talked myself out of it. We will also have to handle the
> > image potentially changing when attaching a new root volume so that when
> > we unshelve, the scheduler filters based on the new image metadata
> > rather than the image metadata stored in the RequestSpec from when the
> > server was originally created. But for a stopped instance, there is no
> > run through the scheduler again so I don't think we can support that
> > case. Also, there is no real good way for us (right now) to even compare
> > the image ID from the new root volume to what was used to originally
> > create the server because for volume-backed servers the
> > RequestSpec.image.id is not set (I'm not sure why, but that's the way
> > it's always been, the image.id is pop'ed from the metadata [1]). And
> > when we detach the root volume, we null out the BDM.volume_id so we
> > can't get back to figure out what that previous root volume's image ID
> > was to compare, i.e. for a stopped instance we can't enforce that the
> > underlying image is the same to support detach/attach root volume. We
> > could probably hack stuff up by stashing the old volume_id/image_id in
> > system_metadata but I'd rather not play that game.
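
(For illustration, a rough sketch of the behaviour described above; the
function and key names here are assumptions, not the exact code behind [1].
The point is that when image meta is built from the volume's
volume_image_metadata, the image id is dropped, so there is nothing left on
the RequestSpec to compare a new root volume's image against.)

    # Rough sketch only; function and key names are assumptions, not the
    # exact code behind [1].

    def build_image_meta_from_volume(volume):
        """Build image metadata for a volume-backed server from its volume."""
        properties = dict(volume.get('volume_image_metadata', {}))
        # The image id is popped and never stored on the resulting image
        # meta, so RequestSpec.image.id stays unset for volume-backed
        # servers and a later root volume swap has nothing to compare
        # against.
        properties.pop('image_id', None)
        return {'properties': properties, 'status': 'active'}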
> >
> > It also occurs to me that the root volume attach code is not verifying
> > that the new root volume is bootable, so we really need to re-use this
> > code on root volume attach [2].
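
(As a sketch of the kind of check meant here; the helper and exception
choices are placeholders, the real check is the existing code at [2] that
should simply be re-used on the root volume attach path.)

    # Placeholder sketch; the real check is the existing code at [2], this
    # just shows the intent.

    def ensure_new_root_volume_is_bootable(volume):
        """Reject attaching a non-bootable volume as the root device."""
        # Cinder's API reports 'bootable' as the strings 'true'/'false',
        # so normalise before comparing.
        if str(volume.get('bootable', 'false')).lower() != 'true':
            raise ValueError(
                'Volume %s is not bootable and cannot be attached as the '
                'root device' % volume['id'])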
> >
> > tl;dr when we attach a new root volume, we need to update the
> > RequestSpec.image (ImageMeta) object based on the new root volume's
> > underlying volume_image_metadata so that when we unshelve we use that
> > image rather than the original image.
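
(A rough sketch of that tl;dr, with hypothetical helper names and simplified
field handling; this is not the proposed implementation.)

    # Rough sketch of the tl;dr above; helper and field handling are
    # assumptions, not the proposed implementation.
    from nova import context as nova_context
    from nova import objects


    def refresh_request_spec_image(instance, new_root_volume):
        """Point RequestSpec.image at the new root volume's image metadata."""
        ctxt = nova_context.get_admin_context()
        reqspec = objects.RequestSpec.get_by_instance_uuid(
            ctxt, instance.uuid)
        # Rebuild the ImageMeta from the volume's underlying image metadata
        # so the scheduler filters on the *new* image when the server is
        # unshelved, rather than the image the server was created from.
        vol_image_meta = new_root_volume.get('volume_image_metadata', {})
        reqspec.image = objects.ImageMeta.from_dict(
            {'properties': vol_image_meta})
        reqspec.save()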
> >
> > >
> > > Another thing I wanted to discuss is that in the proposal, we will
> > > reset some fields in the root_bdm instead of deleting the whole record;
> > > among those fields, the tag field could be tricky. My idea was to reset
> > > it too, but there could also be cases where the users might think that
> > > it would not change[4].
> >
> > Yeah I am not sure what to do here. Here is a scenario:
> >
> > User boots from volume with a tag "ubuntu1604vol" to indicate it's the
> > root volume with the operating system. Then they shelve offload the
> > server and detach the root volume. At this point, the GET
> > /servers/{server_id}/os-volume_attachments API is going to show None for
> > the volume_id on that BDM, but should it show the original tag or also
> > show None for that? Kevin currently has the tag field being reset to
> > None when the root volume is detached.
> >
> > When the user attaches a new root volume, they can provide a new tag so
> > even if we did not reset the tag, the user can overwrite it. As a user,
> > would you expect the tag to be reset when the root volume is detached or
> > have it persist but be overwritable?
> >
> > If in this scenario the user then attaches a new root volume that is
> > CentOS or Ubuntu 18.04 or something like that, but forgets to update the
> > tag, then the old tag would be misleading.
>
> The tag is a Nova concept on the attachment. If you detach a volume
> (root or not) then attach a different one (root or not), to me that's
> a new attachment, with a new (potentially None) tag. I have no idea
> how that fits into the semantics around root volume detach, but that's
> my 2 cents.
>
> >
> > So it is probably safest to just reset the tag like Kevin's proposed
> > code is doing, but we could use some wider feedback here.
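
(For concreteness, a simplified sketch of the reset behaviour being
discussed; this mirrors the idea, not Kevin's actual patch.)

    # Simplified sketch of the proposed behaviour, not Kevin's actual patch.

    def detach_root_volume(root_bdm):
        """Keep the root bdm record but clear its volume-specific fields."""
        root_bdm.volume_id = None
        # Resetting the tag avoids a stale tag like "ubuntu1604vol" silently
        # surviving onto a new root volume with a different OS; the user can
        # pass a fresh tag when attaching the replacement volume.
        root_bdm.tag = None
        root_bdm.save()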
> >
> > [1]
> > https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/utils.py#L943
> > [2]
> > https://github.com/openstack/nova/blob/33f367ec2f32ce36b00257c11c5084400416774c/nova/compute/api.py#L1091-L1101
> >
> > --
> >
> > Thanks,
> >
> > Matt
> >
>
>
> --
> --
> Artom Lifshitz
> Software Engineer, OpenStack Compute DFG
>
>