[openstack-dev] [nova][cinder] Questions about truncated disk serial number

Walter Boring waboring at hemna.com
Wed Jan 31 12:59:27 UTC 2018


First off, the IDs you are showing there are Cinder UUIDs, which identify the
volumes in the Cinder DB and are used for Cinder-side actions.  The IDs
that are seen and used by the system for discovery and passed to qemu are
the disk SCSI IDs, which are embedded in the volumes themselves.  os-brick
returns the SCSI ID to Nova for use in attaching, and it is not limited to
20 characters.
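
To make the distinction concrete, here is a minimal sketch of host-side
discovery, assuming a udev-populated /dev/disk/by-id (the helper and the
prefix filtering are illustrative, not os-brick's actual API):

    import os

    def host_scsi_ids():
        # Host-side discovery keys off the SCSI WWID embedded in the
        # volume itself (udev's "scsi-"/"wwn-" links), not the Cinder
        # UUID that identifies the volume in the Cinder DB.
        by_id = "/dev/disk/by-id"
        return {name: os.path.realpath(os.path.join(by_id, name))
                for name in os.listdir(by_id)
                if name.startswith(("scsi-", "wwn-"))}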



On Tue, Jan 16, 2018 at 4:19 AM, Yikun Jiang <yikunkero at gmail.com> wrote:

> Some detailed steps below:
> 1. First, we have two volumes whose UUIDs share the same prefix.
> [image: Inline image 1]
>
> Volume (yikun2) is attached to server (test).
>
> 2. In the guest OS (CentOS 7), take a look at by-path and by-id:
> [image: Inline image 2]
> We found that both the by-path and by-id vdb links were generated
> successfully.
>
> 3. Attach volume (yikun2_1) to server (test).
> [image: Inline image 4]
>
> 4. In the guest OS (CentOS 7), take a look at by-path and by-id:
>
> [image: Inline image 6]
>
> The by-path soft link was generated successfully, but the by-id link
> failed to be generated.
> *That is, in this case, if a user looks up the device by by-id, they may
> fail to find it or find the wrong device.*
>
> One case where this happened is Kubernetes device discovery; for more
> info, see the reference below:
> https://github.com/kubernetes/kubernetes/blob/53a8ac753bf468eaf6bcb5a07e34a0a67480df43/pkg/cloudprovider/providers/openstack/openstack_volumes.go#L463
>
> So, I think by-id is NOT a good way to find the device, but what is the
> best practice? Let's hear other ideas.
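>
> For illustration, a minimal sketch of the by-id lookup in question,
> assuming the "virtio-" + 20-character naming seen in this thread (the
> helper name is hypothetical, not the actual Kubernetes code):
>
>     import os
>
>     def find_device(volume_id):
>         # udev names the link "virtio-" + the serial exposed by the
>         # disk, i.e. the Cinder UUID truncated to 20 characters.
>         link = "/dev/disk/by-id/virtio-" + volume_id[:20]
>         if os.path.islink(link):
>             # On a 20-character prefix collision this may resolve
>             # to the WRONG device.
>             return os.path.realpath(link)
>         return None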
>
> Regards,
> Yikun
>
> ----------------------------------------
> Jiang Yikun(Kero)
> Mail: yikunkero at gmail.com
>
> 2018-01-16 14:36 GMT+08:00 Zhenyu Zheng <zhengzhenyulixi at gmail.com>:
>
>> Oops, forgot references:
>> [1] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/include/uapi/linux/virtio_blk.h#L54
>> [2] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/drivers/block/virtio_blk.c#L363
>>
>> On Tue, Jan 16, 2018 at 2:35 PM, Zhenyu Zheng <zhengzhenyulixi at gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I ran into a problem like this recently:
>>>
>>> When attaching a volume to an instance, in the XML, the disk is
>>> described as:
>>>
>>> [image: Inline image 1]
>>> where the serial number here is the volume UUID in Cinder. Meanwhile,
>>> inside the VM, in /dev/disk/by-id, there is a link for vdb named
>>> "virtio-" + the truncated serial number:
>>>
>>> [image: Inline image 2]
>>>
>>> and according to https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/ch16s03.html
>>>
>>> it seems that we will use this to mount the volume.
>>>
>>> The truncation seems to happen here [1][2], at 20 characters.
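>>>
>>> A minimal sketch of the effect, using the 20-byte limit from [1]
>>> (VIRTIO_BLK_ID_BYTES) and two hypothetical UUIDs:
>>>
>>>     VIRTIO_BLK_ID_BYTES = 20  # ID string length, per [1]
>>>
>>>     def by_id_name(serial):
>>>         # The driver exposes at most VIRTIO_BLK_ID_BYTES of the
>>>         # serial [2]; udev names the link "virtio-" + that value.
>>>         return "virtio-" + serial[:VIRTIO_BLK_ID_BYTES]
>>>
>>>     a = "15e67895-1234-4abc-8def-000000000001"
>>>     b = "15e67895-1234-4abc-8def-000000000002"
>>>     assert by_id_name(a) == by_id_name(b)  # same link name collides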
>>>
>>> *My question here is:* if two volumes have identical first 20
>>> characters in their UUIDs, it seems that the later-attached one will
>>> overwrite the first one's link:
>>> [image: Inline image 3]
>>> (the above screenshot is from a volume-backed instance; virtio-15exxxxx
>>> pointed to vda before, though the by-path links seem correct)
>>>
>>> It is rare for two UUIDs to share their first 20 characters, but
>>> possible, so what was the consideration behind truncating the volume
>>> UUID to 20 characters instead of using the full 32?
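>>>
>>> As a back-of-the-envelope estimate of "rare", assuming random
>>> version-4 UUIDs: the first 20 characters of the canonical form carry
>>> 15 random hex digits plus 2 random bits in the variant nibble (the
>>> version nibble and 3 hyphens add nothing), about 62 bits in total:
>>>
>>>     RANDOM_BITS = 15 * 4 + 2  # ~62 bits in the first 20 characters
>>>
>>>     def collision_probability(n):
>>>         # Birthday approximation for n serials drawn at random.
>>>         return n * (n - 1) / 2 / 2 ** RANDOM_BITS
>>>
>>>     print(collision_probability(10**6))  # ~1.1e-07 for 10^6 volumes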
>>>
>>> BR,
>>>
>>> Kevin Zheng
>>>
>>
>>