[openstack-dev] [nova][cinder] Questions about truncated disk serial number

Lucian Petrut lpetrut at cloudbasesolutions.com
Wed Jan 31 14:36:57 UTC 2018


Actually, when using the libvirt driver, the SCSI ID returned by os-brick is not exposed to the guest. The reason is that Nova explicitly sets the volume ID as the "serial" of the guest disk configuration. QEMU exposes this to the guest, but with a 20-character limit.
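To make the effect concrete, here is a minimal Python sketch (not Nova code; the constant name comes from the kernel header referenced later in this thread):

# Minimal sketch, not Nova code: virtio-blk carries at most
# VIRTIO_BLK_ID_BYTES (20) bytes of serial, so the guest sees only
# a prefix of the 36-character Cinder volume UUID that Nova sets.
VIRTIO_BLK_ID_BYTES = 20  # from include/uapi/linux/virtio_blk.h

volume_id = "10780b60-ad70-479f-a612-14d03b1cc64d"
guest_serial = volume_id[:VIRTIO_BLK_ID_BYTES]
print(guest_serial)  # -> 10780b60-ad70-479f-a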

For what it's worth, Kubernetes, as well as some guides, relies on this behaviour.

For example:

nova volume-attach e03303e1-c20b-441c-b94a-724cb2469487 10780b60-ad70-479f-a612-14d03b1cc64d
virsh dumpxml `nova show cirros | grep instance_name | cut -d "|" -f 3`

<domain type='qemu' id='10'>
  <name>instance-0000013d</name>
  <uuid>e03303e1-c20b-441c-b94a-724cb2469487</uuid>
....
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native'/>
    <source dev='/dev/sdb'/>
    <backingStore/>
    <target dev='vdb' bus='virtio'/>
    <serial>10780b60-ad70-479f-a612-14d03b1cc64d</serial>
    <alias name='virtio-disk1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
  </disk>

nova log:
Jan 31 15:39:54 ubuntu nova-compute[46142]: DEBUG os_brick.initiator.connectors.iscsi [None req-d0c62440-133c-4e89-8798-20278ca50f00 admin admin] <== connect_volume: return (2578ms) {'path': u'/dev/sdb', 'scsi_wwn': u'360000000000000000e00000000010001', 'type': u'block'} {{(pid=46142) trace_logging_wrapper /usr/local/lib/python2.7/dist-packages/os_brick/utils.py:170}}
Jan 31 15:39:54 ubuntu nova-compute[46142]: DEBUG nova.virt.libvirt.volume.iscsi [None req-d0c62440-133c-4e89-8798-20278ca50f00 admin admin] Attached iSCSI volume {'path': u'/dev/sdb', 'scsi_wwn': '360000000000000000e00000000010001', 'type': 'block'} {{(pid=46142) connect_volume /opt/stack/nova/nova/virt/libvirt/volume/iscsi.py:65}}
Jan 31 15:39:54 ubuntu nova-compute[46142]: DEBUG nova.virt.libvirt.guest [None req-d0c62440-133c-4e89-8798-20278ca50f00 admin admin] attach device xml: <disk type="block" device="disk">
Jan 31 15:39:54 ubuntu nova-compute[46142]:   <driver name="qemu" type="raw" cache="none" io="native"/>
Jan 31 15:39:54 ubuntu nova-compute[46142]:   <source dev="/dev/sdb"/>
Jan 31 15:39:54 ubuntu nova-compute[46142]:   <target bus="virtio" dev="vdb"/>
Jan 31 15:39:54 ubuntu nova-compute[46142]:   <serial>10780b60-ad70-479f-a612-14d03b1cc64d</serial>
Jan 31 15:39:54 ubuntu nova-compute[46142]: </disk>
Jan 31 15:39:54 ubuntu nova-compute[46142]:  {{(pid=46142) attach_device /opt/stack/nova/nova/virt/libvirt/guest.py:302}}

Regards,
Lucian Petrut

On Wed, 2018-01-31 at 07:59 -0500, Walter Boring wrote:
First off, the IDs you are showing there are Cinder UUIDs, which identify the volumes in the Cinder DB and are used for Cinder-side actions.  The IDs that are seen and used by the system for discovery and passed to QEMU are the disk SCSI IDs, which are embedded in the volumes themselves.  os-brick returns the SCSI ID to Nova for use in attaching, and it's not limited to 20 characters.
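For illustration, here is the shape of the two identifiers side by side, with values copied from the connect_volume log excerpt earlier in this thread (a sketch, not os-brick code):

# Values copied from the log excerpt above, for illustration only.
# os-brick's connect_volume() returns the host-side SCSI WWN in full,
# while Nova puts the Cinder volume UUID into the <serial> element.
connect_volume_result = {
    "path": "/dev/sdb",
    "scsi_wwn": "360000000000000000e00000000010001",  # full WWN, no 20-char cap
    "type": "block",
}
cinder_volume_id = "10780b60-ad70-479f-a612-14d03b1cc64d"  # truncated in the guest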



On Tue, Jan 16, 2018 at 4:19 AM, Yikun Jiang <yikunkero at gmail.com> wrote:
Some detailed steps below:
1. First, we have two volumes with the same partial UUID prefix.
[Inline image 1]

Volume (yikun2) is attached to server (test).

2. In the guest OS (CentOS 7), look at by-path and by-id:
[Inline image 2]
We found that both the by-path and by-id links for vdb were generated successfully.

3. Attach volume (yikun2_1) to server (test).
[Inline image 4]

4. In the guest OS (CentOS 7), look at by-path and by-id again:

[Inline image 6]

The by-path symlink was generated successfully, but the by-id link failed to be generated.
That is, in this case, if a user looks the device up via by-id, the lookup will either fail or resolve to the wrong device.

One such case came up in Kubernetes device discovery; for more info, see the reference below:
https://github.com/kubernetes/kubernetes/blob/53a8ac753bf468eaf6bcb5a07e34a0a67480df43/pkg/cloudprovider/providers/openstack/openstack_volumes.go#L463

So I think by-id is NOT a good way to find the device, but what is the best practice? Let's hear other ideas. A minimal sketch of this kind of by-id lookup, and how it breaks, is shown below.
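# Hypothetical Python sketch in the spirit of the Kubernetes lookup
# linked above (the function name is mine, not from that code). It
# keys on "virtio-" plus the first 20 characters of the volume UUID,
# so two volumes sharing a 20-character prefix are indistinguishable,
# and the link may resolve to the wrong device.
import os

def find_device_by_volume_id(volume_id):
    name = "virtio-" + volume_id[:20]
    link = os.path.join("/dev/disk/by-id", name)
    if os.path.islink(link):
        return os.path.realpath(link)  # e.g. /dev/vdb
    return None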

Regards,
Yikun

----------------------------------------
Jiang Yikun(Kero)
Mail: yikunkero at gmail.com

2018-01-16 14:36 GMT+08:00 Zhenyu Zheng <zhengzhenyulixi at gmail.com>:
Oops, forgot references:
[1] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/include/uapi/linux/virtio_blk.h#L54
[2] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce695bbc5cff3a2bf3e2efd25f/drivers/block/virtio_blk.c#L363

On Tue, Jan 16, 2018 at 2:35 PM, Zhenyu Zheng <zhengzhenyulixi at gmail.com> wrote:
Hi,

I ran into a problem like this recently:

When attaching a volume to an instance, the disk is described in the XML as:

[Inline image 1]
where the serial number here is the volume UUID in Cinder. Meanwhile, inside the VM,
in /dev/disk/by-id, there is a link for vdb whose name is "virtio-" plus the truncated serial number:

[Inline image 2]

and according to https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/ch16s03.html

it seems that this link is used to mount the volume.

The truncation seems to happen here [1][2], at 20 characters.

My question here is: if two volumes have identical first 20 characters in their UUIDs, it seems that the later-attached one will overwrite the first one's link:
[Inline image 3]
(The above screenshot is from a volume-backed instance; virtio-15exxxxx pointed to vda before, though the by-path links look correct.)

It is rare for two UUIDs to share their first 20 characters, but it is possible. So what was the consideration behind truncating the volume UUID to 20 characters instead of using all 32 hex digits?
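For concreteness, here is a hypothetical (made-up) pair of UUIDs that collide after truncation:

# Hypothetical UUIDs, made up for illustration: two distinct volumes
# whose UUIDs share their first 20 characters get the same
# /dev/disk/by-id name, so the second attach overwrites the link.
a = "15e00000-0000-4000-8000-aaaaaaaaaaaa"
b = "15e00000-0000-4000-8000-bbbbbbbbbbbb"
assert a != b
assert "virtio-" + a[:20] == "virtio-" + b[:20]  # both virtio-15e00000-0000-4000-8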

BR,

Kevin Zheng

