[openstack-dev] [Nova][Cinder] Feature about Raw Device Mapping
zhangleiqiang at gmail.com
Wed Mar 19 12:03:31 UTC 2014
On second thought, it would be more meaningful to just add virtio-SCSI bus type support to block-device-mapping.
RDM can then be used or not, depending on the bus type and device type of the bdm specified by the user. Users can also choose the virtio-SCSI bus purely for performance, without pass-through.
Any suggestions?
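As a concrete illustration of the proposal, here is a sketch of what such a block-device-mapping-v2 entry could look like. The keys shown (uuid, source_type, destination_type, disk_bus, device_type, boot_index) are the documented BDM v2 fields; the helper function is purely illustrative, not Nova code, and the idea that bus=scsi plus device=lun would trigger a virtio-scsi controller is the proposal under discussion, not current behavior:

```python
# Sketch of a block-device-mapping-v2 entry requesting SCSI
# pass-through ("lun" device) on a SCSI bus. Under this proposal,
# such an entry would cause the libvirt driver to create a
# virtio-scsi controller for the guest.
bdm = {
    "uuid": "VOLUME_UUID",      # Cinder volume to attach (placeholder)
    "source_type": "volume",
    "destination_type": "volume",
    "disk_bus": "scsi",         # already accepted by BDM v2
    "device_type": "lun",       # enables SCSI command pass-through
    "boot_index": -1,           # not the boot device
}

def needs_virtio_scsi_controller(bdm):
    """Illustrative helper: a guest would need a virtio-scsi controller
    when any disk uses the scsi bus with pass-through semantics."""
    return bdm.get("disk_bus") == "scsi" and bdm.get("device_type") == "lun"
```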
"Zhangleiqiang (Trump)" <zhangleiqiang at huawei.com> wrote:
>> From: Huang Zhiteng [mailto:winston.d at gmail.com]
>> Sent: Wednesday, March 19, 2014 12:14 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
>> Mapping
>>
>> On Tue, Mar 18, 2014 at 5:33 PM, Zhangleiqiang (Trump)
>> <zhangleiqiang at huawei.com> wrote:
>>>> From: Huang Zhiteng [mailto:winston.d at gmail.com]
>>>> Sent: Tuesday, March 18, 2014 4:40 PM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
>>>> Mapping
>>>>
>>>> On Tue, Mar 18, 2014 at 11:01 AM, Zhangleiqiang (Trump)
>>>> <zhangleiqiang at huawei.com> wrote:
>>>>>> From: Huang Zhiteng [mailto:winston.d at gmail.com]
>>>>>> Sent: Tuesday, March 18, 2014 10:32 AM
>>>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>>>> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw
>>>>>> Device Mapping
>>>>>>
>>>>>> On Tue, Mar 18, 2014 at 9:40 AM, Zhangleiqiang (Trump)
>>>>>> <zhangleiqiang at huawei.com> wrote:
>>>>>>> Hi, stackers:
>>>>>>>
>>>>>>> With RDM, the storage logical unit number (LUN) can be
>>>>>>> directly connected to an instance from the storage area
>>>>>>> network (SAN).
>>>>>>>
>>>>>>> For most data center applications, including database,
>>>>>>> CRM and ERP applications, RDM can be used for configurations
>>>>>>> involving clustering between instances, between physical hosts
>>>>>>> and instances, or where SAN-aware applications are running
>>>>>>> inside an instance.
>>>>>> If 'clustering' here refers to things like a cluster file system,
>>>>>> it requires LUNs to be connected to multiple instances at the
>>>>>> same time.
>>>>>> And since you mentioned Cinder, I suppose the LUNs (volumes) are
>>>>>> managed by Cinder, then you have an extra dependency for
>>>>>> multi-attach feature:
>>>>>> https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.
>>>>>
>>>>> Yes. "Clustering" includes Oracle RAC, MSCS, etc. If they are to
>>>>> work in an instance-based cloud environment, RDM and
>>>>> multi-attached volumes are both needed.
>>>>>
>>>>> But RDM is not only used for clustering, and it has no dependency
>>>>> on multi-attach-volume.
>>>>
>>>> Setting the clustering use case and performance improvement aside,
>>>> what other benefits/use cases can RDM bring/be useful for?
>>>
>>> Thanks for your reply.
>>>
>>> The advantages of raw device mapping all come from its capability to
>>> pass SCSI commands through to the device, and the most common use
>>> cases are the clustering and performance improvement mentioned above.
>> As mentioned in an earlier email, I suspect the performance improvement
>> comes from the 'virtio-scsi' interface rather than from RDM. We can
>> actually test this to verify. Here's what I would do: create one LUN
>> (volume) on the SAN, attach the volume to an instance using the current
>> attach code path but change the virtual bus to 'virtio-scsi', and measure
>> the IO performance using a standard IO benchmark; next, attach the volume
>> using 'lun' instead of 'disk' as the device type and 'virtio-scsi' for the
>> bus, and do the measurement again. We should be able to see the performance
>> difference if there is any. Since I don't have a SAN to play with, could
>> you please run the test and share the results?
>
> The performance improvement does come from the "virtio-scsi" controller; it is not caused by using a "lun" device instead of a "disk" device.
> I don't have a usable SAN at present. But according to libvirt's docs ([1]), the "lun" device behaves identically to the "disk" device except that generic SCSI commands from the instance are accepted and passed through to the physical device.
>
> Sorry for the confusion. The "RDM" I mentioned in the earlier email includes both the "lun" device and the "virtio-scsi" controller.
>
> Now, the performance improvement comes from the "virtio-scsi" controller; however, booting from a volume using the virtio-scsi interface and attaching a volume via a new virtio-scsi interface are both currently unsupported. I think adding these features is meaningful. And as mentioned in the first email, setting the "virtio-scsi" controller aside, the "lun" device is already supported by the block-device-mapping-v2 extension.
>
> [1] http://libvirt.org/formatdomain.html#elementsDisks
>
>>> And besides these two scenarios, there is another use case: running
>>> SAN-aware applications inside instances, such as:
>>> 1. SAN management app
>> Yes, that is possible if RDM is enabled. But I wonder what the real use
>> case behind this is. Even though a SAN mgmt app inside an instance is able
>> to manage the LUN directly, it is just a LUN rather than a real SAN; what
>> the instance can do is *limited* to that specific LUN, which doesn't seem
>> very useful IMO. Or are you thinking about creating a big enough LUN for
>> users so they can treat it like a 'virtual' SAN, do all kinds of management
>> stuff to it, and maybe even resell it for PaaS use cases?
>>
>>> 2. Apps which can offload device-related work, such as snapshot, backup,
>>> etc., to the SAN.
>> Not sure I follow these use cases either, nor do I understand why end
>> users would want to do all those operations _inside_ the instance instead
>> of utilizing existing infrastructure like Cinder. If the goal behind this
>> is to make traditional IT users happy, I tend to agree with what Duncan
>> said in another thread
>> (http://osdir.com/ml/openstack-dev/2014-03/msg01395.html)
>
> Maybe there is some misunderstanding. The goal behind this is not to make traditional IT users happy.
> For end users, what matters is that apps which previously worked on physical servers can now run in a cloud environment without extra limitations. It is not about making traditional IT users happy on purpose.
>
>
>>>
>>>
>>>>>
>>>>>>> RDM, which permits the use of existing SAN commands, is
>>>>>>> generally used to improve performance in I/O-intensive
>>>>>>> applications and block locking. Physical mode provides access to
>>>>>>> most hardware functions of the storage system that is mapped.
>>>>>> It seems to me that the performance benefit comes mostly from
>>>>>> virtio-scsi, which is just a virtual disk interface and thus should
>>>>>> also benefit all virtual disk use cases, not just raw device mapping.
>>>>>>>
>>>>>>> For the libvirt driver, the RDM feature can be enabled through
>>>>>>> the "lun" device connected to a "virtio-scsi" controller:
>>>>>>>
>>>>>>> <disk type='block' device='lun'>
>>>>>>>   <driver name='qemu' type='raw' cache='none'/>
>>>>>>>   <source dev='/dev/mapper/360022a110000ecba5db427db00000023'/>
>>>>>>>   <target dev='sdb' bus='scsi'/>
>>>>>>>   <address type='drive' controller='0' bus='0'/>
>>>>>>> </disk>
>>>>>>>
>>>>>>> <controller type='scsi' index='0' model='virtio-scsi'/>
>>>>>>>
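To sanity-check the shape of such a domain fragment programmatically, one could build it with Python's ElementTree. This is a standalone illustration (not Nova code); the device path is the one from the example above:

```python
import xml.etree.ElementTree as ET

# Build the <disk> element for a pass-through LUN on a SCSI bus,
# mirroring the libvirt XML quoted above.
disk = ET.Element("disk", type="block", device="lun")
ET.SubElement(disk, "driver", name="qemu", type="raw", cache="none")
ET.SubElement(disk, "source",
              dev="/dev/mapper/360022a110000ecba5db427db00000023")
ET.SubElement(disk, "target", dev="sdb", bus="scsi")
ET.SubElement(disk, "address", type="drive", controller="0", bus="0")

# The matching controller that makes bus='scsi' mean virtio-scsi
# rather than the default emulated LSI controller.
controller = ET.Element("controller", type="scsi", index="0",
                        model="virtio-scsi")

disk_xml = ET.tostring(disk, encoding="unicode")
controller_xml = ET.tostring(controller, encoding="unicode")
```

Note that it is the controller's model attribute, not anything on the disk itself, that selects virtio-scsi; the disk only references the controller via its address.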
>>>>>>> Currently, the related work in OpenStack is as follows:
>>>>>>> 1. The block-device-mapping-v2 extension already supports the
>>>>>>> "lun" device with the "scsi" bus type listed above, but cannot
>>>>>>> make the disk use a "virtio-scsi" controller instead of the
>>>>>>> default "lsi" SCSI controller.
>>>>>>> 2. The libvirt-virtio-scsi-driver BP ([1]), whose milestone
>>>>>>> target is icehouse-3, aims to support generating a virtio-scsi
>>>>>>> controller when using an image with the "virtio-scsi" property,
>>>>>>> but it seems not to take boot-from-volume and attaching an RDM
>>>>>>> volume into account.
>>>>>>>
>>>>>>> I think it is meaningful to provide full support for the RDM
>>>>>>> feature in OpenStack.
>>>>>>>
>>>>>>> Any thoughts? Any advice is welcome.
>>>>>>>
>>>>>>>
>>>>>>> [1]
>>>>>>> https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-scsi-driver
>>>>>>> ----------
>>>>>>> zhangleiqiang (Trump)
>>>>>>>
>>>>>>> Best Regards
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> OpenStack-dev mailing list
>>>>>>> OpenStack-dev at lists.openstack.org
>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Regards
>>>>>> Huang Zhiteng
>>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Regards
>>>> Huang Zhiteng
>>>>
>>
>>
>>
>> --
>> Regards
>> Huang Zhiteng
>>