[openstack-dev] [Cinder] Support LVM on a shared LU

Deepak Shetty dpkshetty at gmail.com
Wed May 28 07:11:24 UTC 2014


Mitsuhiro,
  A few questions that come to mind based on your proposal:

1) There is a lot of manual work needed here. For example, every time a new
host is added, the admin needs to do FC zoning to ensure the LU is visible
to that host. Also, the method you mentioned for refreshing (echo '---' > ...)
doesn't work reliably across all storage types, does it?
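
For reference, I'm assuming the refresh you mean is the SCSI-host rescan via
sysfs; the host number below is illustrative and the exact knobs vary by HBA
driver:

    # Assumed rescan mechanism (host0 is illustrative):
    echo "- - -" > /sys/class/scsi_host/host0/scan
    # For FC HBAs a LIP may be needed before a new LU shows up:
    echo "1" > /sys/class/fc_host/host0/issue_lip

Even where both knobs exist, whether the new LU actually appears without a
reboot depends on the driver and the array.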

2) In slide 1-1, how (and who?) ensures that the compute nodes don't step on
each other when using the LVs? In other words, how is it ensured that LV1 is
not used by compute nodes 1 and 2 at the same time?
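
The usual answer would be exclusive activation through a cluster lock
manager. A minimal sketch, assuming clvmd or lvmlockd were added to the
design (the proposal doesn't mention either; the VG/LV names are
hypothetical):

    # Exclusive activation: fails if another node already has the LV
    # active. Requires a cluster-aware LVM setup (clvmd/lvmlockd).
    lvchange -aey cinder-volumes/volume-LV1

Without something like this, nothing stops two computes from activating and
writing to the same LV.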

3) In slide 1-2, you show LU1 being seen as /dev/sdx on all the nodes. This
is wrong: it can show up as anything (/dev/sdx on the control node, sdn on
compute 1, sdz on compute 2), so assuming sdx on all nodes is incorrect.
How are these different device names handled? In short, how does compute
node 2 know that LU1 is actually sdn and not sdz (assuming you had more than
one LU provisioned)?
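
The only robust way I know of is to stop relying on the sd? name and key
off a persistent identifier instead. A sketch (the scsi_id path varies by
distro):

    # Returns the same WWID for the same LU on every node, whatever
    # sd? name the kernel happened to assign:
    /lib/udev/scsi_id --whitelisted --device=/dev/sdn
    # udev also exposes stable symlinks keyed on that WWID:
    ls -l /dev/disk/by-id/

Does the proposal rely on something like this, or on the raw sd? names?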

4) What about multipath? In most production environments the FC storage will
be multipathed, hence you will actually see sdx and sdy on each node, and
you need to use the mpathN device (which is multipathed to sdx and sdy) and
NOT the sd? device to take advantage of the customer's multipath setup. How
do the nodes know which mpathN device to use, and which mpathN device maps
to which LU on the array?
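
i.e., each node would have to resolve the LU through the multipath layer by
WWID. Simplified, made-up output of 'multipath -ll' showing the mapping the
nodes would need to walk:

    $ multipath -ll
    mpatha (360060e80056e340000006e34000000a1) dm-0 HITACHI,OPEN-V
      |- 1:0:0:1 sdx 65:112 active ready running
      `- 2:0:0:1 sdy 65:128 active ready running

(The WWID in parentheses is what ties mpatha back to a specific LU on the
array; everything above is illustrative.)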

5) Doesn't this new proposal also require the compute nodes to be physically
connected (via FC) to the array? That means more wiring and an FC HBA on
every compute node. With LVMiSCSI we don't need FC HBAs on the compute
nodes, so you are actually adding the cost of an FC HBA per compute node and
slowly turning a commodity system into a non-commodity one ;-) (in a way)

6) Last but not least: since you are using one big LU on the array to host
multiple volumes, you cannot possibly take advantage of the premium,
efficient snapshot/clone/mirroring features of the array, since they operate
at the LU level, not at the LV level. LV snapshots have limitations (as
mentioned by you in the other thread) and are always inefficient compared to
array snapshots. Why would someone want to use a less efficient method when
they have invested in an expensive array?
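
For comparison, a classic LVM snapshot is a pre-allocated copy-on-write LV
(names and size hypothetical):

    # CoW snapshot: every first write to the origin now costs an extra
    # copy, and if the 1G CoW area fills up the snapshot is invalidated.
    lvcreate -s -L 1G -n volume-LV1-snap cinder-volumes/volume-LV1

Array-side snapshots have neither the sizing problem nor the first-write
penalty.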

thanx,
deepak



On Tue, May 20, 2014 at 9:01 PM, Mitsuhiro Tanino
<mitsuhiro.tanino at hds.com> wrote:

>  Hello All,
>
> I'm proposing a feature of the LVM driver to support LVM on a shared LU.
>
> The proposed LVM volume driver provides these benefits:
>   - Reduce hardware-based storage workload by offloading the workload to
>     software-based volume operations.
>   - Provide quicker volume creation and snapshot creation without storage
>     workloads.
>   - Enable Cinder to use any kind of shared storage volume without a
>     specific Cinder storage driver.
>   - Better I/O performance using direct volume access via Fibre Channel.
>
> The attached PDF explains the following:
>
>   1. Detail of the proposed LVM volume driver
>   1-1. Big picture
>   1-2. Administrator preparation
>   1-3. Work flow of volume creation and attachment
>
>   2. Target of the proposed LVM volume driver
>
>   3. Comparison of the proposed LVM volume driver
>
> Could you review the attachment?
> Any comments, questions, or additional ideas would be appreciated.
>
> There are also blueprints, a wiki page, and patches related to the slides:
>
> https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage
> https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage
> https://wiki.openstack.org/wiki/Cinder/NewLVMbasedDriverForSharedStorageInCinder
> https://review.openstack.org/#/c/92479/
> https://review.openstack.org/#/c/92443/
>
> Regards,
> Mitsuhiro Tanino <mitsuhiro.tanino at hds.com>
>      HITACHI DATA SYSTEMS
>      c/o Red Hat, 314 Littleton Road, Westford, MA 01886