<div dir="ltr"><div><div><div><div><div><div><div><div>Mitsuhiro,<br></div> Few questions that come to my mind based on your proposal<br></div><br>1) There is a lof of manual work needed here.. like every time the new host added.. admin needs to do FC zoning to ensure that LU is visible by the host. Also the method you mentioend for refreshing (echo '---' > ...) doesn't work reliably across all storage types does it ?<br>
2) In slide 1-1, how (and who?) ensures that the compute nodes don't step on each other when using the LVs? In other words, how is it ensured that LV1 is not used by compute nodes 1 and 2 at the same time?
3) In slide 1-2, you show LU1 being seen as /dev/sdx on all the nodes. That is not right: it can show up as anything (/dev/sdx on the control node, sdn on compute 1, sdz on compute 2), so assuming sdx on all nodes is wrong. How are these different device names handled? In short, how does compute node 2 know that LU1 is actually sdn and not sdz (assuming you had more than one LU provisioned)?

4) What about multipath? In most production environments the FC storage will be multipathed, so you will actually see sdx and sdy on each node, and you need to use the mpathN device (which is multipathed over sdx and sdy), NOT the sd? device, to take advantage of the customer's multipath environment. How do the nodes know which mpathN device to use, and which mpathN device maps to which LU on the array?
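To make the concern concrete, the only stable handle for an LU across nodes is its WWID, via the udev symlinks, and in a multipath environment the device-mapper device has to be chosen over the raw sd? path. A minimal sketch of what that lookup might look like (the WWID value is made up for illustration):

# Sketch only: resolve an array LU to its local block device by WWID through
# the udev-created symlinks, preferring the multipath (device-mapper) device
# when multipathd has claimed the LU.
import os

def find_device_for_wwid(wwid):
    """Return the local device node for a given LU WWID, or None."""
    candidates = (
        "/dev/disk/by-id/dm-uuid-mpath-%s" % wwid,  # multipath device, if any
        "/dev/disk/by-id/scsi-%s" % wwid,           # plain single-path device
    )
    for link in candidates:
        if os.path.exists(link):
            return os.path.realpath(link)  # e.g. /dev/dm-3 or /dev/sdn
    return None

if __name__ == "__main__":
    # Hypothetical WWID, for illustration only.
    print(find_device_for_wwid("360060e80056e230000006e2300000001"))
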
5) Doesn't this new proposal also require the compute nodes to be physically connected (via FC) to the array? That means more wiring and the need for an FC HBA on each compute node. With LVMiSCSI we don't need FC HBAs on the compute nodes, so you are actually adding the cost of an FC HBA per compute node and slowly turning a commodity system into a non-commodity one ;-) (in a way).

6) Last but not least: since you are using 1 BIG LU on the array to host multiple volumes, you cannot possibly take advantage of the premium, efficient snapshot/clone/mirroring features of the array, since those work at the LU level, not at the LV level. LV snapshots have limitations (as you mentioned in the other thread) and are always inefficient compared to array snapshots. Why would someone want to use a less efficient method when they have invested in an expensive array?
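For comparison, the LV-level path you would be left with is the standard copy-on-write lvcreate snapshot, roughly as sketched below (VG/LV names are made up); every write to the origin then pays an extra copy into the snapshot's CoW area, which is the inefficiency I mean:

# Sketch of the LV-level copy-on-write snapshot being compared against array
# snapshots. VG/LV/snapshot names and sizes are illustrative only.
import subprocess

def create_lv_snapshot(vg, lv, snap_name, cow_size="1G"):
    # Classic LVM CoW snapshot: after this, every write to the origin LV first
    # copies the old blocks into the snapshot's CoW area; the snapshot becomes
    # invalid if that area fills up.
    subprocess.check_call([
        "lvcreate", "--snapshot",
        "--name", snap_name,
        "--size", cow_size,
        "/dev/%s/%s" % (vg, lv),
    ])

if __name__ == "__main__":
    create_lv_snapshot("cinder-volumes", "volume-0001", "snap-volume-0001")
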
thanx,
deepak

On Tue, May 20, 2014 at 9:01 PM, Mitsuhiro Tanino <mitsuhiro.tanino@hds.com> wrote:
<p class="MsoNormal">Hello All,<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">I’m proposing a feature of LVM driver to support LVM on a shared LU.<span><u></u><u></u></span></p>
<p class="MsoNormal"><span>The proposed LVM volume driver provides these
</span>benefits<span>.</span><br>
<span> </span>- <span>
R</span>educe hardware based storage workload by offloading the workload to software based volume operation.<br>
<span> </span>- Provide quicker volume creation and snapshot creation without storage workloads.<br>
<span> </span>- Enable cinder to any kinds of shared storage volumes without specific cinder storage<span>
</span>driver.<span><u></u><u></u></span></p>
<p class="MsoNormal"><span> </span>-<span> Better I/O performance using direct volume access via Fibre channel.<u></u><u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal">In the attachment pdf, following contents are explained.<u></u><u></u></p>
<p class="MsoNormal"> 1. Detail of Proposed LVM volume driver<u></u><u></u></p>
<p class="MsoNormal"> 1-1. Big Picture<u></u><u></u></p>
<p class="MsoNormal"> 1-2. Administrator preparation<u></u><u></u></p>
<p class="MsoNormal"> 1-3. Work flow of volume creation and attachment<u></u><u></u></p>
<p class="MsoNormal"> 2. Target of Proposed LVM volume driver<u></u><u></u></p>
<p class="MsoNormal"> 3. Comparison of Proposed LVM volume driver<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Could you review the attachment?<u></u><u></u></p>
<p class="MsoNormal">Any comments, questions<span>, additional ideas</span> would be appreciated.<span><u></u><u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal"><span><u></u> <u></u></span></p>
<p class="MsoNormal">Also there are blueprints, wiki and patches related to the slide.<u></u><u></u></p>
<p class="MsoNormal"><a href="https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage" target="_blank">https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage</a><u></u><u></u></p>
<p class="MsoNormal"><a href="https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage" target="_blank">https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage</a><u></u><u></u></p>
<p class="MsoNormal"><a href="https://wiki.openstack.org/wiki/Cinder/NewLVMbasedDriverForSharedStorageInCinder" target="_blank">https://wiki.openstack.org/wiki/Cinder/NewLVMbasedDriverForSharedStorageInCinder</a><u></u><u></u></p>
<p class="MsoNormal"><a href="https://review.openstack.org/#/c/92479/" target="_blank">https://review.openstack.org/#/c/92479/</a><u></u><u></u></p>
<p class="MsoNormal"><a href="https://review.openstack.org/#/c/92443/" target="_blank">https://review.openstack.org/#/c/92443/</a><u></u><u></u></p>
<p class="MsoNormal"><span style="color:black"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="color:black">Regards,<u></u><u></u></span></p>
<p class="MsoNormal"><span style="color:black">Mitsuhiro Tanino <<a href="http://mitsuhiro.tanino@hds.com" target="_blank">mitsuhiro.tanino@hds.com</a>><u></u><u></u></span></p>
<p class="MsoNormal"><span style="color:black"> <b>HITACHI DATA SYSTEMS<u></u><u></u></b></span></p>
<p class="MsoNormal"><span style="color:black"> c/o Red Hat, 314 Littleton Road, Westford, MA 01886</span><span style="color:black"><u></u><u></u></span></p>
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev