[openstack-dev] [Cinder] Support LVM on a volume attached to multiple nodes

Mitsuhiro Tanino mitsuhiro.tanino at hds.com
Thu Apr 10 18:44:26 UTC 2014


Hi John-san,

Thank you for reviewing my summit suggestion; I understand your point.
Could you take a look at my comments below?

-------------
http://summit.openstack.org/cfp/details/166
There's a lot here, based on conversations in IRC and the excellent wiki you put together
I'm still trying to get a clear picture here.
Let's start by taking transport protocol out of the mix; I believe what you're proposing
is a shared, multi-attach LV (over iscsi, fibre or whatever). That seems to fall under
the multi-attach feature.
-------------

So both my BP and the "multi-attach-volume" BP need a multi-attach feature, but
in my understanding, their target layers and goals are different.

Let me explain the differences in a bit more detail.
* "multi-attach-volume": Implement the volume  multi attach feature.
     Main target layers is Nova layer. (and a little cinder layer implement?).

* My BP: Implement a generic LVM volume driver that uses LVM on
   a storage volume (over iSCSI, Fibre Channel or whatever) attached to multiple compute nodes.
   The target layers are both Nova and Cinder.

  The difference is that in the former case, the Cinder volume needs to be created
  by another Cinder storage driver with a supported storage backend, and after that the
  feature can attach the volume to multiple instances (and also hosts).

  On the other hand, my BP targets a generic LVM driver and does not depend on
  specific vendor storage. The driver just provides features to create/delete/snapshot
  volumes, etc. from a volume group on multi-attached storage.
  The point is that the user needs to create a storage volume with their own storage management tool,
  attach the created volume to multiple compute nodes, create a VG on the volume and
  configure it in cinder.conf as the "volume_group".
  So a multi-attach feature is not a target feature of my BP to implement. The driver just
  requires an environment where LVM runs on a storage volume attached to multiple compute nodes.

  I think my proposed LVM driver is orthogonal to "multi-attach-volume",
  and we could use "multi-attach-volume" and my BP in combination.

Here is my more detailed understanding of both BPs.
[A] My BP
  https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage
  The goal of my BP is to support a volume group on a storage volume which is attached to
  multiple compute nodes.
  The proposed driver just provides create/delete/snapshot, etc. features from the
  volume group, the same as the existing LVMiSCSI driver.
  The difference from the LVMiSCSI driver is that the prepared volume group must sit on a
  storage volume attached to multiple compute nodes, so that all of the compute nodes can
  see it simultaneously without using a software iSCSI target.

  The multi-attached storage and the volume group on it need to be prepared by the user
  before configuring Cinder, without any Cinder features, because my driver is a generic
  LVM driver and it targets storage environments which do not have a Cinder storage driver.

  [Preparation of volume_group]  (a rough sketch in Python follows these steps)
   (a) A user creates a storage volume using a storage management tool (not a Cinder feature).
   (b) Export the created volume to multiple compute nodes (using an FC host group or iSCSI target feature).
   (c) Create a volume group on the exported storage volume.
   (d) Configure the volume group as the Cinder "volume_group".
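
  To make this more concrete, here is a very rough Python sketch of what such a driver
  could look like. This is only an illustration under my assumptions, not blueprint code:
  the class name LVMSharedVolumeDriver is hypothetical, and it assumes the existing
  cinder.volume.drivers.lvm.LVMVolumeDriver base class plus a volume group (e.g. "shared_vg")
  that is already visible on every compute node after steps (a)-(d) above.

# Illustrative sketch only -- not the blueprint code.
# Assumes the pre-created, multi-attached VG from steps (a)-(d) above.
from cinder.volume.drivers import lvm


class LVMSharedVolumeDriver(lvm.LVMVolumeDriver):   # hypothetical class name
    """LVM driver for a VG on storage attached to all compute nodes.

    create/delete/snapshot are inherited unchanged from the generic LVM
    driver; only the attach path differs, because each LV is already
    visible as a local block device on every compute node, so no
    software iSCSI target is needed.
    """

    def ensure_export(self, context, volume):
        pass    # nothing to export; the storage is already multi-attached

    def create_export(self, context, volume):
        pass

    def remove_export(self, context, volume):
        pass

    def initialize_connection(self, volume, connector):
        # Hand Nova the local device path of the LV instead of iSCSI
        # target details ('local' is handled by Nova's libvirt driver).
        return {
            'driver_volume_type': 'local',
            'data': {
                'device_path': '/dev/%s/%s' % (
                    self.configuration.volume_group, volume['name']),
            },
        }

    def terminate_connection(self, volume, connector, **kwargs):
        pass

  With something like this, cinder.conf would simply reuse the existing "volume_group"
  option to point at the pre-created VG (e.g. volume_group = shared_vg) and set
  "volume_driver" to the new class.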

[B] BP of multi-attach-volume
  https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume
  The goal of this BP, in my understanding, is to provide a Cinder volume multi-attach
  feature, i.e. attaching a single Cinder volume to multiple instances.

Steps to multi-attach (a hypothetical sketch follows this list)
  (1) Create a volume using a Cinder volume driver with a supported storage backend.
      A driver and supported storage are required.
  (2) Export the created volume to multiple compute nodes via FC, iSCSI, etc.
  (3) Attach the created volume to multiple instances in RO/RW mode.
  (4) As a result, multiple instances can see a single Cinder volume
      simultaneously.
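
  As a purely hypothetical illustration of the end state (the BP is not implemented yet,
  so today the second attach would be rejected), the flow from a client's point of view
  might look like the following. The credentials, volume size and instance UUIDs are made
  up, and the clients shown are python-cinderclient and python-novaclient.

# Hypothetical end-to-end flow for the multi-attach-volume BP.
# Not working code today: without the BP, Cinder refuses the second attach.
from cinderclient.v1 import client as cinder_client
from novaclient.v1_1 import client as nova_client

cinder = cinder_client.Client('user', 'password', 'tenant',
                              'http://keystone:5000/v2.0')
nova = nova_client.Client('user', 'password', 'tenant',
                          'http://keystone:5000/v2.0')

# (1) Create a volume on a backend whose driver supports multi-attach.
vol = cinder.volumes.create(10, display_name='shared-data')

# (2) The export to each compute node happens inside the attach call.
# (3)(4) Attach the same volume to two different instances. Today the
# volume goes to 'in-use' after the first attach and the second call is
# rejected; the BP would allow both (with RO/RW decided per attach).
nova.volumes.create_server_volume('instance-uuid-1', vol.id, '/dev/vdb')
nova.volumes.create_server_volume('instance-uuid-2', vol.id, '/dev/vdb')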


-------------
I'm also trying to understand your statements about improved performance. Could you maybe
add some more detail or explanation around these things, or maybe grab me on IRC when convenient?
-------------
  This is a comparison between the LVMiSCSI driver and my proposed driver.
  The current LVMiSCSI driver uses the Cinder node as "virtual storage" through a software iSCSI target.
  This is useful and does not depend on the storage arrangement, but we have to access the volume group
  over the IP network even if we have an FC environment. If we have a multi-attached FC volume, it is
  better to access the volume directly via FC instead of through an iSCSI target.
  As a result, we expect better I/O performance and lower latency when compute nodes access a
  multi-attached FC volume directly.  This is the point I mentioned.

Regards,
Mitsuhiro Tanino <mitsuhiro.tanino at hds.com>
     HITACHI DATA SYSTEMS
     c/o Red Hat, 314 Littleton Road, Westford, MA 01886
