[openstack-dev] [Cinder] Support LVM on a shared LU
Avishay Traeger
avishay at stratoscale.com
Sun May 25 07:05:39 UTC 2014
Hello Mitsuhiro,
I'm sorry, but I remain unconvinced. Is there customer demand for this
feature?
If you'd like, feel free to add this topic to a Cinder weekly meeting
agenda, and join the meeting so that we can have an interactive discussion.
https://wiki.openstack.org/wiki/CinderMeetings
Thanks,
Avishay
On Sat, May 24, 2014 at 12:31 AM, Mitsuhiro Tanino <mitsuhiro.tanino at hds.com> wrote:
> Hi Avishay-san,
>
> Thank you for your review of and comments on my proposal. I have
> commented in-line.
>
> >>So the way I see it, the value here is a generic driver that can work
> with any storage. The downsides:
>
> A generic driver that works with any storage is one of the benefits,
> but the main benefit of the proposed driver is as follows:
>
> - Reduce the workload on hardware-based storage by offloading volume
> operations to software.
>
> Conventionally, operations on enterprise storage, such as volume
> creation, deletion, and snapshotting, are permitted only to system
> administrators, who perform them after careful examination. In an
> OpenStack cloud, every user has permission to execute these storage
> operations via Cinder. As a result, the workload on the storage keeps
> increasing and is difficult to manage.
>
> If we have two drivers for the same storage, we can use either as the
> situation demands, for example:
>
> - For "Standard" type storage, use the proposed software-based LVM
> Cinder driver.
> - For "High performance" type storage, use the hardware-based Cinder
> driver.
>
> As a result, we can offload the workload for standard-type storage from
> the physical storage to the Cinder host. For example, both could be
> exposed as Cinder backends, as in the sketch below.
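>
> A rough multi-backend sketch for cinder.conf (backend names, the
> shared-LU driver class, and the volume group name are hypothetical):
>
>   [DEFAULT]
>   enabled_backends = lvm_shared, high_perf
>
>   [lvm_shared]
>   # Proposed software-based LVM driver on the shared LU
>   # (driver class name is illustrative, not an existing class)
>   volume_driver = cinder.volume.drivers.lvm.LVMSharedLUDriver
>   volume_group = shared-vg
>   volume_backend_name = standard
>
>   [high_perf]
>   # Existing hardware-based driver for the same array
>   volume_driver = <vendor-specific driver class>
>   volume_backend_name = high_performance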
>
> >>1. The admin has to manually provision a very big volume and attach it
> to the Nova and Cinder hosts.
>
> >> Every time a host is rebooted,
>
>
> I think current FC-based Cinder drivers use a SCSI scan to find a newly
> created LU:
>
> # echo "- - -" > /sys/class/scsi_host/host#/scan
>
> The admin can find the additional LU this way, so a host reboot is not
> required.
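>
> To cover all SCSI hosts, a rough sketch like this could be run on each
> Nova/Cinder node (host numbering varies per system):
>
>   for host in /sys/class/scsi_host/host*; do
>       echo "- - -" > "$host/scan"   # rescan this HBA for new LUs
>   done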
>
> >> or introduced, the admin must do manual work. This is one of the things
> OpenStack should be trying
>
> >> to avoid. This can't be automated without a driver, which is what
> you're trying to avoid.
>
> Yes, some manual admin work is required and can't be automated. I
> would like to know whether these operations are an acceptable cost for
> the benefits of my proposed driver.
>
> >>2. You lose volume performance by adding another layer in the stack.
>
> I think this is case by case. When users use a Cinder volume for a
> database, they prefer a raw volume, and the proposed driver can't
> provide a raw Cinder volume. In that case, I recommend the "High
> performance" type storage.
>
> LVM is a default feature in many Linux distributions. LVM is also used
> in many enterprise systems, and I think there is no critical
> performance loss.
>
> >>3. You lose performance with snapshots - appliances will almost
> certainly have more efficient snapshots
>
> >> than LVM over network (consider that for every COW operation, you are
> reading synchronously over the network).
>
> >> (Basically, you turned your fully-capable storage appliance into a dumb
> JBOD)
>
> I agree that the storage has an efficient COW snapshot feature, so it
> can create a new boot volume from Glance quickly. In that case, I
> recommend the "High performance" type storage.
>
> LVM can't create nested snapshots on a shared VG at the moment, so we
> can't assign writable LVM snapshots to instances.
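>
> For reference, a volume snapshot in this model would be a plain LVM
> COW snapshot on the shared VG, along these lines (VG and LV names are
> illustrative):
>
> # lvcreate --snapshot --size 1G --name snap-vol01 shared-vg/vol01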
>
> Does this answer your comment?
>
> >> In short, I think the cons outweigh the pros. Are there people
> deploying OpenStack who would deploy
>
> >> their storage like this?
>
> Please consider the main benefit above.
>
> Regards,
>
> Mitsuhiro Tanino <mitsuhiro.tanino at hds.com>
>
> HITACHI DATA SYSTEMS
>
> c/o Red Hat, 314 Littleton Road, Westford, MA 01886
>
> From: Avishay Traeger [mailto:avishay at stratoscale.com]
> Sent: Wednesday, May 21, 2014 4:36 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Tomoki Sekiyama
> Subject: Re: [openstack-dev] [Cinder] Support LVM on a shared LU
>
> So the way I see it, the value here is a generic driver that can work with
> any storage. The downsides:
>
> 1. The admin has to manually provision a very big volume and attach it to
> the Nova and Cinder hosts. Every time a host is rebooted, or introduced,
> the admin must do manual work. This is one of the things OpenStack should
> be trying to avoid. This can't be automated without a driver, which is what
> you're trying to avoid.
>
> 2. You lose volume performance by adding another layer in the stack.
>
> 3. You lose performance with snapshots - appliances will almost certainly
> have more efficient snapshots than LVM over network (consider that for
> every COW operation, you are reading synchronously over the network).
>
> (Basically, you turned your fully-capable storage appliance into a dumb
> JBOD)
>
> In short, I think the cons outweigh the pros. Are there people deploying
> OpenStack who would deploy their storage like this?
>
> Thanks,
> Avishay
>
> On Tue, May 20, 2014 at 6:31 PM, Mitsuhiro Tanino <mitsuhiro.tanino at hds.com> wrote:
>
> Hello All,
>
> I'm proposing an LVM driver feature to support LVM on a shared LU.
>
> The proposed LVM volume driver provides these benefits:
> - Reduce the workload on hardware-based storage by offloading volume
> operations to software.
> - Provide quicker volume and snapshot creation without loading the
> storage.
> - Enable Cinder to use any kind of shared storage volume without a
> vendor-specific Cinder driver (one-time preparation sketched below).
> - Better I/O performance using direct volume access via Fibre Channel.
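>
> As a rough sketch, the one-time preparation described in section 1-2
> below would look like this on the hosts (the multipath device name is
> hypothetical):
>
> # pvcreate /dev/mapper/mpatha
> # vgcreate shared-vg /dev/mapper/mpatha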
>
> The attached PDF explains the following:
>
> 1. Details of the proposed LVM volume driver
> 1-1. Big picture
> 1-2. Administrator preparation
> 1-3. Work flow of volume creation and attachment
> 2. Target of the proposed LVM volume driver
> 3. Comparison of the proposed LVM volume driver
>
> Could you review the attachment?
>
> Any comments, questions, or additional ideas would be appreciated.
>
> There are also blueprints, a wiki page, and patches related to the slides:
>
> https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage
> https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage
> https://wiki.openstack.org/wiki/Cinder/NewLVMbasedDriverForSharedStorageInCinder
> https://review.openstack.org/#/c/92479/
> https://review.openstack.org/#/c/92443/
>
> Regards,
>
> Mitsuhiro Tanino <mitsuhiro.tanino at hds.com>
>
> HITACHI DATA SYSTEMS
>
> c/o Red Hat, 314 Littleton Road, Westford, MA 01886
>