[Openstack-operators] RAID / stripe block storage volumes

Robert Starmer robert at kumul.us
Mon Feb 8 21:30:26 UTC 2016


Ned's model is what I meant by "multiple underlying storage
services".  Most of the systems I've built are LVM only, a few added
Ceph as an alternative/live-migration option, and in one case we used
Gluster due to size.  Note that the environments I have worked with are
generally small (~20 compute nodes), so huge Ceph environments aren't
common.  I am also working on a project where the storage backend is
entirely NFS...
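
For illustration, exposing several backends side by side like this is
usually done with Cinder's multi-backend support plus one volume type per
backend. A rough sketch follows; the backend and type names are made up,
not taken from any of the environments above:

    # Assumes cinder.conf already defines three backends whose
    # volume_backend_name values are lvm-local, ceph-rbd and nfs-general.
    cinder type-create lvm-local
    cinder type-key lvm-local set volume_backend_name=lvm-local
    cinder type-create ceph-rbd
    cinder type-key ceph-rbd set volume_backend_name=ceph-rbd
    cinder type-create nfs-general
    cinder type-key nfs-general set volume_backend_name=nfs-general

    # Tenants then pick the trade-off per volume:
    cinder create --volume-type ceph-rbd --display-name resilient-vol 100
    cinder create --volume-type lvm-local --display-name fast-vol 100

The scheduler then routes each request to whichever backend advertises the
matching volume_backend_name.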

And I think users are more and more educated to assume that nothing is
guaranteed.  There is the realization, at least for a good portion of the
customers I've worked with (and I try to educate the non-believers), that
the way you get the best results from a system like OpenStack is to
consider everything disposable. The one gap I've seen is that plenty of
folks don't deploy Swift, and without some form of object store, there's
still the question of where you place your datasets so that they can be
quickly recovered (and how you keep them up to date if you do have an
object store).  With VMs, there's the idea that you can recover quickly
because the "dataset", i.e. your OS, is already there for you, and in
plenty of small environments, that's only as true as the Glance repository
(guess what's usually backing that when there's no Swift around...).

So I see the issue as a holistic one. How do you show operators/users that
they should consider everything disposable if we only look at the currently
running instance as the "thing"?  Somewhere you still likely need some form
of distributed resilience (and yes, I can see using the distributed
Canonical, CentOS, Red Hat, Fedora, Debian, etc. mirrors as your
distributed image backup, but what about the database content, etc.?).

Robert

On Mon, Feb 8, 2016 at 1:44 PM, Ned Rhudy (BLOOMBERG/ 731 LEX) <
erhudy at bloomberg.net> wrote:

> In our environments, we offer two types of storage. Tenants can either use
> Ceph/RBD and trade speed/latency for reliability and protection against
> physical disk failures, or they can launch instances that are realized as
> LVs on an LVM VG that we create on top of a RAID 0 spanning all but the OS
> disk on the hypervisor. This lets the users elect to go all-in on speed and
> sacrifice reliability where replication/HA is handled at the app level,
> where the data on the instance is sourced from elsewhere, or where they
> just don't care much about the data.
>
> There are some further changes to our approach that we would like to make
> down the road, but in general our users seem to like the current system and
> being able to forgo reliability or speed as their circumstances demand.
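
For concreteness, the local-disk side described above might be assembled
roughly like this; the device names, the VG name, and the Nova settings are
assumptions here rather than Bloomberg's actual layout:

    # Stripe every data disk except the OS disk into a RAID 0...
    mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # ...then build an LVM volume group on top of the array.
    pvcreate /dev/md0
    vgcreate nova-local /dev/md0

    # With Nova's libvirt driver, images_type = lvm and
    # images_volume_group = nova-local would then realize each instance
    # disk as an LV carved out of that striped VG.

The trade-off is exactly the one described: losing any single member disk
takes out the whole VG and everything on it.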
>
> From: joe at topjian.net
> Subject: Re: [Openstack-operators] RAID / stripe block storage volumes
>
> Hi Robert,
>
> Can you elaborate on "multiple underlying storage services"?
>
> The reason I asked the initial question is that historically we've made
> our block storage service resilient to failure. Historically we also made
> our compute environment resilient to failure, but over time we've seen
> users become more educated about coping with compute failure. As a result,
> we've been able to become more lenient with regard to building resilient
> compute environments.
>
> We've been discussing whether it would be possible to translate that same
> idea to block storage. Rather than have a large HA storage cluster (whether
> Ceph, Gluster, NetApp, etc.), is it possible to offer simple single LVM
> volume servers and push the failure handling onto the user?
>
> Of course, this doesn't work for all types of use cases and environments.
> We still have projects which require the cloud to own more of the
> responsibility for failure than the users do.
>
> But for environments where we offer general purpose / best effort compute
> and storage, what methods are available to help the user be resilient to
> block storage failures?
>
> Joe
>
> On Mon, Feb 8, 2016 at 12:09 PM, Robert Starmer <robert at kumul.us> wrote:
>
>> I've always recommended providing multiple underlying storage services to
>> provide this rather than adding the overhead to the VM.  So, not in any of
>> my systems or any I've worked with.
>>
>> R
>>
>>
>>
>> On Fri, Feb 5, 2016 at 5:56 PM, Joe Topjian <joe at topjian.net> wrote:
>>
>>> Hello,
>>>
>>> Does anyone have users RAID'ing or striping multiple block storage
>>> volumes from within an instance?
>>>
>>> If so, what was the experience? Good, bad, possible but with caveats?
>>>
>>> Thanks,
>>> Joe
>>>
>>> _______________________________________________
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
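
For reference, the kind of in-guest striping being asked about here would
typically look something like this; the device names are just whatever the
hypervisor exposes, with /dev/vdb and /dev/vdc assumed below:

    # Attach two (or more) block storage volumes to the instance, then,
    # inside the guest, stripe them with md...
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt/data

    # ...or with LVM, striping the logical volume across both PVs:
    pvcreate /dev/vdb /dev/vdc
    vgcreate data-vg /dev/vdb /dev/vdc
    lvcreate -i 2 -l 100%FREE -n data-lv data-vg

Either way, the guest rather than the cloud owns rebuilding or replacing
the set if one of the backing volumes is lost.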
>>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>

