[Openstack-operators] [OpenStack-Operators] [Cinder] Request for input on new/advanced features

John Griffith john.griffith at solidfire.com
Sun Jul 27 15:32:24 UTC 2014


On Sat, Jul 26, 2014 at 12:58 PM, Tim Bell <Tim.Bell at cern.ch> wrote:

>
>
> The terms do vary significantly between different vendors, which adds to
> the confusion. I'll try to add what we see from a CERN perspective (which
> is not always your typical customer profile, but we use Ceph and NetApp).
> I assume this would still follow the approach where the storage subsystem
> is in charge and Cinder issues the commands to set things up.
>
>
>
> Replication for us would mean a copy of a volume at a remote location.
> Parameterisation such as copy type (async/sync), maximum delay and target
> pool would be expected.
>
>
>
> Consistency groups for me would be about how to handle replication of
> multiple volumes in terms of sync points, to ensure a consistent set of
> volumes. Enabling a disaster recovery scenario is our most obvious use
> case, i.e. restarting application VMs remotely.
>
>
>
> We have use cases for replication (such as online database logs), and
> fewer for consistency groups, though that is changing as we move more into
> VMs with separate system disk/data disk models, where we'd like the two to
> be in sync if possible.
>
>
>
> What are the specific questions that you have in mind ?
>
>
>
> Tim
>
>
>
> *From:* John Griffith [mailto:john.griffith at solidfire.com]
> *Sent:* 26 July 2014 16:29
> *To:* openstack-operators at lists.openstack.org
> *Subject:* [Openstack-operators] [OpenStack-Operators] [Cinder] Request
> for input on new/advanced features
>
>
>
> 1. Replication
>
> 2. Consistency Groups
>
>
>
> It's easy for vendors to come to me and say "we have lots of customers
> asking for this", but personally I'd love to get feedback directly from
> the actual customers; that's where you all come in.
>
>
>
> So, if any of you out there are interested in these topics from a Cinder
> perspective, I'd love to hear from you.  If there's interest, we can
> either start an ML thread to discuss, or perhaps hold a meeting to catch
> everybody up and hash things out a bit.  I'd like to discuss the current
> proposals and go through what you as Operators feel should be priorities
> (or, better yet, the things you don't care about).  I make no promises on
> the outcome of this little experiment, but I thought it would be
> interesting to try to get user input up front before we release new
> features.
>
>
>
> Let me know your thoughts.  I'll avoid going into a lot of detail in this
> posting until I get a feel for who, if anybody, is interested in helping
> out.
>
>
>
> Thanks,
>
> John
>

Hey Tim,

Thanks for the response and input.  The details of my email were
intentionally a bit vague, but maybe it would be better if I gave some more
background, at least on the replication topic.  First, though, one thing
I'd like to do is get general input from the Operator community on features
in Cinder (i.e., are we working on the right things, and are we completely
missing key features you need or want?).

Replication is an interesting one, and there are a number of ways it could
be implemented in Cinder.  Here are some of the paths I see:

1. Implement new API methods specifically to manage replication.
This means providing API commands for a replicated volume object, including
create, enable, disable, delete and promote.  In essence, this adds a new
managed object to Cinder.
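
To make that a bit more concrete, here's a rough sketch of what the
client-side surface for such a managed object might look like.  To be
clear, this is purely illustrative; none of these method names, endpoints
or signatures exist in Cinder today:

    # Purely hypothetical sketch of approach '1' -- a first-class,
    # user-visible replication object.  Names and endpoints are
    # illustrative only; the actual API is still being designed.
    class ReplicationManager(object):

        def create(self, volume_id, target_backend, mode='async'):
            """POST /replications: start replicating volume_id."""
            raise NotImplementedError

        def enable(self, replication_id):
            """Resume a disabled replication relationship."""
            raise NotImplementedError

        def disable(self, replication_id):
            """Pause replication without tearing it down."""
            raise NotImplementedError

        def promote(self, replication_id):
            """Make the replica the primary (the DR failover step)."""
            raise NotImplementedError

        def delete(self, replication_id):
            """Tear down the replication relationship entirely."""
            raise NotImplementedError

Every one of those verbs has to be threaded through the API, the scheduler
and each driver, which is where the complexity I mention below comes from.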

2. Leave the replication implementation and control completely up to the
driver, and expose it via volume-type and extra-specs info.  The driver
would set up and create the replication target based on the type input.  In
this case the replication target is probably invisible to the end user,
although we could add this sort of info to the status updates from the
driver.
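
As a rough illustration of the workflow with the existing client (the
'replication:*' extra-spec keys here are made up for the example; whatever
keys we settle on may well differ):

    # Sketch of approach '2' using the existing volume-type machinery
    # in python-cinderclient.  Credentials are placeholders, and the
    # 'replication:*' extra-spec keys are illustrative, not final.
    from cinderclient import client

    cinder = client.Client('2', 'USER', 'PASSWORD', 'PROJECT',
                           'http://keystone:5000/v2.0')

    # Operator: define a type whose extra specs tell the driver to
    # replicate volumes of this type.
    vtype = cinder.volume_types.create('replicated')
    vtype.set_keys({'replication:enabled': 'true',
                    'replication:mode': 'async'})

    # End user: just picks the type; the replica itself stays invisible.
    vol = cinder.volumes.create(10, volume_type='replicated')

Nothing new gets added to the API surface at all; the driver keys off the
type, and everything else in Cinder stays as it is.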

The primary use case for both of these approaches is the ability to have
volumes replicated on remote devices (at remote sites or otherwise) for use
in a DR scenario.  There has been some talk about mirrored regional
deployments, which I think could be achieved with either approach, but
that's not the primary focus for this first release; in my opinion that
would be focused work for a follow-up release.

So my questions were, of course: is this a big gap for Cinder to begin
with, and are there details in the use case, and in how this should be
consumed, that we might be missing?

One of the difficult things about approach '1' above is that we'll have a
period of time in which most drivers won't actually support the new
commands.  That's not a terribly big deal, but it does add some confusion.
What I'm more concerned about is the complexity it introduces into every
volume action we do in Cinder.  Again, nothing we can't work with and
modify, but it's worth considering, I think.

Approach '2' is at the other end of the spectrum.  It's very basic and
doesn't offer a ton of added value, but it provides some good first steps
toward enabling replication.  It also eliminates some of the concerns I
have about API methods being in Cinder that aren't going to work with most
of the drivers (granted, this would be addressed over time).

Anyway, I think there's a middle ground between the two approaches we have
in progress right now.  The big question I have for the community, however,
is whether this is something Operators have been feeling the lack of over
time: "oh, if we just had replication", or "if we just had feature xyz".
I'm mostly looking for input on missing features, the use cases for those
features, and how the community would like to see the interfaces for them
implemented.

There's a lot of discussion about developers not always understanding the
use cases or usage models of end users, so this seemed like an interesting
opportunity to try to build something in cooperation with the Operators
group.  I don't know if this will work or not, but I thought it would be
worth a shot.

Thanks,
John