[Openstack] [Cinder] Questions on implementing the Replication V2 spec

Price, Loren Michael.Price at netapp.com
Thu Sep 24 21:21:57 UTC 2015


Hi John,

Okay, it sounds like we’ll be able to implement the replication V2 spec. I believe the failover API was the only one we were seeing a problem with. It also sounds like there might be some areas for improvement around documentation, etc. Let me know if there’s anything I/we can do to help on that.

Thanks,

Michael

From: John Griffith [mailto:john.griffith at solidfire.com]
Sent: Thursday, September 24, 2015 2:26 PM
To: Price, Loren
Cc: openstack at lists.openstack.org
Subject: Re: [Openstack] [Cinder] Questions on implementing the Replication V2 spec



On Thu, Sep 24, 2015 at 11:48 AM, Price, Loren <Michael.Price at netapp.com<mailto:Michael.Price at netapp.com>> wrote:
Hey,

We’re looking into implementing the VolumeReplication_V2<https://github.com/openstack/cinder-specs/blob/master/specs/liberty/replication_v2.rst> spec for our NetApp E-Series volume driver. Looking at the specification, I can foresee a problem with implementing the new API call “failover_replicated_volume(volume)” for an unmanaged replication target. I believe we can provide it for a managed target, if I’m understanding correctly that it merely requires updating the host id for the volume. Based on that, I have two questions:


1.      Is it acceptable, in implementing this spec, to only provide this API for managed targets (and either throw an exception or essentially make a no-op) for an unmanaged replication target?

2.      In general, if a storage backend is incapable of performing a certain operation, what is the correct way to handle it? Can the driver implement the spec at all? Should it throw a NotImplementedError? No-op?

Thanks,

Michael Price

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack at lists.openstack.org<mailto:openstack at lists.openstack.org>
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Oops, did I not respond to the list on that last response?  Just in case, here it is again:


1.      Is it acceptable, in implementing this spec, to only provide this API for managed targets (and either throw an exception or essentially make a no-op) for an unmanaged replication target?
Yes, by design it's set up such that it's left up to configuration.  In other words, the idea is that we have fairly loose definitions around the API calls themselves to allow for differing implementations.
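To make that concrete, here's a rough sketch of what a driver could do: support failover for managed targets only, and refuse it otherwise. All names here (the class, the 'managed' flag, the target dict layout) are invented for illustration and are not the actual Cinder driver interface.

```python
class ReplicationError(Exception):
    """Raised when a replication operation cannot be performed."""


class ExampleReplicationDriver:
    """Illustrative driver: failover only works against managed targets."""

    def __init__(self, replication_targets):
        # Each target is a dict; 'managed' is an assumed flag for this sketch.
        self.replication_targets = replication_targets

    def failover_replicated_volume(self, volume):
        target = self.replication_targets.get(volume['replication_target'])
        if target is None:
            raise ReplicationError("Unknown replication target")
        if not target.get('managed', False):
            # One option from this thread: refuse the call for unmanaged targets.
            raise NotImplementedError(
                "Failover is only supported for managed replication targets")
        # Managed case: report the new host so the volume's host id can be updated.
        return {'host': target['host']}


driver = ExampleReplicationDriver({
    'backend-b': {'managed': True, 'host': 'host@backend-b'},
    'backend-c': {'managed': False, 'host': 'host@backend-c'},
})
driver.failover_replicated_volume({'replication_target': 'backend-b'})
# returns {'host': 'host@backend-b'}; 'backend-c' raises NotImplementedError
```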

2.      In general, if a storage backend is incapable of performing a certain operation, what is the correct way to handle it? Can the driver implement the spec at all? Should it throw a NotImplementedError? No-op?
Depends on who you ask :)  IMO we need to do a better job of this; one option would be documenting in the deployment guides how to enable/disable API calls in certain deployments, so that unsupported calls are just flat out not available.  My true belief is that we shouldn't be implementing features that can't run with every/any backend device in the first place, but that's my usual rant and somewhat off topic here :)
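The enable/disable idea could look something like the following sketch. The option name and dispatcher are invented for this example (Cinder has no such config option); the point is just that a deployment-level allow-list surfaces a clear error instead of a silent no-op.

```python
# Hypothetical allow-list of optional replication calls, as if parsed from
# a deployment config file. 'enabled_replication_ops' is an invented name.
ENABLED_OPS = {'create_replica', 'delete_replica'}


def call_replication_op(driver, op_name, *args):
    """Dispatch an optional API call only if the deployment enables it."""
    if op_name not in ENABLED_OPS:
        # Fail loudly and early rather than no-op'ing silently.
        raise NotImplementedError(
            "%s is not available in this deployment" % op_name)
    return getattr(driver, op_name)(*args)
```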

Note that a lot of the logic for replication in V2 was moved into the volume-type and the conf file precisely to address some of the issues you mention above.  The idea is that if the capabilities of the backend don't match the replication specs in the type, then the command fails with "no valid host".  The one thing I don't like about this is how we relay that info to the end user (or more accurately, the fact that we don't).  We just put the volume in error state, and the only info regarding why is in the logs, which the end user doesn't have.  This is where something like a better, clearer policy file would help, as well as providing a capabilities call in the API.
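The matching idea described here can be sketched as a simple filter: compare replication-related extra specs on the volume type against the backend's reported capabilities, and reject the backend on any mismatch. The spec keys below are illustrative, not the exact keys the scheduler uses.

```python
def backend_matches(extra_specs, capabilities):
    """Return True if the backend's capabilities satisfy every
    replication-related extra spec on the volume type (sketch only)."""
    for key, wanted in extra_specs.items():
        if not key.startswith('replication'):
            continue  # only replication specs are considered in this sketch
        if capabilities.get(key) != wanted:
            return False  # mismatch -> this backend is not a valid host
    return True


volume_type_specs = {'replication_enabled': '<is> True'}
backend_caps = {'replication_enabled': '<is> True'}
backend_matches(volume_type_specs, backend_caps)  # returns True
backend_matches(volume_type_specs, {})            # returns False ("no valid host")
```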

By the way, I'm glad you asked these questions here.  This is part of the reason why I was so strongly opposed to merging an implementation of the V2 replication in Liberty.  I think it's important to have more than one or two vendors looking at this and working out the details, so we release something that is stable and usable.  My philosophy is that now, for M, we have a foundation in the core code that will likely evolve as drivers begin implementing the feature.


