[openstack-dev] [Cinder]Behavior when one cinder-volume service is down

John Griffith john.griffith8 at gmail.com
Tue Sep 15 16:24:24 UTC 2015


On Tue, Sep 15, 2015 at 8:53 AM, Eduard Matei <
eduard.matei at cloudfounders.com> wrote:

> Hi,
>
> Let me see if I got this:
> - running 3 (multiple) c-vols won't automatically give you failover
>
Correct.


> - each c-vol is "master" of a certain number of volumes
>
Yes.


> -- if the c-vol is "down" then those volumes cannot be managed by another
> c-vol
>
By default, no, but you can configure an HA setup of multiple c-vol
services.  There are a number of folks doing this in production, and there's
probably better documentation on how to achieve it, but this gives a
decent enough start:
http://docs.openstack.org/high-availability-guide/content/s-cinder-api.html
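As a rough sketch of what the "same name" side of such a setup can look like (the hostname and backend names below are made-up examples, not from this thread): each node running cinder-volume can be given the same `host` value in cinder.conf, so volumes are bound to the shared service name rather than to one physical host:

```ini
[DEFAULT]
# Give every cinder-volume node the same service name so that
# volumes belong to the shared name, not to one physical host.
# "cinder-cluster-1" is an arbitrary example value.
host = cinder-cluster-1

enabled_backends = lvm-1

[lvm-1]
# Keep the backend section name identical on all nodes as well.
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
volume_backend_name = LVM_iSCSI
```

With this in place, whichever node is active can pick up requests for volumes created under that shared name.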


>
> What I'm trying to achieve is making sure ANY volume is managed
> (manageable) by WHICHEVER c-vol is running (and gets the call first) - sort
> of A/A - so this means I need to look into Pacemaker and virtual IPs, or I
> should first try the "same name" approach.
>
Yes, I gathered... and to do that you need to do something like name the
backends the same and use a VIP in front of them.

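For the VIP side of this, one illustrative sketch of a Pacemaker configuration (resource names, the IP address, and the systemd unit name are assumptions, not from this thread): a virtual IP plus a colocated cinder-volume resource, so the address and the service fail over together:

```shell
# Illustrative pcs commands -- adjust the IP, netmask, and resource
# names for your environment before using anything like this.
pcs resource create cinder-vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 op monitor interval=30s

# Run cinder-volume under Pacemaker control (the systemd unit name
# varies by distro; "openstack-cinder-volume" is one common name).
pcs resource create cinder-volume systemd:openstack-cinder-volume

# Keep the service on the same node as the VIP, and start the
# VIP before the service.
pcs constraint colocation add cinder-volume with cinder-vip
pcs constraint order cinder-vip then cinder-volume
```

Combined with identical backend naming in cinder.conf, this gives the active/passive failover behavior discussed above.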

>
> Thanks,
>
> Eduard
>
> PS. @Michal: Where are the volumes physically stored in the case of your
> driver? <- Similar to Ceph: on a distributed object storage service (whose
> disks can be anywhere, even on the same compute host)
>