[openstack-dev] [Cinder]Behavior when one cinder-volume service is down

Duncan Thomas duncan.thomas at gmail.com
Tue Sep 15 15:19:45 UTC 2015


Of the two, Pacemaker is far, far safer from a Cinder PoV - fewer races and
fewer problematic scenarios.
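
For anyone wanting to try the Pacemaker route, the usual shape is
active/passive: exactly one cinder-volume instance runs at a time, and
Pacemaker restarts it on another node if the active one dies. A very rough
sketch with pcs (the IP, resource names and systemd unit name are
illustrative and distro-dependent, not something agreed in this thread):

  # Virtual IP that follows whichever node is currently active
  pcs resource create cinder-vip ocf:heartbeat:IPaddr2 \
      ip=192.0.2.10 cidr_netmask=24

  # cinder-volume managed as a systemd resource (unit name varies by distro)
  pcs resource create cinder-volume systemd:openstack-cinder-volume

  # Keep the service on the node holding the VIP, and start the VIP first
  pcs constraint colocation add cinder-volume with cinder-vip INFINITY
  pcs constraint order cinder-vip then cinder-volume

For the failover node to pick up the same volumes, every node also needs to
report the same host identity to Cinder - see the config sketch after the
quoted message below.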

On 15 September 2015 at 17:59, D'Angelo, Scott <scott.dangelo at hpe.com>
wrote:

> Eduard, Gorka has done a great job of explaining some of the issues with
> Active-Active Cinder-volume services in his blog:
>
> http://gorka.eguileor.com/
>
>
>
> TL;DR: The hacks of using the same hostname or Pacemaker + a VIP are
> dangerous because of races, and are not recommended for enterprise
> deployments.
>
>
>
> *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com]
> *Sent:* Tuesday, September 15, 2015 8:54 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Cinder]Behavior when one cinder-volume
> service is down
>
>
>
> Hi,
>
>
>
> Let me see if I've got this right:
>
> - running 3 (or more) c-vols won't automatically give you failover
>
> - each c-vol is "master" of a certain number of volumes
>
> -- if the c-vol is "down" then those volumes cannot be managed by another
> c-vol
>
>
>
> What I'm trying to achieve is making sure ANY volume is managed
> (manageable) by WHICHEVER c-vol is running (and gets the call first) - a
> sort of A/A setup - so this means I need to look into Pacemaker and
> virtual IPs, or I should first try the "same name" approach.
>
>
>
> Thanks,
>
>
>
> Eduard
>
>
>
> PS. @Michal: Where are the volumes physically stored in the case of your
> driver? <- similar to Ceph, on a distributed object storage service (whose
> disks can be anywhere, even on the same compute host)
>
>
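
On the "same name" option: the idea is that every cinder-volume node
presents the same host identity, so a volume's "host" column in the DB
doesn't pin it to a single machine. A minimal cinder.conf sketch (the host
value and the backend section name are examples, not from this thread):

  [DEFAULT]
  # Same value on every node running cinder-volume
  host = cinder-cluster-1

  # or, per backend, override only that backend's identity
  [lvm-1]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  backend_host = cinder-cluster-1

As Scott says though, with nothing coordinating the services, two
cinder-volume processes can pick up operations on the same volume at the
same time, which is exactly the sort of race Gorka's posts describe.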


-- 
Duncan Thomas