[Openstack-operators] Running cinder with two Ceph backends (not possible)?

gustavo panizzo <gfa> gfa at zumbi.com.ar
Fri Oct 4 06:49:58 UTC 2013



"Don Talton (dotalton)" <dotalton at cisco.com> wrote:
>I think an additional question is why run two separate Ceph clusters

you could run two clusters with different disk types, for example:

- a SATA+SSD based cluster
- a regular SATA based cluster

then you put your prod DB on the first one and do the backups on the second.
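
for example, with the Ceph backup driver that landed in Havana you
could point cinder-backup at the slow cluster while the volumes stay
on the fast one. a rough sketch only; the config path, user and pool
names here are made up:

# cinder.conf (sketch): back volumes up to the second (slow) cluster
backup_driver=cinder.backup.drivers.ceph
backup_ceph_conf=/etc/ceph/slow-cluster.conf
backup_ceph_user=cinder-backup
backup_ceph_pool=backups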
 
>(Ceph is already HA), and give yourself that headache? You can
>accomplish a single distributed multi-site cluster with the right
>CRUSH map setup.
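
a single cluster can indeed be split by disk type with CRUSH rules
too. a rough sketch of a decompiled crushmap fragment (bucket, host
and rule names are made up; the host buckets must be defined
elsewhere, and the sata side would look the same):

# one root per disk type
root ssd {
        id -10
        alg straw
        hash 0
        item ssd-host1 weight 1.000
}

# a rule that only places data under the ssd root
rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}

# then point a pool at the rule:
# ceph osd pool set volumes crush_ruleset 1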
>
>From: Darren Birkett [mailto:darren.birkett at gmail.com]
>Sent: Thursday, October 03, 2013 6:48 AM
>To: Robert van Leeuwen
>Cc: openstack-operators at lists.openstack.org
>Subject: Re: [Openstack-operators] Running cinder with two Ceph
>backends (not possible)?
>
>On 2 October 2013 09:09, Robert van Leeuwen
><Robert.vanLeeuwen at spilgames.com> wrote:
>Hi,
>
>We would like to run 2 separate Ceph clusters for our cloud (for HA
>and performance reasons).
>Is it possible to set this up?
>I'm trying this in our dev environment (currently just one Ceph
>cluster, but with 2 volume backends).
>
>With just one backend it works fine, but with 2 backends I get the
>following error:
>ERROR [cinder.scheduler.filters.capacity_filter] Free capacity not set:
>volume node info collection broken
>ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No
>valid host was found.
>
>
>Cinder config:
>enabled_backends=ceph1,ceph2
>[ceph1]
>volume_driver=cinder.volume.drivers.rbd.RBDDriver
>rbd_pool=volumes
>rbd_user=volumes
>rbd_secret_uuid=xxxxxxxxxxxxxxxx
>
>[ceph2]
>volume_driver=cinder.volume.drivers.rbd.RBDDriver
>rbd_pool=volumes2
>rbd_user=volumes2
>rbd_secret_uuid=yyyyyyyyyyyyyyyy
>
>
>Running it all on SL6.4.
>OpenStack Grizzly (RDO)
>Ceph Dumpling (ceph repo)
>
>Thx,
>Robert van Leeuwen
>
>Hi Robert,
>
>I believe there are 2 separate questions here:
>
>1. Is it possible to use multiple *different* RBD backends (i.e.
>different RBD pools/users) within the same cinder-volume instance?
>
>This capability was broken in Grizzly, but fixed in Havana I believe
>(https://review.openstack.org/#/c/28208/). There was some discussion,
>I seem to recall, about working around the breakage by running
>multiple cinder-volumes, each with a different config, but I don't
>know whether this would work properly with scheduling. Maybe someone
>has tried/done this?
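
on the scheduling point: with multi-backend the scheduler picks a
backend by matching volume types, so each backend section would need a
volume_backend_name plus a matching type. a rough sketch; the backend
and type names here are made up:

# cinder.conf additions (sketch)
[ceph1]
volume_backend_name=CEPH_FAST
[ceph2]
volume_backend_name=CEPH_SLOW

# map volume types to the backends, then create against a type:
cinder type-create ceph-fast
cinder type-key ceph-fast set volume_backend_name=CEPH_FAST
cinder type-create ceph-slow
cinder type-key ceph-slow set volume_backend_name=CEPH_SLOW
cinder create --volume-type ceph-fast --display-name testvol 1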
>
>2. Is it possible to use 2 entirely different Ceph clusters?
>Again, I think by default no, since you specify the Ceph config file
>in cinder.conf (or let it use the default of /etc/ceph/ceph.conf). If
>you're able to get multiple cinder-volumes going, each with a separate
>config file, and get around the scheduling issues, then I suppose it's
>possible.
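
fwiw, the Havana rbd driver rewrite also added an rbd_ceph_conf
option, so each backend section can point at its own cluster config
without a second cinder-volume. a rough sketch (the paths are made up,
and each cluster's keyring must be readable via its conf file):

[ceph1]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf=/etc/ceph/cluster1.conf
rbd_pool=volumes
rbd_user=volumes
rbd_secret_uuid=xxxxxxxxxxxxxxxx

[ceph2]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf=/etc/ceph/cluster2.conf
rbd_pool=volumes
rbd_user=volumes
rbd_secret_uuid=yyyyyyyyyyyyyyyy

on Grizzly the fallback would be two cinder-volume services started
with different --config-file arguments, with the scheduling caveat
above still applying.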
>
>Thanks
>Darren
>
>

--
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333