[Openstack-operators] Running cinder with two Ceph backends (not possible)?
Darren Birkett
darren.birkett at gmail.com
Thu Oct 3 13:47:36 UTC 2013
On 2 October 2013 09:09, Robert van Leeuwen <Robert.vanLeeuwen at spilgames.com> wrote:
> Hi,
>
> We would like to run 2 separate Ceph clusters for our cloud (for HA and
> performance reasons).
> Is it possible to set this up?
> I'm trying this in our dev environment (currently just one Ceph cluster,
> but with 2 pools).
>
> With just one backend it works fine, but with 2 backends I get the
> following error:
> ERROR [cinder.scheduler.filters.capacity_filter] Free capacity not set:
> volume node info collection broken
> ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No
> valid host was found.
>
>
> Cinder config:
> enabled_backends=ceph1,ceph2
> [ceph1]
> volume_driver=cinder.volume.drivers.rbd.RBDDriver
> rbd_pool=volumes
> rbd_user=volumes
> rbd_secret_uuid=xxxxxxxxxxxxxxxx
>
> [ceph2]
> volume_driver=cinder.volume.drivers.rbd.RBDDriver
> rbd_pool=volumes2
> rbd_user=volumes2
> rbd_secret_uuid=yyyyyyyyyyyyyyyy
>
>
> Running it all on SL6.4.
> OpenStack Grizzly (RDO)
> Ceph Dumpling (ceph repo)
>
> Thx,
> Robert van Leeuwen
>
Hi Robert,
I believe there are 2 separate questions here:
1. Is it possible to use multiple *different* RBD backends (i.e. different
RBD pools/users) within the same cinder-volume instance?
This capability was broken in Grizzly but, I believe, fixed in Havana
(https://review.openstack.org/#/c/28208/). There was some discussion, I
seem to recall, about working around the breakage by running multiple
cinder-volume services, each with a different config, but I don't know
whether that plays nicely with the scheduler. Maybe someone has tried this?
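For what it's worth, once that fix is in place the usual multi-backend setup
would look roughly like the below. This is an untested sketch based on your
config; the volume_backend_name values and the volume type names are just
examples I've picked:

[DEFAULT]
enabled_backends=ceph1,ceph2

[ceph1]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
volume_backend_name=CEPH1
rbd_pool=volumes
rbd_user=volumes
rbd_secret_uuid=xxxxxxxxxxxxxxxx

[ceph2]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
volume_backend_name=CEPH2
rbd_pool=volumes2
rbd_user=volumes2
rbd_secret_uuid=yyyyyyyyyyyyyyyy

You then tell the scheduler which backend a volume type maps to, and pick the
type at create time:

cinder type-create ceph1
cinder type-key ceph1 set volume_backend_name=CEPH1
cinder type-create ceph2
cinder type-key ceph2 set volume_backend_name=CEPH2
cinder create --volume-type ceph1 --display-name test1 1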
2. Is it possible to use 2 entirely different Ceph clusters?
Again I think not by default, since you specify a single Ceph config file in
cinder.conf (or let it fall back to the default of /etc/ceph/ceph.conf). If
you're able to get multiple cinder-volume services going, each with a
separate config file, and get around the scheduling issues, then possibly, yes.
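If you do try the multiple cinder-volume route, I'd imagine it looking
something like the below. Again, only a sketch: I haven't tested it, the file
paths are made up, and it assumes the RBD driver has a per-backend
rbd_ceph_conf option (I believe the reworked Havana driver does, I'm not sure
about Grizzly's):

# /etc/cinder/cinder-ceph1.conf - first cinder-volume, first cluster
[DEFAULT]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
volume_backend_name=CEPH1
rbd_ceph_conf=/etc/ceph/ceph1.conf
rbd_pool=volumes
rbd_user=volumes
rbd_secret_uuid=xxxxxxxxxxxxxxxx

# /etc/cinder/cinder-ceph2.conf - second cinder-volume, second cluster
[DEFAULT]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
volume_backend_name=CEPH2
rbd_ceph_conf=/etc/ceph/ceph2.conf
rbd_pool=volumes2
rbd_user=volumes2
rbd_secret_uuid=yyyyyyyyyyyyyyyy

Each service would be started with the shared config plus its own override:

cinder-volume --config-file /etc/cinder/cinder.conf --config-file /etc/cinder/cinder-ceph1.conf
cinder-volume --config-file /etc/cinder/cinder.conf --config-file /etc/cinder/cinder-ceph2.conf

You'd still want the volume_backend_name / volume type mapping from the first
sketch so the scheduler can tell the two apart, and whether capacity gets
reported correctly in that arrangement is exactly the open question.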
Thanks
Darren