[Openstack-operators] Re: [Openstack-operators] Fwd: Ceph block storage and Openstack Cinder Scheduler issue
Sébastien Han
sebastien.han at enovance.com
Thu Sep 19 12:23:34 UTC 2013
Hi there,
There is no need to run cinder-volume on every compute node, unless those are the only servers that have access to your Ceph cluster.
Traditionally, we put cinder-volume on the cloud controllers.
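For reference, the RBD backend part of cinder.conf usually looks roughly like the following; the pool name, Ceph user and secret UUID below are placeholders for your own values:

volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_user=cinder
rbd_secret_uuid=<libvirt secret UUID for the cinder user>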
I’ve been through a similar issue while running an LVM+iSCSI backend; the only solution was to play with availability zones (AZs) and to change the scheduler.
Try changing your cinder.conf to:
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
capacity_weight_multiplier=1.0
scheduler_default_weighers=CapacityWeigher
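After editing cinder.conf, restart the scheduler so the new driver and weigher are picked up; on Ubuntu with the packaged init scripts this is typically:

service cinder-scheduler restart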
Let me know if it works as expected.
Cheers.
––––
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72 - Mobile: +33 (0)6 52 84 44 70
Mail: sebastien.han at enovance.com - Skype : han.sbastien
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On September 19, 2013 at 12:04:22 PM, Gavin (netmatters at gmail.com) wrote:
Hi there,
I'm hoping that someone can shed some light on an issue we are experiencing with the way Cinder is scheduling Ceph volumes in our environment.
We are running cinder-volume on each of our compute nodes, and they
are all configured to make use of our Ceph cluster.
As far as we can tell the Ceph cluster is working as it should; however, the problem we are having is that each and every Ceph volume gets attached to only one of the compute nodes.
This is not ideal, as it will create a bottleneck on that one host.
From what I have read, the default Cinder scheduler should pick the cinder-volume node with the most available space, but since all compute nodes should report the same figure (the space available in the Ceph volume pool), how is this meant to work?
We have also tried the Cinder chance scheduler, in the hope that Cinder would randomly pick another storage node, but this did not make any difference.
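For reference, the chance scheduler is selected in cinder.conf with something like the following (module path as I understand it for Grizzly):

scheduler_driver=cinder.scheduler.chance.ChanceScheduler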
Has anyone else experienced the same or a similar issue?
Is there perhaps a way that we can round-robin the volume attachments?
Openstack version: Grizzly using Ubuntu LTS and Cloud PPA.
Ceph version: Cuttlefish from Ceph PPA.
Apologies for the cross-post if you get this twice; this email was first sent to the Ceph Users mailing list.
Thanks in advance,
Gavin
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators