[Openstack-operators] Fwd: Ceph block storage and OpenStack Cinder scheduler issue
Gavin
netmatters at gmail.com
Thu Sep 19 13:37:41 UTC 2013
Hi Sébastien,
Thank you kindly for your response.
My question came about due to what is displayed within the Horizon
Admin interface, under Volumes the interface showed all of my volumes
as if they were all 'attached' to one compute node.
And since I was still thinking about the LVM/iSCSI setup, my
assumption was that the compute node was hosting LUNs for all of the
instances.
My assumption was incorrect, and it was pointed out to me that each
compute node connects to Ceph via Libvirt, so this makes me feel a
little better about things.
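For anyone else checking the same thing: the attachment path can be
confirmed on a compute node by dumping the instance's libvirt XML and
looking for an rbd network disk rather than a local iSCSI device. The
domain name below is just an example:

virsh dumpxml instance-00000001 | grep -A2 "type='network'"

An rbd-backed volume shows up there as a <disk type='network'> element
with a <source protocol='rbd' name='volumes/volume-...'/> entry.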
It still does not explain why Horizon shows only one compute node in
the volume listing, but that may just be a legacy thing.
Regards,
Gavin
On 19 September 2013 14:23, Sébastien Han <sebastien.han at enovance.com> wrote:
> Hi there,
>
> There is no need to run cinder-volume on every compute node, unless they
> are the only servers that can access your Ceph cluster.
> Traditionally, we put cinder-volume on the cloud controllers.
>
> I’ve been through a similar issue while running an LVM+iSCSI backend; the only
> solution was to play with AZs and to change the scheduler.
>
> Try to change your cinder.conf with:
>
> scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
> capacity_weight_multiplier=1.0
> scheduler_default_weighers=CapacityWeigher
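>
> Those options go in the [DEFAULT] section of cinder.conf, and the
> scheduler service needs a restart afterwards to pick them up, e.g. on
> Ubuntu something like:
>
> service cinder-scheduler restart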
>
> Let me know if it works as expected.
>
> Cheers.
>
> ––––
> Sébastien Han
> Cloud Engineer
>
> "Always give 100%. Unless you're giving blood."
>
>
> Phone: +33 (0)1 49 70 99 72 -
> Mobile: +33 (0)6 52 84 44 70
> Mail: sebastien.han at enovance.com - Skype : han.sbastien
> Address : 10, rue de la Victoire - 75009 Paris
> Web : www.enovance.com - Twitter : @enovance
>
> On September 19, 2013 at 12:04:22 PM, Gavin (netmatters at gmail.com) wrote:
>
> Hi there,
>
> I'm hoping that someone can possibly shed some light on an issue that
> we are experiencing
> with the way that Cinder is scheduling Ceph volumes in our environment.
>
> We are running cinder-volume on each of our compute nodes, and they
> are all configured to make use of our Ceph cluster.
>
> As far as we can tell, the Ceph cluster is working as it should; however,
> the problem we are having is that each and every Ceph volume
> gets attached to only one of the compute nodes.
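>
> For what it's worth, which cinder-volume host a given volume was
> scheduled to can be checked with admin credentials by looking at the
> os-vol-host-attr:host field of something like the following (the volume
> ID is just a placeholder):
>
> cinder show <volume-id>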
>
> This is not ideal as it will create a bottleneck on that one host.
>
> From what I have read, the default Cinder scheduler should pick the
> cinder-volume node with the most available space, but since all
> compute nodes should report the same figure, i.e. the space available in
> the Ceph volume pool, how is this meant to work then?
>
> We have also tried to implement the Cinder chance scheduler in the
> hope that Cinder would randomly pick another storage node, but this did
> not make any difference.
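>
> For reference, the chance scheduler is selected with something like this
> in cinder.conf:
>
> scheduler_driver=cinder.scheduler.chance.ChanceScheduler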
>
> Has anyone else experienced the same or a similar issue?
>
> Is there perhaps a way that we can round-robin the volume attachments?
>
> OpenStack version: Grizzly, using Ubuntu LTS and Cloud PPA.
>
> Ceph version: Cuttlefish from Ceph PPA.
>
> Apologies for the cross-post if you get this twice; this email was
> first sent to the Ceph Users mailing list.
>
> Thanks in advance,
> Gavin
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators