[Openstack-operators] Cinder: multi-scheduler in a round robin fashion

Sebastien Han sebastien.han at enovance.com
Tue Jun 18 09:02:00 UTC 2013


Thanks for your answer. The thing is, since it's always the same LUN, all the new volumes are created on the same block device. The reported free capacity is always the same, which is why placement isn't well balanced. Any idea how to tell the scheduler to do round robin?
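
Since every backend here reports identical free capacity, the stock CapacityWeigher always ties and the first host in the sorted list wins every time, so one way out is a tie-breaking weigher. Below is a minimal sketch, not stock Cinder: it assumes the Grizzly-era weigher interface (BaseHostWeigher subclasses implementing _weigh_object(), the same hook the stock CapacityWeigher uses); the module and class names are hypothetical. It spreads volumes statistically rather than in strict round robin order:

    # random_weigher.py -- a minimal sketch, not stock Cinder; module
    # and class names are hypothetical.  Assumes the Grizzly-era weigher
    # interface: subclasses of BaseHostWeigher implement _weigh_object(),
    # the same hook the stock CapacityWeigher uses.
    import random

    from cinder.openstack.common.scheduler import weights


    class RandomWeigher(weights.BaseHostWeigher):
        """Break capacity ties by weighing backends randomly.

        With a shared LUN, every backend reports the same free
        capacity, so the CapacityWeigher always ties and the
        scheduler keeps electing the same host.  A uniform random
        weight makes each eligible backend equally likely to win,
        which evens out placement over time (statistical spreading,
        not strict round robin).
        """

        def _weigh_object(self, host_state, weight_properties):
            # Higher weights win; ignore host_state entirely and
            # return a uniform random value in [0, 1).
            return random.random()

If that class were dropped somewhere importable, it could be enabled by pointing scheduler_default_weighers (a real option, default CapacityWeigher) at it in cinder.conf instead of the capacity weigher.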

––––
Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."










Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70
Email : sebastien.han at enovance.com – Skype : han.sbastien
Address : 10, rue de la Victoire – 75009 Paris
Web : www.enovance.com – Twitter : @enovance

On Jun 18, 2013, at 10:46 AM, Huang Zhiteng <winston.d at gmail.com> wrote:

> On Tue, Jun 18, 2013 at 3:57 PM, Sebastien Han <sebastien.han at enovance.com> wrote:
> Hi,
> 
> I use this one: scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
> 
> Well, the default weigher for the filter scheduler sorts all volume services by free capacity, assuming you haven't changed the default filter/weigher. So I think the reason one volume service always gets picked to serve requests is that it reports the most available capacity among all volume services. You may check the logs from the volume services as well as the scheduler to confirm.
>  
> Thanks!
> 
> ––––
> Sébastien Han
> Cloud Engineer
> 
> "Always give 100%. Unless you're giving blood."
> 
> 
> 
> <image.png>
> 
> 
> 
> 
> 
> 
> Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70
> Email : sebastien.han at enovance.com – Skype : han.sbastien
> Address : 10, rue de la Victoire – 75009 Paris
> Web : www.enovance.com – Twitter : @enovance
> 
> On Jun 18, 2013, at 4:30 AM, Huang Zhiteng <winston.d at gmail.com> wrote:
> 
>> Hi Sebastien,
>> 
>> What scheduler are you using in Cinder?
>> 
>> 
>> On Mon, Jun 17, 2013 at 4:20 PM, Sebastien Han <sebastien.han at enovance.com> wrote:
>> Hi all,
>> 
>> Here the problem:
>> 
>> I have 2 datacenters, 2 compute nodes per DC, a SAN, and the LUN is mapped on every compute node, so all the compute nodes share the same block device. I use the LVM iSCSI driver for Cinder and, obviously, multi-backend. I then created availability zones for Cinder based on the location of the cinder-volume process (cinder-volume runs on every compute node), giving me two AZs: DC1 and DC2. However, when I create a volume, most volumes land on the first compute node of the specified zone, so most of the iSCSI targets end up on the same node, and I don't like this.
>> 
>> Little schema here http://www.asciiflow.com/#Draw7392320624318387463
>> 
>> My question is: can I tweak the scheduler to create volumes in a round robin fashion, so that I get an even distribution of the targets?
>> 
>> Thanks in advance!
>> 
>> ––––
>> Sébastien Han
>> Cloud Engineer
>> 
>> "Always give 100%. Unless you're giving blood."
>> 
>> 
>> 
>> <image.png>
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70
>> Email : sebastien.han at enovance.com – Skype : han.sbastien
>> Address : 10, rue de la Victoire – 75009 Paris
>> Web : www.enovance.com – Twitter : @enovance
>> 
>> 
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> 
>> 
>> 
>> 
>> -- 
>> Regards
>> Huang Zhiteng
> 
> 
> 
> 
> -- 
> Regards
> Huang Zhiteng
