[Openstack-operators] Cinder: multi-scheduler in a round robin fashion

Sebastien Han sebastien.han at enovance.com
Wed Jun 19 21:48:27 UTC 2013


Answering my own question: these two flags did what I wanted:

capacity_weight_multiplier=1.0
scheduler_default_weighers=CapacityWeigher
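
For context, here is a minimal sketch of how these lines might sit in cinder.conf next to a multi-backend setup (the backend names and sections below are hypothetical, not copied from my actual config):

    [DEFAULT]
    scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
    scheduler_default_weighers=CapacityWeigher
    capacity_weight_multiplier=1.0
    enabled_backends=lvm-1,lvm-2

    [lvm-1]
    volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name=LVM_1

    [lvm-2]
    volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name=LVM_2

With a positive capacity_weight_multiplier, the CapacityWeigher favours the backend reporting the most free capacity, which spreads volumes across backends; a negative value would stack them on one backend instead.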

Thanks for the guidance :)
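
And for anyone who would rather go with option 3 below (a custom weigher), here is a rough, untested sketch of what one could look like. The class name is made up, and I'm assuming the host state exposes allocated_capacity_gb (newer Cinder reports it; older drivers may not), so treat this as a starting point rather than working code:

    # Hypothetical weigher: spread volumes by preferring the backend
    # that has allocated the least so far, instead of the one with
    # the most free capacity. Untested sketch; names are assumptions.
    from cinder.openstack.common.scheduler import weights


    class AllocatedCapacityWeigher(weights.BaseHostWeigher):
        def _weigh_object(self, host_state, weight_properties):
            # The scheduler picks the host with the highest weight,
            # so negate the allocated capacity: the backend that has
            # allocated the least wins, even when every backend
            # reports the same free capacity.
            return -(host_state.allocated_capacity_gb or 0)

Drop it under cinder/scheduler/weights/ and point scheduler_default_weighers at the class name; whether the import path matches depends on your Cinder release.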

––––
Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."










Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70
Email : sebastien.han at enovance.com – Skype : han.sbastien
Address : 10, rue de la Victoire – 75009 Paris
Web : www.enovance.com – Twitter : @enovance

On Jun 18, 2013, at 2:16 PM, Sebastien Han <sebastien.han at enovance.com> wrote:

> Hi thanks for your answer,
> 
> 1) However, changing the scheduler to 'chance' is not a valid solution, since I'd lose the multi-backend functionality.
> 
> 2) I assume the driver reports the correct capacity, but once again, since it's always the same, I guess it just picks the first node.
> 
> 3) I think this one could solve my issue. Any ideas? An example of how to implement this?
> 
> Thanks again!
> 
> On Jun 18, 2013, at 11:10 AM, Huang Zhiteng <winston.d at gmail.com> wrote:
> 
>> Hmm, maybe all your volume drivers are reporting 'infinite' as free capacity no matter how many volumes were created.  Possible solutions:
>> 
>> 1) Change to the simple/chance scheduler; with this you will be able to do round-robin placement, but you lose the advanced features provided by the filter scheduler.
>> 2) Change your volume drivers so that they correctly report free capacity.
>> 3) Add a custom weigher that weighs volume services not by free capacity but by something else, such as the number of allocated volumes.
>> 
>> Both 2) and 3) require code changes and don't seem trivial.  So given that your environment is small and simple, I suggest you change to the simple/chance scheduler.
>> 
>> 
>> On Tue, Jun 18, 2013 at 5:02 PM, Sebastien Han <sebastien.han at enovance.com> wrote:
>> Thanks for your answer. The thing is, since it's always the same LUN, all the new volumes are created on the same block device. The free capacity is always the same, which is why it's not well balanced. Any idea how to tell the scheduler to do round robin?
>> 
>> On Jun 18, 2013, at 10:46 AM, Huang Zhiteng <winston.d at gmail.com> wrote:
>> 
>>> On Tue, Jun 18, 2013 at 3:57 PM, Sebastien Han <sebastien.han at enovance.com> wrote:
>>> Hi,
>>> 
>>> I use this one: scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
>>> 
>>> Well, the default weigher for the filter scheduler sorts all volume services by free capacity, if you haven't changed the default filter/weigher.  So I think the reason one volume service always gets picked to serve requests is that it reports the most available capacity among all volume services.  You can check the logs from the volume services as well as the scheduler to confirm.
>>>  
>>> Thanks!
>>> 
>>> On Jun 18, 2013, at 4:30 AM, Huang Zhiteng <winston.d at gmail.com> wrote:
>>> 
>>>> Hi Sebastien,
>>>> 
>>>> What scheduler are you using in Cinder?
>>>> 
>>>> 
>>>> On Mon, Jun 17, 2013 at 4:20 PM, Sebastien Han <sebastien.han at enovance.com> wrote:
>>>> Hi all,
>>>> 
>>>> Here's the problem:
>>>> 
>>>> I have two datacenters, two compute nodes per DC, a SAN, and a LUN that is mapped on every compute node, so all the compute nodes share the same block device. I use the LVM iSCSI driver for Cinder, and obviously multi-backend. I created availability zones for Cinder based on the location of the cinder-volume process (cinder-volume runs on every compute node), so I have two AZs: DC1 and DC2. However, when I create volumes, most of them end up on the first compute node of the specified location. As a result, most of the iSCSI targets are created on the same node, and I don't like this.
>>>> 
>>>> A little diagram here: http://www.asciiflow.com/#Draw7392320624318387463
>>>> 
>>>> My question is: can I tweak the scheduler to create volumes in a round-robin fashion, so that I get an even distribution of the targets?
>>>> 
>>>> Thanks in advance!
>>>> 
>>>> -- 
>>>> Regards
>>>> Huang Zhiteng
>>> 
>>> -- 
>>> Regards
>>> Huang Zhiteng
>> 
>> -- 
>> Regards
>> Huang Zhiteng
> 
