[Openstack] [CINDER] how to get updated pool info when multiple users create volumes on a configured pool?

Dilip Sunkum Manjunath Dilip.SunkumManjunath at toshiba-tsip.com
Fri Feb 12 05:01:44 UTC 2016


Hello Xing,

Thanks for your reply. I will try to clarify my question.


Baseline of the problem:

Pool stats are reported to the scheduler with thin_provisioning_support and thick_provisioning_support both set to True, along with max_over_subscription_ratio [all returned by the driver's get_volume_stats method].

The idea is to have both provisioning types supported.
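
For reference, a minimal sketch (a hypothetical driver class, not the actual Toshiba code) of the stats dict such a get_volume_stats() would report; the pool values mirror the log excerpt further below, while max_over_subscription_ratio is an assumed value, since it is not visible in the reported pools:

    # Hypothetical driver class; values mirror the update_service_capabilities
    # log below. max_over_subscription_ratio is an ASSUMED value (it does not
    # appear in the reported pools in the log).
    class SC3000Driver(object):
        def get_volume_stats(self, refresh=False):
            return {
                'volume_backend_name': 'sc3000',
                'vendor_name': 'toshiba',
                'pools': [
                    {'pool_name': '0',
                     'total_capacity_gb': 838,
                     'free_capacity_gb': 343.58,
                     'allocated_capacity_gb': 0,
                     'reserved_percentage': 1,
                     'thin_provisioning_support': True,
                     'thick_provisioning_support': True,
                     'max_over_subscription_ratio': 20.0},  # assumed
                    {'pool_name': '1',
                     'total_capacity_gb': 1126,
                     'free_capacity_gb': 112.6,
                     'allocated_capacity_gb': 200,
                     'reserved_percentage': 1,
                     'thin_provisioning_support': True,
                     'thick_provisioning_support': True,
                     'max_over_subscription_ratio': 20.0},  # assumed
                ],
            }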


The problem I face: I have two pools configured, 0 and 1.
Filtered [host 'devstack at sc3000#1': free_capacity_gb: 112.6, pools: None, host 'devstack at sc3000#0': free_capacity_gb: 343.58, pools: None]

If I create a 150 GB volume, the scheduler selects pool 1, which cannot accommodate the 150 GB request [I expect the scheduler to switch to pool 0].


My suspicion (I might be wrong):
Since max_over_subscription_ratio is set, even for a thick volume the scheduler considers max_over_subscription_ratio times the free space, and so lets the request through.
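
That suspicion matches the over-subscription branch of the CapacityFilter. A simplified, paraphrased sketch of the logic in cinder/scheduler/filters/capacity_filter.py (not the exact code):

    # Simplified sketch of the CapacityFilter's over-subscription logic
    # (paraphrased, not the exact Cinder code). With
    # thin_provisioning_support=True, the request is compared against
    # *virtual* free space, not physical free space.
    def pool_passes(requested, total, free, provisioned, ratio,
                    reserved_pct, thin_supported):
        free_physical = free - total * reserved_pct / 100.0
        if thin_supported and ratio >= 1:
            # Pass as long as the provisioned ratio stays under
            # max_over_subscription_ratio and the request fits in
            # free * ratio.
            if (provisioned + requested) / float(total) > ratio:
                return False
            return free_physical * ratio >= requested
        return free_physical >= requested

    # Pool 1 from the log: free=112.6, total=1126, provisioned=200.
    # With an ASSUMED ratio of 20: (200+150)/1126 ~= 0.31 < 20, and
    # (112.6 - 11.26) * 20 ~= 2026.8 >= 150, so the 150 GB request
    # passes even though a thick 150 GB volume cannot physically fit.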


Please advise on how to approach this.

What I want to achieve is that, even with thin_provisioning_support and thick_provisioning_support set along with max_over_subscription_ratio, the scheduler is aware of thin versus thick volumes before it sends the request to the driver, since I read the pool id in create_volume and it is needed for my storage API.
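
For context, the volume type's extra specs do reach the scheduler inside filter_properties, so thin-versus-thick intent can be carried there. A hedged sketch follows; the 'provisioning:type' key is a convention (set e.g. via "cinder type-key <type> set provisioning:type=thick"), and the stock Mitaka-era CapacityFilter does not consult it, though later Cinder releases added such support:

    # Hedged sketch: reading provisioning intent from the volume type's
    # extra specs inside a scheduler filter. 'provisioning:type' is an
    # assumed convention here, not something the stock scheduler of this
    # era acts on.
    def is_thick_request(filter_properties):
        vol_type = filter_properties.get('volume_type') or {}
        extra_specs = vol_type.get('extra_specs') or {}
        return extra_specs.get('provisioning:type') == 'thick'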




Logs for reference:

2016-02-04 14:55:02.289 DEBUG cinder.scheduler.manager [req-123b1df9-912a-4158-9185-f97b0b28db02 f3623b79d1554018b543beab7bf5bc32 61983bcbb6ed4decb8019adc6f28a343] Task 'cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create' (97af6df4-9724-4f85-8a66-fc22ce9fe797) transitioned into state 'RUNNING' from state 'PENDING' from (pid=22860) _task_receiver /usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
2016-02-04 14:55:02.295 DEBUG cinder.openstack.common.scheduler.base_filter [req-123b1df9-912a-4158-9185-f97b0b28db02 f3623b79d1554018b543beab7bf5bc32 61983bcbb6ed4decb8019adc6f28a343] Starting with 3 host(s) from (pid=22860) get_filtered_objects /opt/stack/cinder/cinder/openstack/common/scheduler/base_filter.py:77
2016-02-04 14:55:02.296 DEBUG cinder.openstack.common.scheduler.base_filter [req-123b1df9-912a-4158-9185-f97b0b28db02 f3623b79d1554018b543beab7bf5bc32 61983bcbb6ed4decb8019adc6f28a343] Filter AvailabilityZoneFilter returned 3 host(s) from (pid=22860) get_filtered_objects /opt/stack/cinder/cinder/openstack/common/scheduler/base_filter.py:94
2016-02-04 14:55:02.297 WARNING cinder.scheduler.filters.capacity_filter [req-123b1df9-912a-4158-9185-f97b0b28db02 f3623b79d1554018b543beab7bf5bc32 61983bcbb6ed4decb8019adc6f28a343] Insufficient free space for volume creation on host devstack at lvmdriver-1#lvmdriver-1 (requested / avail): 150/10.01
2016-02-04 14:55:02.297 DEBUG cinder.openstack.common.scheduler.base_filter [req-123b1df9-912a-4158-9185-f97b0b28db02 f3623b79d1554018b543beab7bf5bc32 61983bcbb6ed4decb8019adc6f28a343] Filter CapacityFilter returned 2 host(s) from (pid=22860) get_filtered_objects /opt/stack/cinder/cinder/openstack/common/scheduler/base_filter.py:94
2016-02-04 14:55:02.298 DEBUG cinder.openstack.common.scheduler.base_filter [req-123b1df9-912a-4158-9185-f97b0b28db02 f3623b79d1554018b543beab7bf5bc32 61983bcbb6ed4decb8019adc6f28a343] Filter CapabilitiesFilter returned 2 host(s) from (pid=22860) get_filtered_objects /opt/stack/cinder/cinder/openstack/common/scheduler/base_filter.py:94
2016-02-04 14:55:02.298 DEBUG cinder.scheduler.filter_scheduler [req-123b1df9-912a-4158-9185-f97b0b28db02 f3623b79d1554018b543beab7bf5bc32 61983bcbb6ed4decb8019adc6f28a343] Filtered [host 'devstack at sc3000#1': free_capacity_gb: 112.6, pools: None, host 'devstack at sc3000#0': free_capacity_gb: 343.58, pools: None] from (pid=22860) _get_weighted_candidates /opt/stack/cinder/cinder/scheduler/filter_scheduler.py:310
2016-02-04 14:55:02.299 DEBUG cinder.scheduler.filter_scheduler [req-123b1df9-912a-4158-9185-f97b0b28db02 f3623b79d1554018b543beab7bf5bc32 61983bcbb6ed4decb8019adc6f28a343] Choosing devstack at sc3000#1 from (pid=22860) _choose_top_host /opt/stack/cinder/cinder/scheduler/filter_scheduler.py:429
2016-02-04 14:55:02.380 DEBUG oslo_messaging._drivers.amqpdriver [req-123b1df9-912a-4158-9185-f97b0b28db02 f3623b79d1554018b543beab7bf5bc32 61983bcbb6ed4decb8019adc6f28a343] CAST unique_id: 237dbe5e94ea4cfcadb138bb9911e2ad exchange 'openstack' topic 'cinder-volume' from (pid=22860) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:448
2016-02-04 14:55:02.384 DEBUG cinder.scheduler.manager [req-123b1df9-912a-4158-9185-f97b0b28db02 f3623b79d1554018b543beab7bf5bc32 61983bcbb6ed4decb8019adc6f28a343] Task 'cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create' (97af6df4-9724-4f85-8a66-fc22ce9fe797) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' from (pid=22860) _task_receiver /usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
2016-02-04 14:55:02.386 DEBUG cinder.scheduler.manager [req-123b1df9-912a-4158-9185-f97b0b28db02 f3623b79d1554018b543beab7bf5bc32 61983bcbb6ed4decb8019adc6f28a343] Flow 'volume_create_scheduler' (36c3617e-bdd1-4583-9f2b-758294d3704c) transitioned into state 'SUCCESS' from state 'RUNNING' from (pid=22860) _flow_receiver /usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:140
2016-02-04 14:55:03.838 DEBUG oslo_messaging._drivers.amqpdriver [-] received message unique_id: c59ef0b3373b425999e997f47b3db1b4  from (pid=22860) __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:195
2016-02-04 14:55:03.840 DEBUG cinder.scheduler.host_manager [req-0e703203-354e-420a-91dc-653d2a831490 None None] Received volume service update from devstack at sc3000: {u'filter_function': None, u'goodness_function': None, u'volume_backend_name': u'sc3000', u'reserved_percentage': 1, u'pools': [{u'pool_name': u'0', u'QoS_support': False, u'thick_provisioning_support': True, u'allocated_capacity_gb': 0, u'thin_provisioning_support': True, u'free_capacity_gb': 343.58000000000004, u'total_capacity_gb': 838, u'reserved_percentage': 1, u'consistencygroup_support': False}, {u'pool_name': u'1', u'QoS_support': False, u'thick_provisioning_support': True, u'allocated_capacity_gb': 200, u'thin_provisioning_support': True, u'free_capacity_gb': 112.60000000000002, u'total_capacity_gb': 1126, u'reserved_percentage': 1, u'consistencygroup_support': False}], u'QoS_support': False, u'vendor_name': u'toshiba'} from (pid=22860) update_service_capabilities /opt/stack/cinder/cinder/scheduler/host_manager.py:450


Thanks
Dilip

From: yang, xing [mailto:xing.yang at emc.com]
Sent: Thursday, February 11, 2016 4:10 AM
To: Dilip Sunkum Manjunath; 'openstack at lists.openstack.org'
Cc: 'itzdilip at gmail.com'
Subject: Re: [Openstack] [CINDER] how to get updated pool info when multiple users create volumes on a configured pool?

Hi Dilip,

Can you please clarify your question? If a driver reports both thin_provisioning_support and thick_provisioning_support as True and reports free_capacity based on thin provisioning, and you want to provision a thick volume, is the issue that the scheduler won't block it and it then fails when the driver tries to create the volume? If you can describe the exact problem, we can try to find a solution.

Thanks,
Xing


From: Dilip Sunkum Manjunath <Dilip.SunkumManjunath at toshiba-tsip.com>
Date: Monday, February 8, 2016 at 12:52 AM
To: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
Cc: "itzdilip at gmail.com" <itzdilip at gmail.com>
Subject: Re: [Openstack] [CINDER] how to get updated pool info when multiple users create volumes on a configured pool?

Hi all,

The problem I noticed is that in the stats update we report both thick and thin volume support as True. This issue is reproducible.

So the over-subscription ratio together with the thick and thin flags is the reason the scheduler does not block a thick volume based on capacity; it passes the request on to the driver, which forces the driver to handle that particular use case itself.

I believe this has to be handled at the layer above; however, that layer will not know what kind of volume is being created unless volume types are used, so I think this also has to be considered in the scheduler.

Does anyone have a better approach, or is it better to write our own scheduler filter (a sketch follows below) rather than handling this in the driver?
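
For what it is worth, here is a hypothetical sketch of such a filter. The class name and the provisioning:type extra-spec key are illustrative, not stock Cinder; the import path matches the Mitaka-era tree shown in the logs above, so adjust it for your release, and enable the filter via scheduler_default_filters in cinder.conf:

    # Hypothetical custom filter (names and extra-spec key are assumed):
    # thick requests must fit in PHYSICAL free space, ignoring
    # max_over_subscription_ratio. Thin requests are left to the stock
    # CapacityFilter.
    from cinder.openstack.common.scheduler import filters


    class ThickProvisioningCapacityFilter(filters.BaseHostFilter):
        def host_passes(self, host_state, filter_properties):
            vol_type = filter_properties.get('volume_type') or {}
            extra_specs = vol_type.get('extra_specs') or {}
            if extra_specs.get('provisioning:type') != 'thick':
                # Thin or unspecified: defer to the stock CapacityFilter.
                return True

            requested = filter_properties.get('size', 0)
            total = host_state.total_capacity_gb
            free = host_state.free_capacity_gb
            if total in ('infinite', 'unknown') or free in ('infinite', 'unknown'):
                # Cannot reason about capacity; let it pass through.
                return True
            reserved = total * host_state.reserved_percentage / 100.0
            return free - reserved >= requested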



Thanks
Dilip



From: Wanghao (S) [mailto:wanghao749 at huawei.com]
Sent: Thursday, February 04, 2016 11:48 AM
To: Dilip Sunkum Manjunath; 'openstack at lists.openstack.org'
Cc: 'itzdilip at gmail.com'
Subject: Re: [CINDER] how to get updated pool info when multiple users create volumes on a configured pool?

Hi, Dilip

Generally, the Cinder scheduler consumes free_capacity_gb after choosing the host for a volume creation; see the consume_from_volume function in host_manager.py.
That keeps the pool capacity figures updated correctly while multiple users are creating volumes.
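
A simplified, paraphrased sketch of that function (not the exact Cinder code):

    # Paraphrased sketch of consume_from_volume() from
    # cinder/scheduler/host_manager.py: once a pool is chosen, the
    # scheduler decrements its cached free capacity immediately, so
    # concurrent requests within the same 60-second stats window
    # already see the reduced numbers.
    from oslo_utils import timeutils

    class PoolState(object):  # stands in for the real host/pool state
        def consume_from_volume(self, volume):
            size = volume['size']
            self.allocated_capacity_gb += size
            self.provisioned_capacity_gb += size
            if self.free_capacity_gb not in ('infinite', 'unknown'):
                self.free_capacity_gb -= size
            self.updated = timeutils.utcnow()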

Thanks
Wang Hao

From: Dilip Sunkum Manjunath [mailto:Dilip.SunkumManjunath at toshiba-tsip.com]
Sent: Wednesday, February 3, 2016 17:50
To: 'openstack at lists.openstack.org'
Cc: 'itzdilip at gmail.com'
Subject: [Openstack] [CINDER] how to get updated pool info when multiple users create volumes on a configured pool?


Hi All,


The get_volume_stats method runs once every 60 seconds.

I am using multiple pools configured in it. While creating volumes, if more than one user is creating volumes at the same time, how can the pool information coming from that polling job be reliable, given that it only runs once every 60 seconds?

I am getting stale values and the request is failing. Has anyone faced this before?


Thanks
Dilip




The information contained in this e-mail message and in any attachments/annexure/appendices is confidential to the
recipient and may contain privileged information. If you are not the intended recipient, please notify the
sender and delete the message along with any attachments/annexure/appendices. You should not disclose,
copy or otherwise use the information contained in the message or any annexure. Any views expressed in this e-mail
are those of the individual sender except where the sender specifically states them to be the views of
Toshiba Software India Pvt. Ltd. (TSIP),Bangalore.
Although this transmission and any attachments are believed to be free of any virus or other defect that might affect any computer system into which it is received and opened, it is the responsibility of the recipient to ensure that it is virus free and no responsibility is accepted by Toshiba Software India Pvt. Ltd, for any loss or damage arising in any way from its use.


