[Openstack] [CINDER] how to get updated pool info when multi users create volumes on pool configured?
Dilip Sunkum Manjunath
Dilip.SunkumManjunath at toshiba-tsip.com
Mon Feb 8 05:52:04 UTC 2016
Hi all,
The problem I noticed is that in the update-stats call we report both thick and thin provisioning support as true. This issue can be reproduced.
So because of the over-provisioning and thick/thin parameters, the scheduler does not block a thick volume request based on capacity; it passes the request down to the driver, which forces the driver to handle that particular case itself!!
I believe this has to be handled at the layer above, but that layer does not know what kind of volume is being created via volume types, so I think the scheduler also has to take this into account!
Does anyone have a better approach for this, or is it better to write our own scheduler filters to avoid handling it in the driver?
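For what it's worth, the capacity check such a custom filter might perform could be sketched like this. This is a standalone illustration only, not the real cinder.scheduler.filters API; the function name is hypothetical, and the dictionary keys mirror the stats a driver typically reports:

```python
# Sketch of the capacity decision a custom scheduler filter might make
# for thick- vs thin-provisioned volumes. Standalone illustration; not
# actual Cinder code, and thick_volume_fits is a hypothetical name.

def thick_volume_fits(pool_stats, volume_size_gb, is_thick):
    """Return True if the request should pass the filter.

    pool_stats is assumed to carry the keys a driver reports in
    get_volume_stats: total_capacity_gb, free_capacity_gb,
    reserved_percentage, max_over_subscription_ratio.
    """
    total = pool_stats['total_capacity_gb']
    free = pool_stats['free_capacity_gb']
    reserved = total * pool_stats.get('reserved_percentage', 0) / 100.0

    if is_thick:
        # A thick volume needs real free space up front.
        return free - reserved >= volume_size_gb

    # A thin volume may over-subscribe up to the configured ratio.
    ratio = pool_stats.get('max_over_subscription_ratio', 1.0)
    provisioned = pool_stats.get('provisioned_capacity_gb', total - free)
    return provisioned + volume_size_gb <= total * ratio
```

The point is that the filter can reject a thick request against raw free capacity while still allowing thin requests to over-subscribe, so the driver never sees a thick request it cannot honour.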
Thanks
Dilip
From: Wanghao (S) [mailto:wanghao749 at huawei.com]
Sent: Thursday, February 04, 2016 11:48 AM
To: Dilip Sunkum Manjunath; 'openstack at lists.openstack.org'
Cc: 'itzdilip at gmail.com'
Subject: Re: [CINDER] how to get updated pool info when multi users create volumes on pool configured?
Hi, Dilip
Generally, the Cinder scheduler consumes free_capacity_gb after it has chosen the host for a volume creation; see the consume_from_volume function in host_manager.py.
That keeps the pool's capacity correctly updated when multiple users are creating volumes.
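The consumption described above can be modelled roughly as follows. This is a simplified standalone sketch of what consume_from_volume does to the scheduler's cached pool state, not the actual code in host_manager.py:

```python
class PoolState:
    """Simplified stand-in for the scheduler's cached per-pool state."""

    def __init__(self, total_capacity_gb, free_capacity_gb):
        self.total_capacity_gb = total_capacity_gb
        self.free_capacity_gb = free_capacity_gb
        self.provisioned_capacity_gb = total_capacity_gb - free_capacity_gb

    def consume_from_volume(self, volume):
        # Deduct the new volume's size immediately, so concurrent
        # requests landing within the same 60-second stats window see
        # the updated free capacity rather than the stale driver report.
        size = volume['size']
        self.free_capacity_gb -= size
        self.provisioned_capacity_gb += size
```

So even though the driver only refreshes its stats once a minute, each scheduled creation decrements the cached free capacity right away.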
Thanks
Wang Hao
From: Dilip Sunkum Manjunath [mailto:Dilip.SunkumManjunath at toshiba-tsip.com]
Sent: Wednesday, February 03, 2016 5:50 PM
To: 'openstack at lists.openstack.org'
Cc: 'itzdilip at gmail.com'
Subject: [Openstack] [CINDER] how to get updated pool info when multi users create volumes on pool configured?
Hi All,
the get_volume_stats method runs once every 60 seconds,
and I am using multiple pools configured in it. While creating volumes,
if more than one user is creating them at the same time, how can the pool information from the polling job be reliable? It only runs once every 60 seconds!
I am getting stale values and the requests are failing. Has anyone faced this before?
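For context, a driver reporting multiple pools returns a stats structure shaped roughly like this from get_volume_stats; the scheduler caches it between the 60-second polling runs. This is a minimal sketch; the backend name, pool names, and capacities are made up:

```python
def get_volume_stats():
    # Minimal sketch of the multi-pool stats dict a Cinder driver
    # reports. All names and numbers here are illustrative only.
    return {
        'volume_backend_name': 'example_backend',
        'vendor_name': 'Example',
        'driver_version': '1.0',
        'storage_protocol': 'iSCSI',
        'pools': [
            {
                'pool_name': 'pool_a',
                'total_capacity_gb': 500,
                'free_capacity_gb': 200,
                'reserved_percentage': 0,
                'thin_provisioning_support': True,
                'thick_provisioning_support': True,
            },
            {
                'pool_name': 'pool_b',
                'total_capacity_gb': 500,
                'free_capacity_gb': 450,
                'reserved_percentage': 0,
                'thin_provisioning_support': True,
                'thick_provisioning_support': False,
            },
        ],
    }
```

The free_capacity_gb values in this dict are only as fresh as the last polling run, which is exactly why concurrent creations can race against them.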
Thanks
Dilip
The information contained in this e-mail message and in any attachments/annexure/appendices is confidential to the
recipient and may contain privileged information. If you are not the intended recipient, please notify the
sender and delete the message along with any attachments/annexure/appendices. You should not disclose,
copy or otherwise use the information contained in the message or any annexure. Any views expressed in this e-mail
are those of the individual sender except where the sender specifically states them to be the views of
Toshiba Software India Pvt. Ltd. (TSIP),Bangalore.
Although this transmission and any attachments are believed to be free of any virus or other defect that might affect any computer system into which it is received and opened, it is the responsibility of the recipient to ensure that it is virus free and no responsibility is accepted by Toshiba Software India Pvt. Ltd, for any loss or damage arising in any way from its use.