[Openstack] [OpenStack] [CINDER] how to get updated pool info when multi users create volumes on pool configured?
yang, xing
xing.yang at emc.com
Fri Feb 26 15:46:20 UTC 2016
Hi Dilip,
What are the values of thin_provisioning_support and max_over_subscription_ratio reported by the two backends?
For the one that supports thick, thin_provisioning_support should be False. There is also a bug regarding max_over_subscription_ratio; a fix was proposed but has not merged yet, so set this ratio to -1 for now.
Set 'thin_provisioning_support': '<is> False' in the extra specs of the volume type you created for thick provisioning. This should tell the scheduler not to choose a backend that supports thin.
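For reference, a minimal sketch of setting that extra spec with python-cinderclient (the type name 'thick' and the credentials below are placeholders, not from this thread):

    from cinderclient import client

    # Credentials and endpoint are placeholders; use your own environment.
    cinder = client.Client('2', 'admin', 'secret', 'admin',
                           'http://controller:5000/v2.0')

    # 'thick' is an example volume type name.
    vtype = cinder.volume_types.create('thick')

    # Unscoped extra specs are matched against the capabilities a backend
    # reports, so a backend reporting thin_provisioning_support=True should
    # be filtered out for this volume type.
    vtype.set_keys({'thin_provisioning_support': '<is> False'})

The same can be done from the CLI with: cinder type-key thick set thin_provisioning_support='<is> False'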
Let me know how it goes.
Thanks,
Xing
On Feb 26, 2016, at 12:03 AM, Dilip Sunkum Manjunath <Dilip.SunkumManjunath at toshiba-tsip.com> wrote:
Hello Xing,
I tried this approach; however, I ended up with the same problem: the scheduler was still picking the over-provisioned thin pool for thick volumes.
But I noticed the extra specs with the thin_provisioning_support capability. I tried this option but am unable to understand what it is used for.
When we add this capability to the extra specs, does it have any relation to the get_volume_stats method, which updates the pool stats?
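For context, the relation is roughly this: the capabilities a driver reports from get_volume_stats() are what the scheduler's CapabilitiesFilter compares the volume type's unscoped extra specs against. A simplified sketch of that matching (illustrative only, not the actual Cinder filter code):

    # Illustrative matcher: extra spec "thin_provisioning_support": "<is> False"
    # versus the capabilities reported by get_volume_stats().
    def backend_matches(extra_specs, reported_capabilities):
        for key, req in extra_specs.items():
            cap = reported_capabilities.get(key)
            if req.startswith('<is>'):
                wanted = req.split(None, 1)[1].strip().lower() == 'true'
                if bool(cap) != wanted:
                    return False
            elif str(cap) != str(req):
                return False
        return True

    # A pool reporting thin support is rejected for the thick volume type.
    pool_caps = {'thin_provisioning_support': True,
                 'thick_provisioning_support': True}
    print(backend_matches({'thin_provisioning_support': '<is> False'}, pool_caps))  # False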
Thanks
Dilip
I added some logs in the filters and printed them for reference:
<image001.png>
-----Original Message-----
From: yang, xing [mailto:xing.yang at emc.com]
Sent: Wednesday, February 17, 2016 12:18 AM
To: Dilip Sunkum Manjunath
Cc: openstack at lists.openstack.org; itzdilip at gmail.com
Subject: Re: [OpenStack] [CINDER] how to get updated pool info when multi users create volumes on pool configured?
Sounds good. Let me know how it goes.
Thanks Dilip,
Xing
> On Feb 16, 2016, at 1:21 AM, Dilip Sunkum Manjunath <Dilip.SunkumManjunath at toshiba-tsip.com> wrote:
>
> Hi Xing,
>
>
> Thanks for the reply,
>
>
>
> I tried it because the use case was to support both in a single pool.
>
> I was thinking along the same lines, i.e. to read the volume type in the scheduler; however, since that is a new requirement that affects everyone, it might not be good to change now.
>
> I shall try the other approach, separate pools for thin/thick, and update you.
>
>
> Thanks
> Dilip
>
> -----Original Message-----
> From: yang, xing [mailto:xing.yang at emc.com]
> Sent: Friday, February 12, 2016 12:42 PM
> To: Dilip Sunkum Manjunath
> Cc: openstack at lists.openstack.org; itzdilip at gmail.com
> Subject: Re: [OpenStack] [CINDER] how to get updated pool info when multi users create volumes on pool configured?
>
> Hi Dilip,
>
> I see. If thin_provisioning_support is true and max_over_subscription_ratio is valid, the scheduler will treat it as thin provisioning. We do not prevent a driver from reporting both thin and thick support as true. However, I think we need to make a change.
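> As a rough illustration of that behavior (simplified, not the exact Cinder CapacityFilter code), assuming the usual capacity fields in the reported stats:
>
>     # Simplified: why a pool reporting thin_provisioning_support=True with a
>     # valid max_over_subscription_ratio is evaluated as thin.
>     def backend_has_room(volume_size_gb, stats):
>         if stats.get('thin_provisioning_support') and \
>                 stats.get('max_over_subscription_ratio', 0) >= 1:
>             # Thin: compare provisioned (allocated, not physically used)
>             # capacity against the oversubscribed total.
>             limit = (stats['total_capacity_gb'] *
>                      stats['max_over_subscription_ratio'])
>             return (stats.get('provisioned_capacity_gb', 0) +
>                     volume_size_gb) <= limit
>         # Thick: the new volume has to fit into the real free space.
>         return volume_size_gb <= stats['free_capacity_gb']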
>
> I suggest that you have one pool for thin and the other one for thick but don't report both thin and thick support from the same pool. That will avoid this problem.
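> A driver-side sketch of that layout (the pool names, sizes, and backend name below are made up for illustration):
>
>     def get_volume_stats(self, refresh=False):
>         # Report two pools with distinct provisioning support so that no
>         # single pool claims both thin and thick.
>         return {
>             'volume_backend_name': 'my_backend',
>             'vendor_name': 'Vendor',
>             'driver_version': '1.0',
>             'storage_protocol': 'iSCSI',
>             'pools': [
>                 {'pool_name': 'thin_pool',
>                  'total_capacity_gb': 1000,
>                  'free_capacity_gb': 400,
>                  'provisioned_capacity_gb': 1500,
>                  'thin_provisioning_support': True,
>                  'thick_provisioning_support': False,
>                  'max_over_subscription_ratio': 20.0,
>                  'reserved_percentage': 0},
>                 {'pool_name': 'thick_pool',
>                  'total_capacity_gb': 1000,
>                  'free_capacity_gb': 700,
>                  'thin_provisioning_support': False,
>                  'thick_provisioning_support': True,
>                  'reserved_percentage': 0},
>             ],
>         }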
>
> Another possible alternative is to require thin/thick provisioning to be in the extra specs and use that info in the scheduler; however, that would be a new requirement that affects everyone, so I am not in favor of that approach.
>
> Can you use one pool for thin and another for thick in your testing?
>
> Thanks,
> Xing
>
>
>
>> On Feb 12, 2016, at 12:05 AM, Dilip Sunkum Manjunath <Dilip.SunkumManjunath at toshiba-tsip.com> wrote:
>>
>> max_over_subscription_ratio
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 127000 bytes
Desc: image001.png
URL: <http://lists.openstack.org/pipermail/openstack/attachments/20160226/76881ccf/attachment.png>