[openstack-dev] [quotas] [cyborg]Dublin Rocky PTG Summary

Tom Barron tpb at dyncloud.net
Mon Mar 12 18:56:34 UTC 2018


Just a remark below w.r.t. quota support in some other projects fwiw.

On 09/03/18 15:46 +0800, Zhipeng Huang wrote:
> Hi Team,
>
>Thanks to our topic leads' efforts, below is the aggregated summary from
>our Dublin PTG session discussions. Please check it out and feel free to
>share any concerns you might have.
>

< -- snip -- >

> Quota and Multi-tenancy Support
> Etherpad: https://etherpad.openstack.org/p/cyborg-ptg-rocky-quota
> Slide:
> https://docs.google.com/presentation/d/1DUKWW2vgqUI3Udl4UDvxgJ53Ve5LmyaBpX4u--rVrCc/edit?usp=sharing
>
> 1. Provide project and user level quota support
> 2. Treat all resources as the reserved resource type
> 3. Add quota engine and quota driver for the quota support
> 4. Tables: quotas, quota_usage, reservation
> 5. Transactions operation: reserve, commit, rollback
>
>   - Concerns on rollback
>
>
>   - Implement a two-stage reservation and rollback
>
>
>   - reserve - commit - rollback (if failed)
>
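To make the two-stage model concrete, here is a minimal in-memory sketch of the reserve/commit/rollback flow over the three tables named above (quotas, quota_usage, reservation). The dictionary "tables", the seed values, and the helper names are illustrative assumptions, not any project's actual schema or API:

```python
# Hypothetical in-memory stand-ins for the quotas, quota_usage and
# reservation tables from the summary above.
import uuid

quotas = {("proj-1", "accelerators"): 10}   # hard limits per (project, resource)
quota_usages = {("proj-1", "accelerators"): {"in_use": 4, "reserved": 0}}
reservations = {}                           # reservation records by id


def reserve(project, resource, delta):
    """Stage 1: record a reservation if the limit allows, else fail."""
    usage = quota_usages[(project, resource)]
    limit = quotas[(project, resource)]
    if usage["in_use"] + usage["reserved"] + delta > limit:
        raise Exception("OverQuota")
    usage["reserved"] += delta
    rid = str(uuid.uuid4())
    reservations[rid] = (project, resource, delta)
    return rid


def commit(rid):
    """Stage 2a: the operation succeeded -- convert reserved into in_use."""
    project, resource, delta = reservations.pop(rid)
    usage = quota_usages[(project, resource)]
    usage["reserved"] -= delta
    usage["in_use"] += delta


def rollback(rid):
    """Stage 2b: the operation failed -- release the reservation."""
    project, resource, delta = reservations.pop(rid)
    quota_usages[(project, resource)]["reserved"] -= delta
```

The fragility Tom describes below comes from these bookkeeping updates being split across services: if a commit or rollback is lost, the cached usage drifts from reality.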

Note that cinder and manila followed the nova implementation of a two-stage
reservation/commit/rollback model, but the resulting systems have been buggy:
over time, the quota system's notion of resource usage drifts out of sync with
actual resource usage.

Nova has since dropped the reserve/commit/rollback model [0], and cinder and
manila are considering a similar change.

Currently we create reservation records and update quota usage in
the API service, then remove the reservation records and update quota
usage again in another service at commit or rollback time, or on reservation
timeout. Nova now avoids this double bookkeeping of resource usage, and
the need to update these records consistently across separate services, by
directly counting resources in the API at the time requests are
received. If we can do the same thing in cinder and manila, a whole
class of tough, recurrent bugs can be eliminated.
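As a rough sketch of that counting approach (the resource list and helper names here are illustrative assumptions, not the actual nova API): the quota check simply counts what the project really owns at request time, so there is no cached usage to fall out of sync.

```python
# Hedged sketch of "count resources at request time": no quota_usage or
# reservation records to maintain, just a fresh count against the limit.

def count_usage(resources, project):
    """Count live resources owned by the project -- no cached usage."""
    return sum(1 for r in resources if r["project_id"] == project)


def check_quota(resources, project, limit, requested=1):
    """Raise if granting the request would exceed the project's limit."""
    if count_usage(resources, project) + requested > limit:
        raise Exception("OverQuota")
```

The trade-off, discussed below, is that each check queries actual usage rather than reading a cached counter.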

The main concern expressed so far with this "resource counting" approach
is a possible negative performance impact, since the current approach
provides cached usage information to the API service. As you can see
here [1], there is not yet agreement on the degree of performance impact,
but there does seem to be agreement that we first need a quota system
that is correct and reliable, and can then optimize for performance as
needed.

Best regards,

-- Tom Barron

[0] https://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/cells-count-resources-to-check-quota-in-api.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128108.html