[openstack-dev] More on the topic of DELIMITER, the Quota Management Library proposal

Vilobh Meshram vilobhmeshram.openstack at gmail.com
Wed Apr 27 03:44:01 UTC 2016


Thanks everyone for being there at the Design Summit talk on Cross project
Quotas.

Points from the discussion are captured in this etherpad[1] for reference.

-Vilobh
[1] https://etherpad.openstack.org/p/newton-quota-library

On Tue, Apr 26, 2016 at 9:33 AM, Jay Pipes <jaypipes at gmail.com> wrote:

> On 04/25/2016 02:05 PM, Joshua Harlow wrote:
>
>> Is the generation stuff going to be exposed outside of the API?
>>
>
> No, wasn't planning on it.
>
>> I'm sort of hoping not(?), because a service (if anyone ever wanted to
>> create one) for, say, zookeeper (or etcd or consul...) could use its
>> built-in generation equivalent (every znode has a version that you can
>> use to do equivalent things).
>>
>> Thus it's like the gist I posted earlier @
>>
>> https://gist.github.com/harlowja/e7175c2d76e020a82ae94467a1441d85
>>
>> So might be nice to not expose such a thing outside of the db-layer.
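[The znode-version idea above is plain optimistic concurrency; here is a minimal in-memory sketch of it in Python. The class and helper names are illustrative stand-ins, not zookeeper's or kazoo's API — the `version` field plays the role of a znode version or the db-layer generation.]

```python
import threading

class VersionedValue:
    """In-memory analogue of a znode: a value plus a version that is
    bumped on every successful write."""

    def __init__(self, value=0):
        self._lock = threading.Lock()
        self.value = value
        self.version = 0

    def read(self):
        with self._lock:
            return self.value, self.version

    def compare_and_set(self, new_value, expected_version):
        """Write new_value only if nobody has written since we read."""
        with self._lock:
            if self.version != expected_version:
                return False  # a concurrent writer got there first
            self.value = new_value
            self.version += 1
            return True

def claim(counter, amount, limit):
    """Optimistically claim `amount` against `limit`, retrying races."""
    while True:
        used, version = counter.read()
        if used + amount > limit:
            return False  # quota exceeded, give up
        if counter.compare_and_set(used + amount, version):
            return True   # our view was still current; claim landed
```

[With a real zookeeper client the loop is the same shape: read the znode and its version, then do a version-conditioned set and retry if the version check fails.]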
>>
>> Amrith Kumar wrote:
>>
>>> On Sat, 2016-04-23 at 21:41 +0000, Amrith Kumar wrote:
>>>
>>>> Ok to beer and high bandwidth. FYI Jay, the distributed high-perf db
>>>> we did a couple of years ago is now open source. Just saying. MySQL
>>>> plug-compatible ....
>>>>
>>>
>>> -amrith
>>>>
>>>>
>>>> --
>>>> Amrith Kumar
>>>> amrith at tesora.com
>>>>
>>>>
>>>> -------- Original message --------
>>>> From: Jay Pipes<jaypipes at gmail.com>
>>>> Date: 04/23/2016 4:10 PM (GMT-05:00)
>>>> To: Amrith Kumar<amrith at tesora.com>,
>>>> openstack-dev at lists.openstack.org
>>>> Cc: vilobhmm at yahoo-inc.com, nik.komawar at gmail.com, Ed Leafe
>>>> <ed at leafe.com>
>>>> Subject: Re: [openstack-dev] More on the topic of DELIMITER, the Quota
>>>> Management Library proposal
>>>>
>>>>
>>>> Looking forward to arriving in Austin so that I can buy you a beer,
>>>> Amrith, and have a high-bandwidth conversation about how you're
>>>> wrong. :P
>>>>
>>>
>>>
>>> Jay and I chatted and it took a long time to come to an agreement
>>> because we weren't able to find any beer.
>>>
>>> Here's what I think we've agreed about. The library will store data in
>>> two tables:
>>>
>>> 1. the detail table which stores the individual claims and the resource
>>> class.
>>> 2. a generation table which stores the resource class and a generation.
>>>
>>> When a claim is received, the requestor performs the following
>>> operations.
>>>
>>> begin
>>>
>>>          select sum(detail.claims) as total_claims,
>>>              generation.resource as resource,
>>>              generation.generation last_generation
>>>          from detail, generation
>>>          where detail.resource = generation.resource
>>>          and generation.resource = <chosen resource; memory, cpu, ...>
>>>          group by generation.generation, generation.resource
>>>
>>>          if total_claims + this_claim < limit
>>>                  insert into detail values (this_claim, resource)
>>>
>>>                  update generation
>>>                  set generation = generation + 1
>>>                  where generation = last_generation
>>>
>>>                  if @@rowcount = 1
>>>                          -- all good
>>>                          commit
>>>                  else
>>>                          rollback
>>>                          -- try again
>>>
>>>
>>>
>>> There will be some bootstrapping that will be required for the situation
>>> where there are no detail records for a given resource and so on but I
>>> think we can figure that out easily. The easiest way I can think of
>>> doing that is to lose the join and do both the queries (one against the
>>> detail and one against the generation table within the same
>>> transaction).
>>>
>>> The generation table update serves as the locking mechanism that
>>> prevents multiple requestors from making concurrent claims.
>>>
>>> So long as people don't go and try to read and change these tables
>>> outside of the methods that the library provides, we can guarantee
>>> that this is all safe and will not oversubscribe.
>>>
>>> -amrith
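[The agreed two-table scheme above can be made concrete with a runnable sketch using sqlite3. The `detail` and `generation` tables follow the email; the `claim()` helper and everything else are illustrative assumptions, not Delimiter code. sqlite serializes writers, so the generation compare-and-update is shown for the logic rather than for real contention.]

```python
import sqlite3

# Manual transaction control so BEGIN/COMMIT/ROLLBACK are explicit.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.executescript("""
    CREATE TABLE detail (claim INTEGER, resource TEXT);
    CREATE TABLE generation (resource TEXT PRIMARY KEY,
                             generation INTEGER);
    INSERT INTO generation VALUES ('memory', 0);
""")

def claim(conn, resource, this_claim, limit):
    """Try to claim `this_claim` units; True if the claim landed."""
    cur = conn.cursor()
    cur.execute("BEGIN")
    try:
        # One consistent view: current usage plus the generation marker.
        cur.execute(
            "SELECT COALESCE(SUM(d.claim), 0), g.generation"
            " FROM generation g LEFT JOIN detail d"
            " ON d.resource = g.resource"
            " WHERE g.resource = ?", (resource,))
        total_claims, last_generation = cur.fetchone()
        if total_claims + this_claim > limit:
            cur.execute("ROLLBACK")
            return False
        cur.execute("INSERT INTO detail VALUES (?, ?)",
                    (this_claim, resource))
        # Compare-and-update: bump the generation only if it is
        # unchanged since our read; rowcount 0 means a racing writer
        # got in first and the caller should retry.
        cur.execute(
            "UPDATE generation SET generation = generation + 1"
            " WHERE resource = ? AND generation = ?",
            (resource, last_generation))
        if cur.rowcount == 1:
            cur.execute("COMMIT")
            return True
        cur.execute("ROLLBACK")
        return False
    except Exception:
        cur.execute("ROLLBACK")
        raise
```

[The LEFT JOIN plus COALESCE handles the bootstrapping case mentioned below, where no detail rows exist yet for a resource.]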
>>>
>>> Comments inline.
>>>>
>>>> On 04/23/2016 11:25 AM, Amrith Kumar wrote:
>>>>
>>>>> On Sat, 2016-04-23 at 10:26 -0400, Andrew Laski wrote:
>>>>>
>>>>>> On Fri, Apr 22, 2016, at 09:57 PM, Tim Bell wrote:
>>>>>>
>>>>>> I have reservations on f and g.
>>>>>>>
>>>>>>>
>>>>>>> On f., We have had a number of discussions in the past about
>>>>>>> centralising quota (e.g. Boson) and the project teams of the other
>>>>>>> components wanted to keep the quota contents ‘close’. This can
>>>>>>> always be reviewed further with them but I would hope for at least
>>>>>>> a standard schema structure of tables in each project for the
>>>>>>> handling of quota.
>>>>>>>
>>>>>>>
>>>>>>> On g., aren’t all projects now nested projects ? If we have the
>>>>>>> complexity of handling nested projects sorted out in the common
>>>>>>> library, is there a reason why a project would not want to support
>>>>>>> nested projects ?
>>>>>>>
>>>>>>>
>>>>>>> One other issue is how to do reconciliation; each project needs
>>>>>>> to have a mechanism to re-calculate the current allocations and
>>>>>>> reconcile that with the quota usage. While in an ideal world, this
>>>>>>> should not be necessary, it would be for the foreseeable future,
>>>>>>> especially with a new implementation.
>>>>>>>
>>>>>> One of the big reasons that Jay and I have been pushing to remove
>>>>>> reservations and tracking of quota in a separate place than the
>>>>>> resources are actually used, e.g., an instance record in the Nova
>>>>>> db, is so that reconciliation is not necessary. For example, if RAM
>>>>>> quota usage is simply tracked as sum(instances.memory_mb) then you
>>>>>> can be sure that usage is always up to date.
>>>>>>
>>>>> Uh oh, there be gremlins here ...
>>>>>
>>>>> I am positive that this will NOT work, see earlier conversations
>>>>> about isolation levels, and Jay's alternate solution.
>>>>>
>>>>> The way (I understand the issue, and Jay's solution) you get around
>>>>> the isolation levels trap is to NOT do your quota determinations
>>>>> based on a SUM(column) but rather based on the rowcount of a well
>>>>> crafted UPDATE of a single table that stores total quota.
>>>>>
>>>> No, we would do our quota calculations by doing a SUM(used) against
>>>> the allocations table. There is no separate table that stores the
>>>> total quota (or quota usage records). That's the source of the
>>>> problem with the existing quota handling code in Nova. The generation
>>>> field value is used to provide the consistent view of the actual
>>>> resource usage records so that the INSERT operations for all claimed
>>>> resources can be done in a transactional manner and will be rolled
>>>> back if any other writer changes the amount of consumed resources on
>>>> a provider (which of course would affect the quota check
>>>> calculations).
>>>>
>>>>> You could also store a detail
>>>>> claim record for each claim in an independent table that is
>>>>> maintained in the same database transaction if you so desire, that
>>>>> is optional.
>>>>>
>>>> The allocations table is the "detail claim record" table that you
>>>> refer to above.
>>>>
>>>>> My view of how this would work (which I described earlier as
>>>>> building on Jay's solution) is that the claim flow would look like
>>>>> this:
>>>>>
>>>>>           select total_used, generation
>>>>>           from quota_claimed
>>>>>           where tenant = <tenant> and resource = 'memory'
>>>>>
>>>> There is no need to keep a total_used value for anything. That is
>>>> denormalized calculated data that merely adds a point of race
>>>> contention. The quota check is against the *detail* table
>>>> (allocations),
>>>> which stores the *actual resource usage records*.
>>>>
>>>>>           begin transaction
>>>>>
>>>>>           update quota_claimed
>>>>>                   set total_used = total_used + claim,
>>>>>                       generation = generation + 1
>>>>>                   where tenant = <tenant> and resource = 'memory'
>>>>>                   and generation = <generation read above>
>>>>>                   and total_used + claim < limit
>>>>>
>>>> This part of the transaction must always occur **after** the
>>>> insertion of the actual resource records, not before.
>>>>
>>>>>           if @@rowcount = 1
>>>>>                   -- optional claim_detail table
>>>>>                   insert into claim_detail values (<tenant>, 'memory',
>>>>>                   claim, ...)
>>>>>                   commit
>>>>>           else
>>>>>                   rollback
>>>>>
>>>> So, in pseudo-Python-SQLish code, my solution works like this:
>>>>
>>>> limits = get_limits_from_delimiter()
>>>> requested = get_requested_from_request_spec()
>>>>
>>>> while True:
>>>>
>>>>       used := SELECT
>>>>                 resource_class,
>>>>                 resource_provider,
>>>>                 generation,
>>>>                 SUM(used) as total_used
>>>>               FROM allocations
>>>>               JOIN resource_providers ON (...)
>>>>               WHERE consumer_uuid = $USER_UUID
>>>>               GROUP BY
>>>>                 resource_class,
>>>>                 resource_provider,
>>>>                 generation;
>>>>
>>>>       # Check that our requested resource amounts don't exceed quotas
>>>>       if not check_requested_within_limits(requested, used, limits):
>>>>           raise QuotaExceeded
>>>>
>>>>       # Claim all requested resources. Note that the generation
>>>>       # retrieved from the above query is our consistent view marker.
>>>>       # If the UPDATE below succeeds and returns != 0 rows affected,
>>>>       # that means there was no other writer that changed our resource
>>>>       # usage in between this thread's claiming of resources, and
>>>>       # therefore we prevent any oversubscription of resources.
>>>>       begin_transaction:
>>>>
>>>>           provider := SELECT id, generation, ... FROM resource_providers
>>>>                       JOIN (...)
>>>>                       WHERE (<resource_usage_filters>)
>>>>
>>>>           for resource in requested:
>>>>               INSERT INTO allocations (
>>>>                 resource_provider_id,
>>>>                 resource_class_id,
>>>>                 consumer_uuid,
>>>>                 used
>>>>               ) VALUES (
>>>>                 $provider.id,
>>>>                 $resource.id,
>>>>                 $USER_UUID,
>>>>                 $resource.amount
>>>>               );
>>>>
>>>>          rows_affected := UPDATE resource_providers
>>>>                           SET generation = generation + 1
>>>>                           WHERE id = $provider.id
>>>>                           AND generation = $used[$provider.id].generation;
>>>>
>>>>          if $rows_affected == 0:
>>>>              ROLLBACK;
>>>>
>>>> The only reason we would need a post-claim quota check is if some of
>>>> the requested resources are owned and tracked by an external-to-Nova
>>>> system.
>>>>
>>>> BTW, note to Ed Leafe... unless your distributed data store supports
>>>> transactional semantics, you can't use a distributed data store for
>>>> these types of solutions. Instead, you will need to write a whole
>>>> bunch of code that does post-auditing of claims and quotas and a
>>>> system that accepts that oversubscription and out-of-sync quota
>>>> limits and usages is a fact of life. Not to mention needing to
>>>> implement JOINs in Python.
>>>>
>>>>> But, it is my understanding that
>>>>>
>>>>>           (a) if you wish to do the SUM(column) approach that you
>>>>>           propose, you must have a reservation that is committed and
>>>>>           then you must re-read the SUM(column) to make sure you did
>>>>>           not over-subscribe; and
>>>>>
>>>> Erm, kind of? Oversubscription is not possible in the solution I
>>>> describe because the compare-and-update on the
>>>> resource_providers.generation field allows for a consistent view of
>>>> the resources used -- and if that view changes during the insertion
>>>> of resource usage records -- the transaction containing those
>>>> insertions is rolled back.
>>>>
>>>>>           (b) to get away from reservations you must stop using the
>>>>>           SUM(column) approach and instead use a single quota_claimed
>>>>>           table to determine the current quota claimed.
>>>>>
>>>> No. This has nothing to do with reservations.
>>>>
>>>>> At least that's what I understand of Jay's example from earlier in
>>>>> this thread.
>>>>>
>>>>> Let's definitely discuss this in Austin. While I don't love Jay's
>>>>> solution for other reasons to do with making the quota table a
>>>>> hotspot and things like that, it is a perfectly workable solution, I
>>>>> think.
>>>>>
>>>> There is no quota table in my solution.
>>>>
>>>> If you refer to the resource_providers table (the table that has the
>>>> generation field), then yes, it's a hot spot. But hot spots in the DB
>>>> aren't necessarily a bad thing if you design the underlying schema
>>>> properly.
>>>>
>>>> More in Austin.
>>>>
>>>> Best,
>>>> -jay
>>>>
>>>>
>>>>>> Tim
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> From: Amrith Kumar<amrith at tesora.com>
>>>>>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>>>>>> questions)"<openstack-dev at lists.openstack.org>
>>>>>>> Date: Friday 22 April 2016 at 06:51
>>>>>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>>> <openstack-dev at lists.openstack.org>
>>>>>>> Subject: Re: [openstack-dev] More on the topic of DELIMITER, the
>>>>>>> Quota Management Library proposal
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>           I’ve thought more about Jay’s approach to enforcing
>>>>>>>           quotas and I think we can build on and around it. With
>>>>>>>           that implementation as the basic quota primitive, I think
>>>>>>>           we can build a quota management API that isn’t dependent
>>>>>>>           on reservations. It does place some burdens on the
>>>>>>>           consuming projects that I had hoped to avoid and these
>>>>>>>           will cause heartburn for some (make sure that you always
>>>>>>>           request resources in a consistent order and free them in
>>>>>>>           a consistent order being the most obvious).
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>           If it doesn’t make it harder, I would like to see if we
>>>>>>>           can make the quota API take care of the ordering of
>>>>>>>           requests, i.e. if the quota API is an extension of Jay’s
>>>>>>>           example and accepts some data structure (dict?) with all
>>>>>>>           the claims that a project wants to make for some
>>>>>>>           operation, and then proceeds to make those claims for the
>>>>>>>           project in the consistent order, I think it would be of
>>>>>>>           some value.
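[The request-ordering idea described above could look something like this sketch. The names are hypothetical; `apply_claim`/`release_claim` stand in for whatever per-resource primitive the library ends up exposing. Claiming in sorted resource order means two callers with overlapping claim sets can never deadlock, and freeing in reverse order keeps releases consistent too.]

```python
def claim_all(claims, apply_claim, release_claim):
    """Apply a dict of {resource: amount} claims in a globally
    consistent (sorted) order; on any failure, release what was
    already claimed in reverse order and re-raise."""
    done = []
    try:
        for resource in sorted(claims):
            apply_claim(resource, claims[resource])
            done.append(resource)
    except Exception:
        for resource in reversed(done):
            release_claim(resource, claims[resource])
        raise
```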
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>           Beyond that, I’m on board with a-g below,
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>           -amrith
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>           From: Vilobh Meshram
>>>>>>>           [mailto:vilobhmeshram.openstack at gmail.com]
>>>>>>>           Sent: Friday, April 22, 2016 4:08 AM
>>>>>>>           To: OpenStack Development Mailing List (not for usage
>>>>>>>           questions)<openstack-dev at lists.openstack.org>
>>>>>>>           Subject: Re: [openstack-dev] More on the topic of
>>>>>>>           DELIMITER, the Quota Management Library proposal
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>           I strongly agree with Jay on the points related to "no
>>>>>>>           reservation", keeping the interface simple and the role
>>>>>>>           for Delimiter (impose limits on resource consumption and
>>>>>>>           enforce quotas).
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>           The point about keeping user quotas and tenant quotas in
>>>>>>>           Keystone sounds interesting and would need support from
>>>>>>>           the Keystone team. We have a Cross project session
>>>>>>>           planned [1] and will definitely bring that up in that
>>>>>>>           session.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>           The main thought with which Delimiter was formed was to
>>>>>>>           enforce resource quotas in a transaction-safe manner, and
>>>>>>>           to do it in a cross-project conducive manner, and that
>>>>>>>           still holds true. Delimiter's mission is to impose limits
>>>>>>>           on resource consumption and enforce quotas in a
>>>>>>>           transaction-safe manner. A few key aspects of Delimiter
>>>>>>>           are:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>           a. Delimiter will be a new Library and not a Service.
>>>>>>>           Details covered in spec.
>>>>>>>
>>>>>>>
>>>>>>>           b. Delimiter's role will be to impose limits on resource
>>>>>>>           consumption.
>>>>>>>
>>>>>>>
>>>>>>>           c. Delimiter will not be responsible for rate limiting.
>>>>>>>
>>>>>>>
>>>>>>>           d. Delimiter will not maintain data for the resources.
>>>>>>>           Respective projects will take care of keeping and
>>>>>>>           maintaining data for the resources and resource
>>>>>>>           consumption.
>>>>>>>
>>>>>>>
>>>>>>>           e. Delimiter will not have the concept of "reservations".
>>>>>>>           Delimiter will read or update the "actual" resource
>>>>>>>           tables and will not rely on the "cached" tables. At
>>>>>>>           present, the quota infrastructure in Nova, Cinder and
>>>>>>>           other projects have tables such as reservations,
>>>>>>>           quota_usage, etc which are used as "cached tables" to
>>>>>>>           track re
>>>>>>>
>>>>>>>
>>>>>>>           f. Delimiter will fetch the information for project
>>>>>>>           quota and user quota from a centralized place, say
>>>>>>>           Keystone, or if that doesn't materialize, will fetch
>>>>>>>           default quota values from the respective service. This
>>>>>>>           information will be cached since it gets updated rarely
>>>>>>>           but read many times.
>>>>>>>
>>>>>>>
>>>>>>>           g. Delimiter will take into consideration whether the
>>>>>>>           project is Flat or Nested and will make the calculations
>>>>>>>           of allocated and available resources. Nested means the
>>>>>>>           project namespace is hierarchical and Flat means the
>>>>>>>           project namespace is not hierarchical.
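[One way to picture the nested-vs-flat calculation from point g. (purely illustrative; Delimiter's actual model is not specified in this thread): in a nested namespace a parent's available quota must subtract usage by its whole subtree, while in a flat namespace each project is checked only against its own limit.]

```python
def available(project, limits, used, children):
    """Available quota for `project` in a nested namespace: its limit
    minus its own usage minus everything used by its descendants."""
    def subtree_used(p):
        return used.get(p, 0) + sum(subtree_used(c)
                                    for c in children.get(p, ()))
    return limits[project] - subtree_used(project)

# A hypothetical two-level hierarchy: prod has child projects a and b.
limits = {'prod': 100, 'a': 60, 'b': 60}
used = {'prod': 10, 'a': 30, 'b': 20}
children = {'prod': ['a', 'b']}
```

[In the flat case `children` is empty, and `available()` degenerates to `limits[p] - used[p]`.]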
>>>>>>>
>>>>>>>
>>>>>>>           -Vilobh
>>>>>>>
>>>>>>>
>>>>>>>           [1]
>>>>>>> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9492
>>>>
>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>           On Thu, Apr 21, 2016 at 11:08 PM, Joshua Harlow
>>>>>>>           <harlowja at fastmail.com>  wrote:
>>>>>>>
>>>>>>>
>>>>>>>                   Since people will be on a plane soon,
>>>>>>>
>>>>>>>                   I threw this together as an example of a quota
>>>>>>>                   engine (the zookeeper code does actually work,
>>>>>>>                   and yes it provides transactional semantics due
>>>>>>>                   to the nice abilities of zookeeper znode
>>>>>>>                   versions[1] and its inherent consistency model,
>>>>>>>                   yippe).
>>>>>>>
>>>>>>> https://gist.github.com/harlowja/e7175c2d76e020a82ae94467a1441d85
>>>>>>>
>>>>>>>                   Someone else can fill in the db quota engine with
>>>>>>>                   a similar/equivalent api if they so dare, ha. Or
>>>>>>>                   even feel free to say the gist/api above is crap,
>>>>>>>                   cause that's ok too, lol.
>>>>>>>
>>>>>>>                   [1]
>>>>>>> https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#Data+Access
>>>>
>>>>>
>>>>>>>
>>>>>>>                   Amrith Kumar wrote:
>>>>>>>
>>>>>>>                           Inline below ... thread is too long, will
>>>>>>>                           catch you in Austin.
>>>>>>>
>>>>>>>
>>>>>>>                                   -----Original Message-----
>>>>>>>                                   From: Jay Pipes
>>>>>>>                                   [mailto:jaypipes at gmail.com]
>>>>>>>                                   Sent: Thursday, April 21, 2016
>>>>>>>                                   8:08 PM
>>>>>>>                                   To:
>>>>>>>                                   openstack-dev at lists.openstack.org
>>>>>>>                                   Subject: Re: [openstack-dev] More
>>>>>>>                                   on the topic of DELIMITER, the
>>>>>>>                                   Quota Management Library proposal
>>>>>>>
>>>>>>>                                   Hmm, where do I start... I think
>>>>>>>                                   I will just cut to the two
>>>>>>>                                   primary disagreements I have. And
>>>>>>>                                   I will top-post because this
>>>>>>>                                   email is way too big.
>>>>>>>
>>>>>>>                                   1) On serializable isolation
>>>>>>>                                   level.
>>>>>>>
>>>>>>>                                   No, you don't need it at all to
>>>>>>>                                   prevent races in claiming. Just
>>>>>>>                                   use a compare-and-update with
>>>>>>>                                   retries strategy. Proof is here:
>>>>>>>
>>>>>>> https://github.com/jaypipes/placement-bench/blob/master/placement.py#L97-L142
>>>>>>>
>>>>>>>                                   Works great and prevents multiple
>>>>>>>                                   writers from oversubscribing any
>>>>>>>                                   resource without relying on any
>>>>>>>                                   particular isolation level at
>>>>>>>                                   all. The `generation` field in
>>>>>>>                                   the inventories table is what
>>>>>>>                                   allows multiple writers to ensure
>>>>>>>                                   a consistent view of the data
>>>>>>>                                   without needing to rely on heavy
>>>>>>>                                   lock-based semantics and/or
>>>>>>>                                   RDBMS-specific isolation levels.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>                           [amrith] this works for what it is doing,
>>>>>>>                           we can definitely do this. This will work
>>>>>>>                           at any isolation level, yes. I didn't
>>>>>>>                           want to go this route because it is going
>>>>>>>                           to still require an insert into another
>>>>>>>                           table recording what the actual 'thing'
>>>>>>>                           is that is claiming the resource, and
>>>>>>>                           that insert is going to be in a different
>>>>>>>                           transaction, and managing those two
>>>>>>>                           transactions was what I wanted to avoid.
>>>>>>>                           I was hoping to avoid having two tables
>>>>>>>                           tracking claims, one showing the
>>>>>>>                           currently claimed quota and another
>>>>>>>                           holding the things that claimed that
>>>>>>>                           quota. Have to think again whether that
>>>>>>>                           is possible.
>>>>>>>
>>>>>>>                                   2) On reservations.
>>>>>>>
>>>>>>>                                   The reason I don't believe
>>>>>>>                                   reservations are necessary to be
>>>>>>>                                   in a quota library is because
>>>>>>>                                   reservations add a concept of a
>>>>>>>                                   time to a claim of some resource.
>>>>>>>                                   You reserve some resource to be
>>>>>>>                                   claimed at some point in the
>>>>>>>                                   future and release those
>>>>>>>                                   resources at a point further in
>>>>>>>                                   time.
>>>>>>>
>>>>>>>                                   Quota checking doesn't look at
>>>>>>>                                   what the state of some system
>>>>>>>                                   will be at some point in the
>>>>>>>                                   future. It simply returns whether
>>>>>>>                                   the system *right now* can handle
>>>>>>>                                   a request *right now* to claim a
>>>>>>>                                   set of resources.
>>>>>>>
>>>>>>>                                   If you want reservation semantics
>>>>>>>                                   for some resource, that's totally
>>>>>>>                                   cool, but IMHO, a reservation
>>>>>>>                                   service should live outside of
>>>>>>>                                   the service that is actually
>>>>>>>                                   responsible for providing
>>>>>>>                                   resources to a consumer. Merging
>>>>>>>                                   right-now quota checks and
>>>>>>>                                   future-based reservations into
>>>>>>>                                   the same library just complicates
>>>>>>>                                   things unnecessarily IMHO.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>                           [amrith] extension of the above ...
>>>>>>>
>>>>>>>                                   3) On resizes.
>>>>>>>
>>>>>>>                                   Look, I recognize some users see
>>>>>>>                                   some value in resizing their
>>>>>>>                                   resources. That's fine. I
>>>>>>>                                   personally think expand
>>>>>>>                                   operations are fine, and that
>>>>>>>                                   shrink operations are really the
>>>>>>>                                   operations that should be
>>>>>>>                                   prohibited in the API. But,
>>>>>>>                                   whatever, I'm fine with resizing
>>>>>>>                                   of requested resource amounts. My
>>>>>>>                                   big point is if you don't have a
>>>>>>>                                   separate table that stores
>>>>>>>                                   quota_usages and instead only
>>>>>>>                                   have a single table that stores
>>>>>>>                                   the actual resource usage
>>>>>>>                                   records, you don't have to do
>>>>>>>                                   *any* quota check operations at
>>>>>>>                                   all upon deletion of a resource.
>>>>>>>                                   For modifying resource amounts
>>>>>>>                                   (i.e. a resize) you merely need
>>>>>>>                                   to change the calculation of
>>>>>>>                                   requested resource amounts to
>>>>>>>                                   account for the already-consumed
>>>>>>>                                   usage amount.
>>>>>>>
>>>>>>>                                   Bottom line for me: I really
>>>>>>>                                   won't support any proposal for a
>>>>>>>                                   complex library that takes the
>>>>>>>                                   resource claim process out of the
>>>>>>>                                   hands of the services that own
>>>>>>>                                   those resources. The simpler the
>>>>>>>                                   interface of this library, the
>>>>>>>                                   better.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>                           [amrith] my proposal would not, but this
>>>>>>>                           email thread has got too long. Yes,
>>>>>>>                           simpler interface, will catch you in
>>>>>>>                           Austin.
>>>>>>>
>>>>>>>                                   Best,
>>>>>>>                                   -jay
>>>>>>>
>>>>>>>                                   On 04/19/2016 09:59 PM, Amrith
>>>>>>>                                   Kumar wrote:
>>>>>>>
>>>>>>>                                                   -----Original
>>>>>>>                                                   Message-----
>>>>>>>                                                   From: Jay Pipes
>>>>>>>                                                   [mailto:jaypipes at gmail.com]
>>>>>>>                                                   Sent: Monday,
>>>>>>>                                                   April 18, 2016 2:54 PM
>>>>>>>                                                   To:
>>>>>>>                                                   openstack-dev at lists.openstack.org
>>>>>>>                                                   Subject: Re:
>>>>>>>                                                   [openstack-dev] More
>>>>>>>                                                   on the topic of
>>>>>>>                                                   DELIMITER, the
>>>>>>>                                                   Quota Management
>>>>>>>                                                   Library proposal
>>>>>>>
>>>>>>>                                                   On 04/16/2016 05:51
>>>>>>>                                                   PM, Amrith Kumar
>>>>>>>                                                   wrote:
>>>>>>>
>>>>>>>                                                           If we
>>>>>>>                                                           therefore
>>>>>>>                                                           assume
>>>>>>>
>>>>>> that
>>>>
>>>>>                                                           this will
>>>>>>>
>>>>>> be
>>>>
>>>>>                                                           a Quota
>>>>>>>
>>>>>>> Management
>>>>
>>>>>                                                           Library,
>>>>>>>                                                           it is
>>>>>>>
>>>>>> safe
>>>>
>>>>>                                                           to assume
>>>>>>>                                                           that
>>>>>>>
>>>>>> quotas
>>>>
>>>>>                                                           are going
>>>>>>>
>>>>>> to
>>>>
>>>>>                                                           be
>>>>>>>
>>>>>> managed
>>>>
>>>>>                                                           on a
>>>>>>>
>>>>>>> per-project
>>>>
>>>>>                                                           basis,
>>>>>>>
>>>>>> where
>>>> participating projects will use this library.
>>>>
>>>>>                                                           I believe
>>>>>>>                                                           that it
>>>>>>>                                                           stands to
>>>>>>>                                                           reason
>>>>>>>
>>>>>> that
>>>>
>>>>>                                                           any data
>>>>>>>
>>>>>>> persistence
>>>>
>>>>>                                                           will
>>>>>>>                                                           have to
>>>>>>>
>>>>>> be
>>>>
>>>>>                                                           in a
>>>>>>>                                                           location
>>>>>>>                                                           decided
>>>>>>>
>>>>>> by
>>>>
>>>>>                                                           the
>>>>>>>
>>>>>>> individual
>>>>
>>>>>                                                           project.
>>>>>>>
>>>>>>
>>>>>> Depends on what you mean by "any data persistence". If you are
>>>>>> referring to the storage of quota values (per user, per tenant,
>>>>>> global, etc.), I think that should be done by the Keystone
>>>>>> service. This data is essentially an attribute of the user or
>>>>>> the tenant or the service endpoint itself (i.e. global
>>>>>> defaults). This data also rarely changes and logically belongs
>>>>>> to the service that manages users, tenants, and service
>>>>>> endpoints: Keystone.
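[Editorial note: the precedence Jay describes above (a per-user quota
overriding a per-tenant quota, which overrides a global default) can be
sketched in a few lines. This is a hypothetical illustration only; the
dictionaries and the `effective_limit` helper are stand-ins, not real
Keystone data structures or APIs.]

```python
# Hypothetical sketch of limit resolution: user override wins over
# tenant override, which wins over the service's global default.
GLOBAL_DEFAULTS = {"instances": 10, "cores": 20}

TENANT_OVERRIDES = {
    "tenant-a": {"instances": 25},
}

USER_OVERRIDES = {
    ("tenant-a", "alice"): {"cores": 40},
}


def effective_limit(tenant_id, user_id, resource):
    """Resolve a quota limit: user override > tenant override > default."""
    user_quota = USER_OVERRIDES.get((tenant_id, user_id), {})
    if resource in user_quota:
        return user_quota[resource]
    tenant_quota = TENANT_OVERRIDES.get(tenant_id, {})
    if resource in tenant_quota:
        return tenant_quota[resource]
    return GLOBAL_DEFAULTS[resource]
```

Under this scheme a consuming service asks Keystone for the effective
limit once and applies it locally; the limit data itself never needs to
live in the consuming service's database.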
>>>>>>
>>>>>> If you are referring to the storage of resource usage records,
>>>>>> yes, each service project should own that data (and frankly, I
>>>>>> don't see a need to persist any quota usage data at all, as I
>>>>>> mentioned in a previous reply to Attila).
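[Editorial note: the "no persisted usage data" idea above pairs with the
generation discussion earlier in the thread (znode versions in Joshua's
gist): usage is derived by counting the service's own records, and a
generation counter lets a writer detect concurrent changes between its
read and its claim. The sketch below is a toy in-memory stand-in, not
the proposed Delimiter API; all names are hypothetical.]

```python
import threading


class ResourceStore:
    """Toy stand-in for a service's own records of consumed resources.

    Usage is derived by counting records, never stored as a running
    total. The generation counter enables a compare-and-swap style
    claim: if usage changed since the caller's read, the claim is
    rejected and the caller re-reads and retries.
    """

    def __init__(self, limit):
        self.limit = limit
        self.records = []       # one record per consumed unit
        self.generation = 0
        self._lock = threading.Lock()

    def read(self):
        """Return (current usage, generation) derived from the records."""
        with self._lock:
            return len(self.records), self.generation

    def claim(self, amount, expected_generation):
        """Consume `amount` units if the generation still matches and
        the limit would not be exceeded. Returns True on success;
        False means either a stale generation or an over-quota request,
        and the caller should re-read before retrying."""
        with self._lock:
            if self.generation != expected_generation:
                return False
            if len(self.records) + amount > self.limit:
                return False
            self.records.extend(["record"] * amount)
            self.generation += 1
            return True
```

A caller loops: `used, gen = store.read()`, then `store.claim(n, gen)`,
retrying on a stale generation, exactly the pattern a znode version (or
an SQL `UPDATE ... WHERE generation = ?`) would give at the db layer.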
>>>>>>
>>>>>> [amrith] You make a distinction that I had made implicitly, and
>>>>>> it is important to highlight it. Thanks for pointing it out.
>>>>>> Yes, I meant
>>>>>>