[openstack-dev] Quota management and enforcement across projects

Salvatore Orlando sorlando at nicira.com
Thu Nov 20 00:32:40 UTC 2014


Apparently, like everything Openstack-y, we have gathered a good crowd of
people with different opinions, more or less different, more or less strong.
My only strong opinion is that any ocean-boiling attempt should be
carefully avoided, and that any proposed approach should add as little as
possible in terms of delay and complexity to the current process.
Anyway, I am glad nobody has said "QaaS" so far. If you're thinking of
doing so, please don't. Please.

The reason for the now almost-abandoned library proposal was to have a
simple solution allowing every project to enforce quotas in the same way.
As Kevin said, managing quotas for resources and their sequence attributes
(like the sec group rules, for instance) can become messy rather quickly.
As of today this problem is avoided because quota management logic is baked
into every project's business logic. The library proposal also shamefully
avoided this problem - by focusing on enforcement only.

For enforcement, as Doug said, this is about answering the question "can I
consume X units of a given resource?". In a distributed, non-locking world,
the answer is not really obvious. A Boson-like proposal would solve this
problem by providing a centralised enforcement point; in that case the only
viable approach is one in which the centralised endpoint does both
enforcement and management.
While I too think that we need a centralised quota management endpoint, be
it folded into Keystone or not, I still need to be convinced that this is
the right architectural decision for enforcing quotas.

First, "can I consume X units of a given resource?" implies the external
service must be aware of current resource usage. Arguably it's the consumer
service (e.g. nova, cinder) that owns this information, not the quota
service. The quota service can, however, be made aware of it in several
ways:
- Notifications. In this case the quota service also becomes a consumer of
a telemetry service like ceilometer or stacktach; I'm not sure we want to
add that dependency.
- The API call itself might ask to reserve a given amount of a resource and
communicate current usage at the same time. This sounds better, but why
would it be better than the dual approach, in which the consumer service
makes the reservation using quota values fetched from an external service?
Between quotas and usage, the latter seems to be the more dynamic of the
two, while quotas can easily be cached.
- Usage info might be updated when reservations are committed. This looks
smart, but requires more thought about failure scenarios where the failure
occurs after resources are reserved but before they're committed, and there
is no way to know which ones were actually committed and which were not.
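The second option above can be sketched roughly as follows. This is a
hypothetical illustration, not an existing OpenStack API: the consumer
service reports its own usage alongside the reservation request, so the
quota service only has to own the limits.

```python
# Hypothetical sketch: the consumer service passes its current usage
# with each reservation request (option two above). All class and
# method names are illustrative.

class QuotaService:
    """Centralised quota store; trusts usage reported by the consumer."""

    def __init__(self, quotas):
        # Limits are owned here, e.g. {("tenant-a", "instances"): 5}
        self.quotas = quotas

    def reserve(self, tenant, resource, requested, current_usage):
        # Enforcement needs both the limit (owned by this service) and
        # the usage (owned by the consumer, passed in with the call).
        limit = self.quotas.get((tenant, resource), 0)
        return current_usage + requested <= limit


svc = QuotaService({("tenant-a", "instances"): 5})
print(svc.reserve("tenant-a", "instances", 2, current_usage=4))  # False: over quota
print(svc.reserve("tenant-a", "instances", 1, current_usage=4))  # True
```

Note that nothing here is transactional: two concurrent calls reporting the
same usage could both succeed, which is exactly the distributed,
non-locking problem mentioned above.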

Second, calling an external service is not cheap, especially since those
calls might be made several times to complete a single API request. Think
about POST /servers with all its ramifications into glance, neutron, and
cinder. In the simplest case we'd need two calls - one to reserve a
resource (or a set of resources), and one to either confirm or cancel the
reservation. That alone adds two round trips to each API call. Could this
scale well? To some extent it reminds me of the mayhem that was
nova/neutron communication before caching for keystone tokens was fixed.
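The two-round-trip pattern described above can be sketched like this.
QuotaClient and its methods are hypothetical, purely to show the call
shape, not a real client library:

```python
# Sketch of the reserve/commit-or-cancel pattern: every API request
# costs one round trip to reserve and one to confirm or cancel.
import uuid


class QuotaClient:
    """Stand-in for a client talking to a remote quota service."""

    def __init__(self):
        self._pending = {}  # reservation_id -> (tenant, resource, amount)

    def reserve(self, tenant, resource, amount):   # round trip 1
        rid = str(uuid.uuid4())
        self._pending[rid] = (tenant, resource, amount)
        return rid

    def commit(self, reservation_id):              # round trip 2 (success)
        return self._pending.pop(reservation_id, None) is not None

    def cancel(self, reservation_id):              # round trip 2 (failure)
        self._pending.pop(reservation_id, None)


def create_server(quota, tenant):
    rid = quota.reserve(tenant, "instances", 1)
    try:
        # ... actual boot, plus nested calls into glance/neutron/cinder,
        # each of which may add reservation round trips of its own ...
        quota.commit(rid)
        return True
    except Exception:
        quota.cancel(rid)
        raise
```

Multiply this by every resource touched during POST /servers and the
added latency becomes the scaling concern raised above.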

Somebody mentioned an analogy with authZ. For some reason, perhaps
different from the reasons of whoever originally brought up the analogy, I
also believe this is a decent model to follow.
Quotas for each resource, together with the resource structure, might be
stored by a centralised service. This is pretty much in line with Boson's
goals, if I understand them correctly. It would expose tenant-facing REST
APIs for quota management and would finally give us a standardized way of
handling quotas. Regardless of whether it stands as its own service or is
folded into Keystone, operators should be happier. And there's nothing more
satisfying than the face of a happy operator... provided we also do not
break backward compatibility!

Enforcement is something that in my opinion should happen locally, just
like we do authZ locally using information provided by Keystone upon
authentication. The issue Doug correctly pointed out, however, is that
quota enforcement is too complex to be implemented in a library like, for
instance, oslo.policy.
So, assuming there might be consensus on doing quota enforcement locally in
the consumer application, what should be the 'thing' that enforces quotas
if it can't be a library? Could it be a quota agent which communicates with
the quota server and caches quota info? If so, over which transport should
APIs like those for managing reservations be invoked - IPC, AMQP, REST? I
need to go back to the drawing board to see whether this can possibly work,
but in the meanwhile your feedback is more than welcome.
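One possible shape of such an agent, analogous to local authZ with
oslo.policy, would be a small component that caches limits fetched from
the central service and enforces against locally-owned usage. Everything
below is a sketch; the agent, its transport, and cache TTLs are exactly
the open questions above:

```python
# Sketch of local enforcement with cached quotas: limits come from the
# central service (via a pluggable fetch callable), usage stays local.
import time


class QuotaAgent:
    def __init__(self, fetch_limit, ttl=60):
        self._fetch_limit = fetch_limit  # callable hitting the central service
        self._ttl = ttl
        self._cache = {}  # (tenant, resource) -> (limit, fetched_at)

    def _limit(self, tenant, resource):
        key = (tenant, resource)
        cached = self._cache.get(key)
        if cached and time.monotonic() - cached[1] < self._ttl:
            return cached[0]  # quotas change rarely, so caching is cheap
        limit = self._fetch_limit(tenant, resource)
        self._cache[key] = (limit, time.monotonic())
        return limit

    def can_consume(self, tenant, resource, requested, local_usage):
        # Usage is owned by the consumer service; only the limit is remote.
        return local_usage + requested <= self._limit(tenant, resource)


agent = QuotaAgent(lambda tenant, resource: 10)
print(agent.can_consume("tenant-a", "ports", 3, local_usage=8))  # False
```

The transport behind fetch_limit (IPC, AMQP, REST) is deliberately left
abstract, since that is one of the questions still on the drawing board.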

Salvatore

On 20 November 2014 00:39, Kevin L. Mitchell <kevin.mitchell at rackspace.com>
wrote:

> On Thu, 2014-11-20 at 10:16 +1100, Blair Bethwaite wrote:
> > For actions initiated directly through core OpenStack service APIs
> > (Nova, Cinder, Neutron, etc - anything using Keystone policy),
> > shouldn't quota-enforcement be handled by Keystone? To me this is just
> > a subset of authz, and OpenStack already has a well established
> > service for such decisions.
>
> If you look a little earlier in the thread, you will find a post from me
> where I point out just how complicated quota management actually is.  I
> suggest that it should be developed as a proof-of-concept as a separate
> service; from there, we can see whether it makes sense to roll it into
> Keystone or maintain it as a separate thing.
> --
> Kevin L. Mitchell <kevin.mitchell at rackspace.com>
> Rackspace
>
>