[openstack-dev] Quota management and enforcement across projects
Valeriy Ponomaryov
vponomaryov at mirantis.com
Wed Oct 15 08:55:38 UTC 2014
The Manila project does use the "policy" common code from the incubator.
Our "small" wrapper for it is here:
https://github.com/openstack/manila/blob/8203c51081680a7a9dba30ae02d7c43d6e18a124/manila/policy.py
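
In essence it only creates a shared Enforcer once and exposes an enforce()
helper on top of it. A simplified sketch of the idea (names are approximate,
the real code is at the link above):

# Simplified sketch of a wrapper around the incubator "policy" module;
# see the linked manila/policy.py for the real code.
from manila.openstack.common import policy

_ENFORCER = None


def init():
    # Lazily create the shared Enforcer, which loads rules from policy.json.
    global _ENFORCER
    if _ENFORCER is None:
        _ENFORCER = policy.Enforcer()


def enforce(context, action, target):
    # Check an action such as "share:create" against the loaded rules;
    # do_raise=True makes the Enforcer raise PolicyNotAuthorized on failure.
    init()
    return _ENFORCER.enforce(action, target, context.to_dict(), do_raise=True)
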
On Wed, Oct 15, 2014 at 2:41 AM, Salvatore Orlando <sorlando at nicira.com>
wrote:
> Doug,
>
> I totally agree with your findings on the policy module.
> Neutron already has some "customizations" there, and a few contributors
> are working on syncing it back with oslo-incubator during the Kilo
> release cycle.
>
> However, my query was about the quota module.
> From what I gather, it seems not many projects use it:
>
> $ find . -name openstack-common.conf | xargs grep quota
> $
>
> Salvatore
>
> On 15 October 2014 00:34, Doug Hellmann <doug at doughellmann.com> wrote:
>
>>
>> On Oct 14, 2014, at 12:31 PM, Salvatore Orlando <sorlando at nicira.com>
>> wrote:
>>
>> Hi Doug,
>>
>> do you know if the existing quota oslo-incubator module already has some
>> active consumers?
>> In the meantime, I've pushed a spec to neutron-specs for improving quota
>> management there [1].
>>
>>
>> It looks like a lot of projects are syncing the module:
>>
>> $ grep policy */openstack-common.conf
>>
>> barbican/openstack-common.conf:modules=gettextutils,jsonutils,log,local,timeutils,importutils,policy
>> ceilometer/openstack-common.conf:module=policy
>> cinder/openstack-common.conf:module=policy
>> designate/openstack-common.conf:module=policy
>> gantt/openstack-common.conf:module=policy
>> glance/openstack-common.conf:module=policy
>> heat/openstack-common.conf:module=policy
>> horizon/openstack-common.conf:module=policy
>> ironic/openstack-common.conf:module=policy
>> keystone/openstack-common.conf:module=policy
>> manila/openstack-common.conf:module=policy
>> neutron/openstack-common.conf:module=policy
>> nova/openstack-common.conf:module=policy
>> trove/openstack-common.conf:module=policy
>> tuskar/openstack-common.conf:module=policy
>>
>> I’m not sure how many are actively using it, but I wouldn’t expect them
>> to copy it in if they weren’t using it at all.
>>
>>
>> Now, I can either work on the oslo-incubator module and leverage it in
>> Neutron, or develop the quota module in Neutron and move it to
>> oslo-incubator once we validate it with Neutron. The latter approach
>> seems easier from a workflow perspective, as it avoids the intermediate
>> step of moving code from oslo-incubator to Neutron. On the other hand,
>> it will delay adoption in oslo-incubator.
>>
>>
>> The policy module is up for graduation this cycle. It may end up in its
>> own library, to allow us to build a review team for the code more easily
>> than if we put it in with some of the other semi-related modules like the
>> server code. We’re still working that out [1], and if you expect to make
>> a lot of incompatible changes, we should delay graduation to make that
>> simpler.
>>
>> Either way, since we have so many consumers, I think it would be easier
>> to have the work happen in Oslo somewhere so we can ensure those changes
>> are useful to and usable by all of the existing consumers.
>>
>> Doug
>>
>> [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
>>
>>
>> What's your opinion?
>>
>> Regards,
>> Salvatore
>>
>> [1] https://review.openstack.org/#/c/128318/
>>
>> On 8 October 2014 18:52, Doug Hellmann <doug at doughellmann.com> wrote:
>>
>>>
>>> On Oct 8, 2014, at 7:03 AM, Davanum Srinivas <davanum at gmail.com> wrote:
>>>
>>> > Salvatore, Joe,
>>> >
>>> > We do have this at the moment:
>>> >
>>> >
>>> > https://github.com/openstack/oslo-incubator/blob/master/openstack/common/quota.py
>>> >
>>> > — dims
>>>
>>> If someone wants to drive creating a useful library during kilo, please
>>> consider adding the topic to the etherpad we’re using to plan summit
>>> sessions and then come participate in the Oslo meeting this Friday 16:00
>>> UTC.
>>>
>>> https://etherpad.openstack.org/p/kilo-oslo-summit-topics
>>>
>>> Doug
>>>
>>> >
>>> > On Wed, Oct 8, 2014 at 2:29 AM, Salvatore Orlando <sorlando at nicira.com>
>>> wrote:
>>> >>
>>> >> On 8 October 2014 04:13, Joe Gordon <joe.gordon0 at gmail.com> wrote:
>>> >>>
>>> >>>
>>> >>> On Fri, Oct 3, 2014 at 10:47 AM, Morgan Fainberg
>>> >>> <morgan.fainberg at gmail.com> wrote:
>>> >>>>
>>> >>>> Keeping the enforcement local (same way policy works today) helps
>>> >>>> limit the fragility, big +1 there.
>>> >>>>
>>> >>>> I also agree with Vish, we need a uniform way to talk about quota
>>> >>>> enforcement similar to how we have a uniform policy language /
>>> >>>> enforcement model (yes I know it's not perfect, but it's far
>>> >>>> closer to uniform than quota management is).
>>> >>>
>>> >>>
>>> >>> It sounds like maybe we should have an oslo library for quotas?
>>> >>> Somewhere where we can share the code, but keep the operations local
>>> >>> to each service.
>>> >>
>>> >>
>>> >> This is what I had in mind as well. A simple library for quota
>>> >> enforcement that can be used regardless of where and how you enforce,
>>> >> which might depend on the application business logic, the WSGI
>>> >> framework in use, or other factors.
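>>> >>
>>> >> To make it concrete, the interface I have in mind is roughly the
>>> >> following - purely a sketch, every name in it is made up:
>>> >>
>>> >> # Hypothetical quota-enforcement library; all names are illustrative.
>>> >> class OverQuota(Exception):
>>> >>     pass
>>> >>
>>> >>
>>> >> class QuotaEngine(object):
>>> >>     def __init__(self, limit_driver, usage_driver):
>>> >>         # limit_driver answers "what is the limit for (tenant, resource)?"
>>> >>         # usage_driver answers "how much of it is the tenant using now?"
>>> >>         self._limits = limit_driver
>>> >>         self._usage = usage_driver
>>> >>
>>> >>     def enforce(self, tenant_id, deltas):
>>> >>         # deltas is what the request is about to consume,
>>> >>         # e.g. {'network': 1, 'port': 2}
>>> >>         for resource, requested in deltas.items():
>>> >>             limit = self._limits.get(tenant_id, resource)
>>> >>             if limit < 0:  # convention: negative limit means unlimited
>>> >>                 continue
>>> >>             used = self._usage.count(tenant_id, resource)
>>> >>             if used + requested > limit:
>>> >>                 raise OverQuota("%s quota exceeded for tenant %s"
>>> >>                                 % (resource, tenant_id))
>>> >>
>>> >> How the two drivers are implemented (a local DB query, a call to
>>> >> keystone, etc.) would stay entirely up to each project.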
>>> >>
>>> >>>
>>> >>>>
>>> >>>>
>>> >>>> If there is still interest in placing quotas in keystone, let's
>>> >>>> talk about how that will work and what will be needed from
>>> >>>> Keystone. The previous attempt didn't get much traction and stalled
>>> >>>> out early in implementation. If we want to revisit this, let's make
>>> >>>> sure we have the resources needed and spec(s) in progress / info on
>>> >>>> etherpads (similar to how the multitenancy stuff was handled at the
>>> >>>> last summit) as early as possible.
>>> >>>
>>> >>>
>>> >>> Why not centralize quota management via python-openstackclient?
>>> >>> What is the benefit of getting keystone involved?
>>> >>
>>> >>
>>> >> Providing this through the openstack client, in my opinion, has the
>>> >> disadvantage that users who either use the REST API directly or write
>>> >> their own clients won't benefit from it. I don't think it's a
>>> >> reasonable assumption that everybody will use python-openstackclient,
>>> >> is it?
>>> >>
>>> >> That said, storing quotas in keystone poses a further challenge to
>>> >> the scalability of the system, which we shall perhaps address with
>>> >> appropriate caching strategies and by leveraging keystone
>>> >> notifications. Until we get that, I think the openstack client will
>>> >> be the best way of getting a unified quota management experience.
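>>> >>
>>> >> For instance, each service could keep a small cache of the limits it
>>> >> reads from the central store and drop entries when a notification
>>> >> reports that a limit changed. Something along these lines -
>>> >> hypothetical code, keystone exposes no get_limit() call today, it is
>>> >> only meant to illustrate the idea:
>>> >>
>>> >> import time
>>> >>
>>> >>
>>> >> class LimitCache(object):
>>> >>     def __init__(self, limit_client, ttl=60):
>>> >>         # limit_client is whatever client ends up exposing centrally
>>> >>         # stored limits (e.g. a future keystone API) - hypothetical.
>>> >>         self._client = limit_client
>>> >>         self._ttl = ttl
>>> >>         self._cache = {}  # (tenant_id, resource) -> (limit, fetched_at)
>>> >>
>>> >>     def get(self, tenant_id, resource):
>>> >>         key = (tenant_id, resource)
>>> >>         entry = self._cache.get(key)
>>> >>         if entry and time.time() - entry[1] < self._ttl:
>>> >>             return entry[0]
>>> >>         limit = self._client.get_limit(tenant_id, resource)
>>> >>         self._cache[key] = (limit, time.time())
>>> >>         return limit
>>> >>
>>> >>     def invalidate(self, tenant_id, resource=None):
>>> >>         # Hook this up to a listener on keystone notifications so that
>>> >>         # limit updates take effect without waiting for the TTL.
>>> >>         for key in list(self._cache):
>>> >>             if key[0] == tenant_id and resource in (None, key[1]):
>>> >>                 del self._cache[key]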
>>> >>
>>> >> Salvatore
>>> >>
>>> >>
>>> >>>>
>>> >>>> Cheers,
>>> >>>> Morgan
>>> >>>>
>>> >>>> Sent via mobile
>>> >>>>
>>> >>>>
>>> >>>> On Friday, October 3, 2014, Salvatore Orlando <sorlando at nicira.com>
>>> >>>> wrote:
>>> >>>>>
>>> >>>>> Thanks Vish,
>>> >>>>>
>>> >>>>> this seems a very reasonable first step as well - and since most
>>> >>>>> projects would be enforcing quotas in the same way, the shared
>>> >>>>> library would be the logical next step.
>>> >>>>> After all, this is much the same thing we do with authZ.
>>> >>>>>
>>> >>>>> Duncan is expressing valid concerns which, in my opinion, can be
>>> >>>>> addressed with an appropriate design - and a decent implementation.
>>> >>>>>
>>> >>>>> Salvatore
>>> >>>>>
>>> >>>>> On 3 October 2014 18:25, Vishvananda Ishaya
>>> >>>>> <vishvananda at gmail.com> wrote:
>>> >>>>>>
>>> >>>>>> The proposal in the past was to keep quota enforcement local,
>>> >>>>>> but to put the resource limits into keystone. This seems like an
>>> >>>>>> obvious first step to me. Then a shared library for enforcing
>>> >>>>>> quotas with decent performance should be next. The quota calls in
>>> >>>>>> nova are extremely inefficient right now and it will only get
>>> >>>>>> worse when we try to add hierarchical projects and quotas.
>>> >>>>>>
>>> >>>>>> Vish
>>> >>>>>>
>>> >>>>>> On Oct 3, 2014, at 7:53 AM, Duncan Thomas
>>> >>>>>> <duncan.thomas at gmail.com> wrote:
>>> >>>>>>
>>> >>>>>>> Taking quota out of the service / adding remote calls for quota
>>> >>>>>>> management is going to make things fragile - you've somehow got
>>> >>>>>>> to deal with the cases where your quota manager is slow, goes
>>> >>>>>>> away, hiccups, drops connections etc. You'll also need some way
>>> >>>>>>> of reconciling actual usage against quota usage periodically, to
>>> >>>>>>> detect problems.
>>> >>>>>>>
>>> >>>>>>> On 3 October 2014 15:03, Salvatore Orlando <sorlando at nicira.com>
>>> >>>>>>> wrote:
>>> >>>>>>>> Hi,
>>> >>>>>>>>
>>> >>>>>>>> Quota management is currently one of those things where every
>>> >>>>>>>> openstack project does its own thing. While quotas are
>>> >>>>>>>> obviously managed in a similar way for each project, there are
>>> >>>>>>>> subtle differences which ultimately result in a lack of
>>> >>>>>>>> usability.
>>> >>>>>>>>
>>> >>>>>>>> I recall that in the past there have been several calls for
>>> >>>>>>>> unifying quota management. The blueprint [1], for instance,
>>> >>>>>>>> hints at the possibility of storing quotas in keystone.
>>> >>>>>>>> On the other hand, the blazar project [2, 3] seems to aim at
>>> >>>>>>>> solving this problem for good by enabling resource reservation,
>>> >>>>>>>> therefore potentially freeing openstack projects from managing
>>> >>>>>>>> and enforcing quotas.
>>> >>>>>>>>
>>> >>>>>>>> While Blazar is definitely a good thing to have, I'm not
>>> >>>>>>>> entirely sure we want to make it a "required" component for
>>> >>>>>>>> every deployment. Perhaps individual projects should still be
>>> >>>>>>>> able to enforce quotas. On the other hand, at least on paper,
>>> >>>>>>>> the idea of making Keystone "THE" endpoint for managing quotas,
>>> >>>>>>>> and then letting the various projects enforce them, sounds
>>> >>>>>>>> promising - is there any reason why this blueprint has stalled
>>> >>>>>>>> to the point that it seems forgotten now?
>>> >>>>>>>>
>>> >>>>>>>> I'm coming to the mailing list with these random questions
>>> >>>>>>>> about quota management for two reasons:
>>> >>>>>>>> 1) despite developing and using openstack on a daily basis, I'm
>>> >>>>>>>> still confused by quotas;
>>> >>>>>>>> 2) I've found a race condition in neutron quotas and the fix is
>>> >>>>>>>> not trivial.
>>> >>>>>>>> So, rather than start coding right away, it probably makes more
>>> >>>>>>>> sense to ask the community whether there is already a known
>>> >>>>>>>> better approach to quota management - and, obviously,
>>> >>>>>>>> enforcement.
>>> >>>>>>>>
>>> >>>>>>>> Thanks in advance,
>>> >>>>>>>> Salvatore
>>> >>>>>>>>
>>> >>>>>>>> [1]
>>> https://blueprints.launchpad.net/keystone/+spec/service-metadata
>>> >>>>>>>> [2] https://wiki.openstack.org/wiki/Blazar
>>> >>>>>>>> [3]
>>> https://review.openstack.org/#/q/project:stackforge/blazar,n,z
>>> >>>>>>>>
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>> --
>>> >>>>>>> Duncan Thomas
>>> >>>>>>>
>>> >>>>>
>>> >>>>
>>> >
>>> >
>>> >
>>> > --
>>> > Davanum Srinivas :: https://twitter.com/dims
>>> >
>>
>>
>>
>
--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomaryov at mirantis.com