[openstack-dev] [openstack][magnum][heat] Quota for Magnum Resources
Vilobh Meshram
vilobhmeshram.openstack at gmail.com
Tue Dec 22 18:56:19 UTC 2015
Clint,
To be more specific, the #2 option I was talking about is:

"A request is within the CaaS layer quota and is accepted by Magnum. Magnum
calls Heat to create a stack, which will fail if the stack exceeds the IaaS
layer quota. In this case, Magnum catches and re-throws the exception to
users."
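
A rough sketch of that catch-and-rethrow, purely for illustration - the
exception and helper names below are hypothetical, not the actual Magnum or
heatclient classes:

    # Illustrative only; real Magnum/Heat exception types will differ.
    class BayCreationFailed(Exception):
        """Surfaced to the user with the underlying IaaS-layer cause."""

    def create_bay_stack(heat_client, stack_name, template):
        try:
            return heat_client.stacks.create(stack_name=stack_name,
                                             template=template)
        except Exception as e:  # e.g. a quota/limit error raised via Heat
            # Re-throw with enough context that the user can tell the
            # failure came from the IaaS layer, not from Magnum's own quota.
            raise BayCreationFailed(
                "Heat rejected the bay stack (IaaS-layer quota?): %s" % e)
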
-Vilobh
On Tue, Dec 22, 2015 at 6:58 AM, Clint Byrum <clint at fewbar.com> wrote:
> Excerpts from Jay Lau's message of 2015-12-21 22:29:14 -0800:
> > For case 2), can we discuss this with the Heat team? This seems to be an
> > issue related to Heat quotas; why doesn't Heat add quota support?
> >
>
> It's been brought up before, and I made the same arguments against it
> there as I did here. I argued instead to focus on convergence, making Heat
> more scalable and more able to take on thousands of resources and stacks
> at once, which the team has been doing a great job at since I changed
> my focus. I'd be interested to hear if operators have been lamenting
> the lack of quotas in heat.
>
> BTW, in the quoted emails, there are multiple 2)'s. You should probably
> reply in-line so it is clear what you mean.
>
> > On Tue, Dec 22, 2015 at 7:42 AM, Vilobh Meshram <
> > vilobhmeshram.openstack at gmail.com> wrote:
> >
> > > As mentioned by Hongbin, we might have these 3 cases. Hongbin and I did
> > > discuss these in the Magnum IRC.
> > >
> > > The interesting case is #2. If enough resources are not available at
> > > the IaaS layer while Magnum is in the process of creating a Bay, Magnum
> > > needs to be more descriptive about the failure so that the operator or
> > > user can be aware of what exactly happened, i.e. did the request fail
> > > because of resource constraints at the PaaS layer, at the IaaS layer,
> > > etc.
> > >
> > > Having the quota layer in Magnum will abstract out the underlying layer
> > > and impose quotas on objects that Magnum understands. But again, it
> > > would be nice to know what operators think about it and whether it is
> > > something that they would find useful.
> > >
> > > -Vilobh
> > >
> > > On Mon, Dec 21, 2015 at 2:58 PM, Hongbin Lu <hongbin.lu at huawei.com> wrote:
> > >
> > >> If we decide to support quotas at the CaaS layer (i.e. limit the # of
> > >> bays), the implementation doesn’t have to be redundant with the IaaS
> > >> layer quotas (from Nova, Cinder, etc.). The implementation could be a
> > >> layer on top of IaaS, in which requests need to pass two layers of
> > >> quotas to succeed. There would be three cases:
> > >>
> > >> 1. A request exceeds the CaaS layer quota. Then, Magnum rejects the
> > >> request.
> > >>
> > >> 2. A request is within the CaaS layer quota and is accepted by Magnum.
> > >> Magnum calls Heat to create a stack, which will fail if the stack
> > >> exceeds the IaaS layer quota. In this case, Magnum catches and
> > >> re-throws the exception to users.
> > >>
> > >> 3. A request is within both the CaaS and IaaS layer quotas, and the
> > >> request succeeds.
> > >>
> > >>
> > >>
> > >> I think the debate here is whether it would be useful to implement an
> > >> extra layer of quota management in Magnum. My guess is “yes”, if
> > >> operators want to hide the underlying infrastructure and expose a pure
> > >> CaaS solution to their end-users. If operators don’t use Magnum in
> > >> this way, then I will vote for “no”.
> > >>
> > >>
> > >>
> > >> Also, we can look at other platform-level services (like Trove and
> > >> Sahara) to see if they implemented an extra layer of quota management,
> > >> and use that as a decision point.
> > >>
> > >>
> > >>
> > >> Best regards,
> > >>
> > >> Hongbin
> > >>
> > >>
> > >>
> > >> *From:* Adrian Otto [mailto:adrian.otto at rackspace.com]
> > >> *Sent:* December-20-15 12:50 PM
> > >> *To:* OpenStack Development Mailing List (not for usage questions)
> > >>
> > >> *Subject:* Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> > >> Resources
> > >>
> > >>
> > >>
> > >> This sounds like a source-of-truth concern. From my perspective the
> > >> solution is not to create redundant quotas. Simply quota the Magnum
> > >> resources. Lower level limits *could* be queried by Magnum prior to
> > >> acting to CRUD the lower level resources. In that case we could check
> > >> the maximum allowed number of (or access rate of) whatever lower level
> > >> resource before requesting it, and raise an understandable error. I
> > >> see that as an enhancement rather than a must-have. In all honesty
> > >> that feature is probably more complicated than it's worth in terms of
> > >> value.
> > >>
> > >> --
> > >>
> > >> Adrian
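
For what it's worth, the lower-level pre-check Adrian describes above could
look roughly like the sketch below. Treat it as an illustration only: the
helper name is made up, and the exact novaclient limit attribute names may
differ.

    # Illustrative pre-check against Nova's absolute limits before asking
    # Heat/Nova for more instances; attribute names may not be exact.
    def nova_can_fit(nova_client, instances_needed):
        limits = {l.name: l.value
                  for l in nova_client.limits.get().absolute}
        remaining = (limits.get('maxTotalInstances', 0)
                     - limits.get('totalInstancesUsed', 0))
        return remaining >= instances_needed
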
> > >>
> > >>
> > >> On Dec 20, 2015, at 6:36 AM, Jay Lau <jay.lau.513 at gmail.com> wrote:
> > >>
> > >> I also have the same concern as Lee. Magnum depends on Heat, and Heat
> > >> needs to call Nova, Cinder and Neutron to create the Bay resources.
> > >> But Nova and Cinder both have their own quota policies; if we define
> > >> quotas again in Magnum, how do we handle the conflict? Another point
> > >> is that limiting Bays by quota seems a bit coarse-grained, as
> > >> different bays may have different configurations and resource
> > >> requests. Comments? Thanks.
> > >>
> > >>
> > >>
> > >> On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote <leecalcote at gmail.com>
> > >> wrote:
> > >>
> > >> Food for thought - there is a cost to FIPs (in the case of public IP
> > >> addresses), security groups (to a lesser extent, but in terms of the
> > >> computation of many hundreds of them), etc. Administrators may wish to
> > >> enforce quotas on a variety of resources that are direct costs or
> > >> indirect costs (e.g. # of bays, where a bay consists of a number of
> > >> multi-VM / multi-host pods and services, which consume CPU, mem,
> > >> etc.).
> > >>
> > >> If Magnum quotas are brought forward, they should govern (enforce
> > >> quota) on Magnum-specific constructs only, correct? Resources created
> > >> by Magnum COEs should be governed by existing quota policies governing
> > >> said resources (e.g. Nova and vCPUs).
> > >>
> > >>
> > >>
> > >> Lee
> > >>
> > >>
> > >>
> > >> On Dec 16, 2015, at 1:56 PM, Tim Bell <Tim.Bell at cern.ch> wrote:
> > >>
> > >>
> > >>
> > >> -----Original Message-----
> > >> From: Clint Byrum [mailto:clint at fewbar.com]
> > >> Sent: 15 December 2015 22:40
> > >> To: openstack-dev <openstack-dev at lists.openstack.org>
> > >> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> > >> Resources
> > >>
> > >> Hi! Can I offer a counter point?
> > >>
> > >> Quotas are for _real_ resources.
> > >>
> > >>
> > >> The CERN container specialist agrees with you ... it would be good to
> > >> reflect on the needs given that Ironic, Neutron and Nova are policing
> > >> the resource usage. Quotas in the past have been used for things like
> > >> key pairs, which are not really real.
> > >>
> > >>
> > >> Memory, CPU, disk, bandwidth. These are all _closely_ tied to things
> > >> that cost real money and cannot be conjured from thin air. As such,
> > >> the user being able to allocate 1 billion or 2 containers is not
> > >> limited by Magnum, but by real things that they must pay for. If they
> > >> have enough Nova quota to allocate 1 billion tiny pods, why would
> > >> Magnum stop them? Who actually benefits from that limitation?
> > >>
> > >> So I suggest that you not add any detailed, complicated quota system
> > >> to Magnum. If there are real limitations to the implementation that
> > >> Magnum has chosen, such as we had in Heat (the entire stack must fit
> > >> in memory), then make that the limit. Otherwise, let their vcpu, disk,
> > >> bandwidth, and memory quotas be the limit, and enjoy the profit
> > >> margins that having an unbound force multiplier like Magnum in your
> > >> cloud gives you and your users!
> > >>
> > >> Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
> > >>
> > >> Hi All,
> > >>
> > >> Currently, it is possible to create an unlimited number of resources
> > >> like bays/pods/services. In Magnum, there should be a limit on the
> > >> number of Magnum resources a user or project can create, and the limit
> > >> should be configurable [1].
> > >>
> > >> I propose the following design:
> > >>
> > >> 1. Introduce new table magnum.quotas
> > >>
> > >> +------------+--------------+------+-----+---------+----------------+
> > >> | Field      | Type         | Null | Key | Default | Extra          |
> > >> +------------+--------------+------+-----+---------+----------------+
> > >> | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
> > >> | created_at | datetime     | YES  |     | NULL    |                |
> > >> | updated_at | datetime     | YES  |     | NULL    |                |
> > >> | deleted_at | datetime     | YES  |     | NULL    |                |
> > >> | project_id | varchar(255) | YES  | MUL | NULL    |                |
> > >> | resource   | varchar(255) | NO   |     | NULL    |                |
> > >> | hard_limit | int(11)      | YES  |     | NULL    |                |
> > >> | deleted    | int(11)      | YES  |     | NULL    |                |
> > >> +------------+--------------+------+-----+---------+----------------+
> > >>
> > >> resource can be Bay, Pod, Container, etc.
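
As a rough illustration of item 1, the table above could be expressed as a
plain SQLAlchemy model along these lines (the base class and column details
are a sketch, not the actual Magnum DB code):

    # Sketch only; Magnum's real models/base classes may differ.
    from sqlalchemy import Column, DateTime, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Quota(Base):
        __tablename__ = 'quotas'

        id = Column(Integer, primary_key=True, autoincrement=True)
        created_at = Column(DateTime, nullable=True)
        updated_at = Column(DateTime, nullable=True)
        deleted_at = Column(DateTime, nullable=True)
        project_id = Column(String(255), nullable=True, index=True)
        resource = Column(String(255), nullable=False)  # e.g. 'Bay', 'Pod'
        hard_limit = Column(Integer, nullable=True)
        deleted = Column(Integer, nullable=True)
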
> > >>
> > >>
> > >> 2. An API controller for quotas will be created so that the basic CLI
> > >> commands work:
> > >>
> > >> quota-show, quota-delete, quota-create, quota-update
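
For item 2, the proposed commands might be used along these lines (none of
these subcommands exist yet; the flag names are illustrative):

    magnum quota-create --project-id <project> --resource Bay --hard-limit 5
    magnum quota-show   --project-id <project> --resource Bay
    magnum quota-update --project-id <project> --resource Bay --hard-limit 10
    magnum quota-delete --project-id <project> --resource Bay
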
> > >>
> > >> 3. When the admin specifies a quota of X resources, the code should
> > >> abide by it. For example, if the hard limit for Bays is 5 (i.e. a
> > >> project can have a maximum of 5 Bays) and a user in that project tries
> > >> to exceed that hard limit, the request won't be allowed. The same goes
> > >> for other resources.
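
A rough sketch of how that hard-limit check could be enforced at bay-create
time; the db_api helper names and the exception are hypothetical, not
existing Magnum internals:

    # Illustrative enforcement check; the db_api accessor is made up.
    class QuotaExceeded(Exception):
        pass

    def enforce_quota(db_api, context, project_id, resource='Bay'):
        quota = db_api.quota_get(context, project_id, resource)
        if quota is None:
            return  # no hard limit configured for this project/resource
        in_use = db_api.resource_count(context, project_id, resource)
        if in_use + 1 > quota.hard_limit:
            raise QuotaExceeded(
                "Project %s already has %d %s(s); the hard limit is %d"
                % (project_id, in_use, resource, quota.hard_limit))
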
> > >>
> > >>
> > >> 4. Please note that quota validation only works for resources created
> > >> via Magnum. I could not think of a way for Magnum to know whether
> > >> COE-specific utilities created a resource in the background. One way
> > >> could be to compare what is stored in magnum.quotas with the
> > >> information about the actual resources created for a particular bay in
> > >> k8s or another COE.
> > >>
> > >>
> > >> 5. Introduce a config variable to set quota values.
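
For item 5, a minimal oslo.config sketch of what such a setting could look
like (the option name, group and default are illustrative, not decided):

    # Illustrative config option for a default quota value.
    from oslo_config import cfg

    quota_opts = [
        cfg.IntOpt('max_bays_per_project',
                   default=20,
                   help='Default hard limit on the number of bays a '
                        'project may create; can be overridden per '
                        'project via the quotas table.'),
    ]

    cfg.CONF.register_opts(quota_opts, group='quotas')
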
> > >>
> > >> If everyone agrees, I will start the changes by introducing quota
> > >> restrictions on Bay creation.
> > >>
> > >> Thoughts ??
> > >>
> > >>
> > >> -Vilobh
> > >>
> > >> [1] https://blueprints.launchpad.net/magnum/+spec/resource-quota
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> --
> > >>
> > >> Thanks,
> > >>
> > >> Jay Lau (Guangya Liu)
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >
> > >
> > >
> > >
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>