[openstack-dev] [openstack][magnum] Quota for Magnum Resources
Fox, Kevin M
Kevin.Fox at pnnl.gov
Thu Dec 17 00:05:29 UTC 2015
Yeah, as an op, I've run into a few things that need quotas but just have hardcoded values. Heat stacks, for example: it's a single global in /etc/heat/heat.conf, max_stacks_per_tenant=100. Instead of being able to tweak it for just our one project that legitimately has to create over 200 stacks, I had to set it cloud-wide, and I had to bounce services to do it. Please don't do that.
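For reference, that knob is just a flat option in [DEFAULT]:

    # /etc/heat/heat.conf
    [DEFAULT]
    # Maximum number of stacks any one tenant may have active at one time.
    max_stacks_per_tenant = 100

There's no per-project override, so raising it for one tenant means raising it for everybody and restarting the services.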
Ideally, it would be nice if the quota stuff could be pulled out into its own shared lib (oslo?) and shared amongst projects so that they don't have to spend much effort implementing quotas. Maybe then things that need quotas but don't currently have them could more easily get them.
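Something like this purely hypothetical sketch (none of these names exist in oslo; they're invented for illustration) would already cover cases like the Heat one above: a system-wide default plus per-project overrides behind one interface:

    # Purely hypothetical sketch of a shared quota helper; the class and
    # method names here are invented and do not exist in oslo.
    class OverQuota(Exception):
        pass

    class QuotaEnforcer(object):
        def __init__(self, defaults):
            self.defaults = defaults    # e.g. {'stacks': 100}
            self.overrides = {}         # (project_id, resource) -> limit

        def set_override(self, project_id, resource, limit):
            self.overrides[(project_id, resource)] = limit

        def check(self, project_id, resource, requested, in_use):
            limit = self.overrides.get((project_id, resource),
                                       self.defaults.get(resource))
            if limit is not None and in_use + requested > limit:
                raise OverQuota('%s quota exceeded for project %s'
                                % (resource, project_id))

A project that legitimately needs more would then get a per-project override through an API call instead of a cloud-wide config edit and a service bounce.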
Thanks,
Kevin
________________________________
From: Adrian Otto [adrian.otto at rackspace.com]
Sent: Wednesday, December 16, 2015 2:48 PM
To: James Bottomley
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources
On Dec 16, 2015, at 2:25 PM, James Bottomley <James.Bottomley at HansenPartnership.com> wrote:
On Wed, 2015-12-16 at 20:35 +0000, Adrian Otto wrote:
Clint,
On Dec 16, 2015, at 11:56 AM, Tim Bell <tim.bell at cern.ch> wrote:
-----Original Message-----
From: Clint Byrum [mailto:clint at fewbar.com]
Sent: 15 December 2015 22:40
To: openstack-dev <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources
Hi! Can I offer a counterpoint?
Quotas are for _real_ resources.
No. Beyond billable resources, quotas are a mechanism for limiting
abusive use patterns from hostile users.
Actually, I believe this is the wrong way to look at it. You're
confusing policy and mechanism. Quotas are policy on resources. The
mechanisms by which you implement quotas can also be used to limit
abuse by hostile users, but that doesn't mean that this limitation
should be part of the quota policy.
I’m not convinced. Cloud operators already use quotas as a mechanism for limiting abuse (intentional or accidental). They can be configured with a system-wide default, and can be set to a different value on a per-tenant basis. It would be silly to have a second mechanism for doing the same thing we already use quotas for. Quotas/limits can also be queried by a user, so they can determine why they are getting a 4XX Rate Limit response when they try to act on resources too rapidly.
The idea of hard-coding system-wide limits into the system is making my stomach turn. If you wanted to change the limit you’d need to edit the production system’s configuration and restart the API services. Yuck! That’s why we put quotas/limits into OpenStack to begin with, so that we had a sensible, visible place to configure limits at the account level.
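(Nova's limits API is one example of that visibility. A rough sketch of a user querying their own effective limits, where the endpoint URL and token are placeholders:)

    # Rough sketch: a user inspecting their own limits via Nova's limits API.
    # The endpoint URL and token are placeholders, not real values.
    import requests

    resp = requests.get('https://compute.example.com/v2.1/limits',
                        headers={'X-Auth-Token': 'TOKEN'})
    absolute = resp.json()['limits']['absolute']
    print(absolute['maxTotalInstances'])   # this tenant's instance cap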
Adrian
For instance, in Linux, the memory limit policy is implemented by the memcg (memory cgroup). The user usually sees a single figure for "memory", but inside the cgroup that memory is split into user and kernel. Kernel memory limiting prevents things like fork bombs, because a runaway process exhausts its kernel memory limit creating task structures before it can bring down the host system. However, we don't usually expose the kernel/user split, or the fact that the kmem limit mechanism can prevent fork and inode bombs.
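Concretely, under cgroup v1 those are two separate knobs. A minimal sketch, assuming a freshly created memory cgroup named 'demo' with no tasks attached yet (the kmem limit has to be set before the cgroup is populated):

    # Minimal sketch, cgroup v1: 'demo' is an assumed, freshly created memory
    # cgroup with no tasks in it yet; run as root.
    import os

    cg = '/sys/fs/cgroup/memory/demo'
    with open(os.path.join(cg, 'memory.limit_in_bytes'), 'w') as f:
        f.write(str(512 * 1024 * 1024))   # overall (user + kernel) cap: 512 MiB
    with open(os.path.join(cg, 'memory.kmem.limit_in_bytes'), 'w') as f:
        f.write(str(64 * 1024 * 1024))    # kernel-memory cap: 64 MiB; a fork bomb
                                          # exhausts this creating task structures
                                          # long before it can hurt the host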
James
The rate at which Bays are created, and how many of them you can have in total, are important limits to put in the hands of cloud operators. Each Bay contains a keypair, which takes resources to generate and securely distribute. Updates to and deletion of Bays cause a storm of activity in Heat, and even more activity in Nova. Cloud operators should have the ability to control the rate of activity by enforcing rate controls on Magnum resources before they become problematic further down in the control plane. Admission controls are best managed at the entrance to a system, not at the core.
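(To make "admission controls at the entrance" concrete, here is a generic token-bucket sketch; this is not Magnum code, and the per-tenant rate and burst numbers are invented for illustration:)

    # Generic token-bucket admission control at the API entrance.
    # Not Magnum code; the rate (2 bay operations/min, burst 5) is invented.
    import time

    class TokenBucket(object):
        def __init__(self, rate_per_sec, burst):
            self.rate = rate_per_sec
            self.capacity = burst
            self.tokens = float(burst)
            self.stamp = time.time()

        def allow(self):
            now = time.time()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.stamp) * self.rate)
            self.stamp = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False   # caller responds with a 4XX rate-limit error

    buckets = {}  # tenant_id -> TokenBucket

    def admit(tenant_id):
        bucket = buckets.setdefault(tenant_id, TokenBucket(2 / 60.0, 5))
        return bucket.allow()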