[openstack-dev] [openstack][magnum] Quota for Magnum Resources
James Bottomley
James.Bottomley at HansenPartnership.com
Wed Dec 16 23:44:34 UTC 2015
On Wed, 2015-12-16 at 22:48 +0000, Adrian Otto wrote:
> On Dec 16, 2015, at 2:25 PM, James Bottomley
> <James.Bottomley at HansenPartnership.com> wrote:
>
> On Wed, 2015-12-16 at 20:35 +0000, Adrian Otto wrote:
> Clint,
>
> On Dec 16, 2015, at 11:56 AM, Tim Bell <tim.bell at cern.ch> wrote:
>
> -----Original Message-----
> From: Clint Byrum [mailto:clint at fewbar.com]
> Sent: 15 December 2015 22:40
> To: openstack-dev <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
>
> Hi! Can I offer a counterpoint?
>
> Quotas are for _real_ resources.
>
> No. Beyond billable resources, quotas are a mechanism for limiting
> abusive use patterns from hostile users.
>
> Actually, I believe this is the wrong way to look at it. You're
> confusing policy and mechanism. Quotas are policy on resources. The
> mechanisms by which you implement quotas can also be used to limit
> abuse by hostile users, but that doesn't mean that this limitation
> should be part of the quota policy.
>
> I’m not convinced. Cloud operators already use quotas as a mechanism
> for limiting abuse (intentional or accidental). They can be
> configured with a system-wide default, and can be set to a different
> value on a per-tenant basis. It would be silly to have a second
> mechanism for doing the same thing we already use quotas for.
> Quotas/limits can also be queried by a user, so they can determine
> why they are getting 4XX rate-limit responses when they try to act on
> resources too rapidly.
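>
> To illustrate the pattern (a rough Python sketch with hypothetical
> names, not any particular service’s schema):
>
>     # Sketch only: a system-wide default plus per-tenant overrides.
>     class OverQuota(Exception):
>         pass
>
>     SYSTEM_DEFAULT_MAX_BAYS = 20        # hypothetical default
>     tenant_overrides = {'cern': 100}    # set per tenant by operators
>
>     def effective_limit(tenant_id):
>         """Per-tenant override if present, else the system default."""
>         return tenant_overrides.get(tenant_id, SYSTEM_DEFAULT_MAX_BAYS)
>
>     def check_quota(tenant_id, current_usage):
>         limit = effective_limit(tenant_id)
>         if current_usage >= limit:
>             # Surfaced to the user as a 4XX response; the same limit
>             # is queryable so they can see why they were refused.
>             raise OverQuota('quota exceeded: %d/%d'
>                             % (current_usage, limit))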
I think we might be talking a bit past each other. My definition of
"real" is end-user visible. So in the fork bomb example below, the
end-user visible (and billable) panel just shows a single figure for
"memory". The provider policy divides this into user memory and
kernel memory, usually in a fixed ratio, and then imposes that split
on the cgroup.
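
Concretely, the provider-side split might look something like this
sketch (assuming the cgroup v1 memcg interface; the 4:1 user:kernel
ratio is arbitrary):

    # Minimal sketch: impose a user-visible "memory" figure on a v1
    # memcg, carving out a fixed fraction as the kernel-memory cap.
    import os

    MEMCG_ROOT = '/sys/fs/cgroup/memory'

    def impose_memory_limit(cgroup, total_bytes, kmem_fraction=0.25):
        base = os.path.join(MEMCG_ROOT, cgroup)
        # Older kernels require the kmem cap to be set while the
        # group is still empty.
        with open(os.path.join(base, 'memory.kmem.limit_in_bytes'), 'w') as f:
            f.write(str(int(total_bytes * kmem_fraction)))
        with open(os.path.join(base, 'memory.limit_in_bytes'), 'w') as f:
            f.write(str(total_bytes))
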
> The idea of hard coding system-wide limits into the system is making
> my stomach turn. If you wanted to change the limit you’d need to edit
> the production system’s configuration, and restart the API services.
> Yuck! That’s why we put quotas/limits into OpenStack to begin with,
> so that we had a sensible, visible, account-level place to configure
> limits.
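>
> For the record, the pattern I’m objecting to looks roughly like this
> (an oslo.config sketch; the option name is hypothetical):
>
>     # Hypothetical hard-wired limit: changing it means editing the
>     # config file and restarting the API service, rather than
>     # calling a per-tenant quota API.
>     from oslo_config import cfg
>
>     opts = [cfg.IntOpt('max_bays_per_tenant', default=20,
>                        help='Hypothetical hard-wired bay limit')]
>     cfg.CONF.register_opts(opts, group='quotas')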
I don't believe anyone advocated for hard coding. I was just saying
that the view that Quota == Real End User Visible resource limits is a
valid way of looking at things because it forces you to think about
what the end user sees. The fact that the service provider uses the
mechanism for abuse prevention is also valid, but you wouldn't usually
want the end user to see it. Even in a private cloud, you'll have this
distinction between end user and cloud administrator. Conversely,
taking the mechanistic view that anything you can do with the mechanism
constitutes a quota and should be exposed pushes the issue up to the
UI/UX layer to sort out.
Perhaps this whole thing is just a semantic question of whether "quota"
means mechanism or policy. I think the latter, but I suppose it's
possible to take the view that it's the former ... in which case we
just need more precision.
James
> Adrian
>
>
> For instance, in Linux, the memory limit policy is implemented by
> memcg. The user usually sees a single figure for "memory" but inside
> the cgroup, that memory is split into user and kernel. Kernel memory
> limiting prevents things like fork bombs, because you run out of your
> kernel memory limit creating task structures before you can bring
> down the host system. However, we don't usually expose the
> kernel/user split or the fact that the kmem limit mechanism can
> prevent fork and inode bombs.
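>
> To make that concrete, a sketch of what a contained fork bomb looks
> like from inside such a cgroup (the exact errno varies by kernel):
>
>     # Sketch: once the kmem limit is exhausted the kernel cannot
>     # allocate another task structure, so fork() fails with an
>     # OSError instead of bringing down the host.
>     import os
>
>     children = []
>     try:
>         while True:
>             pid = os.fork()
>             if pid == 0:
>                 os.pause()      # child just blocks
>                 os._exit(0)
>             children.append(pid)
>     except OSError as exc:      # e.g. EAGAIN/ENOMEM at the limit
>         print('fork refused after %d children: %s'
>               % (len(children), exc))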
>
> James
>
> The rate at which Bays are created, and how many of them you can
> have in total, are important limits to put in the hands of cloud
> operators. Each Bay contains a keypair, which takes resources to
> generate and securely distribute. Updates to and deletion of Bays
> cause a storm of activity in Heat, and even more activity in Nova.
> Cloud operators should have the ability to control the rate of
> activity by enforcing rate controls on Magnum resources before they
> become problematic further down in the control plane. Admission
> controls are best managed at the entrance to a system, not at the
> core.
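>
> A sketch of that kind of admission control (a plain token bucket at
> the entrance; the numbers are hypothetical):
>
>     # Sketch: refuse bursts of requests up front so they never turn
>     # into a storm of Heat/Nova activity further down the stack.
>     import time
>
>     class TokenBucket(object):
>         def __init__(self, rate, burst):
>             self.rate = rate            # tokens added per second
>             self.burst = burst          # bucket capacity
>             self.tokens = burst
>             self.last = time.time()
>
>         def allow(self):
>             now = time.time()
>             elapsed = now - self.last
>             self.last = now
>             self.tokens = min(self.burst,
>                               self.tokens + elapsed * self.rate)
>             if self.tokens >= 1:
>                 self.tokens -= 1
>                 return True
>             return False                # caller returns a 4XX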
>