[openstack-dev] [openstack][magnum] Quota for Magnum Resources

Hongbin Lu hongbin.lu at huawei.com
Mon Dec 21 22:22:53 UTC 2015


Jay,

I think we should agree on a general direction before asking for a spec. It is bad to have contributors spend time working on something that might not be accepted.

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau.513 at gmail.com]
Sent: December-20-15 6:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

Thanks Adrian and Tim. I saw that @Vilobh has already uploaded a patch at https://review.openstack.org/#/c/259201/, so perhaps we can first have a spec and discuss there. ;-)

On Mon, Dec 21, 2015 at 2:44 AM, Tim Bell <Tim.Bell at cern.ch> wrote:
Given the lower level quotas in Heat, Neutron, Nova etc., the error feedback is very important. A Magnum “cannot create” message requires a lot of debugging whereas a “Floating IP quota exceeded” gives a clear root cause.

Whether we quota Magnum resources or not, some error scenarios and appropriate testing+documentation would be a great help for operators.

Tim

From: Adrian Otto [mailto:adrian.otto at rackspace.com]
Sent: 20 December 2015 18:50
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>

Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

This sounds like a source-of-truth concern. From my perspective, the solution is not to create redundant quotas. Simply quota the Magnum resources. Lower-level limits *could* be queried by Magnum prior to acting to CRUD the lower-level resources. In that case we could check the maximum allowed number of (or access rate of) whatever lower-level resource before requesting it, and raise an understandable error. I see that as an enhancement rather than a must-have. In all honesty, that feature is probably more complicated than it's worth in terms of value.
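
A minimal sketch of what such a pre-flight check might look like, assuming a hypothetical helper get_nova_absolute_limits() that returns the project's Nova limits and current usage (the exception type and field names are illustrative, not existing Magnum code):

class LowerLevelQuotaExceeded(Exception):
    """Raised when a dependent service cannot satisfy the request."""


def check_nova_capacity(project_id, requested_instances, get_nova_absolute_limits):
    """Raise a clear error if a bay would exceed the project's Nova quota.

    get_nova_absolute_limits is a hypothetical callable returning a dict
    such as {'maxTotalInstances': 20, 'totalInstancesUsed': 18}.
    """
    limits = get_nova_absolute_limits(project_id)
    available = limits['maxTotalInstances'] - limits['totalInstancesUsed']
    if requested_instances > available:
        raise LowerLevelQuotaExceeded(
            "Bay needs %d instances but only %d remain in the Nova quota "
            "for project %s" % (requested_instances, available, project_id))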

--
Adrian

On Dec 20, 2015, at 6:36 AM, Jay Lau <jay.lau.513 at gmail.com> wrote:
I share Lee's concern: Magnum depends on Heat, and Heat calls Nova, Cinder, and Neutron to create the Bay resources. Nova and Cinder already have their own quota policies, so if we define quotas again in Magnum, how do we handle the conflict? Another point is that limiting Bays by quota seems a bit coarse-grained, as different Bays may have different configurations and resource requests. Comments? Thanks.

On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote <leecalcote at gmail.com> wrote:
Food for thought - there is a cost to FIPs (in the case of public IP addresses), security groups (to a lesser extent, but in terms of the computation of many hundreds of them), etc. Administrators may wish to enforce quotas on a variety of resources that are direct costs or indirect costs (e.g. # of bays, where a bay consists of a number of multi-VM / multi-host pods and services, which consume CPU, mem, etc.).

If Magnum quotas are brought forward, they should govern (enforce quota) on Magnum-specific constructs only, correct? Resources created by Magnum COEs should be governed by existing quota policies governing said resources (e.g. Nova and vCPUs).

Lee

On Dec 16, 2015, at 1:56 PM, Tim Bell <Tim.Bell at cern.ch> wrote:

-----Original Message-----
From: Clint Byrum [mailto:clint at fewbar.com]
Sent: 15 December 2015 22:40
To: openstack-dev <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

Hi! Can I offer a counterpoint?

Quotas are for _real_ resources.

The CERN container specialist agrees with you ... it would be good to reflect on the needs given that ironic, neutron and nova are policing the resource usage. Quotas in the past have been used for things like key pairs which are not really real.

Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that cost real money and cannot be conjured from thin air. As such, the user being able to allocate 1 billion or 2 containers is not limited by Magnum, but by real things that they must pay for. If they have enough Nova quota to allocate 1 billion tiny pods, why would Magnum stop them? Who actually benefits from that limitation?

So I suggest that you not add any detailed, complicated quota system to Magnum. If there are real limitations to the implementation that Magnum has chosen, such as we had in Heat (the entire stack must fit in memory), then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and memory quotas be the limit, and enjoy the profit margins that having an unbound force multiplier like Magnum in your cloud gives you and your users!

Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
Hi All,

Currently, it is possible to create an unlimited number of resources such as bays, pods, and services. In Magnum, there should be a limit on how many Magnum resources a user or project can create, and that limit should be configurable [1].

I propose the following design:

1. Introduce new table magnum.quotas
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| id         | int(11)      | NO   | PRI | NULL    | auto_increment |
| created_at | datetime     | YES  |     | NULL    |                |
| updated_at | datetime     | YES  |     | NULL    |                |
| deleted_at | datetime     | YES  |     | NULL    |                |
| project_id | varchar(255) | YES  | MUL | NULL    |                |
| resource   | varchar(255) | NO   |     | NULL    |                |
| hard_limit | int(11)      | YES  |     | NULL    |                |
| deleted    | int(11)      | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+

resource can be Bay, Pod, Containers, etc.
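
For illustration, a rough SQLAlchemy model matching the table above could look like the following; the class and module layout are hypothetical and would need to follow Magnum's existing model conventions:

from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Quota(Base):
    """Per-project hard limit for one Magnum resource type (sketch only)."""
    __tablename__ = 'quotas'

    id = Column(Integer, primary_key=True, autoincrement=True)
    created_at = Column(DateTime, nullable=True)
    updated_at = Column(DateTime, nullable=True)
    deleted_at = Column(DateTime, nullable=True)
    project_id = Column(String(255), index=True, nullable=True)
    resource = Column(String(255), nullable=False)   # e.g. 'Bay', 'Pod', 'Container'
    hard_limit = Column(Integer, nullable=True)
    deleted = Column(Integer, nullable=True)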


2. An API controller for quotas will be created to make sure the basic CLI
commands work:

quota-show, quota-delete, quota-create, quota-update
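
A very rough sketch of what that controller could look like with Pecan is below; the in-memory store and all handler bodies are placeholders for illustration, not actual Magnum code, and a real controller would call the DB API instead:

import pecan
from pecan import rest

# Stand-in for the magnum.quotas table, for illustration only.
_QUOTAS = {}


class QuotasController(rest.RestController):
    """Sketch of a REST controller backing quota-create/show/update/delete."""

    @pecan.expose('json')
    def get_one(self, project_id, resource):
        # quota-show
        return {'project_id': project_id, 'resource': resource,
                'hard_limit': _QUOTAS.get((project_id, resource))}

    @pecan.expose('json')
    def post(self, project_id, resource, hard_limit):
        # quota-create / quota-update
        _QUOTAS[(project_id, resource)] = int(hard_limit)
        return {'project_id': project_id, 'resource': resource,
                'hard_limit': int(hard_limit)}

    @pecan.expose('json')
    def delete(self, project_id, resource):
        # quota-delete
        _QUOTAS.pop((project_id, resource), None)
        return {}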

3. When the admin specifies a quota of X resources, the code should abide by it. For example, if the hard limit for Bay is 5 (i.e. a project can have a maximum of 5 Bays) and a user in that project tries to exceed the hard limit, the request won't be allowed. The same goes for other resources. A rough sketch of this check is below.
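
The enforcement could look roughly like the following, assuming hypothetical helpers get_hard_limit() and count_resources() that read magnum.quotas and the existing resource tables:

class QuotaExceeded(Exception):
    """Raised when creating one more resource would pass the hard limit."""


def enforce_quota(project_id, resource, get_hard_limit, count_resources):
    """Abort creation if the project already has hard_limit resources.

    Both callables are hypothetical: get_hard_limit reads magnum.quotas,
    count_resources counts existing rows (e.g. Bays) for the project.
    """
    hard_limit = get_hard_limit(project_id, resource)
    if hard_limit is None:
        return  # no quota configured for this resource, nothing to enforce
    in_use = count_resources(project_id, resource)
    if in_use + 1 > hard_limit:
        raise QuotaExceeded(
            "Project %s already has %d %s resources (hard limit %d)"
            % (project_id, in_use, resource, hard_limit))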

4. Please note that the quota validation only works for resources created via Magnum. I could not think of a way for Magnum to know if COE-specific utilities created a resource in the background. One way could be to compare what is stored in magnum.quotas with the information about the actual resources created for a particular bay in the k8s/COE.

5. Introduce a config variable to set quota values, for example as sketched below.
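
A sketch of registering such an option with oslo.config; the option name, group, and default are made up for illustration:

from oslo_config import cfg

# Hypothetical option: a hard limit applied when no per-project row
# exists in magnum.quotas. Name and default are illustrative only.
quota_opts = [
    cfg.IntOpt('max_bays_per_project',
               default=20,
               help='Maximum number of Bays a project may create.'),
]

CONF = cfg.CONF
CONF.register_opts(quota_opts, group='quotas')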

If everyone agrees, I will start the changes by introducing quota
restrictions on Bay creation.

Thoughts ??


-Vilobh

[1] https://blueprints.launchpad.net/magnum/+spec/resource-quota






--
Thanks,
Jay Lau (Guangya Liu)




--
Thanks,
Jay Lau (Guangya Liu)