[Openstack-operators] Quota Templates

Tim Bell Tim.Bell at cern.ch
Wed Apr 9 20:16:06 UTC 2014


Addressing all of CERN's needs would be a big patch :)

I can see a few scenarios which may be overlapping

The quota for a project should (optionally) be settable such that the number of instances of a particular flavour is limited. Examples would be very large VMs, or flavours that map to specific aggregates with a different allocation scheme.
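
As a sketch of what I mean (purely illustrative Python with made-up names, not the blueprint's actual schema):

    # Hypothetical per-flavour quota check for a project; all names invented.
    FLAVOUR_QUOTAS = {
        ("project-a", "m1.xxlarge"): 2,   # at most 2 very large VMs
        ("project-a", "ssd.large"): 10,   # flavour tied to a special aggregate
    }

    def can_boot(project, flavour, current_count):
        """Allow the boot only if no per-flavour limit would be exceeded."""
        limit = FLAVOUR_QUOTAS.get((project, flavour))
        return limit is None or current_count < limit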

Can you clarify the scope of the quota? Are we setting it per user, per group (supported with Keystone V3) and/or per project?

Tim

From: Cazzolato, Sergio J [mailto:sergio.j.cazzolato at intel.com]
Sent: 09 April 2014 21:28
To: James Penick; Tim Bell; Narayan Desai; Jay Pipes
Cc: openstack-operators at lists.openstack.org
Subject: RE: [Openstack-operators] Quota Templates

Hi James, thanks for sharing with us.

I am working on the blueprint "per-flavor-quotas" and, based on your datacenter needs, I would like to know whether this feature covers all the problems you have in this area, or whether we should complement it with something else.

https://review.openstack.org/#/c/84432/7

Thanks


From: James Penick [mailto:penick at yahoo-inc.com]
Sent: Tuesday, April 08, 2014 6:37 PM
To: Tim Bell; Narayan Desai; Jay Pipes
Cc: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Quota Templates

This all seems to be going in a great direction.

One of the things I've been working on at Yahoo is integrating OpenStack VMs and bare metal into our larger finance process. We have a couple of interesting problems presented by the current quota management system.

When one of our Projects on our internal cloud wants to increase their quota, they'll need to file a request. That request will be compared against projected cluster capacity as well as certain financial models. If the request falls within certain parameters, it'll need larger finance approval before the quota can be allocated. There are a number of capacity folks who would be handling these requests, so there could be a race condition when multiple Projects request capacity at the same time. So not only do we need a way to grant and track quota increases, but also to have a concept of state management within a given quota request. The advantage of state management is that an enterprise can track why and when a given quota increase was requested and granted, especially if there's a field to track an external reference ID (like a bug/ticket number).

The big change is that we will no longer have one item in the DB per-Project-per-resource; instead it'd be per-Project-per-resource-per-request. This means we'd also need to extend Nova to support the concept of 'soft' and 'hard' quota: hard quota is what you can actually allocate against, while soft quota would let you see what you have plus what's in the pipeline for approval.
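
To make that concrete, a rough sketch of what per-Project-per-resource-per-request records with state might look like (illustrative Python only; none of this is Nova's schema):

    # Sketch of the request-based quota model described above; names invented.
    from dataclasses import dataclass
    from enum import Enum

    class RequestState(Enum):
        REQUESTED = "requested"
        PENDING_FINANCE = "pending_finance"
        APPROVED = "approved"
        REJECTED = "rejected"

    @dataclass
    class QuotaRequest:
        project: str
        resource: str        # e.g. "cores", "ram_mb"
        amount: int
        state: RequestState
        ticket_id: str       # external reference, e.g. a bug/ticket number

    def _total(requests, project, resource, states):
        return sum(r.amount for r in requests
                   if r.project == project and r.resource == resource
                   and r.state in states)

    def hard_quota(requests, project, resource):
        """Hard quota: only approved requests count toward allocation."""
        return _total(requests, project, resource, {RequestState.APPROVED})

    def soft_quota(requests, project, resource):
        """Soft quota: approved plus what's still in the approval pipeline."""
        return _total(requests, project, resource,
                      {RequestState.APPROVED, RequestState.REQUESTED,
                       RequestState.PENDING_FINANCE})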

Now for flavor-level quotas. We, too, have a use case where we need to limit a Project to certain flavor types. In our case it's primarily our bare metal deployment, but I might also want to grant a Project differing levels of quota within different cluster types. For example, I might want some hypervisors slated for no over-provisioning, while others would be available at an extremely high over-provision ratio. This would let me guide our internal tenants onto the best possible resources with the right SLA and price point. As was mentioned earlier in the thread, there would also be value in granting quota against a certain hardware type: SSD hypervisors, hypervisors with SSL accelerators, etc. There are a lot of ways I can see slicing the pie in my data centers.
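
For what it's worth, a slice of this exists today via host aggregates: AggregateCoreFilter reads cpu_allocation_ratio from aggregate metadata, and AggregateInstanceExtraSpecsFilter can pin flavors to hardware types. Roughly (aggregate and flavor names below are made up):

    # Aggregate with no CPU overcommit, tagged as SSD hardware
    nova aggregate-create ssd-no-overcommit
    nova aggregate-set-metadata ssd-no-overcommit cpu_allocation_ratio=1.0 ssd=true

    # Flavor that the scheduler will only place on hosts in that aggregate
    nova flavor-key ssd.large set aggregate_instance_extra_specs:ssd=true

That covers placement and over-provision ratios, though, not the quota side.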

While both of these things are definite needs for enterprises running private clouds, I imagine smaller public cloud providers would need them just as much.

Tim, would you (or anyone else) care to work together on the blueprint and reboot Boson? I can contribute insight and code.

Thanks!
-James

:)=


From: Tim Bell <Tim.Bell at cern.ch>
Date: Monday, April 7, 2014 at 11:45 AM
To: Narayan Desai <narayan.desai at gmail.com>, Jay Pipes <jaypipes at gmail.com>
Cc: "openstack-operators at lists.openstack.org" <openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] Quota Templates


I can add some more CERN context for this scenario. We want to spread workload across the maximum number of hypervisors (don't overcommit unless you have to), but that means that as you approach full capacity, you can only give out small VMs that fit in the cracks.
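
(The spreading part is the stock RamWeigher behaviour: with a positive ram_weight_multiplier in nova.conf, the default, hosts with more free RAM score higher, which spreads; a negative value packs instead. It's the reservation part below that doesn't exist.)

    # nova.conf: positive spreads instances across hosts, negative packs them
    ram_weight_multiplier = 1.0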

In my ideal world, we would have a way of saying: schedule for maximum distribution across hypervisors, but make sure that there are X slots free for flavour A, Y slots free for flavour B, etc., for requests that exactly match those resources.
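
A rough sketch of the kind of admission check I have in mind (nothing like this exists in the Nova scheduler; all names are invented):

    # Hypothetical: keep N cluster-wide slots free for certain flavours.
    RESERVED_SLOTS = {"m1.xxlarge": 4, "bigmem.xlarge": 2}

    def admit(flavour, slots_left_after_placement):
        """slots_left_after_placement: for each reserved flavour, how many
        instances of it would still fit cluster-wide after placing this
        request. Exact matches may consume their own reservation."""
        if flavour in RESERVED_SLOTS:
            return True
        return all(slots_left_after_placement.get(f, 0) >= n
                   for f, n in RESERVED_SLOTS.items())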

Given a 3-4 year purchasing cycle, the Thailand floods (which meant HDDs were not available in volume) and varying processor prices, we've got a large mixture of different configurations.

It's a really tough problem to do this at scale and quickly, but I'd love to discuss/debate the options available to turn the cloud efficiency up to 11 :)

Tim

From: Narayan Desai [mailto:narayan.desai at gmail.com]
Sent: 07 April 2014 20:33
To: Jay Pipes
Cc: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Quota Templates

...

You must be using a much smarter OpenStack scheduler than I am. I'll agree in principle that this could be true; however, fragmentation avoidance is a tricky problem, particularly when you have a big range of potential configurations. You can only imagine the sad trombone that played in my office the first time the scheduler placed an 8GB instance on a bigmem node, blocking that system's largest configuration from being usable. We've personally had a lot more luck partitioning resources and picking a set of favorable resource combinations than letting the scheduler deal with it.
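
(One common way to do that partitioning is host aggregates plus AggregateInstanceExtraSpecsFilter; aggregate and flavour names below are made up:)

    # nova.conf on the scheduler: make sure the filter is enabled
    scheduler_default_filters = AggregateInstanceExtraSpecsFilter,RetryFilter,RamFilter,ComputeFilter

    # Pin the bigmem flavour to the bigmem aggregate
    nova aggregate-set-metadata bigmem-hosts node_type=bigmem
    nova flavor-key m1.bigmem set aggregate_instance_extra_specs:node_type=bigmem

Caveat: flavours without a node_type extra spec still pass this filter, so to fence the bigmem hosts off completely, every flavour needs a node_type matching its own aggregate.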

This is a great discussion. I think that we need more of this kind of thing on the operators list.
 -nld