[Openstack-operators] Quota Templates

Cazzolato, Sergio J sergio.j.cazzolato at intel.com
Fri Apr 11 15:37:58 UTC 2014


Thanks James!

I think we will need some help with code review and with testing this in a real cluster.

-----Original Message-----
From: James Penick [mailto:penick at yahoo-inc.com] 
Sent: Thursday, April 10, 2014 2:44 PM
To: Cazzolato, Sergio J; Tim Bell; Narayan Desai; Jay Pipes
Cc: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Quota Templates

This certainly covers our immediate needs. Longer term we should consider how to cascade this concept through cells, and allow someone to have a quota for a given flavor in a given cell type. For example, I may allow a project 100 m1.medium instances in a cell with no overprovisioning, but 500 in a highly overprovisioned cell. Something to consider.
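
A minimal sketch of how such a cascading, per-cell limit might be keyed;
the structure and names below are purely illustrative, not an existing
Nova schema:

    # Hypothetical quota limits keyed by (cell, flavor) per project.
    quota_limits = {
        'project-a': {
            ('cell-no-overcommit', 'm1.medium'): 100,
            ('cell-high-overcommit', 'm1.medium'): 500,
        },
    }

    def within_quota(project, cell, flavor, current_usage):
        # Deny when no limit is defined or the limit is already reached.
        limit = quota_limits.get(project, {}).get((cell, flavor))
        return limit is not None and current_usage < limit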

That said, there are some dev resources that we have available to us. Want some help getting this in?

-James

:)=






On 4/9/14, 2:17 PM, "Cazzolato, Sergio J" <sergio.j.cazzolato at intel.com>
wrote:

>So far the plan is to set it by project/user. The idea is that flavor
>quotas could be used either combined with other resource quotas or
>alone.
>
>Example:
>Suppose this scenario:
>. flavor medium has 2 cores and 4GB RAM
>. flavor large has 4 cores and 8GB RAM
>
>Today we can assign the quotas per user in this way:
>. Instances: 4
>. Cores: 8
>. RAM: 16GB
>
>Through this feature:
>. flavor medium: 4
>Or
>. flavor large: 2
>Or
>. flavor medium: 2
>. flavor large: 1
>Or
>. flavor medium: 4
>. Instances: 4
>. Cores: 8
>. RAM: 16GB
>
>So, in case the quota for a user is flavor_medium: 4, the idea is that
>when the user creates a machine based on the medium flavor, the system
>will deny the request if the user has already created 4 such machines,
>and allow it otherwise. And in case the user wants to create a large
>one, the system will deny that, since no quota is set for that flavor.
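>
>(A minimal sketch of that check, assuming usage and limits are plain
>mappings; this is illustrative, not the blueprint's actual code.)
>
>    # Illustrative per-flavor quota check, per the semantics above.
>    def check_flavor_quota(flavor, requested, usage, quotas):
>        """usage/quotas map keys like 'flavor_medium' to counts."""
>        limit = quotas.get('flavor_' + flavor)
>        if limit is None:
>            return False  # no quota for this flavor: deny (the 'large' case)
>        return usage.get('flavor_' + flavor, 0) + requested <= limit
>
>    # The scenario above: flavor_medium = 4, 4 mediums already created.
>    assert not check_flavor_quota('medium', 1, {'flavor_medium': 4},
>                                  {'flavor_medium': 4})
>    assert not check_flavor_quota('large', 1, {}, {'flavor_medium': 4})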
>
>Does it make sense?
>
>Do you think it is enough to cover the scenarios for specific hardware
>that you mentioned in previous emails?
>
>From: Tim Bell [mailto:Tim.Bell at cern.ch]
>Sent: Wednesday, April 09, 2014 5:16 PM
>To: Cazzolato, Sergio J; James Penick; Narayan Desai; Jay Pipes
>Cc: openstack-operators at lists.openstack.org
>Subject: RE: [Openstack-operators] Quota Templates
>
>
>Addressing all of CERN’s needs would be a big patch ☺
>
>I can see a few scenarios which may be overlapping
>
>The quota for a project should be (optionally) set such that the number
>of instances of a particular flavour is limited. Examples would be very
>large VMs, or flavors which match specific aggregates with a different
>allocation scheme.
>
>Can you clarify the scope of the quota? Are we setting it per user,
>per group (with Keystone V3 this is supported) and/or per project?
>
>Tim
>
>From: Cazzolato, Sergio J [mailto:sergio.j.cazzolato at intel.com]
>Sent: 09 April 2014 21:28
>To: James Penick; Tim Bell; Narayan Desai; Jay Pipes
>Cc: openstack-operators at lists.openstack.org
>Subject: RE: [Openstack-operators] Quota Templates
>
>Hi James, thanks for sharing with us.
>
>I am working on the blueprint “per-flavor-quotas” and, based on your
>datacenter needs, I would like to know whether this feature covers all
>the problems you have regarding this topic, or whether we should
>complement it with something else.
>
>https://review.openstack.org/#/c/84432/7
>
>Thanks
>
>
>From: James Penick [mailto:penick at yahoo-inc.com]
>Sent: Tuesday, April 08, 2014 6:37 PM
>To: Tim Bell; Narayan Desai; Jay Pipes
>Cc: openstack-operators at lists.openstack.org
>Subject: Re: [Openstack-operators] Quota Templates
>
>This all seems to be going in a great direction.
>
> One of the things I’ve been working on at Yahoo is integrating
>OpenStack VMs and bare metal into our larger finance process. The
>current quota management system presents us with a couple of
>interesting problems.
>
> When one of our Projects on our internal cloud wants to increase its
>quota, it will need to file a request. That request will be compared
>against projected cluster capacity as well as certain financial models.
>If the request falls within certain parameters it’ll need larger
>finance approval before the quota can be allocated. There are a number
>of capacity folks who would be handling these requests, so there could
>be a race condition when multiple Projects request capacity at the same
>time. So not only do we need a way to grant and track quota increases,
>we also need a concept of state management within a given quota
>request. The advantage of state management is that an enterprise can
>track why and when a given quota increase was requested and granted,
>especially if there’s a field to track an external reference ID (like a
>bug/ticket number). The big change is that we would no longer have one
>item in the DB per-Project-per-resource; instead it’d be
>per-Project-per-resource-per-request. This means we’d also need to
>extend nova to support the concept of ‘soft’ and ‘hard’ quota: hard
>quota is what you can actually allocate against, while ‘soft’ quota
>would let you see what you have plus what’s in the pipe for approval.
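>
>(A rough sketch of what a per-request model with soft/hard quota might
>look like; the states and fields are invented for illustration, not a
>proposed schema.)
>
>    # Hypothetical per-Project-per-resource-per-request rows.
>    REQUESTED, PENDING_FINANCE, APPROVED, DENIED = (
>        'requested', 'pending_finance', 'approved', 'denied')
>
>    quota_requests = [
>        # (project, resource, amount, state, external_ref)
>        ('project-a', 'cores', 64, APPROVED, 'TICKET-101'),
>        ('project-a', 'cores', 32, PENDING_FINANCE, 'TICKET-102'),
>    ]
>
>    def hard_quota(project, resource):
>        # What the project can actually allocate against.
>        return sum(amt for p, r, amt, state, _ in quota_requests
>                   if p == project and r == resource and state == APPROVED)
>
>    def soft_quota(project, resource):
>        # Hard quota plus what is still in the pipe for approval.
>        return sum(amt for p, r, amt, state, _ in quota_requests
>                   if p == project and r == resource
>                   and state in (APPROVED, REQUESTED, PENDING_FINANCE))
>
>    assert hard_quota('project-a', 'cores') == 64
>    assert soft_quota('project-a', 'cores') == 96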
>
> Now for flavor-level quotas. We, too, have a use case where we need to
>limit a Project to certain ‘flavor’ types. In our case it’s primarily
>our bare metal deployment, but I might also want to grant a Project
>differing levels of quota within different cluster types. For example,
>I might want to have some hypervisors slated for no overprovisioning,
>while others would be available at an extremely high overprovision
>rate. This would let me guide our internal tenants onto the best
>possible resources with the right SLA and price point. As was mentioned
>earlier in the thread, there would also be value in granting quota
>against a certain hardware type: SSD hypervisors, hypervisors with SSL
>accelerators, etc. There are a lot of ways I can see slicing the pie in
>my data centers.
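>
>(For the hardware-type slicing, flavor extra specs matched against host
>aggregate metadata already express the mapping; below is a minimal
>Python sketch of the matching idea, not Nova's actual
>AggregateInstanceExtraSpecsFilter code.)
>
>    # Match a flavor's extra specs against an aggregate's metadata.
>    def host_matches_flavor(aggregate_metadata, flavor_extra_specs):
>        return all(aggregate_metadata.get(key) == value
>                   for key, value in flavor_extra_specs.items())
>
>    ssd_aggregate = {'ssd': 'true'}          # set by the operator
>    ssd_flavor_specs = {'ssd': 'true'}       # set on the SSD flavor
>    assert host_matches_flavor(ssd_aggregate, ssd_flavor_specs)
>    assert not host_matches_flavor({}, ssd_flavor_specs)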
>
>While both of these things are of definite need to enterprises running
>private clouds, I imagine smaller public cloud providers would need
>them just as much.
>
>Tim, would you (or anyone else) care to work together on the blueprint
>and reboot Boson? I can contribute insight and code.
>
>Thanks!
>-James
>
>:)=
>
>
>From: Tim Bell <Tim.Bell at cern.ch>
>Date: Monday, April 7, 2014 at 11:45 AM
>To: Narayan Desai <narayan.desai at gmail.com>, Jay Pipes 
><jaypipes at gmail.com>
>Cc: "openstack-operators at lists.openstack.org"
><openstack-operators at lists.openstack.org>
>Subject: Re: [Openstack-operators] Quota Templates
>
> 
>We can add some more CERN brass for this scenario. We want to spread
>workload across the maximum number of hypervisors (don’t overcommit
>unless you have to), but that means that as you approach full, you can
>only give out small VMs that fit in the cracks.
> 
>In my ideal world, we would have a way of saying: schedule for maximum
>distribution across hypervisors, but make sure that there are X slots
>kept free for flavour A, Y slots free for flavour B, etc., for requests
>that exactly match that resource.
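>
>(A toy sketch of such a reserved-slots check; the slot accounting and
>numbers are invented, and a real scheduler filter would be considerably
>more involved.)
>
>    # Keep N slots free per host for a flavour; only exact matches may
>    # consume the reserved capacity.
>    FLAVOR_RAM_MB = {'m1.small': 2048, 'm1.large': 8192}
>    RESERVED = {'m1.large': 1}   # X slots free for flavour A, etc.
>
>    def placement_ok(host_free_ram_mb, flavor_name):
>        remaining = host_free_ram_mb - FLAVOR_RAM_MB[flavor_name]
>        if remaining < 0:
>            return False
>        reserved = sum(n * FLAVOR_RAM_MB[f] for f, n in RESERVED.items()
>                       if f != flavor_name)
>        return remaining >= reserved
>
>    assert placement_ok(10000, 'm1.large')      # exact match may use the slot
>    assert not placement_ok(10000, 'm1.small')  # would break the reserved slot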
> 
>Given a 3-4 year purchasing cycle, the Thailand floods (which meant
>HDDs were not available in volume) and varying processor prices, we’ve
>got a large mixture of different configurations.
> 
>It’s a really tough problem to do this at scale and quickly, but I’d
>love to discuss/debate the options available for turning the cloud
>efficiency up to 11 ☺
> 
>Tim
> 
>From: Narayan Desai [mailto:narayan.desai at gmail.com]
>Sent: 07 April 2014 20:33
>To: Jay Pipes
>Cc: openstack-operators at lists.openstack.org
>Subject: Re: [Openstack-operators] Quota Templates
> 
>You must be using a much smarter OpenStack scheduler than I am. I'll
>agree in principle that this could be true; however, fragmentation
>avoidance is a tricky problem, particularly when you have a big range
>of potential configurations. You can only imagine the sad trombone that
>played in my office the first time the scheduler placed an 8GB instance
>on a bigmem node, blocking that system's largest configuration from
>being usable. We've personally had a lot more luck partitioning
>resources and picking a set of favorable resource combinations than
>letting the scheduler deal with it.
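>
>(One illustration of a best-fit weighting idea that would have avoided
>the bigmem anecdote above, though as noted, partitioning may be the
>more robust approach in practice; this is a toy weigher, not shipped
>scheduler code.)
>
>    # Prefer the host whose free RAM is closest to what the instance
>    # needs, so bigmem nodes stay whole for the biggest requests.
>    def weigh_host(host_free_ram_mb, flavor_ram_mb):
>        if host_free_ram_mb < flavor_ram_mb:
>            return float('-inf')                    # cannot fit at all
>        return -(host_free_ram_mb - flavor_ram_mb)  # less waste, higher weight
>
>    hosts = {'bigmem-1': 65536, 'standard-1': 16384}
>    best = max(hosts, key=lambda h: weigh_host(hosts[h], 8192))
>    assert best == 'standard-1'   # the 8GB instance avoids the bigmem node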
> 
>This is a great discussion. I think that we need more of this kind of 
>thing on the operators list.
> -nld


