[Openstack-operators] Quota Templates

Jay Pipes jaypipes at gmail.com
Sun Apr 6 20:38:50 UTC 2014


On Sun, Apr 6, 2014 at 3:27 AM, Tim Bell <Tim.Bell at cern.ch> wrote:

>
> There is some work on a reservation system going on around the Climate
> project. Last I saw, there was a proposal to integrate this function into
> Nova itself.
>

Yes. The problem with that is that quotas are cross-project functionality,
so this really does belong outside of Nova.

I'd like to focus primarily on the job of quota management for the Boson
effort, however, and not get mired in the topic of resource reservations
(time-based reservations). Climate is already handling the latter.

Boson should provide a simple but flexible way of managing quotas for a
variety of resources, transactionally and across many projects. Anything
outside of that scope should be left to either Climate or some future
milestone, IMO.
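
To make that scope a little more concrete, here is a rough sketch of the kind
of transactional reserve/commit interface I have in mind. To be clear, none of
these names come from Boson (or any other project); it is only meant to show
the shape of the thing:

class QuotaExceeded(Exception):
    pass

class QuotaService(object):
    """Toy model of a cross-service quota manager with reserve/commit/rollback.
    Illustrative only -- invented names, not an existing API."""

    def __init__(self, limits):
        self._limits = dict(limits)   # e.g. {('project-a', 'cores'): 500}
        self._used = {}               # committed usage
        self._held = {}               # in-flight reservations

    def reserve(self, project, deltas):
        """Reserve several resources atomically: all of them, or none."""
        for resource, amount in deltas.items():
            key = (project, resource)
            in_use = self._used.get(key, 0) + self._held.get(key, 0)
            if in_use + amount > self._limits.get(key, 0):
                raise QuotaExceeded("%s is over quota for %s" % (project, resource))
        for resource, amount in deltas.items():
            key = (project, resource)
            self._held[key] = self._held.get(key, 0) + amount

    def commit(self, project, deltas):
        """Turn a reservation into real usage once the resource was created."""
        for resource, amount in deltas.items():
            key = (project, resource)
            self._held[key] -= amount
            self._used[key] = self._used.get(key, 0) + amount

    def rollback(self, project, deltas):
        """Give a reservation back if the create failed."""
        for resource, amount in deltas.items():
            self._held[(project, resource)] -= amount

Nova, Cinder, Neutron and friends would each call reserve() before creating
anything, then commit() or rollback() afterwards, and the limits themselves
would live in one place.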

Best,
-jay


>
> It would be interesting to explore the different quota options. At CERN,
> we're trying to work through some of the scenarios where hypervisors are
> not 100% used and we want to run some additional opportunistic work on them
> (a la spot market). Dynamic flavors are also an interesting option,
> expanding and shrinking according to the workload.
>
> Amongst other things, we need a way to ask a VM to gracefully complete its
> current work (i.e. give it some warning) rather than just shrinking/killing
> it without notice.
>
> Tim
>
> > -----Original Message-----
> > From: Cazzolato, Sergio J [mailto:sergio.j.cazzolato at intel.com]
> > Sent: 06 April 2014 00:35
> > To: Nathanael Burton; Jay Pipes; Joe Topjian
> > Cc: openstack-operators at lists.openstack.org
> > Subject: Re: [Openstack-operators] Quota Templates
> >
> > Nice proposal. Do you think it could be done with something like
> > automatic leasing or reservations?
> >
> > The problem I see is that there could be a conflict if a project requests
> > resources (included in its quota) while another project is already using them.
> >
> > From: Nathanael Burton [mailto:nathanael.i.burton at gmail.com]
> > Sent: Saturday, April 05, 2014 7:14 PM
> > To: Jay Pipes; Joe Topjian
> > Cc: openstack-operators at lists.openstack.org
> > Subject: Re: [Openstack-operators] Quota Templates
> >
> >
> > On Apr 5, 2014 5:42 PM, "Jay Pipes" <jaypipes at gmail.com> wrote:
> > >
> > > On Sat, 2014-04-05 at 14:56 -0600, Joe Topjian wrote:
> > > > Hi Jay,
> > > >
> > > >         The only problem with this is: what happens if an instance
> > > >         type's underlying resource allocation changes? Quotas and
> > > >         reservations generally need to be on the actual resources that
> > > >         are consumed, not on an abstract representation of those
> > > >         resources like an instance type, due to the fact that the
> > > >         abstract representation can change over time.
> > > >
> > > >         My idea about the UI above works nicely with the idea of
> > > >         changeable instance types. The UI can do simple real-time
> > > >         calculations about what a quota allocation represents in terms
> > > >         of the actual resources used by an instance type.
> > > >
> > > >
> > > > Yes, but there is nothing limiting a user to only a set amount of a
> > > > certain resource in a certain environment.
> > > >
> > > > Perhaps I am misunderstanding this discussion, but here is an example
> > > > scenario of how I see it:
> > > >
> > > > I have a small set of compute nodes that are configured in such a way
> > > > that make them "special" (SSD, GPU, etc). I can put those compute
> > > > nodes in a host aggregate and tie a series of flavors to that host
> > > > aggregate.
> > > >
> > > > But since there is only one quota that the user has, there is nothing
> > > > preventing a user from hogging a large majority of those "special"
> > > > nodes. For example, if a user has a quota of 500 cpus and
> > > > 800gb of memory, and if the "special" nodes collectively only have
> > > > 1500 cpus and 2400gb of memory, the user could consume 1/3rd of the
> > > > special resources.
> > > >
> > > > However, if I were able to tie a separate quota to that host aggregate,
> > > > I could give the user a quota of 50 cpus and 80gb of memory alongside
> > > > his standard quota of 500/800 for the rest of the cloud.
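> > > >
> > > > Roughly what I have in mind (totally made-up structures, nothing like
> > > > this exists in Nova today) is a quota check that consults both the
> > > > project-wide limits and a per-aggregate limit when one is defined:
> > > >
> > > > quotas = {
> > > >     "global":  {"cores": 500, "ram_gb": 800},  # the user's normal quota
> > > >     "special": {"cores": 50,  "ram_gb": 80},   # extra limit for the SSD/GPU aggregate
> > > > }
> > > > usage = {"global": {}, "special": {}}          # usage tracked per scope
> > > >
> > > > def can_boot(request, aggregate=None):
> > > >     """The request must fit the global quota and, if it lands on one of
> > > >     the special hosts, that aggregate's own (smaller) quota too."""
> > > >     scopes = ["global"] + ([aggregate] if aggregate in quotas else [])
> > > >     for scope in scopes:
> > > >         for resource, amount in request.items():
> > > >             if usage[scope].get(resource, 0) + amount > quotas[scope][resource]:
> > > >                 return False
> > > >     return True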
> > > >
> > > > I greatly apologize if this scenario is *not* what Narayan was talking
> > > > about. In addition, this is not a feature that I'm personally in
> > > > desperate need of, but I would definitely implement it if it were
> > > > available -- much more than a quota template.
> > >
> > > Yes, the above scenario indeed is an interesting use case. Part of the
> > > issue is that with the addition of the "extra specs" part of the
> > > instance type, the instance type no longer represents just the dedicated
> > > resources that a single instance of that type consumes, but also a
> > > free-form set of "capabilities" that the instance has -- GPU, SSD, etc.
> > >
> > > The trick is really going to be quantifying what those free-form
> > > qualitative "tags" like GPU actually represent when it comes to the
> > > consumption of resources in Nova. As it stands now, quota management has
> > > no real way of differentiating one CPU unit from another. I imagine that
> > > one solution to this would be to attach a "weighting factor" to each
> > > free-form instance type extra spec that would effectively multiply the
> > > actual resource consumed by an instance of that type by that factor,
> > > thereby producing a quantifiable value.
> > >
> > > For example, let's say you had these two instance types:
> > >
> > >  m1.small
> > >   - 2GB memory
> > >   - 10GB disk
> > >   - 1 CPU
> > >
> > >  g1.small
> > >   - 2GB memory
> > >   - 10GB disk
> > >   - 1 CPU
> > >   - extra_specs:
> > >     - gpu
> > >     - ssd
> > >
> > > An admin might assign the "gpu" extra spec a weighting factor of
> > > "cpu:1.5" and the "ssd" extra spec a weighting factor of "disk:10.0".
> > > This could be used by the quota management system to effectively give
> > > the g1.small instance type the following adjusted resource allocation:
> > >
> > >  - 2GB memory
> > >  - 1.5 CPU
> > >  - 100GB disk
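> > >
> > > In rough (and purely hypothetical) code -- none of these names exist in
> > > Nova today -- the adjustment is just a multiplication of each base
> > > resource by the factor its extra specs carry:
> > >
> > >  # Hypothetical sketch of extra-spec weighting factors.
> > >  weights = {"gpu": ("cpu", 1.5), "ssd": ("disk", 10.0)}
> > >
> > >  def adjusted_resources(flavor):
> > >      """Multiply each base resource by the factor its extra specs carry."""
> > >      resources = dict(flavor["resources"])
> > >      for spec in flavor.get("extra_specs", []):
> > >          resource, factor = weights.get(spec, (None, 1.0))
> > >          if resource:
> > >              resources[resource] *= factor
> > >      return resources
> > >
> > >  g1_small = {"resources": {"memory_gb": 2, "disk": 10, "cpu": 1},
> > >              "extra_specs": ["gpu", "ssd"]}
> > >  # adjusted_resources(g1_small) -> {"memory_gb": 2, "disk": 100, "cpu": 1.5}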
> > >
> > > Make sense?
> > >
> > > -jay
> > >
> > >
> > >
> > >
> > We use per-project flavors to somewhat manage this use case. There are a
> > set of default flavors that every project can see and use, and then the
> > more exotic flavors are private and shared with projects as needed. The
> > other addition we made was a few extra quota resources, for example
> > local-gigabytes to manage the type/quantity of local instance storage.
> >
> > I would love to see additional work with regards to quota management and
> > enforcement. Particularly within private clouds, the current quota system
> > doesn't allow for much dynamic growth. I think enhancements to the quota
> > system to allow for something akin to "spot instances" would be really
> > useful. Quotas would enforce how many resources a project is guaranteed,
> > but projects could elastically grow beyond their quotas if the rest of the
> > cloud was underutilized because other projects were not using all their
> > guaranteed resources. When a project that had been under-utilizing its
> > quota then needs to schedule new resources and the scheduler finds the
> > cloud full, it would "reap" resources in a FIFO manner from any projects
> > in excess of their guaranteed quota, freeing up enough resources for the
> > incoming request.
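> >
> > To make the reaping idea concrete, here is a purely hypothetical sketch --
> > none of these names exist in Nova, it is just the shape of the FIFO logic:
> >
> > import collections
> >
> > overage = collections.deque()   # over-quota ("spot") instances, oldest first
> >
> > def record_overage(instance):
> >     """Called whenever a project grows beyond its guaranteed quota."""
> >     overage.append(instance)
> >
> > def make_room(needed_cores):
> >     """Reap the oldest over-quota instances until enough cores are freed
> >     for a request from a project still under its guarantee."""
> >     freed = 0
> >     while freed < needed_cores and overage:
> >         victim = overage.popleft()             # FIFO: oldest opportunistic VM first
> >         print("reaping %s" % victim["name"])   # real code would signal/terminate it,
> >         freed += victim["cores"]               # ideally with the warning Tim asked for
> >     return freed >= needed_cores
> >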
> > Just some thoughts I've had recently.
> > Nate
> >
>