[Openstack-operators] Quota Templates
Joe Topjian
joe at topjian.net
Sat Apr 5 22:42:07 UTC 2014
Hi Jay,
Ah! Very interesting.
My initial reaction is that weighting resources so that flavors are charged
for more than they actually consume could make real usage reporting
difficult - though nothing that reverse-weighting couldn't deal with. It's
a very good idea that points to a simpler approach than what I had in mind,
and for that, thank you very much :)
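
(To make that concrete with the numbers from Jay's example below: a quick
sketch of the weighting and reverse-weighting arithmetic - the names here
are made up, nothing like this exists in Nova:)

    # Hypothetical "ssd" weighting factor of disk:10.0
    DISK_FACTOR = 10.0

    raw_disk = 10                        # GB actually consumed by g1.small
    charged = raw_disk * DISK_FACTOR     # 100 GB counted against the quota
    reported = charged / DISK_FACTOR     # reverse-weight back to 10 GB for reporting
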
Joe
On Sat, Apr 5, 2014 at 3:40 PM, Jay Pipes <jaypipes at gmail.com> wrote:
> On Sat, 2014-04-05 at 14:56 -0600, Joe Topjian wrote:
> > Hi Jay,
> >
> > The only problem with this is: what happens if an instance type's
> > underlying resource allocation changes? Quotas and reservations
> > generally need to be on the actual resources that are consumed, not
> > on an abstract representation of those resources like an instance
> > type, because the abstract representation can change over time.
> >
> > My idea about the UI above works nicely with the idea of changeable
> > instance types. The UI can do simple real-time calculations about
> > what a quota allocation represents in terms of the actual resources
> > used by an instance type.
> >
> >
> > Yes, but there is still nothing limiting a user to a set amount of a
> > given resource in a given environment.
> >
> > Perhaps I am misunderstanding this discussion, but here is an example
> > scenario of how I see it:
> >
> > I have a small set of compute nodes that are configured in a way that
> > makes them "special" (SSD, GPU, etc). I can put those compute nodes
> > in a host aggregate and tie a series of flavors to that host
> > aggregate.
> >
> > But since the user has only one quota, nothing prevents that user
> > from hogging a large share of those "special" nodes. For example, if
> > a user has a quota of 500 CPUs and 800GB of memory, and the "special"
> > nodes collectively have only 1500 CPUs and 2400GB of memory, the user
> > could consume a third of the special resources.
> >
> > However, if I were able to tie a separate quota to that host
> > aggregate, I could give the user a quota of 50 CPUs and 80GB of
> > memory alongside their standard quota of 500/800 for the rest of the
> > cloud - something like the sketch below.
> >
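> > Roughly what I'm picturing is this - purely hypothetical, there is
> > no such per-aggregate quota API in Nova today:
> >
> >     # Hypothetical per-aggregate quotas alongside the standard quota
> >     quotas = {
> >         'default': {'cores': 500, 'ram_gb': 800},
> >         'special': {'cores': 50,  'ram_gb': 80},   # the SSD/GPU aggregate
> >     }
> >
> >     def within_quota(aggregate, req_cores, req_ram_gb, usage):
> >         """Check a request against the quota tied to the target aggregate."""
> >         limit = quotas.get(aggregate, quotas['default'])
> >         used = usage.get(aggregate, {'cores': 0, 'ram_gb': 0})
> >         return (used['cores'] + req_cores <= limit['cores'] and
> >                 used['ram_gb'] + req_ram_gb <= limit['ram_gb'])
> >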
> > I greatly apologize if this scenario is *not* what Narayan was talking
> > about. Also, this is not a feature I'm personally in desperate need
> > of, but I would definitely implement it if it were available - much
> > more so than a quota template.
>
> Yes, the above scenario is indeed an interesting use case. Part of the
> issue is that with the addition of the "extra specs" part of the
> instance type, the instance type no longer represents just the dedicated
> resources that a single instance of that type consumes, but also a
> free-form set of "capabilities" that the instance has -- GPU, SSD, etc.
>
> The trick is really going to be quantifying what those free-form
> qualitative "tags" like GPU actually represent when it comes to the
> consumption of resources in Nova. As it stands now, quota management has
> no real way of differentiating one CPU unit from another. One solution
> would be to attach a "weighting factor" to each free-form instance type
> extra spec, which would multiply the actual resources consumed by an
> instance of that type by the factor, producing a quantifiable value.
>
> For example, let's say you had these two instance types:
>
> m1.small
> - 2GB memory
> - 10GB disk
> - 1 CPU
>
> g1.small
> - 2GB memory
> - 10GB disk
> - 1 CPU
> - extra_specs:
>   - gpu
>   - ssd
>
> An admin might assign the "gpu" extra spec a weighting factor of
> "cpu:1.5" and the "ssd" extra spec a weighting factor of "disk:10.0".
> This could be used by the quota management system to effectively give
> the g1.small instance type the following adjusted resource allocation:
>
> - 2GB memory
> - 1.5 CPU
> - 100GB disk
>
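> In rough code terms (a sketch only -- no such weighting mechanism
> exists in Nova's quota system today):
>
>     # Hypothetical admin-assigned weighting factors per extra spec
>     weights = {'gpu': {'cpu': 1.5}, 'ssd': {'disk': 10.0}}
>
>     g1_small = {'memory': 2, 'disk': 10, 'cpu': 1,
>                 'extra_specs': ['gpu', 'ssd']}
>
>     def adjusted_allocation(flavor):
>         """Multiply each resource by the factors its extra specs carry."""
>         adjusted = {k: v for k, v in flavor.items() if k != 'extra_specs'}
>         for spec in flavor['extra_specs']:
>             for resource, factor in weights.get(spec, {}).items():
>                 adjusted[resource] *= factor
>         return adjusted   # {'memory': 2, 'disk': 100.0, 'cpu': 1.5}
>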
> Make sense?
>
> -jay