<div dir="ltr">FWIW - There was a lengthy discussion in #openstack-dev yesterday regarding this [0].<div><br></div><div><br></div><div>[0] <a href="http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-02-28.log.html#t2017-02-28T17:39:48">http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-02-28.log.html#t2017-02-28T17:39:48</a></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Mar 1, 2017 at 5:42 AM, John Garbutt <span dir="ltr"><<a href="mailto:john@johngarbutt.com" target="_blank">john@johngarbutt.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 27 February 2017 at 21:18, Matt Riedemann <<a href="mailto:mriedemos@gmail.com">mriedemos@gmail.com</a>> wrote:<br>
>> We talked about a few things related to quotas at the PTG, some in
>> cross-project sessions earlier in the week and then some on Wednesday
>> morning in the Nova room. The full etherpad is here [1].
>>
>> Counting quotas
>> ---------------
>>
>> Melanie hit a problem with the counting quotas work in Ocata with respect to
>> how to handle quotas when the cell that an instance is running in is down.
>> The proposed solution is to track project/user ID information in the
>> "allocations" table in the Placement service so that we can get allocation
>> information for quota usage from Placement rather than the cell. That should
>> be a relatively simple change to move this forward and hopefully get the
>> counting quotas patches merged by p-1 so we have plenty of burn-in time for
>> the new quotas code.
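
For context on what "counting from Placement" could look like: a minimal
sketch, assuming allocations carry project/user IDs and that Placement
grows some usages query keyed on them (the /usages URL, parameters, and
response shape below are my guesses, not an agreed API):

    # Hypothetical: count quota usage from Placement allocations instead
    # of asking each (possibly down) cell database.
    import requests

    PLACEMENT = "http://placement.example.test"  # assumed endpoint
    TOKEN = "..."                                 # keystone token, elided

    def get_usage(project_id, user_id):
        """Sum allocations for a project/user across all providers."""
        resp = requests.get(
            PLACEMENT + "/usages",
            params={"project_id": project_id, "user_id": user_id},
            headers={"X-Auth-Token": TOKEN},
        )
        resp.raise_for_status()
        # e.g. {"usages": {"VCPU": 4, "MEMORY_MB": 8192, "DISK_GB": 80}}
        return resp.json()["usages"]

    def is_over_quota(project_id, user_id, requested, limits):
        """Check a requested allocation against limits via counted usage."""
        usage = get_usage(project_id, user_id)
        return any(
            usage.get(rc, 0) + amount > limits.get(rc, float("inf"))
            for rc, amount in requested.items()
        )

The point being that the usage query only needs the Placement database,
which is still reachable when a cell is not.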
>>
>> Centralizing limits in Keystone
>> -------------------------------
>>
>> This actually came up mostly during the hierarchical quotas discussion on
>> Tuesday, which was a cross-project session. The etherpad for that is here
>> [2]. The idea here is that Keystone already knows about the project
>> hierarchy and can be a central location for resource limits, so that
>> various projects, like nova and cinder, don't each need a similar data
>> model and API for limits; we can just make that common in Keystone. The
>> other projects would still track resource usage and calculate when a
>> request is over the limit, but the hope is that the calculation and
>> enforcement can be generalized so we don't have to implement the same
>> thing in all of the projects for calculating when something is over quota.
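
Roughly the split being proposed, as I understand it: Keystone stores the
limits, each service counts its own usage and makes the over-quota call.
A minimal sketch of what a generalized enforcement helper might look like;
fetch_limits() stands in for the not-yet-designed Keystone limits API, and
count_usage() is whatever per-service counting already exists:

    class OverQuota(Exception):
        pass

    def check_quota(fetch_limits, count_usage, project_id, deltas):
        """Raise OverQuota if usage + deltas exceeds any limit.

        deltas maps resource name -> requested amount,
        e.g. {"instances": 1, "cores": 4, "ram": 8192}.
        """
        limits = fetch_limits(project_id)   # from Keystone (assumed API)
        usage = count_usage(project_id)     # counted locally by the service

        over = [
            resource for resource, delta in deltas.items()
            if usage.get(resource, 0) + delta
               > limits.get(resource, float("inf"))
        ]
        if over:
            raise OverQuota("over quota on: " + ", ".join(over))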
>>
>> There is quite a bit of detail in the nova etherpad [1] about overbooking
>> and enforcement modes, which will need to be brought up as options in a
>> spec, and then projects can sort out what makes the most sense (there
>> might be multiple enforcement models available).
>>
>> We still have to figure out the data migration plan to get limits data
>> from each project into Keystone, and what the API in Keystone is going to
>> look like, including what this looks like when you have multiple compute
>> endpoints in the service catalog, or regions, for example.
>>
>> Sean Dague was going to start working on the spec for this.
>>
>> Hierarchical quota support
>> --------------------------
>>
>> The notes on hierarchical quota support are already in [1] and [2]. We
>> agreed to not try to support hierarchical quotas in Nova until we were
>> using limits from Keystone, so that we can avoid the complexity of both
>> systems (limits from Nova and limits from Keystone) in the same API code.
>> We also agreed not to block the counting quotas work that melwitt is
>> doing, since that's already valuable on its own. It's also fair to say
>> that hierarchical quota support in Nova is a Queens item at the earliest,
>> given we have to get limits stored in Keystone in Pike first.
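
To make the hierarchy question concrete, here is one of the enforcement
models from the etherpad discussions, sketched against a Keystone-style
project tree; "strict" here means a parent's limit caps the usage of its
whole subtree (overbooking modes would relax that check). All names are
illustrative, not a proposed API:

    class Project:
        def __init__(self, name, limit, usage=0, children=()):
            self.name = name
            self.limit = limit          # limit on this project's subtree
            self.usage = usage          # usage counted by the service
            self.children = list(children)

        def subtree_usage(self):
            return self.usage + sum(c.subtree_usage() for c in self.children)

    def can_allocate(path, amount):
        """Strict mode: the request must fit under the limit of the
        target project *and* every ancestor on the path from the root."""
        return all(p.subtree_usage() + amount <= p.limit for p in path)

    # team-a asking for 2 more fits its own limit (4 + 2 <= 6) but not
    # the parent's subtree limit (4 + 5 + 2 > 10), so strict mode says no.
    team_a = Project("team-a", limit=6, usage=4)
    team_b = Project("team-b", limit=6, usage=5)
    dept = Project("dept", limit=10, children=[team_a, team_b])
    print(can_allocate([dept, team_a], 2))  # False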
>>
>> Dealing with the os-quota-class-sets API
>> ----------------------------------------
>>
>> I had a spec [3] proposing to clean up some issues with the
>> os-quota-class-sets API in Nova. We agreed that rather than spend time
>> fixing the latent issues in that API, we'd just invest that time in
>> storing and getting limits from Keystone, after which we'll revisit
>> deprecating the quota classes API in Nova.
>>
>> [1] https://etherpad.openstack.org/p/nova-ptg-pike-quotas
>> [2] https://etherpad.openstack.org/p/ptg-hierarchical-quotas
>> [3] https://review.openstack.org/#/c/411035/
<br>
</div></div>I started a quota backlog spec before the PTG to collect my thoughts here:<br>
<a href="https://review.openstack.org/#/c/429678" rel="noreferrer" target="_blank">https://review.openstack.org/#<wbr>/c/429678</a><br>
<br>
I have updated that post summit to include updated details on<br>
hierarchy (ln134) when using keystone to store the limits. This mostly<br>
came from some side discussions in the API-WG room with morgan and<br>
melwitt.<br>
<br>
It includes a small discussion on how the idea behind quota-class-sets<br>
could be turned into something usable, although that is now a problem<br>
for keystone's limits API.<br>
<br>
There were some side discussion around the move to placement meaning<br>
ironic quotas move from vCPU and RAM to custom resource classes. Its<br>
worth noting this largely supersedes the ideas we discussed here in<br>
flavor classes:<br>
<a href="http://specs.openstack.org/openstack/nova-specs/specs/backlog/approved/flavor-class.html" rel="noreferrer" target="_blank">http://specs.openstack.org/<wbr>openstack/nova-specs/specs/<wbr>backlog/approved/flavor-class.<wbr>html</a><br>
<br>
I don't currently plan on taking that backlog spec further, as sdague<br>
is going to take moving this all forward.<br>
<br>
Thanks,<br>
John<br>
<div class="HOEnZb"><div class="h5"><br>
______________________________<wbr>______________________________<wbr>______________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.<wbr>openstack.org?subject:<wbr>unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/<wbr>cgi-bin/mailman/listinfo/<wbr>openstack-dev</a><br>
</div></div></blockquote></div><br></div>