[Openstack-operators] Managing quota for Nova local storage?

Van Leeuwen, Robert rovanleeuwen at ebay.com
Wed Nov 9 07:51:42 UTC 2016


Hi,

I found this thread in the archive, so this is a bit of a late reaction.
We are hitting the same issue, so I created a blueprint:
https://blueprints.launchpad.net/nova/+spec/nova-local-storage-quota

If you guys have already found a nice solution to this problem, I'd like to hear it :)

Robert van Leeuwen
eBay - ECG

From: Warren Wang <warren at wangspeed.com>
Date: Wednesday, February 17, 2016 at 8:00 PM
To: Ned Rhudy <erhudy at bloomberg.net>
Cc: "openstack-operators at lists.openstack.org" <openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] Managing quota for Nova local storage?

We are in the same boat. We can't get rid of ephemeral storage because of its speed and independence. I get it, but it makes managing all these tiny pools a scheduling and capacity nightmare.
Warren @ Walmart

On Wed, Feb 17, 2016 at 1:50 PM, Ned Rhudy (BLOOMBERG/ 731 LEX) <erhudy at bloomberg.net<mailto:erhudy at bloomberg.net>> wrote:
The subject says it all - does anyone know of a method by which quota can be enforced on storage provisioned via Nova rather than Cinder? Googling around appears to indicate that this is not possible out of the box (e.g., https://ask.openstack.org/en/question/8518/disk-quota-for-projects/).

The rationale is that we offer two types of storage: RBD, which goes via Cinder, and LVM, which goes directly via the libvirt driver in Nova. Users know they can escape the constraints of their volume quotas by using the LVM-backed instances, which were designed to provide a fast-but-unreliable RAID 0-backed alternative to slower-but-reliable RBD volumes. Eventually users will hit their max quota in some other dimension (CPU or memory), but we'd like to be able to limit directly on how much local storage is used in a tenancy.

Does anyone have a solution they've already built to handle this scenario? We have a few ideas already for things we could do, but maybe somebody's already come up with something. (Social engineering on our user base by occasionally destroying a random RAID 0 to remind people of its unsafety, while tempting, is probably not a viable candidate solution.)
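For illustration only (this is not something from this thread, and not a real Nova quota): one approach in that vein is an out-of-band audit that periodically sums each tenant's root + ephemeral disk from the instance flavors and alerts, or disables the tenant, above a threshold. A minimal sketch with python-novaclient and keystoneauth1, assuming admin credentials in the usual OS_* environment variables:

#!/usr/bin/env python
# Illustrative only: sum root + ephemeral disk (GB) per tenant from flavors.
# This is an audit, not an enforced quota; it will not reject boot requests.
import collections
import os

from keystoneauth1 import loading
from keystoneauth1 import session
from novaclient import client as nova_client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url=os.environ['OS_AUTH_URL'],
    username=os.environ['OS_USERNAME'],
    password=os.environ['OS_PASSWORD'],
    project_name=os.environ['OS_PROJECT_NAME'],
    user_domain_name=os.environ.get('OS_USER_DOMAIN_NAME', 'Default'),
    project_domain_name=os.environ.get('OS_PROJECT_DOMAIN_NAME', 'Default'))
nova = nova_client.Client('2.1', session=session.Session(auth=auth))

flavor_cache = {}

def local_disk_gb(flavor_id):
    # Root disk plus ephemeral disk, both reported in GB on the flavor.
    # Note: flavors that have since been deleted will raise NotFound here.
    if flavor_id not in flavor_cache:
        f = nova.flavors.get(flavor_id)
        flavor_cache[flavor_id] = f.disk + getattr(f, 'OS-FLV-EXT-DATA:ephemeral', 0)
    return flavor_cache[flavor_id]

usage = collections.defaultdict(int)
for server in nova.servers.list(search_opts={'all_tenants': 1}):
    usage[server.tenant_id] += local_disk_gb(server.flavor['id'])

for tenant, gb in sorted(usage.items()):
    print('%s  %d GB local disk' % (tenant, gb))

A cron job could compare those totals against a per-tenant limit and send mail or freeze the tenant, but unlike a proper Nova-side quota (which is what the blueprint above asks for) it cannot stop the boot request itself.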

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org<mailto:OpenStack-operators at lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


