<html><head><title></title></head><body><!-- rte-version 0.2 9947551637294008b77bce25eb683dac --><div class="rte-style-maintainer" style="white-space: pre-wrap; font-size: small; font-family: 'Courier New', Courier; color: rgb(0, 0, 0);" data-color="global-default" bbg-color="default" data-bb-font-size="medium" bbg-font-size="medium" bbg-font-family="fixed-width">We didn't come up with one. RAM on our HVs is the limiting factor since we don't run with memory overcommit, so the ability of people to run an HV out of disk space ended up being moot. ¯\_(ツ)_/¯<div><br></div><div>Long term we would like to switch to being exclusively RBD-backed and get rid of local storage entirely, but that is Distant Future at best.<br><div class="rte-style-maintainer" style="font-size: small; font-family: 'Courier New', Courier; color: rgb(0, 0, 0);" data-color="global-default" bbg-color="default" data-bb-font-size="medium" bbg-font-size="medium" bbg-font-family="fixed-width"><br><div class="bbg-rte-fold-content" data-header="From: rovanleeuwen@ebay.com" data-digest="From: rovanleeuwen@ebay.com" style=""><div class="bbg-rte-fold-summary">From: rovanleeuwen@ebay.com </div><div>Subject: Re: [Openstack-operators] Managing quota for Nova local storage?<br></div></div><div class="rte-internet-block-wrapper" style="color: black; font-family: Arial, 'BB.Proportional'; font-size: small; white-space: normal; background: white;"><div class="rte-internet-block"><blockquote><span class="bbScopedStyle7263624439947307">        </span><div class="WordSection1"><p class="MsoNormal"><span class="bbScopedStyle7263624439947307"><span style="font-size:11.0pt;font-family:Calibri">Hi,</span></span></p> <p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri"> </span></p> <p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri">Found this thread in the archive, so a bit of a late reaction.</span></p> <p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri">We are 
hitting the same thing, so I created a blueprint:</span></p> <p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri"><a spellcheck="false" bbg-destination="rte:bind" class="" href="https://blueprints.launchpad.net/nova/+spec/nova-local-storage-quota" data-destination="rte:bind">https://blueprints.launchpad.net/nova/+spec/nova-local-storage-quota</a> </span></p> <p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri"> </span></p> <p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri">If you guys already found a nice solution to this problem, I’d like to hear it :)</span></p> <p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri"> </span></p> <p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri">Robert van Leeuwen</span></p> <p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri">eBay - ECG</span></p> <p class="MsoNormal"><span style="font-size:11.0pt;font-family:Calibri"> </span></p> <div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in"><p class="MsoNormal"><b><span style="font-family:Calibri;color:black">From: </span> </b><span style="font-family:Calibri;color:black">Warren Wang &lt;warren@wangspeed.com&gt;<br><b>Date: </b>Wednesday, February 17, 2016 at 8:00 PM<br><b>To: </b>Ned Rhudy &lt;erhudy@bloomberg.net&gt;<br><b>Cc: </b>"openstack-operators@lists.openstack.org" &lt;openstack-operators@lists.openstack.org&gt;<br><b>Subject: </b>Re: [Openstack-operators] Managing quota for Nova local storage?</span></p></div> <div><p class="MsoNormal"><span> </span></p></div> <div><div><p class="MsoNormal" style="margin-bottom:12.0pt">We are in the same boat. Can't get rid of ephemeral for its speed and independence. 
I get it, but it makes management of all these tiny pools a scheduling and capacity nightmare.</p></div> <p class="MsoNormal">Warren @ Walmart</p></div> <div><p class="MsoNormal"><span> </span></p> <div><p class="MsoNormal">On Wed, Feb 17, 2016 at 1:50 PM, Ned Rhudy (BLOOMBERG/ 731 LEX) &lt;<a spellcheck="false" bbg-destination="mailto:rte:bind" class="rte-from-internet" href="mailto:erhudy@bloomberg.net" data-destination="mailto:rte:bind">erhudy@bloomberg.net</a>&gt; wrote:</p> <blockquote style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0in 0in 0in 6.0pt;margin-left:4.8pt;margin-right:0in"><div><div><p class="MsoNormal"><span style="font-family:'Courier New';color:black">The subject says it all - does anyone know of a method by which quota can be enforced on storage provisioned via Nova rather than Cinder? Googling around appears to indicate that this is not possible out of the box (e.g., <a spellcheck="false" bbg-destination="https://ask.openstack.org/en/question/8518/disk-quota-for-projects/" class="" href="https://ask.openstack.org/en/question/8518/disk-quota-for-projects/" data-destination="https://ask.openstack.org/en/question/8518/disk-quota-for-projects/">https://ask.openstack.org/en/question/8518/disk-quota-for-projects/</a>). </span></p> <div><p class="MsoNormal"><span style="font-family:'Courier New';color:black"> </span></p></div> <div><p class="MsoNormal"><span style="font-family:'Courier New';color:black">The rationale is that we offer two types of storage: RBD, which goes via Cinder, and LVM, which goes directly via the libvirt driver in Nova. Users know they can escape the constraints of their volume quotas by using the LVM-backed instances, which were designed to provide a fast-but-unreliable RAID 0-backed alternative to slower-but-reliable RBD volumes. 
Eventually users will hit their max quota in some other dimension (CPU or memory), but we'd like to be able to limit based directly on how much local storage is used in a tenancy.</span></p></div> <div><p class="MsoNormal"><span style="font-family:'Courier New';color:black"> </span></p></div> <div><p class="MsoNormal"><span style="font-family:'Courier New';color:black">Does anyone have a solution they've already built to handle this scenario? We have a few ideas already for things we could do, but maybe somebody's already come up with something. (Social engineering on our user base by occasionally destroying a random RAID 0 to remind people of their unsafety, while tempting, is probably not a viable candidate solution.)</span></p></div></div></div> <p class="MsoNormal" style="margin-bottom:12.0pt"><br>_______________________________________________<br>OpenStack-operators mailing list<br><a spellcheck="false" bbg-destination="mailto:rte:bind" class="rte-from-internet" href="mailto:OpenStack-operators@lists.openstack.org" data-destination="mailto:rte:bind">OpenStack-operators@lists.openstack.org</a><br><a spellcheck="false" bbg-destination="rte:bind" class="" href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" data-destination="rte:bind">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a></p></blockquote></div> <p class="MsoNormal"><span> </span></p></div></div>  </blockquote><br></div></div></div></div></div></body></html>