[openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

Gerald McBrearty gfm at us.ibm.com
Fri Jun 8 15:31:18 UTC 2018


Dan Smith <dms at danplanet.com> wrote on 06/08/2018 08:46:01 AM:

> From: Dan Smith <dms at danplanet.com>
> To: melanie witt <melwittt at gmail.com>
> Cc: "OpenStack Development Mailing List \(not for usage questions\)"
> <openstack-dev at lists.openstack.org>, 
openstack-operators at lists.openstack.org
> Date: 06/08/2018 08:48 AM
> Subject: Re: [openstack-dev] [nova] increasing the number of allowed
> volumes attached per instance > 26
> 
> > Some ideas that have been discussed so far include:
> 
> FYI, these are already in my order of preference.
> 
> > A) Selecting a new, higher maximum that still yields reasonable
> > performance on a single compute host (64 or 128, for example). Pros:
> > helps prevent the potential for poor performance on a compute host
> > from attaching too many volumes. Cons: doesn't let anyone opt-in to a
> > higher maximum if their environment can handle it.
> 
> I prefer this because I think it can be done per virt driver, for
> whatever actually makes sense there. If powervm can handle 500 volumes
> in a meaningful way on one instance, then that's cool. I think libvirt's
> limit should likely be 64ish.
> 

As long as this can be done on a per-virt-driver basis, as Dan says,
I think I would also prefer this option.

Actually, the meaningful number for powervm is much higher than 500.
I'm thinking the powervm limit could likely be 4096ish. On powervm we
have an OS where the meaningful limit is 4096 volumes, but routinely
most operators would have between 1000 and 2000.
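
For illustration, a per-driver cap could be as simple as a class
attribute that each virt driver overrides. A minimal sketch, assuming
a hypothetical attribute name (this is not actual nova code):

    # Hypothetical per-driver attachment cap; the attribute name and
    # the values are illustrative only.
    class ComputeDriver(object):
        # Conservative default for drivers that don't override it.
        max_volumes_per_instance = 26

    class LibvirtDriver(ComputeDriver):
        max_volumes_per_instance = 64

    class PowerVMDriver(ComputeDriver):
        # powervm guests can meaningfully address far more volumes.
        max_volumes_per_instance = 4096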

-Gerald

> > B) Creating a config option to let operators choose how many volumes
> > are allowed to be attached to a single instance. Pros: lets operators opt-in to
> > a maximum that works in their environment. Cons: it's not discoverable
> > for those calling the API.
> 
> This is a fine compromise, IMHO, as it lets operators tune it per
> compute node based on the virt driver and the hardware. If one compute
> is using nothing but iSCSI over a single 10g link, then they may need to
> clamp that down to something more sane.
> 
> Like the per virt driver restriction above, it's not discoverable via
> the API, but if it varies based on compute node and other factors in a
> single deployment, then making it discoverable isn't going to be very
> easy anyway.
> 
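For reference, option B would presumably be an ordinary oslo.config
integer option along these lines; the option name, group, and default
below are hypothetical, not actual nova configuration:

    from oslo_config import cfg

    # Hypothetical option; name, group, and default are illustrative.
    CONF = cfg.CONF
    CONF.register_opt(
        cfg.IntOpt('max_volumes_per_instance',
                   default=26,
                   min=1,
                   help='Maximum number of volumes that can be '
                        'attached to a single instance on this '
                        'compute node.'),
        group='compute')
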
> > C) Create a configurable API limit for maximum number of volumes to
> > attach to a single instance that is either a quota or similar to a
> > quota. Pros: lets operators opt-in to a maximum that works in their
> > environment. Cons: it's yet another quota?
> 
> Do we have any other quota limits that are per-instance like this would
> be? If not, then this would likely be weird, but if so, then this would
> also be an option, IMHO. However, it's too much work for what is really
> not a hugely important problem, IMHO, and both of the above are
> lighter-weight ways to solve this and move on.
> 
> --Dan
> 
> 
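For completeness, the quota-style check in option C would boil down to
something like the following standalone sketch; the exception and
function names are illustrative, not actual nova code:

    class VolumeLimitExceeded(Exception):
        """Raised when an attach would exceed the per-instance limit."""

    def check_volume_limit(attached_count, requested, limit):
        # Quota-style check: what's already attached plus what's being
        # requested must stay within the per-instance ceiling.
        if attached_count + requested > limit:
            raise VolumeLimitExceeded(
                'instance has %d volumes attached; attaching %d more '
                'would exceed the limit of %d'
                % (attached_count, requested, limit))
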
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

