[openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26
Sahid Orentino Ferdjaoui
sferdjao at redhat.com
Mon Jun 11 09:55:29 UTC 2018
On Fri, Jun 08, 2018 at 11:35:45AM +0200, Kashyap Chamarthy wrote:
> On Thu, Jun 07, 2018 at 01:07:48PM -0500, Matt Riedemann wrote:
> > On 6/7/2018 12:56 PM, melanie witt wrote:
> > > Recently, we've received interest in increasing the maximum number of
> > > volumes allowed to attach to a single instance beyond 26. The limit of 26
> > > comes from a historical limitation in libvirt (if I remember correctly)
> > > that no longer applies at the libvirt level in the present day. So,
> > > we're looking at providing a way to attach more than 26 volumes to a
> > > single instance, and we want your feedback.
> >
> > The 26 volumes thing is a libvirt driver restriction.
>
> The original limitation of 26 disks existed because, at that time, there
> was no 'virtio-scsi'.
>
> (With 'virtio-scsi', each controller allows up to 256 targets, and
> each target can use any LUN (Logical Unit Number) from 0 to 16383
> (inclusive). Therefore, the maximum number of disks allowed on a single
> 'virtio-scsi' controller is 256 * 16384 == 4194304.) Source: [1].
Not totally true for Nova. Nova uses one virtio-scsi controller per
guest and plugs all the volumes into one target, so in theory that would
be 16384 LUNs (only).
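
To make that concrete, here is a rough sketch (plain Python; the dict
keys just mirror the libvirt <address> element, and the helper itself is
hypothetical, not Nova code) of what putting every volume on a single
controller and target means for addressing:

    # Hypothetical illustration, not Nova code: with one virtio-scsi
    # controller per guest and every disk on target 0, only the 'unit'
    # (LUN) field grows, so the ceiling is 16384 volumes rather than
    # 256 * 16384.
    def scsi_disk_address(disk_index):
        return {
            'type': 'drive',
            'controller': 0,     # single virtio-scsi controller per guest
            'bus': 0,
            'target': 0,         # all volumes share the same target...
            'unit': disk_index,  # ...so only the LUN number increases
        }

    # LUNs run from 0 to 16383 inclusive on that one target.
    MAX_LUNS_PER_TARGET = 16384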
But you made a good point: the 26 volumes thing is not a libvirt driver
restriction. For example, the native QEMU SCSI implementation handles
256 disks.
About the virtio-blk limitation, I made the same finding, but Tsuyoshi
Nagata shared an interesting point: virtio-blk is no longer limited by
the number of PCI slots available, at least with recent kernel and QEMU
versions [0].
I could go along with what you are suggesting at the bottom and fix the
limit at 256 disks.
[0] https://review.openstack.org/#/c/567472/16/nova/virt/libvirt/blockinfo.py@162
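
For illustration only, a minimal sketch of such a cap (this is not the
actual blockinfo.py code; the names and the exception are made up):

    # Illustrative only: not the actual nova/virt/libvirt/blockinfo.py
    # code; names and the exception type are made up for this example.
    MAX_DISK_DEVICES = 256  # the cap discussed above

    def check_disk_device_count(num_devices):
        """Reject requests that exceed the assumed 256-disk ceiling."""
        if num_devices > MAX_DISK_DEVICES:
            raise ValueError(
                "Requested %d disk devices, but the libvirt driver "
                "supports at most %d" % (num_devices, MAX_DISK_DEVICES))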
> [...]
>
> > > Some ideas that have been discussed so far include:
> > >
> > > A) Selecting a new, higher maximum that still yields reasonable
> > > performance on a single compute host (64 or 128, for example). Pros:
> > > helps prevent the potential for poor performance on a compute host from
> > > attaching too many volumes. Cons: doesn't let anyone opt-in to a higher
> > > maximum if their environment can handle it.
>
> Option (A) can still be considered: We can limit it to 256 disks. Why?
>
> FWIW, I did some digging here:
>
> The upstream libguestfs project, after some thorough testing, arrived at
> a limit of 256 disks, and suggests the same for Nova. And if anyone
> wants to increase that limit, the proposer should come up with a fully
> worked-through test plan. :-) (Try doing any meaningful I/O to so many
> disks at once, and see how well that works out.)
>
> What's more, libguestfs upstream tests 256 disks, and even _that_
> fails sometimes:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1478201 -- "kernel runs
> out of memory with 256 virtio-scsi disks"
>
> The above bug is now fixed in kernel-4.17.0-0.rc3.git1.2. (It also
> required a corresponding fix in QEMU [2], which is available from
> version v2.11.0 onwards.)
>
> [...]
>
>
> [1] https://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg02823.html
> -- virtio-scsi limits
> [2] https://git.qemu.org/?p=qemu.git;a=commit;h=5c0919d
>
> --
> /kashyap
>