[openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time
duncan.thomas at gmail.com
Tue May 9 16:17:46 UTC 2017
On 5 May 2017 at 23:45, Chris Friesen <chris.friesen at windriver.com> wrote:
> On 05/05/2017 02:04 PM, John Griffith wrote:
>> I'd love some detail on this. What falls over?
> It's been a while since I looked at it, but the main issue was that with LIO
> as the iSCSI server there is no automatic traffic shaping/QoS between
> guests, or between guests and the host. (There's no iSCSI server process to
> assign to a cgroup, for example.)
> The throttling in IOPS/Bps is better than nothing, but doesn't really help
> when you don't necessarily know what your total IOPS/bandwidth actually is
> or how many volumes could get created.
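A rough sketch of the problem described above: static per-volume caps can't guarantee anything when the backend's total capacity and the eventual volume count are both unknown. The numbers and names below are illustrative assumptions, not values from any cinder deployment.

```python
# Sketch: static per-volume IOPS caps vs. an unknown backend budget.
# PER_VOLUME_IOPS_CAP stands in for something like a front-end QoS limit;
# BACKEND_CAPACITY_IOPS is what the disks can actually sustain.
PER_VOLUME_IOPS_CAP = 500
BACKEND_CAPACITY_IOPS = 5000

def worst_case_demand(num_volumes: int) -> int:
    """Aggregate demand if every capped volume runs flat out at its limit."""
    return num_volumes * PER_VOLUME_IOPS_CAP

def oversubscribed(num_volumes: int) -> bool:
    """True once the caps no longer protect the backend."""
    return worst_case_demand(num_volumes) > BACKEND_CAPACITY_IOPS

# Ten volumes exactly fill the backend; scheduling an eleventh pushes
# worst-case demand past capacity and every guest can slow down.
```

The point is that the cap has to be chosen relative to a total that the operator may simply not know.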
> So you have one or more guests hammering on the disk as fast as they
> can, combined with disks on the cinder server that maybe aren't as fast
> as they should be, and they end up slowing down all the other guests. And
> if the host is using the same physical disks for things like glance
> downloads or image conversion, then a badly-behaved guest can cause
> performance issues on the host as well due to IO congestion. And if they
> fill up the host caches they can even affect writes to other unrelated volumes.
> So yes, it wasn't the ideal hardware for the purpose, and there are some
> tuning knobs, but in an ideal world we'd be able to reserve some
> amount/percentage of bandwidth/IOPs for the host and have the rest shared
> equally between all active iSCSI sessions (or unequally via a share
> allocation if desired).
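The reservation-plus-fair-share policy asked for above could be sketched roughly as follows. This is not an existing cinder or LIO API; the function name, the host-reserve fraction, and the share weights are all hypothetical.

```python
# Sketch of weighted fair-share IOPS allocation: reserve a fraction of
# the backend for the host, then split the remainder among active iSCSI
# sessions in proportion to their share weights.

def allocate_iops(total_iops: float, host_reserve_frac: float,
                  session_shares: dict) -> tuple:
    """Return (host_iops, {session: iops}) under weighted fair share."""
    host_iops = total_iops * host_reserve_frac
    remaining = total_iops - host_iops
    total_shares = sum(session_shares.values()) or 1
    per_session = {
        name: remaining * share / total_shares
        for name, share in session_shares.items()
    }
    return host_iops, per_session

# Equal shares: the host keeps its reserve, and each of four sessions
# gets an equal quarter of what is left.
host, sessions = allocate_iops(8000, 0.25, {"s1": 1, "s2": 1, "s3": 1, "s4": 1})
```

Unequal weights (say `{"s1": 2, "s2": 1}`) would give the equivalent of the "share allocation" mentioned above; the hard part in practice is enforcing it, since LIO has no per-session process to throttle and array vendors keep their internal schedulers proprietary.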
So that's a complaint that it can't do magic with underspecced,
overloaded hardware, plus a request for fair-share I/O or network
scheduling? The latter is maybe something cinder could look at, though
we're limited by the available technologies - array vendors tend to
keep such things proprietary. Note that it is trivial to overload many
SANs too, both the data path and the control path.