[Openstack-operators] [Openstack] Extreme network throughput tuning with KVM as hypervisor
narayan.desai at gmail.com
Wed Jan 15 03:32:47 UTC 2014
We don't have a workload remotely like that (generally we have a lot more
demand for bandwidth, but we also generally run faster networks than that),
but 1k pps sounds awfully low, as in low by several orders of magnitude.
I didn't measure pps in our benchmarking, but we did manage to saturate a 10GE
link from a VM (actually we did this on 10 nodes at a time to saturate a
100GE wide-area link), and all of those settings are here:
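If you do want a pps number to compare against, one way that needs no extra
tooling is to diff the packet counters in /proc/net/dev one second apart; a
minimal sketch (it defaults to the loopback interface so it runs anywhere;
point it at your bond or NIC in practice):

```shell
#!/bin/sh
# Estimate received/transmitted packets per second on an interface by
# sampling the packet counters in /proc/net/dev twice, one second apart.
IFACE="${1:-lo}"   # defaults to loopback for illustration; pass e.g. bond0

read_pkts() {
    # In /proc/net/dev, after the "<iface>:" name come 8 RX fields then
    # 8 TX fields; RX packets is the 2nd number, TX packets the 10th.
    awk -v ifc="$IFACE" '{sub(":", " "); if ($1 == ifc) print $3, $11}' \
        /proc/net/dev
}

set -- $(read_pkts); rx1=$1; tx1=$2
sleep 1
set -- $(read_pkts); rx2=$1; tx2=$2

rx_pps=$((rx2 - rx1))
tx_pps=$((tx2 - tx1))
echo "rx_pps=$rx_pps tx_pps=$tx_pps"
```

Run it while load is on the box; comparing the result against the NIC's line
rate tells you whether you are pps-bound rather than bandwidth-bound.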
I'd start trying to do some fault isolation; see if you can get NAT out of
the mix, for example, or see if it is a network stack tuning problem. You
probably need to crank up some of your buffer sizes, even if you don't need
to mess with your TCP windows.
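For concreteness, the buffer-size knobs usually meant here live in sysctl; a
sketch of a starting point (the values are illustrative, not a recommendation
tuned for this workload):

```
# /etc/sysctl.d/99-net-buffers.conf -- illustrative values only
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 250000
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

Apply with `sysctl -p /etc/sysctl.d/99-net-buffers.conf` and re-measure; raise
or lower the maxima based on observed drops, not guesswork.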
Can you actually saturate your 2x1GbE LAG with bandwidth? (single or ganged
streams?)
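A quick way to answer that question is an iperf3 run between two hosts, once
with a single stream and once with several in parallel. A sketch (the
hostname is a placeholder; the commands are printed here rather than executed
so you can run them by hand on both ends):

```shell
#!/bin/sh
# Sketch of a saturation test for a 2x1GbE bond. SERVER is a hypothetical
# peer that must be running `iperf3 -s`; substitute a real hostname.
SERVER="compute-test-host"

cat <<EOF
# Single TCP stream: bonding hashes one flow onto one slave,
# so this should top out near 1 Gbit/s.
iperf3 -c $SERVER -t 30
# Eight parallel ("ganged") streams: with layer3+4 hashing, flows can
# spread across both slaves and approach 2 Gbit/s aggregate.
iperf3 -c $SERVER -t 30 -P 8
EOF
```

If the ganged run cannot reach ~2 Gbit/s either, the bottleneck is below TCP
(bond hash policy, NIC, or switch), not in the guest stack.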
On Tue, Jan 14, 2014 at 3:52 PM, Alejandro Comisario <
alejandro.comisario at mercadolibre.com> wrote:
> Wow, it's kind of hard to imagine we are the only ones that have only 100 Mb/s
> of bandwidth but 50,000 requests per minute on each compute node; I mean, lots
> of throughput, almost no bandwidth.
> Has everyone else got their networking performance figured out?
> No one to share some "SUPER THROUGHPUT" sysctl / ethtool / power / etc.
> settings on the compute side?
> Best regards.
> * alejandrito*
> On Sat, Jan 11, 2014 at 4:12 PM, Alejandro Comisario <
> alejandro.comisario at mercadolibre.com> wrote:
>> Well, it's been a long time since we started using nova with KVM; we got
>> over the many-thousand-VM mark, and still, something doesn't feel right.
>> We are using Ubuntu 12.04, kernel 3.2.0-[40-48], with lots of sysctl
>> parameters tuned, and everything ... works, you could say, quite well.
>> But here's the deal: we have a special networking scenario, which is,
>> EVERYTHING IS APIs; everything is throughput, almost no bandwidth.
>> Each 2x1Gb bonded compute node doesn't get over 200-400 Mb/s,
>> but it's handling hundreds of thousands of requests per minute to the VMs.
>> And once in a while you get the sensation that everything goes to
>> hell: timeouts from applications here, API response times going
>> from 10ms to 200ms there, 20ms delays happening between the VM's eth0
>> and the vnet interface, etc.
>> So, since it's a massive scenario to tune, we never quite nailed down WHERE
>> to apply those final 1, 2 or 3 buffer/ring/affinity tweaks to make
>> everything work from the compute side.
>> I know it's a little awkward, but I'm craving real-life community examples
>> of "HIGH THROUGHPUT" tuning in KVM scenarios, dark Linux magic, or someone
>> who can help me go through configurations that might sound weird /
>> unnecessary / incorrect.
>> For those who are wondering, well ... I don't know what you have, so let's
>> start with this.
>> COMPUTE NODES (99% of them, different vendors, but ...)
>> * 128/256 GB of ram
>> * 2 hex-core CPUs with HT enabled
>> * 2x1Gb bonded interfaces (if you want to know the more than 20 NIC models
>> we are using, just ask)
>> * Multi-queue interfaces, pinned via IRQ affinity to different cores
>> * Ubuntu 12.04, kernel 3.2.0-[40-48]
>> * Linux bridges, no VLAN, no open-vswitch
>> I want to keep the networking appliances (ToRs, aggregation, core switches)
>> as far out of the picture as possible.
>> I'm thinking, "I hope this thread gets great, in time."
>> So, I'm ready to learn as much as I can.
>> Thank you, OpenStack community, as always.
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org