[Openstack] Extreme network throughput tuning with KVM as hypervisor
Alejandro Comisario
alejandro.comisario at mercadolibre.com
Tue Jan 14 21:52:52 UTC 2014
Wow, it's kind of hard to imagine we are the only ones with barely 100 Mb/s of
bandwidth but 50,000 requests per minute on each compute node; I mean, lots of
throughput, almost no bandwidth.
Has everyone else got their networking performance figured out?
Is no one willing to share some "SUPER THROUGHPUT" sysctl / ethtool / power / etc.
settings for the compute side?
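To make it concrete, here is the kind of baseline I am fishing for (a minimal
sketch; eth0 and every value below are illustrative assumptions, not our
production settings):

    # Widen connection backlogs for high request rates (example values)
    sysctl -w net.core.somaxconn=4096
    sysctl -w net.core.netdev_max_backlog=250000
    sysctl -w net.ipv4.tcp_max_syn_backlog=8192
    # Grow the NIC ring buffers to absorb bursts of small packets
    ethtool -G eth0 rx 4096 tx 4096
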
Best regards.
*alejandrito*
On Sat, Jan 11, 2014 at 4:12 PM, Alejandro Comisario <
alejandro.comisario at mercadolibre.com> wrote:
> Well, it's been a long time since we started using Nova with KVM; we have
> grown past many thousands of VMs, and still, something doesn't feel right.
> We are using Ubuntu 12.04 with kernel 3.2.0-[40-48] and a sysctl tuned with
> lots of parameters, and everything ... works, you could say, quite well.
>
> But here's the deal: we have a special networking scenario, which is that
> EVERYTHING IS APIs; everything is throughput, not bandwidth.
> Each compute node with 2x1Gb bonded interfaces never gets over 200-400 Mb/s,
> but it is handling hundreds of thousands of requests per minute to the VMs.
>
> And once in a while, you get the sensation that everything goes to hell:
> timeouts from applications here, API response times jumping from 10 ms to
> 200 ms there, 20 ms delays appearing between the VM's eth0 and the host's
> vnet interface, etc.
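>
> When we chase those eth0-to-vnet delays, the first things we look at are the
> per-interface drop counters and the tap queue length. A rough sketch (vnet0
> stands in for whichever tap device libvirt created, and 10000 is just an
> example value):
>
>     ip -s link show vnet0                   # RX/TX drop counters on the tap
>     ip link set dev vnet0 txqueuelen 10000  # the default of 500 overflows
>                                             # easily at high packet rates
>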
> So, since it's a massive scenario to tune, we never quite nailed WHERE to
> apply those final 1, 2 or 3 buffer/ring/affinity tweaks to make everything
> work from the compute side.
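>
> As an example of what I mean by a buffer/ring tweak (eth0 is a stand-in for
> one of the bond slaves, and the values are illustrative, not a recipe):
>
>     ethtool -g eth0                 # current vs. maximum ring sizes
>     ethtool -S eth0 | grep -i drop  # per-queue drop counters
>     ethtool -C eth0 rx-usecs 50     # interrupt coalescing, example value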
>
> I know it's a little awkward, but I'm craving, and hunting for, real-life
> community examples of "HIGH THROUGHPUT" tuning in KVM scenarios, dark corners
> of Linux, or someone who can walk me through configurations that might sound
> weird / unnecessary / incorrect.
>
> For those who are wondering "well ... I don't know what you have", let's
> start with this.
>
> COMPUTE NODES (99% of them, different vendors, but ...)
> * 128/256 GB of RAM
> * 2 hex-core CPUs with HT enabled
> * 2x1Gb bonded interfaces (if you want to know the more than 20 NIC models
> we are using, just ask)
> * Multi-queue interfaces, with IRQs pinned to different cores (see the
> sketch after this list)
> * Ubuntu 12.04, kernel 3.2.0-[40-48]
> * Linux bridges, no VLANs, no Open vSwitch
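>
> The IRQ pinning mentioned above is done roughly like this (simplified; the
> IRQ number 45, the CPU masks and the queue name are illustrative, grep
> /proc/interrupts on your own boxes for the real numbers):
>
>     grep eth0 /proc/interrupts              # find the per-queue IRQ numbers
>     echo 2 > /proc/irq/45/smp_affinity      # pin IRQ 45 to CPU1 (mask 0x2)
>     echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus  # RPS on CPUs 0-3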
>
> I want to keep the networking appliances (ToR, aggregation and core
> switches) out of the picture as much as possible.
> I'm thinking, "I hope this thread gets great, in time."
>
> So, I'm ready to learn as much as I can.
> Thank you, OpenStack community, as always.
>
> alejandrito
>
>