<div dir="ltr">We don't have a workload remotely like that (generally, we have a lot more demand for bandwidth, but we also generally run faster networks than that as well), but 1k pps sounds awfully low. Like low by several orders of magnitude.<div>
I didn't measure pps in our benchmarking, but I did manage to saturate a 10GE link from a VM (actually we did this on 10 nodes at a time to saturate a 100GE wide-area link), and all of those settings are here:
<div><a href="http://buriedlede.blogspot.com/2012/11/driving-100-gigabit-network-with.html">http://buriedlede.blogspot.com/2012/11/driving-100-gigabit-network-with.html</a></div><div><br></div><div>I'd start trying to do some fault isolation; see if you can get NAT out of the mix, for example, or see if it is a network stack tuning problem. You probably need to crank up some of your buffer sizes, even if you don't need to mess with your TCP windows. </div>
Can you actually saturate your 2x1GE LAG with bandwidth (single or ganged flows)?
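iperf between two nodes is the quickest way to answer that. If you want something self-contained to script around, below is a rough single-flow TCP test sketch; the port is an arbitrary placeholder, and depending on the bonding mode a single flow may be hashed onto one slave only, so you will usually need several parallel flows (multiple clients) to fill both links:

#!/usr/bin/env python
# Sketch: minimal single-flow TCP throughput test (a crude stand-in for iperf).
# Run "server" on one node and "client <host>" on another.
import socket
import sys
import time

PORT = 5201            # arbitrary test port, pick anything free
CHUNK = b"x" * 65536   # 64 KiB writes
DURATION = 10          # seconds the client transmits for

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _peer = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(65536)
        if not data:
            break
        total += len(data)
    secs = max(time.time() - start, 1e-6)
    print("received %.1f MB in %.1f s = %.2f Mbit/s" % (total / 1e6, secs, total * 8 / secs / 1e6))

def client(host):
    cli = socket.create_connection((host, PORT))
    total, start = 0, time.time()
    while time.time() - start < DURATION:
        cli.sendall(CHUNK)
        total += len(CHUNK)
    cli.close()
    print("sent %.1f MB = %.2f Mbit/s" % (total / 1e6, total * 8 / DURATION / 1e6))

if __name__ == "__main__":
    if len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        server()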
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_default" style="font-size:small;font-family:courier new,monospace">Wow, its kinda hard to imagine we are the only ones that have only 100Mb/s bandwidth but 50.000 requests per minute on each compute, i mean, lots of throughput, almost none bandwith.</div>
<div class="gmail_default" style="font-size:small;font-family:courier new,monospace"><br></div><div class="gmail_default" style="font-size:small;font-family:courier new,monospace">Everyone has their networking performance figured out ?</div>
<div class="gmail_default" style="font-size:small;font-family:courier new,monospace">No one to share some "SUPER THROUGHPUT" sysctl / ethtool / power / etc settings on the compute side ?</div><div class="gmail_default" style="font-size:small;font-family:courier new,monospace">
<br></div><div class="gmail_default" style="font-size:small;font-family:courier new,monospace">Best regards.</div><div class="gmail_extra"><br clear="all"><div><div><font><b><div class="gmail_default" style="font-size:small;display:inline;font-family:'courier new',monospace">
alejandrito</div></b></font></div></div><div><div class="h5"><br><div class="gmail_quote">On Sat, Jan 11, 2014 at 4:12 PM, Alejandro Comisario <span dir="ltr"><<a href="mailto:alejandro.comisario@mercadolibre.com" target="_blank">alejandro.comisario@mercadolibre.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div style="font-size:small;font-family:'courier new',monospace">Well, its been a long time since we use nova with KVM, we got over the many thousand vms, and still, something doesnt feel right.</div>
<div style="font-size:small;font-family:'courier new',monospace">We are using ubuntu 12.04 kernel 3.2.0-[40-48], tuned sysctl with lots of parameters, and everything ... works, you can say, quite well.</div>
<div style="font-size:small;font-family:'courier new',monospace"><br></div><div style="font-size:small;font-family:'courier new',monospace">
But here's the deal, we have an special networking scenario that is, EVERYTHING IS APIS, everything is throughput, no bandwidth.</div><div style="font-size:small;font-family:'courier new',monospace">
Every 2x1Gb bonded compute node, doesnt get over the [200Mb/s - 400Mb/s] but its handling hundreds of thousands requests per minute to the vms.</div><div style="font-size:small;font-family:'courier new',monospace">
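As a rough way to put numbers on "throughput, not bandwidth" on a given node, here is a sketch that samples /proc/net/dev twice and prints packets/s and Mbit/s per interface; it only reads standard /proc counters, nothing OpenStack-specific:

#!/usr/bin/env python
# Sketch: sample /proc/net/dev twice and print packets/s and Mbit/s per
# interface over the interval.
import time

INTERVAL = 5  # seconds between the two samples

def read_counters():
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:          # skip the two header lines
            iface, data = line.split(":", 1)
            fields = data.split()
            # layout: rx_bytes rx_packets ... (8 rx fields) tx_bytes tx_packets ...
            counters[iface.strip()] = (int(fields[0]), int(fields[1]),
                                       int(fields[8]), int(fields[9]))
    return counters

before = read_counters()
time.sleep(INTERVAL)
after = read_counters()

for iface in sorted(after):
    if iface not in before:
        continue
    rx_bytes = after[iface][0] - before[iface][0]
    rx_pkts = after[iface][1] - before[iface][1]
    tx_bytes = after[iface][2] - before[iface][2]
    tx_pkts = after[iface][3] - before[iface][3]
    print("%-10s rx %8.0f pps %8.1f Mbit/s   tx %8.0f pps %8.1f Mbit/s" % (
        iface,
        rx_pkts / float(INTERVAL), rx_bytes * 8 / float(INTERVAL) / 1e6,
        tx_pkts / float(INTERVAL), tx_bytes * 8 / float(INTERVAL) / 1e6))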
And once in a while it feels like everything goes to hell: timeouts from applications here, API response times jumping from 10 ms to 200 ms there, 20 ms delays between the VM's eth0 and the vnet interface, and so on.
Since it's a massive scenario to tune, we never quite nailed down WHERE to apply that final 1, 2 or 3 buffer/ring/affinity tweaks to make everything work from the compute side.

I know it's a little awkward, but I'm craving real-life community examples of "HIGH THROUGHPUT" tuning for KVM scenarios, dark Linux magic, or someone willing to walk through configurations that might sound weird / unnecessary / incorrect.

For those who are wondering, well... I don't know what you have, so let's start with this.
COMPUTE NODES (99% of them; different vendors, but...)
* 128/256 GB of RAM
* 2 hexa-core CPUs with HT enabled
* 2x1Gb bonded interfaces (if you want to know the more than 20 NIC models we are using, just ask)
* Multi-queue interfaces, pinned via IRQ to different cores (a quick way to double-check the pinning is sketched below)
* Ubuntu 12.04, kernel 3.2.0-[40-48]
* Linux bridges, no VLANs, no Open vSwitch
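For the multi-queue / IRQ pinning item above, a minimal sketch that lists each NIC-related IRQ together with its current smp_affinity mask, so the pinning can be verified per node. The "eth" match string is just an example; adjust it to whatever the NIC interrupts are named on your hardware:

#!/usr/bin/env python
# Sketch: list every IRQ whose /proc/interrupts entry mentions the given
# pattern (e.g. the per-queue interrupts of a multi-queue NIC) along with its
# current smp_affinity mask, to verify the queues are pinned to different cores.
import re
import sys

PATTERN = sys.argv[1] if len(sys.argv) > 1 else "eth"   # example match string

with open("/proc/interrupts") as f:
    for line in f:
        match = re.match(r"\s*(\d+):", line)
        if not match or PATTERN not in line:
            continue
        irq = match.group(1)
        try:
            with open("/proc/irq/%s/smp_affinity" % irq) as aff:
                mask = aff.read().strip()
        except IOError:
            mask = "?"
        name = line.split()[-1]                          # last column is the IRQ name
        print("IRQ %-5s %-25s smp_affinity=%s" % (irq, name, mask))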
I want to keep the networking appliances (ToRs, aggregation, cores) out of the picture as much as possible.
I'm thinking, "I hope this thread gets great, in time."

So, I'm ready to learn as much as I can.
Thank you, OpenStack community, as always.

alejandrito
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators