<div dir="ltr">Try this Cisco white paper. <b>10Ge Connectivity with Windows Servers</b><div><br></div><div><a href="http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/C07-572828-00_10Gb_Conn_Win_DG.pdf">http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/C07-572828-00_10Gb_Conn_Win_DG.pdf</a><br>
</div><div><br></div><div>See page 14.</div></div><div class="gmail_extra"><br clear="all"><div><div dir="ltr"><div><font><div style="font-family:arial;font-size:small"><b><i><br>Adam Lawson</i></b></div><div><font><font color="#666666" size="1"><div style="font-family:arial;font-size:small">
AQORN, Inc.</div><div style="font-family:arial;font-size:small">427 North Tatnall Street</div><div style="font-family:arial;font-size:small">Ste. 58461</div><div style="font-family:arial;font-size:small">Wilmington, Delaware 19801-2230</div>
<div style="font-family:arial;font-size:small">Toll-free: (844) 4-AQORN-NOW</div><div style="font-family:arial;font-size:small">Direct: +1 (302) 268-6914</div></font></font></div></font></div><div style="font-family:arial;font-size:small">
<img src="http://www.aqorn.com/images/logo.png" width="96" height="39"><br></div></div></div>
<br><br><div class="gmail_quote">On Fri, May 9, 2014 at 11:09 AM, JR <span dir="ltr"><<a href="mailto:botemout@gmail.com" target="_blank">botemout@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Darn. That dropped the performance by ~50% ...<br>
<div class="im HOEnZb"><br>
<br>
On 5/9/2014 1:57 PM, Adam Lawson wrote:<br>
> I just heard back. Within Windows, disable Large Send Offload (LSO).<br>
><br>
> Let me know how that goes?<br>
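On 2008 R2 the LSO toggle lives in the virtio adapter's Advanced properties (Device Manager &rarr; adapter &rarr; Advanced &rarr; "Large Send Offload"), or equivalently in the adapter's instance key in the registry. A sketch of the registry route, assuming the virtio NIC happens to be instance 0007 under the network class GUID (check each 00NN subkey's DriverDesc first; the 0007 here is a placeholder):<br>

```shell
:: Locate the virtio adapter's instance subkey under the network class GUID.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}" /s /f "DriverDesc"

:: *LsoV2IPv4 / *LsoV2IPv6 are the standardized NDIS keywords for LSOv2;
:: "0" disables, "1" enables. Replace 0007 with the instance found above.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007" /v "*LsoV2IPv4" /t REG_SZ /d "0" /f

:: Disable and re-enable the adapter (or reboot) so the driver rereads the keyword.
```

The Device Manager checkbox and the registry keyword are the same setting; the registry is just scriptable.<br>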
><br>
> Mahalo,<br>
> Adam<br>
><br>
><br>
</div><div class="HOEnZb"><div class="h5">> *Adam Lawson*<br>
> AQORN, Inc.<br>
> 427 North Tatnall Street<br>
> Ste. 58461<br>
> Wilmington, Delaware 19801-2230<br>
> Toll-free: (844) 4-AQORN-NOW<br>
> Direct: <a href="tel:%2B1%20%28302%29%20268-6914" value="+13022686914">+1 (302) 268-6914</a><br>
><br>
><br>
><br>
> On Fri, May 9, 2014 at 10:54 AM, JR <<a href="mailto:botemout@gmail.com">botemout@gmail.com</a>> wrote:<br>
><br>
>> Adam,<br>
>><br>
>> If I'm looking in the right place (the Red Hat virtio Ethernet adapter<br>
>> properties), there is no option to force the speed or duplex. The only<br>
>> value I see is Init.ConnectionRate (which is 10G). I've been able to<br>
>> see performance in excess of 2Gb/sec (when running an iperf against the<br>
>> Ubuntu host on which the VM runs), so it doesn't think it's a 1G NIC;<br>
>> the performance is just very poor. The same iperf from a CentOS VM to<br>
>> its host gives > 9Gb/sec.<br>
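For reference, the guest-to-host comparison above can be reproduced with a plain iperf pair; the host address and stream count below are illustrative assumptions, not values from this thread:<br>

```shell
# On the KVM/Ubuntu host:
iperf -s

# In the guest (192.168.0.1 is a placeholder for the host's address).
# -t 30 runs long enough for a stable average; -P 4 adds parallel TCP
# streams, which helps separate a slow NIC from a slow single stream.
iperf -c 192.168.0.1 -t 30 -P 4
```

If multiple parallel streams together reach near wire speed but a single stream does not, the bottleneck is more likely per-connection TCP tuning than the virtual NIC itself.<br>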
>><br>
>> Thanks<br>
>> JR<br>
>><br>
>> On 5/9/2014 1:19 PM, Adam Lawson wrote:<br>
>>> Are the duplex setting and speed set on each side to force 10GE (since<br>
>>> auto-neg seems not to work in this scenario)? Still waiting to hear<br>
>>> back on steps taken in the situation I described earlier.<br>
>>><br>
>>> Mahalo,<br>
>>> Adam<br>
>>><br>
>>><br>
>>><br>
>>><br>
>>><br>
>>> On Fri, May 9, 2014 at 9:25 AM, JR <<a href="mailto:botemout@gmail.com">botemout@gmail.com</a>> wrote:<br>
>>><br>
>>>> I'd be very appreciative to hear how you solved this, Adam. ;-)<br>
>>>><br>
>>>> On 5/9/2014 12:29 AM, Adam Lawson wrote:<br>
>>>>> Look at the TCP stack within Windows and the optimizations recommended<br>
>>>>> by Microsoft. I don't think it's a KVM or OpenStack question, to be<br>
>>>>> honest. We ran into similar issues on plain old Win2008 R2 servers<br>
>>>>> that were running on bare metal. Will update again when I ping someone<br>
>>>>> to find out what specifically it was back then.<br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> On Thu, May 8, 2014 at 6:59 PM, JR <<a href="mailto:botemout@gmail.com">botemout@gmail.com</a>> wrote:<br>
>>>>><br>
>>>>>> Greetings,<br>
>>>>>><br>
>>>>>> My OpenStack Grizzly cluster runs on Ubuntu 12.04 servers with 10G<br>
>>>>>> NICs. I have Ubuntu, CentOS and Windows 2008 R2 guests. I've noticed<br>
>>>>>> that while both my Linux guests can communicate at some reasonable<br>
>>>>>> approximation of 10G wire speed (e.g., 7-9Gb/sec on iperf tests), the<br>
>>>>>> Windows guests max out at 2.5Gb/sec when talking to the host on which<br>
>>>>>> they run, and at ~1.2Gb/sec to other hosts.<br>
>>>>>><br>
>>>>>> I've made some modifications as per this doc:<br>
>>>>>><br>
>>>>>> <a href="http://www.linux-kvm.org/page/WindowsGuestDrivers/kvmnet/registry" target="_blank">http://www.linux-kvm.org/page/WindowsGuestDrivers/kvmnet/registry</a><br>
>>>>>><br>
>>>>>> and upgraded my virtio network driver, but it's not helped. I've also<br>
>>>>>> done about an hour or two of googling which has revealed little.<br>
>>>>>><br>
>>>>>> I understand that this is not an OpenStack issue, but I suspect<br>
>>>>>> others have encountered this when bringing 2008 R2 guests into their<br>
>>>>>> clusters. Anyone?<br>
>>>>>><br>
>>>>>> Thanks much,<br>
>>>>>> JR<br>
>>>>>><br>
>>>>>><br>
>>>>>><br>
>>>>>> _______________________________________________<br>
>>>>>> Mailing list:<br>
>>>>>> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
>>>>>> Post to : <a href="mailto:openstack@lists.openstack.org">openstack@lists.openstack.org</a><br>
>>>>>> Unsubscribe :<br>
>>>>>> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
>>>>>><br>
>>>>><br>
>>>><br>
>>>> --<br>
>>>> Your electronic communications are being monitored; strong encryption is<br>
>>>> an answer. My public key<br>
>>>> <<a href="http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x4F08C504BD634953" target="_blank">http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x4F08C504BD634953</a>><br>
>>>><br>
>>><br>
>><br>
><br>
<br>
</div></div></blockquote></div><br></div>