<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Word 14 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:Tahoma;
panose-1:2 11 6 4 3 5 4 4 2 4;}
@font-face
{font-family:"HP Simplified";
panose-1:2 11 6 4 2 2 4 2 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:12.0pt;
font-family:"Times New Roman","serif";}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
p
{mso-style-priority:99;
mso-margin-top-alt:auto;
margin-right:0in;
mso-margin-bottom-alt:auto;
margin-left:0in;
font-size:12.0pt;
font-family:"Times New Roman","serif";}
p.MsoAcetate, li.MsoAcetate, div.MsoAcetate
{mso-style-priority:99;
mso-style-link:"Balloon Text Char";
margin:0in;
margin-bottom:.0001pt;
font-size:8.0pt;
font-family:"Tahoma","sans-serif";}
span.BalloonTextChar
{mso-style-name:"Balloon Text Char";
mso-style-priority:99;
mso-style-link:"Balloon Text";
font-family:"Tahoma","sans-serif";}
span.EmailStyle20
{mso-style-type:personal;
font-family:"HP Simplified","sans-serif";
color:windowtext;
font-weight:normal;
font-style:normal;
text-decoration:none none;}
span.EmailStyle21
{mso-style-type:personal-reply;
font-family:"HP Simplified","sans-serif";
color:windowtext;
font-weight:normal;
font-style:normal;
text-decoration:none none;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">Hello All,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif""><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">Here is an update with my testing.
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif""><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">I have installed one more VM as a neutron-server host and configured it under the load balancer.
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">Currently I have 2 VMs running the neutron-server process (one is the controller and the other is a dedicated neutron-server VM).<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif""><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">With this configuration, during batch instance deployment with a batch size of 30 and a sleep time of 20 minutes between batches,
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">the first 180 instances could get an IP during first boot. During creation of instances 181-210, some instances could not get an IP.
<o:p></o:p></span></p>
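For reference, the deployment loop can be sketched roughly like this (the nova invocation and its flags are placeholders for illustration, not my exact command):

```shell
# Sketch of the batch deployment: 30 instances per batch,
# 20 minutes of sleep between batches, 240 instances total.
BATCH=30
TOTAL=240
SLEEP_MIN=20
start=1
while [ "$start" -le "$TOTAL" ]; do
    end=$((start + BATCH - 1))
    echo "booting instances $start-$end"
    # nova boot --num-instances "$BATCH" --image <image> --flavor <flavor> test-batch   (placeholder)
    # sleep $((SLEEP_MIN * 60))
    start=$((end + 1))
done
```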
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif""><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">This is much better than running with a single neutron server, where only 120 instances could get an IP during first boot in Havana.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif""><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">While the instances are being created, the parent neutron-server process spends close to 90% of its CPU time on both servers,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">while the rest of the neutron-server processes (the API workers) show very low CPU utilization.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif""><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">I think it would be a good idea to extend the current multiple neutron-server API worker model to handle RPC messages as well.<o:p></o:p></span></p>
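Concretely, what I have in mind is something along these lines (illustrative only; the option names below are assumptions about how such a knob might look, not settings from a released Neutron):

```ini
# neutron.conf -- illustrative worker settings (names assumed, not official)
[DEFAULT]
# number of separate API worker processes (exists in my build)
api_workers = 4
# hypothetical: worker processes dedicated to agent RPC traffic
rpc_workers = 4
```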
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif""><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">Even with the current setup (multiple neutron-server hosts), we still see RPC timeouts in the DHCP and L2 agents,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">and the dnsmasq process is still being restarted due to SIGKILL.
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif""><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">Thanks & Regards,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">Sreedhar Nathani<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"HP Simplified","sans-serif""><o:p> </o:p></span></p>
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">From:</span></b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif""> Nathani, Sreedhar (APS)
<br>
<b>Sent:</b> Friday, December 13, 2013 12:08 AM<br>
<b>To:</b> OpenStack Development Mailing List (not for usage questions)<br>
<b>Subject:</b> RE: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly<o:p></o:p></span></p>
</div>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">Hello Salvatore,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif""><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">Thanks for your feedback. Will the patch
<a href="https://review.openstack.org/#/c/57420/">https://review.openstack.org/#/c/57420/</a>, which you are working on for bug
<a href="https://bugs.launchpad.net/neutron/+bug/1253993">https://bugs.launchpad.net/neutron/+bug/1253993</a>,
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">help correct the OVS agent loop slowdown issue?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">Does this patch also address the DHCP agent updating the hosts file only once a minute and finally sending SIGKILL to the dnsmasq process?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif""><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif"">I have tested with Marun’s patch
<a href="https://review.openstack.org/#/c/61168/">https://review.openstack.org/#/c/61168/</a> regarding ‘<span style="color:black;background:white">Send DHCP notifications regardless of agent status’, but with this patch<o:p></o:p></span></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black;background:white">I also observed the same behavior.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black;background:white"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black;background:white"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black;background:white">Thanks & Regards,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black;background:white">Sreedhar Nathani<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"HP Simplified","sans-serif""><o:p> </o:p></span></p>
<p class="MsoNormal"><b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">From:</span></b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif""> Salvatore Orlando [<a href="mailto:sorlando@nicira.com">mailto:sorlando@nicira.com</a>]
<br>
<b>Sent:</b> Thursday, December 12, 2013 6:21 PM<br>
<b>To:</b> OpenStack Development Mailing List (not for usage questions)<br>
<b>Subject:</b> Re: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly<o:p></o:p></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p>I believe your analysis is correct and in line with the findings reported in the bug concerning the OVS agent loop slowdown.<o:p></o:p></p>
<p>The issue has become even more prominent with the ML2 plugin due to an increased number of notifications sent.<o:p></o:p></p>
<p>Another issue which makes delays on the DHCP agent worse is that instances send a discover message once a minute.<o:p></o:p></p>
<p>Salvatore<o:p></o:p></p>
<div>
<p class="MsoNormal">On 11 Dec 2013 at 11:50, "Nathani, Sreedhar (APS)" <<a href="mailto:sreedhar.nathani@hp.com">sreedhar.nathani@hp.com</a>> wrote:<o:p></o:p></p>
<p class="MsoNormal">Hello Peter,<br>
<br>
Here are the tests I have done. I already have 240 instances active across all the 16 compute nodes. To make the tests and data collection easy,<br>
I have done the tests on a single compute node.<br>
<br>
First Test -<br>
* 240 instances already active, 16 instances on the compute node where I am going to do the tests<br>
* Deploy 10 instances concurrently using the nova boot command with the num-instances option on a single compute node<br>
* All the instances could get an IP during the instance boot time.<br>
<br>
- Instances are created at 2013-12-10 13:41:01<br>
- From the compute host, DHCP requests are sent starting at 13:41:20 but are not reaching the DHCP server.<br>
The reply from the DHCP server arrived at 13:43:08 (a delay of 108 seconds).<br>
- The DHCP agent updated the hosts file from 13:41:06 till 13:42:54. The dnsmasq process got a SIGHUP every time the hosts file was updated.<br>
- On the compute node, tap devices are created between 13:41:08 and 13:41:18.<br>
Security group rules are received between 13:41:45 and 13:42:56.<br>
iptables rules were updated between 13:41:50 and 13:43:04.<br>
<br>
Second Test -<br>
* Deleted the newly created 10 instances.<br>
* 240 instances already active, 16 instances on the compute node where I am going to do the tests<br>
* Deploy 30 instances concurrently using the nova boot command with the num-instances option on a single compute node<br>
* None of the instances could get an IP during the instance boot.<br>
<br>
<br>
- Instances are created at 2013-12-10 14:13:50<br>
<br>
- From the compute host, DHCP requests are sent starting at 14:14:14 but are not reaching the DHCP server<br>
(the tcpdump on the network node shows no DHCP requests reaching the DHCP server)<br>
<br>
- The reply from the DHCP server only arrived at 14:22:10 (a delay of 636 seconds)<br>
<br>
- From the strace of the DHCP agent process, it first updated the hosts file at 14:14:05; after this there is a gap of close to 60 sec before<br>
updating the next instance's address. This repeated till the 7th instance, which was updated at 14:19:50. The 30th instance was updated at 14:20:00.<br>
<br>
- During the 30-instance creation, the dnsmasq process got a SIGHUP after each hosts-file update, but at 14:19:52 it got a SIGKILL and a new process was<br>
created - 14:19:52.881088 +++ killed by SIGKILL +++<br>
<br>
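These signal events can be pulled out of the strace capture mechanically; a rough sketch (the sample lines below are abbreviated stand-ins for the real capture, which was taken with timestamps enabled):

```shell
# Count SIGHUP (hosts-file reload) vs SIGKILL (forced restart) events
# in an strace capture of dnsmasq; sample data mimics 'strace -f -tt' output.
cat > /tmp/dnsmasq.strace <<'EOF'
14:14:05.100000 write(5, "host-10-0-0-21,10.0.0.21\n", 25) = 25
14:14:05.200000 --- SIGHUP {si_signo=SIGHUP} ---
14:19:52.881088 +++ killed by SIGKILL +++
EOF
grep -c 'SIGHUP'  /tmp/dnsmasq.strace   # -> 1
grep -c 'SIGKILL' /tmp/dnsmasq.strace   # -> 1
```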
- On the compute node, tap devices are created between 14:14:03 and 14:14:38.<br>
From the L2 agent's strace and log, security-group-related messages are received from 14:14:27 till 14:20:02.<br>
During this period the L2 agent log shows many RPC timeout messages like the one below:<br>
Timeout: Timeout while waiting on RPC response - topic: "q-plugin", RPC method: "security_group_rules_for_devices" info: "<unknown>"<br>
<br>
Because the security-group-related messages reach this compute node late, it takes a very long time to update the iptables rules<br>
(updates can be seen until 14:20), which causes the DHCP packets to be dropped at the compute node itself without ever reaching the DHCP server.<br>
<br>
<br>
Here is my understanding based on the tests.<br>
Instances are created quickly, and so are their tap devices. But there is a considerable delay in updating the network port details in the dnsmasq hosts file and in sending<br>
the security group information to the compute nodes, due to which the compute nodes are not able to update their iptables rules fast enough, which is why<br>
instances are not able to get an IP.<br>
<br>
I have collected the tcpdump from the controller node and compute nodes, plus strace of the DHCP agent, dnsmasq, and OVS L2 agents, in case you are interested to look at it.<br>
<br>
Thanks & Regards,<br>
Sreedhar Nathani<br>
<br>
<br>
-----Original Message-----<br>
From: Peter Feiner [mailto:<a href="mailto:peter@gridcentric.ca">peter@gridcentric.ca</a>]<br>
Sent: Tuesday, December 10, 2013 10:32 PM<br>
To: OpenStack Development Mailing List (not for usage questions)<br>
Subject: Re: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly<br>
<br>
On Tue, Dec 10, 2013 at 7:48 AM, Nathani, Sreedhar (APS) <<a href="mailto:sreedhar.nathani@hp.com">sreedhar.nathani@hp.com</a>> wrote:<br>
> My setup has 17 L2 agents (16 compute nodes, one Network node).<br>
> Setting minimize_polling helped to reduce the CPU utilization of the L2 agents, but it did not help instances get an IP during first boot.<br>
><br>
> With minimize_polling enabled, fewer instances could get an IP than without the fix.<br>
><br>
> Once we reach a certain number of ports (in my case 120 ports),<br>
> during subsequent concurrent instance deployment (30 instances), updating the port details in the dnsmasq hosts file takes a long time, which causes the delay in instances getting an IP address.<br>
<br>
To figure out what the next problem is, I recommend that you determine precisely what "port details in the dnsmasq host [are] taking [a] long time" to update. Is the DHCPDISCOVER packet from the VM arriving before the dnsmasq process's hostsfile is updated
and dnsmasq is SIGHUP'd? Is the VM sending the DHCPDISCOVER request before its tap device is wired to the dnsmasq process (i.e., determine the status of the chain of bridges at the time the guest sends the DHCPDISCOVER packet)? Perhaps the DHCPDISCOVER packet
is being dropped because the iptables rules for the VM's port haven't been instantiated when the DHCPDISCOVER packet is sent. Or perhaps something else, such as the replies being dropped. These are my only theories at the moment.<br>
<br>
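Each of those theories can be checked directly on the hosts; a sketch of the kind of commands involved (the interface, namespace, and port names are placeholders you would substitute from your own environment):

```shell
# Where along the path do DHCP packets disappear? (names are placeholders)
DHCP_FILTER='udp and (port 67 or port 68)'
# 1) On the compute node: does the DHCPDISCOVER leave the VM's tap device?
#      tcpdump -i tapXXXX -nn -e "$DHCP_FILTER"
# 2) On the network node: does it reach dnsmasq inside the DHCP namespace?
#      ip netns exec qdhcp-<network-id> tcpdump -i ns-XXXX -nn "$DHCP_FILTER"
# 3) Are the port's iptables rules instantiated yet on the compute node?
#      iptables -S | grep <port-id-prefix>
echo "$DHCP_FILTER"
```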
Anyhow, once you determine where the DHCP packets are being lost, you'll have a much better idea of what needs to be fixed.<br>
<br>
One suggestion I have to make your debugging less onerous is to reconfigure your guest image's networking init script to retry DHCP requests indefinitely. That way, you'll see the guests' DHCP traffic when neutron eventually gets everything in order. On CirrOS,
add the following line to the eth0 stanza in /etc/network/interfaces to retry DHCP requests 100 times every 3 seconds:<br>
<br>
udhcpc_opts -t 100 -T 3<br>
<br>
> When I deployed only 5 instances concurrently (already had 211 instances active) instead of 30, all the instances are able to get the IP.<br>
> But when I deployed 10 instances concurrently (already had 216<br>
> instances active) instead of 30, none of the instances were able to<br>
> get an IP<br>
<br>
This is reminiscent of yet another problem I saw at scale. If you're using the security group rule "VMs in this group can talk to everybody else in this group", which is one of the defaults in devstack, you get<br>
O(N^2) iptables rules for N VMs running on a particular host. When you have more VMs running, the openvswitch agent, which is responsible for instantiating the iptables rules and does so somewhat laboriously with respect to their number, could take too long to configure ports before the VMs' DHCP clients time out.<br>
However, considering that you're seeing low CPU utilization by the openvswitch agent, I don't think you're having this problem; since you're distributing your VMs across numerous compute hosts, N is quite small in your case. I only saw problems when N was ><br>
100.<br>
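To make the O(N^2) growth concrete, a quick back-of-the-envelope: each of the N co-resident VMs in a self-referencing group needs a rule admitting each of the other N-1 members, so the per-host rule count grows quadratically.

```shell
# Rough per-host rule count for N co-resident VMs in a
# self-referencing security group: N * (N - 1).
for n in 10 50 100; do
    echo "$n VMs -> $(( n * (n - 1) )) intra-group rules"
done
# -> 10 VMs -> 90 intra-group rules
# -> 50 VMs -> 2450 intra-group rules
# -> 100 VMs -> 9900 intra-group rules
```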
<br>
_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<o:p></o:p></p>
</div>
</div>
</body>
</html>