Port creation times out for some VMs in large group
Albert.Braden at synopsys.com
Thu Oct 10 00:53:01 UTC 2019
We tested this in dev and QA and then implemented it in production, and it did make a difference. But two weeks later we started seeing an issue, first in dev and then in QA. In syslog we see neutron-linuxbridge-agent.service stopping and starting, and in neutron-linuxbridge-agent.log we see a rootwrap error: “Exception: Failed to spawn rootwrap process.”
If I comment out ‘root_helper_daemon = "sudo /usr/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf"’ and restart the neutron services, the error goes away.
How can I use the root_helper_daemon setting without creating this new error?
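For context, the setting under discussion lives in the [agent] section of the neutron agent configuration; a minimal sketch (the file path and the fallback option shown are typical, not quoted from this thread):

```ini
[agent]
# Spawn one long-lived rootwrap daemon instead of forking sudo+rootwrap
# for every privileged command (much faster under load)
root_helper_daemon = sudo /usr/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
# Per-command fallback used when the daemon is not configured
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
```

The daemon is launched via sudo, so the neutron user needs a sudoers entry matching the daemon command; a missing or mismatched sudoers rule is one possible cause of a “Failed to spawn rootwrap process” error.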
Message with logs got moderated so logs are here:
From: Chris Apsey <bitskrieg at bitskrieg.net>
Sent: Friday, September 27, 2019 9:34 AM
To: Albert Braden <albertb at synopsys.com>
Cc: openstack-discuss at lists.openstack.org
Subject: Re: Port creation times out for some VMs in large group
Do this: https://cloudblog.switch.ch/2017/08/28/starting-1000-instances-on-switchengines/
The problem will go away. I'm of the opinion that daemon mode for rootwrap should be the default, since the performance improvement is an order of magnitude, but privsep may obviate that concern once it's fully implemented.
Either way, that should solve your problem.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, September 27, 2019 12:17 PM, Albert Braden <Albert.Braden at synopsys.com> wrote:
When I create 100 VMs in our prod cluster:
openstack server create --flavor s1.tiny --network it-network --image cirros-0.4.0-x86_64 --min 100 --max 100 alberttest
Most of them build successfully in about a minute. 5 or 10 will stay in BUILD status for 5 minutes and then fail with “BuildAbortException: Build of instance <UUID> aborted: Failed to allocate the network(s), not rescheduling.”
If I build smaller numbers I see fewer failures, and no failures if I build one at a time. This does not happen in dev or QA; it appears that we are exhausting a resource in prod. I tried reducing various config values in dev but was not able to duplicate the issue. The neutron servers don’t appear to be overloaded during the failure.
What config variables should I be looking at?
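A quick way to quantify the failure rate per batch is to tally server states from the CLI (a sketch; the server name matches the create command above):

```shell
# Count servers in each status for the test batch (ACTIVE / BUILD / ERROR)
openstack server list --name alberttest -f value -c Status | sort | uniq -c | sort -rn
```

Re-running this while varying --min/--max makes it easier to see at what batch size the failures start.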
Here are the relevant log entries from the HV:
2019-09-26 10:10:43.001 57008 INFO os_vif [req-dea54d9a-3f3e-4d47-b901-a4f41b1947a8 d28c3871f61e4c8c8f8c7600417f7b14 e9621e3b105245ba8660f434ab21016c - default 4fb72165eee4468e8033cdc7d506ddf0] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:8b:45:07,bridge_name='brq49cbe55d-51',has_traffic_filtering=True,id=18f4e419-b19c-4b62-b6e4-152ec78e72bc,network=Network(49cbe55d-5188-4183-b5ad-e65f9b46f8f2),plugin='linux_bridge',port_profile=<?>,preserve_on_delete=False,vif_name='tap18f4e419-b1')
2019-09-26 10:15:44.029 57008 WARNING nova.virt.libvirt.driver [req-dea54d9a-3f3e-4d47-b901-a4f41b1947a8 d28c3871f61e4c8c8f8c7600417f7b14 e9621e3b105245ba8660f434ab21016c - default 4fb72165eee4468e8033cdc7d506ddf0] [instance: dc58f154-00f9-4c45-8986-94b10821cbc9] Timeout waiting for [('network-vif-plugged', u'18f4e419-b19c-4b62-b6e4-152ec78e72bc')] for instance with vm_state building and task_state spawning.: Timeout: 300 seconds
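The 300 seconds in that log line matches nova's default vif_plugging_timeout: nova waits that long for neutron's network-vif-plugged event before aborting the build. If neutron genuinely needs longer under load, these nova.conf options on the compute nodes are the usual knobs (a sketch; the values shown are illustrative, not from this thread):

```ini
[DEFAULT]
# How long nova waits for neutron's network-vif-plugged event (default 300)
vif_plugging_timeout = 600
# If true (the default), instances that never receive the event are aborted
vif_plugging_is_fatal = true
```

Raising the timeout only masks slow port binding, though; the linked article's rootwrap-daemon change addresses the underlying slowness.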
More logs and data: