[Openstack-operators] attaching network cards to VMs taking a very long time

Matt Riedemann mriedemos at gmail.com
Wed May 30 14:30:51 UTC 2018

On 5/29/2018 8:23 PM, Chris Apsey wrote:
> I want to echo the effectiveness of this change - we had vif failures 
> when launching more than 50 or so cirros instances simultaneously, but 
> moving to daemon mode made this issue disappear and we've tested 5x that 
> amount.  This has been the single biggest scalability improvement to 
> date.  This option should be the default in the official docs.

This is really good feedback. I'm not sure if there is any kind of 
centralized performance/scale-related documentation; does the LCOO team 
[1] have something that's current? There are also the performance docs 
[2], but those look pretty stale.

We could add a note to the neutron rootwrap configuration option saying 
that if you're running into timeout issues you could consider running 
it in daemon mode, but that's probably not very discoverable. In fact, 
I couldn't find anything about it in the neutron docs; I only found 
this [3] because I know it's defined in oslo.rootwrap (I don't expect 
everyone to know where this is defined).

I found root_helper_daemon in the neutron docs [4], but it doesn't 
mention anything about performance or related options, and it makes it 
sound like it only matters for xenserver, which I'd gloss over if I 
were using libvirt. The root_helper_daemon config option help in 
neutron should probably refer to the neutron-rootwrap-daemon console 
script, which is in the setup.cfg [5].
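For reference, enabling daemon mode on the agent side looks roughly 
like this (the rootwrap.conf path is illustrative; adjust for your 
deployment and packaging):

```ini
# /etc/neutron/neutron.conf (or the agent's own config file)
[agent]
# Spawn a single long-lived rootwrap daemon instead of invoking
# "sudo neutron-rootwrap ..." for every privileged command. This
# avoids the per-call sudo + Python interpreter startup cost that
# contributes to vif plugging timeouts at scale.
root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
```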

For better discoverability, probably the best place to mention it is 
the nova vif_plugging_timeout configuration option, since I expect 
that's the first place operators will look when they start hitting 
timeouts during vif plugging at scale.
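For context, these are the nova-compute options operators usually end 
up tuning when they hit this (values shown are the defaults as I 
recall them; double-check against your release's config reference):

```ini
# nova.conf on the compute node
[DEFAULT]
# Seconds nova-compute waits for neutron's network-vif-plugged
# event before giving up. 0 disables the wait entirely.
vif_plugging_timeout = 300
# If true, an instance whose vif plugging times out goes to ERROR
# rather than continuing without confirmed network connectivity.
vif_plugging_is_fatal = true
```

Raising the timeout just papers over the slow rootwrap calls, which is 
why fixing the root helper side is the better lever.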

I can start pushing some docs patches and report back here for review help.

[1] https://wiki.openstack.org/wiki/LCOO
[2] https://docs.openstack.org/developer/performance-docs/
[5] https://github.com/openstack/neutron/blob/f486f0/setup.cfg#L54



