[openstack-dev] [stable][neutron] Fwd: Re: [Openstack-stable-maint] Neutron backports for security group performance
James Page
james.page at ubuntu.com
Wed Nov 12 17:21:55 UTC 2014
Hi Ihar
On 11/11/14 19:39, Ihar Hrachyshka wrote:
> there is a series of Neutron backports in the Juno queue that are
> intended to significantly improve service performance when handling
> security groups (one of the main pain points for current users):
>
> - https://review.openstack.org/130101
> - https://review.openstack.org/130098
> - https://review.openstack.org/130100
> - https://review.openstack.org/130097
> - https://review.openstack.org/130105
>
> The first four patches optimize the db side (controller), while the
> last one avoids fetching security group rules in the OVS agent when
> the firewall is disabled.
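For reference, "firewall is disabled" here means running the OVS agent
with security group filtering switched off via the noop firewall driver.
A minimal sketch of that agent-side configuration (file layout and exact
option placement vary by deployment, so verify against your own setup):

    [securitygroup]
    # skip iptables-based security group filtering in the agent
    firewall_driver = neutron.agent.firewall.NoopFirewallDriver
    enable_security_group = False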
In terms of putting some figures around how these proposed stable
patches improve a Neutron-based Juno cloud, I can provide some metrics
from recent testing that Canonical did in conjunction with HP.
The cloud we deployed was based entirely on quad-core Intel Atom
processors with 16G of RAM and SSD storage; 540 servers in total,
including 8 nova controllers and 4 neutron controllers, running the
OpenStack Juno release on Ubuntu 14.04.
With around 12,000 running instances, which was as far as I could push
a vanilla ML2/OVS based Juno cloud, the load average on the 4 neutron
controllers was around 40 with the CPUs maxing out all of the time,
which pretty much meant it was impossible to create any new instances:
nova hit vif plugging timeouts while waiting for neutron to complete
network setup.
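For context, those timeouts are governed by nova's vif plugging
settings; a minimal sketch of the relevant nova.conf options (the values
shown are the usual defaults, an assumption to check against your own
release and deployment):

    [DEFAULT]
    # how long nova-compute waits for neutron's network-vif-plugged event
    vif_plugging_timeout = 300
    # whether hitting that timeout fails the boot or lets it proceed
    vif_plugging_is_fatal = True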
I patched in:
https://review.openstack.org/#/c/130101/
https://review.openstack.org/#/c/130098/
https://review.openstack.org/#/c/130100/
https://review.openstack.org/#/c/130105/
and re-ran the same test; the messaging volume on the RabbitMQ server
at 12,000 instances was considerably lower, and the load average on the
4 neutron controllers was around 10 (vs 40) with CPU at around 55-65%
utilization - so still pretty busy, but a much better situation than
without the patches.
My testing was quite synthetic (boot small instances until things start
to break), but it does illustrate the difference these patches make.
HTH
James
--
James Page
Ubuntu and Debian Developer
james.page at ubuntu.com
jamespage at debian.org