[neutron] change of API performance from Pike to Yoga
Hi Neutrinos!

Inspired by Julia Kreger's presentation at the summit [1], I wanted to gather some data about the change in Neutron API performance. For that I used Rally with Neutron's usual Rally task definition [2]. I measured against an all-in-one devstack, always running in a same-sized VM and keeping its local.conf the same between versions as much as possible. Neutron was configured with ml2/ovs. Measuring other backends would also be interesting, but for now I wanted to keep the config unchanged for as many releases back as possible.

Without much pain I managed to collect data from Yoga back to Pike.

You can download all Rally reports in this tarball (6 MiB): https://drive.google.com/file/d/1TjFV7UWtX_sofjw3_njL6-6ezD7IPmsj/view?usp=s...

The tarball also contains data about how to reproduce these tests. It is currently hosted on my personal Google Drive. I will keep it there at least until the end of July, and I would be happy to upload it somewhere better suited for long-term storage.

Let me also attach a single plot (I hope the mailing list configuration allows this) showing the load_duration (actually the average of 3 runs each) for each Rally scenario by OpenStack release. I hope it serves as a one-picture summary of these test runs. The Rally reports contain much more data, though, so feel free to download and browse them. If the mailing list strips the attachment, the picture is also included in the tarball.

Cheers,
Bence (rubasov)

[1] https://youtu.be/OqcnXxTbIxk
[2] https://opendev.org/openstack/neutron/src/commit/a9912caf3fa1e258621965ea8c6...
Hi,

On Monday, 4 July 2022 at 16:31:13 CEST, Bence Romsics wrote:
Thanks Bence for that. From just a brief look at the load_duration.png file, it seems that we have been improving API performance in the last cycles :)

I was also thinking about doing something similar to what Julia described in Berlin (but I still haven't had time for it). Instead of using Rally, though, maybe we can do something similar to what Ironic is doing and have a simple script which populates the Neutron DB with many resources, e.g. 2-3k ports/networks/trunks, and then measures the time of e.g. listing those resources. That way, IMHO, we would measure only Neutron API performance and Neutron-DB interactions, without relying on the backends and other components, like e.g. Nova spawning an actual VM. What do you think about it? Is it worth doing, or would it be better to rely on Rally only?

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
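(Editor's illustration: the populate-then-list idea described above could be sketched roughly as below. This is a minimal sketch assuming openstacksdk; the function names, resource count, and connection setup are placeholders, not part of any existing Ironic or Neutron script.)

```python
import time

def populate_networks(conn, count, prefix="perf-test-net"):
    # Create `count` networks so the subsequent list call has a
    # realistically sized result set to page through.
    # `conn.network.create_network` is the openstacksdk network proxy call.
    return [conn.network.create_network(name=f"{prefix}-{i}")
            for i in range(count)]

def time_list(conn, repeats=3):
    # Average wall-clock time of listing all networks over `repeats` runs.
    durations = []
    for _ in range(repeats):
        start = time.monotonic()
        list(conn.network.networks())  # drain the paginated generator
        durations.append(time.monotonic() - start)
    return sum(durations) / len(durations)

# Against a real cloud this would be driven roughly as (cloud name is
# a placeholder):
#   import openstack
#   conn = openstack.connect(cloud="devstack-admin")
#   populate_networks(conn, 2000)
#   print(f"average list time: {time_list(conn):.3f}s")
```

Because only the two generic proxy calls are used, the same harness would work for ports or trunks by swapping the create/list methods.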
Hi,
> So from just brief look at load_duration.png file it seems that we are improving API performance in last cycles :)
I believe the same. :-)
> I was also thinking about doing something similar to what Julia described in Berlin (but I still didn't had time for it). But I was thinking that instead of using rally, maybe we can do something similar like Ironic is doing and have some simple script which will populate neutron db with many resources, like e.g. 2-3k ports/networks/trunks etc. and then measure time of e.g. doing "list" of those resources. That way we will IMHO measure only neutron API performance and Neutron - DB interactions, without relying on the backends and other components, like e.g. Nova to spawn actual VM. Wdyt about it? Is it worth to do or it will be better to rely on the rally only?
I think both approaches have their uses. These Rally reports are hopefully useful for users of the Neutron API, for users planning an upgrade, and as feedback for maintainers who worked on performance-related issues in the last few cycles. But the Rally reports do not give much information on where to look when we want to make further improvements, while I believe Julia's approach can be used to narrow down, or even identify, where to make further code changes. She also targeted testing _at scale_, which our current Rally tests don't do.

In short, I believe both approaches have their uses, and Rally tests probably cannot (easily) replace what the Ironic team did with the tests Julia described in her presentation.

Cheers,
Bence
Hi,

I uploaded the same content to GitHub for long-term storage: https://github.com/rubasov/neutron-rally

--
Bence
Thanks Bence, really appreciated.

Bence Romsics <bence.romsics@gmail.com> wrote (on Fri, 15 Jul 2022, 13:02):
> Hi,
>
> Uploaded the same content to github for long term storage:
>
> https://github.com/rubasov/neutron-rally
>
> -- Bence
participants (3):
- Bence Romsics
- Lajos Katona
- Slawek Kaplonski