On Wed, May 29, 2024, at 8:30 PM, Ihar Hrachyshka wrote:
On Wed, May 29, 2024 at 5:26 PM Mike Bayer <mike_mp@zzzcomputing.com> wrote:
can you maybe try reducing / removing the use of the "subqueryload" loader strategy and replacing it with "selectin"?   One of the most egregious patterns neutron has is excessive use of "subqueryload", which generates huge queries that are expensive to cache, expensive on the server, expensive to run, etc.

The "subqueryload" substring is only found in a single file (neutron/plugins/ml2/drivers/l2pop/db.py) in the neutron tree, two occurrences. I don't see it mentioned in neutron-lib anywhere either. Am I missing something?

yes, the lazy setting as well:

$ find neutron -name  "*.py" -exec grep -H 'lazy="subquery"' {} \;
neutron/db/models/allowed_address_pair.py:                            lazy="subquery", cascade="delete"))
neutron/db/models/metering.py:                             cascade="delete", lazy="subquery")
neutron/db/models_v2.py:                                        lazy="subquery",
neutron/db/models_v2.py:        lazy="subquery")


try changing those to "selectin"; the lazy= setting is the default loading scheme for those attributes.   then yes, the two subqueryload calls in l2pop/db.py can be changed as well.
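
roughly, the mapping-level change looks like this (just a sketch with made-up model/relationship names, not the actual neutron models):

    from sqlalchemy import orm

    # before: related collection loaded via a subquery that re-states the original query
    children = orm.relationship(
        Child, backref="parent",
        lazy="subquery", cascade="delete")

    # after: related collection loaded via a second SELECT ... WHERE id IN (...)
    children = orm.relationship(
        Child, backref="parent",
        lazy="selectin", cascade="delete")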


all of that said, there shouldn't be a big difference between SQLA 1.4 and 2.0 as far as memory use of query structures.   the major difference going to 2.0 is that the whole "autocommit" notion goes away and you are always in a transaction block that needs to be explicitly ended.
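
in 2.0 style that looks roughly like this (a sketch; "engine" and "obj" are placeholders):

    from sqlalchemy.orm import Session

    with Session(engine) as session:
        session.add(obj)
        # a transaction is begun implicitly on first use; it has to be
        # explicitly committed (or rolled back), otherwise it is rolled
        # back when the session is closed
        session.commit()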


the "selectinload" strategy, when I first added it (and it's now very mature) was mostly after observing how badly neutron relies on the very overwrought "subqueryload" queries.

in theory, all subqueryload use can be replaced with selectinload directly.   but obviously I'd do this more carefully.
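
at the query level the swap is mechanical, e.g. (illustrative model/attribute names only):

    from sqlalchemy.orm import selectinload, subqueryload

    # before
    q = session.query(Network).options(subqueryload(Network.ports))

    # after
    q = session.query(Network).options(selectinload(Network.ports))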

On Wed, May 29, 2024, at 4:50 PM, Brian Haley wrote:
> Hi,
>
> Neutron has been having issues with our coverage gate job triggering the
> OOM killer since last week [0], which I just confirmed by holding a node
> and looking in the logs. It started happening after the sqlalchemy 2.0
> bump [1], but that just might be exposing the underlying issue.
>
> Running locally I can see via /proc/meminfo that memory is getting consumed:
>
> MemTotal:        8123628 kB
> MemFree:         1108404 kB
>
>
> And via ps it's the coverage processes doing it:
>
>
>    PID   %MEM      RSS   PPID       TIME  NLWP  WCHAN   COMMAND
>
>   4315   30.9  2516348   4314   01:29:07     1  -       /opt/stack/neutron/.tox/cover/bin/python
>            /opt/stack/neutron/.tox/cover/bin/coverage run --source neutron
>            --parallel-mode -m stestr.subunit_runner.run discover -t ./
>            ./neutron/tests/unit --load-list /tmp/tmp0rhqfwhz
>   4313   30.0  2437500   4312   01:28:50     1  -       /opt/stack/neutron/.tox/cover/bin/python
>            /opt/stack/neutron/.tox/cover/bin/coverage run --source neutron
>            --parallel-mode -m stestr.subunit_runner.run discover -t ./
>            ./neutron/tests/unit --load-list /tmp/tmpfzmqyuub
> (and the test hasn't even finished yet)
>
>
> The only workaround seems to be reducing concurrency [2].
>
>
> Have any other projects seen anything similar?
>
> (and sorry for the html email)
>
> -Brian
>
> [0] https://bugs.launchpad.net/neutron/+bug/2065821
> [1] https://review.opendev.org/c/openstack/requirements/+/879743
>
> [2] https://review.opendev.org/c/openstack/neutron/+/920766