[all][neutron][qa] Gate status: tempest-full-py3, tempest-slow-py3 and a few more jobs are broken: "Do not recheck"

Radosław Piliszek radoslaw.piliszek at gmail.com
Wed Jul 21 06:34:55 UTC 2021


My (I guess obvious) suggestion is for Neutron to actually gate on the
OVN variants of these jobs, now that DevStack defaults to ML2/OVN.
Otherwise the testing is leaky by design.
Please amend.
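
If it helps, the quickest way I know to check how these jobs (or their
OVN variants) are doing is the Zuul builds API rather than the web UI.
A rough sketch only: the job names are just the two from the subject
line, and the end_time/result field names are my assumption about what
the build records expose, so adjust as needed:

    # Print the five most recent results per job from the OpenDev Zuul API.
    for job in tempest-full-py3 tempest-slow-py3; do
        echo "=== ${job} ==="
        curl -s "https://zuul.opendev.org/api/tenant/openstack/builds?job_name=${job}&limit=5" \
            | jq -r '.[] | "\(.end_time)  \(.result)"'
    done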

-yoctozepto

On Wed, Jul 21, 2021 at 1:40 AM Akihiro Motoki <amotoki at gmail.com> wrote:
>
> All of them are the same issue. It happened during the devstack run
> with ML2/OVN.
>
> -- amotoki
>
> On Wed, Jul 21, 2021 at 8:30 AM melanie witt <melwittt at gmail.com> wrote:
> >
> > On Tue, 20 Jul 2021, 16:59 -0500, Ghanshyam Mann
> > <gmann at ghanshyammann.com> wrote:
> > > Hello Everyone,
> > >
> > > Since about 2-3 hours ago, tempest-full-py3, tempest-slow-py3 and a few more jobs have been failing
> > > consistently in the create_neutron_initial_network() method:
> > >
> > > "++ lib/neutron_plugins/services/l3:create_neutron_initial_network:214 :   oscwrap --os-cloud devstack-admin --os-region RegionOne network create --project a6b71b541987471595bf5e38f5fbe264 private"
> > >
> > > - https://zuul.openstack.org/builds?job_name=tempest-full-py3
> > > - https://zuul.openstack.org/builds?job_name=tempest-slow-py3
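> > >
> > > On a local devstack run with the same config (or a held node), the failing step boils
> > > down to a single networking call; oscwrap is, as far as I recall, just devstack's timing
> > > wrapper around the openstack client. A rough sketch, with the project ID left as a
> > > placeholder and 'devstack-admin' coming from the clouds.yaml that devstack writes:
> > >
> > >     # Re-run the network create that create_neutron_initial_network wraps;
> > >     # with the current breakage this returns the HTTP 500 from the Neutron API.
> > >     openstack --os-cloud devstack-admin --os-region RegionOne \
> > >         network create --project <project-id> private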
> > >
> > > What is strange is that the 'tempest-integrated-storage' and 'tempest-integrated-compute' jobs are passing even though the configuration is almost the same.
> > >
> > > Slaweq reported the bug below and is trying to revert some Neutron changes to find the root cause. He will check
> > > it tomorrow morning.
> > >
> > > - https://bugs.launchpad.net/neutron/+bug/1936983
> > >
> > > Until we find the root cause/fix, do not recheck on these failures.
> >
> > FYI all:
> >
> > nova-multi-cell, nova-live-migration, and nova-ceph-multistore are also
> > affected:
> >
> > https://zuul.opendev.org/t/openstack/builds?job_name=nova-multi-cell
> > https://zuul.opendev.org/t/openstack/builds?job_name=nova-live-migration
> > https://zuul.opendev.org/t/openstack/builds?job_name=nova-ceph-multistore
> >
> > And here's what the error trace looks like, for anyone who hasn't seen
> > it yet:
> >
> > > Error while executing command: HttpException: 500, Request Failed: internal server error while processing your request.
> > > ++ functions-common:oscwrap:2349            :   return 1
> > > + lib/neutron_plugins/services/l3:create_neutron_initial_network:214 :   NET_ID=
> > > + lib/neutron_plugins/services/l3:create_neutron_initial_network:215 :   die_if_not_set 215 NET_ID 'Failure creating NET_ID for private 0b1d94f08f194eb5b7679c47f91f6d4c'
> > > + functions-common:die_if_not_set:216      :   local exitcode=0
> > > [Call Trace]
> > > ./stack.sh:1300:create_neutron_initial_network
> > > /opt/stack/devstack/lib/neutron_plugins/services/l3:215:die_if_not_set
> > > /opt/stack/devstack/functions-common:223:die
> > > [ERROR] /opt/stack/devstack/functions-common:215 Failure creating NET_ID for private 0b1d94f08f194eb5b7679c47f91f6d4c
> > > exit_trap: cleaning up child processes
> > > Error on exit
> > > *** FINISHED ***
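> >
> > In other words: the network create gets the 500, oscwrap returns 1, NET_ID ends up
> > empty, and die_if_not_set aborts stack.sh. For anyone unfamiliar with that helper,
> > it is roughly the following (a paraphrase from memory of functions-common, not the
> > exact code, so check the real file for details):
> >
> >     # Rough paraphrase of devstack's die_if_not_set: abort with the given
> >     # line number and message if the named variable is empty or the
> >     # preceding command failed.
> >     function die_if_not_set {
> >         local exitcode=$?       # exit status of the command before the call
> >         local line=$1; shift
> >         local evar=$1; shift
> >         if [ -z "${!evar}" ] || [ "$exitcode" != "0" ]; then
> >             echo "[ERROR] line ${line}: $*" >&2
> >             exit 1              # the real helper calls die, which also prints the call trace
> >         fi
> >     }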
> >
> > -melwitt
> >
> >
>


