[nova][gate] nova-multi-cell job failing test_*_with_qos_min_bw_allocation

Balázs Gibizer balazs.gibizer at est.tech
Thu Dec 10 12:18:17 UTC 2020

On Wed, Dec 9, 2020 at 19:33, melanie witt <melwittt at gmail.com> wrote:
> Howdy all,
> FYI we have gate failures of the recently added 
> test_*_with_qos_min_bw_allocation tests [1] in the nova-multi-cell 
> job on the master, stable/victoria, and stable/ussuri branches. The 
> failures occur during cross cell migrations.
> I have opened a bug for the failure on the master branch:
> * https://bugs.launchpad.net/nova/+bug/1907522
> The issue here is that we fail to create port bindings in neutron 
> during a cross cell migration in the superconductor:
> nova.exception.PortBindingFailed: Binding failed for port <port uuid>
> and that corresponds to a failure in the neutron server log where it 
> fails the port binding with:
> neutron_lib.exceptions.placement.UnknownResourceProvider: No such 
> resource provider known by Neutron
> I don't yet know what is going on here ^.
> For the bug on stable/victoria and stable/ussuri I have opened this 
> bug:
> * https://bugs.launchpad.net/nova/+bug/1907511
> and have a WIP stable-only patch proposed that needs tests:
> https://review.opendev.org/c/openstack/nova/+/766364
> I just wanted to see ASAP if the nova-multi-cell job will pass on it.
> The issue here ^ is that during a cross cell migration, we aren't 
> targeting the cell database for the target host when we attempt to 
> lookup the service record of the target host.
> For the stable branch failures I think the failure rate is 100% and 
> it looks like it might also be 100% for the master branch failures.

Thanks Melanie!

A short update: the test result in 
https://review.opendev.org/c/openstack/nova/+/766364 shows that, after 
fixing the stable-only https://bugs.launchpad.net/nova/+bug/1907511, we 
now hit the same failure on stable that is seen on master.

Both master and stable branches are blocked at the moment.
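For anyone triaging the stable-only bug: in a multi-cell deployment each 
cell has its own database, so a service-record lookup for the target host 
only works after the request context has been pointed at that host's cell. 
Below is a toy sketch of that pattern with made-up stand-in names (the real 
code uses nova.context.target_cell() and objects.Service.get_by_compute_host(), 
and is of course far more involved):

```python
# Illustrative sketch only -- NOT nova's actual code. Toy model of the
# "target the cell DB before looking up the service record" pattern.
from contextlib import contextmanager

# Toy "cell databases": each cell holds only its own service records.
CELL_DBS = {
    "cell1": {"compute1": {"host": "compute1", "cell": "cell1"}},
    "cell2": {"compute2": {"host": "compute2", "cell": "cell2"}},
}

class RequestContext:
    """Stand-in for a request context; tracks which cell DB it targets."""
    def __init__(self, cell=None):
        self.cell = cell

@contextmanager
def target_cell(ctxt, cell):
    """Temporarily point the context at a specific cell's database."""
    old = ctxt.cell
    ctxt.cell = cell
    try:
        yield ctxt
    finally:
        ctxt.cell = old

def get_service_by_host(ctxt, host):
    """Look up a service record in whichever cell DB the context targets."""
    if ctxt.cell is None:
        raise LookupError("context is not targeted at any cell DB")
    db = CELL_DBS[ctxt.cell]
    if host not in db:
        raise LookupError(f"no service record for {host} in {ctxt.cell}")
    return db[host]

ctxt = RequestContext()
# The bug: looking up the target host's service without first targeting
# its cell fails (or hits the wrong cell). The fix targets the cell first:
with target_cell(ctxt, "cell2") as cctxt:
    service = get_service_by_host(cctxt, "compute2")
print(service["cell"])  # → cell2
```

The same lookup without the target_cell() wrapper raises, which is the 
shape of the failure during cross cell migration in the superconductor.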

> Cheers,
> -melanie
> [1] https://review.opendev.org/c/openstack/tempest/+/694539
