[openstack-dev] [puppet] Ubuntu problems + Help needed

Tobias Urdin tobias.urdin at crystone.com
Sun Jan 7 13:39:25 UTC 2018


Hello everyone and a happy new year!

I'm following up on this thread with some information about the tempest failure that occurs on Ubuntu.
I saw it happen on my recheck tonight and took some time to check it out properly.

* Here is the job: http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/

* The following test is failing but only sometimes: tempest.api.compute.servers.test_create_server.ServersTestManualDisk
http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/job-output.txt.gz#_2018-01-07_01_56_31_072370

* Checking the nova API log, the request against the neutron server fails:
http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/nova/nova-api.txt.gz#_2018-01-07_01_46_47_301

So this is the call that times out: https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/attach_interfaces.py#L61

The timeout occurs at 01:46:47 but the first try is made at 01:46:17. Check the log http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/neutron/neutron-server.txt.gz and search for "GET /v2.0/ports?device_id=285061f8-2e8e-4163-9534-9b02900a8887"

You can see that neutron-server reports all requests as 200 OK, so my take is that neutron-server handles the request properly but for some reason nova-api never receives the reply, hence the timeout.

This is where I get stuck: I can see all the requests coming in, but there is no real way of seeing the replies.
At the same time you can see that nova-api and neutron-server are continuously handling requests, so both are working; it is just that the reply neutron-server should send back to nova-api never arrives.
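
One idea to narrow it down is to probe neutron-server directly from the node and time the very same ports query that nova-api issues. Below is a minimal sketch of what I mean; the endpoint URL and token are placeholders (assumptions on my part), only the device_id comes from the failing request:

    # probe_ports.py -- minimal sketch for timing the same ports query nova-api makes.
    # NEUTRON_URL and TOKEN are placeholders, not values taken from the job.
    import time
    import requests

    NEUTRON_URL = "http://127.0.0.1:9696"  # assumed neutron-server endpoint
    TOKEN = "<keystone token>"             # assumed valid token
    DEVICE_ID = "285061f8-2e8e-4163-9534-9b02900a8887"  # device_id from the failing request

    start = time.time()
    try:
        resp = requests.get(
            NEUTRON_URL + "/v2.0/ports",
            params={"device_id": DEVICE_ID},
            headers={"X-Auth-Token": TOKEN},
            timeout=30,  # roughly the window seen in the trace (01:46:17 -> 01:46:47)
        )
        print("status %s after %.2fs" % (resp.status_code, time.time() - start))
    except requests.exceptions.Timeout:
        print("no reply after %.2fs" % (time.time() - start))

If that query consistently comes back quickly, it would point at the reply getting lost between the services rather than neutron being slow.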

Does anybody have any clue why? Otherwise I guess the only way forward is to run the tests on a local machine until I hit the issue, which does not occur regularly.

Maybe we should loop in the neutron and/or Canonical OpenStack teams on this one.

Best regards
Tobias


________________________________________
From: Tobias Urdin <tobias.urdin at crystone.com>
Sent: Friday, December 22, 2017 2:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

Follow-up: I have been testing some integration runs on a temporary machine.

Had to fix the following (a rough sketch of the commands follows the links below):
* The Ceph repo key E84AC2C0460F3994, an issue perhaps introduced in [0]
* Run glance-manage db_sync (have not seen this in the integration tests)
* Run neutron-db-manage upgrade heads (have not seen this in the integration tests)
* Disable l2gw because of https://bugs.launchpad.net/ubuntu/+source/networking-l2gw/+bug/1739779
  (a temporary fix is proposed as [1] until this is resolved)

[0] https://review.openstack.org/#/c/507925/
[1] https://review.openstack.org/#/c/529830/
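
For reference, roughly what the fixes above boil down to, wrapped in a quick Python sketch; the keyserver used for the Ceph key is an assumption on my part, and the two db commands may need to run as the appropriate service user:

    # apply_fixes.py -- rough sketch only, not a polished script.
    import subprocess

    def run(cmd):
        # Echo and run a command, failing loudly if it exits non-zero.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Trust the missing Ceph repo key (the keyserver here is an assumption).
    run(["sudo", "apt-key", "adv", "--keyserver", "keyserver.ubuntu.com",
         "--recv-keys", "E84AC2C0460F3994"])

    # Database migrations that had to be run by hand on the tmp machine.
    run(["glance-manage", "db_sync"])
    run(["neutron-db-manage", "upgrade", "heads"])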

Best regards

On 12/22/2017 10:44 AM, Tobias Urdin wrote:
> Ignore that; it seems it's the networking-l2gw package that fails[0].
> It seems it has not been packaged for Queens yet[1], or rather it seems
> that a release has not been cut for Queens for networking-l2gw[2]
>
> Should we try to disable l2gw, as was recently done for CentOS in [3]?
>
> [0]
> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_23_10_05_564
> [1]
> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html
> [2] https://git.openstack.org/cgit/openstack/networking-l2gw/refs/
> [3] https://review.openstack.org/#/c/529711/
>
>
> On 12/22/2017 10:19 AM, Tobias Urdin wrote:
>> Follow-up on Alex's point [1] below: the db sync upgrade for neutron fails here[0].
>>
>> [0] http://paste.openstack.org/show/629628/
>>
>> On 12/22/2017 04:57 AM, Alex Schultz wrote:
>>>> Just a note, the queens repo is not currently synced in the infra so
>>>> the queens repo patch is failing on Ubuntu jobs. I've proposed adding
>>>> queens to the infra configuration to resolve this:
>>>> https://review.openstack.org/529670
>>>>
>>> As a follow up, the mirrors have landed and two of the four scenarios
>>> now pass.  Scenario001 is failing on ceilometer-api which was removed
>>> so I have a patch[0] to remove it. Scenario004 is having issues with
>>> neutron and the db looks to be very unhappy[1].
>>>
>>> Thanks,
>>> -Alex
>>>
>>> [0] https://review.openstack.org/529787
>>> [1] http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338
>>>
>

