Hi, at the moment we have failures with keystone. I am not exactly sure what has happened. Does anyone know how we can address this? Thanks, Gary
2013-08-20 16:04:08.894 | cli.simple_read_only.test_keystone.SimpleReadOnlyKeystoneClientTest.test_admin_user_list ... ok
2013-08-20 16:04:09.320 | cli.simple_read_only.test_keystone.SimpleReadOnlyKeystoneClientTest.test_admin_user_role_list ... ok
2013-08-20 16:04:09.321 |
2013-08-20 16:04:09.321 | ======================================================================
2013-08-20 16:04:09.321 | FAIL: cli.simple_read_only.test_keystone.SimpleReadOnlyKeystoneClientTest.test_admin_catalog_list
2013-08-20 16:04:09.321 | ----------------------------------------------------------------------
2013-08-20 16:04:09.321 | _StringException: Traceback (most recent call last):
2013-08-20 16:04:09.322 | File "/opt/stack/new/tempest/cli/simple_read_only/test_keystone.py", line 45, in test_admin_catalog_list
2013-08-20 16:04:09.322 | self.assertTrue(svc['__label'].startswith('Service:'))
2013-08-20 16:04:09.322 | File "/usr/lib/python2.7/unittest/case.py", line 420, in assertTrue
2013-08-20 16:04:09.322 | raise self.failureException(msg)
2013-08-20 16:04:09.322 | AssertionError: False is not true
2013-08-20 16:04:09.322 |
2013-08-20 16:04:09.322 | -------------------- >> begin captured logging << --------------------
2013-08-20 16:04:09.323 | 2013-08-20 16:04:04,496 running: '/usr/local/bin/keystone --os-username admin --os-tenant-name admin --os-password secret --os-auth-url http://127.0.0.1:5000/v2.0/ catalog '
2013-08-20 16:04:09.323 | 2013-08-20 16:04:04,909 Invalid line between tables: Service: compute
2013-08-20 16:04:09.323 | --------------------- >> end captured logging << ---------------------
2013-08-20 16:04:09.323 |
2013-08-20 16:04:09.323 | ======================================================================
2013-08-20 16:04:09.324 | FAIL: cli.simple_read_only.test_keystone.SimpleReadOnlyKeystoneClientTest.test_admin_help
2013-08-20 16:04:09.324 | ----------------------------------------------------------------------
2013-08-20 16:04:09.324 | _StringException: Traceback (most recent call last):
2013-08-20 16:04:09.324 | File "/opt/stack/new/tempest/cli/simple_read_only/test_keystone.py", line 93, in test_admin_help
2013-08-20 16:04:09.324 | self.assertTrue(lines[0].startswith('usage: keystone'))
2013-08-20 16:04:09.324 | File "/usr/lib/python2.7/unittest/case.py", line 420, in assertTrue
2013-08-20 16:04:09.324 | raise self.failureException(msg)
2013-08-20 16:04:09.325 | AssertionError: False is not true
2013-08-20 16:04:09.325 |
2013-08-20 16:04:09.325 | -------------------- >> begin captured logging << --------------------
2013-08-20 16:04:09.325 | 2013-08-20 16:04:06,870 running: '/usr/local/bin/keystone --os-username admin --os-tenant-name admin --os-password secret --os-auth-url http://127.0.0.1:5000/v2.0/ help '
2013-08-20 16:04:09.325 | --------------------- >> end captured logging << ---------------------
2013-08-20 16:04:09.325 |
2013-08-20 16:04:09.326 | ----------------------------------------------------------------------
2013-08-20 16:04:09.326 | XML: nosetests-cli.xml
2013-08-20 16:04:09.326 | ----------------------------------------------------------------------
2013-08-20 16:04:09.326 | Ran 54 tests in 33.234s
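For context on the assertion that fails above: test_admin_catalog_list runs 'keystone catalog', splits the output into per-service tables, and expects the label line in front of each table to start with 'Service:'. A rough sketch of that parsing idea (illustrative only, not the actual tempest cli.output_parser) shows why any stray line mixed into the command output, such as a library warning, can end up in '__label' and trip the assertion:

    # Illustrative sketch only -- not the real tempest cli.output_parser.
    def parse_catalog(output):
        """Split 'keystone catalog' output into per-service dicts."""
        services = []
        current = None
        for line in output.splitlines():
            if not line.strip():
                continue
            if not line.startswith('+') and not line.startswith('|'):
                # Any non-table line is taken as the label of the next table,
                # normally "Service: compute" etc.  If a stray warning line
                # shows up here instead, '__label' no longer starts with
                # 'Service:' and the assertTrue() in the test fails.
                current = {'__label': line.strip(), 'rows': []}
                services.append(current)
            elif current is not None and line.startswith('|'):
                current['rows'].append(
                    [col.strip() for col in line.split('|')[1:-1]])
        return services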
Hi Gary,
we should figure out which version of keystoneclient Tempest is using; I could not parse that information from the tempest log. At the same time, keystone jobs are failing with ImportError: No module named netaddr in common/jsonutils. That's a design issue with test_keystoneclient in Keystone: it checks out keystoneclient from git master, assuming dependencies didn't change since the last release, which isn't the case since [1] was merged after 0.3.1. A quick fix would be to release a new keystoneclient with updated requirements, so the new deps get pulled into the venv.
Besides those keystoneclient-related failures, we have Glance and Heat jobs randomly failing and returning to normal.
Cheers, Alan
[1] https://github.com/openstack/python-keystoneclient/commit/de72de3b809c53420d...
On Wed, Aug 21, 2013 at 4:00 AM, Alan Pevec apevec@gmail.com wrote:
A quick fix would be to release a new keystoneclient with updated requirements, so the new deps get pulled into the venv.
If that happens to be an acceptable fix, I was intending to release a new version of keystoneclient in the next day or so anyway.
Hi Dolph, please see https://github.com/openstack/tempest/blob/stable/grizzly/tools/pip-requires#... This could be the cause of the problem. I think we either need to lock down the client for the stable/grizzly tempest branch or add netaddr. What do you guys suggest? At the moment the reviews are piling up... Thanks, Gary
[Gary Kotton] netaddr is part of the requirements.
On Thu, Aug 22, 2013 at 6:13 AM, Gary Kotton gkotton@vmware.com wrote:
I think we either need to lock down the client for the stable/grizzly tempest branch or add netaddr. What do you guys suggest?
The latest clients should be continuously tested against the stable branches, so I'd rather not lock the client down if it can be avoided. Is there any reason *not* to allow netaddr as a client side dep?
[Gary Kotton] I saw that the netaddr is included - https://github.com/openstack/tempest/blob/stable/grizzly/tools/pip-requires#.... The failure in tempest is:
2013-08-21 07:24:31.018 | cli.simple_read_only.test_keystone.SimpleReadOnlyKeystoneClientTest.test_admin_user_list ... ok
2013-08-21 07:24:31.599 | cli.simple_read_only.test_keystone.SimpleReadOnlyKeystoneClientTest.test_admin_user_role_list ... ok
2013-08-21 07:24:31.635 |
2013-08-21 07:24:31.636 | ======================================================================
2013-08-21 07:24:31.636 | FAIL: cli.simple_read_only.test_keystone.SimpleReadOnlyKeystoneClientTest.test_admin_catalog_list
2013-08-21 07:24:31.636 | ----------------------------------------------------------------------
2013-08-21 07:24:31.637 | _StringException: Traceback (most recent call last):
2013-08-21 07:24:31.637 | File "/opt/stack/new/tempest/cli/simple_read_only/test_keystone.py", line 45, in test_admin_catalog_list
2013-08-21 07:24:31.637 | self.assertTrue(svc['__label'].startswith('Service:'))
2013-08-21 07:24:31.638 | File "/usr/lib/python2.7/unittest/case.py", line 420, in assertTrue
2013-08-21 07:24:31.638 | raise self.failureException(msg)
2013-08-21 07:24:31.638 | AssertionError: False is not true
On a related note, I'm ready to release keystoneclient 0.3.2 today:
https://launchpad.net/python-keystoneclient/+milestone/0.3.2
On Thu, Aug 22, 2013 at 6:24 AM, Gary Kotton gkotton@vmware.com wrote:
[Gary Kotton] I saw that the netaddr is included - https://github.com/openstack/tempest/blob/stable/grizzly/tools/pip-requires#...
I just noticed the same thing after I sent that.
I can't understand this failure without having the actual output tempest is seeing. I tried to reproduce manually but the output I got looked like it would pass this assertion. I'll try to repro via tempest today.
Dolph Mathews wrote:
On a related note, I'm ready to release keystoneclient 0.3.2 today:
https://launchpad.net/python-keystoneclient/+milestone/0.3.2
We have python-keystoneclient 0.3.2 available now... Together with adding netaddr to the test-requires*, should it be sufficient to work around the issue?
*: https://review.openstack.org/#/c/43402/
Got a couple security fixes lined up that need to land to stable/grizzly :)
Thierry Carrez wrote:
We have python-keystoneclient 0.3.2 available now... Together with adding netaddr to the test-requires, should it be sufficient to work around the issue?
We just -2ed a change that disables the tests, which is like the worst solution ever... as it makes it more difficult to fix the issue.
I mean, we can disable the tests, but it needs to be the result of a ML discussion, the only way out of the problem, and extremely temporary.
My understanding is that the combination of keystoneclient 0.3.2 (released a few hours ago) and the addition of netaddr to the test-requires (under review now, see link above) should fix the issue... but I'll readily admit that's more a thread summary than an analysis of the problem. Could someone confirm?
Thierry Carrez wrote:
We just -2ed a change that disables the tests, which is like the worst solution ever... as it makes it more difficult to fix the issue.
So I thought we were much closer to the solution than we actually are. Disabling the tests should always be an exceptional measure, but in this case it's IMHO warranted given the triviality of the test and how far we are from a solution after one week of breakage.
Pavel should give us an update on his current analysis of the underlying issue and its potential solution, hopefully it should give us all the right information so that we can help.
Cheers,
Thierry Carrez wrote:
Pavel should give us an update on his current analysis of the underlying issue and its potential solution, hopefully it should give us all the right information so that we can help.
Note: Dolph proposed to pin keyring to <2.0: https://review.openstack.org/#/c/43564/
2013/8/23 Thierry Carrez thierry@openstack.org:
Dolph Mathews wrote:
On a related note, I'm ready to release keystoneclient 0.3.2 today: https://launchpad.net/python-keystoneclient/+milestone/0.3.2
We have python-keystoneclient 0.3.2 available now... Together with
Unfortunately that didn't help since Keystone Grizzly has cap on keystoneclient <0.3 ...
adding netaddr to the test-requires*, should it be sufficient to work around the issue?
...so, yes, we need that workaround, let's get this in!
Cheers, Alan
On Tue, Aug 27, 2013 at 1:24 PM, Alan Pevec apevec@gmail.com wrote:
Unfortunately that didn't help since Keystone Grizzly has cap on keystoneclient <0.3 ...
...so, yes, we need that workaround, let's get this in!
So, we have to explicitly add any new requirements for keystoneclient master to keystone stable/grizzly's requirements file as long as stable/grizzly is supported?
Alan Pevec wrote:
...so, yes, we need that workaround, let's get this in!
So... at this point we did:
- pin keyring to < 2.0 in requirements (in master and stable/grizzly)
- add netaddr to keystone test-requires (in stable/grizzly)
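For reference, the two workarounds boil down to requirement entries along these lines (illustrative snippets only; the exact files and version bounds are in the linked reviews):

    # requirements (master and stable/grizzly): keep keyring off the broken 2.0
    keyring>=1.6.1,<2.0

    # keystone stable/grizzly test-requires: dependency pulled in by newer
    # keystoneclient
    netaddr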
I marked https://bugs.launchpad.net/keystone/+bug/1212939 Fix Committed.
Can we mark https://bugs.launchpad.net/devstack/+bug/1193164 Fix Committed too?
Is there anything else we need to do before we can reenable the tests that were temporarily disabled at: https://github.com/openstack/tempest/commit/27e8547c31f15708b3ff143a62eb9674... ?
2013/8/29 Thierry Carrez thierry@openstack.org:
Can we mark https://bugs.launchpad.net/devstack/+bug/1193164 Fix Committed too?
Please note https://bugs.launchpad.net/devstack/+bug/1217120, which wants to unpin keyring as soon as 1193164 is fixed, but the fix for 1193164 was pinning, so we're full circle :)
1217120 should probably depend on a fix in keyring, i.e. reopen 1197988 based on https://bugs.launchpad.net/ubuntu/+source/python-keyring/+bug/1197988/commen...
Is there anything else we need to do before we can reenable the tests that were temporarily disabled at: https://github.com/openstack/tempest/commit/27e8547c31f15708b3ff143a62eb9674... ?
AFAICT keystone cli tests can be re-enabled now, but I'd like to hear from Pavel first.
Cheers, Alan
Alan Pevec wrote:
AFAICT keystone cli tests can be re-enabled now, but I'd like to hear from Pavel first.
Pavel: any news? I would really like to re-enable the tests before I stop thinking about this issue to focus on the Havana release :)
Hi, sorry I missed this.
Yes we should be ok to remove those skips with keyring < 2.0.
But to stop thinking about it, we should first at least get https://bugs.launchpad.net/ubuntu/+source/python-keyring/+bug/1197988 moving, but I can't re-open it. So who can?
Also I've proposed a backport of complete cli-output logging to tempest stable/grizzly: https://review.openstack.org/#/c/46485/. I would suggest that the un-skip happen after that, so we don't find ourselves in the dark again.
Pavel Sedlak wrote:
Yes we should be ok to remove those skips with keyring < 2.0.
But to stop thinking about it, we should first at least get https://bugs.launchpad.net/ubuntu/+source/python-keyring/+bug/1197988 moving, but I can't re-open it. So who can?
You should probably file a separate bug about the reintroduction of the same issue in 2.0, referencing the original bug. That's better than reopening a bug that was (correctly) closed.
Also I've proposed a backport of complete cli-output logging to tempest stable/grizzly: https://review.openstack.org/#/c/46485/. I would suggest that the un-skip happen after that, so we don't find ourselves in the dark again.
OK, let's get some tempest eyes on this one...
Thierry Carrez wrote:
OK, let's get some tempest eyes on this one...
That change was merged... If you don't see any other obstacle, please push a change to unskip the tests... would be great to have that in before we actually do 2013.1.4.
Hi.
First, the unskip is proposed at https://review.openstack.org/#/c/49589/, but I'm worried that it will suffer from https://bugs.launchpad.net/neutron/+bug/1234181.
Second, the python-keyring issue seems to be already fixed in keyring>=2.1 as per https://bitbucket.org/kang/python-keyring-lib/issue/115/error-root-could-not... so we can continue with unpinning after that: https://bugs.launchpad.net/devstack/+bug/1217120
2013/8/21 Gary Kotton gkotton@vmware.com:
At the moment we have failures with keystone. I am not exactly sure what has happened.
Does anyone know how we can address this?
Adding Pavel, who wrote cli.output_parser according to git log. Tempest is using keystoneclient master, but I don't see any changes in cli output. Pavel, you can check the Tempest logs from the last failure [1], but afaict the complete output from subprocess.check_output is not logged, so I'm not sure what happened to confuse the cli parser.
Cheers, Alan
[1] http://logs.openstack.org/periodic/periodic-tempest-devstack-vm-stable-grizz...
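The missing output Alan mentions is what the cli-output logging backport discussed later in this thread (https://review.openstack.org/#/c/46485/) is about. As a rough illustration of the idea only, and not the actual tempest code or that patch, a CLI helper that logs everything the command produced before the parser sees it could look like this (names are hypothetical):

    import logging
    import shlex
    import subprocess

    LOG = logging.getLogger(__name__)

    def run_keystone_cli(action, flags=''):
        """Run a keystone CLI command and log its complete stdout/stderr."""
        cmd = '/usr/local/bin/keystone %s %s' % (flags, action)
        LOG.info("running: %r", cmd)
        proc = subprocess.Popen(shlex.split(cmd),
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        stdout, stderr = proc.communicate()
        # Log everything the command produced, so a failure such as
        # "Invalid line between tables: Service: compute" can be traced back
        # to the exact text the parser saw, stray warnings included.
        LOG.debug("stdout of %r:\n%s", cmd, stdout)
        LOG.debug("stderr of %r:\n%s", cmd, stderr)
        if proc.returncode != 0:
            raise RuntimeError("%r failed with %d: %s"
                               % (cmd, proc.returncode, stderr))
        return stdout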
On Wed, Aug 21, 2013 at 7:30 AM, Alan Pevec apevec@gmail.com wrote:
afaict the complete output from subprocess.check_output is not logged, so I'm not sure what happened to confuse the cli parser.
The latest version of keystoneclient looks like it would still pass this test. Here's my attempt to reproduce manually:
http://pasteraw.com/812w2wwnyhz6nf9b7o0kyzimpepfdo0
As Alan suggested, knowing what tempest is actually seeing from this command is critical to debugging it.
2013/8/21 Dolph Mathews dolph.mathews@gmail.com:
As Alan suggested, knowing what tempest is actually seeing from this command is critical to debugging it.
BTW this was already filed as https://bugs.launchpad.net/tempest/+bug/1213912
Cheers, Alan
Thanks. Sorry, but I have not found the cycles today to look into any of this.
Hi, sorry for the delay, I'm looking into it now.
After a quick try with master python-keystoneclient I was unable to reproduce it (but again on an instance older than the latest stable/grizzly).
I'm going to take https://bugs.launchpad.net/tempest/+bug/1213912.
Generally I fully support
Note: Dolph proposed to pin keyring to <2.0: https://review.openstack.org/#/c/43564/
as he also explained in https://bugs.launchpad.net/devstack/+bug/1193164. The reproducer there works (with basic gir1.2, python-gi etc. installed).
This 'GnomeKeyring' error appearing in the command output is a regression between python-keyring 1.6.1 and 2.0, caused by a faulty/feature-incomplete merge in https://bitbucket.org/kang/python-keyring-lib/diff/keyring/backends/Gnome.py...: there is no longer a 'supported' method, which used to do the availability check without the guilty 'import GnomeKeyring' directly.
Still, it seems that this message is printed by something below, like python-gi/gobject introspection etc.; python-keyring was previously doing a good job of not triggering it.
So for us, using 1.6.1 is a good solution. But as Matthew pointed out, this can and does happen on the master branch as well, even though there it does not break the tests (for whatever reason; I did not check many results). Still, IMO it's bad to have such an amount of unexpected messages in the output of our commands, so I would suggest pinning keyring to 1.6.1 also in master/global-requirements, as we currently have >=1.6.1 there and there seems to be no reason to require a higher version.
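The 'supported' pattern Pavel is referring to looks roughly like the following; this is an illustrative sketch of the approach, not the actual python-keyring code:

    # Sketch of the availability check: probe for the GI binding inside the
    # method instead of importing GnomeKeyring at module load time, so a
    # missing or broken binding only marks the backend as unsupported rather
    # than spraying warnings into the output of every CLI command.
    class GnomeKeyringBackend(object):
        """Illustrative backend sketch, not the real python-keyring class."""

        def supported(self):
            try:
                from gi.repository import GnomeKeyring  # noqa
            except Exception:
                return -1   # backend unusable on this system
            return 1        # backend usable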
Just to be safe I'm CCing a few people who may know more from the keystone/clients PoV, so they can warn us if there is any incoming requirements change from that side.
Thanks, Pavel. Although it would not have helped in this case because the regression was external, we are vulnerable to changes in client libraries breaking library compatibility with stable branches due to lack of testing. I put up a tempest blueprint in July about gating client libraries with runs from stable branches and adding bitrot jobs to make sure the libraries remain compatible: https://blueprints.launchpad.net/tempest/+spec/client-lib-stability.
Comments on that approach are appreciated and if it seems like a good idea I would like to get started.
-David
participants (6)
- Alan Pevec
- David Kranz
- Dolph Mathews
- Gary Kotton
- Pavel Sedlak
- Thierry Carrez