[openstack-dev] [all][qa][glance] some recent tempest problems
Eric Harney
eharney at redhat.com
Fri Jun 16 13:58:24 UTC 2017
On 06/15/2017 10:51 PM, Ghanshyam Mann wrote:
> On Fri, Jun 16, 2017 at 9:43 AM, <zhu.fanglei at zte.com.cn> wrote:
>> https://review.openstack.org/#/c/471352/ may be an example
>
> If this is a case which is Ceph-related, I think we already discussed
> this kind of case, where functionality depends on the backend storage,
> and how to handle the corresponding test failures [1].
>
> The solution there was that the Ceph job should exclude, via regex, the
> test cases whose functionality is not implemented/supported in Ceph.
> Jon Bernard is working on this test blacklist [2].
>
> If there is any other job or case, then we can discuss/think about
> running that job on the Tempest gate as well, which I think we do in
> most cases.
>
> And about making the Ceph job voting: I remember we did not do that due
> to the stability of the job. The Ceph job fails frequently; once Jon's
> patches merge and the job is consistently stable, then we can make it
> voting.
>
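As an aside, the blacklist gmann mentions is just a file of test-name
regexes that the Ceph job feeds to tempest (e.g. via tempest run's
--blacklist-file option, or however the job ends up wiring it in).
A rough, illustrative sketch only -- the names below are placeholders,
not the actual list being assembled in [2]:

    # ceph-blacklist.txt: skip tests for functionality the Ceph
    # backend doesn't implement; one regex per line
    ^tempest\.api\.volume\.admin\.test_volume_retype_with_migration.*
    ^tempest\.api\.compute\.admin\.test_volume_swap.*
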
At a quick look, I'm not convinced yet that this failure is purely
Ceph-specific.
I think what happens here is that unshelve performs an asynchronous
delete of a Glance image and returns success before the delete has
necessarily completed. The check in tempest then sees that the image
still exists and fails -- but that check isn't valid, because the
unshelve API doesn't guarantee that the image is gone by the time the
call returns. The test would fail against any backend where the image
delete isn't instantaneous.
Is there a guarantee anywhere that the unshelve API behaves how this
tempest test expects it to?
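If the test really does want to verify that the image eventually goes
away, it would need to poll for the deletion rather than assert
immediately after unshelve returns. A minimal sketch of what such a
wait could look like -- not the actual tempest code, and the client
and helper names here are assumptions based on tempest's v2 images
client:

    import time

    from tempest.lib import exceptions as lib_exc

    def wait_for_image_deletion(images_client, image_id,
                                timeout=60, interval=2):
        # Poll Glance until the image is gone (or we time out),
        # since unshelve may return before the async delete finishes.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                images_client.show_image(image_id)  # still present
            except lib_exc.NotFound:
                return True
            time.sleep(interval)
        return False

Even then, I'm not sure the test should be checking this at all if the
API makes no such promise.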
>>
>>
>> Original Mail
>> Sender: <sean at dague.net>;
>> To: <openstack-dev at lists.openstack.org>;
>> Date: 2017/06/16 05:25
>> Subject: Re: [openstack-dev] [all][qa][glance] some recent tempest problems
>>
>>
>> On 06/15/2017 01:04 PM, Brian Rosmaita wrote:
>>> This isn't a glance-specific problem, though we've encountered it quite
>>> a few times recently.
>>>
>>> Briefly, we're gating on Tempest jobs that tempest itself does not
>>> gate on. This leads to a situation where new tests can be merged in
>>> tempest, but wind up breaking our gate. We aren't claiming that the
>>> added tests are bad or don't provide value; the problem is that we
>>> have to drop everything and fix the gate. This interrupts our current
>>> work and forces us to prioritize bugs to fix based not on what makes
>>> the most sense for the project given current priorities and resources,
>>> but based on whatever we can do to get the gates un-blocked.
>>>
>>> As we said earlier, this situation seems to be impacting multiple
>>> projects.
>>>
>>> One solution for this is to change our gating so that we do not run
>>> any Tempest jobs against Glance repositories that are not also gated
>>> by Tempest. That would in theory open a regression path, which is why
>>> we haven't put up a patch yet. Another way this could be addressed is
>>> by the Tempest team changing the non-voting jobs causing this
>>> situation into voting jobs, which would prevent such changes from
>>> being merged in the first place. The key issue here is that we need
>>> to be able to prioritize bugs based on what's most important to each
>>> project.
>>>
>>> We want to be clear that we appreciate the work the Tempest team does.
>>> We abhor bugs and want to squash them too. The problem is just that
>>> we're stretched pretty thin with resources right now, and being forced
>>> to prioritize bug fixes that will get our gate un-blocked is
>>> interfering with our ability to work on issues that may have a higher
>>> impact on end users.
>>>
>>> The point of this email is to find out whether anyone has a better
>>> suggestion for how to handle this situation.
>>
>> It would be useful to provide detailed examples. Everything is
>> trade-offs, and when the conversation stays abstract it is very
>> difficult to understand those trade-offs.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116172.html
>
> [2] https://review.openstack.org/#/c/459774/ ,
>     https://review.openstack.org/#/c/459445/
>
>
> -gmann
>