<div dir="ltr"><span style="color:rgb(75,75,75);font-family:'Helvetica Neue',Arial,Helvetica,sans-serif;font-size:14px;line-height:19.6000003814697px">We have a pep8 job in our CI system. When it fails, Zuul triggers it again and the Jenkins server goes into a nonstop loop. We would like a failed job to simply abort and stop rather than be retriggered. Please help us solve this.</span><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Apr 29, 2015 at 5:00 PM, <span dir="ltr"><<a href="mailto:openstack-infra-request@lists.openstack.org" target="_blank">openstack-infra-request@lists.openstack.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Send OpenStack-Infra mailing list submissions to<br>
<a href="mailto:openstack-infra@lists.openstack.org">openstack-infra@lists.openstack.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:openstack-infra-request@lists.openstack.org">openstack-infra-request@lists.openstack.org</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:openstack-infra-owner@lists.openstack.org">openstack-infra-owner@lists.openstack.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of OpenStack-Infra digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Re: [openstack-dev][cinder] Could you please re-consider<br>
Oracle ZFSSA iSCSI Driver (Diem Tran)<br>
2. Re: Questions with respect to packaging (Steve Kowalik)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Tue, 28 Apr 2015 14:59:21 -0400<br>
From: Diem Tran <<a href="mailto:diem.tran@oracle.com">diem.tran@oracle.com</a>><br>
To: <a href="mailto:openstack-dev@lists.openstack.org">openstack-dev@lists.openstack.org</a><br>
Cc: <a href="mailto:openstack-infra@lists.openstack.org">openstack-infra@lists.openstack.org</a><br>
Subject: Re: [OpenStack-Infra] [openstack-dev][cinder] Could you<br>
please re-consider Oracle ZFSSA iSCSI Driver<br>
Message-ID: <<a href="mailto:553FD889.6010602@oracle.com">553FD889.6010602@oracle.com</a>><br>
Content-Type: text/plain; charset=windows-1252; format=flowed<br>
<br>
Dear Cinder team,<br>
<br>
A patchset has been uploaded to request re-integration of Oracle ZFSSA<br>
iSCSI Cinder driver to 2 branches: master and stable/kilo:<br>
<a href="https://review.openstack.org/#/c/178319" target="_blank">https://review.openstack.org/#/c/178319</a><br>
<br>
Here are some success reports of the Oracle ZFSSA iSCSI CI:<br>
<a href="https://review.openstack.org/#/c/175809/" target="_blank">https://review.openstack.org/#/c/175809/</a><br>
<a href="https://review.openstack.org/#/c/176802/" target="_blank">https://review.openstack.org/#/c/176802/</a><br>
<a href="https://review.openstack.org/#/c/176930/" target="_blank">https://review.openstack.org/#/c/176930/</a><br>
<a href="https://review.openstack.org/#/c/174291/" target="_blank">https://review.openstack.org/#/c/174291/</a><br>
<a href="https://review.openstack.org/#/c/175077/" target="_blank">https://review.openstack.org/#/c/175077/</a><br>
<a href="https://review.openstack.org/#/c/176543/" target="_blank">https://review.openstack.org/#/c/176543/</a><br>
<br>
The complete reports list can be found here:<br>
<a href="https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z" target="_blank">https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z</a><br>
<br>
Please let me know if you have any questions.<br>
<br>
Thanks,<br>
Diem.<br>
<br>
On 04/21/2015 01:38 PM, Diem Tran wrote:<br>
><br>
> On 04/21/2015 01:01 PM, Mike Perez wrote:<br>
>> On 09:57 Apr 21, Mike Perez wrote:<br>
>>> On 15:47 Apr 20, Diem Tran wrote:<br>
>>>> Hi Mike,<br>
>>>><br>
>>>> Oracle ZFSSA iSCSI CI is now reporting test results. It is configured<br>
>>>> to run against the ZFSSA iSCSI driver. You can see the results here:<br>
>>>> <a href="https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z" target="_blank">https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z</a><br>
>>>><br>
>>>> Below are some patchsets that the CI reports:<br>
>>>> <a href="https://review.openstack.org/#/c/168424/" target="_blank">https://review.openstack.org/#/c/168424/</a><br>
>>>> <a href="https://review.openstack.org/#/c/168419/" target="_blank">https://review.openstack.org/#/c/168419/</a><br>
>>>> <a href="https://review.openstack.org/#/c/175247/" target="_blank">https://review.openstack.org/#/c/175247/</a><br>
>>>> <a href="https://review.openstack.org/#/c/175077/" target="_blank">https://review.openstack.org/#/c/175077/</a><br>
>>>> <a href="https://review.openstack.org/#/c/163706/" target="_blank">https://review.openstack.org/#/c/163706/</a><br>
>>>><br>
>>>><br>
>>>> I would like to kindly request you and the core team to get the<br>
>>>> Oracle ZFSSA iSCSI driver re-integrated back to the Cinder code base.<br>
>>>> If there is anything else you need from the CI and the driver, please<br>
>>>> do let me know.<br>
>>> This was done on 4/8:<br>
>>><br>
>>> <a href="https://review.openstack.org/#/c/170770/" target="_blank">https://review.openstack.org/#/c/170770/</a><br>
>> My mistake, this was only the NFS driver. The window to have drivers<br>
>> re-added in Kilo has long passed. Please see:<br>
>><br>
>> <a href="http://lists.openstack.org/pipermail/openstack-dev/2015-March/059990.html" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/2015-March/059990.html</a><br>
>><br>
>><br>
>> This will have to be readded in Liberty only at this point.<br>
><br>
> Thank you for your reply. Could you please let me know the procedure<br>
> needed for the driver to be re-added to Liberty? Specifically, will you<br>
> be the one who uploads the revert patchset, or is it the driver<br>
> maintainer's responsibility?<br>
><br>
> Diem.<br>
><br>
><br>
<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Wed, 29 Apr 2015 16:19:35 +1000<br>
From: Steve Kowalik <<a href="mailto:steven@wedontsleep.org">steven@wedontsleep.org</a>><br>
To: <a href="mailto:openstack-infra@lists.openstack.org">openstack-infra@lists.openstack.org</a><br>
Subject: Re: [OpenStack-Infra] Questions with respect to packaging<br>
Message-ID: <<a href="mailto:554077F7.1020803@wedontsleep.org">554077F7.1020803@wedontsleep.org</a>><br>
Content-Type: text/plain; charset=windows-1252<br>
<br>
On 29/04/15 04:41, Monty Taylor wrote:<br>
> On 04/28/2015 02:02 PM, Clark Boylan wrote:<br>
>> On Mon, Apr 27, 2015, at 11:13 PM, Steve Kowalik wrote:<br>
<br>
[snip]<br>
<br>
>>> * Decide if we host full upstream sources, or if we point at a remote<br>
>>> site with tarballs to extract.<br>
>> I don't have much of an opinion here other than I think it would be<br>
>> unfun to maintain local forks of all the things we build packages for. I<br>
>> am inclined to say point at remote locations until we need to do<br>
>> otherwise for some reason.<br>
><br>
> I agree with Clark - except I see both needs.<br>
><br>
> I can imagine a few different things we might want to build and/or host:<br>
><br>
> - temporary packages of things where the packaging source lives<br>
> elsewhere (think "run the version of libvirt that's in debian unstable<br>
> on our ubuntu trusty build hosts, but stop keeping a copy as soon as it<br>
> hits an official repo")<br>
><br>
> - temporary packages of things where we need to modify the packaging in<br>
> some manner (think "the nova team wants a version of libvirt that does<br>
> not exist anywhere yet, and us installing it on our build hosts is part<br>
> of the step needed to show that it's worthwhile putting into the next<br>
> release of a distro - but we'll consume from the upstream distro when it<br>
> gets there")<br>
><br>
> - per-commit or per-tag versions of things where we are the upstream but<br>
> the packaging repo is not hosted in gerrit ("think building a package of<br>
> nova on each commit and shoving it somewhere and using the installation<br>
> of that package as the basis for doing gate testing")<br>
><br>
> - per-commit or per-tag versions of things where we are the upstream and<br>
> where the packaging is hosted in gerrit ("think infra running CD<br>
> deployments of zuul per-commit but building a deb of it first")<br>
><br>
> So I think the answer here is complex and needs to accommodate some<br>
> packages where we want to have a packaging repository that is a<br>
> first-class citizen in our infrastructure, and some things where we do<br>
> not want to import a packaging repository into gerrit but instead want<br>
> to either reference an external packaging repository, or even just<br>
> generate some packages with no packaging repository based on other forms<br>
> of scripting.<br>
<br>
It also sounds like we want to support multiple builds per package --<br>
like your example above of different libvirts. So we have<br>
openstack/package-libvirt (bikeshedding about the name goes here), and<br>
multiple branches on gerrit? Sounds like it could work.<br>
<br>
> Once we have that sketched out, it should then be pretty clear which<br>
> repos need to exist and where they need to go.<br>
><br>
>>> * Decide how we are going to trigger builds of packages. Per commit may<br>
>>> not be the most useful step, since committers may want to pull multiple<br>
>>> disparate changes together into one changelog entry, and hence one<br>
>>> build.<br>
>> If a single commit does not produce a useful artifact then it is an<br>
>> incomplete commit. We should be building packages on every commit, we<br>
>> may not publish them on every commit but we should build them on every<br>
>> commit (this is one way to gate for package changes). If we need to<br>
>> control publishing independent of commits we can use tags to publish<br>
>> releases.<br>
><br>
> I agree with clarkb. I would prefer that we build a package on every<br>
> commit and that we "publish" those packages on a tag, as works with<br>
> all of our other things.<br>
><br>
> I think, as with the other things, there is a case here where we also publish<br>
> to a location per-commit. Again, zuul comes to mind as an example of<br>
> wanting to do that. If a commit to zuul requires a corresponding commit<br>
> and tag to deploy to our zuul server, then we have lost functionality.<br>
> That said, if we have packaging infrastructure and want to provide a<br>
> zuul repo so that other people can run "releases" - then I could see<br>
> tagged versions publishing to a different repo than untagged versions.<br>
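The build-a-package-on-every-commit, publish-only-on-a-tag flow described above could be wired up in a Zuul v2 layout roughly as follows. This is only an illustrative sketch; the pipeline and job names (`release`, `zuul-build-deb`, `zuul-publish-deb`) are hypothetical, not existing infra configuration:<br>

```yaml
# Hypothetical Zuul v2 layout.yaml excerpt: every commit builds the
# package (check and gate), while publishing runs only from a pipeline
# triggered by a pushed tag.
pipelines:
  - name: release
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        - event: ref-updated
          ref: ^refs/tags/.*$        # fire only when a tag is pushed

projects:
  - name: openstack-infra/zuul
    check:
      - zuul-build-deb               # build per commit; the build itself is the check
    gate:
      - zuul-build-deb
    release:
      - zuul-build-deb
      - zuul-publish-deb             # upload the built package to the repo
```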
>>><br>
>>> * Decide where to host the resultant package repositories.<br>
>>><br>
>>> * Decide how to deal with removal of packages from the package<br>
>>> repositories.<br>
>> I would like to see us building packages before we worry too much about<br>
>> hosting them. In the past everyone has wanted to jump straight to these<br>
>> problems when we really just need to sort out building packages first.<br>
><br>
> I agree and disagree with Clark. I think the above questions need to get<br>
> sorted first. However, I'd like to mention that, again, there are a few<br>
> different use cases.<br>
><br>
> One could be a long-lived repo for things we care for and feed, like zuul,<br>
> that is publicly accessible with no warnings.<br>
><br>
> One could be a repo into which we put, for instance, per-commit packages<br>
> of OpenStack because we've decided that we want to run infra-cloud that<br>
> way. OR - a repo that we use to run infra-cloud that is only published<br>
> to when we tag something.<br>
><br>
> One could be a per-build repo - where step one in a devstack-gate run is<br>
> building packages from the repos that zuul has prepared - and then we<br>
> make that repo available to all of the jobs that are associated with<br>
> that zuul ref.<br>
><br>
> One could be a long-lived repo that contains some individually curated<br>
> but automatically built things such as a version of libvirt that the<br>
> nova team requests. Such a repo would need to be made publicly<br>
> available such that developers working on OpenStack could run devstack<br>
> and it would get the right version.<br>
><br>
> Finally - we should also always keep in mind that whatever our solution<br>
> it needs to be able to handle debian, ubuntu, fedora and centos.<br>
<br>
Absolutely. However, given the above, I think the next step is to split<br>
this specification into two: the first about storing packaging in<br>
gerrit and building packages for Ubuntu/Debian/CentOS/etc, and the<br>
second extending that to say that we have these great packages and<br>
wouldn't it be awesome if people could consume them.<br>
<br>
Since the steps for building the packages are rather uncontentious, I<br>
shall push up a specification for doing so within the next day or two,<br>
if no one disagrees with my splitting plan.<br>
<br>
>> But if I were to look ahead I would expect that we would run per distro<br>
>> package mirrors to host our packages and possibly mirror distro released<br>
>> packages as well.<br>
><br>
> And yes - as Clark says, we may also (almost certainly do based on this<br>
> morning) want to have mirrors of each of the upstream distro package<br>
> repos that we based things on top of.<br>
<br>
I think that's separate again, but I'd be delighted to help (and<br>
perhaps spec up) a plan to run and update distribution mirrors for<br>
infra use.<br>
<br>
> Monty<br>
<br>
<br>
<br>
--<br>
Steve<br>
"I'm a doctor, not a doorstop!"<br>
- EMH, USS Enterprise<br>
<br>
<br>
<br>
------------------------------<br>
<br>
_______________________________________________<br>
OpenStack-Infra mailing list<br>
<a href="mailto:OpenStack-Infra@lists.openstack.org">OpenStack-Infra@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra</a><br>
<br>
<br>
End of OpenStack-Infra Digest, Vol 32, Issue 35<br>
***********************************************<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Thanks & Regards</div>Zohaib Ahmed Hassan<br></div><div>Python & OpenStack Developer @Tecknox Systems<br></div></div></div></div>
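P.S. Regarding the pep8 retrigger loop described at the top of this message: one common cause in Zuul v2 setups (a guess, since our layout is not shown above) is a `comment-added` trigger broad enough to match Zuul's own failure comment, so every failure report re-enqueues the change. A minimal sketch of a check pipeline that runs once per patchset and retries only on an explicit "recheck" comment:<br>

```yaml
# Hypothetical Zuul v2 layout.yaml excerpt. A check pipeline should run
# once per patchset; a comment-added trigger that also matches Zuul's
# own "Build failed" comment will re-enqueue the change forever.
pipelines:
  - name: check
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        - event: patchset-created        # run once per new patchset
        - event: comment-added
          comment: (?i)^\s*recheck\s*$   # retry only on an explicit "recheck"
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1                     # report the failure and stop
```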
</div>