[OpenStack-Infra] OpenStack-Infra Digest, Vol 32, Issue 35
Zohaib ahmed hassan
zohaib.hassan78669 at gmail.com
Wed Apr 29 12:01:28 UTC 2015
We have a pep8 job in our CI system. When it fails, Zuul triggers it again
and the Jenkins server goes into a nonstop loop. We want a failed job to be
aborted and stopped rather than retried. Please help me solve this.
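A common cause of such a loop is a check pipeline whose Gerrit trigger matches the CI's own failure comment, so every -1 report re-queues the job. Below is a minimal sketch of a Zuul (v2-era) layout.yaml that triggers only on new patchsets and explicit "recheck" comments; the pipeline name and regex are illustrative, not taken from the reporter's actual configuration:

```yaml
# Hypothetical layout.yaml excerpt for a third-party CI (Zuul v2-era syntax).
# Triggering only on patchset-created and an explicit "recheck" comment
# prevents the pipeline from re-firing on its own failure report.
pipelines:
  - name: check
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        - event: patchset-created
        - event: comment-added
          comment: (?i)^\s*recheck\s*$
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1
```

If the trigger instead listens to comment-added events with no comment filter (or a filter broad enough to match the CI's own messages), each failure comment becomes a new trigger and the loop never ends.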
On Wed, Apr 29, 2015 at 5:00 PM, <
openstack-infra-request at lists.openstack.org> wrote:
> Send OpenStack-Infra mailing list submissions to
> openstack-infra at lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
> or, via email, send a message with subject or body 'help' to
> openstack-infra-request at lists.openstack.org
>
> You can reach the person managing the list at
> openstack-infra-owner at lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of OpenStack-Infra digest..."
>
>
> Today's Topics:
>
> 1. Re: [openstack-dev][cinder] Could you please re-consider
> Oracle ZFSSA iSCSI Driver (Diem Tran)
> 2. Re: Questions with respect to packaging (Steve Kowalik)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 28 Apr 2015 14:59:21 -0400
> From: Diem Tran <diem.tran at oracle.com>
> To: openstack-dev at lists.openstack.org
> Cc: openstack-infra at lists.openstack.org
> Subject: Re: [OpenStack-Infra] [openstack-dev][cinder] Could you
> please re-consider Oracle ZFSSA iSCSI Driver
> Message-ID: <553FD889.6010602 at oracle.com>
> Content-Type: text/plain; charset=windows-1252; format=flowed
>
> Dear Cinder team,
>
> A patchset has been uploaded to request re-integration of Oracle ZFSSA
> iSCSI Cinder driver to 2 branches: master and stable/kilo:
> https://review.openstack.org/#/c/178319
>
> Here are some success reports of the Oracle ZFSSA iSCSI CI:
> https://review.openstack.org/#/c/175809/
> https://review.openstack.org/#/c/176802/
> https://review.openstack.org/#/c/176930/
> https://review.openstack.org/#/c/174291/
> https://review.openstack.org/#/c/175077/
> https://review.openstack.org/#/c/176543/
>
> The complete reports list can be found here:
> https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z
>
> Please let me know if you have any questions.
>
> Thanks,
> Diem.
>
> On 04/21/2015 01:38 PM, Diem Tran wrote:
> >
> > On 04/21/2015 01:01 PM, Mike Perez wrote:
> >> On 09:57 Apr 21, Mike Perez wrote:
> >>> On 15:47 Apr 20, Diem Tran wrote:
> >>>> Hi Mike,
> >>>>
> >>>> Oracle ZFSSA iSCSI CI is now reporting test results. It is configured
> >>>> to run against the ZFSSA iSCSI driver. You can see the results here:
> >>>> https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z
> >>>>
> >>>> Below are some patchsets that the CI reports:
> >>>> https://review.openstack.org/#/c/168424/
> >>>> https://review.openstack.org/#/c/168419/
> >>>> https://review.openstack.org/#/c/175247/
> >>>> https://review.openstack.org/#/c/175077/
> >>>> https://review.openstack.org/#/c/163706/
> >>>>
> >>>>
> >>>> I would like to kindly request you and the core team to get the
> >>>> Oracle ZFSSA iSCSI driver re-integrated back to the Cinder code base.
> >>>> If there is anything else you need from the CI and the driver, please
> >>>> do let me know.
> >>> This was done on 4/8:
> >>>
> >>> https://review.openstack.org/#/c/170770/
> >> My mistake, this was only the NFS driver. The window to have drivers
> >> readded in Kilo has long since passed. Please see:
> >>
> >>
> http://lists.openstack.org/pipermail/openstack-dev/2015-March/059990.html
> >>
> >>
> >> This will have to be readded in Liberty only at this point.
> >
> > Thank you for your reply. Could you please let me know the procedure
> > needed for the driver to be readded to Liberty? Specifically, will you
> > be the one who uploads the revert patchset, or is it the driver
> > maintainer's responsibility?
> >
> > Diem.
> >
> >
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 29 Apr 2015 16:19:35 +1000
> From: Steve Kowalik <steven at wedontsleep.org>
> To: openstack-infra at lists.openstack.org
> Subject: Re: [OpenStack-Infra] Questions with respect to packaging
> Message-ID: <554077F7.1020803 at wedontsleep.org>
> Content-Type: text/plain; charset=windows-1252
>
> On 29/04/15 04:41, Monty Taylor wrote:
> > On 04/28/2015 02:02 PM, Clark Boylan wrote:
> >> On Mon, Apr 27, 2015, at 11:13 PM, Steve Kowalik wrote:
>
> [snip]
>
> >>> * Decide if we host full upstream sources, or if we point at a remote
> >>> site with tarballs to extract.
> >> I don't have much of an opinion here other than I think it would be
> >> unfun to maintain local forks of all the things we build packages for. I
> >> am inclined to say point at remote locations until we need to do
> >> otherwise for some reason.
> >
> > I agree with Clark - except I see both needs.
> >
> > I can imagine a few different things we might want to build and/or host:
> >
> > - temporary packages of things where the packaging source lives
> > elsewhere (think "run the version of libvirt that's in debian unstable
> > on our ubuntu trusty build hosts, but stop keeping a copy as soon as it
> > hits an official repo")
> >
> > - temporary packages of things where we need to modify the packaging in
> > some manner (think "the nova team wants a version of libvirt that does
> > not exist anywhere yet, and us installing it on our build hosts is part
> > of the step needed to show that it's worthwhile putting into the next
> > release of a distro - but we'll consume from the upstream distro when it
> > gets there")
> >
> > - per-commit or per-tag versions of things where we are the upstream but
> > the packaging repo is not hosted in gerrit ("think building a package of
> > nova on each commit and shoving it somewhere and using the installation
> > of that package as the basis for doing gate testing")
> >
> > - per-commit or per-tag versions of things where we are the upstream and
> > where the packaging is hosted in gerrit ("think infra running CD
> > deployments of zuul per-commit but building a deb of it first")
> >
> > So I think the answer here is complex and needs to accommodate some
> > packages where we want to have a packaging repository that is a
> > first-class citizen in our infrastructure, and some things where we do
> > not want to import a packaging repository into gerrit but instead want
> > to either reference an external packaging repository, or even just
> > generate some packages with no packaging repository based on other forms
> > of scripting.
>
> It also sounds like we want to support multiple builds per package --
> like your example above of different libvirts. So we have
> openstack/package-libvirt (bikeshedding about the name goes here), and
> multiple branches on gerrit? Sounds like it could work.
>
> > Once we have that sketched out, it should then be pretty clear which
> > repos need to exist and where those repos need to go.
> >
> >>> * Decide how we are going to trigger builds of packages. Per commit may
> >>> not be the most useful step, since committers may want to pull multiple
> >>> disparate changes together into one changelog entry, and hence one
> >>> build.
> >> If a single commit does not produce a useful artifact then it is an
> >> incomplete commit. We should be building packages on every commit, we
> >> may not publish them on every commit but we should build them on every
> >> commit (this is one way to gate for package changes). If we need to
> >> control publishing independent of commits we can use tags to publish
> >> releases.
> >
> > I agree with clarkb. I would prefer that we build a package on every
> > commit and that we "publish" those packages on a tag, like what works
> > with all of our other things.
> >
> > I think, as with the other things, there is a case here where we also publish
> > to a location per-commit. Again, zuul comes to mind as an example of
> > wanting to do that. If a commit to zuul requires a corresponding commit
> > and tag to deploy to our zuul server, then we have lost functionality.
> > That said, if we have packaging infrastructure and want to provide a
> > zuul repo so that other people can run "releases" - then I could see
> > tagged versions publishing to a different repo than untagged versions.
> >>>
> >>> * Decide where to host the resultant package repositories.
> >>>
> >>> * Decide how to deal with removal of packages from the package
> >>> repositories.
> >> I would like to see us building packages before we worry too much about
> >> hosting them. In the past everyone has wanted to jump straight to these
> >> problems when we really just need to sort out building packages first.
> >
> > I agree and disagree with Clark. I think the above questions need to get
> > sorted first. However, I'd like to mention that, again, there are a few
> > different use cases.
> >
> > One could be a long-lived repo for things we care and feed, like zuul,
> > that is publicly accessible with no warnings.
> >
> > One could be a repo into which we put, for instance, per-commit packages
> > of OpenStack because we've decided that we want to run infra-cloud that
> > way. OR - a repo that we use to run infra-cloud that is only published
> > to when we tag something.
> >
> > One could be a per-build repo - where step one in a devstack-gate run is
> > building packages from the repos that zuul has prepared - and then we
> > make that repo available to all of the jobs that are associated with
> > that zuul ref.
> >
> > One could be a long-lived repo that contains some individually curated
> > but automatically built things such as a version of libvirt that the
> > nova team requests. Such a repo would need to be made publicly
> > available such that developers working on OpenStack could run devstack
> > and it would get the right version.
> >
> > Finally - we should also always keep in mind that whatever our solution
> > it needs to be able to handle debian, ubuntu, fedora and centos.
>
> Absolutely. However, given the above, I think the next step is to split
> this specification into two: the first about storing packaging in
> gerrit and building packages for Ubuntu/Debian/CentOS/etc., and the
> second extending that to say that we have these great packages and
> wouldn't it be awesome if people could consume them.
>
> Since the steps for building the packages are rather uncontentious, I
> shall push up a specification for doing the same within the next day or
> two, if no one disagrees with my splitting plan.
>
> >> But if I were to look ahead I would expect that we would run per distro
> >> package mirrors to host our packages and possibly mirror distro released
> >> packages as well.
> >
> > And yes - as Clark says, we may also (almost certainly do based on this
> > morning) want to have mirrors of each of the upstream distro package
> > repos that we based things on top of.
>
> I think that's separate again, but I'd be delighted to help (and
> perhaps spec up) a plan to run and update distribution mirrors for
> infra use.
>
> > Monty
>
>
>
> --
> Steve
> "I'm a doctor, not a doorstop!"
> - EMH, USS Enterprise
>
>
>
> ------------------------------
>
> _______________________________________________
> OpenStack-Infra mailing list
> OpenStack-Infra at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>
>
> End of OpenStack-Infra Digest, Vol 32, Issue 35
> ***********************************************
>
--
Thanks & Regards
Zohaib Ahmed Hassan
Python & OpenStack Developer @Tecknox Systems