From iwienand at redhat.com Mon Mar 5 03:07:43 2018 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 5 Mar 2018 14:07:43 +1100 Subject: [OpenStack-Infra] [infra][nova] Corrupt nova-specs repo In-Reply-To: References: Message-ID: <25c6c44c-600e-2c29-408d-da7dddf2f3b0@redhat.com>
On 06/30/2017 04:11 PM, Ian Wienand wrote: > Unfortunately it seems the nova-specs repo has undergone some > corruption, currently manifesting itself in an inability to be pushed > to github for replication. We haven't cleaned this up, due to wanting to do it during a rename transition which hasn't happened yet due to zuulv3 rollout. We had reports that github replication was not working. Upon checking the queue, nova-specs was suspicious. ... 07141063 Mar-02 08:04 (retry 3810) [d7122c96] push git at github.com:openstack/nova-specs.git 4e27c57e waiting .... Mar-02 08:12 [ee1b1935] push git at github.com:openstack/networking-bagpipe.git ... so on ... Checking out the logs, nova-specs tries to push itself and fails constantly, per the previous mail. However, usually we get an error and things continue on; e.g. [2018-03-02 08:04:56,439] [d7122c96] Cannot replicate to git at github.com:openstack/nova-specs.git org.eclipse.jgit.errors.TransportException: git at github.com:openstack/nova-specs.git: error occurred during unpacking on the remote end: index-pack abnormal exit Something seems to have happened at [2018-03-02 08:05:58,065] [d7122c96] Push to git at github.com:openstack/nova-specs.git references: Because this never returned an error, or seemingly never returned at all. From that point, no more attempts were made by the replication thread(s) to push to github; jobs were queued but nothing happened. I killed that task, but no progress appeared to be made and the replication queue continued to climb. I couldn't find any other useful messages in the logs; but they would be around that time if they were there. I've restarted gerrit and replication appears to be moving again. I'm thinking maybe we should attempt to fix this separately from renames, because at a minimum it makes debugging quite hard as it floods the logs. I'll bring it up in this week's meeting. -i
From cboylan at sapwetik.org Wed Mar 7 13:54:38 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 07 Mar 2018 05:54:38 -0800 Subject: [OpenStack-Infra] Dublin PTG recap Message-ID: <1520430878.866172.1294671248.0C9A27CF@webmail.messagingengine.com>
I've made it home after the PTG and am jet lagged which means it is a great time to try and recap some of what happened at the PTG. I'm not going to go into a ton of detail for each topic as I think that would make this turn into a novel but if people are interested in specific items feel free to start new threads for them and we can dig in more there (this has already happened for at least one or two topics). Also you can refer back to the PTG etherpad, https://etherpad.openstack.org/p/infra-rocky-ptg, for more information and notes that were taken at the time.
First thing that everyone should know is that the weather did not cooperate with us and resulted in much travel uncertainty and disruption. I mention this so that we aren't surprised if some people are slow to respond or otherwise AFK while finding their way home. It also meant that by Thursday we had largely abandoned our PTG schedule. Apologies if this meant that a topic you were interested in was not covered.
For this PTG we ended up having three major themes. There was the cross project (helproom) time, zuul topics, and infra services conversations.
I think we managed to do a reasonable job covering these themes despite the weather and illness. During the cross project conversations we were able to spend time helping projects like neutron, ironic, swift, and glance better take advantage of Zuul v3. We covered how the gate works, how to convert jobs to native Zuul v3, cleaning up old unused jobs, and how to run multinode jobs. We also had a conversation with keystone about how they might run performance testing on infra. There was also quite a bit of time spent working with the QA team to work out how multinode devstack would work in Zuul v3, how to make irrelevant files lists more intuitive, and how to wind down grenade testing on our oldest supported branch. Kashyapc brought up nested virt support again which is becoming more important with post meltdown/spectre performance slowdowns. This resulted in a few of us crashing the public cloud working group's room for a discussion on how to have better communication between devs and clouds. Rough plan there is to spin up a new neutral mailing list to spark conversations over tools like nested virt between all the involved parties. We covered a number of Zuul topics ranging from scaling out the scheduler to dashboard improvements to tenant label restrictions and executor affinity. None of these topics seemed controversial and I think we captured good notes on the etherpad for these topics in particular. I will call out the tenant label restrictions and executor affinity features as items that seemed to come up quite a bit from various users for various use cases so I think the importance of these features may have gone up after the PTG. One Zuul topic that we couldn't quite get into due to illness and last minute travel changes was Zuul support for containers. As I understand it there was some pre PTG talk about this, but we should probably try to have a proper discussion on the mailing list once people are back to normal operating hours. For infra services we talked about upgrading Gerrit, better multi arch support, and rolling out Bionic. The rough plan (details on etherpad) for the Gerrit upgrade is to update the operating system for Java 8 this cycle then early next cycle upgrade Gerrit to 2.14 or 2.15 depending on which process is simpler for us (we will have to test that between now and then). Multi arch support is now something we have to think about with the arm64 Linaro cloud roll out. We seem to have largely decided that things are moving and largely working and we'll tackle problems as they come up. We also have Bionic beta images available. The one restriction we'd like to see there is to not gate on them but projects can (and in some cases should) go ahead and start using them to determine future compatibility particularly with python3.6. One last major item that came up was how the OpenStack Foundation's CI/CD focus area will affect the infra team. This was a topic at the board meeting in Dublin which I had to miss due to helping run the helproom, but there were discussions with members of the infra team later in the week. The most concrete result of that seemed to be a shared understanding of three facets to this focus area: 1) The how to and best practices of doing CI/CD properly and effectively 2) Zuul and related software as a set of tools to enable (1) and 3) the current set of services run by the infra team which may be useful to developers outside of OpenStack. 
We've been promised a proper thread of its own from the foundation to start this conversation more broadly with the infra team soon so keep an eye out for that. I've probably forgotten/skipped/missed other topics that are important and worth calling out. It was a long week and I'm now jetlagged so you have my apologies. Feel free to respond to this thread or make a new one to call things out or fill in details. It was great seeing everyone, hope you all made it home ok, and I'm excited we now have this large list of TODOs to get done over the next cycle :) Thanks, Clark From anteaya at anteaya.info Wed Mar 7 14:29:04 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Wed, 7 Mar 2018 09:29:04 -0500 Subject: [OpenStack-Infra] Dublin PTG recap In-Reply-To: <1520430878.866172.1294671248.0C9A27CF@webmail.messagingengine.com> References: <1520430878.866172.1294671248.0C9A27CF@webmail.messagingengine.com> Message-ID: <57c4a56b-11c4-5f03-7f45-2479d573440b@anteaya.info> On 2018-03-07 08:54 AM, Clark Boylan wrote: > I've made it home after the PTG and am jet lagged which means it is a great time to try and recap some of what happened at the PTG. I'm not going to go into a ton of detail for each topic as I think that would make this turn into a novel but if people are interested in specific items feel free to start new threads for them and we can dig in more there (this has already happened for at least one or two topics). Also you can refer back to the PTG etherpad, https://etherpad.openstack.org/p/infra-rocky-ptg, for more information and notes that were taken at the time. > > First thing that everyone should know is that the weather did not cooperate with us and resulted in much travel uncertainty and disruption. Mention this so that we aren't surprised if some people are slow to respond or otherwise AFK while finding their way home. It also meant that by Thursday we had largely abandoned our PTG schedule. Apologies if this meant that a topic you were interested in was not covered. > > For this PTG we ended up having three major themes. There was the cross project (helproom) time, zuul topics, and infra services conversations. I think we managed to do a reasonable job covering these themes despite the weather and illness. > > During the cross project conversations we were able to spend time helping projects like neutron, ironic, swift, and glance better take advantage of Zuul v3. We covered how the gate works, how to convert jobs to native Zuul v3, cleaning up old unused jobs, and how to run multinode jobs. We also had a conversation with keystone about how they might run performance testing on infra. There was also quite a bit of time spent working with the QA team to work out how multinode devstack would work in Zuul v3, how to make irrelevant files lists more intuitive, and how to wind down grenade testing on our oldest supported branch. > > Kashyapc brought up nested virt support again which is becoming more important with post meltdown/spectre performance slowdowns. This resulted in a few of us crashing the public cloud working group's room for a discussion on how to have better communication between devs and clouds. Rough plan there is to spin up a new neutral mailing list to spark conversations over tools like nested virt between all the involved parties. > > We covered a number of Zuul topics ranging from scaling out the scheduler to dashboard improvements to tenant label restrictions and executor affinity. 
None of these topics seemed controversial and I think we captured good notes on the etherpad for these topics in particular. I will call out the tenant label restrictions and executor affinity features as items that seemed to come up quite a bit from various users for various use cases so I think the importance of these features may have gone up after the PTG. One Zuul topic that we couldn't quite get into due to illness and last minute travel changes was Zuul support for containers. As I understand it there was some pre PTG talk about this, but we should probably try to have a proper discussion on the mailing list once people are back to normal operating hours. > > For infra services we talked about upgrading Gerrit, better multi arch support, and rolling out Bionic. The rough plan (details on etherpad) for the Gerrit upgrade is to update the operating system for Java 8 this cycle then early next cycle upgrade Gerrit to 2.14 or 2.15 depending on which process is simpler for us (we will have to test that between now and then). Multi arch support is now something we have to think about with the arm64 Linaro cloud roll out. We seem to have largely decided that things are moving and largely working and we'll tackle problems as they come up. We also have Bionic beta images available. The one restriction we'd like to see there is to not gate on them but projects can (and in some cases should) go ahead and start using them to determine future compatibility particularly with python3.6. > > One last major item that came up was how the OpenStack Foundation's CI/CD focus area will affect the infra team. This was a topic at the board meeting in Dublin which I had to miss due to helping run the helproom, but there were discussions with members of the infra team later in the week. The most concrete result of that seemed to be a shared understanding of three facets to this focus area: 1) The how to and best practices of doing CI/CD properly and effectively 2) Zuul and related software as a set of tools to enable (1) and 3) the current set of services run by the infra team which may be useful to developers outside of OpenStack. We've been promised a proper thread of its own from the foundation to start this conversation more broadly with the infra team soon so keep an eye out for that. > > I've probably forgotten/skipped/missed other topics that are important and worth calling out. It was a long week and I'm now jetlagged so you have my apologies. Feel free to respond to this thread or make a new one to call things out or fill in details. It was great seeing everyone, hope you all made it home ok, and I'm excited we now have this large list of TODOs to get done over the next cycle :) > > Thanks, > Clark > Thanks for the great summary Clark. Glad you made it home safely. Anita From cboylan at sapwetik.org Wed Mar 7 20:38:48 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 07 Mar 2018 12:38:48 -0800 Subject: [OpenStack-Infra] Old infra specs In-Reply-To: <1515717021.1634844.1232582320.6F915C25@webmail.messagingengine.com> References: <1515717021.1634844.1232582320.6F915C25@webmail.messagingengine.com> Message-ID: <1520455128.1982086.1295217024.5F2C070C@webmail.messagingengine.com> On Thu, Jan 11, 2018, at 4:30 PM, Clark Boylan wrote: > Hello, > > Recently Fungi removed old Jenkins' votes in Gerrit which had the effect > of bubbling up older infra-specs to the top of the review list. This > prompted me to start looking through the list. 
So far I have abandoned > one spec, https://review.openstack.org/#/c/163637/, as the Zuul v3 spec > and implementation made it redundant. > > There are three other specs that I think we may be able to abandon for > various reasons but they aren't as clear cut so want your feedback. > > 1. Tracking priority efforts with yaml, > https://review.openstack.org/#/c/219372/. I'd like to abandon this one > as we are attempting to use storyboard boards for this type of work > tracking. We aren't using a board yet for our priority efforts but I > think we could easily add a lane to > https://storyboard.openstack.org/#!/board/54 to track that work. > > 2. Any bugtracker support in reviewstats, > https://review.openstack.org/#/c/172886/. Russellb wrote reviewstats and > doesn't seem to think this is necessary. Basically its easy enough to > modify reviewstats to grok bug trackers other than launchpad. We also > seem to have far less emphasis on stats tracking via this tool now so > super low priority? > > 3. Infra hosted survey tool, https://review.openstack.org/#/c/349831/. > We seem to be far less survey crazy recently compared to when this spec > was proposed. Granted that may be due to lack of infra hosted survey > tooling. Do we think this is still a service we want to run? and if so > would the community get benefit from it? > > Let me know what you think. Also, this list isn't comprehensive, I > expect there will be more of these emails as I dig into the specs > proposals more. I've gone ahead and abandoned 1). Item 2) was abandoned by its author. Rather than abandon 3) I have rebased it and updated it with a potential alternative to consider (reuse ethercalc basically) since there has been some renewed interest in having survey tooling available to us that is infra hosted. Now that I have had a chance to start going through these expect more updates on various specs as I manage to work through them and update those that need updates and abandon those that are no longer applicable. If you have a spec up that needs rebasing feel free to push that up too :) Thanks, Clark From xinliang.liu at linaro.org Fri Mar 9 02:01:42 2018 From: xinliang.liu at linaro.org (Xinliang Liu) Date: Fri, 9 Mar 2018 10:01:42 +0800 Subject: [OpenStack-Infra] arm64 first kolla gate jobs integrating (experimental) Message-ID: Hi , Thanks to Ian's great work, zuul can launch arm64 ubuntu-xenial-arm64 nodes. Now it's time to add arm64 jobs, which we've been dreaming for some times. We pick kolla as the project to add first gate jobs as experimental. Maybe building gate jobs is the first ones to added then others. But we are not that familiar with the whole process. Ian and Jeffrey could you help us on this? Like which infra repos[1] should be touched? jobs definitions should be upstream to which repo/project? Does we now use zuul v3? And when adding the jobs definitions how to debug or insure it running status? Other things need to be considered/done? [1] https://github.com/openstack-infra Thanks, Xinliang From zhang.lei.fly at gmail.com Fri Mar 9 03:16:29 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Fri, 9 Mar 2018 11:16:29 +0800 Subject: [OpenStack-Infra] arm64 first kolla gate jobs integrating (experimental) In-Reply-To: References: Message-ID: ​Hi xinliang, Kolla is already migrated to zuul v3 and using in-project zuul jobs. 
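To give an idea of the shape, an in-repo definition for an experimental arm64 job could look roughly like the sketch below. This is an illustration only, not a tested change: the parent job name and the project stanza are assumptions to check against kolla's existing .zuul.d files, and the label is assumed to match the ubuntu-xenial-arm64 nodes mentioned above.

# Hypothetical .zuul.d/arm64.yaml in openstack/kolla (sketch only).
# "kolla-build-ubuntu-source" is an assumed parent job name; verify it
# against the real job list before proposing anything like this.
- job:
    name: kolla-build-ubuntu-source-arm64
    parent: kolla-build-ubuntu-source
    voting: false
    nodeset:
      nodes:
        - name: primary
          label: ubuntu-xenial-arm64

- project:
    experimental:
      jobs:
        - kolla-build-ubuntu-source-arm64

Keeping the job in the experimental pipeline means it only runs when someone leaves a "check experimental" comment on a review, which keeps arm64 capacity usage low while the jobs are still being proven.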
I found arm64 provider already added into zuul v3 nodepool by - https://review.openstack.org/#/c/549293/ - https://review.openstack.org/#/c/546027/ So what's left is to add jobs to the kolla & kolla-ansible zuul definitions to trigger the build & deploy jobs on arm64 nodes. You can refer to the ubuntu jobs implementation[0] [0] https://github.com/openstack/kolla/blob/48dab9ac50a9e5b3f48d93b0e8a9a0cfe3c35804/.zuul.d/ubuntu.yaml
On Fri, Mar 9, 2018 at 10:01 AM, Xinliang Liu wrote: > Hi , > > Thanks to Ian's great work, zuul can launch arm64 ubuntu-xenial-arm64 > nodes. > > Now it's time to add arm64 jobs, which we've been dreaming for some times. > We pick kolla as the project to add first gate jobs as experimental. > Maybe building gate jobs is the first ones to added then others. > > But we are not that familiar with the whole process. > Ian and Jeffrey could you help us on this? > Like which infra repos[1] should be touched? jobs definitions should > be upstream to which repo/project? > Does we now use zuul v3? > And when adding the jobs definitions how to debug or insure it running > status? > Other things need to be considered/done? > > [1] https://github.com/openstack-infra > > Thanks, > Xinliang > -- Regards, Jeffrey Zhang Blog: http://xcodest.me
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From xinliang.liu at linaro.org Fri Mar 9 03:56:39 2018 From: xinliang.liu at linaro.org (Xinliang Liu) Date: Fri, 9 Mar 2018 11:56:39 +0800 Subject: [OpenStack-Infra] arm64 first kolla gate jobs integrating (experimental) In-Reply-To: References: Message-ID:
On 9 March 2018 at 11:16, Jeffrey Zhang wrote: > Hi xinliang, > > Kolla is already migrated to zuul v3 and using in-project zuul jobs. > > I found arm64 provider already added into zuul v3 nodepool by > - https://review.openstack.org/#/c/549293/ > - https://review.openstack.org/#/c/546027/ > > So what's left is to add jobs to the kolla & kolla-ansible zuul definitions > to trigger the build & deploy jobs on arm64 nodes. You can refer to the > ubuntu jobs implementation[0] Thanks very much, will look into this first and investigate how to add kolla building job. Xinliang > > [0] > https://github.com/openstack/kolla/blob/48dab9ac50a9e5b3f48d93b0e8a9a0cfe3c35804/.zuul.d/ubuntu.yaml > > > > On Fri, Mar 9, 2018 at 10:01 AM, Xinliang Liu > wrote: >> >> Hi , >> >> Thanks to Ian's great work, zuul can launch arm64 ubuntu-xenial-arm64 >> nodes. >> >> Now it's time to add arm64 jobs, which we've been dreaming for some times. >> We pick kolla as the project to add first gate jobs as experimental. >> Maybe building gate jobs is the first ones to added then others. >> >> But we are not that familiar with the whole process. >> Ian and Jeffrey could you help us on this? >> Like which infra repos[1] should be touched? jobs definitions should >> be upstream to which repo/project? >> Does we now use zuul v3? >> And when adding the jobs definitions how to debug or insure it running >> status? >> Other things need to be considered/done?
>> >> [1] https://github.com/openstack-infra >> >> Thanks, >> Xinliang > > > > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me From mgagne at calavera.ca Tue Mar 13 16:02:10 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Tue, 13 Mar 2018 12:02:10 -0400 Subject: [OpenStack-Infra] extending python-jenkins-core group In-Reply-To: <5CD43122-6ED5-4160-B1AF-0270D1CFC41A@redhat.com> References: <5CD43122-6ED5-4160-B1AF-0270D1CFC41A@redhat.com> Message-ID: Hi, On Tue, Feb 27, 2018 at 3:33 PM, Sorin Sbarnea wrote: > Hi! > > I would like to propose extending the list of people with commit access to > python-jenkins because that repository needs more attention. > > As you know this is a key dependency of jenkins-job-builder and sometimes we > need to fix bugs (or implement features) in the library. > > https://review.openstack.org/#/admin/groups/322,members > > Is seems that the current list of members is not long enough as even few > trivial reviews were ignored for long time. > I have removed myself from the group. Staying in the group would be a disservice to the community as people would expect me to review changes which I'm no longer doing. Thanks -- Mathieu From hashar at free.fr Wed Mar 14 08:40:16 2018 From: hashar at free.fr (Antoine Musso) Date: Wed, 14 Mar 2018 09:40:16 +0100 Subject: [OpenStack-Infra] extending python-jenkins-core group In-Reply-To: <5CD43122-6ED5-4160-B1AF-0270D1CFC41A@redhat.com> References: <5CD43122-6ED5-4160-B1AF-0270D1CFC41A@redhat.com> Message-ID: On 27/02/2018 21:33, Sorin Sbarnea wrote: > Hi! > > I would like to propose extending the list of people with commit access > to python-jenkins because that repository needs more attention. > > As you know this is a key dependency of jenkins-job-builder and > sometimes we need to fix bugs (or implement features) in the library. > > https://review.openstack.org/#/admin/groups/322,members > > Is seems that the current list of members is not long enough as even few > trivial reviews were ignored for long time. > > I think that adding few others should give the project a boost, all of > them already core committers on jenkins-job-builder-core: > * Thanh Ha > * Sorin Sbarnea (nominating myself, bit lame...) > * Wayne Warren > (picked based on who performed reviews recently) > > An alternative would be to add the entire jenkins-job-builder-core group > as member of python-jenkins one. > > Please let me know what you think about this proposal. Hello, python-jenkins used to be on Launchpad and I had it added to OpenStack to benefit Jenkins Job Builder. I haven't done much reviews for at least a year if not two and I guess that is the same for most people in the group. I guess once we had our basic use cases solved the interest has vanished. I am definitely in favor of having all JJB core reviewers to be added to python-jenkins. Specially the three nominated who are the de facto maintainers of jjb. jjb: https://review.openstack.org/#/admin/groups/194 https://review.openstack.org/#/admin/groups/321 (release) python-jenkins: https://review.openstack.org/#/admin/groups/322 https://review.openstack.org/#/admin/groups/323 (release) python-jenkins-release only has Khai Do and he is no more involved in the OpenStack Infrastructure as far as I know. Since both projects are tightly related and require knowledge of Jenkins, maybe it would make sense to merge the groups? 
cheers, -- Antoine "hashar" Musso From zaro0508 at gmail.com Wed Mar 14 16:25:20 2018 From: zaro0508 at gmail.com (Zaro) Date: Wed, 14 Mar 2018 09:25:20 -0700 Subject: [OpenStack-Infra] extending python-jenkins-core group In-Reply-To: References: <5CD43122-6ED5-4160-B1AF-0270D1CFC41A@redhat.com> Message-ID: +1 for Antoine's suggestion. +1 to have either Than Ha or Darragh Bailey for python-jenkins-release On Wed, Mar 14, 2018 at 1:40 AM, Antoine Musso wrote: > On 27/02/2018 21:33, Sorin Sbarnea wrote: > > Hi! > > > > I would like to propose extending the list of people with commit access > > to python-jenkins because that repository needs more attention. > > > > As you know this is a key dependency of jenkins-job-builder and > > sometimes we need to fix bugs (or implement features) in the library. > > > > https://review.openstack.org/#/admin/groups/322,members > > > > Is seems that the current list of members is not long enough as even few > > trivial reviews were ignored for long time. > > > > I think that adding few others should give the project a boost, all of > > them already core committers on jenkins-job-builder-core: > > * Thanh Ha > > * Sorin Sbarnea (nominating myself, bit lame...) > > * Wayne Warren > > (picked based on who performed reviews recently) > > > > An alternative would be to add the entire jenkins-job-builder-core group > > as member of python-jenkins one. > > > > Please let me know what you think about this proposal. > > Hello, > > python-jenkins used to be on Launchpad and I had it added to OpenStack > to benefit Jenkins Job Builder. I haven't done much reviews for at > least a year if not two and I guess that is the same for most people in > the group. I guess once we had our basic use cases solved the interest > has vanished. > > > I am definitely in favor of having all JJB core reviewers to be added to > python-jenkins. Specially the three nominated who are the de facto > maintainers of jjb. > > jjb: > https://review.openstack.org/#/admin/groups/194 > https://review.openstack.org/#/admin/groups/321 (release) > > python-jenkins: > https://review.openstack.org/#/admin/groups/322 > https://review.openstack.org/#/admin/groups/323 (release) > > python-jenkins-release only has Khai Do and he is no more involved in > the OpenStack Infrastructure as far as I know. > > Since both projects are tightly related and require knowledge of > Jenkins, maybe it would make sense to merge the groups? > > cheers, > > -- > Antoine "hashar" Musso > > > > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Mar 14 23:15:07 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 14 Mar 2018 16:15:07 -0700 Subject: [OpenStack-Infra] extending python-jenkins-core group In-Reply-To: <5CD43122-6ED5-4160-B1AF-0270D1CFC41A@redhat.com> References: <5CD43122-6ED5-4160-B1AF-0270D1CFC41A@redhat.com> Message-ID: <1521069307.2167490.1303528184.1BA49C59@webmail.messagingengine.com> On Tue, Feb 27, 2018, at 1:33 PM, Sorin Sbarnea wrote: > Hi! > > I would like to propose extending the list of people with commit access > to python-jenkins because that repository needs more attention. > > As you know this is a key dependency of jenkins-job-builder and > sometimes we need to fix bugs (or implement features) in the library. 
> > https://review.openstack.org/#/admin/groups/322,members > > > Is seems that the current list of members is not long enough as even few > trivial reviews were ignored for long time. > > I think that adding few others should give the project a boost, all of > them already core committers on jenkins-job-builder-core: > * Thanh Ha > * Sorin Sbarnea (nominating myself, bit lame...) > * Wayne Warren > (picked based on who performed reviews recently) > > An alternative would be to add the entire jenkins-job-builder-core group > as member of python-jenkins one. > > Please let me know what you think about this proposal. Fungi has pointed out that python-jenkins isn't an official Infra (or even OpenStack) project. This means my input here is mostly as an outside observer and should not be treated as special in any way. I think it would be a great idea to expand the core membership particularly if these individuals are interested in maintaining the project. My recollection was that we imported the project into Gerrit in the first place because it had gone stale on launchpad. The initial people involved in that were James Page, Jim Blair, and Khai Do. You already have input from one of the three, but maybe check up with the other two and if they give you the go ahead call it good and update the group? Hope this helps, Clark From corvus at inaugust.com Thu Mar 15 21:57:05 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 15 Mar 2018 14:57:05 -0700 Subject: [OpenStack-Infra] Zuul project evolution Message-ID: <87sh91gfym.fsf@meyer.lemoncheese.net> Hi, To date, Zuul has (perhaps rightly) often been seen as an OpenStack-specific tool. That's only natural since we created it explicitly to solve problems we were having in scaling the testing of OpenStack. Nevertheless, it is useful far beyond OpenStack, and even before v3, it has found adopters elsewhere. Though as we talk to more people about adopting it, it is becoming clear that the less experience they have with OpenStack, the more likely they are to perceive that Zuul isn't made for them. At the same time, the OpenStack Foundation has identified a number of strategic focus areas related to open infrastructure in which to invest. CI/CD is one of these. The OpenStack project infrastructure team, the Zuul team, and the Foundation staff recently discussed these issues and we feel that establishing Zuul as its own top-level project with the support of the Foundation would benefit everyone. It's too early in the process for me to say what all the implications are, but here are some things I feel confident about: * The folks supporting the Zuul running for OpenStack will continue to do so. We love OpenStack and it's just way too fun running the world's most amazing public CI system to do anything else. * Zuul will be independently promoted as a CI/CD tool. We are establishing our own website and mailing lists to facilitate interacting with folks who aren't otherwise interested in OpenStack. You can expect to hear more about this over the coming months. * We will remain just as open as we have been -- the "four opens" are intrinsic to what we do. As a first step in this process, I have proposed a change[1] to remove Zuul from the list of official OpenStack projects. If you have any questions, please don't hesitate to discuss them here, or privately contact me or the Foundation staff. 
-Jim [1] https://review.openstack.org/552637 From doug at doughellmann.com Thu Mar 15 23:11:57 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 15 Mar 2018 19:11:57 -0400 Subject: [OpenStack-Infra] Zuul project evolution In-Reply-To: <87sh91gfym.fsf@meyer.lemoncheese.net> References: <87sh91gfym.fsf@meyer.lemoncheese.net> Message-ID: <1521155464-sup-187@lrrr.local> Excerpts from corvus's message of 2018-03-15 14:57:05 -0700: > Hi, > > To date, Zuul has (perhaps rightly) often been seen as an > OpenStack-specific tool. That's only natural since we created it > explicitly to solve problems we were having in scaling the testing of > OpenStack. Nevertheless, it is useful far beyond OpenStack, and even > before v3, it has found adopters elsewhere. Though as we talk to more > people about adopting it, it is becoming clear that the less experience > they have with OpenStack, the more likely they are to perceive that Zuul > isn't made for them. > > At the same time, the OpenStack Foundation has identified a number of > strategic focus areas related to open infrastructure in which to invest. > CI/CD is one of these. The OpenStack project infrastructure team, the > Zuul team, and the Foundation staff recently discussed these issues and > we feel that establishing Zuul as its own top-level project with the > support of the Foundation would benefit everyone. > > It's too early in the process for me to say what all the implications > are, but here are some things I feel confident about: > > * The folks supporting the Zuul running for OpenStack will continue to > do so. We love OpenStack and it's just way too fun running the > world's most amazing public CI system to do anything else. > > * Zuul will be independently promoted as a CI/CD tool. We are > establishing our own website and mailing lists to facilitate > interacting with folks who aren't otherwise interested in OpenStack. > You can expect to hear more about this over the coming months. > > * We will remain just as open as we have been -- the "four opens" are > intrinsic to what we do. > > As a first step in this process, I have proposed a change[1] to remove > Zuul from the list of official OpenStack projects. If you have any > questions, please don't hesitate to discuss them here, or privately > contact me or the Foundation staff. > > -Jim > > [1] https://review.openstack.org/552637 > Thanks for posting this, Jim. I look forward to watching (and participating in) the evolution of Zuul through this change! Doug From harlowja at fastmail.com Fri Mar 16 21:23:45 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Fri, 16 Mar 2018 14:23:45 -0700 Subject: [OpenStack-Infra] [openstack-dev] Zuul project evolution In-Reply-To: <87sh91gfym.fsf@meyer.lemoncheese.net> References: <87sh91gfym.fsf@meyer.lemoncheese.net> Message-ID: <5AAC35E1.80002@fastmail.com> Awesome! Might IMHO be useful to also start doing this with other projects. James E. Blair wrote: > Hi, > > To date, Zuul has (perhaps rightly) often been seen as an > OpenStack-specific tool. That's only natural since we created it > explicitly to solve problems we were having in scaling the testing of > OpenStack. Nevertheless, it is useful far beyond OpenStack, and even > before v3, it has found adopters elsewhere. Though as we talk to more > people about adopting it, it is becoming clear that the less experience > they have with OpenStack, the more likely they are to perceive that Zuul > isn't made for them. 
> > At the same time, the OpenStack Foundation has identified a number of > strategic focus areas related to open infrastructure in which to invest. > CI/CD is one of these. The OpenStack project infrastructure team, the > Zuul team, and the Foundation staff recently discussed these issues and > we feel that establishing Zuul as its own top-level project with the > support of the Foundation would benefit everyone. > > It's too early in the process for me to say what all the implications > are, but here are some things I feel confident about: > > * The folks supporting the Zuul running for OpenStack will continue to > do so. We love OpenStack and it's just way too fun running the > world's most amazing public CI system to do anything else. > > * Zuul will be independently promoted as a CI/CD tool. We are > establishing our own website and mailing lists to facilitate > interacting with folks who aren't otherwise interested in OpenStack. > You can expect to hear more about this over the coming months. > > * We will remain just as open as we have been -- the "four opens" are > intrinsic to what we do. > > As a first step in this process, I have proposed a change[1] to remove > Zuul from the list of official OpenStack projects. If you have any > questions, please don't hesitate to discuss them here, or privately > contact me or the Foundation staff. > > -Jim > > [1] https://review.openstack.org/552637 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Fri Mar 16 22:02:49 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Mar 2018 22:02:49 +0000 Subject: [OpenStack-Infra] [openstack-dev] Zuul project evolution In-Reply-To: <5AAC35E1.80002@fastmail.com> References: <87sh91gfym.fsf@meyer.lemoncheese.net> <5AAC35E1.80002@fastmail.com> Message-ID: <20180316220249.2gevqythibgxtqdk@yuggoth.org> On 2018-03-16 14:23:45 -0700 (-0700), Joshua Harlow wrote: > Awesome! > > Might IMHO be useful to also start doing this with other projects. [...] Don't be misled into thinking the first one is also the last, but we have to start somewhere. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From daragh.bailey at gmail.com Sat Mar 17 11:24:43 2018 From: daragh.bailey at gmail.com (Darragh Bailey) Date: Sat, 17 Mar 2018 11:24:43 +0000 Subject: [OpenStack-Infra] OpenStack and open infrastructure Message-ID: Hi, I'm looking to spin out a testing code used for git-upstream ( http://git.openstack.org/cgit/openstack/git-upstream) into a separate project as a fixture to make it easier to use else where for building tests for tools that use git. Currently lining up a few commits using OpenStack's Gerrit as a holding place until I've worked out where the new project should reside, https://review.openstack.org/#/c/551445/ being the main one, and I've some tests to write. Based on http://lists.openstack.org/pipermail/foundation/2017-November/002532.html (OpenStack and open infrastructure) I'm wondering if that means it can continue to live within the OpenStack infrastructure/tooling? Or is this something that is still under discussion as to what it means? 
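For context, the piece being split out is essentially a test fixture that hands a test a throwaway git repository to work against. A rough sketch of the idea using the Python fixtures library (illustrative only, not the actual git-upstream code; the class and helper names here are made up):

# Illustrative sketch -- assumes the "fixtures" library and a git binary
# are available on the test host.
import subprocess

import fixtures


class GitRepoFixture(fixtures.Fixture):
    """Create a disposable git repository for a single test."""

    def _setUp(self):
        self.path = self.useFixture(fixtures.TempDir()).path
        self.git('init')
        self.git('config', 'user.email', 'test@example.com')
        self.git('config', 'user.name', 'Test User')

    def git(self, *args):
        # Run a git command inside the temporary repository.
        return subprocess.check_output(('git',) + args, cwd=self.path)

    def add_commit(self, message='empty commit'):
        self.git('commit', '--allow-empty', '-m', message)
        return self.git('rev-parse', 'HEAD').strip()

A test for a git-handling tool can then call repo = self.useFixture(GitRepoFixture()) and operate on repo.path, with cleanup handled automatically when the test finishes.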
Created a change for this just in case this is a straight forward request and there was no need for this email: https://review.openstack.org/553978 -- Darragh Bailey "Nothing is foolproof to a sufficiently talented fool" -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Mar 19 18:31:12 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 19 Mar 2018 11:31:12 -0700 Subject: [OpenStack-Infra] OpenStack and open infrastructure In-Reply-To: References: Message-ID: <1521484272.2016986.1308595520.55F5A77B@webmail.messagingengine.com> On Sat, Mar 17, 2018, at 4:24 AM, Darragh Bailey wrote: > Hi, > > > I'm looking to spin out a testing code used for git-upstream ( > http://git.openstack.org/cgit/openstack/git-upstream) into a separate > project as a fixture to make it easier to use else where for building tests > for tools that use git. > > Currently lining up a few commits using OpenStack's Gerrit as a holding > place until I've worked out where the new project should reside, > https://review.openstack.org/#/c/551445/ being the main one, and I've some > tests to write. > > Based on > http://lists.openstack.org/pipermail/foundation/2017-November/002532.html > (OpenStack and open infrastructure) I'm wondering if that means it can > continue to live within the OpenStack infrastructure/tooling? Or is this > something that is still under discussion as to what it means? I think we can largely ignore what the foundation level changes imply for this and instead just rely on the existing unofficial project hosting that we provide. git-upstream itself is hosted as an unofficial project (formerly Stackforge) and I think we can just host fixtures-git the same way. Historical docs at https://docs.openstack.org/infra/system-config/stackforge.html with updates proposed at https://review.openstack.org/554312 to more accurately reflect the current state of things. More supporting info at https://governance.openstack.org/tc/resolutions/20160119-stackforge-retirement.html. > > Created a change for this just in case this is a straight forward request > and there was no need for this email: https://review.openstack.org/553978 Yup, it should be this straightforward. Though AJaeger has at least one review item that will need to be addressed. Hope this helps, Clark From berndbausch at gmail.com Wed Mar 21 07:54:06 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Wed, 21 Mar 2018 16:54:06 +0900 Subject: [OpenStack-Infra] Zuul: Question about Github webhook Message-ID: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> Sorry for this newbie intrusion on the distribution list. I am trying to follow the Zuul from Scratch instructions[1] to set up Zuul at home. The purpose is twofold: Learn about Zuul, and check whether Zuul from Scratch instructions are accurate and make sense. Currently I am at a roadblock. The problem is the Github webhook, through which Github sends data to Zuul. It should be http:///connection/github/payload [2], but on the Zuul server that I have deployed so far, nothing listens at port 80. Thus, Github gets a "connection refused" when trying to contact my server. I used tcpdump to confirm that my server does indeed receive an http POST from Github. Either something is wrong with my setup, which prevents Zuul from listening at 80. Or the webhook isn't documented correctly. How can I confirm what could be the problem? 
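For reference, a couple of checks that should narrow this down (a sketch assuming the service name and config path used by the from-scratch guide; adjust for the actual layout):

# Is anything bound to port 80 or 9000, and which process owns it?
sudo ss -tlnp | grep -E ':(80|9000)\s'

# Is the zuul-web service actually running?
systemctl status zuul-web

# Which port is configured? The listener is set in the [web] section of
# /etc/zuul/zuul.conf and falls back to its default (9000) when unset.
grep -A3 '^\[web\]' /etc/zuul/zuul.conf

If zuul-web turns out to be listening on a different port than the one the GitHub webhook posts to, that mismatch alone produces exactly this kind of connection refused error.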
Bernd [1] https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html [2] https://docs.openstack.org/infra/zuul/admin/drivers/github.html, https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html#configure -github -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5518 bytes Desc: not available URL: From Tobias.Henkel at bmw.de Wed Mar 21 08:26:16 2018 From: Tobias.Henkel at bmw.de (Tobias.Henkel at bmw.de) Date: Wed, 21 Mar 2018 08:26:16 +0000 Subject: [OpenStack-Infra] Zuul: Question about Github webhook In-Reply-To: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> References: <007d01d3c0e9$cb7b6150$627223f0$@gmail.com> Message-ID: <2F465CF8-E46C-4A77-8491-0C7391A4D9D2@bmw.de> Hi Bernd, On 21.03.18, 08:54, "Bernd Bausch" wrote: Sorry for this newbie intrusion on the distribution list. I am trying to follow the Zuul from Scratch instructions[1] to set up Zuul at home. The purpose is twofold: Learn about Zuul, and check whether Zuul from Scratch instructions are accurate and make sense. Currently I am at a roadblock. The problem is the Github webhook, through which Github sends data to Zuul. It should be http:///connection/github/payload [2], but on the Zuul server that I have deployed so far, nothing listens at port 80. Thus, Github gets a "connection refused" when trying to contact my server. I used tcpdump to confirm that my server does indeed receive an http POST from Github. Either something is wrong with my setup, which prevents Zuul from listening at 80. Or the webhook isn't documented correctly. How can I confirm what could be the problem? Bernd [1] https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html [2] https://docs.openstack.org/infra/zuul/admin/drivers/github.html, https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html#configure -github It looks like you discovered a bug in the zuul-from-scratch docs. Zuul-web by default listens on port 9000. While the documented webhook config in github assumes port 80. So you need to either change the webhook to use 9000 or change the zuul config to use port 80 [1]. Further I pushed up a change to fix the doc [2]. Sidenote: we now have zuul specific mailing lists [3] so you might want to also subscribe to them. [1] https://docs.openstack.org/infra/zuul/admin/components.html#attr-web.port [2] https://review.openstack.org/554829 [3] http://lists.zuul-ci.org Kind regards Tobias From dmsimard at redhat.com Sun Mar 25 01:28:19 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Sat, 24 Mar 2018 21:28:19 -0400 Subject: [OpenStack-Infra] Public numbers about the scale of the infrastructure/CI ? Message-ID: Hi -infra, I'll be presenting a talk at a local OpenStack meetup next week [1] that will highlight some examples about how people can help and contribute to the infrastructure project. The talk will be recorded and should hopefully serve as a form of informal documentation. I'd like to disclose some semi-official numbers (as I'd personally pull them up) to let people have an idea of the scale our contributors are maintaining. I suppose this data is already somewhat public if you know where to look but I don't think it's been written down in a digestable format in recent history. 
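As one example of where to look, the hosted-project count can be pulled straight out of Gerrit (a rough sketch; assumes an SSH account on review.openstack.org, and the number will differ a little from git.openstack.org depending on how retired repos are counted):

# Count the repositories Gerrit knows about; <username> is a placeholder.
ssh -p 29418 <username>@review.openstack.org gerrit ls-projects | wc -l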
Unless there's any objection, I'd have a slide with up to date numbers such as: - # of projects hosted (as per git.openstack.org) - # of servers (in aggregate of all our regions) -- (Maybe some big highlights like the size of logstash, logs.o.o, Zuul) - Nodepool capacity (number of clouds, aggregate capacity) - # of jobs and Ansible playbooks per month ran by Zuul - Approximate number of maintained and hosted services (irc, gerritbot, meetbot, gerrit, git, mailing lists, wiki, ask.openstack, storyboard, codesearch, etc.) - Probably some high level numbers from Stackalytics - Maybe something else I haven't thought about The idea of the talk is not to brag about all the stuff we're doing but rather, "hey, you don't need to be a pro in OpenStack to contribute, we got all these different things you can help with". I realize it's a bit last minute but please let me know if you see anything wrong with this ! [1]: https://www.meetup.com/Montreal-OpenStack/events/248344351/ David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From berndbausch at gmail.com Mon Mar 26 06:54:54 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Mon, 26 Mar 2018 15:54:54 +0900 Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure Message-ID: <002e01d3c4cf$59e21720$0da64560$@gmail.com> At least two places in the Infra documentation talk about setting up an infrastructure at home: - https://docs.openstack.org/infra/system-config/sysadmin.html#making-a-change -in-puppet - https://docs.openstack.org/infra/system-config/running-your-own.html and https://docs.openstack.org/infra/system-config/puppet.html. Yes, it's clearly marked as partially outdated. I am trying to do that, with the purpose of learning and (perhaps, one day...) later contributing to the project. My setup attempts fail in both cases - the reason being that no hiera is set up. For example, the first documentation page asks me to - clone system-config - set up a local.pp, for example for an Etherpad server - run install_puppet.sh and install_modules.sh - apply the local.pp I get: > 2018-03-26 15:34:45 +0900 Puppet (err): Could not find data item etherpad_ssl_cert_file_contents in any Hiera data file and no default supplied at /opt/system-config/local.pp:3 on node puppetmaster.home. The second set of instructions also fails because of the absence of a hiera. I find no file on the system where etherpad_ssl_cert_file_contents is defined. So my questions are: - am I supposed to create the hiera myself? If so, is there any help what should go inside, and where is its normal location? - or is one of the two install_* scripts supposed to do that but fails for some reason? None of them contains the string "hiera". - or is there something else I don't get? My confusion may be caused by the fact that I started learning Puppet a few days ago. Also, the Puppet version I have learned so far turns out to have a different hiera syntax than the one used in openstack-infra. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5518 bytes Desc: not available URL: From corvus at inaugust.com Mon Mar 26 14:20:57 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 26 Mar 2018 07:20:57 -0700 Subject: [OpenStack-Infra] Public numbers about the scale of the infrastructure/CI ? 
In-Reply-To: (David Moreau Simard's message of "Sat, 24 Mar 2018 21:28:19 -0400") References: Message-ID: <871sg6q5o6.fsf@meyer.lemoncheese.net> David Moreau Simard writes: > Unless there's any objection, I'd have a slide with up to date numbers such as: I don't have any objection to making them public (I believe nearly all, if not all, of these are public already). But I would like them to be as accurate as possible :). > - # of projects hosted (as per git.openstack.org) > - # of servers (in aggregate of all our regions) > -- (Maybe some big highlights like the size of logstash, logs.o.o, Zuul) > - Nodepool capacity (number of clouds, aggregate capacity) > - # of jobs and Ansible playbooks per month ran by Zuul I'm curious about this one -- how were you planning on defining these values and obtaining them? > - Approximate number of maintained and hosted services (irc, > gerritbot, meetbot, gerrit, git, mailing lists, wiki, ask.openstack, > storyboard, codesearch, etc.) > - Probably some high level numbers from Stackalytics > - Maybe something else I haven't thought about -Jim From dmsimard at redhat.com Mon Mar 26 16:03:31 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Mon, 26 Mar 2018 12:03:31 -0400 Subject: [OpenStack-Infra] Public numbers about the scale of the infrastructure/CI ? In-Reply-To: <871sg6q5o6.fsf@meyer.lemoncheese.net> References: <871sg6q5o6.fsf@meyer.lemoncheese.net> Message-ID: On Mon, Mar 26, 2018 at 10:20 AM, James E. Blair wrote: >> - # of jobs and Ansible playbooks per month ran by Zuul > > I'm curious about this one -- how were you planning on defining these > values and obtaining them? > I've needed to pull statistics out of Zuul in the past for RDO (i.e, justifying budget for CI resources) and I use the sql reporter data to do it. It looks like this: $range = "'2018-02-01 00:00:00' AND '2018-02-28 23:59:59'" SELECT job_name, result, start_time, end_time, TIMEDIFF(end_time, start_time) as duration FROM zuul_build WHERE start_time BETWEEN $range This gets me the amount of monthly *jobs* and I can extrapolate (over N playbooks..) by estimating a number knowing that: - base and post playbooks are fairly consistently X playbooks - there is at least one "run" playbook So pretending that 1000 jobs ran, I can say something like: 1000 jobs and over [1000 * (X+1)] playbooks It's not a perfect number but we know we run more playbooks than that. What I have also been thinking about is, if I want to get a more accurate number, I could do a sum of all the executor playbook results (which are in graphite) but the history for those don't go too far back. 
Ex: stats.zuul.executor.ze*_openstack_org.phase.*.* David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From cboylan at sapwetik.org Mon Mar 26 18:11:38 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 26 Mar 2018 11:11:38 -0700 Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure In-Reply-To: <002e01d3c4cf$59e21720$0da64560$@gmail.com> References: <002e01d3c4cf$59e21720$0da64560$@gmail.com> Message-ID: <1522087898.2381285.1316647112.11583426@webmail.messagingengine.com> On Sun, Mar 25, 2018, at 11:54 PM, Bernd Bausch wrote: > At least two places in the Infra documentation talk about setting up an > infrastructure at home: > - > https://docs.openstack.org/infra/system-config/sysadmin.html#making-a-change > -in-puppet > - https://docs.openstack.org/infra/system-config/running-your-own.html and > https://docs.openstack.org/infra/system-config/puppet.html. Yes, it's > clearly marked as partially outdated. > > I am trying to do that, with the purpose of learning and (perhaps, one > day...) later contributing to the project. My setup attempts fail in both > cases - the reason being that no hiera is set up. > > For example, the first documentation page asks me to > - clone system-config > - set up a local.pp, for example for an Etherpad server > - run install_puppet.sh and install_modules.sh > - apply the local.pp > > I get: > > 2018-03-26 15:34:45 +0900 Puppet (err): Could not find data item > etherpad_ssl_cert_file_contents in any Hiera data file and no default > supplied at /opt/system-config/local.pp:3 on node puppetmaster.home. Can you share the contents of your local.pp? Generally though hiera is used for anything that will be secret or very site specific. So in this case the expectation is that you will set up a hiera file with the info specific for your deployment (because you shouldn't have the ssl cert private data for our deployment and we shouldn't have yours). This is likely a missing set of info for our docs. We should add something with general hiera setup to get people going. With that out of the way, this specific case will actually avoid writing the file if you set the contents to be empty string, ''. In that case you can just set ssl_cert_file and ssl_key_file to use the system installed self signed snakeoil certificate. An example of doing this can be found in the config for our dev instance, https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/etherpad_dev.pp#n12 > > The second set of instructions also fails because of the absence of a hiera. > I find no file on the system where etherpad_ssl_cert_file_contents is > defined. > > So my questions are: > - am I supposed to create the hiera myself? If so, is there any help what > should go inside, and where is its normal location? > - or is one of the two install_* scripts supposed to do that but fails for > some reason? None of them contains the string "hiera". > - or is there something else I don't get? Yes, you'll need to supply it yourself, or remove hiera entirely and supply the values as linked above, essentially just bake the variable values into your local.pp rather than doing a hiera lookup. Unfortunately I don't remember off the top of my head how to set up a hiera so I will have to dig into docs (or maybe someone else can chime in with that info). > > My confusion may be caused by the fact that I started learning Puppet a few > days ago. 
Also, the Puppet version I have learned so far turns out to have a > different hiera syntax than the one used in openstack-infra. Yes, it is worth noting that we are still running old currently distro supported puppet versions (puppet 3 in particular). Most of our manifests should work as is though with newer puppet though and I don't think this particular issue is puppet version related. Hope this helps, Clark From corvus at inaugust.com Mon Mar 26 20:30:11 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 26 Mar 2018 13:30:11 -0700 Subject: [OpenStack-Infra] Public numbers about the scale of the infrastructure/CI ? In-Reply-To: (David Moreau Simard's message of "Mon, 26 Mar 2018 12:03:31 -0400") References: <871sg6q5o6.fsf@meyer.lemoncheese.net> Message-ID: <877epymvfw.fsf@meyer.lemoncheese.net> David Moreau Simard writes: > On Mon, Mar 26, 2018 at 10:20 AM, James E. Blair wrote: >>> - # of jobs and Ansible playbooks per month ran by Zuul >> >> I'm curious about this one -- how were you planning on defining these >> values and obtaining them? >> > > I've needed to pull statistics out of Zuul in the past for RDO (i.e, > justifying budget for CI resources) > and I use the sql reporter data to do it. > It looks like this: > > $range = "'2018-02-01 00:00:00' AND '2018-02-28 23:59:59'" > SELECT job_name, > result, > start_time, > end_time, > TIMEDIFF(end_time, start_time) as duration > FROM zuul_build > WHERE > start_time BETWEEN $range > > This gets me the amount of monthly *jobs* and I can extrapolate (over > N playbooks..) > by estimating a number knowing that: > - base and post playbooks are fairly consistently X playbooks > - there is at least one "run" playbook > > So pretending that 1000 jobs ran, I can say something like: > 1000 jobs and over [1000 * (X+1)] playbooks > > It's not a perfect number but we know we run more playbooks than that. > > What I have also been thinking about is, if I want to get a more > accurate number, I could do a sum of all the executor playbook results > (which are in graphite) but the history for those don't go too far > back. > Ex: stats.zuul.executor.ze*_openstack_org.phase.*.* The SQL query gets the number of completed jobs which are *reported*. It doesn't get you two other numbers, which are the jobs *launched* (many of which may have been aborted before completion), or the jobs *completed* (the results of many of which may have been discarded due to changes in the environment). In reality, the system is likely to be significantly busier than the number of jobs reported will indicate. Both of the other values can be obtained from graphite or by parsing logs. I think for this purpose, graphite might be sufficient. (The only time I'd recommend going to logs is when we need to find project-specific resource usage information.) stats_counts.zuul.executor.*.builds should be all jobs launched. stats_counts.zuul.tenant.*.pipeline.*.all_jobs should be all jobs completed. -Jim From dmsimard at redhat.com Mon Mar 26 20:32:24 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Mon, 26 Mar 2018 16:32:24 -0400 Subject: [OpenStack-Infra] Public numbers about the scale of the infrastructure/CI ? In-Reply-To: <877epymvfw.fsf@meyer.lemoncheese.net> References: <871sg6q5o6.fsf@meyer.lemoncheese.net> <877epymvfw.fsf@meyer.lemoncheese.net> Message-ID: Good point. I'll work with that instead. David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Mon, Mar 26, 2018 at 4:30 PM, James E. 
Blair wrote: > David Moreau Simard writes: > >> On Mon, Mar 26, 2018 at 10:20 AM, James E. Blair wrote: >>>> - # of jobs and Ansible playbooks per month ran by Zuul >>> >>> I'm curious about this one -- how were you planning on defining these >>> values and obtaining them? >>> >> >> I've needed to pull statistics out of Zuul in the past for RDO (i.e, >> justifying budget for CI resources) >> and I use the sql reporter data to do it. >> It looks like this: >> >> $range = "'2018-02-01 00:00:00' AND '2018-02-28 23:59:59'" >> SELECT job_name, >> result, >> start_time, >> end_time, >> TIMEDIFF(end_time, start_time) as duration >> FROM zuul_build >> WHERE >> start_time BETWEEN $range >> >> This gets me the amount of monthly *jobs* and I can extrapolate (over >> N playbooks..) >> by estimating a number knowing that: >> - base and post playbooks are fairly consistently X playbooks >> - there is at least one "run" playbook >> >> So pretending that 1000 jobs ran, I can say something like: >> 1000 jobs and over [1000 * (X+1)] playbooks >> >> It's not a perfect number but we know we run more playbooks than that. >> >> What I have also been thinking about is, if I want to get a more >> accurate number, I could do a sum of all the executor playbook results >> (which are in graphite) but the history for those don't go too far >> back. >> Ex: stats.zuul.executor.ze*_openstack_org.phase.*.* > > The SQL query gets the number of completed jobs which are *reported*. > It doesn't get you two other numbers, which are the jobs *launched* > (many of which may have been aborted before completion), or the jobs > *completed* (the results of many of which may have been discarded due to > changes in the environment). In reality, the system is likely to be > significantly busier than the number of jobs reported will indicate. > > Both of the other values can be obtained from graphite or by parsing > logs. I think for this purpose, graphite might be sufficient. (The > only time I'd recommend going to logs is when we need to find > project-specific resource usage information.) > > stats_counts.zuul.executor.*.builds should be all jobs launched. > stats_counts.zuul.tenant.*.pipeline.*.all_jobs should be all jobs completed. > > -Jim From tony at bakeyournoodle.com Mon Mar 26 21:56:09 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 27 Mar 2018 08:56:09 +1100 Subject: [OpenStack-Infra] [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol In-Reply-To: References: <20180314212003.GC25428@thor.bakeyournoodle.com> <20180315011132.GF25428@thor.bakeyournoodle.com> Message-ID: <20180326215608.GC13389@thor.bakeyournoodle.com> Hi folks, Can we ask someone from infra to do this, or add me to bootstrappers to do it myself? On Thu, Mar 15, 2018 at 10:57:58AM +0000, Jean-Philippe Evrard wrote: > Looks good to me. > > On 15 March 2018 at 01:11, Tony Breeds wrote: > > On Wed, Mar 14, 2018 at 09:40:33PM +0000, Jean-Philippe Evrard wrote: > >> Hello folks, > >> > >> The list is almost perfect: you can do all of those except > >> openstack/openstack-ansible-tests. > >> I'd like to phase out openstack/openstack-ansible-tests and > >> openstack/openstack-ansible later. 
> > > > Okay excluding the 2 repos above and filtering out projects that don't > > have newton branches we came down to: > > > > # EOL repos belonging to OpenStackAnsible > > eol_branch.sh -- stable/newton newton-eol \ > > openstack/ansible-hardening \ > > openstack/openstack-ansible-apt_package_pinning \ > > openstack/openstack-ansible-ceph_client \ > > openstack/openstack-ansible-galera_client \ > > openstack/openstack-ansible-galera_server \ > > openstack/openstack-ansible-haproxy_server \ > > openstack/openstack-ansible-lxc_container_create \ > > openstack/openstack-ansible-lxc_hosts \ > > openstack/openstack-ansible-memcached_server \ > > openstack/openstack-ansible-openstack_hosts \ > > openstack/openstack-ansible-openstack_openrc \ > > openstack/openstack-ansible-ops \ > > openstack/openstack-ansible-os_aodh \ > > openstack/openstack-ansible-os_ceilometer \ > > openstack/openstack-ansible-os_cinder \ > > openstack/openstack-ansible-os_glance \ > > openstack/openstack-ansible-os_gnocchi \ > > openstack/openstack-ansible-os_heat \ > > openstack/openstack-ansible-os_horizon \ > > openstack/openstack-ansible-os_ironic \ > > openstack/openstack-ansible-os_keystone \ > > openstack/openstack-ansible-os_magnum \ > > openstack/openstack-ansible-os_neutron \ > > openstack/openstack-ansible-os_nova \ > > openstack/openstack-ansible-os_rally \ > > openstack/openstack-ansible-os_sahara \ > > openstack/openstack-ansible-os_swift \ > > openstack/openstack-ansible-os_tempest \ > > openstack/openstack-ansible-pip_install \ > > openstack/openstack-ansible-plugins \ > > openstack/openstack-ansible-rabbitmq_server \ > > openstack/openstack-ansible-repo_build \ > > openstack/openstack-ansible-repo_server \ > > openstack/openstack-ansible-rsyslog_client \ > > openstack/openstack-ansible-rsyslog_server \ > > openstack/openstack-ansible-security > > > > If you confirm I have the list right this time I'll work on this tomorrow > > > > Yours Tony. > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From pabelanger at redhat.com Mon Mar 26 22:19:10 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 26 Mar 2018 18:19:10 -0400 Subject: [OpenStack-Infra] [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol In-Reply-To: <20180326215608.GC13389@thor.bakeyournoodle.com> References: <20180314212003.GC25428@thor.bakeyournoodle.com> <20180315011132.GF25428@thor.bakeyournoodle.com> <20180326215608.GC13389@thor.bakeyournoodle.com> Message-ID: <20180326221910.GC13234@localhost.localdomain> On Tue, Mar 27, 2018 at 08:56:09AM +1100, Tony Breeds wrote: > Hi folks, > Can we ask someone from infra to do this, or add me to bootstrappers > to do it myself? > Give that we did this last time, I don't see why we can't add you to boostrappers again. Will confirm. 
-Paul > On Thu, Mar 15, 2018 at 10:57:58AM +0000, Jean-Philippe Evrard wrote: > > Looks good to me. > > > > On 15 March 2018 at 01:11, Tony Breeds wrote: > > > On Wed, Mar 14, 2018 at 09:40:33PM +0000, Jean-Philippe Evrard wrote: > > >> Hello folks, > > >> > > >> The list is almost perfect: you can do all of those except > > >> openstack/openstack-ansible-tests. > > >> I'd like to phase out openstack/openstack-ansible-tests and > > >> openstack/openstack-ansible later. > > > > > > Okay excluding the 2 repos above and filtering out projects that don't > > > have newton branches we came down to: > > > > > > # EOL repos belonging to OpenStackAnsible > > > eol_branch.sh -- stable/newton newton-eol \ > > > openstack/ansible-hardening \ > > > openstack/openstack-ansible-apt_package_pinning \ > > > openstack/openstack-ansible-ceph_client \ > > > openstack/openstack-ansible-galera_client \ > > > openstack/openstack-ansible-galera_server \ > > > openstack/openstack-ansible-haproxy_server \ > > > openstack/openstack-ansible-lxc_container_create \ > > > openstack/openstack-ansible-lxc_hosts \ > > > openstack/openstack-ansible-memcached_server \ > > > openstack/openstack-ansible-openstack_hosts \ > > > openstack/openstack-ansible-openstack_openrc \ > > > openstack/openstack-ansible-ops \ > > > openstack/openstack-ansible-os_aodh \ > > > openstack/openstack-ansible-os_ceilometer \ > > > openstack/openstack-ansible-os_cinder \ > > > openstack/openstack-ansible-os_glance \ > > > openstack/openstack-ansible-os_gnocchi \ > > > openstack/openstack-ansible-os_heat \ > > > openstack/openstack-ansible-os_horizon \ > > > openstack/openstack-ansible-os_ironic \ > > > openstack/openstack-ansible-os_keystone \ > > > openstack/openstack-ansible-os_magnum \ > > > openstack/openstack-ansible-os_neutron \ > > > openstack/openstack-ansible-os_nova \ > > > openstack/openstack-ansible-os_rally \ > > > openstack/openstack-ansible-os_sahara \ > > > openstack/openstack-ansible-os_swift \ > > > openstack/openstack-ansible-os_tempest \ > > > openstack/openstack-ansible-pip_install \ > > > openstack/openstack-ansible-plugins \ > > > openstack/openstack-ansible-rabbitmq_server \ > > > openstack/openstack-ansible-repo_build \ > > > openstack/openstack-ansible-repo_server \ > > > openstack/openstack-ansible-rsyslog_client \ > > > openstack/openstack-ansible-rsyslog_server \ > > > openstack/openstack-ansible-security > > > > > > If you confirm I have the list right this time I'll work on this tomorrow > > > > > > Yours Tony. > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Yours Tony. 
> _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra From tony at bakeyournoodle.com Mon Mar 26 22:25:51 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 27 Mar 2018 09:25:51 +1100 Subject: [OpenStack-Infra] Adding new etcd binaries to tarballs.o.o Message-ID: <20180326222550.GD13389@thor.bakeyournoodle.com> Hi All, Dredging up the past a little here ... In Denver James, Monty, Paul, Clark and I talked about the fact that we have an unoffical mirror for etcd binaries on tarballs.o.o[1]. We all agreed that this was ... sub-optimal. So I took ownership of getting etcd 3.2 (for all architectures) into 18.04 which has happened [2,3]. 3rd party CI systems running on non x86 CPUs have been manually patching the etcd version to get devstack working. Can we please add the appropriate files for the 3.3.2 (or 3.2.17) release of etcd added to tarballs.o.o I realise that 18.04 is just around the corner but doing this now gives us scope to land [4] soon and consider stable branches etc while we transition to bionic images and then dismantle the devstack infrastructure for consuming these tarballs Yours Tony. [1] http://tarballs.openstack.org/etcd/ [2] https://packages.ubuntu.com/bionic/etcd-server [3] https://packages.ubuntu.com/bionic/etcd-client [4] https://review.openstack.org/#/c/554977/1 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From iwienand at redhat.com Mon Mar 26 23:22:26 2018 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 27 Mar 2018 10:22:26 +1100 Subject: [OpenStack-Infra] Adding new etcd binaries to tarballs.o.o In-Reply-To: <20180326222550.GD13389@thor.bakeyournoodle.com> References: <20180326222550.GD13389@thor.bakeyournoodle.com> Message-ID: <1b95b1e2-22c8-a824-c1ba-2c9ee64c62d5@redhat.com> On 03/27/2018 09:25 AM, Tony Breeds wrote: > Can we please add the appropriate files for the 3.3.2 (or 3.2.17) > release of etcd added to tarballs.o.o ISTR that we had problems even getting them from there during runs, so moved to caching this. I had to check, this isn't well documented ... The nodepool element caching code [1] should be getting the image name from [2], which gets the URL for the tarball via the environment variables in stackrc. dib then stuffs that tarball into /opt/cache on all our images. In the running devstack code, we use get_extra_file [3] which should look for the tarball in the on-disk cache, or otherwise download it [4]. Ergo, I'm pretty sure these files on tarballs.o.o are unused. Bumping the version in devstack should "just work" -- it will download directly until the next day's builds come online with the file cached. > I realise that 18.04 is just around the corner but doing this now gives > us scope to land [4] soon and consider stable branches etc while we > transition to bionic images and then dismantle the devstack > infrastructure for consuming these tarballs > [4] https://review.openstack.org/#/c/554977/1 I think we can discuss this in that review, but it seems likely from our discussions in IRC that 3.2 will be the best choice here. It is in bionic & fedora; so we can shortcut all of this and install from packages there. 
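To make the cache-or-download flow Ian describes above a little more concrete, it boils down to roughly the following sketch; the function name and paths here are simplified assumptions for illustration only, the real logic lives in the devstack links in the footnotes that follow.
~~~~
# Illustrative sketch only -- not the actual devstack get_extra_file;
# the cache path and fallback behaviour are assumptions based on the
# description above.
function fetch_cached_or_download {
    local url=$1
    local fname
    fname=$(basename "$url")
    if [ -f "/opt/cache/files/$fname" ]; then
        # dib baked the tarball into the image at build time
        echo "/opt/cache/files/$fname"
    else
        # no cached copy yet (e.g. the day a version is bumped),
        # so fall back to downloading it directly
        wget -q -O "/tmp/$fname" "$url"
        echo "/tmp/$fname"
    fi
}
~~~~
Called with the etcd tarball URL from stackrc, something like this returns a local path either way, which is why bumping the version keeps working while the image caches catch up.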
-i [1] https://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/cache-devstack/extra-data.d/55-cache-devstack-repos#n84 [2] https://git.openstack.org/cgit/openstack-dev/devstack/tree/tools/image_list.sh#n50 [3] https://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3#n101 [4] https://git.openstack.org/cgit/openstack-dev/devstack/tree/functions#n59 From tony at bakeyournoodle.com Mon Mar 26 23:39:35 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 27 Mar 2018 10:39:35 +1100 Subject: [OpenStack-Infra] Adding new etcd binaries to tarballs.o.o In-Reply-To: <1b95b1e2-22c8-a824-c1ba-2c9ee64c62d5@redhat.com> References: <20180326222550.GD13389@thor.bakeyournoodle.com> <1b95b1e2-22c8-a824-c1ba-2c9ee64c62d5@redhat.com> Message-ID: <20180326233934.GE13389@thor.bakeyournoodle.com> On Tue, Mar 27, 2018 at 10:22:26AM +1100, Ian Wienand wrote: > On 03/27/2018 09:25 AM, Tony Breeds wrote: > > Can we please add the appropriate files for the 3.3.2 (or 3.2.17) > > release of etcd added to tarballs.o.o > > ISTR that we had problems even getting them from there during runs, so > moved to caching this. I had to check, this isn't well documented ... > > The nodepool element caching code [1] should be getting the image name > from [2], which gets the URL for the tarball via the environment > variables in stackrc. dib then stuffs that tarball into /opt/cache on > all our images. > > In the running devstack code, we use get_extra_file [3] which should > look for the tarball in the on-disk cache, or otherwise download it > [4]. > > Ergo, I'm pretty sure these files on tarballs.o.o are unused. Bumping > the version in devstack should "just work" -- it will download > directly until the next day's builds come online with the file cached. Except something sets ETCD_DOWNLOAD_URL to tarballs.o.o See: http://logs.openstack.org/77/554977/1/check/devstack/4cc8483/controller/logs/_.localrc_auto.txt.gz and http://logs.openstack.org/77/554977/1/check/devstack/4cc8483/controller/logs/devstacklog.txt.gz#_2018-03-21_16_52_00_998 So we have a egg<->chicken problem don't we and we still need/want the data on tarballs.o.o even if it's seldom used > I think we can discuss this in that review, but it seems likely from > our discussions in IRC that 3.2 will be the best choice here. It is > in bionic & fedora; so we can shortcut all of this and install from > packages there. Yup we can discuss that on the review. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Mon Mar 26 23:42:57 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 27 Mar 2018 10:42:57 +1100 Subject: [OpenStack-Infra] [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol In-Reply-To: <20180326221910.GC13234@localhost.localdomain> References: <20180314212003.GC25428@thor.bakeyournoodle.com> <20180315011132.GF25428@thor.bakeyournoodle.com> <20180326215608.GC13389@thor.bakeyournoodle.com> <20180326221910.GC13234@localhost.localdomain> Message-ID: <20180326234256.GF13389@thor.bakeyournoodle.com> On Mon, Mar 26, 2018 at 06:19:10PM -0400, Paul Belanger wrote: > On Tue, Mar 27, 2018 at 08:56:09AM +1100, Tony Breeds wrote: > > Hi folks, > > Can we ask someone from infra to do this, or add me to bootstrappers > > to do it myself? > > > Give that we did this last time, I don't see why we can't add you to > boostrappers again. > > Will confirm. 
Thanks. The only small problem with that is I need to stay away from review.o.o while I'm in the bootstrappers group "so many new buttons .. must not click them" :D Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From ram at rachum.com Tue Mar 27 11:16:32 2018 From: ram at rachum.com (Ram Rachum) Date: Tue, 27 Mar 2018 14:16:32 +0300 Subject: [OpenStack-Infra] git review -d without check out Message-ID: Hi, Is there a way to do `git review -d` without having it do a checkout? i.e. I just want to have these commits in my Git database so I could cherrypick them on some other branch. We've got tons of submodules so checking out often creates problems. I tried `git fetch gerrit my_commit_hash:temporary_branch_name` but that's insanely slow for some reason. Thanks, Ram. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Mar 27 14:04:51 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Mar 2018 14:04:51 +0000 Subject: [OpenStack-Infra] Adding new etcd binaries to tarballs.o.o In-Reply-To: <20180326233934.GE13389@thor.bakeyournoodle.com> References: <20180326222550.GD13389@thor.bakeyournoodle.com> <1b95b1e2-22c8-a824-c1ba-2c9ee64c62d5@redhat.com> <20180326233934.GE13389@thor.bakeyournoodle.com> Message-ID: <20180327140451.3z5rgaw55sibtpkr@yuggoth.org> On 2018-03-27 10:39:35 +1100 (+1100), Tony Breeds wrote: [...] > Except something sets ETCD_DOWNLOAD_URL to tarballs.o.o [...] I would be remiss if I failed to remind people that the *manually* installed etcd release there was supposed to be a one-time stop-gap, and we were promised it would be followed shortly with some sort of job which made updating it not-manual. 
We're coming up on a year and > it looks like people have given in and manually added newer etcd > releases at least once since. If this file were important to > testing, I'd have expected someone to find time to take care of it > so that we don't have to. If that effort has been abandoned by the > people who originally convinced us to implement this "temporary" > workaround, we should remove it until it can be supported properly. > -- > Jeremy Stanley I have to agree with fungi here, I know I raised the point at last PTG in a meeting about removing this. This only makes it harder for operators to run etcd in production, if not packaged propelry. Which, it seems is part of the original issue as 3rd party CI is manually patching things. > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra From iwienand at redhat.com Tue Mar 27 21:22:53 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 28 Mar 2018 08:22:53 +1100 Subject: [OpenStack-Infra] Adding new etcd binaries to tarballs.o.o In-Reply-To: <20180327140451.3z5rgaw55sibtpkr@yuggoth.org> References: <20180326222550.GD13389@thor.bakeyournoodle.com> <1b95b1e2-22c8-a824-c1ba-2c9ee64c62d5@redhat.com> <20180326233934.GE13389@thor.bakeyournoodle.com> <20180327140451.3z5rgaw55sibtpkr@yuggoth.org> Message-ID: On 03/28/2018 01:04 AM, Jeremy Stanley wrote: > I would be remiss if I failed to remind people that the *manually* > installed etcd release there was supposed to be a one-time stop-gap, > and we were promised it would be followed shortly with some sort of > job which made updating it not-manual. We're coming up on a year and > it looks like people have given in and manually added newer etcd > releases at least once since. If this file were important to > testing, I'd have expected someone to find time to take care of it > so that we don't have to. If that effort has been abandoned by the > people who originally convinced us to implement this "temporary" > workaround, we should remove it until it can be supported properly. In reality we did fix it, as described with the use-from-cache-or-download changes in the prior mail. I even just realised I submitted and forgot about [1] which never got reviewed to remove the tarballs.o.o pointer -- that setting then got copied into the new devstack zuulv3 jobs [2]. Anyway, we got there in the end :) I'll add to my todo list to clear them from tarballs.o.o once this settles out. -i [1] https://review.openstack.org/#/c/508022/ [2] https://review.openstack.org/#/c/554977/ From fungi at yuggoth.org Tue Mar 27 21:29:25 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Mar 2018 21:29:25 +0000 Subject: [OpenStack-Infra] Adding new etcd binaries to tarballs.o.o In-Reply-To: References: <20180326222550.GD13389@thor.bakeyournoodle.com> <1b95b1e2-22c8-a824-c1ba-2c9ee64c62d5@redhat.com> <20180326233934.GE13389@thor.bakeyournoodle.com> <20180327140451.3z5rgaw55sibtpkr@yuggoth.org> Message-ID: <20180327212925.btbgpotbq54ngyn4@yuggoth.org> On 2018-03-28 08:22:53 +1100 (+1100), Ian Wienand wrote: [...] > Anyway, we got there in the end :) I'll add to my todo list to clear > them from tarballs.o.o once this settles out. [...] Thanks! Excellent news indeed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From berndbausch at gmail.com Wed Mar 28 00:12:04 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Wed, 28 Mar 2018 09:12:04 +0900 Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure In-Reply-To: References: <002e01d3c4cf$59e21720$0da64560$@gmail.com> <1522087898.2381285.1316647112.11583426@webmail.messagingengine.com> Message-ID: <014701d3c629$68299230$387cb690$@gmail.com> Resending this message because it was too large for the distribution list. ------- Clark, My first test uses this local.pp. It's copied verbatim from [1]: ~~~~ # local.pp class { 'openstack_project::etherpad': ssl_cert_file_contents => hiera('etherpad_ssl_cert_file_contents'), ssl_key_file_contents => hiera('etherpad_ssl_key_file_contents'), ssl_chain_file_contents => hiera('etherpad_ssl_chain_file_contents'), mysql_host => hiera('etherpad_db_host', 'localhost'), mysql_user => hiera('etherpad_db_user', 'etherpad'), mysql_password => hiera('etherpad_db_password','etherpad'), } ~~~~ The commands I run are also verbatim from the same page: ~~~~ # ./install_puppet.sh # ./install_modules.sh # puppet apply -l /tmp/manifest.log --modulepath=modules:/etc/puppet/modules manifests/local.pp ~~~~ My second test closely follows [2]. Here, I take the puppetmaster's original site.pp, adapt the domain "openstack.org" to my domain at home and remove all node definitions except puppetmaster and etherpad. My file is at the end of this message[4]. The commands: ~~~~ # ./install_puppet.sh # ./install_modules.sh # vi site.pp # see [4] # puppet apply --modulepath='/opt/system-config/production/modules:/etc/puppet/modules' -e 'include openstack_project::puppetmaster' ~~~~ > Generally though hiera is used for anything that will be secret or very site > specific. So in this case the expectation is that you will set up a hiera > file with the info specific for your deployment (because you shouldn't have > the ssl cert private data for our deployment and we shouldn't have yours). > This is likely a missing set of info for our docs. We should add something > with general hiera setup to get people going. Yes. The documentation seems to treat the hiera as a given; it just exists, and there doesn't seem to be any information about its content or even whether it's really required. Once I know the issues and technology better (steep learning curve), I'd be happy to write documentation from the perspective of a newbie. For now, let me do more testing with hardcoded values rather than hiera. I certainly learn a lot doing this. > Unfortunately I don't remember off the top of my head how to set up a hiera > so I will have to dig into docs (or maybe someone else can chime in with > that info). In principle, I can do that (for Puppet 4 at least), but the question is what goes into the OpenStack CI production hiera. I see a directory /opt/system-config/production/hiera [3] - is that it? It doesn't contain anything about Etherpad, though. I also did a codesearch for "etherpad_ssl_cert_file_contents", no result (except for the site.pp). Thanks much, Clark! 
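For anyone else stuck at the same point, a minimal private hiera along the lines Clark suggests might look roughly like the sketch below; the hierarchy, datadir, and placeholder values are assumptions for a puppet 3 / hiera 1.x style setup, not the infra team's actual configuration.
~~~~
# /etc/puppet/hiera.yaml  (hiera 1.x / puppet 3 style -- an assumption)
---
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - "%{::environment}/%{::fqdn}"
  - "%{::environment}/common"
  - common

# /etc/puppet/hieradata/production/common.yaml
# Placeholder values; the cert contents can be real PEM data or the
# empty string trick described earlier in the thread.
etherpad_ssl_cert_file_contents: ''
etherpad_ssl_key_file_contents: ''
etherpad_ssl_chain_file_contents: ''
etherpad_db_host: localhost
etherpad_db_user: etherpad
etherpad_db_password: change-me
~~~~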
Bernd --- [1] https://docs.openstack.org/infra/system-config/sysadmin.html#making-a-change-in-puppet [2] https://docs.openstack.org/infra/system-config/puppet.html [3] https://git.openstack.org/cgit/openstack-infra/system-config/tree/hiera [4] My site.pp: ~~~~ # # Top-level variables # # There must not be any whitespace between this comment and the variables or # in between any two variables in order for them to be correctly parsed and # passed around in test.sh # $elasticsearch_nodes = hiera_array('elasticsearch_nodes') # # Default: should at least behave like an openstack server # node default { class { 'openstack_project::server': sysadmins => hiera('sysadmins', []), } } # Node-OS: trusty # (I try this with Centos 7 first) node 'puppetmaster.home' { class { 'openstack_project::server': iptables_public_tcp_ports => [8140], sysadmins => hiera('sysadmins', []), pin_puppet => '3.6.', } class { 'openstack_project::puppetmaster': root_rsa_key => hiera('puppetmaster_root_rsa_key'), puppetmaster_clouds => hiera('puppetmaster_clouds'), enable_mqtt => true, mqtt_password => hiera('mqtt_service_user_password'), mqtt_ca_cert_contents => hiera('mosquitto_tls_ca_file'), } file { '/etc/openstack/infracloud_vanilla_cacert.pem': ensure => present, owner => 'root', group => 'root', mode => '0444', content => hiera('infracloud_vanilla_ssl_cert_file_contents'), require => Class['::openstack_project::puppetmaster'], } file { '/etc/openstack/infracloud_chocolate_cacert.pem': ensure => present, owner => 'root', group => 'root', mode => '0444', content => hiera('infracloud_chocolate_ssl_cert_file_contents'), require => Class['::openstack_project::puppetmaster'], } file { '/etc/openstack/limestone_cacert.pem': ensure => present, owner => 'root', group => 'root', mode => '0444', content => hiera('limestone_ssl_cert_file_contents'), require => Class['::openstack_project::puppetmaster'], } } # Node-OS: trusty # Node-OS: xenial node /^etherpad\d*\.home$/ { class { 'openstack_project::server': iptables_public_tcp_ports => [22, 80, 443], sysadmins => hiera('sysadmins', []), } class { 'openstack_project::etherpad': ssl_cert_file_contents => hiera('etherpad_ssl_cert_file_contents'), ssl_key_file_contents => hiera('etherpad_ssl_key_file_contents'), ssl_chain_file_contents => hiera('etherpad_ssl_chain_file_contents'), mysql_host => hiera('etherpad_db_host', 'localhost'), mysql_user => hiera('etherpad_db_user', 'username'), mysql_password => hiera('etherpad_db_password'), } } # Node-OS: trusty # Node-OS: xenial node /^etherpad-dev\d*\.home$/ { class { 'openstack_project::server': iptables_public_tcp_ports => [22, 80, 443], sysadmins => hiera('sysadmins', []), } class { 'openstack_project::etherpad_dev': mysql_host => hiera('etherpad-dev_db_host', 'localhost'), mysql_user => hiera('etherpad-dev_db_user', 'username'), mysql_password => hiera('etherpad-dev_db_password'), } } ~~~~ _______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 5518 bytes Desc: not available URL: From iwienand at redhat.com Wed Mar 28 00:17:05 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 28 Mar 2018 11:17:05 +1100 Subject: [OpenStack-Infra] Options for logstash of ansible tasks Message-ID: I wanted to query for a failing ansible task; specifically what would appear in the console log as 2018-03-27 15:07:49.294630 | 2018-03-27 15:07:49.295143 | TASK [configure-unbound : Check for IPv6] 2018-03-27 15:07:49.368062 | primary | skipping: Conditional result was False 2018-03-27 15:07:49.400755 | While I can do message:"configure-unbound : Check for IPv6" I want to correlate that with a result, looking also for the matching skipping: Conditional result was False as the result of the task. AFAICT, there is no way in kibana to enforce a match on consecutive lines like this (as it has no concept they are consecutive). I considered a few things. We could conceivably group everything between "TASK" and a blank " | " into a single entry with a multiline filter. It was pointed out that this would make, for example, the entire devstack log as a single entry, however. The closest other thing I could find was "aggregate" [1]; but this relies on having a unique task-id to group things together with. Ansible doesn't give us this in the logs and AFAIK doesn't have a concept of a uuid for tasks. So I'm at a bit of a loss as to how we could effectively index ansible tasks so we can determine the intermediate values or results of individual tasks? Any ideas? -i [1] https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html From corvus at inaugust.com Wed Mar 28 00:30:51 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 27 Mar 2018 17:30:51 -0700 Subject: [OpenStack-Infra] Options for logstash of ansible tasks In-Reply-To: (Ian Wienand's message of "Wed, 28 Mar 2018 11:17:05 +1100") References: Message-ID: <87k1tx59dw.fsf@meyer.lemoncheese.net> Ian Wienand writes: > The closest other thing I could find was "aggregate" [1]; but this > relies on having a unique task-id to group things together with. > Ansible doesn't give us this in the logs and AFAIK doesn't have a > concept of a uuid for tasks. We control the log output format in Zuul (both job-output.txt and job-output.json). So we could include a unique ID for tasks if we wished. However, we should not put that on every line, so that still would require some handling in the log processor. As soon as I say that, it makes me think that the solution to this really should be in the log processor. Whether it's a grok filter, or just us parsing the lines looking for task start/stop -- that's where we can associate the extra data with every line from a task. We can even generate a uuid right there in the log processor. -Jim From tony at bakeyournoodle.com Wed Mar 28 03:21:15 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 28 Mar 2018 14:21:15 +1100 Subject: [OpenStack-Infra] git review -d without check out In-Reply-To: References: Message-ID: <20180328032114.GH13389@thor.bakeyournoodle.com> On Tue, Mar 27, 2018 at 02:16:32PM +0300, Ram Rachum wrote: > Hi, > > Is there a way to do `git review -d` without having it do a checkout? i.e. > I just want to have these commits in my Git database so I could cherrypick > them on some other branch. We've got tons of submodules so checking out > often creates problems. 
> > I tried `git fetch gerrit my_commit_hash:temporary_branch_name` but that's > insanely slow for some reason. Thanks more or less what I do. This is my ~bin/git-os-change --- #!/usr/bin/env bash review=$1 revision=${2:-1} if [ -z "$review" ] ; then echo Need an OpenStack gerrit review number >&2 exit 1 fi ref=$(printf "refs/changes/%d/%d/%d" "${review: -2}" "${review}" "${revision}") git fetch gerrit "${ref}:${ref}" --- It's about the same speed as git review. --- [tony at thor ~]$ cd tmp /home/tony/tmp [tony at thor tmp]$ cp -a ~/projects/openstack/openstack-dev/devstack devstack-1 [tony at thor tmp]$ cp -a ~/projects/openstack/openstack-dev/devstack devstack-2 [tony at thor tmp]$ cd devstack-1 /home/tony/tmp/devstack-1 [tony at thor devstack-1]$ time git review -d 554977 Downloading refs/changes/77/554977/3 from gerrit Switched to branch "review/eric_berglund/etcd_version" real 0m3.892s user 0m0.508s sys 0m0.225s [tony at thor devstack-1]$ cd ../devstack-2 [tony at thor devstack-2]$ time git os-change 554977 3 remote: Counting objects: 7, done remote: Finding sources: 100% (4/4) remote: Total 4 (delta 3), reused 3 (delta 3) Unpacking objects: 100% (4/4), done. From ssh://review.openstack.org:29418/openstack-dev/devstack * [new ref] refs/changes/77/554977/3 -> refs/changes/77/554977/3 real 0m2.813s user 0m0.182s sys 0m0.152s --- Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From iwienand at redhat.com Wed Mar 28 06:57:11 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 28 Mar 2018 17:57:11 +1100 Subject: [OpenStack-Infra] Options for logstash of ansible tasks In-Reply-To: <87k1tx59dw.fsf@meyer.lemoncheese.net> References: <87k1tx59dw.fsf@meyer.lemoncheese.net> Message-ID: <7bafba5c-1492-d315-350a-be05a52c19f1@redhat.com> On 03/28/2018 11:30 AM, James E. Blair wrote: > As soon as I say that, it makes me think that the solution to this > really should be in the log processor. Whether it's a grok filter, or > just us parsing the lines looking for task start/stop -- that's where we > can associate the extra data with every line from a task. We can even > generate a uuid right there in the log processor. I'd agree the logstash level is probably where to do this. How to acheive that ... In trying to bootstrap myself on the internals of this, one thing I've found is that the multi-line filter [1] is deprecated for the multiline codec plugin [2]. We make extensive use of this deprecated filter [3]. It's not clear how we can go about migrating away from it? The input is coming in as "json_lines" as basically a json-dict -- with a tag that we then use different multi-line matches for. >From what I can tell, it seems like the work of dealing with multiple-lines has actually largley been put into filebeat [5] which is analagous to our logstash-workers (it feeds the files into logstash). Ergo, do we have to add multi-line support to the logstash-pipeline, so that events sent into logstash are already bundled together? 
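As a rough illustration of the log-processor approach Jim suggests, tagging every line of a task with a generated id could look something like the sketch below; the function and field names are made up for illustration and the real logstash worker code is structured differently.
~~~~
import re
import uuid

# Matches the task header lines in job-output.txt, e.g.
#   2018-03-27 15:07:49.295143 | TASK [configure-unbound : Check for IPv6]
TASK_RE = re.compile(r'\| +TASK \[(?P<name>.+)\]\s*$')

def annotate_task_lines(lines):
    # Yield one event dict per log line, carrying the current task's id
    # and name so downstream filters can group a task with its result.
    task_id = None
    task_name = None
    for line in lines:
        match = TASK_RE.search(line)
        if match:
            # A new task starts: mint an id for everything that follows
            task_id = str(uuid.uuid4())
            task_name = match.group('name')
        yield {
            'message': line,
            'ansible_task_id': task_id,
            'ansible_task_name': task_name,
        }
~~~~
With something along these lines in place, a kibana query could match on ansible_task_name and then filter on ansible_task_id to see a task's header and its "skipping: Conditional result was False" line together.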
-i [1] https://www.elastic.co/guide/en/logstash/2.4/plugins-filters-multiline.html [2] https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html [3] https://git.openstack.org/cgit/openstack-infra/logstash-filters/tree/filters/openstack-filters.conf [4] http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/templates/logstash/input.conf.erb [5] https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html From borne.mace at oracle.com Wed Mar 28 18:14:12 2018 From: borne.mace at oracle.com (Borne Mace) Date: Wed, 28 Mar 2018 11:14:12 -0700 Subject: [OpenStack-Infra] [kolla] kolla-cli master pointer change Message-ID: <40811eb6-b28b-dc50-1278-ede5e671e344@oracle.com> Hi All, I brought up my issue in #openstack-infra and it was suggested that I send an email to this list. The kolla-cli repository was recently created, from existing sources. There was an issue with the source repo where the master branch was sorely out of date, but there is tagged source which is up to date. My hope is that someone can force-push the tag as master so that the master branch can be fixed / updated. I tried to solve this process through the normal merge process, but since I was not the only committer to that repository gerrit refused to post my review. I will add the full output of that attempt at the end so folks can see what I'm talking about. If there is some other process that is more appropriate for me to follow here let me know and I'm happy to go through it. The latest / optimal code is tagged as o3l_4.0.1. Thanks much for your help! -- Borne Mace bmace at borg ~/devel/kolla-cli $ git review -R You are about to submit multiple commits. This is expected if you are submitting a commit that is dependent on one or more in-review commits. Otherwise you should consider squashing your changes into one commit before submitting. The outstanding commits are: 3cdd26e (HEAD -> master) Merge tag 'o3l_4.0.1' 7da1927 (tag: o3l_4.0.1) Updated version to 4.0.1 079cf5c Fix yaml issue with adding ellipsis after booleans c1cfa88 Fix property setting when current value is None 1fc7122 Tweak kollacli help da2e8b6 Fix kollacli dump output 3d47927 Support modification of non string properties c8a5730 Updated unit tests to avoid remote calls 08f050a Supported service filtering added 725c80a Re-worked the way service association is handled 2e70c5c Added support for pull command 99afdbc Updated inventory version for 4.x release f597625 chdir before ansible execution to avoid new ansible behavior 03db283 Hard bind cliff dependency and minor unicode fixes 2ee8db4 Add reconfigure CLI command d44f712 Fix precheck command 69700b6 Updating the python api syntax 5281bdc Updated version from 3.0.1 to 4.0.0 e4f52da Revert "Added support for a new reconfigure command" 3c4828e Added support for a new reconfigure command 5dc9de9 Modified per host commands to use ansible --limit f7166ed API support for the reconfigure action 3e5cc35 Added support for turning off locking with environment variable bc42e6b WIP: Added initial support for stop containers 0c56e57 Fixed bug causing destroy on multiple hosts to fail 7cbb157 Pass through unlimited -v args to ansible playbook a320450 Added support for ignoring certain ansible errors. 
4c70581 Fixed rpm spec after removal of the custom cli destroy playbooks 499aa1e Removed no longer relevant cli destroy playbooks 9772b1a Added support for removal of docker images during destroy 55da569 allow empty passwords to be set 31363f8 Updated destroy to use upstream destroy playbook bb16c06 support multiple parents of services in allinone file 3ad1c68 Allow empty groups and comments 28a9d93 Adjust preinstall entry to include a blank line. 8d87bea use cat command to add kolla_preinstall_version to properties file ea4ee6d Added support for upgrade and deploy per service d02b381 Update api docs 0a98fdb add new cli/api command to init passwords c0d104a add kolla_preinstall_version property in rpm spec 38d3120 clear out all /usr/share/kolla/kollacli files on uninstall 3f029a7 add ability to cli/api to add/clear ssh keys in password file 8858b0f move oracle-specific precheck to kollacli rpm b6b2018 add empty password check on deploy 894f8e7 remove docker-py requirement from kollacli ed7b025 unit test updates for mitaka changes e812696 use inventory_samples as name of all-in-one dir f371acd add ansible.cfg to kollacli spec b0f8494 in api, avoid adding keys if host is already setup 5524219 Changed initial inventory generation source path 6c5473d ensure dir for temp inventory is writable 011f885 Added deploy action needed for kolla update 510d5f7 Merge branch 'master' into o3l_3.next a963d25 change plugin license back to gpl3 016e9cb Added pep-484 entry to inventory file. 694b2db Added initial mypy / pep-484 type hinting support. 75550c3 Properly display non-ascii chars in table output 76aa503 allow property set of empty string 0bb430d Better version spec for docker-py 17ef5a5 add api comment to api about locality of api objects e06ce72 change 'kolla' to 'root' when asking for password in host setup acda25f fix string pointer bug in last checkin 8f50a8c fix string pointer bug in last checkin 459a76d make ansible plugin message display more reliable aa3b2fe update docs 5263b89 fix host & service associations in group object fc53a26 properly handle repeated params in group/service get 05e0161 add timeout option to deploy 0d17761 show error details on 'One or more items failed' f044f2b various upgrade fixes c5833aa Use inventory_samples for location of all-in-one file 8dba45e use all-in-one file to seed initial inventory 246ab1c use private ansible.cfg file 5f4f828 add ability to enable plugin debug via /tmp file ad7536d add version to api, misc fixes 5a6903a add warning on local deploy and avoid doc build errors on doc gen 984b190 Updated pipeling setting in line with bug 23282017 933fb64 update test to use kottos_home instead of kolla_home 4230879 move properties into zookeeper - phase 2 102ac39 move properties into zookeeper - phase 2 977af68 change order of chmod on create fifo 0b69fa5 update kollacli with latest kottos changes 81397af move callback plugin pipe down a level in /tmp 3fe4d52 add ability to set debug logging as an env variable 53dfd16 fix for fragmentation bug in callback plugin msg processing 10098e7 remove unneeded update of callback_plugins line in rpm spec ea1cc5a Updates to rpm build spec for callback_plugins configuration eccf5cc updates to rpm build spec of sequence of requirements for plugin and kollacli b91ac84 update cli rpm spec to require ansible plugin 6f6b517 Updated the kpolla-ansible requirement to 3.0.0 5e29f90 Remove no longer needed packages (six, paramiko) in cli v4 53eb65c Updated the RPM to 3.0 464b903 update rpm build spec for kollacli v4 a84bc67 fix 
discrepency of ansible lock location d0ad451 removal of blaze code from cli (for v4) 7b53814 fix to handle string checking in p2 & py3 7e68685 update docs e7e362c Api doc change to remove groups/services from deploy c43a1b4 Remove service / group deploy and modified host deploy to throw an error if a targeted host deploy is attempted against non-compute hosts. d680bb3 Updated docs to reflect latest api change for changing multiple properties in a single call. 614295d fix another instance where ansible error does not surface to the job error message f8b81d7 Fixed deploy lock message to remove pid / owner info no longer available using flock. e539a62 fixed false positive bandit complaints about yaml load 03e162c Properties are now only loaded on list calls, making set / clear more performant. 161d50b update docs c96c508 update build spec to remove etc and share, fix warnings 6338975 removed suprious unneeded debug log line 62ff3c5 Fixed issue when setting a property on multiple hosts / groups 7a1547a Added support for settings / clearing multiple properties in a single call. 89327d1 add kolla ansible plugin to cli build spec b31c030 Added support for sync on ansible playbook operations. Writes cannot occur when ansible operations are happening. 2a9497f add job kill to api 7b14d4f configure logging for api fed76d8 fix inventory bug in playbook 41eead3 Update api docs 9ef934e add api for log collecting c60b33a Replaced flock with NFS safe locking mechanism d3ecb14 cleanup of playbook runs bbf65a0 better error messages in playbook failures e3d0df6 improved logging of failed ansible playbooks 144a9f8 change group add/remove api to take list of groups 6f695e5 Updated api docs aef4fe7 Add more doc for async deploy 435a733 Added property API a7dfc5c add some unittests for group api 10f5f1e add some unittests for host api 40c32ee preserve inventory and vars during utests 289e281 Added code to check sub-service parent enablement to determine if a group needs hosts 5096708 add generic param checker for api 0eccaab add new unit test for client upgrade 88a13c7 Fix upgrade bug and sub-service parent/group issue 9ad9c13 create api for dump command fd61b3d rebuild & update the docs 4b229c6 finish up group api ee80b1b add get and get_all methods to group api 8d9e66a new password api, partial service api 1526794 Initial API Doc commit 8f84593 Added API calls for group add / remove with docs. 3eeb6ba Sphinx formatted docs for deploy mode set eb82eb5 second pass thru the new host api b883423 update host api & cli 5a1596f Added initial sphinx documentation support for the python api f593325 Update host destroy and precheck api to take list of hosts 1cdd533 Merge branch 'o3l_2.next' 1e58e92 create apis for deploy, upgrade, host_destroy, host_precheck 67ef234 WIP - make playbook runs async dfee416 update tox.ini to ignore apache license in callback eca0695 WIP - add callback pipe logic to client side a15db48 Better handling of unicode in log collector. 2ed4290 Changed way we handle global properties (from /etc/kolla/globals.yml to /usr/share/kolla/ansible/group_vars/__GLOBAL). 9298bd1 fixed bug in host related property listing when the list is empty 57c1acf Improved property output to include override data. b655bc6 WIP - add named pipe code for message sending to client e0c6079 WIP - add first draft of ansible callback f9db085 Fix property ordering to go all / globals / groups / hosts. 
(lowest to highest priority) 98822a7 create unique deploy id for deployments 5cb91ee initial HostApi class and moved host add / remove to use the api 4e9d9da fix getting logs on ovm servers 7174233 added support for host remove all b4fb6b3 fix minor help formatting issues 6703068 Added checking of groups with enabled services. Each must have a minimum of one host associated to them. e115ff6 fix issue when list property list --groups all 7f26c5e Added proper egg-info directory clean up to rpm update. 43173f5 disable retry_files_enabled in ansible.cfg 1447e67 disable retry_files_enabled in ansible.cfg ef0d46f fix for property set side effect issue a966412 Fixed property call that would match the wrong / extra lines 2d3899b fix docker ps issue in new destroy playbooks cad25ae add new --predeploy option to host check e0a4660 Improve host setup error reporting 069af18 Support host check all in cli bf9aaea comment out 3.x WIP code af385dd new ansible playbook + api code WIP b8c5434 fix getting logs on ovm servers 4dbf1c3 added host remove all test 2f586ac added support for host remove all 72c3983 fix minor help formatting issues 083e57d Added checking of groups with enabled services. Each must have a minimum of one host associated to them. 31d0402 fix issue when list property list --groups all 6cc807a Added proper egg-info directory clean up to rpm update. f4de282 disable retry_files_enabled in ansible.cfg b50b602 disable retry_files_enabled in ansible.cfg 4cdd43f fix for property set side effect issue 3b06fbd Merge branch 'o3l_2.next' of ssh://ca-git.us.oracle.com/openstack-kollacli into o3l_2.next 2a003dd fix docker ps issue in new destroy playbooks e2a4b88 Fixed property call that would match the wrong / extra lines 301c5d9 add new --predeploy option to host check ea7f091 Improve host setup error reporting 45b195f Support host check all in cli f2b7225 Merge branch 'o3l_2.next' e658a84 Fixed cli version to match rpm version. cae8cb3 Display errors if host destroy fails 3b0fd1a update copyrights to 2016 92d9037 update log_collector tool for 2.1.1 5ac3e29 specifically disallow pexpect version 3.3 e903e75 Added clean up of /var/lib/kolla to the includedata cleanup. 497988e Added support to execute upgrade from cli. 965f140 fix up inventory path when loading a v1 inventory file 192769d put ceilometer in groups on upgrade 505fc81 fix a few ceilometer container names d344809 add kollacli support for ceilometer 86ca539 Added calls into openstack-kolla precheck playbook during check command execution. 1dbc58d Added password confirmation to the password set command. c8122d1 fix argparse when groups and hosts not specified 4b2ceb8 some refactoring for new host/groups properties 84e59f8 add unit tests for new host/groups properties 2e44000 Fixed permission on new host_vars dir 72c2bc7 simplify safe_decode and remove unneeded safe_encode e444d95 remove oslo utils dependency from rpm spec ceff841 remove oslo utils dependency fa4b2fb Added support for group / host property editing bdeb9cd Display errors if host destroy fails 9f37bb7 update copyrights to 2016 c679272 update log_collector tool for 2.1.1 e698e1c specifically disallow pexpect version 3.3 29ecad0 Merge branch 'o3l_2.next' of ssh://ca-git.us.oracle.com/openstack-kollacli into o3l_2.next 3338478 Merge branch 'o3l_2.next' of ssh://ca-git.us.oracle.com/openstack-kollacli into o3l_2.next df43a51 fix up inventory path when loading a v1 inventory file 0ef1f25 Added clean up of /var/lib/kolla to the includedata cleanup. 
04e47b6 Added support to execute upgrade from cli. 223c1e2 put ceilometer in groups on upgrade 70bc61c fix a few ceilometer container names 3f6cd72 add kollacli support for ceilometer 98c98d1 Added calls into openstack-kolla precheck playbook during check command execution. f006136 Added password confirmation to the password set command. a8c36b0 fix argparse when groups and hosts not specified 7b1a88a some refactoring for new host/groups properties b0f1a2b add unit tests for new host/groups properties dfac32a Merge branch 'o3l_2.next' of ssh://ca-git.us.oracle.com/openstack-kollacli into o3l_2.next 1be74db simplify safe_decode and remove unneeded safe_encode efd6b47 Fixed permission on new host_vars dir 15ec75e remove oslo utils dependency from rpm spec 8d28135 remove oslo utils dependency e106853 Added support for group / host property editing 654fd18 Merge branch 'o3l_2.next' of git://ca-git.us.oracle.com/openstack-kollacli 5995838 Changed default property values shown and added --all and --long flags. 5ff4aaa Jira-Issue:OPENSTACK-545 e2b2972 Jira-Issue:OPENSTACK-547 6225da7 Jira-Issue:OPENSTACK-545 c466e2b Forward port of data preservation for host destroy 8f6dd90 host destroy will not not destroy data containers by default. --includedata option added to destroy to remove all containers. 72db18d move dump logic out of cli Do you really want to submit the above commits? Type 'yes' to confirm, other to cancel: yes remote: Processing changes: refs: 1, done remote: remote: ERROR: In commit 72db18d96781c41a617ca2a3332dc02cdeb37f97 remote: ERROR: committer email address steve.noyes at oracle.com remote: ERROR: does not match your user account. remote: ERROR: remote: ERROR: The following addresses are currently registered: remote: ERROR: borne.mace at gmail.com remote: ERROR: borne.mace at oracle.com remote: ERROR: remote: ERROR: To register an email address, please visit: remote: ERROR: https://review.openstack.org/#/settings/contact remote: remote: To ssh://bmace at review.openstack.org:29418/openstack/kolla-cli ! [remote rejected] HEAD -> refs/publish/master/bug/27219113 (invalid committer) error: failed to push some refs to 'ssh://bmace at review.openstack.org:29418/openstack/kolla-cli' From dmsimard at redhat.com Thu Mar 29 13:00:01 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Thu, 29 Mar 2018 09:00:01 -0400 Subject: [OpenStack-Infra] Public numbers about the scale of the infrastructure/CI ? In-Reply-To: References: Message-ID: The talk was this week and it's up on YouTube [1]. During the talk which was basically a long live demo, we... - Sent a patch to fix a typo in the talk [2] - Fixed a Zuul job through speculative testing [3] - Updated the openstack-infra IRC meeting chair [4]. Oh, and we also added an item on the next meeting to talk about this talk [5]. It was fun. [1]: https://youtu.be/6gTsL7E7U7Q [2]: https://review.openstack.org/#/c/556738/ [3]: https://review.openstack.org/#/c/556615/ [4]: https://review.openstack.org/#/c/557095/ [5]: https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Sat, Mar 24, 2018 at 9:28 PM, David Moreau Simard wrote: > Hi -infra, > > I'll be presenting a talk at a local OpenStack meetup next week [1] > that will highlight some examples about how people can help and > contribute to the infrastructure project. > The talk will be recorded and should hopefully serve as a form of > informal documentation. 
> > I'd like to disclose some semi-official numbers (as I'd personally > pull them up) to let people have an idea of the scale our contributors > are maintaining. > I suppose this data is already somewhat public if you know where to > look but I don't think it's been written down in a digestable format > in recent history. > > Unless there's any objection, I'd have a slide with up to date numbers such as: > - # of projects hosted (as per git.openstack.org) > - # of servers (in aggregate of all our regions) > -- (Maybe some big highlights like the size of logstash, logs.o.o, Zuul) > - Nodepool capacity (number of clouds, aggregate capacity) > - # of jobs and Ansible playbooks per month ran by Zuul > - Approximate number of maintained and hosted services (irc, > gerritbot, meetbot, gerrit, git, mailing lists, wiki, ask.openstack, > storyboard, codesearch, etc.) > - Probably some high level numbers from Stackalytics > - Maybe something else I haven't thought about > > The idea of the talk is not to brag about all the stuff we're doing > but rather, "hey, you don't need to be a pro in OpenStack to > contribute, we got all these different things you can help with". > > I realize it's a bit last minute but please let me know if you see > anything wrong with this ! > > [1]: https://www.meetup.com/Montreal-OpenStack/events/248344351/ > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] From cboylan at sapwetik.org Thu Mar 29 16:15:25 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 29 Mar 2018 09:15:25 -0700 Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure In-Reply-To: <014701d3c629$68299230$387cb690$@gmail.com> References: <002e01d3c4cf$59e21720$0da64560$@gmail.com> <1522087898.2381285.1316647112.11583426@webmail.messagingengine.com> <014701d3c629$68299230$387cb690$@gmail.com> Message-ID: <1522340125.1638046.1320458640.10B4D336@webmail.messagingengine.com> On Tue, Mar 27, 2018, at 5:12 PM, Bernd Bausch wrote: > Resending this message because it was too large for the distribution list. > > ------- > > Clark, > > My first test uses this local.pp. It's copied verbatim from [1]: > ~~~~ > # local.pp > class { 'openstack_project::etherpad': > ssl_cert_file_contents => hiera('etherpad_ssl_cert_file_contents'), This is the public portion of the ssl certificate used to run an https server. It includes the BEGIN and END CERTIFICATE lines of the cert file contents and everything in between. > ssl_key_file_contents => hiera('etherpad_ssl_key_file_contents'), This is the private portion of the ssl certificate used to run an https server. It includes the BEGIN and END PRIVATE KEY lines of the cert file contents and everything in between. > ssl_chain_file_contents => hiera('etherpad_ssl_chain_file_contents'), This is the chain of certificates needed to trust the certificate (if required, not all certs will have this). > mysql_host => hiera('etherpad_db_host', 'localhost'), > mysql_user => hiera('etherpad_db_user', 'etherpad'), > mysql_password => hiera('etherpad_db_password','etherpad'), > } In the case of using built in snakeoil certs on ubuntu you can just provide the ssl_key_file and ssl_cert_file values and rely on the contents being already in those files to make this simpler rather than going and getting a certificate. However you could also use something like Let's Encrypt to get the certificates and set their content above. 
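A hedged sketch of what such a local.pp might look like with the distro snakeoil certificate follows; the snakeoil paths are the usual Ubuntu ones and the exact parameter set is my assumption, so compare it against the etherpad_dev example linked just below.
~~~~
# local.pp -- sketch: use the system snakeoil cert instead of hiera values
class { 'openstack_project::etherpad':
  ssl_cert_file           => '/etc/ssl/certs/ssl-cert-snakeoil.pem',
  ssl_cert_file_contents  => '',
  ssl_key_file            => '/etc/ssl/private/ssl-cert-snakeoil.key',
  ssl_key_file_contents   => '',
  ssl_chain_file_contents => '',
  mysql_host              => 'localhost',
  mysql_user              => 'etherpad',
  mysql_password          => 'etherpad',
}
~~~~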
Example of using snakeoil certs at https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/etherpad_dev.pp#n12

> ~~~~
> The commands I run are also verbatim from the same page:
> ~~~~
> # ./install_puppet.sh
> # ./install_modules.sh
> # puppet apply -l /tmp/manifest.log --modulepath=modules:/etc/puppet/modules
> manifests/local.pp
> ~~~~
>
> My second test closely follows [2]. Here, I take the puppetmaster's original
> site.pp, adapt the domain "openstack.org" to my domain at home and remove all
> node definitions except puppetmaster and etherpad. My file is at the end of
> this message [4].
>
> The commands:
> ~~~~
> # ./install_puppet.sh
> # ./install_modules.sh
> # vi site.pp # see [4]
> # puppet
> apply --modulepath='/opt/system-config/production/modules:/etc/puppet/modules'
> -e 'include openstack_project::puppetmaster'
> ~~~~
>
> > Generally though hiera is used for anything that will be secret or very site
> > specific. So in this case the expectation is that you will set up a hiera
> > file with the info specific for your deployment (because you shouldn't have
> > the ssl cert private data for our deployment and we shouldn't have yours).
> > This is likely a missing set of info for our docs. We should add something
> > with general hiera setup to get people going.
>
> Yes. The documentation seems to treat the hiera as a given; it just exists,
> and there doesn't seem to be any information about its content or even whether
> it's really required.
> Once I know the issues and technology better (steep learning curve), I'd be
> happy to write documentation from the perspective of a newbie.
> For now, let me do more testing with hardcoded values rather than hiera. I
> certainly learn a lot doing this.
>
> > Unfortunately I don't remember off the top of my head how to set up a hiera
> > so I will have to dig into docs (or maybe someone else can chime in with
> > that info).
>
> In principle, I can do that (for Puppet 4 at least), but the question is what
> goes into the OpenStack CI production hiera. I see a directory
> /opt/system-config/production/hiera [3] - is that it? It doesn't contain
> anything about Etherpad, though. I also did a codesearch for
> "etherpad_ssl_cert_file_contents", no result (except for the site.pp).

This is the public hiera which lives in the system-config repo itself. We can put content in there that is safe to share publicly but may still need to be customized by downstream deployments. Unfortunately, because we can't share our private hiera content, that data remains harder to share and will in many cases be manifest dependent. Your private hiera should live elsewhere in the hiera lookup path. I believe ours lives in /etc/puppet/hieradata/production.

One approach we may want to take is to go node by node in site.pp and try to provide descriptions for the content of each hiera lookup used (or, when there are logical groups of hiera lookups, descriptions for that group). That will hopefully make it more clear what the data is without needing to divulge the actual sensitive information.

I hope this helps. Sorry I didn't respond sooner, have been traveling and attending a conference.

>
> Thanks much, Clark!
>
> Bernd
> ---
> [1]
> https://docs.openstack.org/infra/system-config/sysadmin.html#making-a-change-in-puppet
> [2] https://docs.openstack.org/infra/system-config/puppet.html
> [3]
> https://git.openstack.org/cgit/openstack-infra/system-config/tree/hiera
> [4] My site.pp:

From cboylan at sapwetik.org  Thu Mar 29 18:11:39 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Thu, 29 Mar 2018 11:11:39 -0700
Subject: [OpenStack-Infra] Recap of the Cross Community Infra/CI/CD event before ONS
Message-ID: <1522347099.1684469.1320576008.19428E36@webmail.messagingengine.com>

Hello everyone,

Thought I would give a recap of the Cross Community CI event that Fatih, Melvin, and Robyn hosted prior to the ONS conference this last weekend. As a small disclaimer, there was a lot to ingest over a short period of time, so apologies if I misremember and get names or projects or topics wrong.

The event had representatives from OpenStack, Ansible, Linux Foundation, OpenDaylight, OPNFV, ONAP, CNCF, and fd.io (and probably others that I don't remember). The event was largely split into two halves: the first to get to know each project (the community they represent, the tools and methods they use and the challenges they face) and the second to work together to reach common understanding on topics such as vocabulary, tooling practices, and particular issues that affect many of us. Notes were taken for each day (half) and can be found on mozilla's etherpad [0] [1].

My biggest takeaway from the event was that while we produce different software, we face many of the same challenges performing CI/CD for this software, and there is a lot of opportunity for us to work together. In many cases we already use many of the same tools. Gerrit, for example, is quite popular with the LF projects. In other places we have made distinct choices like Jenkins or Zuul or Gitlab CI, but still have to solve similar issues across these tools like security of job runs and signing of release artifacts.

I've personally volunteered along with Trevor Bramwell at the LF to sort out some of the common security issues we face running arbitrary code pulled down from the Internet. Another topic that had a lot of interest was building (or consuming an existing one if it already exists) a message bus to enable machine to machine communication between CI systems. This would help groups like OPNFV, which are integrating the output of OpenStack and others, to know when there are new things that need testing and where to get them.

Basically, we previously operated in silos despite significant overlap in tooling and issues we face, and since we all work on open source software little prevents us from working together, so we should do that more. If this sounds like a good idea and is interesting to you, there is a wiki [2] with information on places to collaborate. Currently there are things like a mailing list, freenode IRC channel (other chat tools too if you prefer), and a wiki. Feel free to sign up and get involved. Also I'm happy to give my thoughts on the event if you have further questions.
[0] https://public.etherpad-mozilla.org/p/infra_cicd_day1 [1] https://public.etherpad-mozilla.org/p/infra_cicd_day2 [2] https://gitlab.openci.io/openci/community/wikis/home#collaboration-tools Thank you to everyone who helped organize and attended making it a success, Clark From pabelanger at redhat.com Thu Mar 29 18:55:06 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 29 Mar 2018 14:55:06 -0400 Subject: [OpenStack-Infra] Recap of the Cross Community Infra/CI/CD event before ONS In-Reply-To: <1522347099.1684469.1320576008.19428E36@webmail.messagingengine.com> References: <1522347099.1684469.1320576008.19428E36@webmail.messagingengine.com> Message-ID: <20180329185506.GA1172@localhost.localdomain> On Thu, Mar 29, 2018 at 11:11:39AM -0700, Clark Boylan wrote: > Hello everyone, > > Thought I would give a recap of the Cross Community CI event that Fatih, Melvin, and Robyn hosted prior to the ONS conference this last weekend. As a small disclaimer there was a lot to ingest over a short period of time so apologies if I misremember and get names or projects or topics wrong. > > The event had representatives from OpenStack, Ansible, Linux Foundation, OpenDaylight, OPNFV, ONAP, CNCF, and fd.io (and probably others that I don't remember). The event was largely split into two halves, the first a get to know each project (the community they represent, the tools and methods they use and the challenges they face) and the second working together to reach common understanding on topics such as vocabulary, tooling pracitices, and addressing particular issues that affect many of us. Notes were taken for each day (half) and can be found on mozilla's etherpad [0] [1]. > > My biggest takeaway from the event was that while we produce different software we face many of the same challenges performing CI/CD for this software and there is a lot of opportunity for us to work together. In many cases we already use many of the same tools. Gerrit for example is quite popular with the LF projects. In other places we have made distinct choices like Jenkins or Zuul or Gitlab CI, but still have to solve similar issues across these tools like security of job runs and signing of release artifacts. > > I've personally volunteered along with Trevor Bramwell at the LF to sort out some of the common security issues we face running arbitrary code pulled down from the Internet. Another topic that had a lot of interest was building (or consuming some existing if it already exists) message bus to enable machine to machine communication between CI systems. This would help groups like OPNFV which are integrating the output of OpenStack and others to know when there are new things that needs testing and where to get them. > > Basically we previously operated in silos despite significant overlap in tooling and issues we face and since we all work on open source software little prevents us from working together so we should do that more. If this sounds like a good idea and is interesting to you there is a wiki [2] with information on places to collaborate. Currently there are things like a mailing list, freenode IRC channel (other chat tools too if you prefer), and a wiki. Feel free to sign up and get involved. Also I'm happy to give my thoughts on the event if you have further questions. 
> > [0] https://public.etherpad-mozilla.org/p/infra_cicd_day1 > [1] https://public.etherpad-mozilla.org/p/infra_cicd_day2 > [2] https://gitlab.openci.io/openci/community/wikis/home#collaboration-tools > > Thank you to everyone who helped organize and attended making it a success, > Clark > Great report, What was the feedback about continuing these meetings ever 6 / 12 months? Do you think it was a one off or something that looks to grow into a recurring event? I'm interested in the message bus topic myself, it reminds me to rebase some fedmsg patches :) Thanks for the report, Paul From cboylan at sapwetik.org Thu Mar 29 19:03:57 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 29 Mar 2018 12:03:57 -0700 Subject: [OpenStack-Infra] Recap of the Cross Community Infra/CI/CD event before ONS In-Reply-To: <20180329185506.GA1172@localhost.localdomain> References: <1522347099.1684469.1320576008.19428E36@webmail.messagingengine.com> <20180329185506.GA1172@localhost.localdomain> Message-ID: <1522350237.2701096.1320643160.399A74E8@webmail.messagingengine.com> > Great report, > > What was the feedback about continuing these meetings ever 6 / 12 months? Do you > think it was a one off or something that looks to grow into a recurring > event? Great question and something I probably should have touched on. The current planning there is to hop around to different community events through the year so that a wider range of participants can take part and to avoid becoming focused on any one community. I want to say the current rough plan (please confirm before booking any travel) is to have some sort of gathering around OpenDev (which is going along the OpenStack Summit in May), AnsibleFest, and Kubecon in Seattle at the end of the year. That should give us a good diversity in community and attendance (ONS was an LF event). > > I'm interested in the message bus topic myself, it reminds me to rebase some > fedmsg patches :) Yes, your name came up during this topic as someone that has been working with some of these tools already. Definitely reach out and get involved. > > Thanks for the report, > Paul From cboylan at sapwetik.org Thu Mar 29 19:48:24 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 29 Mar 2018 12:48:24 -0700 Subject: [OpenStack-Infra] extending python-jenkins-core group In-Reply-To: <1521069307.2167490.1303528184.1BA49C59@webmail.messagingengine.com> References: <5CD43122-6ED5-4160-B1AF-0270D1CFC41A@redhat.com> <1521069307.2167490.1303528184.1BA49C59@webmail.messagingengine.com> Message-ID: <1522352904.2718172.1320693248.7837523F@webmail.messagingengine.com> On Wed, Mar 14, 2018, at 4:15 PM, Clark Boylan wrote: > On Tue, Feb 27, 2018, at 1:33 PM, Sorin Sbarnea wrote: > > Hi! > > > > I would like to propose extending the list of people with commit access > > to python-jenkins because that repository needs more attention. > > > > As you know this is a key dependency of jenkins-job-builder and > > sometimes we need to fix bugs (or implement features) in the library. > > > > https://review.openstack.org/#/admin/groups/322,members > > > > > > Is seems that the current list of members is not long enough as even few > > trivial reviews were ignored for long time. > > > > I think that adding few others should give the project a boost, all of > > them already core committers on jenkins-job-builder-core: > > * Thanh Ha > > * Sorin Sbarnea (nominating myself, bit lame...) 
> > * Wayne Warren > > (picked based on who performed reviews recently) > > > > An alternative would be to add the entire jenkins-job-builder-core group > > as member of python-jenkins one. > > > > Please let me know what you think about this proposal. > > Fungi has pointed out that python-jenkins isn't an official Infra (or > even OpenStack) project. This means my input here is mostly as an > outside observer and should not be treated as special in any way. > > I think it would be a great idea to expand the core membership > particularly if these individuals are interested in maintaining the > project. My recollection was that we imported the project into Gerrit in > the first place because it had gone stale on launchpad. The initial > people involved in that were James Page, Jim Blair, and Khai Do. You > already have input from one of the three, but maybe check up with the > other two and if they give you the go ahead call it good and update the > group? After speaking to Thanh this week at a conference and not seeing any objections to this general idea I have gone ahead and made some of these group changes. Specifically Thanh has been added to python-jenkins-release (as he is familiar with the release process JJB uses) and Thanh, Sorin, and Wayne have been added to python-jenkins-core. I think the new cores can work out if anyone else would be appropriate to add but I didn't want to go too overboard with my Gerrit admin powers. Hope this gets things moving and let me know if I can help in other ways, Clark From fatih.degirmenci at ericsson.com Thu Mar 29 22:07:25 2018 From: fatih.degirmenci at ericsson.com (Fatih Degirmenci) Date: Thu, 29 Mar 2018 22:07:25 +0000 Subject: [OpenStack-Infra] Recap of the Cross Community Infra/CI/CD event before ONS In-Reply-To: <1522350237.2701096.1320643160.399A74E8@webmail.messagingengine.com> References: <1522347099.1684469.1320576008.19428E36@webmail.messagingengine.com> <20180329185506.GA1172@localhost.localdomain> <1522350237.2701096.1320643160.399A74E8@webmail.messagingengine.com> Message-ID: Hi, I would like to sincerely thank OpenStack community for actively engaging in this initiative. You have great experience in the key topics we discussed and we can all learn from them to address the challenges we all face. About the initial key themes listed on OpenCI Wiki; it is crucial to get more people involved in them. Please share your thoughts, ideas, concerns and what you've already been doing in those areas. As Clark mentioned below, OpenDev is the next event we will be meeting and having working sessions/collaborative discussions as part of the event. Please keep an eye on the schedule to see what topics will be discussed and when. [1] The schedule should be finalized soon. [1] http://2018.opendevconf.com/schedule/ /Fatih On 2018-03-29, 12:06, "Clark Boylan" wrote: > Great report, > > What was the feedback about continuing these meetings ever 6 / 12 months? Do you > think it was a one off or something that looks to grow into a recurring > event? Great question and something I probably should have touched on. The current planning there is to hop around to different community events through the year so that a wider range of participants can take part and to avoid becoming focused on any one community. I want to say the current rough plan (please confirm before booking any travel) is to have some sort of gathering around OpenDev (which is going along the OpenStack Summit in May), AnsibleFest, and Kubecon in Seattle at the end of the year. 
That should give us a good diversity in community and attendance (ONS was an LF event). > > I'm interested in the message bus topic myself, it reminds me to rebase some > fedmsg patches :) Yes, your name came up during this topic as someone that has been working with some of these tools already. Definitely reach out and get involved. > > Thanks for the report, > Paul _______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra From berndbausch at gmail.com Fri Mar 30 02:27:25 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Fri, 30 Mar 2018 11:27:25 +0900 Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure In-Reply-To: <1522340125.1638046.1320458640.10B4D336@webmail.messagingengine.com> References: <002e01d3c4cf$59e21720$0da64560$@gmail.com> <1522087898.2381285.1316647112.11583426@webmail.messagingengine.com> <014701d3c629$68299230$387cb690$@gmail.com> <1522340125.1638046.1320458640.10B4D336@webmail.messagingengine.com> Message-ID: Thanks much, Clark. Please don't worry about fast or slow responses. Regarding the certificate: I had gone over this obstacle by creating my own self-signed certificate and setting the trust chain parameter to the empty string. This seems to work. Regarding the hiera: That makes sense to me. Certificates count as private data, I guess. Documenting all parameters in site.pp looks like a large task (67 node declarations if I counted right). Before I volunteer :), I will first set up my Etherpad. After fixing the certificate problem, I am hitting more obstacles. I decided to document my progress on an Etherpad https://etherpad.openstack.org/p/Creating_an_OpenStack_CI_at_home, hoping my problems will be useful for improving the docs. I will probably send the occasional email summarizing the status or asking for help. Bernd. On Fri, Mar 30, 2018 at 1:15 AM, Clark Boylan wrote: > On Tue, Mar 27, 2018, at 5:12 PM, Bernd Bausch wrote: > > My first test uses this local.pp. It's copied verbatim from [1]: > > ~~~~ > > # local.pp > > class { 'openstack_project::etherpad': > > ssl_cert_file_contents => hiera('etherpad_ssl_cert_file_contents'), > > This is the public portion of ssl certificate use to run an https server. > It includes the BEGIN and END CERTIFICATE lines of the cert file contents > and everything in between. > > > ssl_key_file_contents => hiera('etherpad_ssl_key_file_contents'), > > This is the portion portion of ssl certificate use to run an https server. > It includes the BEGIN and END PRIVATE KEY lines of the cert file contents > and everything in between. > > > ssl_chain_file_contents => hiera('etherpad_ssl_chain_file_contents'), > > This is the chain of certificates needed to trust the certificate (if > required, not all certs will have this). > > > mysql_host => hiera('etherpad_db_host', 'localhost'), > > mysql_user => hiera('etherpad_db_user', 'etherpad'), > > mysql_password => hiera('etherpad_db_password','etherpad'), > > } > > In the case of using built in snakeoil certs on ubuntu you can just > provide the ssl_key_file and ssl_cert_file values and rely on the contents > being already in those files to make this simpler rather than going and > getting a certificate. However you could also use something like Let's > Encrypt to get the certificates and set their content above. 
> > Example of using snakeoil certs at https://git.openstack.org/ > cgit/openstack-infra/system-config/tree/modules/openstack_ > project/manifests/etherpad_dev.pp#n12 > > > > Unfortunately I don't remember off the top of my head how to set up a > hiera > > > so I will have to dig into docs (or maybe someone else can chime in > with > > > that info). > > > > In principle, I can do that (for Puppet 4 at least), but the question is > what > > goes into the OpenStack CI production hiera. I see a directory > > /opt/system-config/production/hiera [3] - is that it? It doesn't contain > > anything about Etherpad, though. I also did a codesearch for > > "etherpad_ssl_cert_file_contents", no result (except for the site.pp). > > This is the public hiera which lives in the system-config repo itself. We > can put content in there that is safe to share publicly but may still need > to be customized by downstream deployments. Unfortunately because we can't > share our private hiera content that data remains harder to share and will > in many cases be manifest dependent. Your private hiera should live > elsewhere in the hiera lookup path. I believe ours lives in > /etc/puppet/hieradata/production. > > One approach we may want to take is go node by node in site.pp and try to > provide descriptions for the content of each hiera lookup used (or when > there are logical groups of hiera lookups descriptions for that group). > That will hopefully make it more clear what the data is without needing to > divulge the actual sensitive informtation. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Mar 30 13:54:38 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 30 Mar 2018 13:54:38 +0000 Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure In-Reply-To: References: <002e01d3c4cf$59e21720$0da64560$@gmail.com> <1522087898.2381285.1316647112.11583426@webmail.messagingengine.com> <014701d3c629$68299230$387cb690$@gmail.com> <1522340125.1638046.1320458640.10B4D336@webmail.messagingengine.com> Message-ID: <20180330135437.kbhp57vjsat6orwf@yuggoth.org> On 2018-03-30 11:27:25 +0900 (+0900), Bernd Bausch wrote: [...] > Regarding the hiera: That makes sense to me. Certificates count as > private data, I guess. [...] To be fair, certificates and chains are public data published from the servers onto which they're installed. The reason they're in hiera is mostly out of laziness/convenience since we _do_ need to keep the corresponding keys private, and if we replace the keys we need to replace the certs at the exact same time. The inherent asynchronicity we'd end up with by splitting them between private hiera on our management system and public hiera through code review would make that task much harder. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rchugh1 at lenovo.com Tue Mar 6 17:23:07 2018 From: rchugh1 at lenovo.com (Rushil Chugh1) Date: Tue, 06 Mar 2018 17:23:07 -0000 Subject: [OpenStack-Infra] Need an account to setup Lenovo Ironic CI Message-ID: <02A201D9587BB14BA9A10136679446E45803C90B@USMAILMBX03> Hi, Lenovo has a driver in OpenStack Ironic since the Queens release. We need to start reporting as a 3rd party CI vendor by end of Rocky. This email is to request a service account to start reporting as a third party CI system. 
Please let us know if you need anything else from our side.

Thanks
Rushil
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sangho at opennetworking.org  Wed Mar 14 03:56:40 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Wed, 14 Mar 2018 03:56:40 -0000
Subject: [OpenStack-Infra] How to change the owner of a project?
References: <4BB46CF5-12F8-465D-B7E5-4380430C8CA9@opennetworking.org>
Message-ID: <8003D02E-CACE-455E-8B6B-1988E761ED20@opennetworking.org>

Hello,

I would like to know the official process for changing the owner of a project. I am a committer of the networking-onos project, and I want to take over the project. Of course, the owner (maintainer) of the project has agreed to that.

Thank you,

Sangho

-------------- next part --------------
An HTML attachment was scrubbed...
URL: