From cboylan at sapwetik.org Mon Aug 5 19:26:06 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 05 Aug 2019 12:26:06 -0700 Subject: [OpenStack-Infra] Weekly Infra Team Meeting Agenda for August 6, 2019 Message-ID: <716cdce6-3a0d-4e45-82f2-8b5d4676bce5@www.fastmail.com> We will be meeting tomorrow at 19:00 UTC in #openstack-meeting on freenode with this agenda: == Agenda for next meeting == * Announcements ** clarkb busy with other meetings August 13, 2019. Will need volunteer meeting chair. * Actions from last meeting * Specs approval * Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.) ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack] ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management] *** topic:update-cfg-mgmt *** Zuul as CD engine ** OpenDev *** Still have OOM issues, though they are less frequent. *** https://etherpad.openstack.org/p/debugging-gitea08-OOM * General topics ** Trusty Upgrade Progress (clarkb 20190806) *** Testing wiki-dev02 *** Next steps for hosting job logs in swift ** Cloud status update (clarkb 20190806) *** FortNebula status update *** Linaro and MOC updates ** AFS mirroring status (clarkb 20190806) *** Fedora did not update for about a month. Cleanup in progress as well as reducing total size of volume (should make vos release more reliable) *** Debian buster updates not populated by reprepro but are assumed to be present by our mirror setup roles. ** PTG Planning (clarkb 20190806) *** https://etherpad.openstack.org/p/OpenDev-Shanghai-PTG-2019 * Open discussion From jalali.h at msd-my.com Wed Aug 7 14:23:01 2019 From: jalali.h at msd-my.com (jalali.h) Date: Wed, 07 Aug 2019 22:23:01 +0800 Subject: [OpenStack-Infra] How can I have access to OpenStack source code? Message-ID: Hello, We are developing our systems based on AWS.
We are studying other clouds because of the costs of AWS. I'm interested in OpenStack. According to your website, OpenStack is open source software, but I couldn't find its source. Where can I find its source? Thanks, Reza. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Aug 7 21:42:00 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Aug 2019 21:42:00 +0000 Subject: [OpenStack-Infra] How can I have access to OpenStack source code? In-Reply-To: References: Message-ID: <20190807214159.lj677vkclfmkq3i4@yuggoth.org> On 2019-08-07 22:23:01 +0800 (+0800), jalali.h wrote: > Hello, We are developing our systems based on AWS. We are studying > other clouds beacause costs of AWS. Im interested to OpenStack. > According to your website, OpenStack is an open source software > but I couldnt find its source. Where can I find its source? > Thanks, Reza. Assuming you're looking at https://www.openstack.org/ when you refer to the site, at the top you should see a drop-down which says "Software" and if you click on that (or expand it and select "Overview") then the Software page has a link near the top for "Source Code" which should get you the list of sources for the latest release. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jimmy at tipit.net Wed Aug 7 21:43:54 2019 From: jimmy at tipit.net (Jimmy Mcarthur) Date: Wed, 07 Aug 2019 16:43:54 -0500 Subject: [OpenStack-Infra] How can I have access to OpenStack source code?
In-Reply-To: <20190807214159.lj677vkclfmkq3i4@yuggoth.org> References: <20190807214159.lj677vkclfmkq3i4@yuggoth.org> Message-ID: <5D4B461A.8090603@tipit.net> Another good resource is: https://www.openstack.org/software/start/ Cheers, Jimmy > Jeremy Stanley > August 7, 2019 at 4:42 PM > > Assuming you're looking at https://www.openstack.org/ when you refer > to the site, at the top you should see a drop-down which says > "Software" and if you click on that (or expand it and select > "Overview") then the Software page has a link near the top for > "Source Code" which should get you the list of sources for the > latest release. > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > jalali.h > August 7, 2019 at 9:23 AM > Hello, > > We are developing our systems based on AWS. We are studying other > clouds beacause costs of AWS. > Im interested to OpenStack. According to your website, OpenStack is an > open source software but I couldnt find its source. Where can I find > its source? > > Thanks, > Reza > > > > . > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Wed Aug 7 23:58:20 2019 From: donny at fortnebula.com (Donny Davis) Date: Wed, 7 Aug 2019 19:58:20 -0400 Subject: [OpenStack-Infra] How can I have access to OpenStack source code? In-Reply-To: References: Message-ID: Reza, Below is a link to the repos that contain the openstack codebase https://opendev.org/openstack You may also want to check out our documentation https://docs.openstack.org Cheers Donny Davis c: 805 814 6800 On Wed, Aug 7, 2019, 5:30 PM jalali.h wrote: > Hello, > > We are developing our systems based on AWS. 
We are studying other clouds > beacause costs of AWS. > Im interested to OpenStack. According to your website, OpenStack is an > open source software but I couldnt find its source. Where can I find its > source? > > Thanks, > Reza > > > > . > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -------------- next part -------------- An HTML attachment was scrubbed... URL: From shadi at autofitcloud.com Mon Aug 12 16:43:17 2019 From: shadi at autofitcloud.com (Shadi Akiki) Date: Mon, 12 Aug 2019 18:43:17 +0200 Subject: [OpenStack-Infra] opensource infra: server sizes Message-ID: Hello. I've been going through the opensource infrastructure configurations at https://docs.openstack.org/infra/system-config/ and linked from https://opensourceinfra.org/ I don't see any information about server sizes. Is this something that is not interesting to share as part of the opensource infrastructure initiative? -- Shadi Akiki Founder & CEO, AutofitCloud https://autofitcloud.com/ +1 813 579 4935 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Aug 12 16:47:47 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 12 Aug 2019 09:47:47 -0700 Subject: [OpenStack-Infra] opensource infra: server sizes In-Reply-To: References: Message-ID: On Mon, Aug 12, 2019, at 9:43 AM, Shadi Akiki wrote: > Hello. I've been going through the opensource infrastructure > configurations at https://docs.openstack.org/infra/system-config/ > and linked from https://opensourceinfra.org/ > > I don't see any information about server sizes. > Is this something that is not interesting to share as part of the > opensource infrastructure initiative? Our test node sizing is documented here [0]. 
This has remained fairly static as we try to ensure the test sizes are small enough that developers have the opportunity to replicate results locally without needing a datacenter. For the control plane those server sizes tend not to remain fixed over time as demand rises and falls. You can see a current snapshot of server sizing on our cacti server [1]. These values may change as we find the servers are too small or too big though. [0] https://docs.openstack.org/infra/manual/testing.html [1] http://cacti.openstack.org/cacti/graph_view.php Hope this helps, Clark From mnaser at vexxhost.com Mon Aug 12 16:47:47 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 12 Aug 2019 12:47:47 -0400 Subject: [OpenStack-Infra] opensource infra: server sizes In-Reply-To: References: Message-ID: You can see them listed here: https://docs.openstack.org/infra/system-config/contribute-cloud.html On Mon, Aug 12, 2019 at 12:45 PM Shadi Akiki wrote: > > Hello. I've been going through the opensource infrastructure configurations at https://docs.openstack.org/infra/system-config/ > and linked from https://opensourceinfra.org/ > > I don't see any information about server sizes. > Is this something that is not interesting to share as part of the opensource infrastructure initiative? > -- > Shadi Akiki > Founder & CEO, AutofitCloud > https://autofitcloud.com/ > +1 813 579 4935 > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From iwienand at redhat.com Mon Aug 12 23:25:25 2019 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 13 Aug 2019 09:25:25 +1000 Subject: [OpenStack-Infra] Weekly Infra Team Meeting Agenda for August 13, 2019 Message-ID: <20190812232525.GC7334@fedora19.localdomain> We will be meeting tomorrow at 19:00 UTC in #openstack-meeting on freenode with this agenda: == Agenda for next meeting == * Announcements * Actions from last meeting * Specs approval * Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.) ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack] ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management] *** topic:update-cfg-mgmt *** Zuul as CD engine ** OpenDev * General topics ** Trusty Upgrade Progress (clarkb 20190813) *** Next steps for hosting job logs in swift ** AFS mirroring status (ianw 20190813) *** Debian buster updates not populated by reprepro but are assumed to be present by our mirror setup roles. ** PTG Planning (clarkb 20190813) *** https://etherpad.openstack.org/p/OpenDev-Shanghai-PTG-2019 ** New backup server (ianw 20190813) *** https://review.opendev.org/#/c/675537 * Open discussion Thanks, -i From shadi at autofitcloud.com Tue Aug 13 09:44:36 2019 From: shadi at autofitcloud.com (Shadi Akiki) Date: Tue, 13 Aug 2019 11:44:36 +0200 Subject: [OpenStack-Infra] opensource infra: server sizes In-Reply-To: References: Message-ID: Thanks Clark and Mohammed. What I'm looking for is potential savings in resources behind the published openstack infra. For example, if I focus on the ask.openstack.org service (powering the Q&A website), I'm looking for 1- whether or not the ask.openstack.org server is oversized.
This is answered by cacti showing - max cpu 40% over 2 years here - traffic being at 20% of what it was 2 years ago here - used memory at 25% here 2- how the allocated resource can be downsized (which I was hoping to find in the opendev/system-config repo) - the only related info I could find (doesn't contain resource sizing info) are - inventory/groups.yaml - inventory/openstack.yaml - hiera/common.yaml - the only files in the repo with sizing info are (not about ask.openstack.org) - hiera/group/infracloud.yaml (all "cpu" entries are either "24" or empty) - playbooks/clouds_layouts.yml (only 2 entries on vcpu) Where can I find an answer to point 2? -- Shadi Akiki Founder & CEO, AutofitCloud https://autofitcloud.com/ +1 813 579 4935 On Mon, Aug 12, 2019 at 6:47 PM Mohammed Naser wrote: > You can see them listed here: > > https://docs.openstack.org/infra/system-config/contribute-cloud.html > > On Mon, Aug 12, 2019 at 12:45 PM Shadi Akiki > wrote: > > > > Hello. I've been going through the opensource infrastructure > configurations at https://docs.openstack.org/infra/system-config/ > > and linked from https://opensourceinfra.org/ > > > > I don't see any information about server sizes. > > Is this something that is not interesting to share as part of the > opensource infrastructure initiative? > > -- > > Shadi Akiki > > Founder & CEO, AutofitCloud > > https://autofitcloud.com/ > > +1 813 579 4935 > > _______________________________________________ > > OpenStack-Infra mailing list > > OpenStack-Infra at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From iwienand at redhat.com Thu Aug 15 01:15:11 2019 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 15 Aug 2019 11:15:11 +1000 Subject: [OpenStack-Infra] opensource infra: server sizes In-Reply-To: References: Message-ID: <20190815011511.GA5923@fedora19.localdomain> On Tue, Aug 13, 2019 at 11:44:36AM +0200, Shadi Akiki wrote: > 2- how the allocated resource can be downsized (which I was hoping to find > in the opendev/system-config > repo) You are correct that the sizing details for control plane servers are not really listed anywhere. This is really an artifact of us manually creating control-plane servers. When we create a new control-plane server, we use the launch tooling in [1] where you will see we manually select a flavor size. This is dependent on the cloud we launch the server in and the flavors they provide us. There isn't really a strict rule on what flavor is chosen; it's more art than science :) Basically the smallest for what seems appropriate for what the server is doing. After the server is created the exact flavor used is not recorded separately (i.e. other than querying nova directly). So there is no central YAML file or anything with the server and the flavor it was created with. Sometimes the cloud provider will provide us with custom flavors, or ask us to use a particular variant. So in terms of resizing the servers, we are limited to the flavors provided to us by the providers, which varies. In terms of the practicality of resizing, as I'm sure you know this can be harder or easier depending on a big variety of things from the provider. We have resized servers before when it becomes clear they're not performing (recently adding swap to the gitea servers comes to mind). Depending on the type of service it varies; for something not load-balanced that requires production downtime, it's a very manual process. Nobody is opposed to making any of this more programmatic, I'm sure.
It's just a trade-off between development time to create and maintain that, and how often we actually start control-plane servers. In terms of ask.o.o, that is an "8 GB Performance" flavor, as defined by RAX's flavors. This was rebuilt when we upgraded it to Xenial as an 8GB node (from 4GB) as investigation at the time showed 4GB was a bit tight [2]. 8GB is the next quantum up of flavor provided by RAX over 4GB. I hope this helps! -i [1] https://opendev.org/opendev/system-config/src/branch/master/launch [2] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129078.html From corvus at inaugust.com Thu Aug 15 17:04:53 2019 From: corvus at inaugust.com (James E. Blair) Date: Thu, 15 Aug 2019 10:04:53 -0700 Subject: [OpenStack-Infra] [all][infra] Zuul logs are in swift In-Reply-To: <87y305onco.fsf@meyer.lemoncheese.net> (James E. Blair's message of "Tue, 06 Aug 2019 17:01:11 -0700") References: <87y305onco.fsf@meyer.lemoncheese.net> Message-ID: <87wofepdfu.fsf@meyer.lemoncheese.net> Hi, We have made the switch to begin storing all of the build logs from Zuul in Swift. Each build's logs will be stored in one of 7 randomly chosen Swift regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those providers! You'll note that the links in Gerrit to the Zuul jobs now go to a page on the Zuul web app. A lot of the features previously available on the log server are now available there, plus some new ones. If you're looking for a link to a docs preview build, you'll find that on the build page under the "Artifacts" section now. If you're curious about where your logs ended up, you can see the Swift hostname under the "logs_url" row in the summary table. Please let us know if you have any questions or encounter any issues, either here, or in #openstack-infra on IRC.
-Jim From mark at stackhpc.com Fri Aug 16 08:32:41 2019 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 16 Aug 2019 09:32:41 +0100 Subject: [OpenStack-Infra] [all][infra] Zuul logs are in swift In-Reply-To: <87wofepdfu.fsf@meyer.lemoncheese.net> References: <87y305onco.fsf@meyer.lemoncheese.net> <87wofepdfu.fsf@meyer.lemoncheese.net> Message-ID: On Thu, 15 Aug 2019 at 18:05, James E. Blair wrote: > > Hi, > > We have made the switch to begin storing all of the build logs from Zuul > in Swift. > > Each build's logs will be stored in one of 7 randomly chosen Swift > regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those > providers! > > You'll note that the links in Gerrit to the Zuul jobs now go to a page > on the Zuul web app. A lot of the features previously available on the > log server are now available there, plus some new ones. > > If you're looking for a link to a docs preview build, you'll find that > on the build page under the "Artifacts" section now. > > If you're curious about where your logs ended up, you can see the Swift > hostname under the "logs_url" row in the summary table. > > Please let us know if you have any questions or encounter any issues, > either here, or in #openstack-infra on IRC. One minor thing I noticed is that the emails to openstack-stable-maint list no longer reference the branch. It was previously visible in the URL, e.g. - openstack-tox-py27 https://logs.opendev.org/periodic-stable/opendev.org/openstack/networking-midonet/stable/pike/openstack-tox-py27/649bbb2/ : RETRY_LIMIT in 3m 08s However now it is not: openstack-tox-py27 https://zuul.opendev.org/t/openstack/build/464ae8b594cf4dc5b6da532c4ea179a7 : RETRY_LIMIT in 3m 31s I can see the branch if I click through to the linked Zuul build page. 
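For scripts that want the branch without clicking through, each build is also served as JSON by Zuul's REST API. A minimal sketch follows; the endpoint path mirrors the build page URL quoted above, and the field names in the trimmed sample record are assumptions based on typical Zuul build output rather than verified against this deployment:

```python
# Sketch: recover job/project/branch/result from a Zuul build record.
# The API path mirrors the UI URL https://zuul.opendev.org/t/<tenant>/build/<uuid>;
# field names in the sample below are assumptions.
import json


def build_api_url(tenant: str, uuid: str) -> str:
    """Map a build UUID to the (assumed) REST endpoint serving it as JSON."""
    return f"https://zuul.opendev.org/api/tenant/{tenant}/build/{uuid}"


def summarize(build: dict) -> str:
    """Render the one-line summary the old log URLs used to encode."""
    return "{job} on {project}@{branch}: {result}".format(
        job=build["job_name"], project=build["project"],
        branch=build["branch"], result=build["result"])


# Trimmed sample record shaped like the build referenced in this thread.
sample = json.loads("""{
    "job_name": "openstack-tox-py27",
    "project": "openstack/networking-midonet",
    "branch": "stable/pike",
    "result": "RETRY_LIMIT"
}""")

print(build_api_url("openstack", "464ae8b594cf4dc5b6da532c4ea179a7"))
print(summarize(sample))
```

Fetching that URL with any HTTP client and passing the parsed JSON through a formatter like this would let periodic-job mails carry the branch again.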
> > -Jim > From shadi at autofitcloud.com Mon Aug 19 16:36:13 2019 From: shadi at autofitcloud.com (Shadi Akiki) Date: Mon, 19 Aug 2019 18:36:13 +0200 Subject: [OpenStack-Infra] opensource infra: server sizes In-Reply-To: <20190815011511.GA5923@fedora19.localdomain> References: <20190815011511.GA5923@fedora19.localdomain> Message-ID: > > the sizing details for control plane servers are not really listed anywhere If they're not listed anywhere, I suppose that nobody follows up on the sizing until something breaks? Sometimes the cloud provider will provide us with custom flavors, or ask us > to use a particular variant Why would they ask for a particular variant? Is it because these resources are donated by the cloud provider? That's my best guess to justify this. for something not load-balanced that requires production downtime, it's a > very manual process Is it also the case that the load-balancing settings are not recorded anywhere? eg minimum and maximum number of machines in a load-balancing cluster, machine flavor It's just a trade-off between development time to create and maintain > that, and how often we actually start control-plane servers. > Are the control-plane servers the only cloud cost aspect to outweigh the development costs? I'm surprised there isn't already a tool out there that interfaces with rrdtool and/or cacti to help with this. 
rrdtool seems to have been around for 20 years now [1] [2] [1] https://tobi.oetiker.ch/webtools/appreciators.txt [2] https://github.com/oetiker/rrdtool-1.x/commit/37fc663811528ddf3ded4fe236ea26f4f76fa32d#diff-dee0aab09da2b4d69b6722a85037700d -- Shadi Akiki Founder & CEO, AutofitCloud https://autofitcloud.com/ +1 813 579 4935 On Thu, Aug 15, 2019 at 3:15 AM Ian Wienand wrote: > On Tue, Aug 13, 2019 at 11:44:36AM +0200, Shadi Akiki wrote: > > 2- how the allocated resource can be downsized (which I was hoping to > find > > in the opendev/system-config > > repo) > > You are correct that the sizing details for control plane servers are > not really listed anywhere. > > This is really an artifact of us manually creating control-plane > servers. When we create a new control-plane server, we use the launch > tooling in [1] where you will see we manually select a flavor size. > This is dependent on the cloud we launch the server in and the flavors > they provide us. > > There isn't really a strict rule on what flavor is chosen; it's more > art than science :) Basically the smallest for what seems appropriate > for what the server is doing. > > After the server is created the exact flavor used is not recorded > separately (i.e. other than querying nova directly). So there is no > central YAML file or anything with the server and the flavor it was > created with. Sometimes the cloud provider will provide us with > custom flavors, or ask us to use a particular variant. > > So in terms of resizing the servers, we are limited to the flavors > provided to us by the providers, which varies. In terms of the > practicality of resizing, as I'm sure you know this can be harder or > easier depending on a big variety of things from the provider. We > have resized servers before when it becomes clear they're not > performing (recently adding swap to the gitea servers comes to mind).
> Depending on the type of service it varies; for something not > load-balanced that requires production downtime, it's a very manual > process. > > Nobody is opposed to making any of this more programatic, I'm sure. > It's just a trade-off between development time to create and maintain > that, and how often we actually start control-plane servers. > > In terms of ask.o.o, that is a "8 GB Performance" flavor, as defined > by RAX's flavors. This was rebuilt when we upgraded it to Xenial as > an 8GB node (from 4GB) as investigation at the time showed 4GB was a > bit tight [2]. 8GB is the next quanta up of flavor provided by RAX > over 4GB. > > I hope this helps! > > -i > > [1] https://opendev.org/opendev/system-config/src/branch/master/launch > [2] > http://lists.openstack.org/pipermail/openstack-dev/2018-April/129078.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Aug 19 16:53:54 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 19 Aug 2019 09:53:54 -0700 Subject: [OpenStack-Infra] opensource infra: server sizes In-Reply-To: References: <20190815011511.GA5923@fedora19.localdomain> Message-ID: On Mon, Aug 19, 2019, at 9:36 AM, Shadi Akiki wrote: > > the sizing details for control plane servers are not really listed anywhere > > If they're not listed anywhere, I suppose that nobody follows up on the > sizing until something breaks? > > > Sometimes the cloud provider will provide us with custom flavors, or ask us to use a particular variant > > Why would they ask for a particular variant? Is it because these > resources are donated by the cloud provider? > That's my best guess to justify this. Yes, resources are donated by the cloud providers. I think providers tend to use special flavors to control how our resources are scheduled.
> > > for something not load-balanced that requires production downtime, it's a very manual process > > Is it also the case that the load-balancing settings are not recorded > anywhere? > eg minimum and maximum number of machines in a load-balancing cluster, > machine flavor Our load balancers don't currently do auto scaling. But configs do live in config management. For example: https://opendev.org/opendev/system-config/src/branch/master/playbooks/group_vars/gitea-lb.yaml There are longer term plans to host that service in kubernetes which could make use of scaling, but gitea the software isn't capable of running in a true cluster yet. > > > It's just a trade-off between development time to create and maintain > > that, and how often we actually start control-plane servers. > > Are the control-plane servers the only cloud cost aspect to outweigh > the development costs? > I'm surprised there isn't already a tool out there that interfaces with > rrdtool and/or cacti to help with this. > rrdtool seems to have been around since 20 years now [1] [2] The expectation from cacti is likely that you'll use the same snmp MIB data that cacti uses to build its graphs to query sizing info directly if you want it. Rather than expecting cacti or rrdtool to expose that to you. > > [1] https://tobi.oetiker.ch/webtools/appreciators.txt > [2] > https://github.com/oetiker/rrdtool-1.x/commit/37fc663811528ddf3ded4fe236ea26f4f76fa32d#diff-dee0aab09da2b4d69b6722a85037700d > -- > Shadi Akiki > Founder & CEO, AutofitCloud > https://autofitcloud.com/ > +1 813 579 4935 From cboylan at sapwetik.org Mon Aug 19 21:09:21 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 19 Aug 2019 14:09:21 -0700 Subject: [OpenStack-Infra] Meeting Agenda for August 20, 2019 Message-ID: We will have our weekly team meeting at 19:00UTC tomorrow August 20, 2019. 
This is the agenda we will follow: == Agenda for next meeting == * Announcements * Actions from last meeting * Specs approval * Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.) ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack] ** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management] *** topic:update-cfg-mgmt *** Zuul as CD engine ** OpenDev * General topics ** Trusty Upgrade Progress (clarkb 20190820) *** Job logs are now in swift ** AFS mirroring status (ianw 20190820) *** Debian buster updates not populated by reprepro but are assumed to be present by our mirror setup roles. *** Fedora seems to have stopped updating again ** PTG Planning (clarkb 20190820) *** https://etherpad.openstack.org/p/OpenDev-Shanghai-PTG-2019 ** New backup server (ianw 20190820) *** https://review.opendev.org/#/c/675537 ** Intermittent Lack of IPv4 in Limestone (clarkb 20190820) * Open discussion From donny at fortnebula.com Wed Aug 21 14:31:08 2019 From: donny at fortnebula.com (Donny Davis) Date: Wed, 21 Aug 2019 10:31:08 -0400 Subject: [OpenStack-Infra] opensource infra: server sizes In-Reply-To: References: <20190815011511.GA5923@fedora19.localdomain> Message-ID: >If they're not listed anywhere, I suppose that nobody follows up on the sizing until something breaks? I have been introspecting the infra mirror on FortNebula to determine if there is any last little bit of performance to be gained. It's also publicly tracked here https://grafana.fortnebula.com/d/9MMqh8HWk/openstack-utilization On Mon, Aug 19, 2019 at 12:57 PM Clark Boylan wrote: > On Mon, Aug 19, 2019, at 9:36 AM, Shadi Akiki wrote: > > > the sizing details for control plane servers are not really listed > anywhere > > > > If they're not listed anywhere, I suppose that nobody follows up on the > > sizing until something breaks?
> > > > > Sometimes the cloud provider will provide us with custom flavors, or > ask us to use a particular variant > > > > Why would they ask for a particular variant? Is it because these > > resources are donated by the cloud provider? > > That's my best guess to justify this. > > Yes, resources are donated by the cloud providers. I think providers tend > to use special flavors to control how are resources are scheduled. > > > > > > for something not load-balanced that requires production downtime, > it's a very manual process > > > > Is it also the case that the load-balancing settings are not recorded > > anywhere? > > eg minimum and maximum number of machines in a load-balancing cluster, > > machine flavor > > Our load balancers don't currently do auto scaling. But configs do live in > config management. For example: > https://opendev.org/opendev/system-config/src/branch/master/playbooks/group_vars/gitea-lb.yaml > > There are longer term plans to host that service in kubernetes which could > make use of scaling, but gitea the software isn't capable of running in a > true cluster yet. > > > > > > It's just a trade-off between development time to create and maintain > > > that, and how often we actually start control-plane servers. > > > > Are the control-plane servers the only cloud cost aspect to outweigh > > the development costs? > > I'm surprised there isn't already a tool out there that interfaces with > > rrdtool and/or cacti to help with this. > > rrdtool seems to have been around since 20 years now [1] [2] > > The expectation from cacti is likely that you'll use the same snmp MIB > data that cacti uses to build its graphs to query sizing info directly if > you want it. Rather than expecting cacti or rrdtool to expose that to you. 
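As an aside, the right-sizing check being discussed in this thread is small enough to sketch. The function below flags a server as a downsize candidate when its observed peaks leave ample headroom; the thresholds are illustrative only, and the sample figures echo the rough cacti numbers quoted earlier (max CPU ~40%, memory ~25% for ask.openstack.org):

```python
# Toy right-sizing check: flag a server as a downsize candidate when both
# CPU and memory peaks leave more than `headroom_pct` of capacity unused.
# Thresholds are illustrative; a real check would use long-window
# percentiles from the same SNMP data cacti already collects.

def downsize_candidate(peak_cpu_pct: float, peak_mem_pct: float,
                       headroom_pct: float = 50.0) -> bool:
    """True when both peaks sit below (100 - headroom_pct) percent."""
    limit = 100.0 - headroom_pct
    return peak_cpu_pct < limit and peak_mem_pct < limit


# Rough figures quoted for ask.openstack.org earlier in this thread.
print(downsize_candidate(peak_cpu_pct=40.0, peak_mem_pct=25.0))  # True
print(downsize_candidate(peak_cpu_pct=80.0, peak_mem_pct=25.0))  # False
```

The interesting part in practice is not the comparison but choosing the observation window and headroom, which is exactly the "more art than science" trade-off described above.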
> > > [1] https://tobi.oetiker.ch/webtools/appreciators.txt > > [2] > > > https://github.com/oetiker/rrdtool-1.x/commit/37fc663811528ddf3ded4fe236ea26f4f76fa32d#diff-dee0aab09da2b4d69b6722a85037700d > > -- > > Shadi Akiki > > Founder & CEO, AutofitCloud > > https://autofitcloud.com/ > > +1 813 579 4935 > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -------------- next part -------------- An HTML attachment was scrubbed... URL: From honjo.rikimaru at ntt-tx.co.jp Fri Aug 23 05:21:53 2019 From: honjo.rikimaru at ntt-tx.co.jp (Rikimaru Honjo) Date: Fri, 23 Aug 2019 14:21:53 +0900 Subject: [OpenStack-Infra] Failed to SSH login into the gerrit(Too many concurrent connections ) Message-ID: <3c0bba41-29b1-0822-84b3-6761e94d8c59@ntt-tx.co.jp_1> Hello, I have my zuulv3 instance for 3rd party CI. I stopped & restarted it several times to test my configuration yesterday. As a result, my zuulv3 instance started to fail to connect to the gerrit (review.opendev.org). The following error occurred at that time. > Received disconnect from 104.130.246.32 port 29418:12: Too many concurrent connections (64) - max. allowed: 64 I tried to run the ssh commands[1] manually. They also failed for the same reason. I also tried from another machine. The result was the same. How can I resolve this issue? [1] e.g. "gerrit stream-events", "git-upload-pack " Best regards, -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at ntt-tx.co.jp From honjo.rikimaru at ntt-tx.co.jp Fri Aug 23 02:58:44 2019 From: honjo.rikimaru at ntt-tx.co.jp (Rikimaru Honjo) Date: Fri, 23 Aug 2019 11:58:44 +0900 Subject: [OpenStack-Infra] Failed to SSH login into the gerrit(Too many concurrent connections ) Message-ID: Hello, I have my zuulv3 instance for 3rd party CI.
I stopped & restarted it several times to test my configuration yesterday. As a result, my zuulv3 instance started to fail to connect to the gerrit (review.opendev.org). The following error occurred at that time. > Received disconnect from 104.130.246.32 port 29418:12: Too many concurrent connections (64) - max. allowed: 64 I tried to run the ssh commands[1] manually. They also failed for the same reason. I also tried from another machine. The result was the same. How can I resolve this issue? [1] e.g. "gerrit stream-events", "git-upload-pack " Best regards, -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at ntt-tx.co.jp From cboylan at sapwetik.org Fri Aug 23 21:07:50 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 23 Aug 2019 14:07:50 -0700 Subject: [OpenStack-Infra] Failed to SSH login into the gerrit (Too many concurrent connections) In-Reply-To: References: Message-ID: On Fri, Aug 23, 2019, at 9:40 AM, Rikimaru Honjo wrote: > Hello, > > I have my zuulv3 instance for 3rd party CI. > > I stopped & restarted it several times to test my configuration > yesterday. > As aresult, my zuulv3 instance started to fail to connect the > gerrit(review.opendev.org). > The following error was occurred at that time. > > > Received disconnect from 104.130.246.32 port 29418:12: Too many concurrent connections (64) - max. allowed: 64 > > I tried to run ssh command[1] manually. It was also failed by same reason. > And, I also tried it on other machine. The result was the same. > > How can I resolve this issue? There is only one user with 64 connections currently so I am going to assume this is your third party CI user (it is NTT SystemFault MasakariIntegration CI). I've gone ahead and closed all 64 existing connections in Gerrit. This is typically caused by ssh clients that don't properly close their ssh connection with gerrit. At one point this happened because of a bug in Zuul.
That bug has long since been fixed; you should ensure that you are running an up-to-date version of zuul and paramiko to avoid this. Firewalls that time out connections may also cause this to happen.

Also for future debugging it helps to get the client name and/or ID so that we can be sure we are identifying the correct accounts and connections on the Gerrit side of things.

>
> [1]
> e.g. "gerrit stream-events", "git-upload-pack "
>
> Best regards,
> --
> _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
> Rikimaru Honjo
> E-mail:honjo.rikimaru at ntt-tx.co.jp

From honjo.rikimaru at ntt-tx.co.jp  Mon Aug 26 02:59:59 2019
From: honjo.rikimaru at ntt-tx.co.jp (Rikimaru Honjo)
Date: Mon, 26 Aug 2019 11:59:59 +0900
Subject: [OpenStack-Infra] Failed to SSH login into the gerrit(Too many concurrent connections )
In-Reply-To: 
References: 
Message-ID: <9c470f47-70e4-b2d2-7d76-a7a8d99923b4@ntt-tx.co.jp_1>

Hi Clark,

On 2019/08/24 6:07, Clark Boylan wrote:
> On Fri, Aug 23, 2019, at 9:40 AM, Rikimaru Honjo wrote:
>> Hello,
>>
>> I have my zuulv3 instance for 3rd party CI.
>>
>> I stopped & restarted it several times to test my configuration
>> yesterday.
>> As aresult, my zuulv3 instance started to fail to connect the
>> gerrit(review.opendev.org).
>> The following error was occurred at that time.
>>
>>> Received disconnect from 104.130.246.32 port 29418:12: Too many concurrent connections (64) - max. allowed: 64
>>
>> I tried to run ssh command[1] manually. It was also failed by same reason.
>> And, I also tried it on other machine. The result was the same.
>>
>> How can I resolve this issue?
>
> There is only one user with 64 connections currently so I am going to assume this is your third party CI user (it is NTT SystemFault MasakariIntegration CI). I've gone ahead and closed all 64 existing connections in Gerrit.

Thank you for closing ssh connections!
I can connect now.

> This is typically caused by ssh clients that don't properly close their ssh connection with gerrit.
> At one point this happened because of a bug in Zuul. That bug has long since been fixed; you should ensure that you are running an up to date version of zuul and paramiko to avoid this. Firewalls that timeout connections may also cause this to happen.

OK. I haven't updated zuul for a long time. I will try to update it.

> Also for future debugging it helps to get the client name and/or ID so that we can be sure we are identifying the correct accounts and connections on the Gerrit side of things.

It sounds great.

Best regards,

>>
>> [1]
>> e.g. "gerrit stream-events", "git-upload-pack "
>>
>> Best regards,
>> --
>> _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
>> Rikimaru Honjo
>> E-mail:honjo.rikimaru at ntt-tx.co.jp
>
> _______________________________________________
> OpenStack-Infra mailing list
> OpenStack-Infra at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikimaru at ntt-tx.co.jp

From cboylan at sapwetik.org  Mon Aug 26 21:35:23 2019
From: cboylan at sapwetik.org (Clark Boylan)
Date: Mon, 26 Aug 2019 14:35:23 -0700
Subject: [OpenStack-Infra] Meeting Agenda for August 27, 2019
Message-ID: 

We will meet at 19:00 UTC on August 27, 2019 to discuss this agenda:

== Agenda for next meeting ==
* Announcements
** OpenStack election season is upon us
* Actions from last meeting
* Specs approval
* Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.)
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management]
*** topic:update-cfg-mgmt
*** Zuul as CD engine
** OpenDev
* General topics
** Trusty Upgrade Progress (clarkb 20190827)
*** Wiki updates
** static.openstack.org (ianw 20190827)
*** service audit -> https://etherpad.openstack.org/p/static-services
*** future does not seem to be uploading to/serving from a static host
*** move redirects to dockered haproxy instance
**** https://review.opendev.org/#/c/677903/5 -> split haproxy out to be more generic from gitea
**** https://review.opendev.org/678159 -> Add a service load balancer, add existing redirects, test
*** longer term goal move others to afs publishing?
*** convert to ansible/bionic host post logs.o.o expiration?
** AFS mirroring status (ianw 20190827)
*** Debian buster updates not populated by reprepro but are assumed to be present by our mirror setup roles.
*** fedora issues again (http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2019-08-26.log.html#t2019-08-26T07:25:16) -- pattern?
** PTG Planning (clarkb 20190827)
*** https://etherpad.openstack.org/p/OpenDev-Shanghai-PTG-2019
** Next rename window (Kayobe rename(s) needed for Train release inclusion)
* Open discussion

From nate.johnston at redhat.com  Tue Aug 27 16:19:27 2019
From: nate.johnston at redhat.com (Nate Johnston)
Date: Tue, 27 Aug 2019 12:19:27 -0400
Subject: [OpenStack-Infra] [all][infra] Zuul logs are in swift
In-Reply-To: <87wofepdfu.fsf@meyer.lemoncheese.net>
References: <87y305onco.fsf@meyer.lemoncheese.net>
 <87wofepdfu.fsf@meyer.lemoncheese.net>
Message-ID: <20190827161927.h462vwwij6d2nraq@bishop>

On Thu, Aug 15, 2019 at 10:04:53AM -0700, James E. Blair wrote:
> Hi,
>
> We have made the switch to begin storing all of the build logs from Zuul
> in Swift.
>
> Each build's logs will be stored in one of 7 randomly chosen Swift
> regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those
> providers!
>
> You'll note that the links in Gerrit to the Zuul jobs now go to a page
> on the Zuul web app. A lot of the features previously available on the
> log server are now available there, plus some new ones.
>
> If you're looking for a link to a docs preview build, you'll find that
> on the build page under the "Artifacts" section now.
>
> If you're curious about where your logs ended up, you can see the Swift
> hostname under the "logs_url" row in the summary table.
>
> Please let us know if you have any questions or encounter any issues,
> either here, or in #openstack-infra on IRC.

Where should I go to see the logs for periodic jobs? I assume these have
been transferred over, since (for example) the neutron periodic jobs
stopped logging their daily runs after 8/15, except for one time on 8/24.

Thanks,

Nate

From will.code.for.pizza at gmail.com  Thu Aug 29 17:08:07 2019
From: will.code.for.pizza at gmail.com (Ich bins)
Date: Thu, 29 Aug 2019 19:08:07 +0200
Subject: [OpenStack-Infra] Questions: openstack-ansible-ops -> multi-node-aio
Message-ID: 

Hi folks,

I have some short questions about the multi node AIO deployment.

- Is it possible to deploy the virtual environment on just one disk?
I have a large disk, which is pre-formatted with one big LVM called
"/dev/mapper/cloudbox001-vg/root" and no unformatted space on it. There is
enough space (approx. 2 TB) for deployment, but it fails to deploy.

- Which environment variables have to be configured?
If I start build.sh without any variables, it ends up in

https://pastebin.com/pyFR7Mzi

Regards and take care.

Will

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From will.code.for.pizza at gmail.com  Thu Aug 29 17:12:51 2019
From: will.code.for.pizza at gmail.com (Ich bins)
Date: Thu, 29 Aug 2019 19:12:51 +0200
Subject: [OpenStack-Infra] Questions: openstack-ansible-ops -> multi-node-aio
In-Reply-To: 
References: 
Message-ID: 

After upgrading "pip" to the newest version, now the following error occurs:

ERROR: Package 'more-itertools' requires a different Python: 2.7.15 not in '>=3.4'

On Thu, Aug 29, 2019 at 19:08, Ich bins <will.code.for.pizza at gmail.com> wrote:

> Hi folks,
>
> I have some short questions about the multi node AIO deployment.
>
> - Is it possible to deploy the virtual environment on just one disk ?
> I have a large disk, which is pre-formatted with one big LVM called
> "/dev/mapper/cloudbox001-vg/root" and no unformatted space on it. There is
> enough space (aprox. 2 TB) for deployment, but it fails to deploy.
>
> - Which environment variables have to be configured ?
> If I start build.sh without any variables, it ends up in
>
> https://pastebin.com/pyFR7Mzi
>
> Regards and take care.
>
> Will
>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cboylan at sapwetik.org  Thu Aug 29 17:19:57 2019
From: cboylan at sapwetik.org (Clark Boylan)
Date: Thu, 29 Aug 2019 10:19:57 -0700
Subject: [OpenStack-Infra] Questions: openstack-ansible-ops -> multi-node-aio
In-Reply-To: 
References: 
Message-ID: <7bc3cfd5-19ac-4a0a-b0cb-91e58ecdccff@www.fastmail.com>

On Thu, Aug 29, 2019, at 10:12 AM, Ich bins wrote:
> After upgrading "pip" to newest version, now the following error occurs:
>
> ERROR: Package 'more-itertools' requires a different Python: 2.7.15 not
> in '>=3.4'
>
>
>
>
> On Thu, Aug 29, 2019 at 19:08, Ich bins <will.code.for.pizza at gmail.com> wrote:
> > Hi folks,
> >
> > I have some short questions about the multi node AIO deployment.
> >
> > - Is it possible to deploy the virtual environment on just one disk ?
> > I have a large disk, which is pre-formatted with one big LVM called "/dev/mapper/cloudbox001-vg/root" and no unformatted space on it. There is enough space (aprox. 2 TB) for deployment, but it fails to deploy. > > > > - Which environment variables have to be configured ? > > If I start build.sh without any variables, it ends up in > > > > https://pastebin.com/pyFR7Mzi > > > > Regards and take care. > > > > Will The OpenStack Infra team is predominantly tasked with building and operating tools to build the openstack software. Unfortunately, this means we don't have a ton of experience with tools like openstack ansible. You probably want to send your queries to openstack-discuss at lists.openstack.org instead where you should have a mix of developers, operators, and users of openstack including those that build and deploy OSA. Clark From shadi at autofitcloud.com Thu Aug 29 17:20:44 2019 From: shadi at autofitcloud.com (Shadi Akiki) Date: Thu, 29 Aug 2019 20:20:44 +0300 Subject: [OpenStack-Infra] Questions: openstack-ansible-ops -> multi-node-aio In-Reply-To: References: Message-ID: Try to use python 3 and pip 3 instead of version 2. On ubuntu, that would be sudo apt-get install python3 python3-pip and then sudo pip3 install ... later -- Shadi Akiki Founder & CEO, AutofitCloud https://autofitcloud.com/ +1 813 579 4935 On Thu, Aug 29, 2019 at 8:15 PM Ich bins wrote: > After upgrading "pip" to newest version, now the following error occurs: > > ERROR: Package 'more-itertools' requires a different Python: 2.7.15 not in > '>=3.4' > > > > > Am Do., 29. Aug. 2019 um 19:08 Uhr schrieb Ich bins < > will.code.for.pizza at gmail.com>: > >> Hi folks, >> >> I have some short questions about the multi node AIO deployment. >> >> - Is it possible to deploy the virtual environment on just one disk ? >> I have a large disk, which is pre-formatted with one big LVM called >> "/dev/mapper/cloudbox001-vg/root" and no unformatted space on it. There is >> enough space (aprox. 
2 TB) for deployment, but it fails to deploy.
>>
>> - Which environment variables have to be configured ?
>> If I start build.sh without any variables, it ends up in
>>
>> https://pastebin.com/pyFR7Mzi
>>
>> Regards and take care.
>>
>> Will
>>
>>
>>
>>
>> _______________________________________________
> OpenStack-Infra mailing list
> OpenStack-Infra at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From will.code.for.pizza at gmail.com  Thu Aug 29 19:57:37 2019
From: will.code.for.pizza at gmail.com (Ich bins)
Date: Thu, 29 Aug 2019 21:57:37 +0200
Subject: [OpenStack-Infra] Questions: openstack-ansible-ops -> multi-node-aio
In-Reply-To: 
References: 
Message-ID: 

Hi Shadi,

looks good. The script continues....

But now my first question is still open:

-----------------------------
TASK [Prepare the data disk file system] ************
task path: /home/wartung/openstack-ansible-ops/multi-node-aio/playbooks/setup-host.yml:365
fatal: [mnaio1]: FAILED! => {
    "changed": false
}

MSG:

Device /dev/sdb1 not found.
-----------------------------

Is it possible to use a ready-to-use, already-running partition?

$ lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                          8:0    0  2,2T  0 disk
├─sda1                       8:1    0    1M  0 part
└─sda2                       8:2    0  2,2T  0 part
  ├─cloudbox001--vg-root   253:0    0  2,2T  0 lvm  /
  └─cloudbox001--vg-swap_1 253:1    0  976M  0 lvm  [SWAP]

Regards!

On Thu, Aug 29, 2019 at 19:20, Shadi Akiki <shadi at autofitcloud.com> wrote:

> Try to use python 3 and pip 3 instead of version 2.
> On ubuntu, that would be
> sudo apt-get install python3 python3-pip
> and then
> sudo pip3 install ...
later > -- > Shadi Akiki > Founder & CEO, AutofitCloud > https://autofitcloud.com/ > +1 813 579 4935 > > > On Thu, Aug 29, 2019 at 8:15 PM Ich bins > wrote: > >> After upgrading "pip" to newest version, now the following error occurs: >> >> ERROR: Package 'more-itertools' requires a different Python: 2.7.15 not >> in '>=3.4' >> >> >> >> >> Am Do., 29. Aug. 2019 um 19:08 Uhr schrieb Ich bins < >> will.code.for.pizza at gmail.com>: >> >>> Hi folks, >>> >>> I have some short questions about the multi node AIO deployment. >>> >>> - Is it possible to deploy the virtual environment on just one disk ? >>> I have a large disk, which is pre-formatted with one big LVM called >>> "/dev/mapper/cloudbox001-vg/root" and no unformatted space on it. There is >>> enough space (aprox. 2 TB) for deployment, but it fails to deploy. >>> >>> - Which environment variables have to be configured ? >>> If I start build.sh without any variables, it ends up in >>> >>> https://pastebin.com/pyFR7Mzi >>> >>> Regards and take care. >>> >>> Will >>> >>> >>> >>> >>> _______________________________________________ >> OpenStack-Infra mailing list >> OpenStack-Infra at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Aug 29 20:08:11 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 29 Aug 2019 20:08:11 +0000 Subject: [OpenStack-Infra] Questions: openstack-ansible-ops -> multi-node-aio In-Reply-To: References: Message-ID: <20190829200810.ocsq5al75h2mygzc@yuggoth.org> On 2019-08-29 21:57:37 +0200 (+0200), Ich bins wrote: > looks good. The script continues.... > > But now my first question is still open: [...] Please continue this discussion elsewhere (for example the openstack-discuss at lists.openstack.org mailing list). 
As Clark tried to explain earlier, you seem to have accidentally found the mailing list used to coordinate maintaining the systems and services which are *used by* the OpenStack community. The mailing list you are mistakenly discussing this on is not intended for general discussions about how to install or use OpenStack, it's a list about things like the OpenDev source code hosting, code review and gating CI systems, as well as task tracking services, documentation hosting, mailing list servers, wiki... not actually about deploying and running OpenStack itself though. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: