From lennyb at mellanox.com Sun Apr 1 05:21:56 2018
From: lennyb at mellanox.com (Lenny Berkhovsky)
Date: Sun, 1 Apr 2018 05:21:56 +0000
Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure
In-Reply-To: <014701d3c629$68299230$387cb690$@gmail.com>
References: <002e01d3c4cf$59e21720$0da64560$@gmail.com> <1522087898.2381285.1316647112.11583426@webmail.messagingengine.com> <014701d3c629$68299230$387cb690$@gmail.com>
Message-ID:

Hello Bernd,
There is also a Third Party CI page[1] that may assist you

[1] https://docs.openstack.org/infra/openstackci/third_party_ci.html

-----Original Message-----
From: Bernd Bausch [mailto:berndbausch at gmail.com]
Sent: Wednesday, March 28, 2018 3:12 AM
To: openstack-infra at lists.openstack.org
Subject: Re: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure

Resending this message because it was too large for the distribution list.
-------
Clark,

My first test uses this local.pp. It's copied verbatim from [1]:

~~~~
# local.pp
class { 'openstack_project::etherpad':
  ssl_cert_file_contents  => hiera('etherpad_ssl_cert_file_contents'),
  ssl_key_file_contents   => hiera('etherpad_ssl_key_file_contents'),
  ssl_chain_file_contents => hiera('etherpad_ssl_chain_file_contents'),
  mysql_host              => hiera('etherpad_db_host', 'localhost'),
  mysql_user              => hiera('etherpad_db_user', 'etherpad'),
  mysql_password          => hiera('etherpad_db_password', 'etherpad'),
}
~~~~

The commands I run are also verbatim from the same page:

~~~~
# ./install_puppet.sh
# ./install_modules.sh
# puppet apply -l /tmp/manifest.log --modulepath=modules:/etc/puppet/modules manifests/local.pp
~~~~

My second test closely follows [2]. Here, I take the puppetmaster's original site.pp, adapt the domain "openstack.org" to my domain at home and remove all node definitions except puppetmaster and etherpad. My file is at the end of this message[4]. The commands:

~~~~
# ./install_puppet.sh
# ./install_modules.sh
# vi site.pp   # see [4]
# puppet apply --modulepath='/opt/system-config/production/modules:/etc/puppet/modules' -e 'include openstack_project::puppetmaster'
~~~~

> Generally though hiera is used for anything that will be secret or
> very site specific. So in this case the expectation is that you will
> set up a hiera file with the info specific for your deployment
> (because you shouldn't have the ssl cert private data for our deployment and we shouldn't have yours).
> This is likely a missing set of info for our docs. We should add
> something with general hiera setup to get people going.

Yes. The documentation seems to treat the hiera as a given; it just exists, and there doesn't seem to be any information about its content or even whether it's really required. Once I know the issues and technology better (steep learning curve), I'd be happy to write documentation from the perspective of a newbie. For now, let me do more testing with hardcoded values rather than hiera. I certainly learn a lot doing this.

> Unfortunately I don't remember off the top of my head how to set up a
> hiera so I will have to dig into docs (or maybe someone else can chime
> in with that info).

In principle, I can do that (for Puppet 4 at least), but the question is what goes into the OpenStack CI production hiera. I see a directory /opt/system-config/production/hiera [3] - is that it? It doesn't contain anything about Etherpad, though. I also did a codesearch for "etherpad_ssl_cert_file_contents", no result (except for the site.pp).

Thanks much, Clark!
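(As an illustration of what such a hiera file could contain, a minimal sketch covering the lookups in the local.pp above is shown below. The key names come straight from the manifest; every value is a made-up placeholder, not anything from the real OpenStack Infra deployment.)

~~~~
# Hypothetical /etc/puppet/environments/common.yaml example; all values are placeholders.
etherpad_ssl_cert_file_contents: |
  -----BEGIN CERTIFICATE-----
  ... your certificate ...
  -----END CERTIFICATE-----
etherpad_ssl_key_file_contents: |
  -----BEGIN PRIVATE KEY-----
  ... your private key ...
  -----END PRIVATE KEY-----
etherpad_ssl_chain_file_contents: ''
etherpad_db_host: localhost
etherpad_db_user: etherpad
etherpad_db_password: change-me
~~~~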
Bernd

---
[1] https://docs.openstack.org/infra/system-config/sysadmin.html#making-a-change-in-puppet
[2] https://docs.openstack.org/infra/system-config/puppet.html
[3] https://git.openstack.org/cgit/openstack-infra/system-config/tree/hiera
[4] My site.pp:

~~~~
#
# Top-level variables
#
# There must not be any whitespace between this comment and the variables or
# in between any two variables in order for them to be correctly parsed and
# passed around in test.sh
#
$elasticsearch_nodes = hiera_array('elasticsearch_nodes')

#
# Default: should at least behave like an openstack server
#
node default {
  class { 'openstack_project::server':
    sysadmins => hiera('sysadmins', []),
  }
}

# Node-OS: trusty
# (I try this with Centos 7 first)
node 'puppetmaster.home' {
  class { 'openstack_project::server':
    iptables_public_tcp_ports => [8140],
    sysadmins                 => hiera('sysadmins', []),
    pin_puppet                => '3.6.',
  }
  class { 'openstack_project::puppetmaster':
    root_rsa_key          => hiera('puppetmaster_root_rsa_key'),
    puppetmaster_clouds   => hiera('puppetmaster_clouds'),
    enable_mqtt           => true,
    mqtt_password         => hiera('mqtt_service_user_password'),
    mqtt_ca_cert_contents => hiera('mosquitto_tls_ca_file'),
  }
  file { '/etc/openstack/infracloud_vanilla_cacert.pem':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0444',
    content => hiera('infracloud_vanilla_ssl_cert_file_contents'),
    require => Class['::openstack_project::puppetmaster'],
  }
  file { '/etc/openstack/infracloud_chocolate_cacert.pem':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0444',
    content => hiera('infracloud_chocolate_ssl_cert_file_contents'),
    require => Class['::openstack_project::puppetmaster'],
  }
  file { '/etc/openstack/limestone_cacert.pem':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0444',
    content => hiera('limestone_ssl_cert_file_contents'),
    require => Class['::openstack_project::puppetmaster'],
  }
}

# Node-OS: trusty
# Node-OS: xenial
node /^etherpad\d*\.home$/ {
  class { 'openstack_project::server':
    iptables_public_tcp_ports => [22, 80, 443],
    sysadmins                 => hiera('sysadmins', []),
  }
  class { 'openstack_project::etherpad':
    ssl_cert_file_contents  => hiera('etherpad_ssl_cert_file_contents'),
    ssl_key_file_contents   => hiera('etherpad_ssl_key_file_contents'),
    ssl_chain_file_contents => hiera('etherpad_ssl_chain_file_contents'),
    mysql_host              => hiera('etherpad_db_host', 'localhost'),
    mysql_user              => hiera('etherpad_db_user', 'username'),
    mysql_password          => hiera('etherpad_db_password'),
  }
}

# Node-OS: trusty
# Node-OS: xenial
node /^etherpad-dev\d*\.home$/ {
  class { 'openstack_project::server':
    iptables_public_tcp_ports => [22, 80, 443],
    sysadmins                 => hiera('sysadmins', []),
  }
  class { 'openstack_project::etherpad_dev':
    mysql_host     =>
      hiera('etherpad-dev_db_host', 'localhost'),
    mysql_user     => hiera('etherpad-dev_db_user', 'username'),
    mysql_password => hiera('etherpad-dev_db_password'),
  }
}
~~~~

_______________________________________________
OpenStack-Infra mailing list
OpenStack-Infra at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

From cboylan at sapwetik.org Tue Apr 3 20:23:44 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 03 Apr 2018 13:23:44 -0700
Subject: [OpenStack-Infra] Selecting New Priority Effort(s)
Message-ID: <1522787024.2344911.1325411024.3114AC9F@webmail.messagingengine.com>

Hello everyone,

I just approved the change to mark the Zuul v3 priority effort as completed in the infra-specs repo. Thank you to everyone that made that possible. With Zuul v3 work largely done we can now look forward to our next priority efforts.

Currently the only task marked as a priority is the task-tracker spec which at this point is migrating projects into storyboard. I think we can likely add one or two new priority efforts to this list.

After some quick initial brainstorming these were the ideas I had for getting onto that list (note some may require we actually write a spec):

* Gerrit upgrade to 2.14/2.15
* Control Plane operating system upgrades to Xenial
* Bringing wiki under config management

My bias here is I've personally been working to try and pay down some of this tech debt we've built up simply due to bit rot, but I know we have other specs and I'm sure we can make good arguments for why other efforts should be made a priority. I'd love to get feedback on what others think would make good priority efforts.

Let's use this thread to identify candidates then whittle the list down to one or two to focus on for the next little while.

Thank you,
Clark

From cboylan at sapwetik.org Tue Apr 3 20:48:22 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 03 Apr 2018 13:48:22 -0700
Subject: [OpenStack-Infra] [kolla] kolla-cli master pointer change
In-Reply-To: <40811eb6-b28b-dc50-1278-ede5e671e344@oracle.com>
References: <40811eb6-b28b-dc50-1278-ede5e671e344@oracle.com>
Message-ID: <1522788502.2354576.1325444520.36BB4C84@webmail.messagingengine.com>

On Wed, Mar 28, 2018, at 11:14 AM, Borne Mace wrote:
> Hi All,
>
> I brought up my issue in #openstack-infra and it was suggested that I
> send an email to this list.
>
> The kolla-cli repository was recently created, from existing sources.
> There was an issue with the source repo where the master branch was
> sorely out of date, but there is tagged source which is up to date. My
> hope is that someone can force-push the tag as master so that the master
> branch can be fixed / updated.
>
> I tried to solve this through the normal merge process, but
> since I was not the only committer to that repository gerrit refused to
> post my review. I will add the full output of that attempt at the end
> so folks can see what I'm talking about. If there is some other process
> that is more appropriate for me to follow here let me know and I'm happy
> to go through it.
>
> The latest / optimal code is tagged as o3l_4.0.1.
>
> Thanks much for your help!
>
> -- Borne Mace

Responding to the list to make sure we properly record the steps that were taken here.

I checked out o3l_4.0.1 in kolla-cli locally then pushed it to Gerrit as an admin using `git push gerrit local-branch:master`. Because this was a fast forward I didn't even need to force push it. This also means local clients should update cleanly to the new master commit as well.

I have since received confirmation from Borne that all looks good.

Thank you for your patience,
Clark

From dmsimard at redhat.com Wed Apr 4 02:33:56 2018
From: dmsimard at redhat.com (David Moreau Simard)
Date: Tue, 3 Apr 2018 22:33:56 -0400
Subject: [OpenStack-Infra] Selecting New Priority Effort(s)
In-Reply-To: <1522787024.2344911.1325411024.3114AC9F@webmail.messagingengine.com>
References: <1522787024.2344911.1325411024.3114AC9F@webmail.messagingengine.com>
Message-ID:

It won't be very exciting but we really need to do one of the following two things soon:

1) Ansiblify control plane [1]
2) Update our puppet things to puppet 4 (or 5?)

Puppet 3 has been end of life since Dec 31, 2016. [2]

The longer we draw this out, the more work it'll be :(

[1]: https://review.openstack.org/#/c/469983/
[2]: https://groups.google.com/forum/#!topic/puppet-users/IdutL5FTW7w

David Moreau Simard
Senior Software Engineer | OpenStack RDO
dmsimard = [irc, github, twitter]

On Tue, Apr 3, 2018 at 4:23 PM, Clark Boylan wrote:
> Hello everyone,
>
> I just approved the change to mark the Zuul v3 priority effort as completed in the infra-specs repo. Thank you to everyone that made that possible. With Zuul v3 work largely done we can now look forward to our next priority efforts.
>
> Currently the only task marked as a priority is the task-tracker spec which at this point is migrating projects into storyboard. I think we can likely add one or two new priority efforts to this list.
>
> After some quick initial brainstorming these were the ideas I had for getting onto that list (note some may require we actually write a spec):
>
> * Gerrit upgrade to 2.14/2.15
> * Control Plane operating system upgrades to Xenial
> * Bringing wiki under config management
>
> My bias here is I've personally been working to try and pay down some of this tech debt we've built up simply due to bit rot, but I know we have other specs and I'm sure we can make good arguments for why other efforts should be made a priority. I'd love to get feedback on what others think would make good priority efforts.
>
> Let's use this thread to identify candidates then whittle the list down to one or two to focus on for the next little while.
>
> Thank you,
> Clark
>
> _______________________________________________
> OpenStack-Infra mailing list
> OpenStack-Infra at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

From berndbausch at gmail.com Wed Apr 4 05:45:30 2018
From: berndbausch at gmail.com (Bernd Bausch)
Date: Wed, 4 Apr 2018 14:45:30 +0900
Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure
In-Reply-To:
References: <002e01d3c4cf$59e21720$0da64560$@gmail.com> <1522087898.2381285.1316647112.11583426@webmail.messagingengine.com> <014701d3c629$68299230$387cb690$@gmail.com>
Message-ID: <4857cd5f-1335-6cf7-fcb2-812ad92f65f1@gmail.com>

Lenny,

thanks, these instructions are a bit more robust and easier to understand than [2].

One detail stands out for me: They make it clear that Ubuntu 14 is required. A few Puppet modules, in particular Etherpad used as an example in [2], assume Upstart.
I don't know if Upstart is available in Xenial or recent non-Ubuntu distros, but it's definitely not there by default.

I did find a few places that could be improved or may even be incorrect. How can I formally submit suggestions and bugs in the OpenStack-Infra documentation?

Here they are:

- First, install_puppet.sh is downloaded and executed, then system-config is cloned.
  Since system-config contains install_puppet.sh, it would be more efficient to clone, then
  install Puppet.

- Configuration of /etc/puppet/environments/common.yaml is not quite trivial. Perhaps a few
  examples would help people like me.

- The instructions first install the log server, then the CI server. The log server is tested
  by uploading a file to Jenkins, which runs on the CI server and is not yet available at that
  point.

- The Jenkins installation fails since a prerequisite can't be found:

   The following packages have unmet dependencies:
    jenkins : Depends: default-jre-headless (>= 2:1.8) but it is not going to be installed or
              java8-runtime-headless but it is not installable

- I was unable to start nodepool-builder with "service nodepool-builder start".
  First, nodepool-builder aborted since it is configured to log to a file under
  /var/log/nodepool/images/, which doesn't exist.
  After fixing this manually, the service command is successful, but no
  nodepool-builder process is running. I didn't find out why and just started
  the daemon manually.

- Attempting an image build fails with a stacktrace containing:

    diskimage_builder.element_dependencies.MissingElementException:
    Element 'openstack-repos' not found

This is how far I got for the moment.

Bernd

On 4/1/2018 2:21 PM, Lenny Berkhovsky wrote:
> Hello Bernd,
> There is also a Third Party CI page[1] that may assist you
>
> [1] https://docs.openstack.org/infra/openstackci/third_party_ci.html
>

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From cboylan at sapwetik.org Wed Apr 4 18:45:42 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 04 Apr 2018 11:45:42 -0700
Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure
In-Reply-To: <4857cd5f-1335-6cf7-fcb2-812ad92f65f1@gmail.com>
References: <002e01d3c4cf$59e21720$0da64560$@gmail.com> <1522087898.2381285.1316647112.11583426@webmail.messagingengine.com> <014701d3c629$68299230$387cb690$@gmail.com> <4857cd5f-1335-6cf7-fcb2-812ad92f65f1@gmail.com>
Message-ID: <1522867542.282544.1326660960.72116BB8@webmail.messagingengine.com>

On Tue, Apr 3, 2018, at 10:45 PM, Bernd Bausch wrote:
> Lenny,
>
> thanks, these instructions are a bit more robust and easier to
> understand than [2].
>
> One detail stands out for me: They make it clear that Ubuntu 14 is
> required. A few Puppet modules, in particular Etherpad used as an
> example in [2], assume Upstart. I don't know if Upstart is available in
> Xenial or recent non-Ubuntu distros, but it's definitely not there by
> default.

Correct, we are in the process of migrating to 16.04 (Xenial) currently which is taking some time in the cases where upstart was used or assumed. Generally we've not removed upstart support when we switch to supporting systemd so 14.04 should also work in cases where we now deploy to 16.04.

>
> I did find a few places that could be improved or may even be incorrect.
> How can I formally submit suggestions and bugs in the OpenStack-Infra
> documentation?
> > Bernd > > On 4/1/2018 2:21 PM, Lenny Berkhovsky wrote: > > Hello Bernd, > > There is also a Third Party CI page[1] that may assist you > > > > [1] https://docs.openstack.org/infra/openstackci/third_party_ci.html From berndbausch at gmail.com Thu Apr 5 00:41:36 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 5 Apr 2018 09:41:36 +0900 Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure In-Reply-To: <1522867542.282544.1326660960.72116BB8@webmail.messagingengine.com> References: <002e01d3c4cf$59e21720$0da64560$@gmail.com> <1522087898.2381285.1316647112.11583426@webmail.messagingengine.com> <014701d3c629$68299230$387cb690$@gmail.com> <4857cd5f-1335-6cf7-fcb2-812ad92f65f1@gmail.com> <1522867542.282544.1326660960.72116BB8@webmail.messagingengine.com> Message-ID: <2252862f-ef8b-e990-4caa-2f01348940f4@gmail.com> Clark, looks like I have my first task :) I will look into fixing the nodepool-builder problems below. Three of them, if I understand this right: * Puppet doesn't create the //var/log/nodepool///images /log directory * The command /service //nodepool-builder//start /seems to//start a nodepool process that immediately aborts/ / * The nodepool element /openstack-repos/ is not found Let me see how far I can get on my own. Thanks much for the offer to tutor me on the IRC; I will watch out for you in my morning. Our time difference is between 13 hours (EDT) and 16 hours (PDT) if you are located in the continental US, i.e. 7pm EDT is 8am next day here in Japan. Bernd. On 4/5/2018 3:45 AM, Clark Boylan wrote: > On Tue, Apr 3, 2018, at 10:45 PM, Bernd Bausch wrote: >> Lenny, >> >> thanks, these instructions are a bit more robust and easier to >> understand than [2]. >> >> One details stands out for me: They make it clear that Ubuntu 14 is >> required. A few Puppet modules, in particular Etherpad used as an >> example in [2], assume Upstart. I don't know if Upstart is available in >> Xenialor recent non-Ubuntu distros, but it's definitely not there by >> default. > Correct, we are in the process of migrating to 16.04 (Xenial) currently which is taking some time in the cases where upstart was used or assumed. Generally we've not removed upstart support when we switch to supporting systemd so 14.04 should also work in cases where we now deploy to 16.04. > >> I did find a few places that could be improved or may even be incorrect. >> How can I formally submit suggestions and bugs in the OpenStack-Infra >> documentation? >> >> Here they are: >> >> - First, install_puppet.sh is downloaded and executed, then >> system-config is cloned. >>   Since system-config contains install_puppet.sh, it would be more >> efficient to clone, then >>   install Puppet. > I think this is a good suggestion. Would you be willing to push a change for that improvement? The documentation lives at https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/doc/source/third_party_ci.rst. Feel free to ping me for reviews if you do push some changes. > >> - Configuration of /etc/puppet/environments/common.yaml is not quite >> trivial. Perhaps a few >>   examples would help people like me. > ++ It may be best if a user of the third party deployment tooling contributes this as it will most likely reflect reality if they do rather than someone trying to come up with example examples. That said something is better than nothing if we have a volunteer to push that. > >> - The instructions first install the log server, then the CI server. 
The >> log server is tested >>   by uploading a file to Jenkins, which runs on the CI server and is not >> yet available at that >>   point. >> >> - The Jenkins installation fails since a prerequisite can't be found: >> >>    The following packages have unmet dependencies: >>     jenkins : Depends: default-jre-headless (>= 2:1.8) but it is not >> going to be installed or >>               java8-runtime-headless but it is not installable >> >> - I was unable to start nodepool-builder with "service nodepool-builder >> start". >>   First, nodepool-builder aborted since it is configured to log to a >> file under >>   /var/log/nodepool/images/, which doesn't exist. >>   After fixing this manually, the service command is successful, but no >>   nodepool-builder process is running. I didn't find out why and just >> started >>   the daemon manually. > The previous two items are likely bugs in the puppet itself that will have to be addressed by updating resource dependencies for Jenkins and creating the images log dir for nodepool-builder. If you would like to work on fixing these I am happy to help. Creating the images log dir should be straightforward if we want to start there. Maybe we can coordinate on IRC as it will be a bit more realtime as far as walking through that. > >> - Attempting an image build fails with a stacktrace containing: >> >>     diskimage_builder.element_dependencies.MissingElementException: >>     Element 'openstack-repos' not found > This element should come from https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/doc/source/third_party_ci.rst which I thought the puppeting would clone to the correct location for you. This one has me a bit stumped. Happy to help debug it more real time if you like, but to start look in your /etc/nodepool/nodepool.yaml file for an elements-dir config item. That directory (something like /etc/nodepool/elements) should contain a directory called openstack-repos if things were set up properly. Working back from why that isn't happening is probably the best way to debug this error. > >> This is how far I got for the moment. >> >> Bernd >> >> On 4/1/2018 2:21 PM, Lenny Berkhovsky wrote: >>> Hello Bernd, >>> There is also a Third Party CI page[1] that may assist you >>> >>> [1] https://docs.openstack.org/infra/openstackci/third_party_ci.html > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From iwienand at redhat.com Thu Apr 5 00:59:36 2018 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 5 Apr 2018 10:59:36 +1000 Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure In-Reply-To: <2252862f-ef8b-e990-4caa-2f01348940f4@gmail.com> References: <002e01d3c4cf$59e21720$0da64560$@gmail.com> <1522087898.2381285.1316647112.11583426@webmail.messagingengine.com> <014701d3c629$68299230$387cb690$@gmail.com> <4857cd5f-1335-6cf7-fcb2-812ad92f65f1@gmail.com> <1522867542.282544.1326660960.72116BB8@webmail.messagingengine.com> <2252862f-ef8b-e990-4caa-2f01348940f4@gmail.com> Message-ID: <760ae3f3-36be-4786-e2cd-bf0e518a1fce@redhat.com> > * Puppet doesn't create the //var/log/nodepool///images /log directory Note that since [1] the builder log output changed; previously it went through python logging into the directory you mention, now it is written into log files directly in /var/log/nodepool/builds (by default) > * The command /service //nodepool-builder//start /seems to//start a > nodepool process that immediately aborts/ You may be seeing the result of a bad logging configuration file. In this case, the daemonise happens correctly (so systemd thinks it worked) but it crashes soon after, but before any useful logging is captured. I have a change out for that in [2] (reviews appreciated :) > Let me see how far I can get on my own. Thanks much for the offer to > tutor me on the IRC; I will watch out for you in my morning. Our > time difference is between 13 hours (EDT) and 16 hours (PDT) if you > are located in the continental US, i.e. 7pm EDT is 8am next day here > in Japan. FWIW there are a couple of us in APAC who are happy to help too. IRC will always be the most immediate way however :) -i [1] https://review.openstack.org/#/c/542386/ [2] https://review.openstack.org/#/c/547889/ From berndbausch at gmail.com Thu Apr 5 13:55:29 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 5 Apr 2018 22:55:29 +0900 Subject: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure In-Reply-To: <760ae3f3-36be-4786-e2cd-bf0e518a1fce@redhat.com> References: <002e01d3c4cf$59e21720$0da64560$@gmail.com> <1522087898.2381285.1316647112.11583426@webmail.messagingengine.com> <014701d3c629$68299230$387cb690$@gmail.com> <4857cd5f-1335-6cf7-fcb2-812ad92f65f1@gmail.com> <1522867542.282544.1326660960.72116BB8@webmail.messagingengine.com> <2252862f-ef8b-e990-4caa-2f01348940f4@gmail.com> <760ae3f3-36be-4786-e2cd-bf0e518a1fce@redhat.com> Message-ID: <35360c60-3706-7ffb-186c-89b6c9878e17@gmail.com> Ian's message made me wonder what nodepool version was installed on my system, and indeed, I found this line in /etc/puppet/environment/common.yaml: nodepool_revision: 0.3.1 common.yaml serves as the hiera for this sample installation. It's setup is described at  https://docs.openstack.org/infra/openstackci/third_party_ci.html#configure-masterless-puppet. When I set the revision to /master/, nodepool-builder aborts with a syntax error because it contains Python 3 syntax (function annotations) but it's run by Python 2.7. It's also installed in /usr/local/lib/python2.7/dist-packages. I currently don't know how to change this. My guess is that the instructions and the 3rd party CI code assume Python 2.7. To be continued. 
Bernd On 4/5/2018 9:59 AM, Ian Wienand wrote: >>    * Puppet doesn't create the //var/log/nodepool///images /log >> directory > > Note that since [1] the builder log output changed; previously it went > through python logging into the directory you mention, now it is > written into log files directly in /var/log/nodepool/builds (by > default) > >>    * The command /service //nodepool-builder//start /seems to//start a >>      nodepool process that immediately aborts/ > > You may be seeing the result of a bad logging configuration file.  In > this case, the daemonise happens correctly (so systemd thinks it > worked) but it crashes soon after, but before any useful logging is > captured. I have a change out for that in [2] (reviews appreciated :) >> Let me see how far I can get on my own. Thanks much for the offer to >> tutor me on the IRC; I will watch out for you in my morning. Our >> time difference is between 13 hours (EDT) and 16 hours (PDT) if you >> are located in the continental US, i.e. 7pm EDT is 8am next day here >> in Japan. > > FWIW there are a couple of us in APAC who are happy to help too.  IRC > will always be the most immediate way however :) > > -i > > [1] https://review.openstack.org/#/c/542386/ > [2] https://review.openstack.org/#/c/547889/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From j.harbott at x-ion.de Thu Apr 5 14:35:27 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Thu, 5 Apr 2018 14:35:27 +0000 Subject: [OpenStack-Infra] Selecting New Priority Effort(s) In-Reply-To: References: <1522787024.2344911.1325411024.3114AC9F@webmail.messagingengine.com> Message-ID: 2018-04-04 2:33 GMT+00:00 David Moreau Simard : > It won't be very exciting but we really need to do one of the > following two things soon: > > 1) Ansiblify control plane [1] > 2) Update our puppet things to puppet 4 (or 5?) > > Puppet 3 has been end of life since Dec 31, 2016. [2] > > The longer we draw this out, the more work it'll be :( > > [1]: https://review.openstack.org/#/c/469983/ > [2]: https://groups.google.com/forum/#!topic/puppet-users/IdutL5FTW7w I agree and would vote for option 1), that would also seem to blend well with upgrading to Xenial. Avoid having to invest much effort in making puppet things work for Xenial, like we just discovered would be needed for askbot. From fungi at yuggoth.org Thu Apr 5 14:57:07 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Apr 2018 14:57:07 +0000 Subject: [OpenStack-Infra] Selecting New Priority Effort(s) In-Reply-To: References: <1522787024.2344911.1325411024.3114AC9F@webmail.messagingengine.com> Message-ID: <20180405145707.sh7foraypy6ioxlk@yuggoth.org> On 2018-04-05 14:35:27 +0000 (+0000), Jens Harbott wrote: > 2018-04-04 2:33 GMT+00:00 David Moreau Simard : > > It won't be very exciting but we really need to do one of the > > following two things soon: > > > > 1) Ansiblify control plane [1] > > 2) Update our puppet things to puppet 4 (or 5?) > > > > Puppet 3 has been end of life since Dec 31, 2016. [2] > > > > The longer we draw this out, the more work it'll be :( > > > > [1]: https://review.openstack.org/#/c/469983/ > > [2]: https://groups.google.com/forum/#!topic/puppet-users/IdutL5FTW7w > > I agree and would vote for option 1), that would also seem to blend > well with upgrading to Xenial. 
Avoid having to invest much effort in > making puppet things work for Xenial, like we just discovered would be > needed for askbot. It's not immediately clear to me how rewriting numerous Puppet modules in Ansible avoids having to invest much effort... or is it the case that a lot of the things we're installing now have corresponding Ansible modules already? Has anyone skimmed through https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules.env and figured out how many of those seem supported by the existing Ansible ecosystem vs how many we'd have to create ourselves? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From colleen at gazlene.net Fri Apr 6 12:47:56 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 06 Apr 2018 14:47:56 +0200 Subject: [OpenStack-Infra] Selecting New Priority Effort(s) In-Reply-To: <20180405145707.sh7foraypy6ioxlk@yuggoth.org> References: <1522787024.2344911.1325411024.3114AC9F@webmail.messagingengine.com> <20180405145707.sh7foraypy6ioxlk@yuggoth.org> Message-ID: <1523018876.4031359.1328789904.2F381299@webmail.messagingengine.com> On Thu, Apr 5, 2018, at 4:57 PM, Jeremy Stanley wrote: > On 2018-04-05 14:35:27 +0000 (+0000), Jens Harbott wrote: > > 2018-04-04 2:33 GMT+00:00 David Moreau Simard : > > > It won't be very exciting but we really need to do one of the > > > following two things soon: > > > > > > 1) Ansiblify control plane [1] > > > 2) Update our puppet things to puppet 4 (or 5?) > > > > > > Puppet 3 has been end of life since Dec 31, 2016. [2] > > > > > > The longer we draw this out, the more work it'll be :( > > > > > > [1]: https://review.openstack.org/#/c/469983/ > > > [2]: https://groups.google.com/forum/#!topic/puppet-users/IdutL5FTW7w > > > > I agree and would vote for option 1), that would also seem to blend > > well with upgrading to Xenial. Avoid having to invest much effort in > > making puppet things work for Xenial, like we just discovered would be > > needed for askbot. > > It's not immediately clear to me how rewriting numerous Puppet > modules in Ansible avoids having to invest much effort... or is it > the case that a lot of the things we're installing now have > corresponding Ansible modules already? Has anyone skimmed through > https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules.env > and figured out how many of those seem supported by the existing > Ansible ecosystem vs how many we'd have to create ourselves? > -- > Jeremy Stanley The puppet modules are already tested with puppet-apply and beaker on Xenial. There should be very little if any effort to ensure they work on Xenial. It is a bit hard for me to imagine that a complete rewrite would be easier. 
Colleen From j.harbott at x-ion.de Fri Apr 6 13:37:10 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Fri, 6 Apr 2018 13:37:10 +0000 Subject: [OpenStack-Infra] Selecting New Priority Effort(s) In-Reply-To: <1523018876.4031359.1328789904.2F381299@webmail.messagingengine.com> References: <1522787024.2344911.1325411024.3114AC9F@webmail.messagingengine.com> <20180405145707.sh7foraypy6ioxlk@yuggoth.org> <1523018876.4031359.1328789904.2F381299@webmail.messagingengine.com> Message-ID: 2018-04-06 12:47 GMT+00:00 Colleen Murphy : > On Thu, Apr 5, 2018, at 4:57 PM, Jeremy Stanley wrote: >> On 2018-04-05 14:35:27 +0000 (+0000), Jens Harbott wrote: >> > 2018-04-04 2:33 GMT+00:00 David Moreau Simard : >> > > It won't be very exciting but we really need to do one of the >> > > following two things soon: >> > > >> > > 1) Ansiblify control plane [1] >> > > 2) Update our puppet things to puppet 4 (or 5?) >> > > >> > > Puppet 3 has been end of life since Dec 31, 2016. [2] >> > > >> > > The longer we draw this out, the more work it'll be :( >> > > >> > > [1]: https://review.openstack.org/#/c/469983/ >> > > [2]: https://groups.google.com/forum/#!topic/puppet-users/IdutL5FTW7w >> > >> > I agree and would vote for option 1), that would also seem to blend >> > well with upgrading to Xenial. Avoid having to invest much effort in >> > making puppet things work for Xenial, like we just discovered would be >> > needed for askbot. >> >> It's not immediately clear to me how rewriting numerous Puppet >> modules in Ansible avoids having to invest much effort... or is it >> the case that a lot of the things we're installing now have >> corresponding Ansible modules already? Has anyone skimmed through >> https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules.env >> and figured out how many of those seem supported by the existing >> Ansible ecosystem vs how many we'd have to create ourselves? >> -- >> Jeremy Stanley > > The puppet modules are already tested with puppet-apply and beaker on Xenial. There should be very little if any effort to ensure they work on Xenial. It is a bit hard for me to imagine that a complete rewrite would be easier. I didn't intend to say that this was easier. My comment was related to the efforts in https://review.openstack.org/558991 , which could be avoided if we decided to deploy askbot on Xenial with Ansible. The amount of work needed to perform the latter task would not change, but we could skip the intermediate step, assuming that we would start implementing 1) now instead of deciding to do it at a later stage. From iwienand at redhat.com Mon Apr 9 11:04:21 2018 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 9 Apr 2018 21:04:21 +1000 Subject: [OpenStack-Infra] Selecting New Priority Effort(s) In-Reply-To: References: <1522787024.2344911.1325411024.3114AC9F@webmail.messagingengine.com> <20180405145707.sh7foraypy6ioxlk@yuggoth.org> <1523018876.4031359.1328789904.2F381299@webmail.messagingengine.com> Message-ID: On 04/06/2018 11:37 PM, Jens Harbott wrote: > I didn't intend to say that this was easier. My comment was related > to the efforts in https://review.openstack.org/558991 , which could > be avoided if we decided to deploy askbot on Xenial with > Ansible. The amount of work needed to perform the latter task would > not change, but we could skip the intermediate step, assuming that > we would start implementing 1) now instead of deciding to do it at a > later stage. 
I disagree with this; having found a myriad of issues it's *still* simpler than re-writing the whole thing IMO.

It doesn't matter, ansible, puppet, chef, bash scripts -- the underlying problem is that we choose support libraries for postgres, solr, celery, askbot, logs etc etc, get it to deploy, then forget about it until the next LTS release 2 years later. Of course the whole world has moved on, but we're pinned to old versions of everything and never tested on new platforms.

What *would* have helped is an rspec test that even just simply applies the manifest on new platforms. We have great infrastructure for these tests; but most of our modules don't actually *run* anything (e.g., here's ethercalc and etherpad-lite issues too [1,2]). These make it so much easier to collaborate; we can all see the result of changes, link to logs, get input on what's going wrong, etc etc.

-i

[1] https://review.openstack.org/527822
[2] https://review.openstack.org/528130

From daragh.bailey at gmail.com Tue Apr 10 09:44:31 2018
From: daragh.bailey at gmail.com (Darragh Bailey)
Date: Tue, 10 Apr 2018 10:44:31 +0100
Subject: [OpenStack-Infra] Selecting New Priority Effort(s)
In-Reply-To:
References: <1522787024.2344911.1325411024.3114AC9F@webmail.messagingengine.com>
Message-ID:

On 4 April 2018 at 03:33, David Moreau Simard wrote:
> It won't be very exciting but we really need to do one of the
> following two things soon:
>
> 1) Ansiblify control plane [1]
> 2) Update our puppet things to puppet 4 (or 5?)
>
> Puppet 3 has been end of life since Dec 31, 2016. [2]
>
> The longer we draw this out, the more work it'll be :(
>
> [1]: https://review.openstack.org/#/c/469983/
> [2]: https://groups.google.com/forum/#!topic/puppet-users/IdutL5FTW7w
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]
>

I would suggest that whether it's decided to switch to ansible for the control plane or update puppet modules, it will be well worth investing thought into performance when running across nodes that contain "different" services to perform "different functions".

Ansible is very very good at running the same task across multiple machines, e.g. configuring homogeneous servers. But control planes have a tendency to have a lot of different services running on subsets, and this has a consequence of resulting in lots of time spent waiting on tasks to complete on some nodes and skip on the rest due to the synchronization of tasks across the entire set.

When working on the precursor to https://github.com/ArdanaCLM (original was used as part of Helion OpenStack by HP(E)) we had a CI job testing the deployment of a small control plane and some services on a set of 6 VMs and the time cost was prohibitive at 1.5hrs ~ 2.5hrs (upgrade testing CI was double these figures). A lot of the time 50% or more of VMs were idle because tasks that involved a few nodes meant nothing else could be done on the others.

There were some thoughts around adding a strategy plugin to ansible that could do a cross between the free-run and synchronized behaviour where you could free run to completion on nodes unless you encountered certain tasks. Other alternatives included nested ansible runs to have free runs done to a point before then performing the tasks that involved cluster style operations in synchronization, or careful crafting of the playbooks to achieve the same.
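(For readers unfamiliar with the behaviour being described: Ansible's stock play strategies already expose part of this trade-off, and a playbook can mix them per play. A minimal illustration follows; the host groups and roles are invented for the example, not taken from any real deployment, and the custom strategy plugin mentioned above would sit somewhere between these two built-in modes.)

~~~~
# "free": each host runs ahead through its tasks independently,
# so fast hosts are not held back by slow ones.
- hosts: webservers
  strategy: free
  roles:
    - common

# "linear" (the default): every task waits for all hosts; "serial"
# limits how many hosts are touched at once, which suits
# cluster-style steps that must happen one node at a time.
- hosts: database
  strategy: linear
  serial: 1
  roles:
    - galera
~~~~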
Never got around to solving these, and some of the problems were caused by us adopting an approach without necessarily having a deep understanding of the tooling.

None of this is to say the same problems will exist here, but when you are managing systems/services that interact, and it's difficult to CI them in isolation at the project level, potentially you'll want some way for developers and CI on changes to exercise a test env.

The cost of developing/testing/integrating with either approach should probably be investigated for both in detail. Before you look at whether it's easy to replace the puppet modules with ansible or update to puppet 4/5, it might be worth focusing on what approaches might be needed to extract the best experience first (stability, ease of writing/maintenance & speed of dev-env bring up come to mind as important).

Past experience with any config management suggests that when you start simple it's easy to incrementally improve on the existing approach, but reversing direction when you hit dead ends is almost impossible ;)

--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cboylan at sapwetik.org Tue Apr 10 23:34:26 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 10 Apr 2018 16:34:26 -0700
Subject: [OpenStack-Infra] Selecting New Priority Effort(s)
In-Reply-To:
References: <1522787024.2344911.1325411024.3114AC9F@webmail.messagingengine.com>
Message-ID: <1523403266.2095086.1333639320.49B76A4F@webmail.messagingengine.com>

On Tue, Apr 3, 2018, at 7:33 PM, David Moreau Simard wrote:
> It won't be very exciting but we really need to do one of the
> following two things soon:
>
> 1) Ansiblify control plane [1]
> 2) Update our puppet things to puppet 4 (or 5?)
>
> Puppet 3 has been end of life since Dec 31, 2016. [2]
>
> The longer we draw this out, the more work it'll be :(
>
> [1]: https://review.openstack.org/#/c/469983/
> [2]: https://groups.google.com/forum/#!topic/puppet-users/IdutL5FTW7w

This is an excellent point, thank you for bringing this up. I think I've largely decided that the modernization of our control plane deployment should be our next priority effort. During the infra meeting there didn't seem to be any disagreement on that front. Now would be a good time to raise concerns if there are more pressing items we should be addressing, but I think I'm personally operating under the assumption this is it unless others speak up. Because David is right, this will only get worse as time goes on and we need to address it.

The process I've proposed for actually making progress on this front is to update the existing specs for ansibilifying the control plane and performing a puppet upgrade, and Monty has volunteered to write a new spec to cover how we might containerize the control plane. Paul has volunteered to update the ansible spec and we need a volunteer to update the puppet spec (and maybe if we don't get a volunteer for that, that itself is important information?). This way we can consider the options available to us side by side before making a major decision like this.

Hopefully we can get that done by next week and we can start to do some serious review of the options here.
Thank you,
Clark

From aj at suse.com Wed Apr 11 18:01:39 2018
From: aj at suse.com (Andreas Jaeger)
Date: Wed, 11 Apr 2018 20:01:39 +0200
Subject: [OpenStack-Infra] devstack-plugin-vmax-core
In-Reply-To: <3D7FD0810DD0E04EAB5786711833788B4EEE4B2B@MX202CL01.corp.emc.com>
References: <3D7FD0810DD0E04EAB5786711833788B4EEE4B2B@MX202CL01.corp.emc.com>
Message-ID:

On 2018-01-09 15:59, Okpoyo, Unana wrote:
> Hi there,
>
> I am trying to create a pike branch for the devstack-plugin-vmax plugin
> but am unable to create branches. Could I be given access to be able to
> do so?

Please send a patch to update the ACL file, see https://docs.openstack.org/infra/manual/creators.html#creation-of-branches for details,

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From cboylan at sapwetik.org Wed Apr 11 19:00:21 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 11 Apr 2018 12:00:21 -0700
Subject: [OpenStack-Infra] Need an account to setup Lenovo Ironic CI
In-Reply-To: <02A201D9587BB14BA9A10136679446E45803C90B@USMAILMBX03>
References: <02A201D9587BB14BA9A10136679446E45803C90B@USMAILMBX03>
Message-ID: <1523473221.248387.1334723592.1B3F4A00@webmail.messagingengine.com>

On Tue, Mar 6, 2018, at 10:20 AM, Rushil Chugh1 wrote:
> Hi,
>
> Lenovo has a driver in OpenStack Ironic since the Queens release. We
> need to start reporting as a 3rd party CI vendor by end of Rocky. This
> email is to request a service account to start reporting as a third
> party CI system. Please let us know if you need anything else from our
> side.

This process should be entirely self service. See https://docs.openstack.org/infra/system-config/third_party.html#creating-a-service-account for details on how to go about creating the account. Happy to help if you have more questions.

Clark

From cboylan at sapwetik.org Wed Apr 11 19:09:32 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 11 Apr 2018 12:09:32 -0700
Subject: [OpenStack-Infra] How to change the owner of a project?
In-Reply-To: <8003D02E-CACE-455E-8B6B-1988E761ED20@opennetworking.org>
References: <4BB46CF5-12F8-465D-B7E5-4380430C8CA9@opennetworking.org> <8003D02E-CACE-455E-8B6B-1988E761ED20@opennetworking.org>
Message-ID: <1523473772.252278.1334724520.70B6AB5D@webmail.messagingengine.com>

On Tue, Mar 13, 2018, at 8:56 PM, Sangho Shin wrote:
>
> Hello,
>
> I would like to know the official process of how to change the owner of
> a project.
> I am a committer of the networking-onos project, and I want to take over
> the project. Of course, the owner (maintainer) of the project agreed to
> that.

Typically the project management groups are self owned; this means that any member of a self owned group can add and remove other members. In this case, because you have the existing owner on board with this change, you should ask them to update the group membership for the core group to include you, which appears to have been done.

The other group in question is the release group: https://review.openstack.org/#/admin/groups/1002,members which didn't have any members. I've gone ahead and added the individual that created the networking-onos project in the first place as this is what we would've done if this was done then. You will want to coordinate with them to add you to the group.
If this isn't practical let us know and I am sure we can work something out. As a final thought you may want to note somewhere that a transition was made just so that people looking for the right individuals to get help with the project know where to find you. Additionally may help if you need to transition to another owner in the future we can trace that back. Clark From jimmy at tipit.net Wed Apr 11 19:19:54 2018 From: jimmy at tipit.net (Jimmy McArthur) Date: Wed, 11 Apr 2018 21:19:54 +0200 Subject: [OpenStack-Infra] Asking for comments on translate-dev.o.o auth table mismatch In-Reply-To: References: <9E59F39C-5320-4FFC-B472-8820C49AE73E@openstack.org> Message-ID: On Jan 10, 2018, at 6:05 PM, Patrick Huang wrote: Not only the password mismatch. The problem is username (among other things, e.g. the identifier url, name and email, are returned from openstackId authentication and can be changed by user then saved to Zanata) has to be unique in Zanata. Since the identifier values come from two different providers (openstackid and openstackid dev), Zanata will consider they are two different users. Therefore for the same person, he would have two users in Zanata with different username. The idea here is to remove the obsolete user already in the database. Now the new propose is to start with a fresh database which I think it's much easier. +1 Regards, Patrick On Thu, Jan 11, 2018 at 8:16 AM, Jimmy McArthur wrote: > Technically those affected users could just update their password on both > the OpenStackID production and dev sites. Then the problem would be solved. > I can’t imagine we are talking about that many people that have changed > their passwords? > > Thanks, > Jimmy McArthur > 512.965.4846 <(512)%20965-4846> > > > On Jan 10, 2018, at 3:48 PM, Alex Eng wrote: > > According to his comment, it would be nice if Zanata manages openid full >> url by separating >> into domain part (e.g., "openstackid.org") and rest part (e.g., "/..."). >> @Patrick, would it be possible to support in Zanata >> as a new feature? > > > As much as I would like to solve issue asap, I don't think this is the > right solution. > It is best to handle the URL changes through a script or the jboss-cli. > > On Thu, Jan 11, 2018 at 2:08 AM, Ian Y. Choi wrote: > >> Hello, >> >> I would like to update this (sorry for my late sharing on this mailing >> list and infra team): >> >> - Jimmy commented on the Etherpad on Dec 13. >> >> According to his comment, it would be nice if Zanata manages openid full >> url by separating >> into domain part (e.g., "openstackid.org") and rest part (e.g., "/..."). >> @Patrick, would it be possible to support in Zanata >> as a new feature? >> >> By the way, I18n team decided to have another approach: freshing database >> used in translate-dev.openstack.org, >> which would address current openstackid issues and I18n PTL proposed >> https://review.openstack.org/#/c/531736/ . >> It would be so nice if the patch gets more attention from system-config >> cores. >> >> >> With many thanks, >> >> /Ian >> >> Patrick Huang wrote on 12/6/2017 11:03 AM: >> >>> I've put my comments in the etherpad. >>> >>> On Wed, Dec 6, 2017 at 11:19 AM, Ian Y. Choi >> > wrote: >>> >>> Hello, >>> >>> Since Zanata upgrade to 4.3.1 on translate-dev.openstack.org >>> is going well [1], >>> I think it is a good time to discuss translate-dev.o.o >>> authentication problems. 
>>> >>> I wrote a brief current problem on authentication issues in >>> translate-dev.o.o and my suggestion on proposed solution >>> : >>> https://etherpad.openstack.org/p/translate-dev-openstackid-issues >>> >>> . >>> >>> Clark looked at this proposal and said that it looked good >>> previously. >>> It would be so nice if infra team, openstackid developers, I18n >>> PTL, and Zanata development team >>> would be in same pages. >>> >>> In my opinion we can discuss more on for example: >>> - openstackid developers: How the sync between openstackid-dev and >>> openstackid databases is accomplished >>> (regarding password mismatch) >>> - Sharing Zanata auth table structure from Zanata development team >>> would be nice. >>> >>> >>> With many thanks, >>> >>> /Ian >>> >>> [1] https://storyboard.openstack.org/#!/story/2001362 >>> >>> >>> >>> >>> >>> -- >>> Patrick Huang >>> Senior Software Engineer >>> Engineering - Internationalisation >>> Red Hat, Asia-Pacific Pty Ltd >>> Level 1, 193 North Quay >>> >>> Brisbane 4000 >>> Office: +61 7 3514 8278 >>> Fax: +61 7 3514 8199 >>> IRC: pahuang >>> github: github.com/huangp >>> Website: www.redhat.com >>> >> >> >> > > > -- > > ALEX ENG > > ASSOCIATE MANAGER > > globalization TOOLING, CUSTOMER PLATFORM > > Red Hat Inc > > 193 North Quay, Brisbane City QLD 4000 > > > alex.eng at redhat.com M: 0423353457 IM: aeng > > > > -- Patrick Huang Senior Software Engineer Engineering - Internationalisation Red Hat, Asia-Pacific Pty Ltd Level 1, 193 North Quay Brisbane 4000 Office: +61 7 3514 8278 Fax: +61 7 3514 8199 IRC: pahuang github: github.com/huangp Website: www.redhat.com _______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Thu Apr 12 13:00:15 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 12 Apr 2018 09:00:15 -0400 Subject: [OpenStack-Infra] Adding ARM64 cloud to infra In-Reply-To: References: <27918520-279d-08d6-bd4f-3753fa226a84@linaro.org> <20180112144939.GA11821@localhost.localdomain> <20180112155419.ead6msjlbbobup5o@yuggoth.org> <20180112180134.zdtkmqffvuddb4n4@yuggoth.org> <34e8f12e-82bd-5a30-650b-630503f4365a@redhat.com> <785fb91c-268b-66f8-2149-a542e57f8228@redhat.com> Message-ID: <20180412130015.GA29965@localhost.localdomain> On Mon, Jan 15, 2018 at 01:11:23PM +0000, Frank Jansen wrote: > Hi Ian, > > do you have any insight into the availability of a physical environment for the ARM64 cloud? > > I’m curious, as there may be a need for downstream testing, which I would assume will want to make use of our existing OSP CI framework. > The hardware is donated by Linaro and the first cloud is currently located in China. As for details of hardware, I recently asked hrw in #openstack-infra and this was his reply: hrw | pabelanger: misc aarch64 servers with 32+GB of ram and some GB/TB of storage. different vendors. That's probably the closest to what I can say hrw | pabelanger: some machines may be under NDA, some never reached mass market, some are mass market available, some are no longer mass market available. As for downstream testing, are you looking for arm64 hardware or hoping to use the Linaro clouds for the testing. 
- Paul From pabelanger at redhat.com Thu Apr 12 13:03:34 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 12 Apr 2018 09:03:34 -0400 Subject: [OpenStack-Infra] Adding ARM64 cloud to infra In-Reply-To: <20180412130015.GA29965@localhost.localdomain> References: <27918520-279d-08d6-bd4f-3753fa226a84@linaro.org> <20180112144939.GA11821@localhost.localdomain> <20180112155419.ead6msjlbbobup5o@yuggoth.org> <20180112180134.zdtkmqffvuddb4n4@yuggoth.org> <34e8f12e-82bd-5a30-650b-630503f4365a@redhat.com> <785fb91c-268b-66f8-2149-a542e57f8228@redhat.com> <20180412130015.GA29965@localhost.localdomain> Message-ID: <20180412130334.GA30549@localhost.localdomain> On Thu, Apr 12, 2018 at 09:00:15AM -0400, Paul Belanger wrote: > On Mon, Jan 15, 2018 at 01:11:23PM +0000, Frank Jansen wrote: > > Hi Ian, > > > > do you have any insight into the availability of a physical environment for the ARM64 cloud? > > > > I’m curious, as there may be a need for downstream testing, which I would assume will want to make use of our existing OSP CI framework. > > > The hardware is donated by Linaro and the first cloud is currently located in > China. As for details of hardware, I recently asked hrw in #openstack-infra and > this was his reply: > > hrw | pabelanger: misc aarch64 servers with 32+GB of ram and some GB/TB of storage. different vendors. That's probably the closest to what I can say > hrw | pabelanger: some machines may be under NDA, some never reached mass market, some are mass market available, some are no longer mass market available. > > As for downstream testing, are you looking for arm64 hardware or hoping to use > the Linaro clouds for the testing. > Also, I just noticed this was from Jan 15th, but only just showed up in my inbox. Sorry for the noise, and will try to look at headers before replying :) Paul From angeiv.zhang at gmail.com Mon Apr 16 12:52:22 2018 From: angeiv.zhang at gmail.com (=?utf-8?B?5byg5bm4?=) Date: Mon, 16 Apr 2018 20:52:22 +0800 Subject: [OpenStack-Infra] pip-10.0.0 will make tox library in openstack-infra/zuul-jobs won't work Message-ID: <5188358F-AB15-4C0E-8A40-03E1CCC3A6FC@gmail.com> Hi all, I have noticed that pip release a new version days ago, and when I test on my own CI system trigered by job openstack-zuul-jobs-linters, there was an error, looks like pip version installed in nodepool image is higher the the code need: File: https://github.com/openstack-infra/zuul-jobs/blob/master/roles/tox/library/tox_install_sibling_packages.py#L114 Logs: 2018-04-16 11:23:07.578344 | TASK [tox : Run tox without tests] 2018-04-16 11:23:07.970229 | ubuntu-xenial | linters create: /home/zuul/src/git.xxx.com/xxx/xxx-zuul-jobs/.tox/linters 2018-04-16 11:23:17.073502 | ubuntu-xenial | linters installdeps: -r/home/zuul/src/git.xxx.com/xxx/xxx-zuul-jobs/test-requirements.txt 2018-04-16 11:24:57.020033 | ubuntu-xenial | linters installed: alabaster==0.7.10,ansible==2.3.3.0,ansible-lint==3.4.21,asn1crypto==0.24.0,Babel==2.5.3,bashate==0.5.1,bcrypt==3.1.4,certifi==2018.1.18,cffi==1.11.5,chardet==3.0.4,cryptography==2.2.2,docutils==0.14,flake8==2.5.5,hacking==0.12.0,idna==2.6,imagesize==1.0.0,Jinja2==2.10,MarkupSafe==1.0,mccabe==0.2.1,packaging==17.1,paramiko==2.4.1,pbr==4.0.2,pep8==1.5.7,pyasn1==0.4.2,pycparser==2.18,pycrypto==2.6.1,pyflakes==0.8.1,Pygments==2.2.0,PyNaCl==1.2.1,pyparsing==2.2.0,pytz==2018.4,PyYAML==3.12,requests==2.18.4,six==1.11.0,snowballstemmer==1.2.1,Sphinx==1.7.2,sphinxcontrib-websupport==1.0.1,urllib3==1.22,zuul-sphinx==0.2.2 2018-04-16 11:24:57.020616 | 
ubuntu-xenial | ___________________________________ summary ____________________________________ 2018-04-16 11:24:57.020680 | ubuntu-xenial | linters: skipped tests 2018-04-16 11:24:57.020716 | ubuntu-xenial | congratulations :) 2018-04-16 11:24:57.240157 | ubuntu-xenial | ok: Runtime: 0:01:49.290302 2018-04-16 11:24:57.301782 | 2018-04-16 11:24:57.302127 | TASK [tox : Install any sibling python packages] 2018-04-16 11:24:58.153887 | ubuntu-xenial | ERROR 2018-04-16 11:24:58.154949 | ubuntu-xenial | { 2018-04-16 11:24:58.155064 | ubuntu-xenial | "failed": true, 2018-04-16 11:24:58.155138 | ubuntu-xenial | "log": "Processing siblings for ustack-zuul-jobs from src/git.xxx.com/xxx/xxx-zuul-jobs\nSiblingxxx-zuul-jobs at src/git.xxx.com/xxx/xxx-zuul-jobs\nSibling zuul-jobs at src/github.com/openstack-infra/zuul-jobs\n'module' object has no attribute 'req'\nTraceback (most recent call last):\n File \"/tmp/ansible_o6uPfQ/ansible_module_tox_install_sibling_packages.py\", line 185, in main\n for package in get_installed_packages(tox_python):\n File \"/tmp/ansible_o6uPfQ/ansible_module_tox_install_sibling_packages.py\", line 114, in get_installed_packages\n return pip.req.req_file.parse_requirements(\nAttributeError: 'module' object has no attribute 'req'\n", 2018-04-16 11:24:58.155201 | ubuntu-xenial | "msg": "'module' object has no attribute 'req'" 2018-04-16 11:24:58.155259 | ubuntu-xenial | } 2018-04-16 11:24:58.194821 | 2018-04-16 11:24:58.195028 | PLAY RECAP 2018-04-16 11:24:58.195148 | ubuntu-xenial | ok: 4 changed: 4 unreachable: 0 failed: 1 I have tested in pip-8.x, pip-9.x and pip-10.x simple by import the pip like this: root at xxx:~# pip --version pip 8.1.1 from /usr/lib/python2.7/dist-packages (python 2.7) root at xxx:~# python Python 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pip >>> pip.req >>> Successfully installed pip-9.0.3 You are using pip version 9.0.3, however version 10.0.0 is available. You should consider upgrading via the 'pip install --upgrade pip' command. root at test:~# python Python 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pip >>> pip.req >>> Successfully installed pip-10.0.0 root at test:~# python Python 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pip >>> pip.req Traceback (most recent call last): File "", line 1, in AttributeError: 'module' object has no attribute 'req' >>> Maybe there are some code won't work on pip-10.x Hope we can fix it asop. Best, Angeiv Zhang -------------- next part -------------- An HTML attachment was scrubbed... 
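For illustration only (this is not the actual fix that landed in zuul-jobs): one way to make get_installed_packages() tolerant of both pip 9 and pip 10 is to stop importing pip internals altogether and parse `pip freeze` output directly. A minimal sketch, assuming `tox_python` is the venv interpreter path the module already receives:

import subprocess

def get_installed_packages(tox_python):
    # Shell out to `pip freeze` instead of importing pip.req /
    # pip._internal, so this works on pip 9 and pip 10 alike.
    frozen = subprocess.check_output(
        [tox_python, '-m', 'pip', 'freeze'], universal_newlines=True)
    packages = []
    for line in frozen.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        if line.startswith('-e '):
            # Editable install, e.g. "-e git+https://...#egg=name"
            packages.append(line.rpartition('#egg=')[2])
        else:
            # Plain "name==version" requirement
            packages.append(line.split('==')[0])
    return packages

This trades pip's requirements parser for a small amount of string handling, which is roughly the direction option 3 in the reply below takes.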
URL: From fungi at yuggoth.org Mon Apr 16 16:08:01 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 16 Apr 2018 16:08:01 +0000 Subject: [OpenStack-Infra] pip-10.0.0 will make tox library in openstack-infra/zuul-jobs won't work In-Reply-To: <5188358F-AB15-4C0E-8A40-03E1CCC3A6FC@gmail.com> References: <5188358F-AB15-4C0E-8A40-03E1CCC3A6FC@gmail.com> Message-ID: <20180416160801.rnjcw53lx2zzrpgs@yuggoth.org> On 2018-04-16 20:52:22 +0800 (+0800), 张幸 wrote: > I have noticed that pip release a new version days ago, and when I > test on my own CI system trigered by job > openstack-zuul-jobs-linters, there was an error, looks like pip > version installed in nodepool image is higher the the code need: > > File: > https://github.com/openstack-infra/zuul-jobs/blob/master/roles/tox/library/tox_install_sibling_packages.py#L114 [...] Yes, we missed that we were importing from pip's internals here, even knowing that pip 10 would intentionally break anything that did. There are a few possibilities: 1. We could temporarily get it from pip._internal.req if we want a fast stop-gap while discussing more thorough solutions. 2. We could implement our own parse_requirements() based on a supported API from pkg_resources (like PBR does) or the packaging library. 3. We can revisit whether we need to perform fancy parsing the `pip freeze` output at all. Clark's https://review.openstack.org/561659 is attempting to do #3. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From garyv at gmoneylove.com Wed Apr 11 19:34:41 2018 From: garyv at gmoneylove.com (Gary Verhulp) Date: Wed, 11 Apr 2018 12:34:41 -0700 Subject: [OpenStack-Infra] shade - reboot instance Message-ID: <3d422c6a-7231-db2e-8ef7-9e62d20460eb@gmoneylove.com> Is there a better way to do this? nova_client = os_client_config.make_client('compute',cloud = 'cloud1-project1') conn.nova_client.servers.reboot(server_id) I've been reading through the methods: https://docs.openstack.org/shade/latest/user/usage.html I only see reboot options for baremetal. thanks, Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Thu Apr 19 23:15:18 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 19 Apr 2018 19:15:18 -0400 Subject: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward Message-ID: <20180419231518.GA28784@localhost.localdomain> Greetings, I'd like to propose we hard freeze changes to bindep-fallback.txt[1] and push projects to start using a local bindep.txt file. This would mean, moving forward with ubuntu-bionic, if a project was still depending on bindep-fallback.txt, their jobs may raise a syntax error. In fact, today ubuntu-bionic does seem to pass properly with bindep-fallback.txt, but perhaps we prime it with a bad package on purpose to force the issue. As clarkb points out, the downside to this it does make it harder for projects to be flipped to ubuntu-bionic. It is possible we could also prime gerrit patches for projects that are missing bindep.txt to help push this effort along. Thoughts? 
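As a reference point for the proposal above, a minimal in-repo bindep.txt can be quite small. The package names below are purely illustrative and would need to be adapted per project; the profile names (test, doc, compile) follow the convention already common across OpenStack repositories:

# Illustrative bindep.txt only; adapt package names to your project.
# Bare entries apply everywhere, bracketed selectors restrict them.
git
gcc [compile]
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
graphviz [doc]
mysql-client [platform:dpkg test]

Running `bindep -b` in the repository (optionally with a profile name such as `test`) should print just the packages still missing on the local system, which makes it easy to sanity-check the file before CI starts relying on it.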
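On the earlier "shade - reboot instance" question in this digest: as noted there, shade's documented calls only cover rebooting baremetal nodes, so dropping to novaclient as shown is a workable approach. Another option, sketched below on the assumption that a reasonably current openstacksdk is available (one providing openstack.connect() and a compute proxy with reboot_server()), avoids building a novaclient by hand; the cloud name comes from the original snippet and the server name is a placeholder:

import openstack

# Reads the 'cloud1-project1' entry from clouds.yaml, as in the original
# os_client_config call.
conn = openstack.connect(cloud='cloud1-project1')

# Look the server up by name or ID, then ask Nova for a soft reboot
# ('HARD' is also accepted).
server = conn.compute.find_server('my-instance-name-or-id')
conn.compute.reboot_server(server, 'SOFT')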
[1] http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/bindep-fallback.txt From aj at suse.com Fri Apr 20 07:07:25 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 20 Apr 2018 09:07:25 +0200 Subject: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward In-Reply-To: <20180419231518.GA28784@localhost.localdomain> References: <20180419231518.GA28784@localhost.localdomain> Message-ID: <986a6814-b2aa-e0cb-4572-bea34dd66b27@suse.com> On 2018-04-20 01:15, Paul Belanger wrote: > Greetings, > > I'd like to propose we hard freeze changes to bindep-fallback.txt[1] and push > projects to start using a local bindep.txt file. > > This would mean, moving forward with ubuntu-bionic, if a project was still > depending on bindep-fallback.txt, their jobs may raise a syntax error. > > In fact, today ubuntu-bionic does seem to pass properly with > bindep-fallback.txt, but perhaps we prime it with a bad package on purpose to > force the issue. As clarkb points out, the downside to this it does make it > harder for projects to be flipped to ubuntu-bionic. It is possible we could > also prime gerrit patches for projects that are missing bindep.txt to help push > this effort along. > > Thoughts? > > [1] http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/bindep-fallback.txt This might break all stable branches as well. Pushing those changes in is a huge effort ;( Is that worth it? Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From pabelanger at redhat.com Fri Apr 20 14:05:01 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Fri, 20 Apr 2018 10:05:01 -0400 Subject: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward In-Reply-To: <986a6814-b2aa-e0cb-4572-bea34dd66b27@suse.com> References: <20180419231518.GA28784@localhost.localdomain> <986a6814-b2aa-e0cb-4572-bea34dd66b27@suse.com> Message-ID: <20180420140501.GA8216@localhost.localdomain> On Fri, Apr 20, 2018 at 09:07:25AM +0200, Andreas Jaeger wrote: > On 2018-04-20 01:15, Paul Belanger wrote: > > Greetings, > > > > I'd like to propose we hard freeze changes to bindep-fallback.txt[1] and push > > projects to start using a local bindep.txt file. > > > > This would mean, moving forward with ubuntu-bionic, if a project was still > > depending on bindep-fallback.txt, their jobs may raise a syntax error. > > > > In fact, today ubuntu-bionic does seem to pass properly with > > bindep-fallback.txt, but perhaps we prime it with a bad package on purpose to > > force the issue. As clarkb points out, the downside to this it does make it > > harder for projects to be flipped to ubuntu-bionic. It is possible we could > > also prime gerrit patches for projects that are missing bindep.txt to help push > > this effort along. > > > > Thoughts? > > > > [1] http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/bindep-fallback.txt > > This might break all stable branches as well. Pushing those changes in > is a huge effort ;( Is that worth it? > I wouldn't expect stable branches to be running bionic, unless I am missing something obvious. > > Andreas > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra From aj at suse.com Fri Apr 20 14:26:16 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 20 Apr 2018 16:26:16 +0200 Subject: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward In-Reply-To: <20180420140501.GA8216@localhost.localdomain> References: <20180419231518.GA28784@localhost.localdomain> <986a6814-b2aa-e0cb-4572-bea34dd66b27@suse.com> <20180420140501.GA8216@localhost.localdomain> Message-ID: On 2018-04-20 16:05, Paul Belanger wrote: > On Fri, Apr 20, 2018 at 09:07:25AM +0200, Andreas Jaeger wrote: >> On 2018-04-20 01:15, Paul Belanger wrote: >>> Greetings, >>> >>> I'd like to propose we hard freeze changes to bindep-fallback.txt[1] and push >>> projects to start using a local bindep.txt file. >>> >>> This would mean, moving forward with ubuntu-bionic, if a project was still >>> depending on bindep-fallback.txt, their jobs may raise a syntax error. >>> >>> In fact, today ubuntu-bionic does seem to pass properly with >>> bindep-fallback.txt, but perhaps we prime it with a bad package on purpose to >>> force the issue. As clarkb points out, the downside to this it does make it >>> harder for projects to be flipped to ubuntu-bionic. It is possible we could >>> also prime gerrit patches for projects that are missing bindep.txt to help push >>> this effort along. >>> >>> Thoughts? >>> >>> [1] http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/bindep-fallback.txt >> >> This might break all stable branches as well. Pushing those changes in >> is a huge effort ;( Is that worth it? >> > I wouldn't expect stable branches to be running bionic, unless I am missing > something obvious. How do you want to change the set up tox jobs, especially python27, sphinx-docs, and python35? Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From fungi at yuggoth.org Fri Apr 20 16:01:47 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 20 Apr 2018 16:01:47 +0000 Subject: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward In-Reply-To: <20180419231518.GA28784@localhost.localdomain> References: <20180419231518.GA28784@localhost.localdomain> Message-ID: <20180420160147.wx7aqg7c34azy7cp@yuggoth.org> On 2018-04-19 19:15:18 -0400 (-0400), Paul Belanger wrote: [...] > today ubuntu-bionic does seem to pass properly with > bindep-fallback.txt, but perhaps we prime it with a bad package on > purpose to force the issue. As clarkb points out, the downside to > this it does make it harder for projects to be flipped to > ubuntu-bionic. [...] My main concern is that this seems sort of at odds with how we discussed simply forcing all PTI jobs from ubuntu-xenial to ubuntu-bionic on master branches rather than giving projects the option to transition on their own timelines (which worked out pretty terribly when we tried being flexible with them on the ubuntu-trusty to ubuntu-xenial transition a couple years ago). 
Adding a forced mass migration to in-repo bindep.txt files at the same moment we also force all the PTI jobs to a new platform will probably result in torches and pitchforks. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Fri Apr 20 16:13:17 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 20 Apr 2018 09:13:17 -0700 Subject: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward In-Reply-To: <20180420160147.wx7aqg7c34azy7cp@yuggoth.org> References: <20180419231518.GA28784@localhost.localdomain> <20180420160147.wx7aqg7c34azy7cp@yuggoth.org> Message-ID: <1524240797.120839.1345100816.30E89737@webmail.messagingengine.com> On Fri, Apr 20, 2018, at 9:01 AM, Jeremy Stanley wrote: > On 2018-04-19 19:15:18 -0400 (-0400), Paul Belanger wrote: > [...] > > today ubuntu-bionic does seem to pass properly with > > bindep-fallback.txt, but perhaps we prime it with a bad package on > > purpose to force the issue. As clarkb points out, the downside to > > this it does make it harder for projects to be flipped to > > ubuntu-bionic. > [...] > > My main concern is that this seems sort of at odds with how we > discussed simply forcing all PTI jobs from ubuntu-xenial to > ubuntu-bionic on master branches rather than giving projects the > option to transition on their own timelines (which worked out pretty > terribly when we tried being flexible with them on the ubuntu-trusty > to ubuntu-xenial transition a couple years ago). Adding a forced > mass migration to in-repo bindep.txt files at the same moment we > also force all the PTI jobs to a new platform will probably result > in torches and pitchforks. Yup, this was my concern as well. I think the value of not being on older platforms outweighs needing to manage a list of packages for longer. We likely just need to keep pushing on projects to add/update bindep.txt in repo instead. We can run a logstash query against job-output.txt looking for output of using the fallback file and nicely remind projects if they show up on that list. Clark From pabelanger at redhat.com Fri Apr 20 16:31:24 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Fri, 20 Apr 2018 12:31:24 -0400 Subject: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward In-Reply-To: <1524240797.120839.1345100816.30E89737@webmail.messagingengine.com> References: <20180419231518.GA28784@localhost.localdomain> <20180420160147.wx7aqg7c34azy7cp@yuggoth.org> <1524240797.120839.1345100816.30E89737@webmail.messagingengine.com> Message-ID: <20180420163124.GA7085@localhost.localdomain> On Fri, Apr 20, 2018 at 09:13:17AM -0700, Clark Boylan wrote: > On Fri, Apr 20, 2018, at 9:01 AM, Jeremy Stanley wrote: > > On 2018-04-19 19:15:18 -0400 (-0400), Paul Belanger wrote: > > [...] > > > today ubuntu-bionic does seem to pass properly with > > > bindep-fallback.txt, but perhaps we prime it with a bad package on > > > purpose to force the issue. As clarkb points out, the downside to > > > this it does make it harder for projects to be flipped to > > > ubuntu-bionic. > > [...] 
> > > > My main concern is that this seems sort of at odds with how we > > discussed simply forcing all PTI jobs from ubuntu-xenial to > > ubuntu-bionic on master branches rather than giving projects the > > option to transition on their own timelines (which worked out pretty > > terribly when we tried being flexible with them on the ubuntu-trusty > > to ubuntu-xenial transition a couple years ago). Adding a forced > > mass migration to in-repo bindep.txt files at the same moment we > > also force all the PTI jobs to a new platform will probably result > > in torches and pitchforks. > > Yup, this was my concern as well. I think the value of not being on older platforms outweighs needing to manage a list of packages for longer. We likely just need to keep pushing on projects to add/update bindep.txt in repo instead. We can run a logstash query against job-output.txt looking for output of using the fallback file and nicely remind projects if they show up on that list. > That is fine, if we want to do the mass migration to bionic first, then start looking at which projects are still using bindep-fallback.txt is fine with me. I just wanted to highlight I think it is time we start pushing a little harder on projects to stop using this logic and start managing bindep.txt themself. -Paul From fungi at yuggoth.org Fri Apr 20 17:34:09 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 20 Apr 2018 17:34:09 +0000 Subject: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward In-Reply-To: <20180420163124.GA7085@localhost.localdomain> References: <20180419231518.GA28784@localhost.localdomain> <20180420160147.wx7aqg7c34azy7cp@yuggoth.org> <1524240797.120839.1345100816.30E89737@webmail.messagingengine.com> <20180420163124.GA7085@localhost.localdomain> Message-ID: <20180420173409.uxapdtenroxbbskd@yuggoth.org> On 2018-04-20 12:31:24 -0400 (-0400), Paul Belanger wrote: [...] > That is fine, if we want to do the mass migration to bionic first, > then start looking at which projects are still using > bindep-fallback.txt is fine with me. > > I just wanted to highlight I think it is time we start pushing a > little harder on projects to stop using this logic and start > managing bindep.txt themself. Yep, this is something I _completely_ agree with. We could even start with a deprecation warning in the fallback path so it starts showing up more clearly in the job logs too. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Fri Apr 20 17:42:48 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 20 Apr 2018 10:42:48 -0700 Subject: [OpenStack-Infra] PTG September 10-14 in Denver Message-ID: <1524246168.158201.1345189728.6F882F1F@webmail.messagingengine.com> Hello everyone, I've been asked if the Infra team plans to attend the next PTG in Denver. My current position is that it would be good to attend as a team as I think it will give us a good opportunity to work on modernizing config management efforts. But before I go ahead and commit to that it would be helpful to get a rough headcount of who intends to go (if it will just be me then likely don't need to have team space). Don't worry if you don't have approval yet or have to sort out other details. Mostly just interested in a "do we intend on being there or not" type of answer. 
More details on the event can be found at http://lists.openstack.org/pipermail/openstack-dev/2018-April/129564.html. Feel free to ask questions if that will help you too. Let me know (doesn't have to be to the list if you aren't comfortable with that) and thanks! Clark From pabelanger at redhat.com Fri Apr 20 17:57:01 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Fri, 20 Apr 2018 13:57:01 -0400 Subject: [OpenStack-Infra] PTG September 10-14 in Denver In-Reply-To: <1524246168.158201.1345189728.6F882F1F@webmail.messagingengine.com> References: <1524246168.158201.1345189728.6F882F1F@webmail.messagingengine.com> Message-ID: <20180420175701.GA9050@localhost.localdomain> On Fri, Apr 20, 2018 at 10:42:48AM -0700, Clark Boylan wrote: > Hello everyone, > > I've been asked if the Infra team plans to attend the next PTG in Denver. My current position is that it would be good to attend as a team as I think it will give us a good opportunity to work on modernizing config management efforts. But before I go ahead and commit to that it would be helpful to get a rough headcount of who intends to go (if it will just be me then likely don't need to have team space). > > Don't worry if you don't have approval yet or have to sort out other details. Mostly just interested in a "do we intend on being there or not" type of answer. > > More details on the event can be found at http://lists.openstack.org/pipermail/openstack-dev/2018-April/129564.html. Feel free to ask questions if that will help you too. > > Let me know (doesn't have to be to the list if you aren't comfortable with that) and thanks! > Intend on being there (pending travel approval) -Paul From fungi at yuggoth.org Fri Apr 20 17:58:21 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 20 Apr 2018 17:58:21 +0000 Subject: [OpenStack-Infra] PTG September 10-14 in Denver In-Reply-To: <1524246168.158201.1345189728.6F882F1F@webmail.messagingengine.com> References: <1524246168.158201.1345189728.6F882F1F@webmail.messagingengine.com> Message-ID: <20180420175821.5pcmlywe2rieeyns@yuggoth.org> On 2018-04-20 10:42:48 -0700 (-0700), Clark Boylan wrote: [...] > Let me know (doesn't have to be to the list if you aren't > comfortable with that) and thanks! You can expect me there. Not only was the venue great, the restaurants within walking distance were just my speed and, as icing on the cake, Denver is one of the few airports to/from which I can get direct flights! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mrhillsman at gmail.com Fri Apr 20 18:02:30 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Fri, 20 Apr 2018 13:02:30 -0500 Subject: [OpenStack-Infra] PTG September 10-14 in Denver In-Reply-To: <20180420175821.5pcmlywe2rieeyns@yuggoth.org> References: <1524246168.158201.1345189728.6F882F1F@webmail.messagingengine.com> <20180420175821.5pcmlywe2rieeyns@yuggoth.org> Message-ID: Planning on being in attendance (travel approval pending) On Fri, Apr 20, 2018 at 12:58 PM, Jeremy Stanley wrote: > On 2018-04-20 10:42:48 -0700 (-0700), Clark Boylan wrote: > [...] > > Let me know (doesn't have to be to the list if you aren't > > comfortable with that) and thanks! > > You can expect me there. Not only was the venue great, the > restaurants within walking distance were just my speed and, as icing > on the cake, Denver is one of the few airports to/from which I can > get direct flights! 
> -- > Jeremy Stanley > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Fri Apr 20 20:32:31 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Fri, 20 Apr 2018 16:32:31 -0400 Subject: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward In-Reply-To: <20180420173409.uxapdtenroxbbskd@yuggoth.org> References: <20180419231518.GA28784@localhost.localdomain> <20180420160147.wx7aqg7c34azy7cp@yuggoth.org> <1524240797.120839.1345100816.30E89737@webmail.messagingengine.com> <20180420163124.GA7085@localhost.localdomain> <20180420173409.uxapdtenroxbbskd@yuggoth.org> Message-ID: <20180420203231.GA10080@localhost.localdomain> On Fri, Apr 20, 2018 at 05:34:09PM +0000, Jeremy Stanley wrote: > On 2018-04-20 12:31:24 -0400 (-0400), Paul Belanger wrote: > [...] > > That is fine, if we want to do the mass migration to bionic first, > > then start looking at which projects are still using > > bindep-fallback.txt is fine with me. > > > > I just wanted to highlight I think it is time we start pushing a > > little harder on projects to stop using this logic and start > > managing bindep.txt themself. > > Yep, this is something I _completely_ agree with. We could even > start with a deprecation warning in the fallback path so it starts > showing up more clearly in the job logs too. > -- > Jeremy Stanley Okay, looking at codesearch.o.o, I've been able to start pushing up changes to remove bindep-fallback.txt. https://review.openstack.org/#/q/topic:bindep.txt This adds bindep.txt to projects that need it, and also removes the legacy install-distro-packages.sh scripts in favor of our bindep role. Paul From joshua.hesketh at gmail.com Sat Apr 21 14:28:03 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Sun, 22 Apr 2018 00:28:03 +1000 Subject: [OpenStack-Infra] PTG September 10-14 in Denver In-Reply-To: References: <1524246168.158201.1345189728.6F882F1F@webmail.messagingengine.com> <20180420175821.5pcmlywe2rieeyns@yuggoth.org> Message-ID: If I get the opportunity I would love to come. Unsure what the likelihood is though. On Sat, Apr 21, 2018 at 4:02 AM, Melvin Hillsman wrote: > Planning on being in attendance (travel approval pending) > > On Fri, Apr 20, 2018 at 12:58 PM, Jeremy Stanley > wrote: > >> On 2018-04-20 10:42:48 -0700 (-0700), Clark Boylan wrote: >> [...] >> > Let me know (doesn't have to be to the list if you aren't >> > comfortable with that) and thanks! >> >> You can expect me there. Not only was the venue great, the >> restaurants within walking distance were just my speed and, as icing >> on the cake, Denver is one of the few airports to/from which I can >> get direct flights! 
>> -- >> Jeremy Stanley >> >> _______________________________________________ >> OpenStack-Infra mailing list >> OpenStack-Infra at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra >> > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Apr 25 16:32:55 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 25 Apr 2018 09:32:55 -0700 Subject: [OpenStack-Infra] Team get together/dinner at Vancouver Summit Message-ID: <1524673975.838621.1350524704.5B4FDF90@webmail.messagingengine.com> Hello everyone, Many of us will be at the Vancouver summit in just under a month and thought we might try to organize a get together/dinner of some sort. I've quickly thrown up https://ethercalc.openstack.org/7vm2xrsk1yju to start collecting availability info. If you are interested please mark down when you can join us. As for venue options I have yet to start looking and missed the last Vancouver summit so if you have any suggestions lets me know. Thanks, Clark From corvus at inaugust.com Mon Apr 30 15:03:32 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 30 Apr 2018 08:03:32 -0700 Subject: [OpenStack-Infra] Zuul memory improvements Message-ID: <87wowo4tyz.fsf@meyer.lemoncheese.net> Hi, We recently made some changes to Zuul which you may want to know about if you interact with a large number of projects. Previously, each change to Zuul which updated Zuul's configuration (e.g., a change to a project's zuul.yaml file) would consume a significant amount of memory. If we had too many of these in the queue at a time, the server would run out of RAM. To mitigate this, we asked folks who regularly submit large numbers of configuration changes to only submit a few at a time. We have updated Zuul so it now caches much more of its configuration, and the cost in memory of an additional configuration change is very small. An added bonus: they are computed more quickly as well. Of course, there's still a cost to every change pushed up to Gerrit -- each one uses test nodes, for instance, so if you need to make a large number of changes, please do consider the impact to the whole system and other users. However, there's no longer a need to severely restrict configuration changes as a class -- consider them as any other change. -Jim
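For readers less familiar with what such a configuration change looks like in practice: the in-repo file Zuul parses is typically .zuul.yaml (or zuul.yaml) at the root of a project, and previously even a small edit to it carried the memory cost described above. A purely illustrative example with placeholder job names follows; the parent job is assumed to be defined in a shared job library such as zuul-jobs:

- job:
    name: myproject-unit-tests
    # 'tox-py35' is assumed to exist in your shared job definitions.
    parent: tox-py35
    description: Run the unit tests for myproject.

- project:
    check:
      jobs:
        - myproject-unit-tests
    gate:
      jobs:
        - myproject-unit-tests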