From johnsomor at gmail.com Wed Aug 1 00:35:59 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 31 Jul 2018 17:35:59 -0700 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question. In-Reply-To: References: Message-ID: Hi Flint, Happy to help. Right now the list of controller endpoints is pushed at boot time and loaded into the amphora via config drive/nova. In the future we plan to be able to update this list via the amphora API, but it has not been developed yet. I am pretty sure centos is getting the config file as our gate job that runs with centos 7 amphora has been passing. It should be in the same /etc/octavia/amphora-agent.conf location as the ubuntu based amphora. Michael On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS wrote: > > Hi Michael, thanks a lot for that explanation, it’s actually how I envisioned the flow. > > I’ll have to produce a diagram for my peers understanding, I maybe can share it with you. > > There is still one point that seems to be a little bit odd to me. > > How the amphora agent know where to find out the healthManagers and worker services? Is that because the worker is sending the agent some catalog informations or because we set that at diskimage-create time? > > If so, I think the Centos based amphora is missing the agent.conf because currently my vms doesn’t have any. > > Once again thanks for your help! > Le mar. 31 juil. 2018 à 18:15, Michael Johnson a écrit : >> >> Hi Flint, >> >> We don't have a logical network diagram at this time (it's still on >> the to-do list), but I can talk you through it. >> >> The Octavia worker, health manager, and housekeeping need to be able >> to reach the amphora (service VM at this point) over the lb-mgmt-net >> on TCP 9443. It knows the amphora IP addresses on the lb-mgmt-net via >> the database and the information we save from the compute driver (I.e. >> what IP was assigned to the instance). >> >> The Octavia API process does not need to be connected to the >> lb-mgmt-net at this time. It only connects the the messaging bus and >> the Octavia database. Provider drivers may have other connectivity >> requirements for the Octavia API. >> >> The amphorae also send UDP packets back to the health manager on port >> 5555. This is the heartbeat packet from the amphora. It contains the >> health and statistics from that amphora. It know it's list of health >> manager endpoints from the configuration file >> "controller_ip_port_list" >> (https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list). >> Each amphora will rotate through that list of endpoints to reduce the >> chance of a network split impacting the heartbeat messages. >> >> This is the only traffic that passed over this network. All of it is >> IP based and can be routed (it does not require L2 connectivity). >> >> Michael >> >> On Tue, Jul 31, 2018 at 2:00 AM Flint WALRUS wrote: >> > >> > Hi Folks, >> > >> > I'm currently deploying the Octavia component into our testing environment which is based on KOLLA. >> > >> > So far I'm quite enjoying it as it is pretty much straight forward (Except for some documentation pitfalls), but I'm now facing a weird and hard to debug situation. >> > >> > I actually have a hard time to understand how Amphora are communicating back and forth with the Control Plan components. >> > >> > From my understanding, as soon as I create a new LB, the Control Plan is spawning an instance using the configured Octavia Flavor and Image type, attach it to the LB-MGMT-NET and to the user provided subnet. >> > >> > What I think I'm misunderstanding is the discussion that follows between the amphora and the different components such as the HealthManager/HouseKeeper, the API and the Worker. >> > >> > How is the amphora agent able to found my control plan? Is the HealthManager or the Octavia Worker initiating the communication to the Amphora on port 9443 and so give the agent the API/Control plan internalURL? >> > >> > If anyone have a diagram of the workflow I would be more than happy ^^ >> > >> > Thanks a lot in advance to anyone willing to help :D >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From gael.therond at gmail.com Wed Aug 1 04:49:07 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Wed, 1 Aug 2018 06:49:07 +0200 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question. In-Reply-To: References: Message-ID: Hi Michael, Oh ok! That config-drive trick was the missing part! Thanks a lot! Is there a release target for the API vs config-drive thing? I’ll have a look at an instance as soon as I’ll be able to log into one of my amphora. By the way, three sub-questions remains: 1°/ - What is the best place to push some documentation improvement ? 2°/ - Is the amphora-agent an auto-generated file at image build time or do I need to create one and give it to the diskimage-builder process? 3°/ - The amphora agent source-code is available at https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent isn’t? Sorry for the questions volume, but I prefer to really understand the underlying mechanisms before we goes live with the solution. G. Le mer. 1 août 2018 à 02:36, Michael Johnson a écrit : > Hi Flint, > > Happy to help. > > Right now the list of controller endpoints is pushed at boot time and > loaded into the amphora via config drive/nova. > In the future we plan to be able to update this list via the amphora > API, but it has not been developed yet. > > I am pretty sure centos is getting the config file as our gate job > that runs with centos 7 amphora has been passing. It should be in the > same /etc/octavia/amphora-agent.conf location as the ubuntu based > amphora. > > Michael > > > > On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS > wrote: > > > > Hi Michael, thanks a lot for that explanation, it’s actually how I > envisioned the flow. > > > > I’ll have to produce a diagram for my peers understanding, I maybe can > share it with you. > > > > There is still one point that seems to be a little bit odd to me. > > > > How the amphora agent know where to find out the healthManagers and > worker services? Is that because the worker is sending the agent some > catalog informations or because we set that at diskimage-create time? > > > > If so, I think the Centos based amphora is missing the agent.conf > because currently my vms doesn’t have any. > > > > Once again thanks for your help! > > Le mar. 31 juil. 2018 à 18:15, Michael Johnson a > écrit : > >> > >> Hi Flint, > >> > >> We don't have a logical network diagram at this time (it's still on > >> the to-do list), but I can talk you through it. > >> > >> The Octavia worker, health manager, and housekeeping need to be able > >> to reach the amphora (service VM at this point) over the lb-mgmt-net > >> on TCP 9443. It knows the amphora IP addresses on the lb-mgmt-net via > >> the database and the information we save from the compute driver (I.e. > >> what IP was assigned to the instance). > >> > >> The Octavia API process does not need to be connected to the > >> lb-mgmt-net at this time. It only connects the the messaging bus and > >> the Octavia database. Provider drivers may have other connectivity > >> requirements for the Octavia API. > >> > >> The amphorae also send UDP packets back to the health manager on port > >> 5555. This is the heartbeat packet from the amphora. It contains the > >> health and statistics from that amphora. It know it's list of health > >> manager endpoints from the configuration file > >> "controller_ip_port_list" > >> ( > https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list > ). > >> Each amphora will rotate through that list of endpoints to reduce the > >> chance of a network split impacting the heartbeat messages. > >> > >> This is the only traffic that passed over this network. All of it is > >> IP based and can be routed (it does not require L2 connectivity). > >> > >> Michael > >> > >> On Tue, Jul 31, 2018 at 2:00 AM Flint WALRUS > wrote: > >> > > >> > Hi Folks, > >> > > >> > I'm currently deploying the Octavia component into our testing > environment which is based on KOLLA. > >> > > >> > So far I'm quite enjoying it as it is pretty much straight forward > (Except for some documentation pitfalls), but I'm now facing a weird and > hard to debug situation. > >> > > >> > I actually have a hard time to understand how Amphora are > communicating back and forth with the Control Plan components. > >> > > >> > From my understanding, as soon as I create a new LB, the Control Plan > is spawning an instance using the configured Octavia Flavor and Image type, > attach it to the LB-MGMT-NET and to the user provided subnet. > >> > > >> > What I think I'm misunderstanding is the discussion that follows > between the amphora and the different components such as the > HealthManager/HouseKeeper, the API and the Worker. > >> > > >> > How is the amphora agent able to found my control plan? Is the > HealthManager or the Octavia Worker initiating the communication to the > Amphora on port 9443 and so give the agent the API/Control plan internalURL? > >> > > >> > If anyone have a diagram of the workflow I would be more than happy ^^ > >> > > >> > Thanks a lot in advance to anyone willing to help :D > >> > > >> > _______________________________________________ > >> > OpenStack-operators mailing list > >> > OpenStack-operators at lists.openstack.org > >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Wed Aug 1 05:56:59 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 31 Jul 2018 22:56:59 -0700 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question. In-Reply-To: References: Message-ID: No worries, happy to share. Answers below. Michael On Tue, Jul 31, 2018 at 9:49 PM Flint WALRUS wrote: > > Hi Michael, > > Oh ok! That config-drive trick was the missing part! Thanks a lot! Is there a release target for the API vs config-drive thing? I’ll have a look at an instance as soon as I’ll be able to log into one of my amphora. No I have no timeline for the amphora-agent config update API. Either way, the initial config will be installed via config drive. The API is intended for runtime updates. > > By the way, three sub-questions remains: > > 1°/ - What is the best place to push some documentation improvement ? Patches are welcome! All of our documentation is included in the source code repository here: https://github.com/openstack/octavia/tree/master/doc/source Our patches follow the normal OpenStack gerrit review process (OpenStack does not use pull requests). > 2°/ - Is the amphora-agent an auto-generated file at image build time or do I need to create one and give it to the diskimage-builder process? The amphora-agent code itself is installed automatically with the diskimage-builder process via the "amphora-agent" element. The amphora-agent configuration file is only installed at amphora boot time by nova using the config drive capability. It is also auto-generated by the controller. > 3°/ - The amphora agent source-code is available at https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent isn’t? Yes, the agent code that runs in the amphora instance is all under https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends in the main octavia repository. > > Sorry for the questions volume, but I prefer to really understand the underlying mechanisms before we goes live with the solution. > > G. > > Le mer. 1 août 2018 à 02:36, Michael Johnson a écrit : >> >> Hi Flint, >> >> Happy to help. >> >> Right now the list of controller endpoints is pushed at boot time and >> loaded into the amphora via config drive/nova. >> In the future we plan to be able to update this list via the amphora >> API, but it has not been developed yet. >> >> I am pretty sure centos is getting the config file as our gate job >> that runs with centos 7 amphora has been passing. It should be in the >> same /etc/octavia/amphora-agent.conf location as the ubuntu based >> amphora. >> >> Michael >> >> >> >> On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS wrote: >> > >> > Hi Michael, thanks a lot for that explanation, it’s actually how I envisioned the flow. >> > >> > I’ll have to produce a diagram for my peers understanding, I maybe can share it with you. >> > >> > There is still one point that seems to be a little bit odd to me. >> > >> > How the amphora agent know where to find out the healthManagers and worker services? Is that because the worker is sending the agent some catalog informations or because we set that at diskimage-create time? >> > >> > If so, I think the Centos based amphora is missing the agent.conf because currently my vms doesn’t have any. >> > >> > Once again thanks for your help! >> > Le mar. 31 juil. 2018 à 18:15, Michael Johnson a écrit : >> >> >> >> Hi Flint, >> >> >> >> We don't have a logical network diagram at this time (it's still on >> >> the to-do list), but I can talk you through it. >> >> >> >> The Octavia worker, health manager, and housekeeping need to be able >> >> to reach the amphora (service VM at this point) over the lb-mgmt-net >> >> on TCP 9443. It knows the amphora IP addresses on the lb-mgmt-net via >> >> the database and the information we save from the compute driver (I.e. >> >> what IP was assigned to the instance). >> >> >> >> The Octavia API process does not need to be connected to the >> >> lb-mgmt-net at this time. It only connects the the messaging bus and >> >> the Octavia database. Provider drivers may have other connectivity >> >> requirements for the Octavia API. >> >> >> >> The amphorae also send UDP packets back to the health manager on port >> >> 5555. This is the heartbeat packet from the amphora. It contains the >> >> health and statistics from that amphora. It know it's list of health >> >> manager endpoints from the configuration file >> >> "controller_ip_port_list" >> >> (https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list). >> >> Each amphora will rotate through that list of endpoints to reduce the >> >> chance of a network split impacting the heartbeat messages. >> >> >> >> This is the only traffic that passed over this network. All of it is >> >> IP based and can be routed (it does not require L2 connectivity). >> >> >> >> Michael >> >> >> >> On Tue, Jul 31, 2018 at 2:00 AM Flint WALRUS wrote: >> >> > >> >> > Hi Folks, >> >> > >> >> > I'm currently deploying the Octavia component into our testing environment which is based on KOLLA. >> >> > >> >> > So far I'm quite enjoying it as it is pretty much straight forward (Except for some documentation pitfalls), but I'm now facing a weird and hard to debug situation. >> >> > >> >> > I actually have a hard time to understand how Amphora are communicating back and forth with the Control Plan components. >> >> > >> >> > From my understanding, as soon as I create a new LB, the Control Plan is spawning an instance using the configured Octavia Flavor and Image type, attach it to the LB-MGMT-NET and to the user provided subnet. >> >> > >> >> > What I think I'm misunderstanding is the discussion that follows between the amphora and the different components such as the HealthManager/HouseKeeper, the API and the Worker. >> >> > >> >> > How is the amphora agent able to found my control plan? Is the HealthManager or the Octavia Worker initiating the communication to the Amphora on port 9443 and so give the agent the API/Control plan internalURL? >> >> > >> >> > If anyone have a diagram of the workflow I would be more than happy ^^ >> >> > >> >> > Thanks a lot in advance to anyone willing to help :D >> >> > >> >> > _______________________________________________ >> >> > OpenStack-operators mailing list >> >> > OpenStack-operators at lists.openstack.org >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From gael.therond at gmail.com Wed Aug 1 06:03:21 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Wed, 1 Aug 2018 08:03:21 +0200 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question. In-Reply-To: References: Message-ID: Ok sweet! Many thanks ! Awesome, I’ll be able to continue our deployment with peace in mind. Regarding the documentation patch, does it need to get a specific format or following some guidelines? I’ll compulse all my annotations and push a patch for those points that would need clarification and a little bit of formatting (layout issue). Thanks for this awesome support Michael! Le mer. 1 août 2018 à 07:57, Michael Johnson a écrit : > No worries, happy to share. Answers below. > > Michael > > > On Tue, Jul 31, 2018 at 9:49 PM Flint WALRUS > wrote: > > > > Hi Michael, > > > > Oh ok! That config-drive trick was the missing part! Thanks a lot! Is > there a release target for the API vs config-drive thing? I’ll have a look > at an instance as soon as I’ll be able to log into one of my amphora. > > No I have no timeline for the amphora-agent config update API. Either > way, the initial config will be installed via config drive. The API is > intended for runtime updates. > > > > By the way, three sub-questions remains: > > > > 1°/ - What is the best place to push some documentation improvement ? > > Patches are welcome! All of our documentation is included in the > source code repository here: > https://github.com/openstack/octavia/tree/master/doc/source > > Our patches follow the normal OpenStack gerrit review process > (OpenStack does not use pull requests). > > > 2°/ - Is the amphora-agent an auto-generated file at image build time or > do I need to create one and give it to the diskimage-builder process? > > The amphora-agent code itself is installed automatically with the > diskimage-builder process via the "amphora-agent" element. > The amphora-agent configuration file is only installed at amphora boot > time by nova using the config drive capability. It is also > auto-generated by the controller. > > > 3°/ - The amphora agent source-code is available at > https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent > isn’t? > > Yes, the agent code that runs in the amphora instance is all under > https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends > in the main octavia repository. > > > > > Sorry for the questions volume, but I prefer to really understand the > underlying mechanisms before we goes live with the solution. > > > > G. > > > > Le mer. 1 août 2018 à 02:36, Michael Johnson a > écrit : > >> > >> Hi Flint, > >> > >> Happy to help. > >> > >> Right now the list of controller endpoints is pushed at boot time and > >> loaded into the amphora via config drive/nova. > >> In the future we plan to be able to update this list via the amphora > >> API, but it has not been developed yet. > >> > >> I am pretty sure centos is getting the config file as our gate job > >> that runs with centos 7 amphora has been passing. It should be in the > >> same /etc/octavia/amphora-agent.conf location as the ubuntu based > >> amphora. > >> > >> Michael > >> > >> > >> > >> On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS > wrote: > >> > > >> > Hi Michael, thanks a lot for that explanation, it’s actually how I > envisioned the flow. > >> > > >> > I’ll have to produce a diagram for my peers understanding, I maybe > can share it with you. > >> > > >> > There is still one point that seems to be a little bit odd to me. > >> > > >> > How the amphora agent know where to find out the healthManagers and > worker services? Is that because the worker is sending the agent some > catalog informations or because we set that at diskimage-create time? > >> > > >> > If so, I think the Centos based amphora is missing the agent.conf > because currently my vms doesn’t have any. > >> > > >> > Once again thanks for your help! > >> > Le mar. 31 juil. 2018 à 18:15, Michael Johnson > a écrit : > >> >> > >> >> Hi Flint, > >> >> > >> >> We don't have a logical network diagram at this time (it's still on > >> >> the to-do list), but I can talk you through it. > >> >> > >> >> The Octavia worker, health manager, and housekeeping need to be able > >> >> to reach the amphora (service VM at this point) over the lb-mgmt-net > >> >> on TCP 9443. It knows the amphora IP addresses on the lb-mgmt-net via > >> >> the database and the information we save from the compute driver > (I.e. > >> >> what IP was assigned to the instance). > >> >> > >> >> The Octavia API process does not need to be connected to the > >> >> lb-mgmt-net at this time. It only connects the the messaging bus and > >> >> the Octavia database. Provider drivers may have other connectivity > >> >> requirements for the Octavia API. > >> >> > >> >> The amphorae also send UDP packets back to the health manager on port > >> >> 5555. This is the heartbeat packet from the amphora. It contains the > >> >> health and statistics from that amphora. It know it's list of health > >> >> manager endpoints from the configuration file > >> >> "controller_ip_port_list" > >> >> ( > https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list > ). > >> >> Each amphora will rotate through that list of endpoints to reduce the > >> >> chance of a network split impacting the heartbeat messages. > >> >> > >> >> This is the only traffic that passed over this network. All of it is > >> >> IP based and can be routed (it does not require L2 connectivity). > >> >> > >> >> Michael > >> >> > >> >> On Tue, Jul 31, 2018 at 2:00 AM Flint WALRUS > wrote: > >> >> > > >> >> > Hi Folks, > >> >> > > >> >> > I'm currently deploying the Octavia component into our testing > environment which is based on KOLLA. > >> >> > > >> >> > So far I'm quite enjoying it as it is pretty much straight forward > (Except for some documentation pitfalls), but I'm now facing a weird and > hard to debug situation. > >> >> > > >> >> > I actually have a hard time to understand how Amphora are > communicating back and forth with the Control Plan components. > >> >> > > >> >> > From my understanding, as soon as I create a new LB, the Control > Plan is spawning an instance using the configured Octavia Flavor and Image > type, attach it to the LB-MGMT-NET and to the user provided subnet. > >> >> > > >> >> > What I think I'm misunderstanding is the discussion that follows > between the amphora and the different components such as the > HealthManager/HouseKeeper, the API and the Worker. > >> >> > > >> >> > How is the amphora agent able to found my control plan? Is the > HealthManager or the Octavia Worker initiating the communication to the > Amphora on port 9443 and so give the agent the API/Control plan internalURL? > >> >> > > >> >> > If anyone have a diagram of the workflow I would be more than > happy ^^ > >> >> > > >> >> > Thanks a lot in advance to anyone willing to help :D > >> >> > > >> >> > _______________________________________________ > >> >> > OpenStack-operators mailing list > >> >> > OpenStack-operators at lists.openstack.org > >> >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.zunker at codecentric.cloud Wed Aug 1 07:58:17 2018 From: christian.zunker at codecentric.cloud (Christian Zunker) Date: Wed, 1 Aug 2018 09:58:17 +0200 Subject: [Openstack-operators] [openstack-ansible] How to manage system upgrades ? In-Reply-To: References: <618796e0e942dc5bd5b0824950565ea1@nuagelibre.org> Message-ID: Hi Matt, you are right. That's at least what I understand under 'live-evacuate'. The compute is still working, all VMs on it are running. Our VMs can all be live-migrated. We wrote a script to live-migrate all VMs to other computes, to do hardware maintenance on the evacuated compute server. I know, this is not going to work for every setup, but could be starting point, for what Gilles described. Matt Riedemann schrieb am Mo., 30. Juli 2018 um 20:20 Uhr: > On 7/27/2018 3:34 AM, Gilles Mocellin wrote: > > - for compute nodes : disable compute node and live-evacuate instances... > > To be clear, what do you mean exactly by "live-evacuate"? I assume you > mean live migration of all instances off each (disabled) compute node > *before* you upgrade it. I wanted to ask because "evacuate" as a server > operation is something else entirely (it's rebuild on another host which > is definitely disruptive to the workload on that server). > > > http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/ > > -- > > Thanks, > > Matt > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Wed Aug 1 08:13:51 2018 From: jean-philippe at evrard.me (=?utf-8?q?jean-philippe=40evrard=2Eme?=) Date: Wed, 01 Aug 2018 10:13:51 +0200 Subject: [Openstack-operators] =?utf-8?q?=5Bopenstack-ansible=5D_Change_in?= =?utf-8?q?_our_IRC_channel?= Message-ID: <725f-5b616b80-11-7f7c7a00@249916277> Hello everyone, Due to a continuously increasing spam [0] on our IRC channels, I have decided to make our channel (#openstack-ansible on freenode) only joinable by Freenode's nickserv registered users. I am sorry for the inconvenience, as it will now be harder to reach us (but it's not that hard to register! [1]). The conversations will be easier to follow though. You can still contact us on the mailing lists too. Regards, Jean-Philippe Evrard (evrardjp) [0]: https://freenode.net/news/spambot-attack [1]: https://freenode.net/kb/answer/registration From skaplons at redhat.com Wed Aug 1 09:15:39 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 1 Aug 2018 11:15:39 +0200 Subject: [Openstack-operators] [openstack-dev] [openstack-ansible] Change in our IRC channel In-Reply-To: <725f-5b616b80-11-7f7c7a00@249916277> References: <725f-5b616b80-11-7f7c7a00@249916277> Message-ID: Maybe such change should be considered to be done globally on all OpenStack channels? > Wiadomość napisana przez jean-philippe at evrard.me w dniu 01.08.2018, o godz. 10:13: > > Hello everyone, > > Due to a continuously increasing spam [0] on our IRC channels, I have decided to make our channel (#openstack-ansible on freenode) only joinable by Freenode's nickserv registered users. > > I am sorry for the inconvenience, as it will now be harder to reach us (but it's not that hard to register! [1]). The conversations will be easier to follow though. > > You can still contact us on the mailing lists too. > > Regards, > Jean-Philippe Evrard (evrardjp) > > [0]: https://freenode.net/news/spambot-attack > [1]: https://freenode.net/kb/answer/registration > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From johnsomor at gmail.com Wed Aug 1 15:56:48 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 1 Aug 2018 08:56:48 -0700 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question. In-Reply-To: References: Message-ID: Hi Flint, Yes, our documentation follows the OpenStack documentation rules. It is in RestructuredText format. The documentation team has some guides here: https://docs.openstack.org/doc-contrib-guide/rst-conv.html However we can also help with that during the review process. Michael On Tue, Jul 31, 2018 at 11:03 PM Flint WALRUS wrote: > > Ok sweet! Many thanks ! Awesome, I’ll be able to continue our deployment with peace in mind. > > Regarding the documentation patch, does it need to get a specific format or following some guidelines? I’ll compulse all my annotations and push a patch for those points that would need clarification and a little bit of formatting (layout issue). > > Thanks for this awesome support Michael! > Le mer. 1 août 2018 à 07:57, Michael Johnson a écrit : >> >> No worries, happy to share. Answers below. >> >> Michael >> >> >> On Tue, Jul 31, 2018 at 9:49 PM Flint WALRUS wrote: >> > >> > Hi Michael, >> > >> > Oh ok! That config-drive trick was the missing part! Thanks a lot! Is there a release target for the API vs config-drive thing? I’ll have a look at an instance as soon as I’ll be able to log into one of my amphora. >> >> No I have no timeline for the amphora-agent config update API. Either >> way, the initial config will be installed via config drive. The API is >> intended for runtime updates. >> > >> > By the way, three sub-questions remains: >> > >> > 1°/ - What is the best place to push some documentation improvement ? >> >> Patches are welcome! All of our documentation is included in the >> source code repository here: >> https://github.com/openstack/octavia/tree/master/doc/source >> >> Our patches follow the normal OpenStack gerrit review process >> (OpenStack does not use pull requests). >> >> > 2°/ - Is the amphora-agent an auto-generated file at image build time or do I need to create one and give it to the diskimage-builder process? >> >> The amphora-agent code itself is installed automatically with the >> diskimage-builder process via the "amphora-agent" element. >> The amphora-agent configuration file is only installed at amphora boot >> time by nova using the config drive capability. It is also >> auto-generated by the controller. >> >> > 3°/ - The amphora agent source-code is available at https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent isn’t? >> >> Yes, the agent code that runs in the amphora instance is all under >> https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends >> in the main octavia repository. >> >> > >> > Sorry for the questions volume, but I prefer to really understand the underlying mechanisms before we goes live with the solution. >> > >> > G. >> > >> > Le mer. 1 août 2018 à 02:36, Michael Johnson a écrit : >> >> >> >> Hi Flint, >> >> >> >> Happy to help. >> >> >> >> Right now the list of controller endpoints is pushed at boot time and >> >> loaded into the amphora via config drive/nova. >> >> In the future we plan to be able to update this list via the amphora >> >> API, but it has not been developed yet. >> >> >> >> I am pretty sure centos is getting the config file as our gate job >> >> that runs with centos 7 amphora has been passing. It should be in the >> >> same /etc/octavia/amphora-agent.conf location as the ubuntu based >> >> amphora. >> >> >> >> Michael >> >> >> >> >> >> >> >> On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS wrote: >> >> > >> >> > Hi Michael, thanks a lot for that explanation, it’s actually how I envisioned the flow. >> >> > >> >> > I’ll have to produce a diagram for my peers understanding, I maybe can share it with you. >> >> > >> >> > There is still one point that seems to be a little bit odd to me. >> >> > >> >> > How the amphora agent know where to find out the healthManagers and worker services? Is that because the worker is sending the agent some catalog informations or because we set that at diskimage-create time? >> >> > >> >> > If so, I think the Centos based amphora is missing the agent.conf because currently my vms doesn’t have any. >> >> > >> >> > Once again thanks for your help! >> >> > Le mar. 31 juil. 2018 à 18:15, Michael Johnson a écrit : >> >> >> >> >> >> Hi Flint, >> >> >> >> >> >> We don't have a logical network diagram at this time (it's still on >> >> >> the to-do list), but I can talk you through it. >> >> >> >> >> >> The Octavia worker, health manager, and housekeeping need to be able >> >> >> to reach the amphora (service VM at this point) over the lb-mgmt-net >> >> >> on TCP 9443. It knows the amphora IP addresses on the lb-mgmt-net via >> >> >> the database and the information we save from the compute driver (I.e. >> >> >> what IP was assigned to the instance). >> >> >> >> >> >> The Octavia API process does not need to be connected to the >> >> >> lb-mgmt-net at this time. It only connects the the messaging bus and >> >> >> the Octavia database. Provider drivers may have other connectivity >> >> >> requirements for the Octavia API. >> >> >> >> >> >> The amphorae also send UDP packets back to the health manager on port >> >> >> 5555. This is the heartbeat packet from the amphora. It contains the >> >> >> health and statistics from that amphora. It know it's list of health >> >> >> manager endpoints from the configuration file >> >> >> "controller_ip_port_list" >> >> >> (https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list). >> >> >> Each amphora will rotate through that list of endpoints to reduce the >> >> >> chance of a network split impacting the heartbeat messages. >> >> >> >> >> >> This is the only traffic that passed over this network. All of it is >> >> >> IP based and can be routed (it does not require L2 connectivity). >> >> >> >> >> >> Michael >> >> >> >> >> >> On Tue, Jul 31, 2018 at 2:00 AM Flint WALRUS wrote: >> >> >> > >> >> >> > Hi Folks, >> >> >> > >> >> >> > I'm currently deploying the Octavia component into our testing environment which is based on KOLLA. >> >> >> > >> >> >> > So far I'm quite enjoying it as it is pretty much straight forward (Except for some documentation pitfalls), but I'm now facing a weird and hard to debug situation. >> >> >> > >> >> >> > I actually have a hard time to understand how Amphora are communicating back and forth with the Control Plan components. >> >> >> > >> >> >> > From my understanding, as soon as I create a new LB, the Control Plan is spawning an instance using the configured Octavia Flavor and Image type, attach it to the LB-MGMT-NET and to the user provided subnet. >> >> >> > >> >> >> > What I think I'm misunderstanding is the discussion that follows between the amphora and the different components such as the HealthManager/HouseKeeper, the API and the Worker. >> >> >> > >> >> >> > How is the amphora agent able to found my control plan? Is the HealthManager or the Octavia Worker initiating the communication to the Amphora on port 9443 and so give the agent the API/Control plan internalURL? >> >> >> > >> >> >> > If anyone have a diagram of the workflow I would be more than happy ^^ >> >> >> > >> >> >> > Thanks a lot in advance to anyone willing to help :D >> >> >> > >> >> >> > _______________________________________________ >> >> >> > OpenStack-operators mailing list >> >> >> > OpenStack-operators at lists.openstack.org >> >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From gael.therond at gmail.com Thu Aug 2 07:31:52 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Thu, 2 Aug 2018 09:31:52 +0200 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question. In-Reply-To: References: Message-ID: Ok ok, I’ll have a look at the guidelines and make some documentation patch proposal once our POC will be working fine. Do I have to propose these patch through storyboard or launchpad ? Oh BTW, one last question, is there an official Octavia Amphora pre-build iso? Thanks for the link. Le mer. 1 août 2018 à 17:57, Michael Johnson a écrit : > Hi Flint, > > Yes, our documentation follows the OpenStack documentation rules. It > is in RestructuredText format. > > The documentation team has some guides here: > https://docs.openstack.org/doc-contrib-guide/rst-conv.html > > However we can also help with that during the review process. > > Michael > > On Tue, Jul 31, 2018 at 11:03 PM Flint WALRUS > wrote: > > > > Ok sweet! Many thanks ! Awesome, I’ll be able to continue our deployment > with peace in mind. > > > > Regarding the documentation patch, does it need to get a specific format > or following some guidelines? I’ll compulse all my annotations and push a > patch for those points that would need clarification and a little bit of > formatting (layout issue). > > > > Thanks for this awesome support Michael! > > Le mer. 1 août 2018 à 07:57, Michael Johnson a > écrit : > >> > >> No worries, happy to share. Answers below. > >> > >> Michael > >> > >> > >> On Tue, Jul 31, 2018 at 9:49 PM Flint WALRUS > wrote: > >> > > >> > Hi Michael, > >> > > >> > Oh ok! That config-drive trick was the missing part! Thanks a lot! Is > there a release target for the API vs config-drive thing? I’ll have a look > at an instance as soon as I’ll be able to log into one of my amphora. > >> > >> No I have no timeline for the amphora-agent config update API. Either > >> way, the initial config will be installed via config drive. The API is > >> intended for runtime updates. > >> > > >> > By the way, three sub-questions remains: > >> > > >> > 1°/ - What is the best place to push some documentation improvement ? > >> > >> Patches are welcome! All of our documentation is included in the > >> source code repository here: > >> https://github.com/openstack/octavia/tree/master/doc/source > >> > >> Our patches follow the normal OpenStack gerrit review process > >> (OpenStack does not use pull requests). > >> > >> > 2°/ - Is the amphora-agent an auto-generated file at image build time > or do I need to create one and give it to the diskimage-builder process? > >> > >> The amphora-agent code itself is installed automatically with the > >> diskimage-builder process via the "amphora-agent" element. > >> The amphora-agent configuration file is only installed at amphora boot > >> time by nova using the config drive capability. It is also > >> auto-generated by the controller. > >> > >> > 3°/ - The amphora agent source-code is available at > https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent > isn’t? > >> > >> Yes, the agent code that runs in the amphora instance is all under > >> > https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends > >> in the main octavia repository. > >> > >> > > >> > Sorry for the questions volume, but I prefer to really understand the > underlying mechanisms before we goes live with the solution. > >> > > >> > G. > >> > > >> > Le mer. 1 août 2018 à 02:36, Michael Johnson a > écrit : > >> >> > >> >> Hi Flint, > >> >> > >> >> Happy to help. > >> >> > >> >> Right now the list of controller endpoints is pushed at boot time and > >> >> loaded into the amphora via config drive/nova. > >> >> In the future we plan to be able to update this list via the amphora > >> >> API, but it has not been developed yet. > >> >> > >> >> I am pretty sure centos is getting the config file as our gate job > >> >> that runs with centos 7 amphora has been passing. It should be in the > >> >> same /etc/octavia/amphora-agent.conf location as the ubuntu based > >> >> amphora. > >> >> > >> >> Michael > >> >> > >> >> > >> >> > >> >> On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS < > gael.therond at gmail.com> wrote: > >> >> > > >> >> > Hi Michael, thanks a lot for that explanation, it’s actually how I > envisioned the flow. > >> >> > > >> >> > I’ll have to produce a diagram for my peers understanding, I maybe > can share it with you. > >> >> > > >> >> > There is still one point that seems to be a little bit odd to me. > >> >> > > >> >> > How the amphora agent know where to find out the healthManagers > and worker services? Is that because the worker is sending the agent some > catalog informations or because we set that at diskimage-create time? > >> >> > > >> >> > If so, I think the Centos based amphora is missing the agent.conf > because currently my vms doesn’t have any. > >> >> > > >> >> > Once again thanks for your help! > >> >> > Le mar. 31 juil. 2018 à 18:15, Michael Johnson < > johnsomor at gmail.com> a écrit : > >> >> >> > >> >> >> Hi Flint, > >> >> >> > >> >> >> We don't have a logical network diagram at this time (it's still > on > >> >> >> the to-do list), but I can talk you through it. > >> >> >> > >> >> >> The Octavia worker, health manager, and housekeeping need to be > able > >> >> >> to reach the amphora (service VM at this point) over the > lb-mgmt-net > >> >> >> on TCP 9443. It knows the amphora IP addresses on the lb-mgmt-net > via > >> >> >> the database and the information we save from the compute driver > (I.e. > >> >> >> what IP was assigned to the instance). > >> >> >> > >> >> >> The Octavia API process does not need to be connected to the > >> >> >> lb-mgmt-net at this time. It only connects the the messaging bus > and > >> >> >> the Octavia database. Provider drivers may have other connectivity > >> >> >> requirements for the Octavia API. > >> >> >> > >> >> >> The amphorae also send UDP packets back to the health manager on > port > >> >> >> 5555. This is the heartbeat packet from the amphora. It contains > the > >> >> >> health and statistics from that amphora. It know it's list of > health > >> >> >> manager endpoints from the configuration file > >> >> >> "controller_ip_port_list" > >> >> >> ( > https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list > ). > >> >> >> Each amphora will rotate through that list of endpoints to reduce > the > >> >> >> chance of a network split impacting the heartbeat messages. > >> >> >> > >> >> >> This is the only traffic that passed over this network. All of it > is > >> >> >> IP based and can be routed (it does not require L2 connectivity). > >> >> >> > >> >> >> Michael > >> >> >> > >> >> >> On Tue, Jul 31, 2018 at 2:00 AM Flint WALRUS < > gael.therond at gmail.com> wrote: > >> >> >> > > >> >> >> > Hi Folks, > >> >> >> > > >> >> >> > I'm currently deploying the Octavia component into our testing > environment which is based on KOLLA. > >> >> >> > > >> >> >> > So far I'm quite enjoying it as it is pretty much straight > forward (Except for some documentation pitfalls), but I'm now facing a > weird and hard to debug situation. > >> >> >> > > >> >> >> > I actually have a hard time to understand how Amphora are > communicating back and forth with the Control Plan components. > >> >> >> > > >> >> >> > From my understanding, as soon as I create a new LB, the > Control Plan is spawning an instance using the configured Octavia Flavor > and Image type, attach it to the LB-MGMT-NET and to the user provided > subnet. > >> >> >> > > >> >> >> > What I think I'm misunderstanding is the discussion that > follows between the amphora and the different components such as the > HealthManager/HouseKeeper, the API and the Worker. > >> >> >> > > >> >> >> > How is the amphora agent able to found my control plan? Is the > HealthManager or the Octavia Worker initiating the communication to the > Amphora on port 9443 and so give the agent the API/Control plan internalURL? > >> >> >> > > >> >> >> > If anyone have a diagram of the workflow I would be more than > happy ^^ > >> >> >> > > >> >> >> > Thanks a lot in advance to anyone willing to help :D > >> >> >> > > >> >> >> > _______________________________________________ > >> >> >> > OpenStack-operators mailing list > >> >> >> > OpenStack-operators at lists.openstack.org > >> >> >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas at lrasc.fr Thu Aug 2 09:23:16 2018 From: nicolas at lrasc.fr (nicolas at lrasc.fr) Date: Thu, 02 Aug 2018 11:23:16 +0200 Subject: [Openstack-operators] Openstack Ansible ODL+OvS+SFC Message-ID: <3f7a0192608a34c033bfdf3261a65e41@lrasc.fr> Hi Openstack community ! I have a question regarding Openstack Ansible (OSA) and one deployement scenario "Scenario - OpenDaylight and Open vSwitch" (link below). https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-opendaylight.html This is a lab test and I take inspiration from "Test environment" example : https://docs.openstack.org/openstack-ansible/queens/user/test/example.html First, I have already tried and achieve to use the "OSA Scenario - Using Open vSwitch" (link below), which was necessary to understand before trying the ODL + OvS scenario. https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-openvswitch.html This means I was able to deploy Openstack with OvS as network driver and I was able to instatiates VMs on tenant VxLAN network, test networks between VMs, use floating IPs, etc.. Now I want to try the "Scenario - OpenDaylight and Open vSwitch" scenario because I want to deploy my Openstack environnement with the "networking-sfc" driver activated. I understand that I must modify the config file "/etc/openstack_deploy/user_variables.yml" like this : ``` # /etc/openstack_deploy/user_variables.yml ### Ensure the openvswitch kernel module is loaded openstack_host_specific_kernel_modules: - name: "openvswitch" pattern: "CONFIG_OPENVSWITCH" group: "network_hosts" ### Use OpenDaylight SDN Controller neutron_plugin_type: "ml2.opendaylight" odl_ip: "{{ hostvars[groups['opendaylight'][0]]['ansible_default_ipv4']['address'] }}" neutron_opendaylight_conf_ini_overrides: ml2_odl: url: "http://{{ odl_ip }}:8180/controller/nb/v2/neutron" username: password: neutron_plugin_base: - router - metering - networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin - networking_sfc.services.sfc.plugin.SfcPlugin ``` But there are no information about the "/etc/openstack_deploy/openstack_user_config.yml" config file. I am not sure if I understand where ansible get the value of "{{ odl_ip }}" and "{{ hostvars[group.... In the file "/opt/openstack-ansible/tests/test_inventory.py", I find out that there is an entry for "opendaylight" in the class TestAnsibleInventoryFormatConstraints(unittest.TestCase). I assumed I must modify "/etc/openstack_deploy/openstack_user_config.yml" like this : ``` # /etc/openstack_deploy/openstack_user_config.yml [...] # horizon dashboard_hosts: infra41: ip: infra41.dom4.net # neutron server, agents (L3, etc) network_hosts: network41: ip: network41.dom4.net opendaylight: network41: ip: network41.dom4.net [...] ``` I have an infra node (with most of Openstack core services) and a network node (dedicated to neutron). On my first try, I wanted to install ODL on the network node (because I thought it will be deployed in a LXC container). But I can dedicate an host to ODL if needed. Could someone gives me some hints on this ? My goal is to deploy my Openstack environnement with the "networking-sfc" driver, and using OSA. Maybe there are other method like "kolla-ansible", but I found that OSA has a more dense documentation. Thanks you for your time. -- Nicolas From johnsomor at gmail.com Thu Aug 2 14:04:26 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 2 Aug 2018 07:04:26 -0700 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question. In-Reply-To: References: Message-ID: Hi there. We track our bugs/RFE in storyboard, but patches go into the OpenStack gerrit. There is a new contributor guide here if you have not contributed to OpenStack before: https://docs.openstack.org/contributors/ As for images, no we don't as an OpenStack group. We have nightly builds here: http://tarballs.openstack.org/octavia/test-images/ but they are not configured for production use and are not always stable. If you are using RDO or RedHat OpenStack Platform (OSP) those projects do provide production images. Michael On Thu, Aug 2, 2018 at 12:32 AM Flint WALRUS wrote: > > Ok ok, I’ll have a look at the guidelines and make some documentation patch proposal once our POC will be working fine. > > Do I have to propose these patch through storyboard or launchpad ? > > Oh BTW, one last question, is there an official Octavia Amphora pre-build iso? > > Thanks for the link. > Le mer. 1 août 2018 à 17:57, Michael Johnson a écrit : >> >> Hi Flint, >> >> Yes, our documentation follows the OpenStack documentation rules. It >> is in RestructuredText format. >> >> The documentation team has some guides here: >> https://docs.openstack.org/doc-contrib-guide/rst-conv.html >> >> However we can also help with that during the review process. >> >> Michael >> >> On Tue, Jul 31, 2018 at 11:03 PM Flint WALRUS wrote: >> > >> > Ok sweet! Many thanks ! Awesome, I’ll be able to continue our deployment with peace in mind. >> > >> > Regarding the documentation patch, does it need to get a specific format or following some guidelines? I’ll compulse all my annotations and push a patch for those points that would need clarification and a little bit of formatting (layout issue). >> > >> > Thanks for this awesome support Michael! >> > Le mer. 1 août 2018 à 07:57, Michael Johnson a écrit : >> >> >> >> No worries, happy to share. Answers below. >> >> >> >> Michael >> >> >> >> >> >> On Tue, Jul 31, 2018 at 9:49 PM Flint WALRUS wrote: >> >> > >> >> > Hi Michael, >> >> > >> >> > Oh ok! That config-drive trick was the missing part! Thanks a lot! Is there a release target for the API vs config-drive thing? I’ll have a look at an instance as soon as I’ll be able to log into one of my amphora. >> >> >> >> No I have no timeline for the amphora-agent config update API. Either >> >> way, the initial config will be installed via config drive. The API is >> >> intended for runtime updates. >> >> > >> >> > By the way, three sub-questions remains: >> >> > >> >> > 1°/ - What is the best place to push some documentation improvement ? >> >> >> >> Patches are welcome! All of our documentation is included in the >> >> source code repository here: >> >> https://github.com/openstack/octavia/tree/master/doc/source >> >> >> >> Our patches follow the normal OpenStack gerrit review process >> >> (OpenStack does not use pull requests). >> >> >> >> > 2°/ - Is the amphora-agent an auto-generated file at image build time or do I need to create one and give it to the diskimage-builder process? >> >> >> >> The amphora-agent code itself is installed automatically with the >> >> diskimage-builder process via the "amphora-agent" element. >> >> The amphora-agent configuration file is only installed at amphora boot >> >> time by nova using the config drive capability. It is also >> >> auto-generated by the controller. >> >> >> >> > 3°/ - The amphora agent source-code is available at https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent isn’t? >> >> >> >> Yes, the agent code that runs in the amphora instance is all under >> >> https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends >> >> in the main octavia repository. >> >> >> >> > >> >> > Sorry for the questions volume, but I prefer to really understand the underlying mechanisms before we goes live with the solution. >> >> > >> >> > G. >> >> > >> >> > Le mer. 1 août 2018 à 02:36, Michael Johnson a écrit : >> >> >> >> >> >> Hi Flint, >> >> >> >> >> >> Happy to help. >> >> >> >> >> >> Right now the list of controller endpoints is pushed at boot time and >> >> >> loaded into the amphora via config drive/nova. >> >> >> In the future we plan to be able to update this list via the amphora >> >> >> API, but it has not been developed yet. >> >> >> >> >> >> I am pretty sure centos is getting the config file as our gate job >> >> >> that runs with centos 7 amphora has been passing. It should be in the >> >> >> same /etc/octavia/amphora-agent.conf location as the ubuntu based >> >> >> amphora. >> >> >> >> >> >> Michael >> >> >> >> >> >> >> >> >> >> >> >> On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS wrote: >> >> >> > >> >> >> > Hi Michael, thanks a lot for that explanation, it’s actually how I envisioned the flow. >> >> >> > >> >> >> > I’ll have to produce a diagram for my peers understanding, I maybe can share it with you. >> >> >> > >> >> >> > There is still one point that seems to be a little bit odd to me. >> >> >> > >> >> >> > How the amphora agent know where to find out the healthManagers and worker services? Is that because the worker is sending the agent some catalog informations or because we set that at diskimage-create time? >> >> >> > >> >> >> > If so, I think the Centos based amphora is missing the agent.conf because currently my vms doesn’t have any. >> >> >> > >> >> >> > Once again thanks for your help! >> >> >> > Le mar. 31 juil. 2018 à 18:15, Michael Johnson a écrit : >> >> >> >> >> >> >> >> Hi Flint, >> >> >> >> >> >> >> >> We don't have a logical network diagram at this time (it's still on >> >> >> >> the to-do list), but I can talk you through it. >> >> >> >> >> >> >> >> The Octavia worker, health manager, and housekeeping need to be able >> >> >> >> to reach the amphora (service VM at this point) over the lb-mgmt-net >> >> >> >> on TCP 9443. It knows the amphora IP addresses on the lb-mgmt-net via >> >> >> >> the database and the information we save from the compute driver (I.e. >> >> >> >> what IP was assigned to the instance). >> >> >> >> >> >> >> >> The Octavia API process does not need to be connected to the >> >> >> >> lb-mgmt-net at this time. It only connects the the messaging bus and >> >> >> >> the Octavia database. Provider drivers may have other connectivity >> >> >> >> requirements for the Octavia API. >> >> >> >> >> >> >> >> The amphorae also send UDP packets back to the health manager on port >> >> >> >> 5555. This is the heartbeat packet from the amphora. It contains the >> >> >> >> health and statistics from that amphora. It know it's list of health >> >> >> >> manager endpoints from the configuration file >> >> >> >> "controller_ip_port_list" >> >> >> >> (https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list). >> >> >> >> Each amphora will rotate through that list of endpoints to reduce the >> >> >> >> chance of a network split impacting the heartbeat messages. >> >> >> >> >> >> >> >> This is the only traffic that passed over this network. All of it is >> >> >> >> IP based and can be routed (it does not require L2 connectivity). >> >> >> >> >> >> >> >> Michael >> >> >> >> >> >> >> >> On Tue, Jul 31, 2018 at 2:00 AM Flint WALRUS wrote: >> >> >> >> > >> >> >> >> > Hi Folks, >> >> >> >> > >> >> >> >> > I'm currently deploying the Octavia component into our testing environment which is based on KOLLA. >> >> >> >> > >> >> >> >> > So far I'm quite enjoying it as it is pretty much straight forward (Except for some documentation pitfalls), but I'm now facing a weird and hard to debug situation. >> >> >> >> > >> >> >> >> > I actually have a hard time to understand how Amphora are communicating back and forth with the Control Plan components. >> >> >> >> > >> >> >> >> > From my understanding, as soon as I create a new LB, the Control Plan is spawning an instance using the configured Octavia Flavor and Image type, attach it to the LB-MGMT-NET and to the user provided subnet. >> >> >> >> > >> >> >> >> > What I think I'm misunderstanding is the discussion that follows between the amphora and the different components such as the HealthManager/HouseKeeper, the API and the Worker. >> >> >> >> > >> >> >> >> > How is the amphora agent able to found my control plan? Is the HealthManager or the Octavia Worker initiating the communication to the Amphora on port 9443 and so give the agent the API/Control plan internalURL? >> >> >> >> > >> >> >> >> > If anyone have a diagram of the workflow I would be more than happy ^^ >> >> >> >> > >> >> >> >> > Thanks a lot in advance to anyone willing to help :D >> >> >> >> > >> >> >> >> > _______________________________________________ >> >> >> >> > OpenStack-operators mailing list >> >> >> >> > OpenStack-operators at lists.openstack.org >> >> >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From jimmy at openstack.org Thu Aug 2 22:50:55 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 02 Aug 2018 17:50:55 -0500 Subject: [Openstack-operators] The UC at Stein PTG + Upcoming Elections Message-ID: <5B638ACF.4090902@openstack.org> Hi All - Just a quick note to let you know that the UC will be meeting at the PTG. Wednesday, Septemeber 12th looks like it will be the likely date. If you're interested in chiming in on User Committee matters or planning to run for the PTG, this would be a very good meeting to attend. Here's a look at the current proposed agenda [1]. If you have other items or want to weigh in on what's there, please do! Also, while I'm here, don't forget to nominate yourself or someone else for the User Committee. Elections will be next week. Here's a very rough primer on what it takes to be a UC Member [2]. More info can be found here [3]. Thanks and let me know if you have any questions. Cheers, Jimmy [1] https://etherpad.openstack.org/p/uc-stein-ptg [2] https://etherpad.openstack.org/p/UC-Election-Qualifications [3] https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee From gael.therond at gmail.com Fri Aug 3 09:43:59 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Fri, 3 Aug 2018 11:43:59 +0200 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question. In-Reply-To: References: Message-ID: Ok perfect, I’ll have a look at the whole process and try to cleanup all my note and make a clear documentation of them. For the image I just choose to goes the CICD way with DIB and successfully get a centos working image. I’m now facing a SSL handshake issue, but I’ll try to fix it as I suspect my certificates to be incorrect and then create a new discussion feed if I don’t found out what’s going on. Thanks a lot for your help and kuddos to the Octavia team that build a rock solid solution and provide an awesome work. I especially love the healthManager/Housekeeper duo as I wished for nova and other OS Services to get some (Tempest I’m looking at you) in order to make my life easier by properly managing resources waist. Le jeu. 2 août 2018 à 16:04, Michael Johnson a écrit : > Hi there. > > We track our bugs/RFE in storyboard, but patches go into the OpenStack > gerrit. > There is a new contributor guide here if you have not contributed to > OpenStack before: https://docs.openstack.org/contributors/ > > As for images, no we don't as an OpenStack group. We have nightly > builds here: http://tarballs.openstack.org/octavia/test-images/ but > they are not configured for production use and are not always stable. > > If you are using RDO or RedHat OpenStack Platform (OSP) those projects > do provide production images. > > Michael > > On Thu, Aug 2, 2018 at 12:32 AM Flint WALRUS > wrote: > > > > Ok ok, I’ll have a look at the guidelines and make some documentation > patch proposal once our POC will be working fine. > > > > Do I have to propose these patch through storyboard or launchpad ? > > > > Oh BTW, one last question, is there an official Octavia Amphora > pre-build iso? > > > > Thanks for the link. > > Le mer. 1 août 2018 à 17:57, Michael Johnson a > écrit : > >> > >> Hi Flint, > >> > >> Yes, our documentation follows the OpenStack documentation rules. It > >> is in RestructuredText format. > >> > >> The documentation team has some guides here: > >> https://docs.openstack.org/doc-contrib-guide/rst-conv.html > >> > >> However we can also help with that during the review process. > >> > >> Michael > >> > >> On Tue, Jul 31, 2018 at 11:03 PM Flint WALRUS > wrote: > >> > > >> > Ok sweet! Many thanks ! Awesome, I’ll be able to continue our > deployment with peace in mind. > >> > > >> > Regarding the documentation patch, does it need to get a specific > format or following some guidelines? I’ll compulse all my annotations and > push a patch for those points that would need clarification and a little > bit of formatting (layout issue). > >> > > >> > Thanks for this awesome support Michael! > >> > Le mer. 1 août 2018 à 07:57, Michael Johnson a > écrit : > >> >> > >> >> No worries, happy to share. Answers below. > >> >> > >> >> Michael > >> >> > >> >> > >> >> On Tue, Jul 31, 2018 at 9:49 PM Flint WALRUS > wrote: > >> >> > > >> >> > Hi Michael, > >> >> > > >> >> > Oh ok! That config-drive trick was the missing part! Thanks a lot! > Is there a release target for the API vs config-drive thing? I’ll have a > look at an instance as soon as I’ll be able to log into one of my amphora. > >> >> > >> >> No I have no timeline for the amphora-agent config update API. > Either > >> >> way, the initial config will be installed via config drive. The API > is > >> >> intended for runtime updates. > >> >> > > >> >> > By the way, three sub-questions remains: > >> >> > > >> >> > 1°/ - What is the best place to push some documentation > improvement ? > >> >> > >> >> Patches are welcome! All of our documentation is included in the > >> >> source code repository here: > >> >> https://github.com/openstack/octavia/tree/master/doc/source > >> >> > >> >> Our patches follow the normal OpenStack gerrit review process > >> >> (OpenStack does not use pull requests). > >> >> > >> >> > 2°/ - Is the amphora-agent an auto-generated file at image build > time or do I need to create one and give it to the diskimage-builder > process? > >> >> > >> >> The amphora-agent code itself is installed automatically with the > >> >> diskimage-builder process via the "amphora-agent" element. > >> >> The amphora-agent configuration file is only installed at amphora > boot > >> >> time by nova using the config drive capability. It is also > >> >> auto-generated by the controller. > >> >> > >> >> > 3°/ - The amphora agent source-code is available at > https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent > isn’t? > >> >> > >> >> Yes, the agent code that runs in the amphora instance is all under > >> >> > https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends > >> >> in the main octavia repository. > >> >> > >> >> > > >> >> > Sorry for the questions volume, but I prefer to really understand > the underlying mechanisms before we goes live with the solution. > >> >> > > >> >> > G. > >> >> > > >> >> > Le mer. 1 août 2018 à 02:36, Michael Johnson > a écrit : > >> >> >> > >> >> >> Hi Flint, > >> >> >> > >> >> >> Happy to help. > >> >> >> > >> >> >> Right now the list of controller endpoints is pushed at boot time > and > >> >> >> loaded into the amphora via config drive/nova. > >> >> >> In the future we plan to be able to update this list via the > amphora > >> >> >> API, but it has not been developed yet. > >> >> >> > >> >> >> I am pretty sure centos is getting the config file as our gate job > >> >> >> that runs with centos 7 amphora has been passing. It should be in > the > >> >> >> same /etc/octavia/amphora-agent.conf location as the ubuntu based > >> >> >> amphora. > >> >> >> > >> >> >> Michael > >> >> >> > >> >> >> > >> >> >> > >> >> >> On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS < > gael.therond at gmail.com> wrote: > >> >> >> > > >> >> >> > Hi Michael, thanks a lot for that explanation, it’s actually > how I envisioned the flow. > >> >> >> > > >> >> >> > I’ll have to produce a diagram for my peers understanding, I > maybe can share it with you. > >> >> >> > > >> >> >> > There is still one point that seems to be a little bit odd to > me. > >> >> >> > > >> >> >> > How the amphora agent know where to find out the healthManagers > and worker services? Is that because the worker is sending the agent some > catalog informations or because we set that at diskimage-create time? > >> >> >> > > >> >> >> > If so, I think the Centos based amphora is missing the > agent.conf because currently my vms doesn’t have any. > >> >> >> > > >> >> >> > Once again thanks for your help! > >> >> >> > Le mar. 31 juil. 2018 à 18:15, Michael Johnson < > johnsomor at gmail.com> a écrit : > >> >> >> >> > >> >> >> >> Hi Flint, > >> >> >> >> > >> >> >> >> We don't have a logical network diagram at this time (it's > still on > >> >> >> >> the to-do list), but I can talk you through it. > >> >> >> >> > >> >> >> >> The Octavia worker, health manager, and housekeeping need to > be able > >> >> >> >> to reach the amphora (service VM at this point) over the > lb-mgmt-net > >> >> >> >> on TCP 9443. It knows the amphora IP addresses on the > lb-mgmt-net via > >> >> >> >> the database and the information we save from the compute > driver (I.e. > >> >> >> >> what IP was assigned to the instance). > >> >> >> >> > >> >> >> >> The Octavia API process does not need to be connected to the > >> >> >> >> lb-mgmt-net at this time. It only connects the the messaging > bus and > >> >> >> >> the Octavia database. Provider drivers may have other > connectivity > >> >> >> >> requirements for the Octavia API. > >> >> >> >> > >> >> >> >> The amphorae also send UDP packets back to the health manager > on port > >> >> >> >> 5555. This is the heartbeat packet from the amphora. It > contains the > >> >> >> >> health and statistics from that amphora. It know it's list of > health > >> >> >> >> manager endpoints from the configuration file > >> >> >> >> "controller_ip_port_list" > >> >> >> >> ( > https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list > ). > >> >> >> >> Each amphora will rotate through that list of endpoints to > reduce the > >> >> >> >> chance of a network split impacting the heartbeat messages. > >> >> >> >> > >> >> >> >> This is the only traffic that passed over this network. All of > it is > >> >> >> >> IP based and can be routed (it does not require L2 > connectivity). > >> >> >> >> > >> >> >> >> Michael > >> >> >> >> > >> >> >> >> On Tue, Jul 31, 2018 at 2:00 AM Flint WALRUS < > gael.therond at gmail.com> wrote: > >> >> >> >> > > >> >> >> >> > Hi Folks, > >> >> >> >> > > >> >> >> >> > I'm currently deploying the Octavia component into our > testing environment which is based on KOLLA. > >> >> >> >> > > >> >> >> >> > So far I'm quite enjoying it as it is pretty much straight > forward (Except for some documentation pitfalls), but I'm now facing a > weird and hard to debug situation. > >> >> >> >> > > >> >> >> >> > I actually have a hard time to understand how Amphora are > communicating back and forth with the Control Plan components. > >> >> >> >> > > >> >> >> >> > From my understanding, as soon as I create a new LB, the > Control Plan is spawning an instance using the configured Octavia Flavor > and Image type, attach it to the LB-MGMT-NET and to the user provided > subnet. > >> >> >> >> > > >> >> >> >> > What I think I'm misunderstanding is the discussion that > follows between the amphora and the different components such as the > HealthManager/HouseKeeper, the API and the Worker. > >> >> >> >> > > >> >> >> >> > How is the amphora agent able to found my control plan? Is > the HealthManager or the Octavia Worker initiating the communication to the > Amphora on port 9443 and so give the agent the API/Control plan internalURL? > >> >> >> >> > > >> >> >> >> > If anyone have a diagram of the workflow I would be more > than happy ^^ > >> >> >> >> > > >> >> >> >> > Thanks a lot in advance to anyone willing to help :D > >> >> >> >> > > >> >> >> >> > _______________________________________________ > >> >> >> >> > OpenStack-operators mailing list > >> >> >> >> > OpenStack-operators at lists.openstack.org > >> >> >> >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Fri Aug 3 13:42:51 2018 From: jean-philippe at evrard.me (=?utf-8?q?jean-philippe=40evrard=2Eme?=) Date: Fri, 03 Aug 2018 15:42:51 +0200 Subject: [Openstack-operators] =?utf-8?b?Pz09P3V0Zi04P3E/ICBPcGVuc3RhY2sg?= =?utf-8?q?Ansible_ODL+OvS+SFC?= In-Reply-To: <3f7a0192608a34c033bfdf3261a65e41@lrasc.fr> Message-ID: <2ceb-5b645c00-13-5f406200@150221191> Hello, We don't ship with an opendaylight group by default in the integrated repo. Two choices: - You deploy opendaylight on the side, and you point to it by having a user variable odl_ip for example. - You add an extra group (where you want to have it!) using openstack-ansible inventory system. For this you can either follow the documentation for manipulating the dynamic inventory, or you can ship your own static inventory to complement the dynamic inventory. For the former, you should read this [1]. For the latter, you can simply create an /etc/openstack_deploy/inventory.ini file, like a regular ansible ini inventory. Hope it helps. Jean-Philippe Evrard (evrardjp) [1]: https://docs.openstack.org/openstack-ansible/latest/reference/inventory/inventory.html On Thursday, August 02, 2018 11:23 CEST, nicolas at lrasc.fr wrote: > Hi Openstack community ! > > I have a question regarding Openstack Ansible (OSA) and one deployement > scenario "Scenario - OpenDaylight and Open vSwitch" (link below). > https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-opendaylight.html > > > > This is a lab test and I take inspiration from "Test environment" > example : > https://docs.openstack.org/openstack-ansible/queens/user/test/example.html > > > > First, I have already tried and achieve to use the "OSA Scenario - Using > Open vSwitch" (link below), which was necessary to understand before > trying the ODL + OvS scenario. > https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-openvswitch.html > > This means I was able to deploy Openstack with OvS as network driver and > I was able to instatiates VMs on tenant VxLAN network, test networks > between VMs, use floating IPs, etc.. > > > > Now I want to try the "Scenario - OpenDaylight and Open vSwitch" > scenario because I want to deploy my Openstack environnement with the > "networking-sfc" driver activated. > > I understand that I must modify the config file > "/etc/openstack_deploy/user_variables.yml" like this : > > ``` > # /etc/openstack_deploy/user_variables.yml > > ### Ensure the openvswitch kernel module is loaded > openstack_host_specific_kernel_modules: > - name: "openvswitch" > pattern: "CONFIG_OPENVSWITCH" > group: "network_hosts" > > ### Use OpenDaylight SDN Controller > neutron_plugin_type: "ml2.opendaylight" > > odl_ip: "{{ > hostvars[groups['opendaylight'][0]]['ansible_default_ipv4']['address'] > }}" > neutron_opendaylight_conf_ini_overrides: > ml2_odl: > url: "http://{{ odl_ip }}:8180/controller/nb/v2/neutron" > username: > password: > > neutron_plugin_base: > - router > - metering > - networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin > - networking_sfc.services.sfc.plugin.SfcPlugin > > ``` > > But there are no information about the > "/etc/openstack_deploy/openstack_user_config.yml" config file. > I am not sure if I understand where ansible get the value of "{{ odl_ip > }}" and "{{ hostvars[group.... > > In the file "/opt/openstack-ansible/tests/test_inventory.py", I find out > that there is an entry for "opendaylight" in the > class TestAnsibleInventoryFormatConstraints(unittest.TestCase). > > > I assumed I must modify > "/etc/openstack_deploy/openstack_user_config.yml" like this : > > ``` > # /etc/openstack_deploy/openstack_user_config.yml > [...] > > # horizon > dashboard_hosts: > infra41: > ip: infra41.dom4.net > > # neutron server, agents (L3, etc) > network_hosts: > network41: > ip: network41.dom4.net > > opendaylight: > network41: > ip: network41.dom4.net > [...] > ``` > > I have an infra node (with most of Openstack core services) and a > network node (dedicated to neutron). On my first try, I wanted to > install ODL on the network node (because I thought it will be deployed > in a LXC container). But I can dedicate an host to ODL if needed. > > > Could someone gives me some hints on this ? My goal is to deploy my > Openstack environnement with the "networking-sfc" driver, and using OSA. > > Maybe there are other method like "kolla-ansible", but I found that OSA > has a more dense documentation. > > > Thanks you for your time. > -- > Nicolas > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From mvanwinkle at salesforce.com Fri Aug 3 14:30:13 2018 From: mvanwinkle at salesforce.com (Matt Van Winkle) Date: Fri, 3 Aug 2018 09:30:13 -0500 Subject: [Openstack-operators] UC Candidacy Message-ID: Greetings OpenStack Operators and Users, I’d like to take the opportunity to state my candidacy in the upcoming UC election. I have enjoyed the work we have been able to accomplish these last 12 months and I would like to serve another term to help continue the momentum. After 6 years in Operations and Engineering for Rackspace’s public cloud, I have recently joined Salesforce to help with their OpenStack efforts. At both companies, I’ve had the distinct pleasure of serving a number of talented engineers and teams as they have worked to scale and manage the infrastructure. During this time, I’ve also enjoyed sharing ideas with and learning from other Operators running large OpenStack clouds in order to find new and creative ways to solve challenges With respect to community involvement, my first summit was Portland and have made all but two since. I’ve also been very active in the Operators community since helping plan the very first meet-up in San Jose. I’ve given a few talks in the past and have served as track chair many times. After Paris, I began chairing the Large Deployments Team. This team, while inactive now, was a long running group of operators that shared many ideas on scaling OpenStack and has had some successes running feature requests to ground with dev teams. It’s been a distinct pleasure to work with such smart folks from around the community. Chairing LDT also led to an opportunity to join the Ops Meetup Team - working with others on planning Operator mid-cycles and Ops related Summit/Forum sessions. I was fortunate enough to be part of the group that helped the old UC craft the bylaw changes that have expanded the committee and made it the elected body it is today. After serving as an election official in the first election, I chose to run for an open spot a year ago. Regardless of the outcome of this election, it is really awesome to see the evolution of the UC and how it’s able to better coordinate Operator and User efforts in guiding the community and the development cycle. If re-elected, I hope to keep helping more Users and Operators understand how to take better advantage of the the various events and dev cycle to drive improvement and change in the software. The UC has a vision of seeing conversations at and Operators mid-cycle or from an OpenStack Days OPs session become specific topic submissions at the next summit. Conversely, we'd love this pattern to be regular enough that the Dev teams start proposing session ideas for certain feedback at upcoming OPs gatherings to complete the cycle. While there is still plenty of work to do to make these things a reality, the UC has been laying the ground work since the Dublin PTG. I'd like to serve another term so I can do my part to help keep making progress. Beyond that, I want to continue the great work of the UC members to date on being an advocate for the User with the Board, TC and community at large. I appreciate the time and the consideration. Thanks! VW -- Matt Van Winkle Senior Manager, Software Engineering | Salesforce Mobile: 210-445-4183 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas at lrasc.fr Fri Aug 3 15:26:34 2018 From: nicolas at lrasc.fr (nicolas at lrasc.fr) Date: Fri, 03 Aug 2018 17:26:34 +0200 Subject: [Openstack-operators] Openstack Ansible ODL+OvS+SFC In-Reply-To: <2ceb-5b645c00-13-5f406200@150221191> References: <2ceb-5b645c00-13-5f406200@150221191> Message-ID: <8b5d1fd0ae8abba137b040fdc1a60e2b@lrasc.fr> Hi Jean-Philippe, Thanks you for this reply. If I understand correctly, I have 2 options: 1. I deploy ODL before deploying OSA. Then in the OSA config file "/etc/openstack_deploy/user_variables.yml", I point to the ODL API IP address. No other modification done to OSA roles or playbooks are required. Here OSA does not install ODL, but it only configures neutron to works with ODL. 2. I add an extra "opendaylight" group following the documentation (either static or dynamic inventory). Here the OSA script will install ODL from scratch (with no other modification done to OSA roles and playbooks) and will configures neutron to works with ODL. I have the feeling that the first solution is 'easier'. In fact I am not sure that I fully understand what it takes to add a new custom inventory group and links it to the ODL deployment scripts in OSA. I do know how to use ansible (I have writen several roles and playbooks), but I prefer to modify OSA as little as possible (because of the release rate). So if the 1st solution works, I will try it ! Maybe I can share the result in this ML because they were people how was interested in. Regards, -- Nicolas On Friday, August 03, 2018 15:42, jean-philippe at evrard.me wrote: > Hello, > > We don't ship with an opendaylight group by default in the integrated > repo. > Two choices: > - You deploy opendaylight on the side, and you point to it by having a > user variable odl_ip for example. > - You add an extra group (where you want to have it!) using > openstack-ansible inventory system. > > For this you can either follow the documentation for manipulating the > dynamic inventory, or you can ship your own static inventory to > complement the dynamic inventory. > For the former, you should read this [1]. > For the latter, you can simply create an > /etc/openstack_deploy/inventory.ini file, like a regular ansible ini > inventory. > > Hope it helps. > > Jean-Philippe Evrard (evrardjp) > > [1]: > https://docs.openstack.org/openstack-ansible/latest/reference/inventory/inventory.html > > On Thursday, August 02, 2018 11:23 CEST, nicolas at lrasc.fr wrote: > >> Hi Openstack community ! >> >> I have a question regarding Openstack Ansible (OSA) and one >> deployement >> scenario "Scenario - OpenDaylight and Open vSwitch" (link below). >> https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-opendaylight.html >> >> >> >> This is a lab test and I take inspiration from "Test environment" >> example : >> https://docs.openstack.org/openstack-ansible/queens/user/test/example.html >> >> >> >> First, I have already tried and achieve to use the "OSA Scenario - >> Using >> Open vSwitch" (link below), which was necessary to understand before >> trying the ODL + OvS scenario. >> https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-openvswitch.html >> >> This means I was able to deploy Openstack with OvS as network driver >> and >> I was able to instatiates VMs on tenant VxLAN network, test networks >> between VMs, use floating IPs, etc.. >> >> >> >> Now I want to try the "Scenario - OpenDaylight and Open vSwitch" >> scenario because I want to deploy my Openstack environnement with the >> "networking-sfc" driver activated. >> >> I understand that I must modify the config file >> "/etc/openstack_deploy/user_variables.yml" like this : >> >> ``` >> # /etc/openstack_deploy/user_variables.yml >> >> ### Ensure the openvswitch kernel module is loaded >> openstack_host_specific_kernel_modules: >> - name: "openvswitch" >> pattern: "CONFIG_OPENVSWITCH" >> group: "network_hosts" >> >> ### Use OpenDaylight SDN Controller >> neutron_plugin_type: "ml2.opendaylight" >> >> odl_ip: "{{ >> hostvars[groups['opendaylight'][0]]['ansible_default_ipv4']['address'] >> }}" >> neutron_opendaylight_conf_ini_overrides: >> ml2_odl: >> url: "http://{{ odl_ip }}:8180/controller/nb/v2/neutron" >> username: >> password: >> >> neutron_plugin_base: >> - router >> - metering >> - >> networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin >> - networking_sfc.services.sfc.plugin.SfcPlugin >> >> ``` >> >> But there are no information about the >> "/etc/openstack_deploy/openstack_user_config.yml" config file. >> I am not sure if I understand where ansible get the value of "{{ >> odl_ip >> }}" and "{{ hostvars[group.... >> >> In the file "/opt/openstack-ansible/tests/test_inventory.py", I find >> out >> that there is an entry for "opendaylight" in the >> class TestAnsibleInventoryFormatConstraints(unittest.TestCase). >> >> >> I assumed I must modify >> "/etc/openstack_deploy/openstack_user_config.yml" like this : >> >> ``` >> # /etc/openstack_deploy/openstack_user_config.yml >> [...] >> >> # horizon >> dashboard_hosts: >> infra41: >> ip: infra41.dom4.net >> >> # neutron server, agents (L3, etc) >> network_hosts: >> network41: >> ip: network41.dom4.net >> >> opendaylight: >> network41: >> ip: network41.dom4.net >> [...] >> ``` >> >> I have an infra node (with most of Openstack core services) and a >> network node (dedicated to neutron). On my first try, I wanted to >> install ODL on the network node (because I thought it will be deployed >> in a LXC container). But I can dedicate an host to ODL if needed. >> >> >> Could someone gives me some hints on this ? My goal is to deploy my >> Openstack environnement with the "networking-sfc" driver, and using >> OSA. >> >> Maybe there are other method like "kolla-ansible", but I found that >> OSA >> has a more dense documentation. >> >> >> Thanks you for your time. >> -- >> Nicolas >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From jon at csail.mit.edu Fri Aug 3 15:30:53 2018 From: jon at csail.mit.edu (Jonathan Proulx) Date: Fri, 3 Aug 2018 11:30:53 -0400 Subject: [Openstack-operators] [openstack-ansible] galera issues with mitaka-eol Message-ID: <20180803153053.mg26uwyqvxvxv6nc@csail.mit.edu> Hi All, In my continuing quest to install an OSA cluster with mitaka-eol in hopes of digging our to a non-eol release eventually I've hit another snag... setup-hosts plays out fine setup-infrastructure chodes soem where in galera-install the 1st galera container gets properly bootstrapped into a 1 node cluster. the 2nd container (1st that needs replication) fails with: 180802 14:59:49 [Warning] WSREP: Gap in state sequence. Need state transfer. 180802 14:59:49 [Note] WSREP: Running: 'wsrep_sst_xtrabackup-v2 --role 'joiner' --address '172.29.238.84' --datadir '/var/lib/mysql/' --parent '3719' --binlog '/var/lib/mysql/mariadb-bin' ' WSREP_SST: [ERROR] FATAL: The innobackupex version is 1.5.1. Needs xtrabackup-2.3.5 or higher to perform SST (20180 802 14:59:50.235) 180802 14:59:50 [ERROR] WSREP: Failed to read 'ready ' from: wsrep_sst_xtrabackup-v2 --role 'joiner' --addres s '172.29.238.84' --datadir '/var/lib/mysql/' --parent '3719' --binlog '/var/lib/mysql/mariadb-bin' Read: '(null)' 180802 14:59:50 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '172 .29.238.84' --datadir '/var/lib/mysql/' --parent '3719' --binlog '/var/lib/mysql/mariadb-bin' : 2 (No such file o r directory) 180802 14:59:50 [ERROR] WSREP: Failed to prepare for 'xtrabackup-v2' SST. Unrecoverable. 180802 14:59:50 [ERROR] Aborting The container has: percona-xtrabackup-22/unknown,now 2.2.13-1.trusty amd64 [installed] which is clearly the source of sadness. Seem seems bit odd that a very default vanilla install woudl tangle like this but it is EOL and perhaps the world has changed around it. What is the most OSA friendly way digging out of this? My current prod cluster (which I eventually want this to take over) is using wsrep_sst_method = mysqldump So I could set some overrides and probably make that go in OSA land but I'd rather converge toward "noraml OSA way" unless there's explict local reason to change from defaults and in this case I don't have any particular need for mysqldump over xtrabackup. -Jon -- From aspiers at suse.com Fri Aug 3 15:41:57 2018 From: aspiers at suse.com (Adam Spiers) Date: Fri, 3 Aug 2018 16:41:57 +0100 Subject: [Openstack-operators] [openstack-dev] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff? In-Reply-To: References: Message-ID: <20180803154157.7h33v5pxdbbcmdtx@pacific.linksys.moosehall> [Adding openstack-sigs list too; apologies for the extreme cross-posting, but I think in this case the discussion deserves wide visibility. Happy to be corrected if there's a better way to handle this.] Hi James, James Page wrote: >Hi All > >tl;dr we (the original founders) have not managed to invest the time to get >the Upgrades SIG booted - time to hit reboot or time to poweroff? TL;DR response: reboot, absolutely no question! My full response is below. >Since Vancouver, two of the original SIG chairs have stepped down leaving >me in the hot seat with minimal participation from either deployment >projects or operators in the IRC meetings. In addition I've only been able >to make every 3rd IRC meeting, so they have generally not being happening. > >I think the current timing is not good for a lot of folk so finding a >better slot is probably a must-have if the SIG is going to continue - and >maybe moving to a monthly or bi-weekly schedule rather than the weekly slot >we have now. > >In addition I need some willing folk to help with leadership in the SIG. >If you have an interest and would like to help please let me know! > >I'd also like to better engage with all deployment projects - upgrades is >something that deployment tools should be looking to encapsulate as >features, so it would be good to get deployment projects engaged in the SIG >with nominated representatives. > >Based on the attendance in upgrades sessions in Vancouver and >developer/operator appetite to discuss all things upgrade at said sessions >I'm assuming that there is still interest in having a SIG for Upgrades but >I may be wrong! > >Thoughts? As a SIG leader in a similar position (albeit with one other very helpful person on board), let me throw my £0.02 in ... With both upgrades and self-healing I think there is a big disparity between supply (developers with time to work on the functionality) and demand (operators who need the functionality). And perhaps also the high demand leads to a lot of developers being interested in the topic whilst not having much spare time to help out. That is probably why we both see high attendance at the summit / PTG events but relatively little activity in between. I also freely admit that the inevitable conflicts with downstream requirements mean that I have struggled to find time to be as proactive with driving momentum as I had wanted, although I'm hoping to pick this up again over the next weeks leading up to the PTG. It sounds like maybe you have encountered similar challenges. That said, I strongly believe that both of these SIGs offer a *lot* of value, and even if we aren't yet seeing the level of online activity that we would like, I think it's really important that they both continue. If for no other reasons, the offline sessions at the summits and PTGs are hugely useful for helping converge the community on common approaches, and the associated repositories / wikis serve as a great focal point too. Regarding online collaboration, yes, building momentum for IRC meetings is tough, especially with the timezone challenges. Maybe a monthly cadence is a reasonable starting point, or twice a month in alternating timezones - but maybe with both meetings within ~24 hours of each other, to reduce accidental creation of geographic silos. Another possibility would be to offer "open clinic" office hours, like the TC and other projects have done. If the TC or anyone else has established best practices in this space, it'd be great to hear them. Either way, I sincerely hope that you decide to continue with the SIG, and that other people step up to help out. These things don't develop overnight but it is a tremendously worthwhile initiative; after all, everyone needs to upgrade OpenStack. Keep the faith! ;-) Cheers, Adam From ed at leafe.com Fri Aug 3 18:42:43 2018 From: ed at leafe.com (Ed Leafe) Date: Fri, 3 Aug 2018 13:42:43 -0500 Subject: [Openstack-operators] [User-committee] UC Candidacy In-Reply-To: References: Message-ID: <45CFCF9F-74A3-4AD8-8BB7-547FE69A997D@leafe.com> On Aug 3, 2018, at 9:30 AM, Matt Van Winkle wrote: > > I’d like to take the opportunity to state my candidacy in the upcoming UC election. I have enjoyed the work we have been able to accomplish these last 12 months and I would like to serve another term to help continue the momentum. The nomination period doesn't open until August 6, but I admire your enthusiasm! August 6 - August 17, 05:59 UTC: Open candidacy for UC positions August 20 - August 24, 11:59 UTC: UC elections (voting) -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From amy at demarco.com Sat Aug 4 01:15:30 2018 From: amy at demarco.com (Amy Marrich) Date: Fri, 3 Aug 2018 20:15:30 -0500 Subject: [Openstack-operators] New AUC Criteria Message-ID: *Are you an Active User Contributor (AUC)? Well you may be and not even know it! Historically, AUCs met the following criteria: - Organizers of Official OpenStack User Groups: from the Groups Portal- Active members and contributors to functional teams and/or working groups (currently also manually calculated for WGs not using IRC): from IRC logs- Moderators of any of the operators official meet-up sessions: Currently manually calculated.- Contributors to any repository under the UC governance: from Gerrit- Track chairs for OpenStack summits: from the Track Chair tool- Contributors to Superuser (articles, interviews, user stories, etc.): from the Superuser backend- Active moderators on ask.openstack.org : from Ask OpenStackIn July, the User Committee (UC) voted to add the following criteria to becoming an AUC in order to meet the needs of the evolving OpenStack Community. So in addition to the above ways, you can now earn AUC status by meeting the following: - User survey participants who completed a deployment survey- Ops midcycle session moderators- OpenStack Days organizers- SIG Members nominated by SIG leaders- Active Women of OpenStack participants- Active Diversity WG participantsWell that’s great you have met the requirements to become an AUC but what does that mean? AUCs can run for open UC positions and can vote in the elections. AUCs also receive a discounted $300 ticket for OpenStack Summit as well as having the coveted AUC insignia on your badge!* And remember nominations for the User Committee open on Monday, August 6 and end on August, 17 with voting August 20 to August 24. Amy Marrich (spotz) User Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From gilles.mocellin at nuagelibre.org Sun Aug 5 07:27:55 2018 From: gilles.mocellin at nuagelibre.org (Gilles Mocellin) Date: Sun, 05 Aug 2018 09:27:55 +0200 Subject: [Openstack-operators] [openstack-ansible] How to manage system upgrades ? In-Reply-To: References: <618796e0e942dc5bd5b0824950565ea1@nuagelibre.org> Message-ID: <1800194.f9RFR1smQ6@gillesxps> Le lundi 30 juillet, Matt Riedemann écrivit : > On 7/27/2018 3:34 AM, Gilles Mocellin wrote: > > - for compute nodes : disable compute node and live-evacuate instances... > > To be clear, what do you mean exactly by "live-evacuate"? I assume you mean > live migration of all instances off each (disabled) compute node *before* > you upgrade it. I wanted to ask because "evacuate" as a server operation is > something else entirely (it's rebuild on another host which is definitely > disruptive to the workload on that server). > > http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/ > > -- > > Thanks, > > Matt [Sorry for the delay, my mail was blocked] Ah yes, I know the difference. Of course, I mean live migrate all instances on the node before disabling it, and upgrade / reboot. So, the following nova command match : $ nova host-evacuate-live From bitskrieg at bitskrieg.net Sun Aug 5 18:43:44 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Sun, 05 Aug 2018 14:43:44 -0400 Subject: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures? Message-ID: <81d14679c181cdc1a252570529ca5c4b@bitskrieg.net> All, Trying to enable some alternate (non-x86) architectures on xenial + queens. I can load up images and set the property correctly according to the supported values (https://docs.openstack.org/nova/queens/configuration/config.html) in image_properties_default_architecture. From what I can tell, the scheduler works correctly and instances are only scheduled on nodes that have the correct qemu binary installed. However, when the instance request lands on this node, it always starts it with qemu-system-x86_64 rather than qemu-system-arm, qemu-system-ppc, etc. If I manually set the correct binary, everything works as expected. Am I missing something here, or is this a bug in nova-compute? Thanks in advance, -- v/r Chris Apsey bitskrieg at bitskrieg.net https://www.bitskrieg.net From mrhillsman at gmail.com Mon Aug 6 05:14:02 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 6 Aug 2018 00:14:02 -0500 Subject: [Openstack-operators] Reminder: User Committee Meeting @ 1400UTC Message-ID: Hi everyone, Reminder about UC meeting in #openstack-uc; please add to agenda: https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Meeting_Agenda.2FPrevious_Meeting_Logs -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Mon Aug 6 13:01:49 2018 From: aspiers at suse.com (Adam Spiers) Date: Mon, 6 Aug 2018 14:01:49 +0100 Subject: [Openstack-operators] Cinder-volume and high availability In-Reply-To: References: Message-ID: <20180806130149.xwy7yol2nvgjwonv@pacific.linksys.moosehall> Jean-Philippe Méthot wrote: >Hi, > >I’ve noticed that in the high-availability guide, it is not >recommended to run cinder-volume in an active-active configuration. Active-active cinder-volume support is still in development, e.g. see https://blueprints.launchpad.net/cinder/+spec/cinder-volume-active-active-support I would strongly recommend talking to the Cinder experts to find out the current state, rather than relying on the HA guide which is currently rather out of date. >However, I have built an active-passive setup that uses keepalived >and a virtual IP to redirect API traffic to only one controller at a >time. Yes, this is a fairly common deployment trick. >In such a configuration, would I still need to have only one >cinder-volume service running at a time? Yes, otherwise it's not active-passive ;-) Sorry for the slow reply! From clint at fewbar.com Mon Aug 6 13:12:22 2018 From: clint at fewbar.com (Clint Byrum) Date: Mon, 06 Aug 2018 15:12:22 +0200 Subject: [Openstack-operators] =?utf-8?q?Live-migration_experiences=3F?= Message-ID: <8f0b95a1a450cea9bc42e0aa0caaacc8@secure.spamaps.org> Hello! At GoDaddy, we're about to start experimenting with live migration. While setting it up, we've found a number of options that seem attractive/useful, but we're wondering if anyone has data/anecdotes about specific configurations of live migration. Your time in reading them is appreciated! First a few facts about our installation: * We're using kolla-ansible and basically leaving most nova settings at the default, meaning libvirt+kvm * We will be using block migration, as we have no shared storage of any kind. * We use routed networks to set up L2 segments per-rack. Each rack is basically an island unto itself. The VMs on one rack cannot be migrated to another rack because of this. * Our main resource limitation is disk, followed closely by RAM. As such, our main motivation for wanting to do live migration is to be able to move VMs off of machines where over-subscribed disk users start to threaten the free space of the others. Now, some things we'd love your help with: * TLS for libvirt - We do not want to transfer the contents of VMs' RAM over unencrypted sockets. We want to setup TLS with an internal CA and tls_allowed_dn_list controlling access. Has anyone reading this used this setup? Do you have suggestions, reservations, or encouragement for us wanting to do it this way? * Raw backed qcow2 files - Our instances use qcow2, and our images are uploaded as a raw-backed qcow2. As a result we get maximum disk savings with excellent read performance. When live migrating these around, have you found that they continue to use the same space on the target node as they did on the source? If not, did you find a workaround? * Do people have feedback on live_migrate_permit_auto_convergence? It seems like a reasonable trade-off, but since it is defaulted to false, I wonder if there are some hidden gotchas there. * General pointers to excellent guides, white papers, etc, that might help us avoid doing all of our learning via trial/error. Thanks very much for your time! From amy at demarco.com Mon Aug 6 14:42:46 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 6 Aug 2018 09:42:46 -0500 Subject: [Openstack-operators] [openstack-community] How to configure keystone to authenticate using x509 certificates instead of password In-Reply-To: <1533551950.18685.15.camel@ericsson.com> References: <1533551950.18685.15.camel@ericsson.com> Message-ID: Hi Shiva, I'm forwarding this to the Ops list where someone might be able to assist you. Thanks, Amy On Mon, Aug 6, 2018 at 5:39 AM, Shiva Prasad Thagadur Prakash < shiva.prasad.thagadur.prakash at ericsson.com> wrote: > Hello, > > Could anyone please explain the steps in configuring keystone to > authenticate the users using certificates instead of password/username. > I couldn't find any detailed steps to do this. > > Best regards, > Shiva > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Mon Aug 6 16:52:38 2018 From: ed at leafe.com (Ed Leafe) Date: Mon, 6 Aug 2018 11:52:38 -0500 Subject: [Openstack-operators] UC nomination period is now open! Message-ID: <277DC0C9-C34D-47D9-B14F-81E41F136909@leafe.com> As the subject says, the nomination period for the summer[0] User Committee elections is now open. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election). Self-nomination is common; no third party nomination is required. Nominations are made by sending an email to the user-committee at lists.openstack.org mailing-list, with the subject: “UC candidacy” by August 17, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. [0] Sorry, southern hemisphere people! -- Ed Leafe From kendall at openstack.org Mon Aug 6 17:36:26 2018 From: kendall at openstack.org (Kendall Waters) Date: Mon, 6 Aug 2018 12:36:26 -0500 Subject: [Openstack-operators] Denver PTG Registration Price Increases on August 23 Message-ID: <00AB295F-05B2-4DE6-8D56-31BC924D9123@openstack.org> Hi everyone, The September 2018 PTG in Denver is right around the corner! Friendly reminder that ticket prices will increase to USD $599 on August 22 at 11:59pm PT (August 23 at 6:59 UTC). So purchase your tickets before the price increases. Register here: https://denver2018ptg.eventbrite.com Our discounted hotel block is filling up and will sell out. The last date to book in the hotel block is August 20 so book now here: www.openstack.org/ptg If you have any questions, please email ptg at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Aug 6 22:03:28 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 6 Aug 2018 17:03:28 -0500 Subject: [Openstack-operators] [nova] StarlingX diff analysis Message-ID: In case you haven't heard, there was this StarlingX thing announced at the last summit. I have gone through the enormous nova diff in their repo and the results are in a spreadsheet [1]. Given the enormous spreadsheet (see a pattern?), I have further refined that into a set of high-level charts [2]. I suspect there might be some negative reactions to even doing this type of analysis lest it might seem like promoting throwing a huge pile of code over the wall and expecting the OpenStack (or more specifically the nova) community to pick it up. That's not my intention at all, nor do I expect nova maintainers to be responsible for upstreaming any of this. This is all educational to figure out what the major differences and overlaps are and what could be constructively upstreamed from the starlingx staging repo since it's not all NFV and Edge dragons in here, there are some legitimate bug fixes and good ideas. I'm sharing it because I want to feel like my time spent on this in the last week wasn't all for nothing. [1] https://docs.google.com/spreadsheets/d/1ugp1FVWMsu4x3KgrmPf7HGX8Mh1n80v-KVzweSDZunU/edit?usp=sharing [2] https://docs.google.com/presentation/d/1P-__JnxCFUbSVlEoPX26Jz6VaOyNg-jZbBsmmKA2f0c/edit?usp=sharing -- Thanks, Matt From mriedemos at gmail.com Mon Aug 6 23:43:43 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 6 Aug 2018 18:43:43 -0500 Subject: [Openstack-operators] Live-migration experiences? In-Reply-To: <8f0b95a1a450cea9bc42e0aa0caaacc8@secure.spamaps.org> References: <8f0b95a1a450cea9bc42e0aa0caaacc8@secure.spamaps.org> Message-ID: <84f9c845-1b80-cc52-a645-b4d55b728469@gmail.com> On 8/6/2018 8:12 AM, Clint Byrum wrote: > First a few facts about our installation: > > * We're using kolla-ansible and basically leaving most nova settings at > the default, meaning libvirt+kvm > * We will be using block migration, as we have no shared storage of any > kind. > * We use routed networks to set up L2 segments per-rack. Each rack is > basically an island unto itself. The VMs on one rack cannot be migrated > to another rack  because of this. > * Our main resource limitation is disk, followed closely by RAM. As > such, our main motivation for wanting to do live migration is to be able > to move VMs off of machines where over-subscribed disk users start to > threaten the free space of the others. What release are you on? > > * Do people have feedback on live_migrate_permit_auto_convergence? It > seems like a reasonable trade-off, but since it is defaulted to false, I > wonder if there are some hidden gotchas there. You might want to read through [1] and [2]. Those were written by the OSIC dev team when that still existed. But there are some (somewhat mysterious) mentions to caveats with post-copy you should be aware of. At this point, John Garbutt is probably the best person to talk to about those since all of the other OSIC devs that worked on this spec are long gone. > > * General pointers to excellent guides, white papers, etc, that might help us avoid doing all of our learning via trial/error. Check out [3]. I've specifically been meaning to watch the one from Boston that John was in. [1] https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/live-migration-force-after-timeout.html [2] https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/live-migration-per-instance-timeout.html [3] https://www.openstack.org/videos/search?search=live%20migration -- Thanks, Matt From gael.therond at gmail.com Tue Aug 7 06:10:53 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 7 Aug 2018 08:10:53 +0200 Subject: [Openstack-operators] [nova] StarlingX diff analysis In-Reply-To: References: Message-ID: Hi matt, everyone, I just read your analysis and would like to thank you for such work. I really think there are numerous features included/used on this Nova rework that would be highly beneficial for Nova and users of it. I hope people will fairly appreciate you work. I didn’t had time to check StarlingX code quality, how did you feel it while you were doing your analysis? Thanks a lot for this share. I’ll have a closer look at it this afternoon as my company may be interested by some features. Kind regards, G. Le mar. 7 août 2018 à 00:03, Matt Riedemann a écrit : > In case you haven't heard, there was this StarlingX thing announced at > the last summit. I have gone through the enormous nova diff in their > repo and the results are in a spreadsheet [1]. Given the enormous > spreadsheet (see a pattern?), I have further refined that into a set of > high-level charts [2]. > > I suspect there might be some negative reactions to even doing this type > of analysis lest it might seem like promoting throwing a huge pile of > code over the wall and expecting the OpenStack (or more specifically the > nova) community to pick it up. That's not my intention at all, nor do I > expect nova maintainers to be responsible for upstreaming any of this. > > This is all educational to figure out what the major differences and > overlaps are and what could be constructively upstreamed from the > starlingx staging repo since it's not all NFV and Edge dragons in here, > there are some legitimate bug fixes and good ideas. I'm sharing it > because I want to feel like my time spent on this in the last week > wasn't all for nothing. > > [1] > > https://docs.google.com/spreadsheets/d/1ugp1FVWMsu4x3KgrmPf7HGX8Mh1n80v-KVzweSDZunU/edit?usp=sharing > [2] > > https://docs.google.com/presentation/d/1P-__JnxCFUbSVlEoPX26Jz6VaOyNg-jZbBsmmKA2f0c/edit?usp=sharing > > -- > > Thanks, > > Matt > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Tue Aug 7 06:19:56 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 7 Aug 2018 08:19:56 +0200 Subject: [Openstack-operators] Live-migration experiences? In-Reply-To: <84f9c845-1b80-cc52-a645-b4d55b728469@gmail.com> References: <8f0b95a1a450cea9bc42e0aa0caaacc8@secure.spamaps.org> <84f9c845-1b80-cc52-a645-b4d55b728469@gmail.com> Message-ID: Hi clint, matt. To be noticed that post-copy and auto-convergence are mutually exclusive. The drawbacks that we experienced with here is that live-migration using either way post-copy or auto-convergence will likely fail for application not being able to handle throttling. Although, post-copy is guaranteed to work for most of your migration. On the design part of your solution. For a while we had the same design with each rack being segmented but we gave up with this choice as it was a PITA especially for live-migration. We’re currently migrating our network design to a much simplified all L3 network layout with our underlying network a bgp driven network and all overlay network being managed by Openstack. Let me know if you need more details or information. Kind regards, G. Le mar. 7 août 2018 à 01:44, Matt Riedemann a écrit : > On 8/6/2018 8:12 AM, Clint Byrum wrote: > > First a few facts about our installation: > > > > * We're using kolla-ansible and basically leaving most nova settings at > > the default, meaning libvirt+kvm > > * We will be using block migration, as we have no shared storage of any > > kind. > > * We use routed networks to set up L2 segments per-rack. Each rack is > > basically an island unto itself. The VMs on one rack cannot be migrated > > to another rack because of this. > > * Our main resource limitation is disk, followed closely by RAM. As > > such, our main motivation for wanting to do live migration is to be able > > to move VMs off of machines where over-subscribed disk users start to > > threaten the free space of the others. > > What release are you on? > > > > > * Do people have feedback on live_migrate_permit_auto_convergence? It > > seems like a reasonable trade-off, but since it is defaulted to false, I > > wonder if there are some hidden gotchas there. > > You might want to read through [1] and [2]. Those were written by the > OSIC dev team when that still existed. But there are some (somewhat > mysterious) mentions to caveats with post-copy you should be aware of. > At this point, John Garbutt is probably the best person to talk to about > those since all of the other OSIC devs that worked on this spec are long > gone. > > > > > * General pointers to excellent guides, white papers, etc, that might > help us avoid doing all of our learning via trial/error. > > Check out [3]. I've specifically been meaning to watch the one from > Boston that John was in. > > [1] > > https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/live-migration-force-after-timeout.html > [2] > > https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/live-migration-per-instance-timeout.html > [3] https://www.openstack.org/videos/search?search=live%20migration > > -- > > Thanks, > > Matt > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zioproto at gmail.com Tue Aug 7 07:07:44 2018 From: zioproto at gmail.com (Saverio Proto) Date: Tue, 7 Aug 2018 09:07:44 +0200 Subject: [Openstack-operators] Openstack Version discovery with the cli client. Message-ID: Hello, This is maybe a super trivial question bit I have to admit I could not figure it out. Can the user with the openstack cli client discover the version of Openstack that is running ? For example in kubernetes the kubectl version command returns the version of the client and the version of the cluster. For Openstack I never managed to discover the backend version, and this could be useful when using public clouds. Anyone knows how to do that ? thanks Saverio From jimmy at openstack.org Tue Aug 7 13:09:10 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 07 Aug 2018 08:09:10 -0500 Subject: [Openstack-operators] Openstack Version discovery with the cli client. In-Reply-To: References: Message-ID: <5B6999F6.9080201@openstack.org> Hey Saverio, This answer from ask.openstack.org should have what you're looking for: https://ask.openstack.org/en/question/45513/how-to-find-out-which-version-of-openstack-is-installed/at Once you get the release number, you have to look it up here to match the release date: https://releases.openstack.org/ I had to use this the other day when taking the COA. Cheers, Jimmy Saverio Proto wrote: > Hello, > > This is maybe a super trivial question bit I have to admit I could not > figure it out. > > Can the user with the openstack cli client discover the version of > Openstack that is running ? > > For example in kubernetes the kubectl version command returns the > version of the client and the version of the cluster. > > For Openstack I never managed to discover the backend version, and > this could be useful when using public clouds. > > Anyone knows how to do that ? > > thanks > > Saverio > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From mriedemos at gmail.com Tue Aug 7 13:29:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 7 Aug 2018 08:29:04 -0500 Subject: [Openstack-operators] [nova] StarlingX diff analysis In-Reply-To: References: Message-ID: <45bd7236-b9f8-026d-620b-7356d4effa49@gmail.com> On 8/7/2018 1:10 AM, Flint WALRUS wrote: > I didn’t had time to check StarlingX code quality, how did you feel it > while you were doing your analysis? I didn't dig into the test diffs themselves, but it was my impression that from what I was poking around in the local git repo, there were several changes which didn't have any test coverage. For the really big full stack changes (L3 CAT, CPU scaling and shared/pinned CPUs on same host), toward the end I just started glossing over a lot of that because it's so much code in so many places, so I can't really speak very well to how it was written or how well it is tested (maybe WindRiver had a more robust CI system running integration tests, I don't know). There were also some things which would have been caught in code review upstream. For example, they ignore the "force" parameter for live migration so that live migration requests always go through the scheduler. However, the "force" parameter is only on newer microversions. Before that, if you specified a host at all it would bypass the scheduler, but the change didn't take that into account, so they still have gaps in some of the things they were trying to essentially disable in the API. On the whole I think the quality is OK. It's not really possible to accurately judge that when looking at a single diff this large. -- Thanks, Matt From zioproto at gmail.com Tue Aug 7 13:30:48 2018 From: zioproto at gmail.com (Saverio Proto) Date: Tue, 7 Aug 2018 15:30:48 +0200 Subject: [Openstack-operators] Openstack Version discovery with the cli client. In-Reply-To: <5B6999F6.9080201@openstack.org> References: <5B6999F6.9080201@openstack.org> Message-ID: Hello Jimmy, thanks for your help. If I understand correctly the answer you linked, that helps if you operate the cloud and you have access to the servers. Then of course you can call nova-manage. But being a user of a public cloud without having access the the infrastructure servers ... how do you do that ? thanks Saverio Il giorno mar 7 ago 2018 alle ore 15:09 Jimmy McArthur ha scritto: > > Hey Saverio, > > This answer from ask.openstack.org should have what you're looking for: > https://ask.openstack.org/en/question/45513/how-to-find-out-which-version-of-openstack-is-installed/at > > Once you get the release number, you have to look it up here to match > the release date: https://releases.openstack.org/ > > I had to use this the other day when taking the COA. > > Cheers, > Jimmy > > Saverio Proto wrote: > > Hello, > > > > This is maybe a super trivial question bit I have to admit I could not > > figure it out. > > > > Can the user with the openstack cli client discover the version of > > Openstack that is running ? > > > > For example in kubernetes the kubectl version command returns the > > version of the client and the version of the cluster. > > > > For Openstack I never managed to discover the backend version, and > > this could be useful when using public clouds. > > > > Anyone knows how to do that ? > > > > thanks > > > > Saverio > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From mriedemos at gmail.com Tue Aug 7 13:32:18 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 7 Aug 2018 08:32:18 -0500 Subject: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures? In-Reply-To: <81d14679c181cdc1a252570529ca5c4b@bitskrieg.net> References: <81d14679c181cdc1a252570529ca5c4b@bitskrieg.net> Message-ID: <25e24abf-aebc-2881-9981-7f9683ffc700@gmail.com> On 8/5/2018 1:43 PM, Chris Apsey wrote: > Trying to enable some alternate (non-x86) architectures on xenial + > queens.  I can load up images and set the property correctly according > to the supported values > (https://docs.openstack.org/nova/queens/configuration/config.html) in > image_properties_default_architecture.  From what I can tell, the > scheduler works correctly and instances are only scheduled on nodes that > have the correct qemu binary installed.  However, when the instance > request lands on this node, it always starts it with qemu-system-x86_64 > rather than qemu-system-arm, qemu-system-ppc, etc.  If I manually set > the correct binary, everything works as expected. > > Am I missing something here, or is this a bug in nova-compute? image_properties_default_architecture is only used in the scheduler filter to pick a compute host, it doesn't do anything about the qemu binary used in nova-compute. mnaser added the config option so maybe he can share what he's done on his computes. Do you have qemu-system-x86_64 on non-x86 systems? Seems like a package/deploy issue since I'd expect x86 packages shouldn't install on a ppc system and vice versa, and only one qemu package should provide the binary. -- Thanks, Matt From doug at doughellmann.com Tue Aug 7 13:40:20 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 07 Aug 2018 09:40:20 -0400 Subject: [Openstack-operators] Openstack Version discovery with the cli client. In-Reply-To: References: <5B6999F6.9080201@openstack.org> Message-ID: <1533649191-sup-4398@lrrr.local> Excerpts from Saverio Proto's message of 2018-08-07 15:30:48 +0200: > Hello Jimmy, > > thanks for your help. If I understand correctly the answer you linked, > that helps if you operate the cloud and you have access to the > servers. Then of course you can call nova-manage. > > But being a user of a public cloud without having access the the > infrastructure servers ... how do you do that ? Try the "versions show" command: $ openstack versions show It should report the versions of each endpoint it finds. Doug From ralf.teckelmann at bertelsmann.de Tue Aug 7 13:40:59 2018 From: ralf.teckelmann at bertelsmann.de (Teckelmann, Ralf, NMU-OIP) Date: Tue, 7 Aug 2018 13:40:59 +0000 Subject: [Openstack-operators] Openstack Version discovery with the cli client. In-Reply-To: References: <5B6999F6.9080201@openstack.org>, Message-ID: Hello Saverio, I am wondering whats your actual use case. As a public cloud user, you should never depend on the clouds version. It should simply be stable and compatible any time. Best regards Ralf T. ________________________________ Von: Saverio Proto Gesendet: Dienstag, 7. August 2018 15:30:48 An: Jimmy McArthur Cc: OpenStack Operators Betreff: Re: [Openstack-operators] Openstack Version discovery with the cli client. Hello Jimmy, thanks for your help. If I understand correctly the answer you linked, that helps if you operate the cloud and you have access to the servers. Then of course you can call nova-manage. But being a user of a public cloud without having access the the infrastructure servers ... how do you do that ? thanks Saverio Il giorno mar 7 ago 2018 alle ore 15:09 Jimmy McArthur ha scritto: > > Hey Saverio, > > This answer from ask.openstack.org should have what you're looking for: > https://ask.openstack.org/en/question/45513/how-to-find-out-which-version-of-openstack-is-installed/at > > Once you get the release number, you have to look it up here to match > the release date: https://releases.openstack.org/ > > I had to use this the other day when taking the COA. > > Cheers, > Jimmy > > Saverio Proto wrote: > > Hello, > > > > This is maybe a super trivial question bit I have to admit I could not > > figure it out. > > > > Can the user with the openstack cli client discover the version of > > Openstack that is running ? > > > > For example in kubernetes the kubectl version command returns the > > version of the client and the version of the cluster. > > > > For Openstack I never managed to discover the backend version, and > > this could be useful when using public clouds. > > > > Anyone knows how to do that ? > > > > thanks > > > > Saverio > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From lmihaiescu at gmail.com Tue Aug 7 13:47:46 2018 From: lmihaiescu at gmail.com (George Mihaiescu) Date: Tue, 7 Aug 2018 09:47:46 -0400 Subject: [Openstack-operators] Openstack Version discovery with the cli client. In-Reply-To: References: <5B6999F6.9080201@openstack.org> Message-ID: Hi Saverio, I think only the API versions supported by some of the endpoint are discoverable, as described here: https://wiki.openstack.org/wiki/VersionDiscovery curl https://x.x.x.x:9292/image curl https://x.x.x.x:8774/compute Cheers, George On Tue, Aug 7, 2018 at 9:30 AM, Saverio Proto wrote: > Hello Jimmy, > > thanks for your help. If I understand correctly the answer you linked, > that helps if you operate the cloud and you have access to the > servers. Then of course you can call nova-manage. > > But being a user of a public cloud without having access the the > infrastructure servers ... how do you do that ? > > thanks > > Saverio > > > > Il giorno mar 7 ago 2018 alle ore 15:09 Jimmy McArthur > ha scritto: > > > > Hey Saverio, > > > > This answer from ask.openstack.org should have what you're looking for: > > https://ask.openstack.org/en/question/45513/how-to-find- > out-which-version-of-openstack-is-installed/at > > > > Once you get the release number, you have to look it up here to match > > the release date: https://releases.openstack.org/ > > > > I had to use this the other day when taking the COA. > > > > Cheers, > > Jimmy > > > > Saverio Proto wrote: > > > Hello, > > > > > > This is maybe a super trivial question bit I have to admit I could not > > > figure it out. > > > > > > Can the user with the openstack cli client discover the version of > > > Openstack that is running ? > > > > > > For example in kubernetes the kubectl version command returns the > > > version of the client and the version of the cluster. > > > > > > For Openstack I never managed to discover the backend version, and > > > this could be useful when using public clouds. > > > > > > Anyone knows how to do that ? > > > > > > thanks > > > > > > Saverio > > > > > > _______________________________________________ > > > OpenStack-operators mailing list > > > OpenStack-operators at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bitskrieg at bitskrieg.net Tue Aug 7 13:54:46 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Tue, 07 Aug 2018 09:54:46 -0400 Subject: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures? In-Reply-To: <25e24abf-aebc-2881-9981-7f9683ffc700@gmail.com> References: <81d14679c181cdc1a252570529ca5c4b@bitskrieg.net> <25e24abf-aebc-2881-9981-7f9683ffc700@gmail.com> Message-ID: <06029fcf4648d3aa784783389e986a8d@bitskrieg.net> Hey Matt, We don't actually have any non-x86 hardware at the moment - we're just looking to run certain workloads in qemu full emulation mode sans KVM extensions (we know there is a huge performance hit - it's just for a few very specific things). The hosts I'm talking about are normal intel-based compute nodes with several different qemu packages installed (arm, ppc, mips, x86_64 w/ kvm extensions, etc.). Is nova designed to work in this kind of scenario? It seems like many pieces are there, but they're just not quite tied together quite right, or there is some config option I'm missing. Thanks! --- v/r Chris Apsey bitskrieg at bitskrieg.net https://www.bitskrieg.net On 2018-08-07 09:32 AM, Matt Riedemann wrote: > On 8/5/2018 1:43 PM, Chris Apsey wrote: >> Trying to enable some alternate (non-x86) architectures on xenial + >> queens.  I can load up images and set the property correctly according >> to the supported values >> (https://docs.openstack.org/nova/queens/configuration/config.html) in >> image_properties_default_architecture.  From what I can tell, the >> scheduler works correctly and instances are only scheduled on nodes >> that have the correct qemu binary installed.  However, when the >> instance request lands on this node, it always starts it with >> qemu-system-x86_64 rather than qemu-system-arm, qemu-system-ppc, etc.  >> If I manually set the correct binary, everything works as expected. >> >> Am I missing something here, or is this a bug in nova-compute? > > image_properties_default_architecture is only used in the scheduler > filter to pick a compute host, it doesn't do anything about the qemu > binary used in nova-compute. mnaser added the config option so maybe > he can share what he's done on his computes. > > Do you have qemu-system-x86_64 on non-x86 systems? Seems like a > package/deploy issue since I'd expect x86 packages shouldn't install > on a ppc system and vice versa, and only one qemu package should > provide the binary. From zhipengh512 at gmail.com Wed Aug 8 02:08:05 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 8 Aug 2018 10:08:05 +0800 Subject: [Openstack-operators] [publiccloud-wg] Asia-EU friendly meeting today Message-ID: Hi team, A kind reminder for the UTC 7:00 meeting today, please do remember to register yourself to irc due to new channel policy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Wed Aug 8 18:43:33 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 8 Aug 2018 14:43:33 -0400 Subject: [Openstack-operators] getting back onto our IRC channel Message-ID: FWIW I struggled to get back onto IRC following the recent switch of openstack channels to requiring registration. A big part of the problem is I never learned "real IRC" (apart from anything else real IRC is entirely banned where I work). So in case this is useful to anyone else in a similar situation: What eventually worked for me is using the `webchat.freenode.net interface to do the nickname registration mentioned here: https://freenode.net/kb/answer/registration Once I received the resulting email and typed the provided validation command (also on webchat.freenode.net), I was then able to use my credentials to configure my normal client (irccloud in my case) and go to the #openstack-operators channel with my normal nickname. What did not work for me is trying to msg nickserv from the command-line IRC client (irssi). I can't send a message to nickserv without connecting, and (it seems to me anyway) can't connect without a nickname. Seems a bit chicken and egg to this IRC newbie. See you (back) on our IRC channel! Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Aug 8 18:54:21 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 8 Aug 2018 18:54:21 +0000 Subject: [Openstack-operators] getting back onto our IRC channel In-Reply-To: References: Message-ID: <20180808185421.molty6tkvb26lr7q@yuggoth.org> On 2018-08-08 14:43:33 -0400 (-0400), Chris Morgan wrote: [...] > What did not work for me is trying to msg nickserv from the command-line > IRC client (irssi). I can't send a message to nickserv without connecting, > and (it seems to me anyway) can't connect without a nickname. Seems a bit > chicken and egg to this IRC newbie. [...] It's been a while since I used irssi personally (switched to weechat some 5-6 years ago), but it should only have rejected your ability to auto-join OpenStack official IRC channels until you registered/identified and not prevented you from connecting to Freenode. You only need to be in a server buffer and connected to be able to `/msg nickserv ...` but don't need to be in any channels at all so shouldn't be a chicken-and-egg scenario. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mihalis68 at gmail.com Wed Aug 8 19:03:46 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 8 Aug 2018 15:03:46 -0400 Subject: [Openstack-operators] getting back onto our IRC channel In-Reply-To: <20180808185421.molty6tkvb26lr7q@yuggoth.org> References: <20180808185421.molty6tkvb26lr7q@yuggoth.org> Message-ID: I'm sure I'm doing something wrong, but it's really not obvious, hence this email. I tried just "/connect chat.freenode.net" and then "/msg nickserv ..." a few times. I always got dumped and it said "You need to identify via SASL to use this server". Something like that. If I don't connect first, then I just get "Not connected to server" when I try to /msg nickserv chris On Wed, Aug 8, 2018 at 2:57 PM Jeremy Stanley wrote: > On 2018-08-08 14:43:33 -0400 (-0400), Chris Morgan wrote: > [...] > > What did not work for me is trying to msg nickserv from the command-line > > IRC client (irssi). I can't send a message to nickserv without > connecting, > > and (it seems to me anyway) can't connect without a nickname. Seems a bit > > chicken and egg to this IRC newbie. > [...] > > It's been a while since I used irssi personally (switched to weechat > some 5-6 years ago), but it should only have rejected your ability > to auto-join OpenStack official IRC channels until you > registered/identified and not prevented you from connecting to > Freenode. You only need to be in a server buffer and connected to be > able to `/msg nickserv ...` but don't need to be in any channels at > all so shouldn't be a chicken-and-egg scenario. > -- > Jeremy Stanley > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Aug 8 19:07:29 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 8 Aug 2018 14:07:29 -0500 Subject: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures? In-Reply-To: <06029fcf4648d3aa784783389e986a8d@bitskrieg.net> References: <81d14679c181cdc1a252570529ca5c4b@bitskrieg.net> <25e24abf-aebc-2881-9981-7f9683ffc700@gmail.com> <06029fcf4648d3aa784783389e986a8d@bitskrieg.net> Message-ID: <26839d31-18b8-ba76-56cc-8bbe4b73fc37@gmail.com> On 8/7/2018 8:54 AM, Chris Apsey wrote: > We don't actually have any non-x86 hardware at the moment - we're just > looking to run certain workloads in qemu full emulation mode sans KVM > extensions (we know there is a huge performance hit - it's just for a > few very specific things).  The hosts I'm talking about are normal > intel-based compute nodes with several different qemu packages installed > (arm, ppc, mips, x86_64 w/ kvm extensions, etc.). > > Is nova designed to work in this kind of scenario?  It seems like many > pieces are there, but they're just not quite tied together quite right, > or there is some config option I'm missing. As far as I know, nova doesn't make anything arch-specific for QEMU. Nova will execute some qemu commands like qemu-img but as far as the virt driver, it goes through the libvirt-python API bindings which wrap over libvirtd which interfaces with QEMU. I would expect that if you're on an x86_64 arch host, that you can't have non-x86_64 packages installed on there (or they are noarch packages). Like, I don't know how your packaging works (are these rpms or debs, or other?) but how do you have ppc packages installed on an x86 system? -- Thanks, Matt From iain.macdonnell at oracle.com Wed Aug 8 19:23:17 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Wed, 8 Aug 2018 12:23:17 -0700 Subject: [Openstack-operators] getting back onto our IRC channel In-Reply-To: References: <20180808185421.molty6tkvb26lr7q@yuggoth.org> Message-ID: <5b06c5b9-19a7-6d46-5739-7ba1c21c775d@oracle.com> According to: https://superuser.com/questions/1220409/irc-how-to-register-on-freenode-using-hexchat-when-i-get-disconnected-immediat there's a blacklist of source address ranges from which SASL auth/e is required..... ~iain On 08/08/2018 12:03 PM, Chris Morgan wrote: > I'm sure I'm doing something wrong, but it's really not obvious, hence > this email. > > I tried just "/connect chat.freenode.net > " > and then "/msg nickserv ..." a few times. I always got dumped and it > said "You need to identify via SASL to use this server". Something like > that. > > If I don't connect first, then I just get "Not connected to server" when > I try to /msg nickserv > > chris > > On Wed, Aug 8, 2018 at 2:57 PM Jeremy Stanley > wrote: > > On 2018-08-08 14:43:33 -0400 (-0400), Chris Morgan wrote: > [...] > > What did not work for me is trying to msg nickserv from the > command-line > > IRC client (irssi). I can't send a message to nickserv without > connecting, > > and (it seems to me anyway) can't connect without a nickname. > Seems a bit > > chicken and egg to this IRC newbie. > [...] > > It's been a while since I used irssi personally (switched to weechat > some 5-6 years ago), but it should only have rejected your ability > to auto-join OpenStack official IRC channels until you > registered/identified and not prevented you from connecting to > Freenode. You only need to be in a server buffer and connected to be > able to `/msg nickserv ...` but don't need to be in any channels at > all so shouldn't be a chicken-and-egg scenario. > -- > Jeremy Stanley > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > -- > Chris Morgan > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=XitLUTp1htaMQO9yd3X4qTLgEaEKYUScTKuga61xBnM&s=pOs2IpLof7IqciYxf2K2rTsQ9jqCKkIAlL_mvXqqCDo&e= > From bitskrieg at bitskrieg.net Wed Aug 8 19:42:13 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Wed, 08 Aug 2018 15:42:13 -0400 Subject: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures? In-Reply-To: <26839d31-18b8-ba76-56cc-8bbe4b73fc37@gmail.com> References: <81d14679c181cdc1a252570529ca5c4b@bitskrieg.net> <25e24abf-aebc-2881-9981-7f9683ffc700@gmail.com> <06029fcf4648d3aa784783389e986a8d@bitskrieg.net> <26839d31-18b8-ba76-56cc-8bbe4b73fc37@gmail.com> Message-ID: Matt, qemu-system-arm, qemu-system-ppc64, etc. in our environment are all x86 packages, but they perform system-mode emulation (via dynamic instruction translation) for those target environments. So, you run qemu-system-ppc64 on an x86 host in order to get a ppc64-emulated VM. Our use case is specifically directed at reverse engineering binaries and fuzzing for vulnerabilities inside of those architectures for things that aren't built for x86, but there are others. If you were to apt-get install qemu-system and then hit autocomplete, you'd get a list of archiectures that qemu can emulate on x86 hardware - that's what we're trying to do incorporate. We still want to run normal qemu-x86 with KVM virtualization extensions, but we ALSO want to run the other emulators without the KVM virtualization extensions in order to have more choice for target environments. So to me, openstack would interpret this by checking to see if a target host supports the architecture specified in the image (it does this correctly), then it would choose the correct qemu-system-xx for spawning the instance based on the architecture flag of the image, which it currently does not (it always choose qemu-system-x86_64). Does that make sense? Chris --- v/r Chris Apsey bitskrieg at bitskrieg.net https://www.bitskrieg.net On 2018-08-08 03:07 PM, Matt Riedemann wrote: > On 8/7/2018 8:54 AM, Chris Apsey wrote: >> We don't actually have any non-x86 hardware at the moment - we're just >> looking to run certain workloads in qemu full emulation mode sans KVM >> extensions (we know there is a huge performance hit - it's just for a >> few very specific things).  The hosts I'm talking about are normal >> intel-based compute nodes with several different qemu packages >> installed (arm, ppc, mips, x86_64 w/ kvm extensions, etc.). >> >> Is nova designed to work in this kind of scenario?  It seems like many >> pieces are there, but they're just not quite tied together quite >> right, or there is some config option I'm missing. > > As far as I know, nova doesn't make anything arch-specific for QEMU. > Nova will execute some qemu commands like qemu-img but as far as the > virt driver, it goes through the libvirt-python API bindings which > wrap over libvirtd which interfaces with QEMU. I would expect that if > you're on an x86_64 arch host, that you can't have non-x86_64 packages > installed on there (or they are noarch packages). Like, I don't know > how your packaging works (are these rpms or debs, or other?) but how > do you have ppc packages installed on an x86 system? From mrhillsman at gmail.com Wed Aug 8 19:50:41 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 8 Aug 2018 14:50:41 -0500 Subject: [Openstack-operators] getting back onto our IRC channel In-Reply-To: <5b06c5b9-19a7-6d46-5739-7ba1c21c775d@oracle.com> References: <20180808185421.molty6tkvb26lr7q@yuggoth.org> <5b06c5b9-19a7-6d46-5739-7ba1c21c775d@oracle.com> Message-ID: I just tried irssi on MacOS without any other stuff added to the config just a fresh install and I am able to talk to NickServ and appears I can register the nick if I so choose. Quite possible your IP or an IP along your path of talking to Freenode is blacklisted Chris. On Wed, Aug 8, 2018 at 2:23 PM iain MacDonnell wrote: > > According to: > > > https://superuser.com/questions/1220409/irc-how-to-register-on-freenode-using-hexchat-when-i-get-disconnected-immediat > > there's a blacklist of source address ranges from which SASL auth/e is > required..... > > ~iain > > > > On 08/08/2018 12:03 PM, Chris Morgan wrote: > > I'm sure I'm doing something wrong, but it's really not obvious, hence > > this email. > > > > I tried just "/connect chat.freenode.net > > < > https://urldefense.proofpoint.com/v2/url?u=http-3A__chat.freenode.net&d=DwMFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=XitLUTp1htaMQO9yd3X4qTLgEaEKYUScTKuga61xBnM&s=hZAfF36UBBrPqqDYxYnjUXSWNOyfTga0P_lzOaA1Ax0&e=>" > > > and then "/msg nickserv ..." a few times. I always got dumped and it > > said "You need to identify via SASL to use this server". Something like > > that. > > > > If I don't connect first, then I just get "Not connected to server" when > > I try to /msg nickserv > > > > chris > > > > On Wed, Aug 8, 2018 at 2:57 PM Jeremy Stanley > > wrote: > > > > On 2018-08-08 14:43:33 -0400 (-0400), Chris Morgan wrote: > > [...] > > > What did not work for me is trying to msg nickserv from the > > command-line > > > IRC client (irssi). I can't send a message to nickserv without > > connecting, > > > and (it seems to me anyway) can't connect without a nickname. > > Seems a bit > > > chicken and egg to this IRC newbie. > > [...] > > > > It's been a while since I used irssi personally (switched to weechat > > some 5-6 years ago), but it should only have rejected your ability > > to auto-join OpenStack official IRC channels until you > > registered/identified and not prevented you from connecting to > > Freenode. You only need to be in a server buffer and connected to be > > able to `/msg nickserv ...` but don't need to be in any channels at > > all so shouldn't be a chicken-and-egg scenario. > > -- > > Jeremy Stanley > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > < > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwMFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=XitLUTp1htaMQO9yd3X4qTLgEaEKYUScTKuga61xBnM&s=pOs2IpLof7IqciYxf2K2rTsQ9jqCKkIAlL_mvXqqCDo&e= > > > > > > > > > > -- > > Chris Morgan > > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Doperators&d=DwIGaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=XitLUTp1htaMQO9yd3X4qTLgEaEKYUScTKuga61xBnM&s=pOs2IpLof7IqciYxf2K2rTsQ9jqCKkIAlL_mvXqqCDo&e= > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Aug 8 20:53:56 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 8 Aug 2018 20:53:56 +0000 Subject: [Openstack-operators] getting back onto our IRC channel In-Reply-To: References: <20180808185421.molty6tkvb26lr7q@yuggoth.org> Message-ID: <20180808205356.3jxjtodndica4syf@yuggoth.org> On 2018-08-08 15:03:46 -0400 (-0400), Chris Morgan wrote: > I'm sure I'm doing something wrong, but it's really not obvious, > hence this email. > > I tried just "/connect chat.freenode.net" and then "/msg nickserv > ..." a few times. I always got dumped and it said "You need to > identify via SASL to use this server". Something like that. > > If I don't connect first, then I just get "Not connected to > server" when I try to /msg nickserv [...] It does indeed sound like you might be caught up in the aforementioned SASL-only network blacklist (I wasn't even aware of it until this ML thread) which Freenode staff seem to be using to help block the spam onslaught from some known parts of the Internet. Your workaround with the webclient is precisely what I would have recommended in such situations too. This is entirely independent from the Infra team measure to require nick registration in official OpenStack IRC channels, but certainly another symptom of the same fundamental problem. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Wed Aug 8 22:40:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 8 Aug 2018 17:40:01 -0500 Subject: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures? In-Reply-To: References: <81d14679c181cdc1a252570529ca5c4b@bitskrieg.net> <25e24abf-aebc-2881-9981-7f9683ffc700@gmail.com> <06029fcf4648d3aa784783389e986a8d@bitskrieg.net> <26839d31-18b8-ba76-56cc-8bbe4b73fc37@gmail.com> Message-ID: <34763ede-45a3-2d22-37a1-c3fc75ea84d2@gmail.com> On 8/8/2018 2:42 PM, Chris Apsey wrote: > qemu-system-arm, qemu-system-ppc64, etc. in our environment are all x86 > packages, but they perform system-mode emulation (via dynamic > instruction translation) for those target environments.  So, you run > qemu-system-ppc64 on an x86 host in order to get a ppc64-emulated VM. > Our use case is specifically directed at reverse engineering binaries > and fuzzing for vulnerabilities inside of those architectures for things > that aren't built for x86, but there are others. > > If you were to apt-get install qemu-system and then hit autocomplete, > you'd get a list of archiectures that qemu can emulate on x86 hardware - > that's what we're trying to do incorporate.  We still want to run normal > qemu-x86 with KVM virtualization extensions, but we ALSO want to run the > other emulators without the KVM virtualization extensions in order to > have more choice for target environments. > > So to me, openstack would interpret this by checking to see if a target > host supports the architecture specified in the image (it does this > correctly), then it would choose the correct qemu-system-xx for spawning > the instance based on the architecture flag of the image, which it > currently does not (it always choose qemu-system-x86_64). > > Does that make sense? OK yeah now I'm following you - running ppc guests on an x86 host (virt_type=qemu rather than kvm right?). I would have thought the hw_architecture image property was used for this somehow to configure the arch in the guest xml properly, like it's used in a few places [1][2][3]. See [4], I'd think we'd set the guest.arch but don't see that happening. We do set the guest.os_type though [5]. [1] https://github.com/openstack/nova/blob/c18b1c1bd646d7cefa3d3e4b25ce59460d1a6ebc/nova/virt/libvirt/driver.py#L4649 [2] https://github.com/openstack/nova/blob/c18b1c1bd646d7cefa3d3e4b25ce59460d1a6ebc/nova/virt/libvirt/driver.py#L4927 [3] https://github.com/openstack/nova/blob/c18b1c1bd646d7cefa3d3e4b25ce59460d1a6ebc/nova/virt/libvirt/blockinfo.py#L257 [4] https://libvirt.org/formatcaps.html#elementGuest [5] https://github.com/openstack/nova/blob/c18b1c1bd646d7cefa3d3e4b25ce59460d1a6ebc/nova/virt/libvirt/driver.py#L5196 -- Thanks, Matt From thierry at openstack.org Thu Aug 9 07:56:47 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 9 Aug 2018 09:56:47 +0200 Subject: [Openstack-operators] getting back onto our IRC channel In-Reply-To: <20180808205356.3jxjtodndica4syf@yuggoth.org> References: <20180808185421.molty6tkvb26lr7q@yuggoth.org> <20180808205356.3jxjtodndica4syf@yuggoth.org> Message-ID: <69d832be-cc18-e5c8-6fe4-b445797657a0@openstack.org> Jeremy Stanley wrote: > [...] > It does indeed sound like you might be caught up in the > aforementioned SASL-only network blacklist (I wasn't even aware of > it until this ML thread) which Freenode staff seem to be using to > help block the spam onslaught from some known parts of the Internet. > [...] Yes, the Freenode blacklist blocks most cloud providers IP blocks. My own instance (running on an OpenStack public cloud) is also required to use SASL. -- Thierry Carrez (ttx) From bitskrieg at bitskrieg.net Thu Aug 9 11:03:14 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Thu, 09 Aug 2018 07:03:14 -0400 Subject: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures? In-Reply-To: <34763ede-45a3-2d22-37a1-c3fc75ea84d2@gmail.com> References: <81d14679c181cdc1a252570529ca5c4b@bitskrieg.net> <25e24abf-aebc-2881-9981-7f9683ffc700@gmail.com> <06029fcf4648d3aa784783389e986a8d@bitskrieg.net> <26839d31-18b8-ba76-56cc-8bbe4b73fc37@gmail.com> <34763ede-45a3-2d22-37a1-c3fc75ea84d2@gmail.com> Message-ID: <4fbe5786f0765d97229147cc1137a6ce@bitskrieg.net> Exactly. And I agree, it seems like hw_architecture should dictate which emulator is chosen, but as you mentioned its currently not. I'm not sure if this is a bug and it's supposed to 'just work', or just something that was never fully implemented (intentionally) and would be more of a feature request/suggestion for a later version. The docs are kind of sparse in this area. What are your thoughts? I can open a bug if you think the scope is reasonable. --- v/r Chris Apsey bitskrieg at bitskrieg.net https://www.bitskrieg.net On 2018-08-08 06:40 PM, Matt Riedemann wrote: > On 8/8/2018 2:42 PM, Chris Apsey wrote: >> qemu-system-arm, qemu-system-ppc64, etc. in our environment are all >> x86 packages, but they perform system-mode emulation (via dynamic >> instruction translation) for those target environments.  So, you run >> qemu-system-ppc64 on an x86 host in order to get a ppc64-emulated VM. >> Our use case is specifically directed at reverse engineering binaries >> and fuzzing for vulnerabilities inside of those architectures for >> things that aren't built for x86, but there are others. >> >> If you were to apt-get install qemu-system and then hit autocomplete, >> you'd get a list of archiectures that qemu can emulate on x86 hardware >> - that's what we're trying to do incorporate.  We still want to run >> normal qemu-x86 with KVM virtualization extensions, but we ALSO want >> to run the other emulators without the KVM virtualization extensions >> in order to have more choice for target environments. >> >> So to me, openstack would interpret this by checking to see if a >> target host supports the architecture specified in the image (it does >> this correctly), then it would choose the correct qemu-system-xx for >> spawning the instance based on the architecture flag of the image, >> which it currently does not (it always choose qemu-system-x86_64). >> >> Does that make sense? > > OK yeah now I'm following you - running ppc guests on an x86 host > (virt_type=qemu rather than kvm right?). > > I would have thought the hw_architecture image property was used for > this somehow to configure the arch in the guest xml properly, like > it's used in a few places [1][2][3]. > > See [4], I'd think we'd set the guest.arch but don't see that > happening. We do set the guest.os_type though [5]. > > [1] > https://github.com/openstack/nova/blob/c18b1c1bd646d7cefa3d3e4b25ce59460d1a6ebc/nova/virt/libvirt/driver.py#L4649 > [2] > https://github.com/openstack/nova/blob/c18b1c1bd646d7cefa3d3e4b25ce59460d1a6ebc/nova/virt/libvirt/driver.py#L4927 > [3] > https://github.com/openstack/nova/blob/c18b1c1bd646d7cefa3d3e4b25ce59460d1a6ebc/nova/virt/libvirt/blockinfo.py#L257 > [4] https://libvirt.org/formatcaps.html#elementGuest > [5] > https://github.com/openstack/nova/blob/c18b1c1bd646d7cefa3d3e4b25ce59460d1a6ebc/nova/virt/libvirt/driver.py#L5196 From zioproto at gmail.com Thu Aug 9 11:18:15 2018 From: zioproto at gmail.com (Saverio Proto) Date: Thu, 9 Aug 2018 13:18:15 +0200 Subject: [Openstack-operators] Openstack Version discovery with the cli client. In-Reply-To: References: <5B6999F6.9080201@openstack.org> Message-ID: Thanks ! I think the command I was looking for is: "openstack versions show" But for example for Neutron I get just version v2.0 from Newton to Pike, that tells me very little. The use case is when testing Kubernetes on Openstack, a lot of kubernetes users cannot tell easily the version of Openstack they are testing on. Because things like the LBaaS are so different between Openstack releases that version v2.0 tells too little. Often it is good to know what is the version of the Openstack cloud to identify bugs on launchpad. Cheers, Saverio Il giorno mar 7 ago 2018 alle ore 15:48 George Mihaiescu ha scritto: > > Hi Saverio, > > I think only the API versions supported by some of the endpoint are discoverable, as described here: https://wiki.openstack.org/wiki/VersionDiscovery > > curl https://x.x.x.x:9292/image > curl https://x.x.x.x:8774/compute > > > Cheers, > George > > On Tue, Aug 7, 2018 at 9:30 AM, Saverio Proto wrote: >> >> Hello Jimmy, >> >> thanks for your help. If I understand correctly the answer you linked, >> that helps if you operate the cloud and you have access to the >> servers. Then of course you can call nova-manage. >> >> But being a user of a public cloud without having access the the >> infrastructure servers ... how do you do that ? >> >> thanks >> >> Saverio >> >> >> >> Il giorno mar 7 ago 2018 alle ore 15:09 Jimmy McArthur >> ha scritto: >> > >> > Hey Saverio, >> > >> > This answer from ask.openstack.org should have what you're looking for: >> > https://ask.openstack.org/en/question/45513/how-to-find-out-which-version-of-openstack-is-installed/at >> > >> > Once you get the release number, you have to look it up here to match >> > the release date: https://releases.openstack.org/ >> > >> > I had to use this the other day when taking the COA. >> > >> > Cheers, >> > Jimmy >> > >> > Saverio Proto wrote: >> > > Hello, >> > > >> > > This is maybe a super trivial question bit I have to admit I could not >> > > figure it out. >> > > >> > > Can the user with the openstack cli client discover the version of >> > > Openstack that is running ? >> > > >> > > For example in kubernetes the kubectl version command returns the >> > > version of the client and the version of the cluster. >> > > >> > > For Openstack I never managed to discover the backend version, and >> > > this could be useful when using public clouds. >> > > >> > > Anyone knows how to do that ? >> > > >> > > thanks >> > > >> > > Saverio >> > > >> > > _______________________________________________ >> > > OpenStack-operators mailing list >> > > OpenStack-operators at lists.openstack.org >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > From fungi at yuggoth.org Thu Aug 9 16:38:03 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 9 Aug 2018 16:38:03 +0000 Subject: [Openstack-operators] getting back onto our IRC channel In-Reply-To: <69d832be-cc18-e5c8-6fe4-b445797657a0@openstack.org> References: <20180808185421.molty6tkvb26lr7q@yuggoth.org> <20180808205356.3jxjtodndica4syf@yuggoth.org> <69d832be-cc18-e5c8-6fe4-b445797657a0@openstack.org> Message-ID: <20180809163803.dx66apeztqbq5gzv@yuggoth.org> On 2018-08-09 09:56:47 +0200 (+0200), Thierry Carrez wrote: > Jeremy Stanley wrote: > > [...] > > It does indeed sound like you might be caught up in the > > aforementioned SASL-only network blacklist (I wasn't even aware of > > it until this ML thread) which Freenode staff seem to be using to > > help block the spam onslaught from some known parts of the Internet. > > [...] > > Yes, the Freenode blacklist blocks most cloud providers IP blocks. My own > instance (running on an OpenStack public cloud) is also required to use > SASL. That's very useful detail--thanks! I simply didn't notice it because I've been authenticating via SASL for years already on any IRC networks which support that, but I can definitely see how that might catch a bunch of our participants off-guard (especially given our "cloud" focus in this community). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From JohnMP at cardiff.ac.uk Thu Aug 9 17:01:50 2018 From: JohnMP at cardiff.ac.uk (Matthew John) Date: Thu, 9 Aug 2018 17:01:50 +0000 Subject: [Openstack-operators] Provider Networks Help Message-ID: Hi, I have setup an OpenStack cloud using OpenStack-Ansible but having issues setting up networking. I need to have two external networks e.g. 10.1.0.0/24 & 10.2.0.0/24 that are accessible to instances. I have two bridges, br-ex and br-ex2, setup on the infra nodes that are able to access the 10.1.0.0/24 & 10.2.0.0/24 ranges respectively. I am guessing that I need to add another two flat networks to the provider_networks? The current provider_networks configuration is: provider_networks: - network: container_bridge: "br-mgmt" container_type: "veth" container_interface: "eth1" ip_from_q: "container" type: "raw" group_binds: - all_containers - hosts is_container_address: true is_ssh_address: true - network: container_bridge: "br-vxlan" container_type: "veth" container_interface: "eth10" ip_from_q: "tunnel" type: "vxlan" range: "1:1000" net_name: "vxlan" group_binds: - neutron_linuxbridge_agent - network: container_bridge: "br-vlan" container_type: "veth" container_interface: "eth12" host_bind_override: "eth12" type: "flat" net_name: "flat" group_binds: - neutron_linuxbridge_agent - network: container_bridge: "br-vlan" container_type: "veth" container_interface: "eth11" type: "vlan" range: "101:200,301:400" net_name: "vlan" group_binds: - neutron_linuxbridge_agent - network: container_bridge: "br-storage" container_type: "veth" container_interface: "eth2" ip_from_q: "storage" type: "raw" group_binds: - glance_api - cinder_api - cinder_volume - nova_compute Cheers, Matt --- Dr Matt John Engineer (Service Delivery - COMSC) School of Computer Science & Informatics Cardiff University, 5 The Parade, Cardiff, CF24 3AA Tel: +44 2920 876536 JohnMP at cardiff.ac.uk The University welcomes correspondence in Welsh or English. Corresponding in Welsh will not lead to any delay. Dr Matt John Peiriannydd (Cyflwyno Gwasanaeth - COMSC) Ysgol Cyfrifiadureg a Gwybodeg Prifysgol Caerdydd, 5 The Parade, Caerdydd, CF24 3AA Ffôn : +44 2920 876536 JohnMP at caerdydd.ac.uk Mae'r Brifysgol yn croesawu gohebiaeth yn Gymraeg neu'n Saesneg. Ni fydd gohebu yn Gymraeg yn creu unrhyw oedi. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Aug 9 17:14:56 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 9 Aug 2018 12:14:56 -0500 Subject: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures? In-Reply-To: <4fbe5786f0765d97229147cc1137a6ce@bitskrieg.net> References: <81d14679c181cdc1a252570529ca5c4b@bitskrieg.net> <25e24abf-aebc-2881-9981-7f9683ffc700@gmail.com> <06029fcf4648d3aa784783389e986a8d@bitskrieg.net> <26839d31-18b8-ba76-56cc-8bbe4b73fc37@gmail.com> <34763ede-45a3-2d22-37a1-c3fc75ea84d2@gmail.com> <4fbe5786f0765d97229147cc1137a6ce@bitskrieg.net> Message-ID: On 8/9/2018 6:03 AM, Chris Apsey wrote: > Exactly.  And I agree, it seems like hw_architecture should dictate > which emulator is chosen, but as you mentioned its currently not.  I'm > not sure if this is a bug and it's supposed to 'just work', or just > something that was never fully implemented (intentionally) and would be > more of a feature request/suggestion for a later version.  The docs are > kind of sparse in this area. > > What are your thoughts?  I can open a bug if you think the scope is > reasonable. I'm not sure if this is a bug or a feature, or if there are reasons why it's never been done. I'm gonna have to rope in Kashyap and danpb since they'd likely know more. Dan/Kaskyap: tl;dr why doesn't the nova libvirt driver, configured for qemu, set the guest.arch based on the hw_architecture image property so that you can run ppc guests in an x86 host? -- Thanks, Matt From bitskrieg at bitskrieg.net Fri Aug 10 16:50:09 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Fri, 10 Aug 2018 12:50:09 -0400 Subject: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures? In-Reply-To: <20180809172447.GB19251@redhat.com> References: <81d14679c181cdc1a252570529ca5c4b@bitskrieg.net> <25e24abf-aebc-2881-9981-7f9683ffc700@gmail.com> <06029fcf4648d3aa784783389e986a8d@bitskrieg.net> <26839d31-18b8-ba76-56cc-8bbe4b73fc37@gmail.com> <34763ede-45a3-2d22-37a1-c3fc75ea84d2@gmail.com> <4fbe5786f0765d97229147cc1137a6ce@bitskrieg.net> <20180809172447.GB19251@redhat.com> Message-ID: <16524beca00.2784.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net> This sounds promising and there seems to be a feasible way to do this, but it also sounds like a decent amount of effort and would be a new feature in a future release rather than a bugfix - am I correct in that assessment? On August 9, 2018 13:30:31 "Daniel P. Berrangé" wrote: > On Thu, Aug 09, 2018 at 12:14:56PM -0500, Matt Riedemann wrote: >> On 8/9/2018 6:03 AM, Chris Apsey wrote: >>> Exactly. And I agree, it seems like hw_architecture should dictate >>> which emulator is chosen, but as you mentioned its currently not. I'm >>> not sure if this is a bug and it's supposed to 'just work', or just >>> something that was never fully implemented (intentionally) and would be >>> more of a feature request/suggestion for a later version. The docs are >>> kind of sparse in this area. >>> >>> What are your thoughts? I can open a bug if you think the scope is >>> reasonable. >> >> I'm not sure if this is a bug or a feature, or if there are reasons why it's >> never been done. I'm gonna have to rope in Kashyap and danpb since they'd >> likely know more. >> >> Dan/Kaskyap: tl;dr why doesn't the nova libvirt driver, configured for qemu, >> set the guest.arch based on the hw_architecture image property so that you >> can run ppc guests in an x86 host? > > Yes, it should do exactly that IMHO ! > > The main caveat is that a hell of alot of code in libvirt assumes that > guest arch == host arch. ie when building guest XML there's lots of code > that looks at caps.host.cpu.arch to decide how to configure the guest. > This all needs fixing to look at the guest.arch value instead, having > set that from hw_architecture prop. > > Nova libvirt driver is already reporting that it is capable of running > guest with multiple arches (the _get_instance_capaiblities method in > nova/virt/libvirt/driver.py). > > The only other thing is that you likely want to distinguish between > hosts that can do PPC64 via KVM vs those that can only do it via > emulation, so you don't get guests randomly placed on slow vs fast > hosts. Some kind of scheduler filter / weighting can do that based > on info already reported from the compute host I expect. > > > Regards, > Daniel > -- > |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| > |: https://libvirt.org -o- https://fstop138.berrange.com :| > |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :| From lijie at unitedstack.com Mon Aug 13 13:30:01 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Mon, 13 Aug 2018 21:30:01 +0800 Subject: [Openstack-operators] [openstack-dev] [nova] deployment question consultation Message-ID: Hi,all I have some questions about deploy the large scale openstack cloud.Such as 1.Only in one region situation,what will happen in the cloud as expansion of cluster size?Then how solve it?If have the limit physical node number under the one region situation?How many nodes would be the best in one regione? 2.When to use cellV2 is most suitable in cloud? 3.How to shorten the time of batch creation of instance? Can you tell me more about these combined with own practice? Would you give me some methods to learn it?Such as the website,blog and so on. Thank you very much!Looking forward to hearing from you. Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Aug 13 13:53:03 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 13 Aug 2018 08:53:03 -0500 Subject: [Openstack-operators] Speaker Selection Process: OpenStack Summit Berlin Message-ID: <5B718D3F.9030202@openstack.org> Greetings! The speakers for the OpenStack Summit Berlin will be announced August 14, at 4:00 AM UTC. Ahead of that, we want to take this opportunity to thank our Programming Committee! They have once again taken time out of their busy schedules to help create another round of outstanding content for the OpenStack Summit. The OpenStack Foundation relies on the community-nominated Programming Committee, along with your Community Votes to select the content of the summit. If you're curious about this process, you can read more about it here where we have also listed the Programming Committee members. If you'd like to nominate yourself or someone you know for the OpenStack Summit Denver Programming Committee, you can do so here: * *https://openstackfoundation.formstack.com/forms/openstackdenver2019_programmingcommitteenom Thanks a bunch and we look forward to seeing everyone in Berlin! Cheers, Jimmy * * -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openstack.org Mon Aug 13 13:59:31 2018 From: allison at openstack.org (Allison Price) Date: Mon, 13 Aug 2018 08:59:31 -0500 Subject: [Openstack-operators] [openstack-dev] Speaker Selection Process: OpenStack Summit Berlin In-Reply-To: <5B718D3F.9030202@openstack.org> References: <5B718D3F.9030202@openstack.org> Message-ID: <5B515018-FDF4-49D8-89F0-DC3C8ED942CF@openstack.org> Hi everyone, One quick clarification. The speakers will be announced on August 14 at 1300 UTC / 4:00 AM PDT. Cheers, Allison > On Aug 13, 2018, at 8:53 AM, Jimmy McArthur wrote: > > Greetings! > > The speakers for the OpenStack Summit Berlin will be announced August 14, at 4:00 AM UTC. Ahead of that, we want to take this opportunity to thank our Programming Committee! They have once again taken time out of their busy schedules to help create another round of outstanding content for the OpenStack Summit. > > The OpenStack Foundation relies on the community-nominated Programming Committee, along with your Community Votes to select the content of the summit. If you're curious about this process, you can read more about it here where we have also listed the Programming Committee members. > > If you'd like to nominate yourself or someone you know for the OpenStack Summit Denver Programming Committee, you can do so here: > https://openstackfoundation.formstack.com/forms/openstackdenver2019_programmingcommitteenom > > Thanks a bunch and we look forward to seeing everyone in Berlin! > > Cheers, > Jimmy > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at nycresistor.com Mon Aug 13 14:20:05 2018 From: matt at nycresistor.com (Matt Joyce) Date: Mon, 13 Aug 2018 10:20:05 -0400 Subject: [Openstack-operators] [openstack-dev] Speaker Selection Process: OpenStack Summit Berlin In-Reply-To: <5B515018-FDF4-49D8-89F0-DC3C8ED942CF@openstack.org> References: <5B718D3F.9030202@openstack.org> <5B515018-FDF4-49D8-89F0-DC3C8ED942CF@openstack.org> Message-ID: CFP work is hard as hell. Much respect to the review panel members. It's a thankless difficult job. So, in lieu of being thankless, THANK YOU -Matt On Mon, Aug 13, 2018 at 9:59 AM, Allison Price wrote: > Hi everyone, > > One quick clarification. The speakers will be announced on* August 14 at > 1300 UTC / 4:00 AM PDT.* > > Cheers, > Allison > > > On Aug 13, 2018, at 8:53 AM, Jimmy McArthur wrote: > > Greetings! > > The speakers for the OpenStack Summit Berlin will be announced August 14, > at 4:00 AM UTC. Ahead of that, we want to take this opportunity to thank > our Programming Committee! They have once again taken time out of their > busy schedules to help create another round of outstanding content for the > OpenStack Summit. > > The OpenStack Foundation relies on the community-nominated Programming > Committee, along with your Community Votes to select the content of the > summit. If you're curious about this process, you can read more about it > here > > where we have also listed the Programming Committee members. > > If you'd like to nominate yourself or someone you know for the OpenStack > Summit Denver Programming Committee, you can do so here: > https://openstackfoundation.formstack.com/forms/openstackdenver2019_ > programmingcommitteenom > > Thanks a bunch and we look forward to seeing everyone in Berlin! > > Cheers, > Jimmy > > > > > * > * > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Mon Aug 13 14:27:13 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 13 Aug 2018 09:27:13 -0500 Subject: [Openstack-operators] User Committee Election Nominations Reminder Message-ID: Just wanted to remind everyone that the nomination period for the User Committee elections are open until August 17, 05:59 UTC. If you are an AUC and thinking about running what's stopping you? If you know of someone who would make a great committee member nominate them! Help make a difference for Operators, Users and the Community! Thanks, Amy Marrich (spotz) User Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Mon Aug 13 23:10:33 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 13 Aug 2018 18:10:33 -0500 Subject: [Openstack-operators] =?utf-8?q?=28no_subject=29?= Message-ID: Hi everyone, If you’re running OpenStack, please participate in the User Survey to share more about the technology you are using and provide feedback for the community by *August 21 - hurry, it’s next week!!* By completing a deployment, you will qualify as an AUC and receive a $300 USD ticket to the two upcoming Summits. Please help us spread the word. a we're trying to gather as much real-world deployment data as possible to share back with both the operator and developer communities. We are only conducting one survey this year, and the report will be published at the Berlin Summit . II you would like OpenStack user data in the meantime, check out the analytics dashboard updates in real time, throughout the year. The information provided is confidential and will only be presented in aggregate unless you consent to make it public. The deadline to complete the survey and be part of the next report is next *Tuesday, August 21** at 23:59 UTC.* - You can login and complete the OpenStack User Survey here: http://www.openstack.org/user-survey - If you’re interested in joining the OpenStack User Survey Working Group to help with the survey analysis, please complete this form: https://openstackfoundation.formstack.com/forms/user_survey_working_group - Help us promote the User Survey: https://twitter.com/Op enStack/status/993589356312088577 Please let me know if you have any questions. Thanks, Amy Amy Marrich (spotz) OpenStack User Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Tue Aug 14 11:04:56 2018 From: ashlee at openstack.org (Ashlee Ferguson) Date: Tue, 14 Aug 2018 06:04:56 -0500 Subject: [Openstack-operators] Berlin Summit Schedule Live! Message-ID: <4008CE47-25A0-43B1-8FEF-D2CD52330C7C@openstack.org> The schedule for the Berlin Summit is live! Check out 100+ sessions, demos, and workshops covering 35+ open source projects in the following Tracks: • CI/CD • Container Infrastructure • Edge Computing • HPC / GPU / AI • Private & Hybrid Cloud • Public Cloud • Telecom & NFV Log in with your OpenStackID and start building your schedule now! Register for the Summit - Get your Summit ticket for USD $699 before the price increases on August 21 at 11:59pm PT (August 22 at 6:59 UTC) For speakers with accepted sessions, look for an email from speakersupport at openstack.org for next steps on registration. Thank you to our Programming Committee! They have once again taken time out of their busy schedules to help create another round of outstanding content for the OpenStack Summit. The OpenStack Foundation relies on the community-nominated Programming Committee, along with your Community Votes to select the content of the summit. If you're curious about this process, you can read more about it here where we have also listed the Programming Committee members. Interested in sponsoring the Berlin Summit? Learn more here Cheers, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Tue Aug 14 11:54:16 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 14 Aug 2018 13:54:16 +0200 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS Message-ID: Hi guys, I continue to work on my Octavia integration using Kolla-Ansible and I'm facing a strange behavior. As for now I'm working on a POC using restricted HW and SW Capacities, I'm facing a strange issue when trying to launch a new load-balancer. When I create a new LB, would it be using CLI or WebUI, the amphora immediately disappear and the LB status switch to ERROR. When looking at logs and especially Worker logs, I see that the error seems to be related to the fact that the worker can't connect to the amphora because of a TLS Handshake issue which so trigger the contact timeout and rollback the amphora creation. Here is the worker.log relevant trace: *2018-08-07 07:33:57.108 24 INFO octavia.controller.queue.endpoint [-] Creating load balancer 'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'...2018-08-07 07:33:57.220 24 INFO octavia.controller.worker.tasks.database_tasks [-] Created Amphora in DB with id c20af002-1576-446e-b99f-7af607b8d8852018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally.2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] Using CA Certificate from config.2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] Using CA Private Key from config.2018-08-07 07:33:57.286 24 INFO octavia.certificates.generator.local [-] Using CA Private Key Passphrase from config.2018-08-07 07:34:04.074 24 INFO octavia.controller.worker.tasks.database_tasks [-] Mark ALLOCATED in DB for amphora: c20af002-1576-446e-b99f-7af607b8d885 with compute id 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 for load balancer: bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e2018-08-07 07:34:04.253 24 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Port a7bae53e-0bc6-4830-8c75-646a8baf2885 already exists. Nothing to be done.2018-08-07 07:34:19.656 24 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: ConnectTimeout: HTTPSConnectionPool(host='10.1.56.103', port=9443): Max retries exceeded with url: /0.5/plug/vip/192.168.56.100 (Caused by ConnectTimeoutError(, 'Connection to 10.1.56.103 timed out. (connect timeout=10.0)'))2018-08-07 07:34:24.673 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'FAILURE' from state 'RUNNING'34 predecessors (most recent first): Atom 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': {u'c20af002-1576-446e-b99f-7af607b8d885': }} |__Atom 'reload-lb-after-plug-vip' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': } |__Atom 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': []}, 'provides': None} |__Atom 'octavia.controller.worker.tasks.network_tasks.ApplyQos' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': [], 'loadbalancer': , 'update_dict': {'topology': 'SINGLE'}}, 'provides': None} |__Atom 'octavia.controller.worker.tasks.network_tasks.PlugVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': []} |__Atom 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': , 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': } |__Atom 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } |__Flow 'octavia-new-loadbalancer-net-subflow' |__Atom 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': } |__Flow 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow' |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': }, 'provides': None} | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': } | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': , 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-amphora-finalize' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': }, 'provides': None} | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-info' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_obj': }, 'provides': } | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-compute-wait' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': } | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-booting-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-computeid' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-cert-compute-create' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'build_type_priority': 40}, 'provides': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'} | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-cert-expiration' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': None} | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' {'intention': 'EXECUTE', 'state': 'SUCCESS'} | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': u'c20af002-1576-446e-b99f-7af607b8d885'} | |__Flow 'STANDALONE-octavia-create-amp-for-lb-subflow' | |__Atom 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} | |__Flow 'STANDALONE-octavia-get-amphora-for-lb-subflow' | |__Atom 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} | |__Flow 'octavia-create-loadbalancer-flow' |__Atom 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-mark-amp-standalone-indb' {'intention': 'IGNORE', 'state': 'IGNORE'} |__Atom 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-reload-amphora' {'intention': 'IGNORE', 'state': 'IGNORE', 'requires': {'amphora_id': None}} |__Flow 'STANDALONE-octavia-post-map-amp-to-lb-subflow' |__Atom 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides':None} |__Flow 'STANDALONE-octavia-get-amphora-for-lb-subflow' |__Atom 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} |__Flow 'octavia-create-loadbalancer-flow': Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')]2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker Traceback (most recent call last):2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker result = task.execute(**arguments)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 240, in execute2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker amphorae_network_config)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 219, in execute2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker amphora, loadbalancer, amphorae_network_config)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 137, in post_vip_plug2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker net_info)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 388, in plug_vip2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker json=net_info)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 277, in request2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker r = _request(**reqargs)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 565, in post2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker return self.request('POST', url, data=data, json=json, **kwargs)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 518, in request2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker resp = self.send(prep, **send_kwargs)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 639, in send2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker r = adapter.send(request, **kwargs)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 438, in send2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker timeout=timeout2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 600, in urlopen2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker chunked=chunked)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 345, in _make_request2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker self._validate_conn(conn)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 844, in _validate_conn2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker conn.connect()2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connection.py", line 326, in connect2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker ssl_context=context)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py", line 323, in ssl_wrap_socket2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker context.load_cert_chain(certfile, keyfile)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 418, in load_cert_chain2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker self._ctx.use_certificate_file(certfile)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 817, in use_certificate_file2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker _raise_current_error()2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker raise exception_type(errors)2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')]2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker2018-08-07 07:34:24.684 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:24.687 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' (1e329fa2-b7c3-4fe2-93f0-d565a18cdbba) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:24.691 24 WARNING octavia.controller.worker.controller_worker [-] Task 'reload-lb-after-plug-vip' (842fb766-dd6f-4b3c-936a-7a5baa82c64f) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:24.694 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' (761da17b-4655-46a9-9d67-cb7816c7ea0c) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:24.716 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.ApplyQos' (fb40f555-1f0a-48fc-b377-f9e791077f65) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:24.719 24 WARNING octavia.controller.worker.tasks.network_tasks [-] Unable to plug VIP for loadbalancer id bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e2018-08-07 07:34:26.413 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.PlugVIP' (ae486972-6e98-4036-9e20-85f335058074) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:26.420 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' (79391dee-6011-4145-b544-499e0a632ca1) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:26.425 24 WARNING octavia.controller.worker.tasks.network_tasks [-] Deallocating vip 192.168.56.1002018-08-07 07:34:26.577 24 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security group 3d84ee39-1db9-475f-b048-9fe0f87201c1 from port a7bae53e-0bc6-4830-8c75-646a8baf28852018-08-07 07:34:27.187 24 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security group 3d84ee39-1db9-475f-b048-9fe0f87201c12018-08-07 07:34:27.803 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' (7edf30ee-4338-4725-a86e-e45c0aa0aa58) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:27.807 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' (64ac1f84-f8ec-4cc1-b3c8-f18ac8474d73) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:27.810 24 WARNING octavia.controller.worker.tasks.database_tasks [-] Reverting amphora role in DB for amp id c20af002-1576-446e-b99f-7af607b8d8852018-08-07 07:34:27.816 24 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' (2db823a7-c4ac-4622-824b-b709c96b554a) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:27.819 24 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' (86219bac-efd2-4d1f-8141-818f1a5bc6f5) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:27.821 24 WARNING octavia.controller.worker.tasks.database_tasks [-] Reverting mark amphora ready in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 and compute id 3bbabfa6-366f-46a4-8fb2-1ec7158e19f12018-08-07 07:34:27.826 24 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' (baf58e71-eef6-41e0-9bf3-ab9f9554ace2) transitioned into state 'REVERTED' from state 'REVERTING'2018-08-07 07:34:27.828 24 WARNING octavia.controller.worker.tasks.amphora_driver_tasks [-] Reverting amphora finalize.* Is this a problem if I use self-signed CAcert ? Is their a way to tell octavia to ignore SSL Error while working on a LAB environment? As usual, if you need further information feel free to ask. Thanks a lot guys. -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Tue Aug 14 16:21:01 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 14 Aug 2018 09:21:01 -0700 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS In-Reply-To: References: Message-ID: Hi there Flint. Octavia fully supports using self-signed certificates and we use those in our gate tests. We do not allow non-TLS authenticated connections in the code, even for lab setups. This is a configuration issue or certificate file format issue. When the controller is attempting to access the controller local certificate file (likely the one we use to prove we are a valid controller to the amphora agent) it is finding a file without the required PEM format header. Check that your certificate files have the "-----BEGIN CERTIFICATE-----" line (maybe they are in binary DER format and just need to be converted). Also for reference, here are the minimal steps we use in our gate tests to setup the TLS certificates: https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L295-L305 Michael On Tue, Aug 14, 2018 at 4:54 AM Flint WALRUS wrote: > > > Hi guys, > > I continue to work on my Octavia integration using Kolla-Ansible and I'm facing a strange behavior. > > As for now I'm working on a POC using restricted HW and SW Capacities, I'm facing a strange issue when trying to launch a new load-balancer. > > When I create a new LB, would it be using CLI or WebUI, the amphora immediately disappear and the LB status switch to ERROR. > > When looking at logs and especially Worker logs, I see that the error seems to be related to the fact that the worker can't connect to the amphora because of a TLS Handshake issue which so trigger the contact timeout and rollback the amphora creation. > > Here is the worker.log relevant trace: > > 2018-08-07 07:33:57.108 24 INFO octavia.controller.queue.endpoint [-] Creating load balancer 'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'... > 2018-08-07 07:33:57.220 24 INFO octavia.controller.worker.tasks.database_tasks [-] Created Amphora in DB with id c20af002-1576-446e-b99f-7af607b8d885 > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] Using CA Certificate from config. > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] Using CA Private Key from config. > 2018-08-07 07:33:57.286 24 INFO octavia.certificates.generator.local [-] Using CA Private Key Passphrase from config. > 2018-08-07 07:34:04.074 24 INFO octavia.controller.worker.tasks.database_tasks [-] Mark ALLOCATED in DB for amphora: c20af002-1576-446e-b99f-7af607b8d885 with compute id 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 for load balancer: bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e > 2018-08-07 07:34:04.253 24 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Port a7bae53e-0bc6-4830-8c75-646a8baf2885 already exists. Nothing to be done. > 2018-08-07 07:34:19.656 24 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: ConnectTimeout: HTTPSConnectionPool(host='10.1.56.103', port=9443): Max retries exceeded with url: /0.5/plug/vip/192.168.56.100 (Caused by ConnectTimeoutError(, 'Connection to 10.1.56.103 timed out. (connect timeout=10.0)')) > 2018-08-07 07:34:24.673 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'FAILURE' from state 'RUNNING' > 34 predecessors (most recent first): > Atom 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': {u'c20af002-1576-446e-b99f-7af607b8d885': }} > |__Atom 'reload-lb-after-plug-vip' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': } > |__Atom 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': []}, 'provides': None} > |__Atom 'octavia.controller.worker.tasks.network_tasks.ApplyQos' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': [], 'loadbalancer': , 'update_dict': {'topology': 'SINGLE'}}, 'provides': None} > |__Atom 'octavia.controller.worker.tasks.network_tasks.PlugVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': []} > |__Atom 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': , 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': } > |__Atom 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } > |__Flow 'octavia-new-loadbalancer-net-subflow' > |__Atom 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': } > |__Flow 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow' > |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': }, 'provides': None} > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': } > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': , 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-amphora-finalize' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': }, 'provides': None} > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-info' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_obj': }, 'provides': } > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-compute-wait' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': } > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-booting-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-computeid' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-cert-compute-create' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'build_type_priority': 40}, 'provides': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'} > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-cert-expiration' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': None} > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' {'intention': 'EXECUTE', 'state': 'SUCCESS'} > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': u'c20af002-1576-446e-b99f-7af607b8d885'} > | |__Flow 'STANDALONE-octavia-create-amp-for-lb-subflow' > | |__Atom 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} > | |__Flow 'STANDALONE-octavia-get-amphora-for-lb-subflow' > | |__Atom 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} > | |__Flow 'octavia-create-loadbalancer-flow' > |__Atom 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-mark-amp-standalone-indb' {'intention': 'IGNORE', 'state': 'IGNORE'} > |__Atom 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-reload-amphora' {'intention': 'IGNORE', 'state': 'IGNORE', 'requires': {'amphora_id': None}} > |__Flow 'STANDALONE-octavia-post-map-amp-to-lb-subflow' > |__Atom 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > None} > |__Flow 'STANDALONE-octavia-get-amphora-for-lb-subflow' > |__Atom 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} > |__Flow 'octavia-create-loadbalancer-flow': Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')] > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker Traceback (most recent call last): > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker result = task.execute(**arguments) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 240, in execute > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker amphorae_network_config) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 219, in execute > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker amphora, loadbalancer, amphorae_network_config) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 137, in post_vip_plug > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker net_info) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 388, in plug_vip > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker json=net_info) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 277, in request > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker r = _request(**reqargs) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 565, in post > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker return self.request('POST', url, data=data, json=json, **kwargs) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 518, in request > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker resp = self.send(prep, **send_kwargs) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 639, in send > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker r = adapter.send(request, **kwargs) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 438, in send > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker timeout=timeout > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 600, in urlopen > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker chunked=chunked) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 345, in _make_request > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker self._validate_conn(conn) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 844, in _validate_conn > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker conn.connect() > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connection.py", line 326, in connect > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker ssl_context=context) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py", line 323, in ssl_wrap_socket > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker context.load_cert_chain(certfile, keyfile) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 418, in load_cert_chain > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker self._ctx.use_certificate_file(certfile) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 817, in use_certificate_file > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker _raise_current_error() > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker raise exception_type(errors) > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')] > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker > 2018-08-07 07:34:24.684 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:24.687 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' (1e329fa2-b7c3-4fe2-93f0-d565a18cdbba) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:24.691 24 WARNING octavia.controller.worker.controller_worker [-] Task 'reload-lb-after-plug-vip' (842fb766-dd6f-4b3c-936a-7a5baa82c64f) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:24.694 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' (761da17b-4655-46a9-9d67-cb7816c7ea0c) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:24.716 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.ApplyQos' (fb40f555-1f0a-48fc-b377-f9e791077f65) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:24.719 24 WARNING octavia.controller.worker.tasks.network_tasks [-] Unable to plug VIP for loadbalancer id bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e > 2018-08-07 07:34:26.413 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.PlugVIP' (ae486972-6e98-4036-9e20-85f335058074) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:26.420 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' (79391dee-6011-4145-b544-499e0a632ca1) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:26.425 24 WARNING octavia.controller.worker.tasks.network_tasks [-] Deallocating vip 192.168.56.100 > 2018-08-07 07:34:26.577 24 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security group 3d84ee39-1db9-475f-b048-9fe0f87201c1 from port a7bae53e-0bc6-4830-8c75-646a8baf2885 > 2018-08-07 07:34:27.187 24 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security group 3d84ee39-1db9-475f-b048-9fe0f87201c1 > 2018-08-07 07:34:27.803 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' (7edf30ee-4338-4725-a86e-e45c0aa0aa58) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:27.807 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' (64ac1f84-f8ec-4cc1-b3c8-f18ac8474d73) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:27.810 24 WARNING octavia.controller.worker.tasks.database_tasks [-] Reverting amphora role in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 > 2018-08-07 07:34:27.816 24 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' (2db823a7-c4ac-4622-824b-b709c96b554a) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:27.819 24 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' (86219bac-efd2-4d1f-8141-818f1a5bc6f5) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:27.821 24 WARNING octavia.controller.worker.tasks.database_tasks [-] Reverting mark amphora ready in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 and compute id 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 > 2018-08-07 07:34:27.826 24 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' (baf58e71-eef6-41e0-9bf3-ab9f9554ace2) transitioned into state 'REVERTED' from state 'REVERTING' > 2018-08-07 07:34:27.828 24 WARNING octavia.controller.worker.tasks.amphora_driver_tasks [-] Reverting amphora finalize. > > Is this a problem if I use self-signed CAcert ? > Is their a way to tell octavia to ignore SSL Error while working on a LAB environment? > > As usual, if you need further information feel free to ask. > > Thanks a lot guys. > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From tobias.urdin at binero.se Tue Aug 14 16:33:11 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Tue, 14 Aug 2018 18:33:11 +0200 Subject: [Openstack-operators] [puppet] migrating to storyboard Message-ID: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Hello all incredible Puppeters, I've tested setting up an Storyboard instance and test migrated puppet-ceph and it went without any issues there using the documentation [1] [2] with just one minor issue during the SB setup [3]. My goal is that we will be able to swap to Storyboard during the Stein cycle but considering that we have a low activity on bugs my opinion is that we could do this swap very easily anything soon as long as everybody is in favor of it. Please let me know what you think about moving to Storyboard? If everybody is in favor of it we can request a migration to infra according to documentation [2]. I will continue to test the import of all our project while people are collecting their thoughts and feedback :) Best regards Tobias [1] https://docs.openstack.org/infra/storyboard/install/development.html [2] https://docs.openstack.org/infra/storyboard/migration.html [3] It failed with an error about launchpadlib not being installed, solved with `tox -e venv pip install launchpadlib` From gael.therond at gmail.com Tue Aug 14 17:52:09 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 14 Aug 2018 19:52:09 +0200 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS In-Reply-To: References: Message-ID: Hi Michael, thanks a lot for your quick response once again! Le mar. 14 août 2018 à 18:21, Michael Johnson a écrit : > Hi there Flint. > > Octavia fully supports using self-signed certificates and we use those > in our gate tests. > We do not allow non-TLS authenticated connections in the code, even > for lab setups. > > This is a configuration issue or certificate file format issue. When > the controller is attempting to access the controller local > certificate file (likely the one we use to prove we are a valid > controller to the amphora agent) it is finding a file without the > required PEM format header. Check that your certificate files have the > "-----BEGIN CERTIFICATE-----" line (maybe they are in binary DER > format and just need to be converted). > > Also for reference, here are the minimal steps we use in our gate > tests to setup the TLS certificates: > > https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L295-L305 > > Michael > On Tue, Aug 14, 2018 at 4:54 AM Flint WALRUS > wrote: > > > > > > Hi guys, > > > > I continue to work on my Octavia integration using Kolla-Ansible and I'm > facing a strange behavior. > > > > As for now I'm working on a POC using restricted HW and SW Capacities, > I'm facing a strange issue when trying to launch a new load-balancer. > > > > When I create a new LB, would it be using CLI or WebUI, the amphora > immediately disappear and the LB status switch to ERROR. > > > > When looking at logs and especially Worker logs, I see that the error > seems to be related to the fact that the worker can't connect to the > amphora because of a TLS Handshake issue which so trigger the contact > timeout and rollback the amphora creation. > > > > Here is the worker.log relevant trace: > > > > 2018-08-07 07:33:57.108 24 INFO octavia.controller.queue.endpoint [-] > Creating load balancer 'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'... > > 2018-08-07 07:33:57.220 24 INFO > octavia.controller.worker.tasks.database_tasks [-] Created Amphora in DB > with id c20af002-1576-446e-b99f-7af607b8d885 > > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] > Signing a certificate request using OpenSSL locally. > > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] > Using CA Certificate from config. > > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] > Using CA Private Key from config. > > 2018-08-07 07:33:57.286 24 INFO octavia.certificates.generator.local [-] > Using CA Private Key Passphrase from config. > > 2018-08-07 07:34:04.074 24 INFO > octavia.controller.worker.tasks.database_tasks [-] Mark ALLOCATED in DB for > amphora: c20af002-1576-446e-b99f-7af607b8d885 with compute id > 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 for load balancer: > bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e > > 2018-08-07 07:34:04.253 24 INFO > octavia.network.drivers.neutron.allowed_address_pairs [-] Port > a7bae53e-0bc6-4830-8c75-646a8baf2885 already exists. Nothing to be done. > > 2018-08-07 07:34:19.656 24 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to > instance. Retrying.: ConnectTimeout: > HTTPSConnectionPool(host='10.1.56.103', port=9443): Max retries exceeded > with url: /0.5/plug/vip/192.168.56.100 (Caused by > ConnectTimeoutError( object at 0x7f4c28415c50>, 'Connection to 10.1.56.103 timed out. (connect > timeout=10.0)')) > > 2018-08-07 07:34:24.673 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' > (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'FAILURE' > from state 'RUNNING' > > 34 predecessors (most recent first): > > Atom > 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': > }, > 'provides': {u'c20af002-1576-446e-b99f-7af607b8d885': > 0x7f4c284786d0>}} > > |__Atom 'reload-lb-after-plug-vip' {'intention': 'EXECUTE', 'state': > 'SUCCESS', 'requires': {'loadbalancer_id': > u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > } > > |__Atom > 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': > []}, > 'provides': None} > > |__Atom 'octavia.controller.worker.tasks.network_tasks.ApplyQos' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': > [], > 'loadbalancer': 0x7f4c2845fe10>, 'update_dict': {'topology': 'SINGLE'}}, 'provides': None} > > |__Atom > 'octavia.controller.worker.tasks.network_tasks.PlugVIP' {'intention': > 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': > }, > 'provides': []} > > |__Atom > 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': > , > 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > } > > |__Atom > 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' {'intention': > 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': > }, > 'provides': } > > |__Flow 'octavia-new-loadbalancer-net-subflow' > > |__Atom > 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': > {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > } > > |__Flow > 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow' > > |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': > }, 'provides': > None} > > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': > u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': > } > > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': > , > 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > None} > > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-amphora-finalize' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': > }, 'provides': > None} > > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-info' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': > u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_obj': > }, 'provides': > } > > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-compute-wait' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': > u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': > u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': > } > > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-booting-indb' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': > u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': > u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} > > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-computeid' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': > u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': > u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} > > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-cert-compute-create' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': > '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', > 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', > 'build_type_priority': 40}, 'provides': > u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'} > > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-cert-expiration' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': > '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', > 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': None} > > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' > {'intention': 'EXECUTE', 'state': 'SUCCESS'} > > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': > u'c20af002-1576-446e-b99f-7af607b8d885'} > > | |__Flow > 'STANDALONE-octavia-create-amp-for-lb-subflow' > > | > |__Atom > 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': > {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > None} > > | > |__Flow 'STANDALONE-octavia-get-amphora-for-lb-subflow' > > | > |__Atom > 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': > {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > None} > > | > |__Flow 'octavia-create-loadbalancer-flow' > > |__Atom > 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-mark-amp-standalone-indb' > {'intention': 'IGNORE', 'state': 'IGNORE'} > > |__Atom > 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-reload-amphora' > {'intention': 'IGNORE', 'state': 'IGNORE', 'requires': {'amphora_id': None}} > > |__Flow > 'STANDALONE-octavia-post-map-amp-to-lb-subflow' > > |__Atom > 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': > {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > > None} > > |__Flow > 'STANDALONE-octavia-get-amphora-for-lb-subflow' > > |__Atom > 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': > {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > None} > > |__Flow > 'octavia-create-loadbalancer-flow': Error: [('PEM routines', > 'PEM_read_bio', 'no start line'), ('SSL routines', > 'SSL_CTX_use_certificate_file', 'PEM lib')] > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker Traceback (most recent call > last): > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", > line 53, in _execute_task > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker result = > task.execute(**arguments) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", > line 240, in execute > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker amphorae_network_config) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", > line 219, in execute > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker amphora, loadbalancer, > amphorae_network_config) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", > line 137, in post_vip_plug > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker net_info) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", > line 388, in plug_vip > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker json=net_info) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", > line 277, in request > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker r = _request(**reqargs) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/sessions.py", line 565, in post > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker return self.request('POST', > url, data=data, json=json, **kwargs) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/sessions.py", line 518, in > request > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker resp = self.send(prep, > **send_kwargs) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/sessions.py", line 639, in send > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker r = adapter.send(request, > **kwargs) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/adapters.py", line 438, in send > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker timeout=timeout > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", > line 600, in urlopen > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker chunked=chunked) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", > line 345, in _make_request > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker self._validate_conn(conn) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", > line 844, in _validate_conn > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker conn.connect() > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connection.py", > line 326, in connect > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker ssl_context=context) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py", > line 323, in ssl_wrap_socket > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker > context.load_cert_chain(certfile, keyfile) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", > line 418, in load_cert_chain > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker > self._ctx.use_certificate_file(certfile) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 817, in > use_certificate_file > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker _raise_current_error() > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/OpenSSL/_util.py", line 54, in > exception_from_error_queue > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker raise exception_type(errors) > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker Error: [('PEM routines', > 'PEM_read_bio', 'no start line'), ('SSL routines', > 'SSL_CTX_use_certificate_file', 'PEM lib')] > > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker > > 2018-08-07 07:34:24.684 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' > (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'REVERTED' > from state 'REVERTING' > > 2018-08-07 07:34:24.687 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' > (1e329fa2-b7c3-4fe2-93f0-d565a18cdbba) transitioned into state 'REVERTED' > from state 'REVERTING' > > 2018-08-07 07:34:24.691 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'reload-lb-after-plug-vip' (842fb766-dd6f-4b3c-936a-7a5baa82c64f) > transitioned into state 'REVERTED' from state 'REVERTING' > > 2018-08-07 07:34:24.694 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' > (761da17b-4655-46a9-9d67-cb7816c7ea0c) transitioned into state 'REVERTED' > from state 'REVERTING' > > 2018-08-07 07:34:24.716 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.network_tasks.ApplyQos' > (fb40f555-1f0a-48fc-b377-f9e791077f65) transitioned into state 'REVERTED' > from state 'REVERTING' > > 2018-08-07 07:34:24.719 24 WARNING > octavia.controller.worker.tasks.network_tasks [-] Unable to plug VIP for > loadbalancer id bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e > > 2018-08-07 07:34:26.413 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.network_tasks.PlugVIP' > (ae486972-6e98-4036-9e20-85f335058074) transitioned into state 'REVERTED' > from state 'REVERTING' > > 2018-08-07 07:34:26.420 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' > (79391dee-6011-4145-b544-499e0a632ca1) transitioned into state 'REVERTED' > from state 'REVERTING' > > 2018-08-07 07:34:26.425 24 WARNING > octavia.controller.worker.tasks.network_tasks [-] Deallocating vip > 192.168.56.100 > > 2018-08-07 07:34:26.577 24 INFO > octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security > group 3d84ee39-1db9-475f-b048-9fe0f87201c1 from port > a7bae53e-0bc6-4830-8c75-646a8baf2885 > > 2018-08-07 07:34:27.187 24 INFO > octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security > group 3d84ee39-1db9-475f-b048-9fe0f87201c1 > > 2018-08-07 07:34:27.803 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' > (7edf30ee-4338-4725-a86e-e45c0aa0aa58) transitioned into state 'REVERTED' > from state 'REVERTING' > > 2018-08-07 07:34:27.807 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' > (64ac1f84-f8ec-4cc1-b3c8-f18ac8474d73) transitioned into state 'REVERTED' > from state 'REVERTING' > > 2018-08-07 07:34:27.810 24 WARNING > octavia.controller.worker.tasks.database_tasks [-] Reverting amphora role > in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 > > 2018-08-07 07:34:27.816 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' > (2db823a7-c4ac-4622-824b-b709c96b554a) transitioned into state 'REVERTED' > from state 'REVERTING' > > 2018-08-07 07:34:27.819 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' > (86219bac-efd2-4d1f-8141-818f1a5bc6f5) transitioned into state 'REVERTED' > from state 'REVERTING' > > 2018-08-07 07:34:27.821 24 WARNING > octavia.controller.worker.tasks.database_tasks [-] Reverting mark amphora > ready in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 and compute id > 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 > > 2018-08-07 07:34:27.826 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' > (baf58e71-eef6-41e0-9bf3-ab9f9554ace2) transitioned into state 'REVERTED' > from state 'REVERTING' > > 2018-08-07 07:34:27.828 24 WARNING > octavia.controller.worker.tasks.amphora_driver_tasks [-] Reverting amphora > finalize. > > > > Is this a problem if I use self-signed CAcert ? > > Is their a way to tell octavia to ignore SSL Error while working on a > LAB environment? > > > > As usual, if you need further information feel free to ask. > > > > Thanks a lot guys. > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Tue Aug 14 17:53:12 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 14 Aug 2018 19:53:12 +0200 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS In-Reply-To: References: Message-ID: I’ll try to check the certificate format and make the appropriate change if required or let you know if I’ve got something specific regarding that topic. Kind regards, G. Le mar. 14 août 2018 à 19:52, Flint WALRUS a écrit : > Hi Michael, thanks a lot for your quick response once again! > Le mar. 14 août 2018 à 18:21, Michael Johnson a > écrit : > >> Hi there Flint. >> >> Octavia fully supports using self-signed certificates and we use those >> in our gate tests. >> We do not allow non-TLS authenticated connections in the code, even >> for lab setups. >> >> This is a configuration issue or certificate file format issue. When >> the controller is attempting to access the controller local >> certificate file (likely the one we use to prove we are a valid >> controller to the amphora agent) it is finding a file without the >> required PEM format header. Check that your certificate files have the >> "-----BEGIN CERTIFICATE-----" line (maybe they are in binary DER >> format and just need to be converted). >> >> Also for reference, here are the minimal steps we use in our gate >> tests to setup the TLS certificates: >> >> https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L295-L305 >> >> Michael >> On Tue, Aug 14, 2018 at 4:54 AM Flint WALRUS >> wrote: >> > >> > >> > Hi guys, >> > >> > I continue to work on my Octavia integration using Kolla-Ansible and >> I'm facing a strange behavior. >> > >> > As for now I'm working on a POC using restricted HW and SW Capacities, >> I'm facing a strange issue when trying to launch a new load-balancer. >> > >> > When I create a new LB, would it be using CLI or WebUI, the amphora >> immediately disappear and the LB status switch to ERROR. >> > >> > When looking at logs and especially Worker logs, I see that the error >> seems to be related to the fact that the worker can't connect to the >> amphora because of a TLS Handshake issue which so trigger the contact >> timeout and rollback the amphora creation. >> > >> > Here is the worker.log relevant trace: >> > >> > 2018-08-07 07:33:57.108 24 INFO octavia.controller.queue.endpoint [-] >> Creating load balancer 'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'... >> > 2018-08-07 07:33:57.220 24 INFO >> octavia.controller.worker.tasks.database_tasks [-] Created Amphora in DB >> with id c20af002-1576-446e-b99f-7af607b8d885 >> > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local >> [-] Signing a certificate request using OpenSSL locally. >> > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local >> [-] Using CA Certificate from config. >> > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local >> [-] Using CA Private Key from config. >> > 2018-08-07 07:33:57.286 24 INFO octavia.certificates.generator.local >> [-] Using CA Private Key Passphrase from config. >> > 2018-08-07 07:34:04.074 24 INFO >> octavia.controller.worker.tasks.database_tasks [-] Mark ALLOCATED in DB for >> amphora: c20af002-1576-446e-b99f-7af607b8d885 with compute id >> 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 for load balancer: >> bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e >> > 2018-08-07 07:34:04.253 24 INFO >> octavia.network.drivers.neutron.allowed_address_pairs [-] Port >> a7bae53e-0bc6-4830-8c75-646a8baf2885 already exists. Nothing to be done. >> > 2018-08-07 07:34:19.656 24 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to >> instance. Retrying.: ConnectTimeout: >> HTTPSConnectionPool(host='10.1.56.103', port=9443): Max retries exceeded >> with url: /0.5/plug/vip/192.168.56.100 (Caused by >> ConnectTimeoutError(> object at 0x7f4c28415c50>, 'Connection to 10.1.56.103 timed out. (connect >> timeout=10.0)')) >> > 2018-08-07 07:34:24.673 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' >> (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'FAILURE' >> from state 'RUNNING' >> > 34 predecessors (most recent first): >> > Atom >> 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': >> }, >> 'provides': {u'c20af002-1576-446e-b99f-7af607b8d885': >> > 0x7f4c284786d0>}} >> > |__Atom 'reload-lb-after-plug-vip' {'intention': 'EXECUTE', 'state': >> 'SUCCESS', 'requires': {'loadbalancer_id': >> u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >> } >> > |__Atom >> 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': >> []}, >> 'provides': None} >> > |__Atom >> 'octavia.controller.worker.tasks.network_tasks.ApplyQos' {'intention': >> 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': >> [], >> 'loadbalancer': > 0x7f4c2845fe10>, 'update_dict': {'topology': 'SINGLE'}}, 'provides': None} >> > |__Atom >> 'octavia.controller.worker.tasks.network_tasks.PlugVIP' {'intention': >> 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': >> }, >> 'provides': []} >> > |__Atom >> 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': >> , >> 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >> } >> > |__Atom >> 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' {'intention': >> 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': >> }, >> 'provides': } >> > |__Flow 'octavia-new-loadbalancer-net-subflow' >> > |__Atom >> 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': >> {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >> } >> > |__Flow >> 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow' >> > |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': >> }, 'provides': >> None} >> > | |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': >> u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': >> } >> > | |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': >> , >> 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >> None} >> > | |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-amphora-finalize' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': >> }, 'provides': >> None} >> > | |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-info' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': >> u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_obj': >> }, 'provides': >> } >> > | |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-compute-wait' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': >> u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': >> u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': >> } >> > | |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-booting-indb' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': >> u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': >> u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} >> > | |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-computeid' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': >> u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': >> u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} >> > | |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-cert-compute-create' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': >> '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', >> 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', >> 'build_type_priority': 40}, 'provides': >> u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'} >> > | |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-cert-expiration' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': >> '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', >> 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': None} >> > | |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' >> {'intention': 'EXECUTE', 'state': 'SUCCESS'} >> > | |__Atom >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': >> u'c20af002-1576-446e-b99f-7af607b8d885'} >> > | >> |__Flow 'STANDALONE-octavia-create-amp-for-lb-subflow' >> > | >> |__Atom >> 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': >> {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >> None} >> > | >> |__Flow 'STANDALONE-octavia-get-amphora-for-lb-subflow' >> > | >> |__Atom >> 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': >> {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >> None} >> > | >> |__Flow 'octavia-create-loadbalancer-flow' >> > |__Atom >> 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-mark-amp-standalone-indb' >> {'intention': 'IGNORE', 'state': 'IGNORE'} >> > |__Atom >> 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-reload-amphora' >> {'intention': 'IGNORE', 'state': 'IGNORE', 'requires': {'amphora_id': None}} >> > |__Flow >> 'STANDALONE-octavia-post-map-amp-to-lb-subflow' >> > |__Atom >> 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': >> {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >> > None} >> > |__Flow >> 'STANDALONE-octavia-get-amphora-for-lb-subflow' >> > |__Atom >> 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' >> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': >> {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >> None} >> > |__Flow >> 'octavia-create-loadbalancer-flow': Error: [('PEM routines', >> 'PEM_read_bio', 'no start line'), ('SSL routines', >> 'SSL_CTX_use_certificate_file', 'PEM lib')] >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker Traceback (most recent call >> last): >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", >> line 53, in _execute_task >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker result = >> task.execute(**arguments) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", >> line 240, in execute >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker amphorae_network_config) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", >> line 219, in execute >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker amphora, loadbalancer, >> amphorae_network_config) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", >> line 137, in post_vip_plug >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker net_info) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", >> line 388, in plug_vip >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker json=net_info) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", >> line 277, in request >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker r = _request(**reqargs) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/requests/sessions.py", line 565, in post >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker return self.request('POST', >> url, data=data, json=json, **kwargs) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/requests/sessions.py", line 518, in >> request >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker resp = self.send(prep, >> **send_kwargs) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/requests/sessions.py", line 639, in send >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker r = adapter.send(request, >> **kwargs) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/requests/adapters.py", line 438, in send >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker timeout=timeout >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", >> line 600, in urlopen >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker chunked=chunked) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", >> line 345, in _make_request >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker self._validate_conn(conn) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", >> line 844, in _validate_conn >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker conn.connect() >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connection.py", >> line 326, in connect >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker ssl_context=context) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py", >> line 323, in ssl_wrap_socket >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker >> context.load_cert_chain(certfile, keyfile) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", >> line 418, in load_cert_chain >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker >> self._ctx.use_certificate_file(certfile) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 817, in >> use_certificate_file >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker _raise_current_error() >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker File >> "/usr/lib/python2.7/site-packages/OpenSSL/_util.py", line 54, in >> exception_from_error_queue >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker raise exception_type(errors) >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker Error: [('PEM routines', >> 'PEM_read_bio', 'no start line'), ('SSL routines', >> 'SSL_CTX_use_certificate_file', 'PEM lib')] >> > 2018-08-07 07:34:24.673 24 ERROR >> octavia.controller.worker.controller_worker >> > 2018-08-07 07:34:24.684 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' >> (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'REVERTED' >> from state 'REVERTING' >> > 2018-08-07 07:34:24.687 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' >> (1e329fa2-b7c3-4fe2-93f0-d565a18cdbba) transitioned into state 'REVERTED' >> from state 'REVERTING' >> > 2018-08-07 07:34:24.691 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'reload-lb-after-plug-vip' (842fb766-dd6f-4b3c-936a-7a5baa82c64f) >> transitioned into state 'REVERTED' from state 'REVERTING' >> > 2018-08-07 07:34:24.694 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' >> (761da17b-4655-46a9-9d67-cb7816c7ea0c) transitioned into state 'REVERTED' >> from state 'REVERTING' >> > 2018-08-07 07:34:24.716 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'octavia.controller.worker.tasks.network_tasks.ApplyQos' >> (fb40f555-1f0a-48fc-b377-f9e791077f65) transitioned into state 'REVERTED' >> from state 'REVERTING' >> > 2018-08-07 07:34:24.719 24 WARNING >> octavia.controller.worker.tasks.network_tasks [-] Unable to plug VIP for >> loadbalancer id bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e >> > 2018-08-07 07:34:26.413 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'octavia.controller.worker.tasks.network_tasks.PlugVIP' >> (ae486972-6e98-4036-9e20-85f335058074) transitioned into state 'REVERTED' >> from state 'REVERTING' >> > 2018-08-07 07:34:26.420 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' >> (79391dee-6011-4145-b544-499e0a632ca1) transitioned into state 'REVERTED' >> from state 'REVERTING' >> > 2018-08-07 07:34:26.425 24 WARNING >> octavia.controller.worker.tasks.network_tasks [-] Deallocating vip >> 192.168.56.100 >> > 2018-08-07 07:34:26.577 24 INFO >> octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security >> group 3d84ee39-1db9-475f-b048-9fe0f87201c1 from port >> a7bae53e-0bc6-4830-8c75-646a8baf2885 >> > 2018-08-07 07:34:27.187 24 INFO >> octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security >> group 3d84ee39-1db9-475f-b048-9fe0f87201c1 >> > 2018-08-07 07:34:27.803 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' >> (7edf30ee-4338-4725-a86e-e45c0aa0aa58) transitioned into state 'REVERTED' >> from state 'REVERTING' >> > 2018-08-07 07:34:27.807 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' >> (64ac1f84-f8ec-4cc1-b3c8-f18ac8474d73) transitioned into state 'REVERTED' >> from state 'REVERTING' >> > 2018-08-07 07:34:27.810 24 WARNING >> octavia.controller.worker.tasks.database_tasks [-] Reverting amphora role >> in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 >> > 2018-08-07 07:34:27.816 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' >> (2db823a7-c4ac-4622-824b-b709c96b554a) transitioned into state 'REVERTED' >> from state 'REVERTING' >> > 2018-08-07 07:34:27.819 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' >> (86219bac-efd2-4d1f-8141-818f1a5bc6f5) transitioned into state 'REVERTED' >> from state 'REVERTING' >> > 2018-08-07 07:34:27.821 24 WARNING >> octavia.controller.worker.tasks.database_tasks [-] Reverting mark amphora >> ready in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 and compute id >> 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 >> > 2018-08-07 07:34:27.826 24 WARNING >> octavia.controller.worker.controller_worker [-] Task >> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' >> (baf58e71-eef6-41e0-9bf3-ab9f9554ace2) transitioned into state 'REVERTED' >> from state 'REVERTING' >> > 2018-08-07 07:34:27.828 24 WARNING >> octavia.controller.worker.tasks.amphora_driver_tasks [-] Reverting amphora >> finalize. >> > >> > Is this a problem if I use self-signed CAcert ? >> > Is their a way to tell octavia to ignore SSL Error while working on a >> LAB environment? >> > >> > As usual, if you need further information feel free to ask. >> > >> > Thanks a lot guys. >> > >> > >> > _______________________________________________ >> > OpenStack-operators mailing list >> > OpenStack-operators at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Aug 14 19:03:41 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 14 Aug 2018 12:03:41 -0700 Subject: [Openstack-operators] [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Message-ID: Hello! The error you hit can be resolved by adding launchpadlib to your tox.ini if I recall correctly.. also, if you'd like, I can run a test migration of puppet's launchpad projects into our storyboard-dev db (where I've done a ton of other test migrations) if you want to see how it looks/works with a larger db. Just let me know and I can kick it off. As for a time to migrate, if you all are good with it, we usually schedule for Friday's so there is even less activity. Its a small project config change and then we just need an infra core to kick off the script once the change merges. -Kendall (diablo_rojo) On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin wrote: > Hello all incredible Puppeters, > > I've tested setting up an Storyboard instance and test migrated > puppet-ceph and it went without any issues there using the documentation > [1] [2] > with just one minor issue during the SB setup [3]. > > My goal is that we will be able to swap to Storyboard during the Stein > cycle but considering that we have a low activity on > bugs my opinion is that we could do this swap very easily anything soon > as long as everybody is in favor of it. > > Please let me know what you think about moving to Storyboard? > If everybody is in favor of it we can request a migration to infra > according to documentation [2]. > > I will continue to test the import of all our project while people are > collecting their thoughts and feedback :) > > Best regards > Tobias > > [1] https://docs.openstack.org/infra/storyboard/install/development.html > [2] https://docs.openstack.org/infra/storyboard/migration.html > [3] It failed with an error about launchpadlib not being installed, > solved with `tox -e venv pip install launchpadlib` > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xliviux at gmail.com Wed Aug 15 04:16:27 2018 From: xliviux at gmail.com (Liviu Popescu) Date: Wed, 15 Aug 2018 07:16:27 +0300 Subject: [Openstack-operators] oracle rac on openstack: openfiler as shared storage Message-ID: Hi Please advise me on how to get an openfiler image suitable for openstack. I would use it for a "shared storage" for oracle rac, with asm, installation. Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From xliviux at gmail.com Wed Aug 15 05:56:01 2018 From: xliviux at gmail.com (Liviu Popescu) Date: Wed, 15 Aug 2018 08:56:01 +0300 Subject: [Openstack-operators] oracle rac on openstack: openfiler as shared storage In-Reply-To: References: Message-ID: Hello, I have used openfiler in a vmware environment, for creating iscsi targets and accessing iscsi luns on target machines with iscsi-initiator. On machines, LUNs were used via multipath and udev rules by Oracle RAC, two nodes for ASM (asm disks). Please advice me on how to get an openfiler image suitable for openstack. Thank you! On 15 August 2018 at 07:16, Liviu Popescu wrote: > Hi > > Please advise me on how to get an openfiler image suitable for openstack. > > I would use it for a "shared storage" for oracle rac, with asm, > installation. > > Thank you! > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Wed Aug 15 07:07:41 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Wed, 15 Aug 2018 09:07:41 +0200 Subject: [Openstack-operators] [openstack-dev] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Message-ID: <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> Hello Kendall, Thanks for your reply, that sounds awesome! We can then dig around and see how everything looks when all project bugs are imported to stories. I see no issues with being able to move to Storyboard anytime soon if the feedback for moving is positive. Best regards Tobias On 08/14/2018 09:06 PM, Kendall Nelson wrote: > Hello! > > The error you hit can be resolved by adding launchpadlib to your > tox.ini if I recall correctly.. > > also, if you'd like, I can run a test migration of puppet's launchpad > projects into our storyboard-dev db (where I've done a ton of other > test migrations) if you want to see how it looks/works with a larger > db. Just let me know and I can kick it off. > > As for a time to migrate, if you all are good with it, we usually > schedule for Friday's so there is even less activity. Its a small > project config change and then we just need an infra core to kick off > the script once the change merges. > > -Kendall (diablo_rojo) > > On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin > wrote: > > Hello all incredible Puppeters, > > I've tested setting up an Storyboard instance and test migrated > puppet-ceph and it went without any issues there using the > documentation > [1] [2] > with just one minor issue during the SB setup [3]. > > My goal is that we will be able to swap to Storyboard during the > Stein > cycle but considering that we have a low activity on > bugs my opinion is that we could do this swap very easily anything > soon > as long as everybody is in favor of it. > > Please let me know what you think about moving to Storyboard? > If everybody is in favor of it we can request a migration to infra > according to documentation [2]. > > I will continue to test the import of all our project while people > are > collecting their thoughts and feedback :) > > Best regards > Tobias > > [1] > https://docs.openstack.org/infra/storyboard/install/development.html > [2] https://docs.openstack.org/infra/storyboard/migration.html > > [3] It failed with an error about launchpadlib not being installed, > solved with `tox -e venv pip install launchpadlib` > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xliviux at gmail.com Wed Aug 15 10:51:21 2018 From: xliviux at gmail.com (Liviu Popescu) Date: Wed, 15 Aug 2018 13:51:21 +0300 Subject: [Openstack-operators] oracle rac on openstack: openfiler as shared storage Message-ID: Hello, regarding: openstack for having "shared-storage" needed for Oracle RAC: I have used openfiler in a vmware environment, for creating iscsi targets and accessing iscsi luns on target machines with iscsi-initiator. On machines, LUNs were used via multipath.conf and udev rules by Oracle RAC, two nodes for ASM (asm disks). Please advice me on how to get an openfiler image suitable for openstack, in order to simulate a storage server here. Thank you! On 15 August 2018 at 08:56, Liviu Popescu wrote: > Hello, > > I have used openfiler in a vmware environment, for creating iscsi targets > and accessing iscsi luns on target machines with iscsi-initiator. > On machines, LUNs were used via multipath and udev rules by Oracle RAC, > two nodes for ASM (asm disks). > > Please advice me on how to get an openfiler image suitable for openstack. > > > Thank you! > > > > > > > On 15 August 2018 at 07:16, Liviu Popescu wrote: > >> Hi >> >> Please advise me on how to get an openfiler image suitable for openstack. >> >> I would use it for a "shared storage" for oracle rac, with asm, >> installation. >> >> Thank you! >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xliviux at gmail.com Wed Aug 15 10:58:54 2018 From: xliviux at gmail.com (Liviu Popescu) Date: Wed, 15 Aug 2018 13:58:54 +0300 Subject: [Openstack-operators] oracle rac on openstack: openfiler as shared storage Message-ID: Hello, regarding: openstack for having "shared-storage" needed for Oracle RAC: I have used openfiler in a vmware environment, for creating iscsi targets and accessing iscsi luns on target machines with iscsi-initiator. On db machines, LUNs were used via multipath.conf and udev rules, to be visible by Oracle RAC nodes, for ASM (asm disks). Please advice me on how to get an openfiler image suitable for openstack, in order to simulate a shared-storage. Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Wed Aug 15 15:22:45 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 15 Aug 2018 17:22:45 +0200 Subject: [Openstack-operators] [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Message-ID: On Tue, Aug 14, 2018 at 6:33 PM Tobias Urdin wrote: > Please let me know what you think about moving to Storyboard? > Go for it. AFIK we don't have specific blockers to make that migration happening. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Aug 15 16:04:47 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 15 Aug 2018 12:04:47 -0400 Subject: [Openstack-operators] [openstack-dev] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Message-ID: It's a +1 from me. I don't think there is anything linked specifically to it. On Wed, Aug 15, 2018 at 11:22 AM, Emilien Macchi wrote: > On Tue, Aug 14, 2018 at 6:33 PM Tobias Urdin wrote: >> >> Please let me know what you think about moving to Storyboard? > > Go for it. AFIK we don't have specific blockers to make that migration > happening. > > Thanks, > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From chris.friesen at windriver.com Wed Aug 15 16:43:00 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 15 Aug 2018 10:43:00 -0600 Subject: [Openstack-operators] [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> Message-ID: <5B745814.1040008@windriver.com> On 08/14/2018 10:33 AM, Tobias Urdin wrote: > My goal is that we will be able to swap to Storyboard during the Stein cycle but > considering that we have a low activity on > bugs my opinion is that we could do this swap very easily anything soon as long > as everybody is in favor of it. > > Please let me know what you think about moving to Storyboard? Not a puppet dev, but am currently using Storyboard. One of the things we've run into is that there is no way to attach log files for bug reports to a story. There's an open story on this[1] but it's not assigned to anyone. Chris [1] https://storyboard.openstack.org/#!/story/2003071 From tpb at dyncloud.net Wed Aug 15 17:09:29 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 15 Aug 2018 13:09:29 -0400 Subject: [Openstack-operators] [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <5B745814.1040008@windriver.com> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <5B745814.1040008@windriver.com> Message-ID: <20180815170929.dkj7cvmcudk62f63@barron.net> On 15/08/18 10:43 -0600, Chris Friesen wrote: >On 08/14/2018 10:33 AM, Tobias Urdin wrote: > >>My goal is that we will be able to swap to Storyboard during the Stein cycle but >>considering that we have a low activity on >>bugs my opinion is that we could do this swap very easily anything soon as long >>as everybody is in favor of it. >> >>Please let me know what you think about moving to Storyboard? > >Not a puppet dev, but am currently using Storyboard. > >One of the things we've run into is that there is no way to attach log >files for bug reports to a story. There's an open story on this[1] >but it's not assigned to anyone. > Yeah, given that gerrit logs are ephemeral and given that users often don't have the savvy to cut and paste exactly the right log fragments for their issues I think this is a pretty big deal. When I triage bugs I often ask for logs to be uploaded. This may be less of a big deal for puppet than for projects like manila or cinder where there are a set of ongoing services in a custom configuration and there's no often no clear way for the bug triager to set up a reproducer. We're waiting on resolution of [1] before moving ahead with Storyboard for manila. -- Tom >Chris > > >[1] https://storyboard.openstack.org/#!/story/2003071 > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amy at demarco.com Wed Aug 15 20:00:34 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 15 Aug 2018 15:00:34 -0500 Subject: [Openstack-operators] OpenStack Diversity and Inclusion Survey Message-ID: The Diversity and Inclusion WG is asking for your assistance. We have revised the Diversity Survey that was originally distributed to the Community in the Fall of 2015 and are looking to update our view of the OpenStack community and it's diversity. We are pleased to be working with members of the CHAOSS project who have signed confidentiality agreements in order to assist us in the following ways: 1) Assistance in analyzing the results 2) And feeding the results into the CHAOSS software and metrics development work so that we can help other Open Source projects Please take the time to fill out the survey and share it with others in the community. The survey can be found at: https://www.surveymonkey.com/r/OpenStackDiversity Thank you for assisting us in this important task! Amy Marrich (spotz) Diversity and Inclusion Working Group Chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From pfb29 at cam.ac.uk Thu Aug 16 08:43:08 2018 From: pfb29 at cam.ac.uk (Paul Browne) Date: Thu, 16 Aug 2018 09:43:08 +0100 Subject: [Openstack-operators] Routing deployments + Storage networks Message-ID: Hi operators, I had a quick question for those operators who use a routed topology for their OpenStack deployments, whether routed spine-leaf or routed underlay providing L2 connectivity in tunnels; Where using one, would the storage network (e.g. Ceph public network) also be routed on the same fabric, or would separate fabric be employed here to reduce hops? Many thanks, Paul Browne -- ******************* Paul Browne Research Computing Platforms University Information Services Roger Needham Building JJ Thompson Avenue University of Cambridge Cambridge United Kingdom E-Mail: pfb29 at cam.ac.uk Tel: 0044-1223-746548 ******************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Thu Aug 16 09:58:36 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Thu, 16 Aug 2018 17:58:36 +0800 Subject: [Openstack-operators] [openstack-dev] [nova] ask deployment question Message-ID: Hi,all I have some questions about deploy the large scale openstack cloud.Such as 1.Only in one region situation,How many physical machines are the biggest deployment scale in our community? Can you tell me more about these combined with own practice? Would you give me some methods to learn it?Such as the website,blog and so on. Thank you very much!Looking forward to hearing from you. Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.rydberg at citynetwork.eu Thu Aug 16 10:30:13 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Thu, 16 Aug 2018 12:30:13 +0200 Subject: [Openstack-operators] [publiccloud-wg] Meeting this afternoon for Public Cloud WG Message-ID: <72285db1-6f3b-9370-1539-6030e84cfb4f@citynetwork.eu> Hi folks, Time for a new meeting for the Public Cloud WG. Agenda draft can be found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to add items to that list. See you all later this afternoon at IRC 1400 UTC in #openstack-publiccloud Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From zioproto at gmail.com Thu Aug 16 12:09:25 2018 From: zioproto at gmail.com (Saverio Proto) Date: Thu, 16 Aug 2018 14:09:25 +0200 Subject: [Openstack-operators] [openstack-dev] [nova] ask deployment question In-Reply-To: References: Message-ID: Hello Rambo, you can find information about other deployments reading the User Survey: https://www.openstack.org/user-survey/survey-2018/landing For blog posts with experiences from other operators check out: https://superuser.openstack.org/ and http://planet.openstack.org/ Cheers Saverio Il giorno gio 16 ago 2018 alle ore 11:59 Rambo ha scritto: > > Hi,all > I have some questions about deploy the large scale openstack cloud.Such as > 1.Only in one region situation,How many physical machines are the biggest deployment scale in our community? > Can you tell me more about these combined with own practice? Would you give me some methods to learn it?Such as the website,blog and so on. Thank you very much!Looking forward to hearing from you. > > > > > > > > > Best Regards > Rambo > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From zioproto at gmail.com Thu Aug 16 12:10:47 2018 From: zioproto at gmail.com (Saverio Proto) Date: Thu, 16 Aug 2018 14:10:47 +0200 Subject: [Openstack-operators] Routing deployments + Storage networks In-Reply-To: References: Message-ID: Hello, we route the Ceph storage network in the same fabric. We did not have problems with that so far. Cheers Saverio Il giorno gio 16 ago 2018 alle ore 10:43 Paul Browne ha scritto: > > Hi operators, > > I had a quick question for those operators who use a routed topology for their OpenStack deployments, whether routed spine-leaf or routed underlay providing L2 connectivity in tunnels; > > Where using one, would the storage network (e.g. Ceph public network) also be routed on the same fabric, or would separate fabric be employed here to reduce hops? > > Many thanks, > Paul Browne > > -- > ******************* > Paul Browne > Research Computing Platforms > University Information Services > Roger Needham Building > JJ Thompson Avenue > University of Cambridge > Cambridge > United Kingdom > E-Mail: pfb29 at cam.ac.uk > Tel: 0044-1223-746548 > ******************* > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From anne at openstack.org Thu Aug 16 16:46:44 2018 From: anne at openstack.org (Anne Bertucio) Date: Thu, 16 Aug 2018 09:46:44 -0700 Subject: [Openstack-operators] [community][Rocky] Save the Date: Community Meeting: Rocky + project updates Message-ID: <87363388-E7B9-499B-AC96-D2751504DAEB@openstack.org> Hi all, Save the date for an OpenStack community meeting on August 30 at 3pm UTC. This is the evolution of the “Marketing Community Release Preview” meeting that we’ve had each cycle. While that meeting has always been open to all, we wanted to expand the topics and encourage anyone who was interested in getting updates on the Rocky release or the newer projects at OSF to attend. We’ll cover: —What’s new in Rocky (This info will still be at a fairly high level, so might not be new information if you’re someone who stays up to date in the dev ML or is actively involved in upstream work) —Updates from Airship, Kata Containers, StarlingX, and Zuul —What you can expect at the Berlin Summit in November This meeting will be run over Zoom (look for info closer to the 30th) and will be recorded, so if you can’t make the time, don’t panic! Cheers, Anne Bertucio OpenStack Foundation anne at openstack.org | irc: annabelleB -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Thu Aug 16 17:21:11 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 16 Aug 2018 12:21:11 -0500 Subject: [Openstack-operators] User Committee Nominations Closing Soon! Message-ID: <699C3850-848C-438B-9AFB-FD6A1197EF1D@leafe.com> As I write this, there are just over 12 hours left to get in your nominations for the OpenStack User Committee. Nominations close at August 17, 05:59 UTC. If you are an AUC and thinking about running what's stopping you? If you know of someone who would make a great committee member nominate them (with their permission, of course)! Help make a difference for Operators, Users and the Community! -- Ed Leafe From kennelson11 at gmail.com Thu Aug 16 19:18:04 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 16 Aug 2018 12:18:04 -0700 Subject: [Openstack-operators] [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> Message-ID: Hey :) I created all the puppet openstack repos in the storyboard-dev envrionment and made a project group[1]. I am struggling a bit with finding all of your launchpad projects to perform the migrations through, can you share a list of all of them? -Kendall (diablo_rojo) [1] https://storyboard-dev.openstack.org/#!/project_group/60 On Wed, Aug 15, 2018 at 12:08 AM Tobias Urdin wrote: > Hello Kendall, > > Thanks for your reply, that sounds awesome! > We can then dig around and see how everything looks when all project bugs > are imported to stories. > > I see no issues with being able to move to Storyboard anytime soon if the > feedback for > moving is positive. > > Best regards > > Tobias > > > On 08/14/2018 09:06 PM, Kendall Nelson wrote: > > Hello! > > The error you hit can be resolved by adding launchpadlib to your tox.ini > if I recall correctly.. > > also, if you'd like, I can run a test migration of puppet's launchpad > projects into our storyboard-dev db (where I've done a ton of other test > migrations) if you want to see how it looks/works with a larger db. Just > let me know and I can kick it off. > > As for a time to migrate, if you all are good with it, we usually schedule > for Friday's so there is even less activity. Its a small project config > change and then we just need an infra core to kick off the script once the > change merges. > > -Kendall (diablo_rojo) > > On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin > wrote: > >> Hello all incredible Puppeters, >> >> I've tested setting up an Storyboard instance and test migrated >> puppet-ceph and it went without any issues there using the documentation >> [1] [2] >> with just one minor issue during the SB setup [3]. >> >> My goal is that we will be able to swap to Storyboard during the Stein >> cycle but considering that we have a low activity on >> bugs my opinion is that we could do this swap very easily anything soon >> as long as everybody is in favor of it. >> >> Please let me know what you think about moving to Storyboard? >> If everybody is in favor of it we can request a migration to infra >> according to documentation [2]. >> >> I will continue to test the import of all our project while people are >> collecting their thoughts and feedback :) >> >> Best regards >> Tobias >> >> [1] https://docs.openstack.org/infra/storyboard/install/development.html >> [2] https://docs.openstack.org/infra/storyboard/migration.html >> [3] It failed with an error about launchpadlib not being installed, >> solved with `tox -e venv pip install launchpadlib` >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Thu Aug 16 20:43:07 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Thu, 16 Aug 2018 22:43:07 +0200 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS In-Reply-To: References: Message-ID: Hi Michael, Ok, it was indeed an issue with the create_certificate.sh script for centos that indeed improperly created the client.pem certificate. However now the amphora is responding with a 404 not found when the worker is trying to post /v0.5/plug/vip/10.1.56.12 I know the amphora and the worker are correctly communicating as I can see the amphora-proxy net namespace being set with the subnet ip as eth1 and the vip as eth1:0 I did a tcpdump on each side (worker and amphora) and correctly see the network two ways communication. I checked the 9443 port and it is correctly binded to the gunicorn server using the lb-mgmt-net ip of the amphora. Is there any logs regarding the gunicorn server where I could check why does the amphora is not able to found the api endpoint? Le mar. 14 août 2018 à 19:53, Flint WALRUS a écrit : > I’ll try to check the certificate format and make the appropriate change > if required or let you know if I’ve got something specific regarding that > topic. > > Kind regards, > G. > Le mar. 14 août 2018 à 19:52, Flint WALRUS a > écrit : > >> Hi Michael, thanks a lot for your quick response once again! >> Le mar. 14 août 2018 à 18:21, Michael Johnson a >> écrit : >> >>> Hi there Flint. >>> >>> Octavia fully supports using self-signed certificates and we use those >>> in our gate tests. >>> We do not allow non-TLS authenticated connections in the code, even >>> for lab setups. >>> >>> This is a configuration issue or certificate file format issue. When >>> the controller is attempting to access the controller local >>> certificate file (likely the one we use to prove we are a valid >>> controller to the amphora agent) it is finding a file without the >>> required PEM format header. Check that your certificate files have the >>> "-----BEGIN CERTIFICATE-----" line (maybe they are in binary DER >>> format and just need to be converted). >>> >>> Also for reference, here are the minimal steps we use in our gate >>> tests to setup the TLS certificates: >>> >>> https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L295-L305 >>> >>> Michael >>> On Tue, Aug 14, 2018 at 4:54 AM Flint WALRUS >>> wrote: >>> > >>> > >>> > Hi guys, >>> > >>> > I continue to work on my Octavia integration using Kolla-Ansible and >>> I'm facing a strange behavior. >>> > >>> > As for now I'm working on a POC using restricted HW and SW Capacities, >>> I'm facing a strange issue when trying to launch a new load-balancer. >>> > >>> > When I create a new LB, would it be using CLI or WebUI, the amphora >>> immediately disappear and the LB status switch to ERROR. >>> > >>> > When looking at logs and especially Worker logs, I see that the error >>> seems to be related to the fact that the worker can't connect to the >>> amphora because of a TLS Handshake issue which so trigger the contact >>> timeout and rollback the amphora creation. >>> > >>> > Here is the worker.log relevant trace: >>> > >>> > 2018-08-07 07:33:57.108 24 INFO octavia.controller.queue.endpoint [-] >>> Creating load balancer 'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'... >>> > 2018-08-07 07:33:57.220 24 INFO >>> octavia.controller.worker.tasks.database_tasks [-] Created Amphora in DB >>> with id c20af002-1576-446e-b99f-7af607b8d885 >>> > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local >>> [-] Signing a certificate request using OpenSSL locally. >>> > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local >>> [-] Using CA Certificate from config. >>> > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local >>> [-] Using CA Private Key from config. >>> > 2018-08-07 07:33:57.286 24 INFO octavia.certificates.generator.local >>> [-] Using CA Private Key Passphrase from config. >>> > 2018-08-07 07:34:04.074 24 INFO >>> octavia.controller.worker.tasks.database_tasks [-] Mark ALLOCATED in DB for >>> amphora: c20af002-1576-446e-b99f-7af607b8d885 with compute id >>> 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 for load balancer: >>> bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e >>> > 2018-08-07 07:34:04.253 24 INFO >>> octavia.network.drivers.neutron.allowed_address_pairs [-] Port >>> a7bae53e-0bc6-4830-8c75-646a8baf2885 already exists. Nothing to be done. >>> > 2018-08-07 07:34:19.656 24 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to >>> instance. Retrying.: ConnectTimeout: >>> HTTPSConnectionPool(host='10.1.56.103', port=9443): Max retries exceeded >>> with url: /0.5/plug/vip/192.168.56.100 (Caused by >>> ConnectTimeoutError(>> object at 0x7f4c28415c50>, 'Connection to 10.1.56.103 timed out. (connect >>> timeout=10.0)')) >>> > 2018-08-07 07:34:24.673 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' >>> (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'FAILURE' >>> from state 'RUNNING' >>> > 34 predecessors (most recent first): >>> > Atom >>> 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': >>> }, >>> 'provides': {u'c20af002-1576-446e-b99f-7af607b8d885': >>> >> 0x7f4c284786d0>}} >>> > |__Atom 'reload-lb-after-plug-vip' {'intention': 'EXECUTE', 'state': >>> 'SUCCESS', 'requires': {'loadbalancer_id': >>> u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >>> } >>> > |__Atom >>> 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': >>> []}, >>> 'provides': None} >>> > |__Atom >>> 'octavia.controller.worker.tasks.network_tasks.ApplyQos' {'intention': >>> 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': >>> [], >>> 'loadbalancer': >> 0x7f4c2845fe10>, 'update_dict': {'topology': 'SINGLE'}}, 'provides': None} >>> > |__Atom >>> 'octavia.controller.worker.tasks.network_tasks.PlugVIP' {'intention': >>> 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': >>> }, >>> 'provides': []} >>> > |__Atom >>> 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': >>> , >>> 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >>> } >>> > |__Atom >>> 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' {'intention': >>> 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': >>> }, >>> 'provides': } >>> > |__Flow 'octavia-new-loadbalancer-net-subflow' >>> > |__Atom >>> 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': >>> {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >>> } >>> > |__Flow >>> 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow' >>> > |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': >>> }, 'provides': >>> None} >>> > | |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': >>> u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': >>> } >>> > | |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': >>> , >>> 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >>> None} >>> > | |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-amphora-finalize' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': >>> }, 'provides': >>> None} >>> > | |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-info' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': >>> u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_obj': >>> }, 'provides': >>> } >>> > | |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-compute-wait' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': >>> u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': >>> u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': >>> } >>> > | |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-booting-indb' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': >>> u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': >>> u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} >>> > | |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-computeid' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': >>> u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': >>> u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} >>> > | |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-cert-compute-create' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': >>> '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', >>> 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', >>> 'build_type_priority': 40}, 'provides': >>> u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'} >>> > | |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-cert-expiration' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': >>> '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', >>> 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': None} >>> > | |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS'} >>> > | |__Atom >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': >>> u'c20af002-1576-446e-b99f-7af607b8d885'} >>> > | >>> |__Flow 'STANDALONE-octavia-create-amp-for-lb-subflow' >>> > | >>> |__Atom >>> 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': >>> {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >>> None} >>> > | >>> |__Flow 'STANDALONE-octavia-get-amphora-for-lb-subflow' >>> > | >>> |__Atom >>> 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': >>> {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >>> None} >>> > | >>> |__Flow 'octavia-create-loadbalancer-flow' >>> > |__Atom >>> 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-mark-amp-standalone-indb' >>> {'intention': 'IGNORE', 'state': 'IGNORE'} >>> > |__Atom >>> 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-reload-amphora' >>> {'intention': 'IGNORE', 'state': 'IGNORE', 'requires': {'amphora_id': None}} >>> > |__Flow >>> 'STANDALONE-octavia-post-map-amp-to-lb-subflow' >>> > |__Atom >>> 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': >>> {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >>> > None} >>> > |__Flow >>> 'STANDALONE-octavia-get-amphora-for-lb-subflow' >>> > |__Atom >>> 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' >>> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': >>> {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >>> None} >>> > |__Flow >>> 'octavia-create-loadbalancer-flow': Error: [('PEM routines', >>> 'PEM_read_bio', 'no start line'), ('SSL routines', >>> 'SSL_CTX_use_certificate_file', 'PEM lib')] >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker Traceback (most recent call >>> last): >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", >>> line 53, in _execute_task >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker result = >>> task.execute(**arguments) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", >>> line 240, in execute >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker amphorae_network_config) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", >>> line 219, in execute >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker amphora, loadbalancer, >>> amphorae_network_config) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", >>> line 137, in post_vip_plug >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker net_info) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", >>> line 388, in plug_vip >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker json=net_info) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", >>> line 277, in request >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker r = _request(**reqargs) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/requests/sessions.py", line 565, in post >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker return self.request('POST', >>> url, data=data, json=json, **kwargs) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/requests/sessions.py", line 518, in >>> request >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker resp = self.send(prep, >>> **send_kwargs) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/requests/sessions.py", line 639, in send >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker r = adapter.send(request, >>> **kwargs) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/requests/adapters.py", line 438, in send >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker timeout=timeout >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", >>> line 600, in urlopen >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker chunked=chunked) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", >>> line 345, in _make_request >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker self._validate_conn(conn) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", >>> line 844, in _validate_conn >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker conn.connect() >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connection.py", >>> line 326, in connect >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker ssl_context=context) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py", >>> line 323, in ssl_wrap_socket >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker >>> context.load_cert_chain(certfile, keyfile) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", >>> line 418, in load_cert_chain >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker >>> self._ctx.use_certificate_file(certfile) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 817, in >>> use_certificate_file >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker _raise_current_error() >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker File >>> "/usr/lib/python2.7/site-packages/OpenSSL/_util.py", line 54, in >>> exception_from_error_queue >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker raise exception_type(errors) >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker Error: [('PEM routines', >>> 'PEM_read_bio', 'no start line'), ('SSL routines', >>> 'SSL_CTX_use_certificate_file', 'PEM lib')] >>> > 2018-08-07 07:34:24.673 24 ERROR >>> octavia.controller.worker.controller_worker >>> > 2018-08-07 07:34:24.684 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' >>> (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'REVERTED' >>> from state 'REVERTING' >>> > 2018-08-07 07:34:24.687 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' >>> (1e329fa2-b7c3-4fe2-93f0-d565a18cdbba) transitioned into state 'REVERTED' >>> from state 'REVERTING' >>> > 2018-08-07 07:34:24.691 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'reload-lb-after-plug-vip' (842fb766-dd6f-4b3c-936a-7a5baa82c64f) >>> transitioned into state 'REVERTED' from state 'REVERTING' >>> > 2018-08-07 07:34:24.694 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' >>> (761da17b-4655-46a9-9d67-cb7816c7ea0c) transitioned into state 'REVERTED' >>> from state 'REVERTING' >>> > 2018-08-07 07:34:24.716 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'octavia.controller.worker.tasks.network_tasks.ApplyQos' >>> (fb40f555-1f0a-48fc-b377-f9e791077f65) transitioned into state 'REVERTED' >>> from state 'REVERTING' >>> > 2018-08-07 07:34:24.719 24 WARNING >>> octavia.controller.worker.tasks.network_tasks [-] Unable to plug VIP for >>> loadbalancer id bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e >>> > 2018-08-07 07:34:26.413 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'octavia.controller.worker.tasks.network_tasks.PlugVIP' >>> (ae486972-6e98-4036-9e20-85f335058074) transitioned into state 'REVERTED' >>> from state 'REVERTING' >>> > 2018-08-07 07:34:26.420 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' >>> (79391dee-6011-4145-b544-499e0a632ca1) transitioned into state 'REVERTED' >>> from state 'REVERTING' >>> > 2018-08-07 07:34:26.425 24 WARNING >>> octavia.controller.worker.tasks.network_tasks [-] Deallocating vip >>> 192.168.56.100 >>> > 2018-08-07 07:34:26.577 24 INFO >>> octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security >>> group 3d84ee39-1db9-475f-b048-9fe0f87201c1 from port >>> a7bae53e-0bc6-4830-8c75-646a8baf2885 >>> > 2018-08-07 07:34:27.187 24 INFO >>> octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security >>> group 3d84ee39-1db9-475f-b048-9fe0f87201c1 >>> > 2018-08-07 07:34:27.803 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' >>> (7edf30ee-4338-4725-a86e-e45c0aa0aa58) transitioned into state 'REVERTED' >>> from state 'REVERTING' >>> > 2018-08-07 07:34:27.807 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' >>> (64ac1f84-f8ec-4cc1-b3c8-f18ac8474d73) transitioned into state 'REVERTED' >>> from state 'REVERTING' >>> > 2018-08-07 07:34:27.810 24 WARNING >>> octavia.controller.worker.tasks.database_tasks [-] Reverting amphora role >>> in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 >>> > 2018-08-07 07:34:27.816 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' >>> (2db823a7-c4ac-4622-824b-b709c96b554a) transitioned into state 'REVERTED' >>> from state 'REVERTING' >>> > 2018-08-07 07:34:27.819 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' >>> (86219bac-efd2-4d1f-8141-818f1a5bc6f5) transitioned into state 'REVERTED' >>> from state 'REVERTING' >>> > 2018-08-07 07:34:27.821 24 WARNING >>> octavia.controller.worker.tasks.database_tasks [-] Reverting mark amphora >>> ready in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 and compute id >>> 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 >>> > 2018-08-07 07:34:27.826 24 WARNING >>> octavia.controller.worker.controller_worker [-] Task >>> 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' >>> (baf58e71-eef6-41e0-9bf3-ab9f9554ace2) transitioned into state 'REVERTED' >>> from state 'REVERTING' >>> > 2018-08-07 07:34:27.828 24 WARNING >>> octavia.controller.worker.tasks.amphora_driver_tasks [-] Reverting amphora >>> finalize. >>> > >>> > Is this a problem if I use self-signed CAcert ? >>> > Is their a way to tell octavia to ignore SSL Error while working on a >>> LAB environment? >>> > >>> > As usual, if you need further information feel free to ask. >>> > >>> > Thanks a lot guys. >>> > >>> > >>> > _______________________________________________ >>> > OpenStack-operators mailing list >>> > OpenStack-operators at lists.openstack.org >>> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Fri Aug 17 07:14:51 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Fri, 17 Aug 2018 09:14:51 +0200 Subject: [Openstack-operators] [openstack-dev] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> Message-ID: <7a5ea840-687b-449a-75e0-d5fb9268e46a@binero.se> Hello Kendall, I went through the list of projects [1] and could only really see two things. 1) puppet-rally and puppet-openstack-guide is missing 2) We have some support projects which doesn't really need bug tracking, where some others do.     You can remove puppet-openstack-specs and puppet-openstack-cookiecutter all others would be     nice to still have left so we can track bugs. [2] Best regards Tobias [1] https://storyboard-dev.openstack.org/#!/project_group/60 [2] Keeping puppet-openstack-integration (integration testing) and puppet-openstack_spec_helper (helper for testing).       These two usually has a lot of changes so would be good to be able to track them. On 08/16/2018 09:40 PM, Kendall Nelson wrote: > Hey :) > > I created all the puppet openstack repos in the storyboard-dev > envrionment and made a project group[1]. I am struggling a bit with > finding all of your launchpad projects to perform the migrations > through, can you share a list of all of them? > > -Kendall (diablo_rojo) > > [1] https://storyboard-dev.openstack.org/#!/project_group/60 > > > On Wed, Aug 15, 2018 at 12:08 AM Tobias Urdin > wrote: > > Hello Kendall, > > Thanks for your reply, that sounds awesome! > We can then dig around and see how everything looks when all > project bugs are imported to stories. > > I see no issues with being able to move to Storyboard anytime soon > if the feedback for > moving is positive. > > Best regards > > Tobias > > > On 08/14/2018 09:06 PM, Kendall Nelson wrote: >> Hello! >> >> The error you hit can be resolved by adding launchpadlib to your >> tox.ini if I recall correctly.. >> >> also, if you'd like, I can run a test migration of puppet's >> launchpad projects into our storyboard-dev db (where I've done a >> ton of other test migrations) if you want to see how it >> looks/works with a larger db. Just let me know and I can kick it >> off. >> >> As for a time to migrate, if you all are good with it, we usually >> schedule for Friday's so there is even less activity. Its a small >> project config change and then we just need an infra core to kick >> off the script once the change merges. >> >> -Kendall (diablo_rojo) >> >> On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin >> > wrote: >> >> Hello all incredible Puppeters, >> >> I've tested setting up an Storyboard instance and test migrated >> puppet-ceph and it went without any issues there using the >> documentation >> [1] [2] >> with just one minor issue during the SB setup [3]. >> >> My goal is that we will be able to swap to Storyboard during >> the Stein >> cycle but considering that we have a low activity on >> bugs my opinion is that we could do this swap very easily >> anything soon >> as long as everybody is in favor of it. >> >> Please let me know what you think about moving to Storyboard? >> If everybody is in favor of it we can request a migration to >> infra >> according to documentation [2]. >> >> I will continue to test the import of all our project while >> people are >> collecting their thoughts and feedback :) >> >> Best regards >> Tobias >> >> [1] >> https://docs.openstack.org/infra/storyboard/install/development.html >> [2] https://docs.openstack.org/infra/storyboard/migration.html >> [3] It failed with an error about launchpadlib not being >> installed, >> solved with `tox -e venv pip install launchpadlib` >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Fri Aug 17 10:52:32 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Fri, 17 Aug 2018 12:52:32 +0200 Subject: [Openstack-operators] [kolla][ptg] Denver PTG on-site or virtual Message-ID: Fellow kolleages. In september is Denver PTG, as per the etherpad [0] only 3 contributors confirmed their presence in the PTG, we expected more people to be there as previous PTGs were full of contributors and operators. In the last kolla meeting [1] with discussed if we should make a virtual PTG rather than a on-site one as we will probably reach a bigger number of attendance. This set us in a bad possition as per: If we do an on-site PTG - Small representation for a whole cycle design, being this one larger than usual. - Many people whiling to attend is not able to be there. If we do a virtual PTG - Some people already spend money to travel for kolla PTG - PTG rooms are already reserved for kolla session - No cross project discussion If there are more people who is going to Denver and haven't signed up at the etherpad, please confirm your presence as it will probably influence on this topic. Here is the though question... What kind of PTG do you prefer for this one, virtual or on-site in Denver? CC to Kendall Nelson from the foundation if she could help us on this though decission, given the small time we have until the PTG both ways have some kind of bad consecuencies for both the project and the contributors. [0] https://etherpad.openstack.org/p/kolla-stein-ptg-planning [1] http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-08-15-15.00.log.html#l-13 Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Aug 17 11:27:04 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 17 Aug 2018 12:27:04 +0100 Subject: [Openstack-operators] [openstack-dev] [kolla][ptg] Denver PTG on-site or virtual In-Reply-To: References: Message-ID: As one of the lucky three kolleagues able to make the PTG, here's my position (inline). On 17 August 2018 at 11:52, Eduardo Gonzalez wrote: > Fellow kolleages. > > In september is Denver PTG, as per the etherpad [0] only 3 contributors > confirmed their presence in the PTG, we expected more people to be there as > previous PTGs were full of contributors and operators. > > In the last kolla meeting [1] with discussed if we should make a virtual > PTG rather than a on-site one as we will probably reach a bigger number of > attendance. > > This set us in a bad possition as per: > > If we do an on-site PTG > > - Small representation for a whole cycle design, being this one larger > than usual. > - Many people whiling to attend is not able to be there. > I agree that three is too small a number to justify an on-site PTG. I was planning to split my time between kolla and ironic, so being able to focus on one project would be beneficial to me, assuming the virtual PTG takes place at a different time. I could still split my time if the virtual PTG occurs at the same time. > > If we do a virtual PTG > > - Some people already spend money to travel for kolla PTG > I would be going anyway. > - PTG rooms are already reserved for kolla session > If the virtual PTG occurs at the same time, we could use the (oversized) reserved room to dial into calls. - No cross project discussion > Happy to attend on behalf of kolla and feed back to the team. > > If there are more people who is going to Denver and haven't signed up at > the etherpad, please confirm your presence as it will probably influence on > this topic. > > Here is the though question... > > What kind of PTG do you prefer for this one, virtual or on-site in Denver? > Virtual makes sense to me. > > CC to Kendall Nelson from the foundation if she could help us on this > though decission, given the small time we have until the PTG both ways have > some kind of bad consecuencies for both the project and the contributors. > > [0] https://etherpad.openstack.org/p/kolla-stein-ptg-planning > [1] http://eavesdrop.openstack.org/meetings/kolla/2018/kolla. > 2018-08-15-15.00.log.html#l-13 > > Regards > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Aug 17 11:59:19 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 17 Aug 2018 12:59:19 +0100 Subject: [Openstack-operators] [openstack-dev] [kolla][ptg] Denver PTG on-site or virtual In-Reply-To: References: Message-ID: Whether there is a physical PTG session or not, I'd certainly like to meet up with other folks who are using and/or contributing to Kolla, let's be sure to make time for that. Mark On 17 August 2018 at 12:54, Adam Harwell wrote: > As one of the other two in the etherpad, I will say that I was looking > forward to getting together face to face with other contributors for the > first time (as I am new to the project), but I guess the majority won't > actually be there, and I understand that we need to do what is best for the > majority as well. > I know that at least one or maybe two other people from my team were also > planning to attend some Kolla sessions, so I'll see if I can get them to > sign up. > The other projects I'll be focused on are Octavia and Barbican, and I know > both have been successful with a hybrid approach in the past (providing > video of the room and allowing folks to dial in and contribute, while also > having a number of people present physically). > Since the room is already reserved, I don't see a huge point in avoiding > its use either. > > --Adam > > > On Fri, Aug 17, 2018, 20:27 Mark Goddard wrote: > >> As one of the lucky three kolleagues able to make the PTG, here's my >> position (inline). >> >> On 17 August 2018 at 11:52, Eduardo Gonzalez wrote: >> >>> Fellow kolleages. >>> >>> In september is Denver PTG, as per the etherpad [0] only 3 contributors >>> confirmed their presence in the PTG, we expected more people to be there as >>> previous PTGs were full of contributors and operators. >>> >>> In the last kolla meeting [1] with discussed if we should make a virtual >>> PTG rather than a on-site one as we will probably reach a bigger number of >>> attendance. >>> >>> This set us in a bad possition as per: >>> >>> If we do an on-site PTG >>> >>> - Small representation for a whole cycle design, being this one larger >>> than usual. >>> - Many people whiling to attend is not able to be there. >>> >> >> I agree that three is too small a number to justify an on-site PTG. I was >> planning to split my time between kolla and ironic, so being able to focus >> on one project would be beneficial to me, assuming the virtual PTG takes >> place at a different time. I could still split my time if the virtual PTG >> occurs at the same time. >> >> >>> >>> If we do a virtual PTG >>> >>> - Some people already spend money to travel for kolla PTG >>> >> >> I would be going anyway. >> >> >>> - PTG rooms are already reserved for kolla session >>> >> >> If the virtual PTG occurs at the same time, we could use the (oversized) >> reserved room to dial into calls. >> >> - No cross project discussion >>> >> >> Happy to attend on behalf of kolla and feed back to the team. >> >>> >>> If there are more people who is going to Denver and haven't signed up at >>> the etherpad, please confirm your presence as it will probably influence on >>> this topic. >>> >>> Here is the though question... >>> >>> What kind of PTG do you prefer for this one, virtual or on-site in >>> Denver? >>> >> >> Virtual makes sense to me. >> >>> >>> CC to Kendall Nelson from the foundation if she could help us on this >>> though decission, given the small time we have until the PTG both ways have >>> some kind of bad consecuencies for both the project and the contributors. >>> >>> [0] https://etherpad.openstack.org/p/kolla-stein-ptg-planning >>> [1] http://eavesdrop.openstack.org/meetings/kolla/2018/kolla. >>> 2018-08-15-15.00.log.html#l-13 >>> >>> Regards >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Aug 17 21:02:32 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 17 Aug 2018 14:02:32 -0700 Subject: [Openstack-operators] [openstack-dev] [puppet] migrating to storyboard In-Reply-To: <7a5ea840-687b-449a-75e0-d5fb9268e46a@binero.se> References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> <7a5ea840-687b-449a-75e0-d5fb9268e46a@binero.se> Message-ID: On Fri, Aug 17, 2018 at 12:15 AM Tobias Urdin wrote: > Hello Kendall, > > I went through the list of projects [1] and could only really see two > things. > > 1) puppet-rally and puppet-openstack-guide is missing > > I had created the projects, but missed adding them to the group. They should be there now :) > 2) We have some support projects which doesn't really need bug tracking, > where some others do. > You can remove puppet-openstack-specs and > puppet-openstack-cookiecutter all others would be > nice to still have left so we can track bugs. [2] > > i can remove them from the group if you want, but I don't think I can delete the projects entirely. > Best regards > Tobias > > [1] https://storyboard-dev.openstack.org/#!/project_group/60 > [2] Keeping puppet-openstack-integration (integration testing) and > puppet-openstack_spec_helper (helper for testing). > These two usually has a lot of changes so would be good to be able > to track them. > > > On 08/16/2018 09:40 PM, Kendall Nelson wrote: > > Hey :) > > I created all the puppet openstack repos in the storyboard-dev envrionment > and made a project group[1]. I am struggling a bit with finding all of your > launchpad projects to perform the migrations through, can you share a list > of all of them? > > -Kendall (diablo_rojo) > > [1] https://storyboard-dev.openstack.org/#!/project_group/60 > > On Wed, Aug 15, 2018 at 12:08 AM Tobias Urdin > wrote: > >> Hello Kendall, >> >> Thanks for your reply, that sounds awesome! >> We can then dig around and see how everything looks when all project bugs >> are imported to stories. >> >> I see no issues with being able to move to Storyboard anytime soon if the >> feedback for >> moving is positive. >> >> Best regards >> >> Tobias >> >> >> On 08/14/2018 09:06 PM, Kendall Nelson wrote: >> >> Hello! >> >> The error you hit can be resolved by adding launchpadlib to your tox.ini >> if I recall correctly.. >> >> also, if you'd like, I can run a test migration of puppet's launchpad >> projects into our storyboard-dev db (where I've done a ton of other test >> migrations) if you want to see how it looks/works with a larger db. Just >> let me know and I can kick it off. >> >> As for a time to migrate, if you all are good with it, we usually >> schedule for Friday's so there is even less activity. Its a small project >> config change and then we just need an infra core to kick off the script >> once the change merges. >> >> -Kendall (diablo_rojo) >> >> On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin >> wrote: >> >>> Hello all incredible Puppeters, >>> >>> I've tested setting up an Storyboard instance and test migrated >>> puppet-ceph and it went without any issues there using the documentation >>> [1] [2] >>> with just one minor issue during the SB setup [3]. >>> >>> My goal is that we will be able to swap to Storyboard during the Stein >>> cycle but considering that we have a low activity on >>> bugs my opinion is that we could do this swap very easily anything soon >>> as long as everybody is in favor of it. >>> >>> Please let me know what you think about moving to Storyboard? >>> If everybody is in favor of it we can request a migration to infra >>> according to documentation [2]. >>> >>> I will continue to test the import of all our project while people are >>> collecting their thoughts and feedback :) >>> >>> Best regards >>> Tobias >>> >>> [1] https://docs.openstack.org/infra/storyboard/install/development.html >>> [2] https://docs.openstack.org/infra/storyboard/migration.html >>> [3] It failed with an error about launchpadlib not being installed, >>> solved with `tox -e venv pip install launchpadlib` >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev If that's all good now, I can kick off test migrations but having a complete list of the launchpad projects you maintain and use would be super helpful so I don't miss any. Is there somewhere this is documented? Or can you send me a list? -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Aug 17 22:05:14 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 17 Aug 2018 15:05:14 -0700 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS In-Reply-To: References: Message-ID: Yes, the amphora-agent logs to both the amphora-agent.log and syslog in /var/log inside the amphora. Michael On Thu, Aug 16, 2018 at 1:43 PM Flint WALRUS wrote: > > Hi Michael, > > Ok, it was indeed an issue with the create_certificate.sh script for centos that indeed improperly created the client.pem certificate. > > However now the amphora is responding with a 404 not found when the worker is trying to post /v0.5/plug/vip/10.1.56.12 > > I know the amphora and the worker are correctly communicating as I can see the amphora-proxy net namespace being set with the subnet ip as eth1 and the vip as eth1:0 > > I did a tcpdump on each side (worker and amphora) and correctly see the network two ways communication. > > I checked the 9443 port and it is correctly binded to the gunicorn server using the lb-mgmt-net ip of the amphora. > > Is there any logs regarding the gunicorn server where I could check why does the amphora is not able to found the api endpoint? > Le mar. 14 août 2018 à 19:53, Flint WALRUS a écrit : >> >> I’ll try to check the certificate format and make the appropriate change if required or let you know if I’ve got something specific regarding that topic. >> >> Kind regards, >> G. >> Le mar. 14 août 2018 à 19:52, Flint WALRUS a écrit : >>> >>> Hi Michael, thanks a lot for your quick response once again! >>> Le mar. 14 août 2018 à 18:21, Michael Johnson a écrit : >>>> >>>> Hi there Flint. >>>> >>>> Octavia fully supports using self-signed certificates and we use those >>>> in our gate tests. >>>> We do not allow non-TLS authenticated connections in the code, even >>>> for lab setups. >>>> >>>> This is a configuration issue or certificate file format issue. When >>>> the controller is attempting to access the controller local >>>> certificate file (likely the one we use to prove we are a valid >>>> controller to the amphora agent) it is finding a file without the >>>> required PEM format header. Check that your certificate files have the >>>> "-----BEGIN CERTIFICATE-----" line (maybe they are in binary DER >>>> format and just need to be converted). >>>> >>>> Also for reference, here are the minimal steps we use in our gate >>>> tests to setup the TLS certificates: >>>> https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L295-L305 >>>> >>>> Michael >>>> On Tue, Aug 14, 2018 at 4:54 AM Flint WALRUS wrote: >>>> > >>>> > >>>> > Hi guys, >>>> > >>>> > I continue to work on my Octavia integration using Kolla-Ansible and I'm facing a strange behavior. >>>> > >>>> > As for now I'm working on a POC using restricted HW and SW Capacities, I'm facing a strange issue when trying to launch a new load-balancer. >>>> > >>>> > When I create a new LB, would it be using CLI or WebUI, the amphora immediately disappear and the LB status switch to ERROR. >>>> > >>>> > When looking at logs and especially Worker logs, I see that the error seems to be related to the fact that the worker can't connect to the amphora because of a TLS Handshake issue which so trigger the contact timeout and rollback the amphora creation. >>>> > >>>> > Here is the worker.log relevant trace: >>>> > >>>> > 2018-08-07 07:33:57.108 24 INFO octavia.controller.queue.endpoint [-] Creating load balancer 'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'... >>>> > 2018-08-07 07:33:57.220 24 INFO octavia.controller.worker.tasks.database_tasks [-] Created Amphora in DB with id c20af002-1576-446e-b99f-7af607b8d885 >>>> > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. >>>> > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] Using CA Certificate from config. >>>> > 2018-08-07 07:33:57.285 24 INFO octavia.certificates.generator.local [-] Using CA Private Key from config. >>>> > 2018-08-07 07:33:57.286 24 INFO octavia.certificates.generator.local [-] Using CA Private Key Passphrase from config. >>>> > 2018-08-07 07:34:04.074 24 INFO octavia.controller.worker.tasks.database_tasks [-] Mark ALLOCATED in DB for amphora: c20af002-1576-446e-b99f-7af607b8d885 with compute id 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 for load balancer: bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e >>>> > 2018-08-07 07:34:04.253 24 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Port a7bae53e-0bc6-4830-8c75-646a8baf2885 already exists. Nothing to be done. >>>> > 2018-08-07 07:34:19.656 24 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: ConnectTimeout: HTTPSConnectionPool(host='10.1.56.103', port=9443): Max retries exceeded with url: /0.5/plug/vip/192.168.56.100 (Caused by ConnectTimeoutError(, 'Connection to 10.1.56.103 timed out. (connect timeout=10.0)')) >>>> > 2018-08-07 07:34:24.673 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'FAILURE' from state 'RUNNING' >>>> > 34 predecessors (most recent first): >>>> > Atom 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': {u'c20af002-1576-446e-b99f-7af607b8d885': }} >>>> > |__Atom 'reload-lb-after-plug-vip' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': } >>>> > |__Atom 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': []}, 'provides': None} >>>> > |__Atom 'octavia.controller.worker.tasks.network_tasks.ApplyQos' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': [], 'loadbalancer': , 'update_dict': {'topology': 'SINGLE'}}, 'provides': None} >>>> > |__Atom 'octavia.controller.worker.tasks.network_tasks.PlugVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': []} >>>> > |__Atom 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': , 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': } >>>> > |__Atom 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } >>>> > |__Flow 'octavia-new-loadbalancer-net-subflow' >>>> > |__Atom 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': } >>>> > |__Flow 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow' >>>> > |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': }, 'provides': None} >>>> > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': } >>>> > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': , 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} >>>> > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-amphora-finalize' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': }, 'provides': None} >>>> > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-info' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_obj': }, 'provides': } >>>> > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-compute-wait' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': } >>>> > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-booting-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} >>>> > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-computeid' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} >>>> > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-cert-compute-create' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', 'build_type_priority': 40}, 'provides': u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'} >>>> > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-cert-expiration' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': None} >>>> > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' {'intention': 'EXECUTE', 'state': 'SUCCESS'} >>>> > | |__Atom 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': u'c20af002-1576-446e-b99f-7af607b8d885'} >>>> > | |__Flow 'STANDALONE-octavia-create-amp-for-lb-subflow' >>>> > | |__Atom 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} >>>> > | |__Flow 'STANDALONE-octavia-get-amphora-for-lb-subflow' >>>> > | |__Atom 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} >>>> > | |__Flow 'octavia-create-loadbalancer-flow' >>>> > |__Atom 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-mark-amp-standalone-indb' {'intention': 'IGNORE', 'state': 'IGNORE'} >>>> > |__Atom 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-reload-amphora' {'intention': 'IGNORE', 'state': 'IGNORE', 'requires': {'amphora_id': None}} >>>> > |__Flow 'STANDALONE-octavia-post-map-amp-to-lb-subflow' >>>> > |__Atom 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': >>>> > None} >>>> > |__Flow 'STANDALONE-octavia-get-amphora-for-lb-subflow' >>>> > |__Atom 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': None} >>>> > |__Flow 'octavia-create-loadbalancer-flow': Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')] >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker Traceback (most recent call last): >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker result = task.execute(**arguments) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 240, in execute >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker amphorae_network_config) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 219, in execute >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker amphora, loadbalancer, amphorae_network_config) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 137, in post_vip_plug >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker net_info) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 388, in plug_vip >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker json=net_info) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 277, in request >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker r = _request(**reqargs) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 565, in post >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker return self.request('POST', url, data=data, json=json, **kwargs) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 518, in request >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker resp = self.send(prep, **send_kwargs) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 639, in send >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker r = adapter.send(request, **kwargs) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 438, in send >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker timeout=timeout >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 600, in urlopen >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker chunked=chunked) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 345, in _make_request >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker self._validate_conn(conn) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 844, in _validate_conn >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker conn.connect() >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connection.py", line 326, in connect >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker ssl_context=context) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py", line 323, in ssl_wrap_socket >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker context.load_cert_chain(certfile, keyfile) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 418, in load_cert_chain >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker self._ctx.use_certificate_file(certfile) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 817, in use_certificate_file >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker _raise_current_error() >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/site-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker raise exception_type(errors) >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')] >>>> > 2018-08-07 07:34:24.673 24 ERROR octavia.controller.worker.controller_worker >>>> > 2018-08-07 07:34:24.684 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:24.687 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' (1e329fa2-b7c3-4fe2-93f0-d565a18cdbba) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:24.691 24 WARNING octavia.controller.worker.controller_worker [-] Task 'reload-lb-after-plug-vip' (842fb766-dd6f-4b3c-936a-7a5baa82c64f) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:24.694 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' (761da17b-4655-46a9-9d67-cb7816c7ea0c) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:24.716 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.ApplyQos' (fb40f555-1f0a-48fc-b377-f9e791077f65) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:24.719 24 WARNING octavia.controller.worker.tasks.network_tasks [-] Unable to plug VIP for loadbalancer id bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e >>>> > 2018-08-07 07:34:26.413 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.PlugVIP' (ae486972-6e98-4036-9e20-85f335058074) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:26.420 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' (79391dee-6011-4145-b544-499e0a632ca1) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:26.425 24 WARNING octavia.controller.worker.tasks.network_tasks [-] Deallocating vip 192.168.56.100 >>>> > 2018-08-07 07:34:26.577 24 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security group 3d84ee39-1db9-475f-b048-9fe0f87201c1 from port a7bae53e-0bc6-4830-8c75-646a8baf2885 >>>> > 2018-08-07 07:34:27.187 24 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security group 3d84ee39-1db9-475f-b048-9fe0f87201c1 >>>> > 2018-08-07 07:34:27.803 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' (7edf30ee-4338-4725-a86e-e45c0aa0aa58) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:27.807 24 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' (64ac1f84-f8ec-4cc1-b3c8-f18ac8474d73) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:27.810 24 WARNING octavia.controller.worker.tasks.database_tasks [-] Reverting amphora role in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 >>>> > 2018-08-07 07:34:27.816 24 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' (2db823a7-c4ac-4622-824b-b709c96b554a) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:27.819 24 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' (86219bac-efd2-4d1f-8141-818f1a5bc6f5) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:27.821 24 WARNING octavia.controller.worker.tasks.database_tasks [-] Reverting mark amphora ready in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 and compute id 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 >>>> > 2018-08-07 07:34:27.826 24 WARNING octavia.controller.worker.controller_worker [-] Task 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' (baf58e71-eef6-41e0-9bf3-ab9f9554ace2) transitioned into state 'REVERTED' from state 'REVERTING' >>>> > 2018-08-07 07:34:27.828 24 WARNING octavia.controller.worker.tasks.amphora_driver_tasks [-] Reverting amphora finalize. >>>> > >>>> > Is this a problem if I use self-signed CAcert ? >>>> > Is their a way to tell octavia to ignore SSL Error while working on a LAB environment? >>>> > >>>> > As usual, if you need further information feel free to ask. >>>> > >>>> > Thanks a lot guys. >>>> > >>>> > >>>> > _______________________________________________ >>>> > OpenStack-operators mailing list >>>> > OpenStack-operators at lists.openstack.org >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From gael.therond at gmail.com Fri Aug 17 22:31:06 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Sat, 18 Aug 2018 00:31:06 +0200 Subject: [Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS In-Reply-To: References: Message-ID: Ok, I’ll have a look at the syslog logs as there was nothing but the 404 inside the agent logs. I’ll not be able to get my hand on my lab until at least the middle of the next week so don’t worry if I’m not coming back to you with my results. It’s not that I solved it, just that I won’t get my lab available as I have to share it to one of my colleagues working on keystone SSO signing :-) Have a nice weekend and thanks again for your kind support. Le sam. 18 août 2018 à 00:05, Michael Johnson a écrit : > Yes, the amphora-agent logs to both the amphora-agent.log and syslog > in /var/log inside the amphora. > > Michael > On Thu, Aug 16, 2018 at 1:43 PM Flint WALRUS > wrote: > > > > Hi Michael, > > > > Ok, it was indeed an issue with the create_certificate.sh script for > centos that indeed improperly created the client.pem certificate. > > > > However now the amphora is responding with a 404 not found when the > worker is trying to post /v0.5/plug/vip/10.1.56.12 > > > > I know the amphora and the worker are correctly communicating as I can > see the amphora-proxy net namespace being set with the subnet ip as eth1 > and the vip as eth1:0 > > > > I did a tcpdump on each side (worker and amphora) and correctly see the > network two ways communication. > > > > I checked the 9443 port and it is correctly binded to the gunicorn > server using the lb-mgmt-net ip of the amphora. > > > > Is there any logs regarding the gunicorn server where I could check why > does the amphora is not able to found the api endpoint? > > Le mar. 14 août 2018 à 19:53, Flint WALRUS a > écrit : > >> > >> I’ll try to check the certificate format and make the appropriate > change if required or let you know if I’ve got something specific regarding > that topic. > >> > >> Kind regards, > >> G. > >> Le mar. 14 août 2018 à 19:52, Flint WALRUS a > écrit : > >>> > >>> Hi Michael, thanks a lot for your quick response once again! > >>> Le mar. 14 août 2018 à 18:21, Michael Johnson a > écrit : > >>>> > >>>> Hi there Flint. > >>>> > >>>> Octavia fully supports using self-signed certificates and we use those > >>>> in our gate tests. > >>>> We do not allow non-TLS authenticated connections in the code, even > >>>> for lab setups. > >>>> > >>>> This is a configuration issue or certificate file format issue. When > >>>> the controller is attempting to access the controller local > >>>> certificate file (likely the one we use to prove we are a valid > >>>> controller to the amphora agent) it is finding a file without the > >>>> required PEM format header. Check that your certificate files have the > >>>> "-----BEGIN CERTIFICATE-----" line (maybe they are in binary DER > >>>> format and just need to be converted). > >>>> > >>>> Also for reference, here are the minimal steps we use in our gate > >>>> tests to setup the TLS certificates: > >>>> > https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L295-L305 > >>>> > >>>> Michael > >>>> On Tue, Aug 14, 2018 at 4:54 AM Flint WALRUS > wrote: > >>>> > > >>>> > > >>>> > Hi guys, > >>>> > > >>>> > I continue to work on my Octavia integration using Kolla-Ansible > and I'm facing a strange behavior. > >>>> > > >>>> > As for now I'm working on a POC using restricted HW and SW > Capacities, I'm facing a strange issue when trying to launch a new > load-balancer. > >>>> > > >>>> > When I create a new LB, would it be using CLI or WebUI, the amphora > immediately disappear and the LB status switch to ERROR. > >>>> > > >>>> > When looking at logs and especially Worker logs, I see that the > error seems to be related to the fact that the worker can't connect to the > amphora because of a TLS Handshake issue which so trigger the contact > timeout and rollback the amphora creation. > >>>> > > >>>> > Here is the worker.log relevant trace: > >>>> > > >>>> > 2018-08-07 07:33:57.108 24 INFO octavia.controller.queue.endpoint > [-] Creating load balancer 'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'... > >>>> > 2018-08-07 07:33:57.220 24 INFO > octavia.controller.worker.tasks.database_tasks [-] Created Amphora in DB > with id c20af002-1576-446e-b99f-7af607b8d885 > >>>> > 2018-08-07 07:33:57.285 24 INFO > octavia.certificates.generator.local [-] Signing a certificate request > using OpenSSL locally. > >>>> > 2018-08-07 07:33:57.285 24 INFO > octavia.certificates.generator.local [-] Using CA Certificate from config. > >>>> > 2018-08-07 07:33:57.285 24 INFO > octavia.certificates.generator.local [-] Using CA Private Key from config. > >>>> > 2018-08-07 07:33:57.286 24 INFO > octavia.certificates.generator.local [-] Using CA Private Key Passphrase > from config. > >>>> > 2018-08-07 07:34:04.074 24 INFO > octavia.controller.worker.tasks.database_tasks [-] Mark ALLOCATED in DB for > amphora: c20af002-1576-446e-b99f-7af607b8d885 with compute id > 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 for load balancer: > bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e > >>>> > 2018-08-07 07:34:04.253 24 INFO > octavia.network.drivers.neutron.allowed_address_pairs [-] Port > a7bae53e-0bc6-4830-8c75-646a8baf2885 already exists. Nothing to be done. > >>>> > 2018-08-07 07:34:19.656 24 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to > instance. Retrying.: ConnectTimeout: > HTTPSConnectionPool(host='10.1.56.103', port=9443): Max retries exceeded > with url: /0.5/plug/vip/192.168.56.100 (Caused by > ConnectTimeoutError( object at 0x7f4c28415c50>, 'Connection to 10.1.56.103 timed out. (connect > timeout=10.0)')) > >>>> > 2018-08-07 07:34:24.673 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' > (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'FAILURE' > from state 'RUNNING' > >>>> > 34 predecessors (most recent first): > >>>> > Atom > 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': > }, > 'provides': {u'c20af002-1576-446e-b99f-7af607b8d885': > 0x7f4c284786d0>}} > >>>> > |__Atom 'reload-lb-after-plug-vip' {'intention': 'EXECUTE', > 'state': 'SUCCESS', 'requires': {'loadbalancer_id': > u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > } > >>>> > |__Atom > 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': > []}, > 'provides': None} > >>>> > |__Atom > 'octavia.controller.worker.tasks.network_tasks.ApplyQos' {'intention': > 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': > [], > 'loadbalancer': 0x7f4c2845fe10>, 'update_dict': {'topology': 'SINGLE'}}, 'provides': None} > >>>> > |__Atom > 'octavia.controller.worker.tasks.network_tasks.PlugVIP' {'intention': > 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': > }, > 'provides': []} > >>>> > |__Atom > 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': > , > 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > } > >>>> > |__Atom > 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' {'intention': > 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': > }, > 'provides': } > >>>> > |__Flow 'octavia-new-loadbalancer-net-subflow' > >>>> > |__Atom > 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': > {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > } > >>>> > |__Flow > 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow' > >>>> > |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': > }, 'provides': > None} > >>>> > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': > u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': > } > >>>> > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': > , > 'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > None} > >>>> > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-amphora-finalize' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora': > }, 'provides': > None} > >>>> > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-info' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': > u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_obj': > }, 'provides': > } > >>>> > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-compute-wait' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': > u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': > u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': > } > >>>> > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-booting-indb' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': > u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': > u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} > >>>> > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-amphora-computeid' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amphora_id': > u'c20af002-1576-446e-b99f-7af607b8d885', 'compute_id': > u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'}, 'provides': None} > >>>> > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-cert-compute-create' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': > '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', > 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885', > 'build_type_priority': 40}, 'provides': > u'3bbabfa6-366f-46a4-8fb2-1ec7158e19f1'} > >>>> > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-update-cert-expiration' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'server_pem': > '-----BEGIN CERTIFICATE-----\n REDACTED \n-----END RSA PRIVATE KEY-----\n', > 'amphora_id': u'c20af002-1576-446e-b99f-7af607b8d885'}, 'provides': None} > >>>> > | |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' > {'intention': 'EXECUTE', 'state': 'SUCCESS'} > >>>> > | > |__Atom > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': > u'c20af002-1576-446e-b99f-7af607b8d885'} > >>>> > | > |__Flow 'STANDALONE-octavia-create-amp-for-lb-subflow' > >>>> > | > |__Atom > 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': > {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > None} > >>>> > | > |__Flow 'STANDALONE-octavia-get-amphora-for-lb-subflow' > >>>> > | > |__Atom > 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': > {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > None} > >>>> > | > |__Flow 'octavia-create-loadbalancer-flow' > >>>> > |__Atom > 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-mark-amp-standalone-indb' > {'intention': 'IGNORE', 'state': 'IGNORE'} > >>>> > |__Atom > 'STANDALONE-octavia-post-map-amp-to-lb-subflow-octavia-reload-amphora' > {'intention': 'IGNORE', 'state': 'IGNORE', 'requires': {'amphora_id': None}} > >>>> > |__Flow > 'STANDALONE-octavia-post-map-amp-to-lb-subflow' > >>>> > |__Atom > 'STANDALONE-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': > {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > >>>> > None} > >>>> > |__Flow > 'STANDALONE-octavia-get-amphora-for-lb-subflow' > >>>> > |__Atom > 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': > {'loadbalancer_id': u'bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e'}, 'provides': > None} > >>>> > |__Flow > 'octavia-create-loadbalancer-flow': Error: [('PEM routines', > 'PEM_read_bio', 'no start line'), ('SSL routines', > 'SSL_CTX_use_certificate_file', 'PEM lib')] > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker Traceback (most recent call > last): > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", > line 53, in _execute_task > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker result = > task.execute(**arguments) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", > line 240, in execute > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker amphorae_network_config) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", > line 219, in execute > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker amphora, loadbalancer, > amphorae_network_config) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", > line 137, in post_vip_plug > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker net_info) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", > line 388, in plug_vip > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker json=net_info) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", > line 277, in request > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker r = _request(**reqargs) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/sessions.py", line 565, in post > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker return self.request('POST', > url, data=data, json=json, **kwargs) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/sessions.py", line 518, in > request > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker resp = self.send(prep, > **send_kwargs) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/sessions.py", line 639, in send > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker r = adapter.send(request, > **kwargs) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/adapters.py", line 438, in send > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker timeout=timeout > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", > line 600, in urlopen > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker chunked=chunked) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", > line 345, in _make_request > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker self._validate_conn(conn) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", > line 844, in _validate_conn > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker conn.connect() > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connection.py", > line 326, in connect > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker ssl_context=context) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py", > line 323, in ssl_wrap_socket > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker > context.load_cert_chain(certfile, keyfile) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", > line 418, in load_cert_chain > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker > self._ctx.use_certificate_file(certfile) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 817, in > use_certificate_file > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker _raise_current_error() > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/site-packages/OpenSSL/_util.py", line 54, in > exception_from_error_queue > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker raise exception_type(errors) > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker Error: [('PEM routines', > 'PEM_read_bio', 'no start line'), ('SSL routines', > 'SSL_CTX_use_certificate_file', 'PEM lib')] > >>>> > 2018-08-07 07:34:24.673 24 ERROR > octavia.controller.worker.controller_worker > >>>> > 2018-08-07 07:34:24.684 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' > (c86bbab6-87d5-4930-8832-5511d42efe3e) transitioned into state 'REVERTED' > from state 'REVERTING' > >>>> > 2018-08-07 07:34:24.687 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' > (1e329fa2-b7c3-4fe2-93f0-d565a18cdbba) transitioned into state 'REVERTED' > from state 'REVERTING' > >>>> > 2018-08-07 07:34:24.691 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'reload-lb-after-plug-vip' (842fb766-dd6f-4b3c-936a-7a5baa82c64f) > transitioned into state 'REVERTED' from state 'REVERTING' > >>>> > 2018-08-07 07:34:24.694 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' > (761da17b-4655-46a9-9d67-cb7816c7ea0c) transitioned into state 'REVERTED' > from state 'REVERTING' > >>>> > 2018-08-07 07:34:24.716 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.network_tasks.ApplyQos' > (fb40f555-1f0a-48fc-b377-f9e791077f65) transitioned into state 'REVERTED' > from state 'REVERTING' > >>>> > 2018-08-07 07:34:24.719 24 WARNING > octavia.controller.worker.tasks.network_tasks [-] Unable to plug VIP for > loadbalancer id bf7ab6e4-081a-4b4d-b7a0-c176a9cb995e > >>>> > 2018-08-07 07:34:26.413 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.network_tasks.PlugVIP' > (ae486972-6e98-4036-9e20-85f335058074) transitioned into state 'REVERTED' > from state 'REVERTING' > >>>> > 2018-08-07 07:34:26.420 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.database_tasks.UpdateVIPAfterAllocation' > (79391dee-6011-4145-b544-499e0a632ca1) transitioned into state 'REVERTED' > from state 'REVERTING' > >>>> > 2018-08-07 07:34:26.425 24 WARNING > octavia.controller.worker.tasks.network_tasks [-] Deallocating vip > 192.168.56.100 > >>>> > 2018-08-07 07:34:26.577 24 INFO > octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security > group 3d84ee39-1db9-475f-b048-9fe0f87201c1 from port > a7bae53e-0bc6-4830-8c75-646a8baf2885 > >>>> > 2018-08-07 07:34:27.187 24 INFO > octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security > group 3d84ee39-1db9-475f-b048-9fe0f87201c1 > >>>> > 2018-08-07 07:34:27.803 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' > (7edf30ee-4338-4725-a86e-e45c0aa0aa58) transitioned into state 'REVERTED' > from state 'REVERTING' > >>>> > 2018-08-07 07:34:27.807 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'octavia-post-loadbalancer-amp_association-subflow-octavia-post-loadbalancer-amp_association-subflow-reload-lb-after-amp-assoc' > (64ac1f84-f8ec-4cc1-b3c8-f18ac8474d73) transitioned into state 'REVERTED' > from state 'REVERTING' > >>>> > 2018-08-07 07:34:27.810 24 WARNING > octavia.controller.worker.tasks.database_tasks [-] Reverting amphora role > in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 > >>>> > 2018-08-07 07:34:27.816 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amp-standalone-indb' > (2db823a7-c4ac-4622-824b-b709c96b554a) transitioned into state 'REVERTED' > from state 'REVERTING' > >>>> > 2018-08-07 07:34:27.819 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-reload-amphora' > (86219bac-efd2-4d1f-8141-818f1a5bc6f5) transitioned into state 'REVERTED' > from state 'REVERTING' > >>>> > 2018-08-07 07:34:27.821 24 WARNING > octavia.controller.worker.tasks.database_tasks [-] Reverting mark amphora > ready in DB for amp id c20af002-1576-446e-b99f-7af607b8d885 and compute id > 3bbabfa6-366f-46a4-8fb2-1ec7158e19f1 > >>>> > 2018-08-07 07:34:27.826 24 WARNING > octavia.controller.worker.controller_worker [-] Task > 'STANDALONE-octavia-create-amp-for-lb-subflow-octavia-mark-amphora-allocated-indb' > (baf58e71-eef6-41e0-9bf3-ab9f9554ace2) transitioned into state 'REVERTED' > from state 'REVERTING' > >>>> > 2018-08-07 07:34:27.828 24 WARNING > octavia.controller.worker.tasks.amphora_driver_tasks [-] Reverting amphora > finalize. > >>>> > > >>>> > Is this a problem if I use self-signed CAcert ? > >>>> > Is their a way to tell octavia to ignore SSL Error while working on > a LAB environment? > >>>> > > >>>> > As usual, if you need further information feel free to ask. > >>>> > > >>>> > Thanks a lot guys. > >>>> > > >>>> > > >>>> > _______________________________________________ > >>>> > OpenStack-operators mailing list > >>>> > OpenStack-operators at lists.openstack.org > >>>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Sun Aug 19 03:09:45 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 19 Aug 2018 11:09:45 +0800 Subject: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures? In-Reply-To: <16524beca00.2784.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net> References: <81d14679c181cdc1a252570529ca5c4b@bitskrieg.net> <25e24abf-aebc-2881-9981-7f9683ffc700@gmail.com> <06029fcf4648d3aa784783389e986a8d@bitskrieg.net> <26839d31-18b8-ba76-56cc-8bbe4b73fc37@gmail.com> <34763ede-45a3-2d22-37a1-c3fc75ea84d2@gmail.com> <4fbe5786f0765d97229147cc1137a6ce@bitskrieg.net> <20180809172447.GB19251@redhat.com> <16524beca00.2784.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net> Message-ID: <8c92910f-a8a0-5f1c-41b0-784ff2c3d00a@gmail.com> On 8/11/2018 12:50 AM, Chris Apsey wrote: > This sounds promising and there seems to be a feasible way to do this, > but it also sounds like a decent amount of effort and would be a new > feature in a future release rather than a bugfix - am I correct in that > assessment? Yes I'd say it's a blueprint and not a bug fix - it's not something we'd backport to stable branches upstream, for example. -- Thanks, Matt From mriedemos at gmail.com Sun Aug 19 03:21:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 18 Aug 2018 22:21:03 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] deployment question consultation In-Reply-To: References: Message-ID: <51390cd8-8495-5c34-c109-a25e5165e4f3@gmail.com> +ops list On 8/18/2018 10:20 PM, Matt Riedemann wrote: > On 8/13/2018 9:30 PM, Rambo wrote: >>         1.Only in one region situation,what will happen in the cloud >> as expansion of cluster size?Then how solve it?If have the limit >> physical node number under the one region situation?How many nodes >> would be the best in one regione? > > This question seems a bit too open-ended and completely subjective. > >>         2.When to use cellV2 is most suitable in cloud? > > When this has been asked in the past, the best answer I've heard is, > "whatever your current DB and MQ limits are for nova". So if that's > about 200 hosts before the DB/MQ are struggling, then that could a cell. > For reference, CERN has 70 cells with ~200 hosts per cell. However, at > least one public cloud is approaching cells with fewer cells and > thousands of hosts per cell. So it varies based on where your > limitations lie. Also note that cells do not have to be defined by DB/MQ > limits, they can also be used as a way to shard hardware and instance > (flavor) types. For example, generation 1 hardware in cell1, gen2 > hardware in cell2, etc. > >>         3.How to shorten the time of batch creation of instance? > > This again is completely subjective. It would depend on the > configuration, size of nova deployment, size of hardware, available > capacity, etc. Have you done profiling to point out *specific* problem > areas during multi-create, for example, are you packing VMs onto as few > hosts as possible to reduce costs? And if so, are you hitting problems > with that due to rescheduling the server build because you have multiple > scheduler workers picking the same host(s) for a subset of the VMs in > the request? Or are you hitting RPC timeouts during select_destinations? > If so, that might be related to the problem described in [1]. > > [1] https://review.openstack.org/#/c/510235/ > -- Thanks, Matt From tobias.urdin at binero.se Mon Aug 20 07:12:29 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 09:12:29 +0200 Subject: [Openstack-operators] [openstack-dev] [puppet] migrating to storyboard In-Reply-To: References: <971a4543-d7a9-f602-9173-8b5fcf45cb11@binero.se> <01cc050e-c74b-a133-4020-6e0f219b7158@binero.se> <7a5ea840-687b-449a-75e0-d5fb9268e46a@binero.se> Message-ID: <07896a14-6327-8a9a-30f8-c58a8aa9c5eb@binero.se> Hello Kendall, I think you can just leave them in the group then, at your convenience. If they are there we can start using them if so. Best regards Tobias On 08/17/2018 11:08 PM, Kendall Nelson wrote: > > > On Fri, Aug 17, 2018 at 12:15 AM Tobias Urdin > wrote: > > Hello Kendall, > > I went through the list of projects [1] and could only really see > two things. > > 1) puppet-rally and puppet-openstack-guide is missing > > I had created the projects, but missed adding them to the group. They > should be there now :) > > 2) We have some support projects which doesn't really need bug > tracking, where some others do. >     You can remove puppet-openstack-specs and > puppet-openstack-cookiecutter all others would be >     nice to still have left so we can track bugs. [2] > > i can remove them from the group if you want, but I don't think I can > delete the projects entirely. > > Best regards > Tobias > > [1] https://storyboard-dev.openstack.org/#!/project_group/60 > > [2] Keeping puppet-openstack-integration (integration testing) and > puppet-openstack_spec_helper (helper for testing). >       These two usually has a lot of changes so would be good to > be able to track them. > > > On 08/16/2018 09:40 PM, Kendall Nelson wrote: >> Hey :) >> >> I created all the puppet openstack repos in the storyboard-dev >> envrionment and made a project group[1]. I am struggling a bit >> with finding all of your launchpad projects to perform the >> migrations through, can you share a list of all of them? >> >> -Kendall (diablo_rojo) >> >> [1] https://storyboard-dev.openstack.org/#!/project_group/60 >> >> >> On Wed, Aug 15, 2018 at 12:08 AM Tobias Urdin >> > wrote: >> >> Hello Kendall, >> >> Thanks for your reply, that sounds awesome! >> We can then dig around and see how everything looks when all >> project bugs are imported to stories. >> >> I see no issues with being able to move to Storyboard anytime >> soon if the feedback for >> moving is positive. >> >> Best regards >> >> Tobias >> >> >> On 08/14/2018 09:06 PM, Kendall Nelson wrote: >>> Hello! >>> >>> The error you hit can be resolved by adding launchpadlib to >>> your tox.ini if I recall correctly.. >>> >>> also, if you'd like, I can run a test migration of puppet's >>> launchpad projects into our storyboard-dev db (where I've >>> done a ton of other test migrations) if you want to see how >>> it looks/works with a larger db. Just let me know and I can >>> kick it off. >>> >>> As for a time to migrate, if you all are good with it, we >>> usually schedule for Friday's so there is even less >>> activity. Its a small project config change and then we just >>> need an infra core to kick off the script once the change >>> merges. >>> >>> -Kendall (diablo_rojo) >>> >>> On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin >>> > wrote: >>> >>> Hello all incredible Puppeters, >>> >>> I've tested setting up an Storyboard instance and test >>> migrated >>> puppet-ceph and it went without any issues there using >>> the documentation >>> [1] [2] >>> with just one minor issue during the SB setup [3]. >>> >>> My goal is that we will be able to swap to Storyboard >>> during the Stein >>> cycle but considering that we have a low activity on >>> bugs my opinion is that we could do this swap very >>> easily anything soon >>> as long as everybody is in favor of it. >>> >>> Please let me know what you think about moving to >>> Storyboard? >>> If everybody is in favor of it we can request a >>> migration to infra >>> according to documentation [2]. >>> >>> I will continue to test the import of all our project >>> while people are >>> collecting their thoughts and feedback :) >>> >>> Best regards >>> Tobias >>> >>> [1] >>> https://docs.openstack.org/infra/storyboard/install/development.html >>> [2] >>> https://docs.openstack.org/infra/storyboard/migration.html >>> [3] It failed with an error about launchpadlib not being >>> installed, >>> solved with `tox -e venv pip install launchpadlib` >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > If that's all good now, I can kick off test migrations but having a > complete list of the launchpad projects you maintain and use would be > super helpful so I don't miss any. Is there somewhere this is > documented? Or can you send me a list? > > -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Mon Aug 20 09:36:36 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 11:36:36 +0200 Subject: [Openstack-operators] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?) Message-ID: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> Hello, Note: before reading, this router was a regular router but was then disable, changed ha=true so it's now a L3 HA router, then it was enabled again. CC openstack-dev for help or feedback if it's a possible bug. I've been testing around with IPv6 and overall the experience has been positive but I've met some weird issue that I cannot put my head around. So this is a neutron L3 router with an outside interface with a ipv4 and ipv6 from the provider network and one inside interface for ipv4 and one inside interface for ipv6. The instances for some reason get's there default gateway as the ipv6 link-local (in fe80::/10) from the router with SLAAC and radvd. (1111.2222 is provider network, 1111.4444 is inside network, they are masked so don't pay attention to the number per se) *interfaces inside router:* 15: ha-9bde1bb1-bd: mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000     link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff     inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-9bde1bb1-bd        valid_lft forever preferred_lft forever     inet 169.254.0.1/24 scope global ha-9bde1bb1-bd        valid_lft forever preferred_lft forever     inet6 fe80::f816:3eff:fe05:8032/64 scope link        valid_lft forever preferred_lft forever 19: qg-86e465f6-33: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000     link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff     inet 1.2.3.4/22 scope global qg-86e465f6-33        valid_lft forever preferred_lft forever     inet6 1111:2222::f/64 scope global nodad        valid_lft forever preferred_lft forever     inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad        valid_lft forever preferred_lft forever 1168: qr-5be04815-68: mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000     link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff     inet 192.168.99.1/24 scope global qr-5be04815-68        valid_lft forever preferred_lft forever     inet6 fe80::f816:3eff:fec3:85bd/64 scope link        valid_lft forever preferred_lft forever 1169: qr-7fad6b1b-c9: mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000     link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff     inet6 1111:4444:0:1::1/64 scope global nodad        valid_lft forever preferred_lft forever     inet6 fe80::f816:3eff:fe66:dea8/64 scope link        valid_lft forever preferred_lft forever I get this error messages in dmesg on the network node: [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address 1111:4444:0:1:f816:3eff:fec3:85bd detected! [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address 1111:4444:0:1:f816:3eff:fe66:dea8 detected! [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address 1111:4444:0:1:f816:3eff:fec3:85bd detected! [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address 1111:4444:0:1:f816:3eff:fe66:dea8 detected! *radvd:* interface qr-7fad6b1b-c9 {    AdvSendAdvert on;    MinRtrAdvInterval 30;    MaxRtrAdvInterval 100;    AdvLinkMTU 1450;    RDNSS  2001:4860:4860::8888  {};    prefix 1111:4444:0:1::/64    {         AdvOnLink on;         AdvAutonomous on;    }; }; *inside instance:* ipv4 = 192.168.199.7 ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet checking the ipv6 routing table on my instance I either get no default gateway at all or I get a default gateway to a fe80::/10 link-local address. IIRC this worked before I changed the router to a L3 HA router. Appreciate any feedback! Best regards Tobias -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Mon Aug 20 09:37:57 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 11:37:57 +0200 Subject: [Openstack-operators] [neutron] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?) In-Reply-To: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> References: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> Message-ID: Forgot [neutron] tag. On 08/20/2018 11:36 AM, Tobias Urdin wrote: > Hello, > > Note: before reading, this router was a regular router but was then > disable, changed ha=true so it's now a L3 HA router, then it was > enabled again. > CC openstack-dev for help or feedback if it's a possible bug. > > I've been testing around with IPv6 and overall the experience has been > positive but I've met some weird issue that I cannot put my head around. > So this is a neutron L3 router with an outside interface with a ipv4 > and ipv6 from the provider network and one inside interface for ipv4 > and one inside interface for ipv6. > > The instances for some reason get's there default gateway as the ipv6 > link-local (in fe80::/10) from the router with SLAAC and radvd. > > (1111.2222 is provider network, 1111.4444 is inside network, they are > masked so don't pay attention to the number per se) > > *interfaces inside router:* > 15: ha-9bde1bb1-bd: mtu 1450 qdisc > noqueue state UNKNOWN group default qlen 1000 >     link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff >     inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-9bde1bb1-bd >        valid_lft forever preferred_lft forever >     inet 169.254.0.1/24 scope global ha-9bde1bb1-bd >        valid_lft forever preferred_lft forever >     inet6 fe80::f816:3eff:fe05:8032/64 scope link >        valid_lft forever preferred_lft forever > 19: qg-86e465f6-33: mtu 1500 qdisc > noqueue state UNKNOWN group default qlen 1000 >     link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff >     inet 1.2.3.4/22 scope global qg-86e465f6-33 >        valid_lft forever preferred_lft forever >     inet6 1111:2222::f/64 scope global nodad >        valid_lft forever preferred_lft forever >     inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad >        valid_lft forever preferred_lft forever > 1168: qr-5be04815-68: mtu 1450 qdisc > noqueue state UNKNOWN group default qlen 1000 >     link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff >     inet 192.168.99.1/24 scope global qr-5be04815-68 >        valid_lft forever preferred_lft forever >     inet6 fe80::f816:3eff:fec3:85bd/64 scope link >        valid_lft forever preferred_lft forever > 1169: qr-7fad6b1b-c9: mtu 1450 qdisc > noqueue state UNKNOWN group default qlen 1000 >     link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff >     inet6 1111:4444:0:1::1/64 scope global nodad >        valid_lft forever preferred_lft forever >     inet6 fe80::f816:3eff:fe66:dea8/64 scope link >        valid_lft forever preferred_lft forever > > I get this error messages in dmesg on the network node: > [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fec3:85bd detected! > [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fe66:dea8 detected! > [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fec3:85bd detected! > [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fe66:dea8 detected! > > *radvd:* > interface qr-7fad6b1b-c9 > { >    AdvSendAdvert on; >    MinRtrAdvInterval 30; >    MaxRtrAdvInterval 100; > >    AdvLinkMTU 1450; > >    RDNSS  2001:4860:4860::8888  {}; > >    prefix 1111:4444:0:1::/64 >    { >         AdvOnLink on; >         AdvAutonomous on; >    }; > }; > > *inside instance:* > ipv4 = 192.168.199.7 > ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) > > I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. > I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet > > checking the ipv6 routing table on my instance I either get no default > gateway at all or I get a default gateway to a fe80::/10 link-local > address. > IIRC this worked before I changed the router to a L3 HA router. > > Appreciate any feedback! > > Best regards > Tobias -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Mon Aug 20 09:50:44 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 11:50:44 +0200 Subject: [Openstack-operators] [neutron] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?) In-Reply-To: References: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> Message-ID: <4e1f27a4-cd70-4ad4-5249-20b18e1dab76@binero.se> Ok, so the issue here seems to be that I have a L3 HA router with SLAAC, both the active and standby router will configure the SLAAC obtained address causing a conflict since both side share the same MAC address. Is there any workaround for this? Should SLAAC even be enabled for interfaces on the standby router? Best regards Tobias On 08/20/2018 11:37 AM, Tobias Urdin wrote: > Forgot [neutron] tag. > > On 08/20/2018 11:36 AM, Tobias Urdin wrote: >> Hello, >> >> Note: before reading, this router was a regular router but was then >> disable, changed ha=true so it's now a L3 HA router, then it was >> enabled again. >> CC openstack-dev for help or feedback if it's a possible bug. >> >> I've been testing around with IPv6 and overall the experience has >> been positive but I've met some weird issue that I cannot put my head >> around. >> So this is a neutron L3 router with an outside interface with a ipv4 >> and ipv6 from the provider network and one inside interface for ipv4 >> and one inside interface for ipv6. >> >> The instances for some reason get's there default gateway as the ipv6 >> link-local (in fe80::/10) from the router with SLAAC and radvd. >> >> (1111.2222 is provider network, 1111.4444 is inside network, they are >> masked so don't pay attention to the number per se) >> >> *interfaces inside router:* >> 15: ha-9bde1bb1-bd: mtu 1450 qdisc >> noqueue state UNKNOWN group default qlen 1000 >>     link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff >>     inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-9bde1bb1-bd >>        valid_lft forever preferred_lft forever >>     inet 169.254.0.1/24 scope global ha-9bde1bb1-bd >>        valid_lft forever preferred_lft forever >>     inet6 fe80::f816:3eff:fe05:8032/64 scope link >>        valid_lft forever preferred_lft forever >> 19: qg-86e465f6-33: mtu 1500 qdisc >> noqueue state UNKNOWN group default qlen 1000 >>     link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff >>     inet 1.2.3.4/22 scope global qg-86e465f6-33 >>        valid_lft forever preferred_lft forever >>     inet6 1111:2222::f/64 scope global nodad >>        valid_lft forever preferred_lft forever >>     inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad >>        valid_lft forever preferred_lft forever >> 1168: qr-5be04815-68: mtu 1450 >> qdisc noqueue state UNKNOWN group default qlen 1000 >>     link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff >>     inet 192.168.99.1/24 scope global qr-5be04815-68 >>        valid_lft forever preferred_lft forever >>     inet6 fe80::f816:3eff:fec3:85bd/64 scope link >>        valid_lft forever preferred_lft forever >> 1169: qr-7fad6b1b-c9: mtu 1450 >> qdisc noqueue state UNKNOWN group default qlen 1000 >>     link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff >>     inet6 1111:4444:0:1::1/64 scope global nodad >>        valid_lft forever preferred_lft forever >>     inet6 fe80::f816:3eff:fe66:dea8/64 scope link >>        valid_lft forever preferred_lft forever >> >> I get this error messages in dmesg on the network node: >> [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address >> 1111:4444:0:1:f816:3eff:fec3:85bd detected! >> [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >> [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address >> 1111:4444:0:1:f816:3eff:fec3:85bd detected! >> [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >> >> *radvd:* >> interface qr-7fad6b1b-c9 >> { >>    AdvSendAdvert on; >>    MinRtrAdvInterval 30; >>    MaxRtrAdvInterval 100; >> >>    AdvLinkMTU 1450; >> >>    RDNSS  2001:4860:4860::8888  {}; >> >>    prefix 1111:4444:0:1::/64 >>    { >>         AdvOnLink on; >>         AdvAutonomous on; >>    }; >> }; >> >> *inside instance:* >> ipv4 = 192.168.199.7 >> ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) >> >> I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. >> I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet >> >> checking the ipv6 routing table on my instance I either get no >> default gateway at all or I get a default gateway to a fe80::/10 >> link-local address. >> IIRC this worked before I changed the router to a L3 HA router. >> >> Appreciate any feedback! >> >> Best regards >> Tobias > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Mon Aug 20 09:58:07 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 11:58:07 +0200 Subject: [Openstack-operators] [neutron] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?) In-Reply-To: <4e1f27a4-cd70-4ad4-5249-20b18e1dab76@binero.se> References: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> <4e1f27a4-cd70-4ad4-5249-20b18e1dab76@binero.se> Message-ID: Continuing forward, these patches should've fixed that https://review.openstack.org/#/q/topic:bug/1667756+(status:open+OR+status:merged) I'm on Queens. The two inside interfaces on the backup router: [root at controller2 ~]# ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat /proc/sys/net/ipv6/conf/qr-7fad6b1b-c9/accept_ra 1 [root at controller2 ~]# ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat /proc/sys/net/ipv6/conf/qr-5be04815-68/accept_ra 1 Perhaps the accept_ra patches does not apply for enable/disable or routers changing from a normal router to a L3 HA router? Best regards On 08/20/2018 11:50 AM, Tobias Urdin wrote: > Ok, so the issue here seems to be that I have a L3 HA router with > SLAAC, both the active and standby router will > configure the SLAAC obtained address causing a conflict since both > side share the same MAC address. > > Is there any workaround for this? Should SLAAC even be enabled for > interfaces on the standby router? > > Best regards > Tobias > > On 08/20/2018 11:37 AM, Tobias Urdin wrote: >> Forgot [neutron] tag. >> >> On 08/20/2018 11:36 AM, Tobias Urdin wrote: >>> Hello, >>> >>> Note: before reading, this router was a regular router but was then >>> disable, changed ha=true so it's now a L3 HA router, then it was >>> enabled again. >>> CC openstack-dev for help or feedback if it's a possible bug. >>> >>> I've been testing around with IPv6 and overall the experience has >>> been positive but I've met some weird issue that I cannot put my >>> head around. >>> So this is a neutron L3 router with an outside interface with a ipv4 >>> and ipv6 from the provider network and one inside interface for ipv4 >>> and one inside interface for ipv6. >>> >>> The instances for some reason get's there default gateway as the >>> ipv6 link-local (in fe80::/10) from the router with SLAAC and radvd. >>> >>> (1111.2222 is provider network, 1111.4444 is inside network, they >>> are masked so don't pay attention to the number per se) >>> >>> *interfaces inside router:* >>> 15: ha-9bde1bb1-bd: mtu 1450 qdisc >>> noqueue state UNKNOWN group default qlen 1000 >>>     link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff >>>     inet 169.254.192.7/18 brd 169.254.255.255 scope global >>> ha-9bde1bb1-bd >>>        valid_lft forever preferred_lft forever >>>     inet 169.254.0.1/24 scope global ha-9bde1bb1-bd >>>        valid_lft forever preferred_lft forever >>>     inet6 fe80::f816:3eff:fe05:8032/64 scope link >>>        valid_lft forever preferred_lft forever >>> 19: qg-86e465f6-33: mtu 1500 qdisc >>> noqueue state UNKNOWN group default qlen 1000 >>>     link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff >>>     inet 1.2.3.4/22 scope global qg-86e465f6-33 >>>        valid_lft forever preferred_lft forever >>>     inet6 1111:2222::f/64 scope global nodad >>>        valid_lft forever preferred_lft forever >>>     inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad >>>        valid_lft forever preferred_lft forever >>> 1168: qr-5be04815-68: mtu 1450 >>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>     link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff >>>     inet 192.168.99.1/24 scope global qr-5be04815-68 >>>        valid_lft forever preferred_lft forever >>>     inet6 fe80::f816:3eff:fec3:85bd/64 scope link >>>        valid_lft forever preferred_lft forever >>> 1169: qr-7fad6b1b-c9: mtu 1450 >>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>     link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff >>>     inet6 1111:4444:0:1::1/64 scope global nodad >>>        valid_lft forever preferred_lft forever >>>     inet6 fe80::f816:3eff:fe66:dea8/64 scope link >>>        valid_lft forever preferred_lft forever >>> >>> I get this error messages in dmesg on the network node: >>> [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address >>> 1111:4444:0:1:f816:3eff:fec3:85bd detected! >>> [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >>> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >>> [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address >>> 1111:4444:0:1:f816:3eff:fec3:85bd detected! >>> [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >>> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >>> >>> *radvd:* >>> interface qr-7fad6b1b-c9 >>> { >>>    AdvSendAdvert on; >>>    MinRtrAdvInterval 30; >>>    MaxRtrAdvInterval 100; >>> >>>    AdvLinkMTU 1450; >>> >>>    RDNSS  2001:4860:4860::8888  {}; >>> >>>    prefix 1111:4444:0:1::/64 >>>    { >>>         AdvOnLink on; >>>         AdvAutonomous on; >>>    }; >>> }; >>> >>> *inside instance:* >>> ipv4 = 192.168.199.7 >>> ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) >>> >>> I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. >>> I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet >>> >>> checking the ipv6 routing table on my instance I either get no >>> default gateway at all or I get a default gateway to a fe80::/10 >>> link-local address. >>> IIRC this worked before I changed the router to a L3 HA router. >>> >>> Appreciate any feedback! >>> >>> Best regards >>> Tobias >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Mon Aug 20 10:06:26 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Mon, 20 Aug 2018 12:06:26 +0200 Subject: [Openstack-operators] [neutron] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?) In-Reply-To: References: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> <4e1f27a4-cd70-4ad4-5249-20b18e1dab76@binero.se> Message-ID: <02ac47b0-e96a-7916-3275-665b10d76d1d@binero.se> When I removed those ips and set accept_ra to 0 on the backup router: ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w net.ipv6.conf.qr-7fad6b1b-c9.accept_ra=0 ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w net.ipv6.conf.qr-5be04815-68.accept_ra=0 ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip a l ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip addr del 1111:4444:0:1:f816:3eff:fe66:dea8/64 dev qr-7fad6b1b-c9 ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip addr del 1111:4444:0:1:f816:3eff:fec3:85bd/64 dev qr-5be04815-68 And enabled ipv6 forwarding on the active router: ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w net.ipv6.conf.all.forwarding=1 It started working again, I think this is an issue when disabling a router, change it to L3 HA and enable it again, so a bug? Best regards Tobias On 08/20/2018 11:58 AM, Tobias Urdin wrote: > Continuing forward, these patches should've fixed that > https://review.openstack.org/#/q/topic:bug/1667756+(status:open+OR+status:merged) > I'm on Queens. > > The two inside interfaces on the backup router: > [root at controller2 ~]# ip netns exec > qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat > /proc/sys/net/ipv6/conf/qr-7fad6b1b-c9/accept_ra > 1 > [root at controller2 ~]# ip netns exec > qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat > /proc/sys/net/ipv6/conf/qr-5be04815-68/accept_ra > 1 > > Perhaps the accept_ra patches does not apply for enable/disable or > routers changing from a normal router to a L3 HA router? > Best regards > > On 08/20/2018 11:50 AM, Tobias Urdin wrote: >> Ok, so the issue here seems to be that I have a L3 HA router with >> SLAAC, both the active and standby router will >> configure the SLAAC obtained address causing a conflict since both >> side share the same MAC address. >> >> Is there any workaround for this? Should SLAAC even be enabled for >> interfaces on the standby router? >> >> Best regards >> Tobias >> >> On 08/20/2018 11:37 AM, Tobias Urdin wrote: >>> Forgot [neutron] tag. >>> >>> On 08/20/2018 11:36 AM, Tobias Urdin wrote: >>>> Hello, >>>> >>>> Note: before reading, this router was a regular router but was then >>>> disable, changed ha=true so it's now a L3 HA router, then it was >>>> enabled again. >>>> CC openstack-dev for help or feedback if it's a possible bug. >>>> >>>> I've been testing around with IPv6 and overall the experience has >>>> been positive but I've met some weird issue that I cannot put my >>>> head around. >>>> So this is a neutron L3 router with an outside interface with a >>>> ipv4 and ipv6 from the provider network and one inside interface >>>> for ipv4 and one inside interface for ipv6. >>>> >>>> The instances for some reason get's there default gateway as the >>>> ipv6 link-local (in fe80::/10) from the router with SLAAC and radvd. >>>> >>>> (1111.2222 is provider network, 1111.4444 is inside network, they >>>> are masked so don't pay attention to the number per se) >>>> >>>> *interfaces inside router:* >>>> 15: ha-9bde1bb1-bd: mtu 1450 >>>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>>     link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff >>>>     inet 169.254.192.7/18 brd 169.254.255.255 scope global >>>> ha-9bde1bb1-bd >>>>        valid_lft forever preferred_lft forever >>>>     inet 169.254.0.1/24 scope global ha-9bde1bb1-bd >>>>        valid_lft forever preferred_lft forever >>>>     inet6 fe80::f816:3eff:fe05:8032/64 scope link >>>>        valid_lft forever preferred_lft forever >>>> 19: qg-86e465f6-33: mtu 1500 >>>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>>     link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff >>>>     inet 1.2.3.4/22 scope global qg-86e465f6-33 >>>>        valid_lft forever preferred_lft forever >>>>     inet6 1111:2222::f/64 scope global nodad >>>>        valid_lft forever preferred_lft forever >>>>     inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad >>>>        valid_lft forever preferred_lft forever >>>> 1168: qr-5be04815-68: mtu 1450 >>>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>>     link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff >>>>     inet 192.168.99.1/24 scope global qr-5be04815-68 >>>>        valid_lft forever preferred_lft forever >>>>     inet6 fe80::f816:3eff:fec3:85bd/64 scope link >>>>        valid_lft forever preferred_lft forever >>>> 1169: qr-7fad6b1b-c9: mtu 1450 >>>> qdisc noqueue state UNKNOWN group default qlen 1000 >>>>     link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff >>>>     inet6 1111:4444:0:1::1/64 scope global nodad >>>>        valid_lft forever preferred_lft forever >>>>     inet6 fe80::f816:3eff:fe66:dea8/64 scope link >>>>        valid_lft forever preferred_lft forever >>>> >>>> I get this error messages in dmesg on the network node: >>>> [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address >>>> 1111:4444:0:1:f816:3eff:fec3:85bd detected! >>>> [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >>>> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >>>> [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address >>>> 1111:4444:0:1:f816:3eff:fec3:85bd detected! >>>> [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address >>>> 1111:4444:0:1:f816:3eff:fe66:dea8 detected! >>>> >>>> *radvd:* >>>> interface qr-7fad6b1b-c9 >>>> { >>>>    AdvSendAdvert on; >>>>    MinRtrAdvInterval 30; >>>>    MaxRtrAdvInterval 100; >>>> >>>>    AdvLinkMTU 1450; >>>> >>>>    RDNSS  2001:4860:4860::8888  {}; >>>> >>>>    prefix 1111:4444:0:1::/64 >>>>    { >>>>         AdvOnLink on; >>>>         AdvAutonomous on; >>>>    }; >>>> }; >>>> >>>> *inside instance:* >>>> ipv4 = 192.168.199.7 >>>> ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) >>>> >>>> I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. >>>> I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet >>>> >>>> checking the ipv6 routing table on my instance I either get no >>>> default gateway at all or I get a default gateway to a fe80::/10 >>>> link-local address. >>>> IIRC this worked before I changed the router to a L3 HA router. >>>> >>>> Appreciate any feedback! >>>> >>>> Best regards >>>> Tobias >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laszlo.budai at gmail.com Mon Aug 20 10:24:52 2018 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Mon, 20 Aug 2018 13:24:52 +0300 Subject: [Openstack-operators] [openstack-ansible] configuration file override In-Reply-To: <380a3446-0f95-c63c-9224-230fa85c77f7@gmail.com> References: <380a3446-0f95-c63c-9224-230fa85c77f7@gmail.com> Message-ID: Dear all, Openstack-ansible (OSA) allows us to override parameters in the configuration files as described here: https://docs.openstack.org/project-deploy-guide/openstack-ansible/draft/app-advanced-config-override.html there is the following statement: "You can also apply overrides on a per-host basis with the following configuration in the /etc/openstack_deploy/openstack_user_config.yml file: compute_hosts: 900089-compute001: ip: 192.0.2.10 host_vars: nova_nova_conf_overrides: DEFAULT: remove_unused_original_minimum_age_seconds: 43200 libvirt: cpu_mode: host-model disk_cachemodes: file=directsync,block=none database: idle_timeout: 300 max_pool_size: 10 " In this example the override is part of a compute host definition and there it is in the host_vars section (compute_hosts -> 900089-compute001 -> host_vars -> override). Is it possible to apply such an override for all the compute hosts by not using the hostname? For instance something like: " compute_hosts: nova_nova_conf_overrides: DEFAULT: remove_unused_original_minimum_age_seconds: 43200 " would this be correct? Thank you, Laszlo From assaf at redhat.com Mon Aug 20 11:49:36 2018 From: assaf at redhat.com (Assaf Muller) Date: Mon, 20 Aug 2018 07:49:36 -0400 Subject: [Openstack-operators] [openstack-dev] [neutron] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?) In-Reply-To: <02ac47b0-e96a-7916-3275-665b10d76d1d@binero.se> References: <07d859f1-034e-d3b4-7fc0-0c7b087056a4@binero.se> <4e1f27a4-cd70-4ad4-5249-20b18e1dab76@binero.se> <02ac47b0-e96a-7916-3275-665b10d76d1d@binero.se> Message-ID: On Mon, Aug 20, 2018 at 6:06 AM, Tobias Urdin wrote: > When I removed those ips and set accept_ra to 0 on the backup router: > > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w > net.ipv6.conf.qr-7fad6b1b-c9.accept_ra=0 > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w > net.ipv6.conf.qr-5be04815-68.accept_ra=0 > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip a l > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip addr del > 1111:4444:0:1:f816:3eff:fe66:dea8/64 dev qr-7fad6b1b-c9 > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip addr del > 1111:4444:0:1:f816:3eff:fec3:85bd/64 dev qr-5be04815-68 > > And enabled ipv6 forwarding on the active router: > ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w > net.ipv6.conf.all.forwarding=1 > > It started working again, I think this is an issue when disabling a router, > change it to L3 HA and enable it again, so a bug? Quite possibly. Are you able to find a minimal reproducer? > > Best regards > Tobias > > > On 08/20/2018 11:58 AM, Tobias Urdin wrote: > > Continuing forward, these patches should've fixed that > https://review.openstack.org/#/q/topic:bug/1667756+(status:open+OR+status:merged) > I'm on Queens. > > The two inside interfaces on the backup router: > [root at controller2 ~]# ip netns exec > qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat > /proc/sys/net/ipv6/conf/qr-7fad6b1b-c9/accept_ra > 1 > [root at controller2 ~]# ip netns exec > qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat > /proc/sys/net/ipv6/conf/qr-5be04815-68/accept_ra > 1 > > Perhaps the accept_ra patches does not apply for enable/disable or routers > changing from a normal router to a L3 HA router? > Best regards > > On 08/20/2018 11:50 AM, Tobias Urdin wrote: > > Ok, so the issue here seems to be that I have a L3 HA router with SLAAC, > both the active and standby router will > configure the SLAAC obtained address causing a conflict since both side > share the same MAC address. > > Is there any workaround for this? Should SLAAC even be enabled for > interfaces on the standby router? > > Best regards > Tobias > > On 08/20/2018 11:37 AM, Tobias Urdin wrote: > > Forgot [neutron] tag. > > On 08/20/2018 11:36 AM, Tobias Urdin wrote: > > Hello, > > Note: before reading, this router was a regular router but was then disable, > changed ha=true so it's now a L3 HA router, then it was enabled again. > CC openstack-dev for help or feedback if it's a possible bug. > > I've been testing around with IPv6 and overall the experience has been > positive but I've met some weird issue that I cannot put my head around. > So this is a neutron L3 router with an outside interface with a ipv4 and > ipv6 from the provider network and one inside interface for ipv4 and one > inside interface for ipv6. > > The instances for some reason get's there default gateway as the ipv6 > link-local (in fe80::/10) from the router with SLAAC and radvd. > > (1111.2222 is provider network, 1111.4444 is inside network, they are masked > so don't pay attention to the number per se) > > interfaces inside router: > 15: ha-9bde1bb1-bd: mtu 1450 qdisc noqueue > state UNKNOWN group default qlen 1000 > link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff > inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-9bde1bb1-bd > valid_lft forever preferred_lft forever > inet 169.254.0.1/24 scope global ha-9bde1bb1-bd > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe05:8032/64 scope link > valid_lft forever preferred_lft forever > 19: qg-86e465f6-33: mtu 1500 qdisc noqueue > state UNKNOWN group default qlen 1000 > link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff > inet 1.2.3.4/22 scope global qg-86e465f6-33 > valid_lft forever preferred_lft forever > inet6 1111:2222::f/64 scope global nodad > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad > valid_lft forever preferred_lft forever > 1168: qr-5be04815-68: mtu 1450 qdisc > noqueue state UNKNOWN group default qlen 1000 > link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff > inet 192.168.99.1/24 scope global qr-5be04815-68 > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fec3:85bd/64 scope link > valid_lft forever preferred_lft forever > 1169: qr-7fad6b1b-c9: mtu 1450 qdisc > noqueue state UNKNOWN group default qlen 1000 > link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff > inet6 1111:4444:0:1::1/64 scope global nodad > valid_lft forever preferred_lft forever > inet6 fe80::f816:3eff:fe66:dea8/64 scope link > valid_lft forever preferred_lft forever > > I get this error messages in dmesg on the network node: > [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fec3:85bd detected! > [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fe66:dea8 detected! > [581142.869939] IPv6: qr-5be04815-68: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fec3:85bd detected! > [581143.182371] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address > 1111:4444:0:1:f816:3eff:fe66:dea8 detected! > > radvd: > interface qr-7fad6b1b-c9 > { > AdvSendAdvert on; > MinRtrAdvInterval 30; > MaxRtrAdvInterval 100; > > AdvLinkMTU 1450; > > RDNSS 2001:4860:4860::8888 {}; > > prefix 1111:4444:0:1::/64 > { > AdvOnLink on; > AdvAutonomous on; > }; > }; > > inside instance: > ipv4 = 192.168.199.7 > ipv6 = 1111:4444:0:1:f816:3eff:fe29:723d/64 (from radvd SLAAC) > > I can ping ipv4 gateway 192.168.199.1 and internet over ipv4. > I can ping ipv6 gateway 1111:4444:0:1::1 but I can't ping the internet > > checking the ipv6 routing table on my instance I either get no default > gateway at all or I get a default gateway to a fe80::/10 link-local address. > IIRC this worked before I changed the router to a L3 HA router. > > Appreciate any feedback! > > Best regards > Tobias > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jean-philippe at evrard.me Mon Aug 20 14:28:47 2018 From: jean-philippe at evrard.me (=?utf-8?q?jean-philippe=40evrard=2Eme?=) Date: Mon, 20 Aug 2018 16:28:47 +0200 Subject: [Openstack-operators] =?utf-8?b?Pz09P3V0Zi04P3E/ICBbb3BlbnN0YWNr?= =?utf-8?q?-ansible=5D_configuration_file_override?= In-Reply-To: Message-ID: <724c-5b7ad000-7-1b636@42327300> > In this example the override is part of a compute host definition and there it is in the host_vars section (compute_hosts -> 900089-compute001 -> host_vars -> override). Is it possible to apply such an override for all the compute hosts by not using the hostname? For instance something like: > > " compute_hosts: > nova_nova_conf_overrides: > DEFAULT: > remove_unused_original_minimum_age_seconds: 43200 > " You can set nova_nova_conf_overrides: into a file named /etc/openstack_deploy/user_variables.yml and it will apply on all your nodes. If you want to be more surgical, you'd have to give more details about your OpenStack-Ansible version and what you're trying to achieve. Regards, Jean-Philippe Evrard (evrardjp) From laszlo.budai at gmail.com Mon Aug 20 15:11:03 2018 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Mon, 20 Aug 2018 18:11:03 +0300 Subject: [Openstack-operators] [openstack-ansible] configuration file override In-Reply-To: <724c-5b7ad000-7-1b636@42327300> References: <724c-5b7ad000-7-1b636@42327300> Message-ID: Hello Jean-Philippe, thank you for your answer. for the version I have this: root at ansible-ws1:~# openstack-ansible --version Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e @/etc/openstack_deploy/user_variables.yml " ansible-playbook 2.4.4.0 config file = /root/.ansible.cfg configured module search path = ['/etc/ansible/roles/plugins/library'] ansible python module location = /opt/ansible-runtime/lib/python3.5/site-packages/ansible executable location = /opt/ansible-runtime/bin/ansible-playbook python version = 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609] EXIT NOTICE [Playbook execution success] ************************************** =============================================================================== root at ansible-ws1:~# I'm trying to install the queens release of openstack. My problem is that the servers that I'm using have their name defined as dcx-cy-blz (datacenter - chassis - blade). After installing openstack these names are reflected as the names of the hosts in nova. We would like to have our compute nodes referenced as compute1, compute2 .... We found that the "host" parameter of the nova.conf can be used for this. We've set an override for this in the user variables: nova_nova_conf_overrides: DEFAULT: host: "{{ inventory_hostname }}.ourdomain" The result was that the name in nova became OK, but when nova-compute was trying to start instances it was failing as interfaces were not created in the ovs br-int. It turned out that we need the same host setting for neutron as well ... so I decided to add a similar entry for the neutron.conf as well, but then in the neutron,conf on the neutron-server we had a name of the form "network1_neutron_server_container-HASH.ourdomain". And the neutron server was complaining that this is not a valid hostname (I suppose due to the underscores that appears in the name ... ). So right now I'm looking for a way to set that host parameter in the neutron.conf and nova.conf only on the compute nodes. Thank you, Laszlo On 20.08.2018 17:28, jean-philippe at evrard.me wrote: >> In this example the override is part of a compute host definition and there it is in the host_vars section (compute_hosts -> 900089-compute001 -> host_vars -> override). Is it possible to apply such an override for all the compute hosts by not using the hostname? For instance something like: >> >> " compute_hosts: >> nova_nova_conf_overrides: >> DEFAULT: >> remove_unused_original_minimum_age_seconds: 43200 >> " > > You can set nova_nova_conf_overrides: into a file named /etc/openstack_deploy/user_variables.yml and it will apply on all your nodes. > > If you want to be more surgical, you'd have to give more details about your OpenStack-Ansible version and what you're trying to achieve. > > Regards, > Jean-Philippe Evrard (evrardjp) > > From mbooth at redhat.com Mon Aug 20 15:29:52 2018 From: mbooth at redhat.com (Matthew Booth) Date: Mon, 20 Aug 2018 16:29:52 +0100 Subject: [Openstack-operators] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) Message-ID: For those who aren't familiar with it, nova's volume-update (also called swap volume by nova devs) is the nova part of the implementation of cinder's live migration (also called retype). Volume-update is essentially an internal cinder<->nova api, but as that's not a thing it's also unfortunately exposed to users. Some users have found it and are using it, but because it's essentially an internal cinder<->nova api it breaks pretty easily if you don't treat it like a special snowflake. It looks like we've finally found a way it's broken for non-cinder callers that we can't fix, even with a dirty hack. volume-update essentially does a live copy of the data on volume to volume, then seamlessly swaps the attachment to from to . The guest OS on will not notice anything at all as the hypervisor swaps the storage backing an attached volume underneath it. When called by cinder, as intended, cinder does some post-operation cleanup such that is deleted and inherits the same volume_id; that is effectively becomes . When called any other way, however, this cleanup doesn't happen, which breaks a bunch of assumptions. One of these is that a disk's serial number is the same as the attached volume_id. Disk serial number, in KVM at least, is immutable, so can't be updated during volume-update. This is fine if we were called via cinder, because the cinder cleanup means the volume_id stays the same. If called any other way, however, they no longer match, at least until a hard reboot when it will be reset to the new volume_id. It turns out this breaks live migration, but probably other things too. We can't think of a workaround. I wondered why users would want to do this anyway. It turns out that sometimes cinder won't let you migrate a volume, but nova volume-update doesn't do those checks (as they're specific to cinder internals, none of nova's business, and duplicating them would be fragile, so we're not adding them!). Specifically we know that cinder won't let you migrate a volume with snapshots. There may be other reasons. If cinder won't let you migrate your volume, you can still move your data by using nova's volume-update, even though you'll end up with a new volume on the destination, and a slightly broken instance. Apparently the former is a trade-off worth making, but the latter has been reported as a bug. I'd like to make it very clear that nova's volume-update, isn't expected to work correctly except when called by cinder. Specifically there was a proposal that we disable volume-update from non-cinder callers in some way, possibly by asserting volume state that can only be set by cinder. However, I'm also very aware that users are calling volume-update because it fills a need, and we don't want to trap data that wasn't previously trapped. Firstly, is anybody aware of any other reasons to use nova's volume-update directly? Secondly, is there any reason why we shouldn't just document then you have to delete snapshots before doing a volume migration? Hopefully some cinder folks or operators can chime in to let me know how to back them up or somehow make them independent before doing this, at which point the volume itself should be migratable? If we can establish that there's an acceptable alternative to calling volume-update directly for all use-cases we're aware of, I'm going to propose heading off this class of bug by disabling it for non-cinder callers. Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From mrhillsman at gmail.com Mon Aug 20 15:46:28 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 20 Aug 2018 10:46:28 -0500 Subject: [Openstack-operators] OpenStack PTG! Message-ID: Hi everyone, Friendly reminder that ticket prices will increase to USD $599 on August 22 at 11:59pm PT (August 23 at 6:59 UTC) for the PTG. So purchase your tickets before the price increases. Register here: https://denver2018ptg.eventbrite.com < https://denver2018ptg.eventbrite.com/> Also the discounted hotel block is filling up if it has not already and the last date to book in the hotel block is TODAY! so book now here: www.openstack.org/ptg PTG questions?, please email ptg at openstack.org -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Mon Aug 20 16:08:39 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 20 Aug 2018 11:08:39 -0500 Subject: [Openstack-operators] [openstack-community] OpenStack PTG! In-Reply-To: <78A9DBD7-5474-448A-8D46-C1DA974EF35D@cisco.com> References: <78A9DBD7-5474-448A-8D46-C1DA974EF35D@cisco.com> Message-ID: Gary, >From the description there that sounds like Summit vs PTG. Thanks, Amy (spotz) On Mon, Aug 20, 2018 at 11:02 AM, Gary Kevorkian (gkevorki) < gkevorki at cisco.com> wrote: > The Early Bird deadline was 21/22 August. But, according to the Eventbrite > site, Early Bird pricing has been extended to 28/29 August. > > > > > > > > *Can someone from the OpenStack Foundation confirm, please?* > > > > Thanks! > > GK > > > > > > > > [image: > ttp://www.cisco.com/c/dam/m/en_us/signaturetool/images/banners/standard/09_standard_graphic.png] > > > Gary Kevorkian > > EVENT MARKETING MANAGER > > gkevorki at cisco.com > > Tel: +3237912058 > > [image: > ttp://www.cisco.com/c/dam/m/en_us/signaturetool/images/twitter-16x16.png] > > > [image: > ttp://www.cisco.com/c/dam/m/en_us/signaturetool/images/icons/webex.png] > [image: > ttp://www.cisco.com/c/dam/m/en_us/signaturetool/images/icons/sparks.png] > [image: > ttp://www.cisco.com/c/dam/m/en_us/signaturetool/images/icons/jabber.png] > > > Cisco Systems, Inc. > > United States > > Cisco.com > > [image: ttp://www.cisco.com/assets/swa/img/thinkbeforeyouprint.gif] > > Think before you print. > > This email may contain confidential and privileged material for the sole > use of the intended recipient. Any review, use, distribution or disclosure > by others is strictly prohibited. If you are not the intended recipient (or > authorized to receive for the recipient), please contact the sender by > reply email and delete all copies of this message. > > Please click here > for > Company Registration Information. > > > > > > *From: *Melvin Hillsman > *Date: *Monday, August 20, 2018 at 8:46 AM > *To: *OpenStack Operators , " > community at lists.openstack.org" , > user-committee > *Subject: *[openstack-community] OpenStack PTG! > > > > Hi everyone, > > Friendly reminder that ticket prices will increase to USD $599 on August > 22 at 11:59pm PT (August 23 at 6:59 UTC) for the PTG. So purchase your > tickets before the price increases. > > > > Register here: https://denver2018ptg.eventbrite.com < > https://denver2018ptg.eventbrite.com/> > > Also the discounted hotel block is filling up if it has not already and > the last date to book in the hotel block is TODAY! so book now here: > www.openstack.org/ptg > > PTG questions?, please email ptg at openstack.org openstack.org> > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > > mobile: (832) 264-2646 > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 3506 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 25276 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 3510 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 3405 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 33012 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 521 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image007.gif Type: image/gif Size: 134 bytes Desc: not available URL: From mvanwinkle at salesforce.com Mon Aug 20 16:39:38 2018 From: mvanwinkle at salesforce.com (Matt Van Winkle) Date: Mon, 20 Aug 2018 11:39:38 -0500 Subject: [Openstack-operators] Reminder to take the User Survey Message-ID: Hi everyone, The deadline for the 2018 OpenStack User Survey deadline is *tomorrow, August 21 at 11:59pm UTC. *The User Survey is your annual opportunity to provide direct feedback to the OpenStack community, so we can better understand your environment and needs. We send all feedback directly to the project teams who work to improve how we provide value to you. By completing a deployment in the User Survey, you qualify as an Active User Contributor (AUC) and will receive a discount for the Berlin Summit - only $300 USD! The survey will take less than 20 minutes, and there’s not much time left! Please your User Survey by *tomorrow*, *Tuesday, August 21 at 11:59pm UTC.* Get started now: https://www.openstack.org/user-survey Let me know if you have any questions. Thank you, VW -- Matt Van Winkle Senior Manager, Software Engineering | Salesforce -------------- next part -------------- An HTML attachment was scrubbed... URL: From weiler at soe.ucsc.edu Mon Aug 20 16:40:57 2018 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Mon, 20 Aug 2018 09:40:57 -0700 Subject: [Openstack-operators] Horizon Custom Logos (Queens, 13.0.1) Message-ID: <5B7AEF19.5010502@soe.ucsc.edu> Hi Y'all, I've been banging my head against a wall for days on this item and can't find anything via google on how to get around it - I am trying to install a custom logo onto my Horizon Dashboard front page (the splash page). I have my logo ready to go, logo-splash.png. I have tried following the instructions here on how to install a custom logo: https://docs.openstack.org/horizon/queens/admin/customize-configure.html But it simply doesn't work. It seems this stanza... #splash .login { background: #355796 url(../img/my_cloud_logo_medium.png) no-repeat center 35px; } ...doesn't actually replace the logo (which is logo-splash.svg), it only seems to put my file, logo-splash.png as the *background* to the .svg logo. And since the option there is "no-repeat center", it appears *behind* the svg logo and I can't see it. I played around with those options, removing "no-repeat" for example, and it dutifully shows my logo repeating in the background. But I need the default logo-splash.svg file to actually be gone and my logo to exist in it's place. Maybe I'm missing something simple? I'm restarting apache and memchached after every change I make when I was testing. And because the images directory is rebuilt every time I restart apache, I can't even copy in a custom logo-splash.svg file. Which wouldn't help anyway, as I want my .png file in there instead. I don't have the means to create a .svg file at this time. ;) Help! As a side note, I'm using the Queens distribution via RedHat. Many thanks in advance, erich From kendall at openstack.org Mon Aug 20 16:48:41 2018 From: kendall at openstack.org (Kendall Waters) Date: Mon, 20 Aug 2018 11:48:41 -0500 Subject: [Openstack-operators] PTG Registration Prices Increase This Week! Message-ID: <5CB43CCD-BDA1-46E1-9285-ABEAC4AB46D0@openstack.org> Hi everyone, If you haven't registered for the PTG in Denver yet, I'd recommend you do it today or tomorrow as the price will switch to last-minute pricing at the end of day on August 22 ! https://www.openstack.org/ptg Protip: There might still be a couple of rooms available in the PTG hotel, but our hotel block closes TODAY. So book now if you want to be at the center of the activity ! Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From kendall at openstack.org Mon Aug 20 17:17:45 2018 From: kendall at openstack.org (Kendall Waters) Date: Mon, 20 Aug 2018 12:17:45 -0500 Subject: [Openstack-operators] [openstack-community] OpenStack PTG! In-Reply-To: <78A9DBD7-5474-448A-8D46-C1DA974EF35D@cisco.com> References: <78A9DBD7-5474-448A-8D46-C1DA974EF35D@cisco.com> Message-ID: <52FD69B2-FA22-4FA7-B210-9AB629575E54@openstack.org> Hi Gary, Melvin was talking about the PTG registration. The Summit early bird deadline is August 28 at 11:59pm PT. Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org > On Aug 20, 2018, at 11:02 AM, Gary Kevorkian (gkevorki) wrote: > > The Early Bird deadline was 21/22 August. But, according to the Eventbrite site, Early Bird pricing has been extended to 28/29 August. > > > > > Can someone from the OpenStack Foundation confirm, please? > > Thanks! > GK > > > > > Gary Kevorkian > EVENT MARKETING MANAGER > gkevorki at cisco.com > Tel: +3237912058 > > > Cisco Systems, Inc. > United States > Cisco.com > > Think before you print. > This email may contain confidential and privileged material for the sole use of the intended recipient. Any review, use, distribution or disclosure by others is strictly prohibited. If you are not the intended recipient (or authorized to receive for the recipient), please contact the sender by reply email and delete all copies of this message. > Please click here for Company Registration Information. > > > From: Melvin Hillsman > > Date: Monday, August 20, 2018 at 8:46 AM > To: OpenStack Operators >, "community at lists.openstack.org " >, user-committee > > Subject: [openstack-community] OpenStack PTG! > > Hi everyone, > > Friendly reminder that ticket prices will increase to USD $599 on August 22 at 11:59pm PT (August 23 at 6:59 UTC) for the PTG. So purchase your tickets before the price increases. > > Register here: https://denver2018ptg.eventbrite.com > > > Also the discounted hotel block is filling up if it has not already and the last date to book in the hotel block is TODAY! so book now here: www.openstack.org/ptg > > > PTG questions?, please email ptg at openstack.org at openstack.org > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community -------------- next part -------------- An HTML attachment was scrubbed... URL: From kendall at openstack.org Mon Aug 20 17:32:55 2018 From: kendall at openstack.org (Kendall Waters) Date: Mon, 20 Aug 2018 12:32:55 -0500 Subject: [Openstack-operators] Early Bird Registration Deadline Extended to 8/28 - Berlin Summit Message-ID: <9F966B33-E6CE-4257-8ADB-50BF582230D0@openstack.org> Hi everyone, The OpenStack Summit Berlin schedule is now live and in order to give people some extra time to book tickets, we have decided to extend early bird registration. The NEW early bird registration deadline is August 28 at 11:59pm PT (August 29, 6:59 UTC). Register now before the price increases! Don’t miss out on sessions and workshops from organizations such as Oerlikon ManMade Fibers, Workday, CERN, Volkswagen, BMW and more. If you have any questions, please email summit at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Tue Aug 21 09:32:11 2018 From: jean-philippe at evrard.me (=?utf-8?q?jean-philippe=40evrard=2Eme?=) Date: Tue, 21 Aug 2018 11:32:11 +0200 Subject: [Openstack-operators] =?utf-8?b?Pz09P3V0Zi04P3E/ICBbb3BlbnN0YWNr?= =?utf-8?q?-ansible=5D_configuration_file_override?= In-Reply-To: Message-ID: <1de6-5b7bdc00-d-720a0a00@98043852> > My problem is that the servers that I'm using have their name defined as dcx-cy-blz (datacenter - chassis - blade). After installing openstack these names are reflected as the names of the hosts in nova. > We would like to have our compute nodes referenced as compute1, compute2 .... The simplest is then to use compute1, compute2, etc. in your /etc/openstack_deploy/openstack_user_config. You can still internally refer to dcx-cy-blz if you like, it will just not appear openstack or in the inventory. Regards, JP From laszlo.budai at gmail.com Tue Aug 21 09:48:30 2018 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Tue, 21 Aug 2018 12:48:30 +0300 Subject: [Openstack-operators] [openstack-ansible] configuration file override In-Reply-To: <1de6-5b7bdc00-d-720a0a00@98043852> References: <1de6-5b7bdc00-d-720a0a00@98043852> Message-ID: On 21.08.2018 12:32, jean-philippe at evrard.me wrote: >> My problem is that the servers that I'm using have their name defined as dcx-cy-blz (datacenter - chassis - blade). After installing openstack these names are reflected as the names of the hosts in nova. >> We would like to have our compute nodes referenced as compute1, compute2 .... > > The simplest is then to use compute1, compute2, etc. in your /etc/openstack_deploy/openstack_user_config. You can still internally refer to dcx-cy-blz if you like, it will just not appear openstack or in the inventory. > > Regards, > JP > > This is how I tried, but did not worked out .... in the /etc/openstack_deploy/openstack_user_config.yml I had: _compute_hosts: &compute_hosts compute1: ip: 10.210.201.40 host_vars: neutron_neutron_conf_overrides: DEFAULT: host: "{{ inventory_hostname }}.example.intra" nova_nova_conf_overrides: DEFAULT: host: "{{ inventory_hostname }}.example.intra" compute2: ip: 10.210.201.41 host_vars: neutron_neutron_conf_overrides: DEFAULT: host: "{{ inventory_hostname }}.example.intra" nova_nova_conf_overrides: DEFAULT: host: "{{ inventory_hostname }}.example.intra" What have done wrong? Thank you, Laszlo From jean-philippe at evrard.me Tue Aug 21 10:12:08 2018 From: jean-philippe at evrard.me (=?utf-8?q?jean-philippe=40evrard=2Eme?=) Date: Tue, 21 Aug 2018 12:12:08 +0200 Subject: [Openstack-operators] =?utf-8?b?Pz09P3V0Zi04P3E/ICBbb3BlbnN0YWNr?= =?utf-8?q?-ansible=5D_configuration_file_override?= In-Reply-To: Message-ID: <36b5-5b7be580-3-7da1fb00@71145723> On Tuesday, August 21, 2018 11:48 CEST, Budai Laszlo wrote: > On 21.08.2018 12:32, jean-philippe at evrard.me wrote: > >> My problem is that the servers that I'm using have their name defined as dcx-cy-blz (datacenter - chassis - blade). After installing openstack these names are reflected as the names of the hosts in nova. > >> We would like to have our compute nodes referenced as compute1, compute2 .... > > > > The simplest is then to use compute1, compute2, etc. in your /etc/openstack_deploy/openstack_user_config. You can still internally refer to dcx-cy-blz if you like, it will just not appear openstack or in the inventory. > > > > Regards, > > JP > > > > > > This is how I tried, but did not worked out .... in the /etc/openstack_deploy/openstack_user_config.yml I had: > > _compute_hosts: &compute_hosts > compute1: > ip: 10.210.201.40 > host_vars: > neutron_neutron_conf_overrides: > DEFAULT: > host: "{{ inventory_hostname }}.example.intra" > nova_nova_conf_overrides: > DEFAULT: > host: "{{ inventory_hostname }}.example.intra" Hello. I am not sure what &compute_hosts is, as this is just an extract of your file. On top of this, I am not sure these *_conf_overrides need to exist. if your hosts are named compute1, compute2, ... Maybe a cleanup of your environment and redeploy would help you? I am not sure to have enough information to answer you there. Best regards, Jean-Philippe Evrard (evrardjp) From lyarwood at redhat.com Tue Aug 21 10:36:28 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 21 Aug 2018 11:36:28 +0100 Subject: [Openstack-operators] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: References: Message-ID: <20180821103628.dk3ok76fdruwsaut@lyarwood.usersys.redhat.com> On 20-08-18 16:29:52, Matthew Booth wrote: > For those who aren't familiar with it, nova's volume-update (also > called swap volume by nova devs) is the nova part of the > implementation of cinder's live migration (also called retype). > Volume-update is essentially an internal cinder<->nova api, but as > that's not a thing it's also unfortunately exposed to users. Some > users have found it and are using it, but because it's essentially an > internal cinder<->nova api it breaks pretty easily if you don't treat > it like a special snowflake. It looks like we've finally found a way > it's broken for non-cinder callers that we can't fix, even with a > dirty hack. > > volume-update essentially does a live copy of the > data on volume to volume, then seamlessly swaps the > attachment to from to . The guest OS on > will not notice anything at all as the hypervisor swaps the storage > backing an attached volume underneath it. > > When called by cinder, as intended, cinder does some post-operation > cleanup such that is deleted and inherits the same > volume_id; that is effectively becomes . When called any > other way, however, this cleanup doesn't happen, which breaks a bunch > of assumptions. One of these is that a disk's serial number is the > same as the attached volume_id. Disk serial number, in KVM at least, > is immutable, so can't be updated during volume-update. This is fine > if we were called via cinder, because the cinder cleanup means the > volume_id stays the same. If called any other way, however, they no > longer match, at least until a hard reboot when it will be reset to > the new volume_id. It turns out this breaks live migration, but > probably other things too. We can't think of a workaround. > > I wondered why users would want to do this anyway. It turns out that > sometimes cinder won't let you migrate a volume, but nova > volume-update doesn't do those checks (as they're specific to cinder > internals, none of nova's business, and duplicating them would be > fragile, so we're not adding them!). Specifically we know that cinder > won't let you migrate a volume with snapshots. There may be other > reasons. If cinder won't let you migrate your volume, you can still > move your data by using nova's volume-update, even though you'll end > up with a new volume on the destination, and a slightly broken > instance. Apparently the former is a trade-off worth making, but the > latter has been reported as a bug. > > I'd like to make it very clear that nova's volume-update, isn't > expected to work correctly except when called by cinder. Specifically > there was a proposal that we disable volume-update from non-cinder > callers in some way, possibly by asserting volume state that can only > be set by cinder. However, I'm also very aware that users are calling > volume-update because it fills a need, and we don't want to trap data > that wasn't previously trapped. > > Firstly, is anybody aware of any other reasons to use nova's > volume-update directly? > > Secondly, is there any reason why we shouldn't just document then you > have to delete snapshots before doing a volume migration? Hopefully > some cinder folks or operators can chime in to let me know how to back > them up or somehow make them independent before doing this, at which > point the volume itself should be migratable? > > If we can establish that there's an acceptable alternative to calling > volume-update directly for all use-cases we're aware of, I'm going to > propose heading off this class of bug by disabling it for non-cinder > callers. I'm definitely in favor of hiding this from users eventually but wouldn't this require some form of deprecation cycle? Warnings within the API documentation would also be useful and even something we could backport to stable to highlight just how fragile this API is ahead of any policy change. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From nick at stackhpc.com Tue Aug 21 13:28:42 2018 From: nick at stackhpc.com (Nick Jones) Date: Tue, 21 Aug 2018 14:28:42 +0100 Subject: [Openstack-operators] Horizon Custom Logos (Queens, 13.0.1) In-Reply-To: <5B7AEF19.5010502@soe.ucsc.edu> References: <5B7AEF19.5010502@soe.ucsc.edu> Message-ID: Hi Erich. Yeah, I battled against this myself quite recently. Here's what I did to add a logo to the Horizon splash page and to the header of each page itself. Create a file called _splash.html, containing:
And a file called _brand.html, containing: {% load branding %} {% load themes %} I then created a folder called /usr/share/openstack-dashboard/openstack_dashboard/themes/default/templates/auth/ and copied _splash.html into there, copied _brand.html into /usr/share/openstack-dashboard/openstack_dashboard/templates/header/, and finally my 'logo.png' was copied into /usr/lib/python2.7/site-packages/openstack_dashboard/static/dashboard/img/ Note that this approach might differ slightly from your setup, as in my case it's a Kolla-based deployment so these changes are applied to the image I'm using to deploy a Horizon container. But it's the same release (Queens) and a CentOS base image, so in principle the steps should work for you. Hope that helps. -- -Nick On 20 August 2018 at 17:40, Erich Weiler wrote: > Hi Y'all, > > I've been banging my head against a wall for days on this item and can't > find anything via google on how to get around it - I am trying to install a > custom logo onto my Horizon Dashboard front page (the splash page). I have > my logo ready to go, logo-splash.png. I have tried following the > instructions here on how to install a custom logo: > > https://docs.openstack.org/horizon/queens/admin/customize-configure.html > > But it simply doesn't work. It seems this stanza... > > #splash .login { > background: #355796 url(../img/my_cloud_logo_medium.png) no-repeat center > 35px; > } > > ...doesn't actually replace the logo (which is logo-splash.svg), it only > seems to put my file, logo-splash.png as the *background* to the .svg > logo. And since the option there is "no-repeat center", it appears > *behind* the svg logo and I can't see it. I played around with those > options, removing "no-repeat" for example, and it dutifully shows my logo > repeating in the background. But I need the default logo-splash.svg file > to actually be gone and my logo to exist in it's place. Maybe I'm missing > something simple? > > I'm restarting apache and memchached after every change I make when I was > testing. > > And because the images directory is rebuilt every time I restart apache, I > can't even copy in a custom logo-splash.svg file. Which wouldn't help > anyway, as I want my .png file in there instead. I don't have the means to > create a .svg file at this time. ;) > > Help! > > As a side note, I'm using the Queens distribution via RedHat. > > Many thanks in advance, > erich > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Aug 21 15:40:51 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 21 Aug 2018 11:40:51 -0400 Subject: [Openstack-operators] Ops Meetup Agenda Planning - Denver Edition Message-ID: Hello Ops, As you are hopefully aware, the Ops meetup, now integrated as part of the Project Team Gathering (PTG) is rapidly approaching. We are a bit behind on session planning, and we need your help to create an agenda. Please insert your session ideas into this etherpad, add subtopics to already proposed sessions, and +1 those that you are interested in. Also please put your name, and maybe some contact info, at the bottom. If you'd be willing to moderate a session, please add yourself to the moderators list. https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018 The Ops Meetup will take place September 9th - 10th (Monday and Tuesday) in a dedicated space at the PTG. You are welcome and encouraged to participate in other PTG sessions throughout the rest of the week as well. Also as a reminder, EARLY BIRD PRICING ENDS TOMORROW 8/22 at 11:59pm PDT (06:59 UTC). The price will go from $399 to $599 While the price tag may seem a little high to some past Ops Meetup attendees, remember that registration for the PTG includes passes to the next two summits. For you regular summit-goers, that's a good discount. Don't pass it up! Looking forward to seeing lots of new and familiar faces in Denver! Cheers, Erik From jon at csail.mit.edu Tue Aug 21 19:14:48 2018 From: jon at csail.mit.edu (Jonathan Proulx) Date: Tue, 21 Aug 2018 15:14:48 -0400 Subject: [Openstack-operators] Ops Community Documentation - first anchor point In-Reply-To: References: <20180626164210.GA1445@sm-workstation> Message-ID: <20180821191448.odxe7sgrfpwbwd6x@csail.mit.edu> Hi All... I'm still a little confused by the state of this :) I know I made some promises then got distracted the looks like Sean stepped up and got things a bit further, but where is it now? Do we have an active repo? It would be nice to have the repo in place before OPs meetup. -Jon On Tue, Jun 26, 2018 at 07:40:33PM -0500, Amy Marrich wrote: :Sean put together some really great things here and I do think the SiG :might be the way to go as far as ownership for the repos and the plan looks :pretty complete. I've offered to do the Git and Gerrit Lunch and Learn at :the OPS mmetup if needed to help get folks set up and going. : :Amy (spotz) : :On Tue, Jun 26, 2018 at 11:42 AM, Sean McGinnis :wrote: : :> Reviving this thread with a fresh start. See below for the original. :> :> To recap, the ops community is willing to take over some of the operator :> documentation that is no longer available due to the loss of documentation :> team :> resources. From discussions, there needs to be some official governance :> over :> this operator owned repo (or repos) so it is recommended that a sig be :> formed. :> The repos can be created in the meantime, but consideration needs to be :> taken :> about naming as by default, the repo name is what is reflected in the :> documentation publishing location. :> :> SIG Formation :> ------------- :> There were a couple suggestions on naming and focus for this sig, but I :> would :> like to make a slightly different proposal. I would actually like to see a :> sig-operator group formed. We have repos for operator tools and other :> useful :> things and we have a mix of operators, vendors, and others that work :> together :> on things like the ops meetup. I think it would make sense to make this :> into an :> official SIG that could have a broader scope than just documentation. :> :> Docs Repos :> ---------- :> Doug made a good suggestion that we may want these things published under :> something like docs.openstack.org/operations-guide. So based on this, I :> think :> for now at least we should create an opestack/operations-guide repo that :> will :> end up being owned by this SIG. I would expect most documentation :> generated or :> owned by this group would just be located somewhere under that repo, but :> if the :> need arises we can add additional repos. :> :> There are other ops repos out there right now. I would expect the :> ownership of :> those to move under this sig as well, but that is a seperate and less :> pressing :> concern at this point. :> :> Bug Tracking :> ------------ :> There should be some way to track tasks and needs for this documentation :> and :> any other repos that are moved under this sig. Since it is the currently :> planned direction for all OpenStack projects (or at least there is a vocal :> desire for it to be) I think a Storyboard project should be created for :> this :> SIG's activities. :> :> Plan :> ---- :> So to recap above, I would propose the following actions be taken: :> :> 1. Create sig-operators as a group to manage operator efforts at least :> related :> to what needs to be done in repos. :> 2. Create an openstack/operations-guide repo to be the new home of the :> operations documentation. :> 3. Create a new StoryBoard project to help track work in these repos :> x. Document all this. :> 9. Profit! :> :> I'm willing to work through the steps to get these things set up. Please :> give :> feedback if this proposed plan makes sense or if there is anything :> different :> that would be preferred. :> :> Thanks, :> Sean :> :> On Wed, May 23, 2018 at 06:38:32PM -0700, Chris Morgan wrote: :> > Hello Everyone, :> > :> > In the Ops Community documentation working session today in Vancouver, we :> > made some really good progress (etherpad here: :> > https://etherpad.openstack.org/p/YVR-Ops-Community-Docs but not all of :> the :> > good stuff is yet written down). :> > :> > In short, we're going to course correct on maintaining the Operators :> Guide, :> > the HA Guide and Architecture Guide, not edit-in-place via the wiki and :> > instead try still maintaining them as code, but with a different, new set :> > of owners, possibly in a new Ops-focused repo. There was a strong :> consensus :> > that a) code workflow >> wiki workflow and that b) openstack core docs :> > tools are just fine. :> > :> > There is a lot still to be decided on how where and when, but we do have :> an :> > offer of a rewrite of the HA Guide, as long as the changes will be :> allowed :> > to actually land, so we expect to actually start showing some progress. :> > :> > At the end of the session, people wanted to know how to follow along as :> > various people work out how to do this... and so for now that place is :> this :> > very email thread. The idea is if the code for those documents goes to :> live :> > in a different repo, or if new contributors turn up, or if a new version :> we :> > will announce/discuss it here until such time as we have a better home :> for :> > this initiative. :> > :> > Cheers :> > :> > Chris :> > :> > -- :> > Chris Morgan :> :> > _______________________________________________ :> > OpenStack-operators mailing list :> > OpenStack-operators at lists.openstack.org :> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators :> :> :> _______________________________________________ :> OpenStack-operators mailing list :> OpenStack-operators at lists.openstack.org :> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators :> :_______________________________________________ :OpenStack-operators mailing list :OpenStack-operators at lists.openstack.org :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- From sean.mcginnis at gmx.com Tue Aug 21 19:27:44 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 21 Aug 2018 14:27:44 -0500 Subject: [Openstack-operators] Ops Community Documentation - first anchor point In-Reply-To: <20180821191448.odxe7sgrfpwbwd6x@csail.mit.edu> References: <20180626164210.GA1445@sm-workstation> <20180821191448.odxe7sgrfpwbwd6x@csail.mit.edu> Message-ID: <20180821192743.GA31796@sm-workstation> On Tue, Aug 21, 2018 at 03:14:48PM -0400, Jonathan Proulx wrote: > Hi All... > > I'm still a little confused by the state of this :) > > I know I made some promises then got distracted the looks like Sean > stepped up and got things a bit further, but where is it now? Do we > have an active repo? > > It would be nice to have the repo in place before OPs meetup. > > -Jon > Hey Jon, Pretty much everything is in place now. There is one outstanding patch to officially add things under the SIG governance here: https://review.openstack.org/#/c/591248/ That's a formality that needs to be done, but we do have the content being published to the site: https://docs.openstack.org/operations-guide/ And we have the repo set up and ready for updates to be proposed: http://git.openstack.org/cgit/openstack/operations-guide Our next step is to start encouraging contributions from the community. This would be a great things to discuss at the PTG, and I now realize that I didn't add this obvious thing to our ops PTG planning etherpad. I will add it there and hopefully we can go over some of the content and get some more interest in contributing to it. Thanks! Sean From doug at doughellmann.com Tue Aug 21 19:28:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 21 Aug 2018 15:28:45 -0400 Subject: [Openstack-operators] Ops Community Documentation - first anchor point In-Reply-To: <20180821191448.odxe7sgrfpwbwd6x@csail.mit.edu> References: <20180626164210.GA1445@sm-workstation> <20180821191448.odxe7sgrfpwbwd6x@csail.mit.edu> Message-ID: <1534879663-sup-6809@lrrr.local> Excerpts from Jonathan Proulx's message of 2018-08-21 15:14:48 -0400: > Hi All... > > I'm still a little confused by the state of this :) > > I know I made some promises then got distracted the looks like Sean > stepped up and got things a bit further, but where is it now? Do we > have an active repo? > > It would be nice to have the repo in place before OPs meetup. > > -Jon The repository exists at http://git.openstack.org/cgit/openstack/operations-guide/ and the results are being published to https://docs.openstack.org/operations-guide/ Now we need more reviewers and authors. :-) Doug > > On Tue, Jun 26, 2018 at 07:40:33PM -0500, Amy Marrich wrote: > :Sean put together some really great things here and I do think the SiG > :might be the way to go as far as ownership for the repos and the plan looks > :pretty complete. I've offered to do the Git and Gerrit Lunch and Learn at > :the OPS mmetup if needed to help get folks set up and going. > : > :Amy (spotz) > : > :On Tue, Jun 26, 2018 at 11:42 AM, Sean McGinnis > :wrote: > : > :> Reviving this thread with a fresh start. See below for the original. > :> > :> To recap, the ops community is willing to take over some of the operator > :> documentation that is no longer available due to the loss of documentation > :> team > :> resources. From discussions, there needs to be some official governance > :> over > :> this operator owned repo (or repos) so it is recommended that a sig be > :> formed. > :> The repos can be created in the meantime, but consideration needs to be > :> taken > :> about naming as by default, the repo name is what is reflected in the > :> documentation publishing location. > :> > :> SIG Formation > :> ------------- > :> There were a couple suggestions on naming and focus for this sig, but I > :> would > :> like to make a slightly different proposal. I would actually like to see a > :> sig-operator group formed. We have repos for operator tools and other > :> useful > :> things and we have a mix of operators, vendors, and others that work > :> together > :> on things like the ops meetup. I think it would make sense to make this > :> into an > :> official SIG that could have a broader scope than just documentation. > :> > :> Docs Repos > :> ---------- > :> Doug made a good suggestion that we may want these things published under > :> something like docs.openstack.org/operations-guide. So based on this, I > :> think > :> for now at least we should create an opestack/operations-guide repo that > :> will > :> end up being owned by this SIG. I would expect most documentation > :> generated or > :> owned by this group would just be located somewhere under that repo, but > :> if the > :> need arises we can add additional repos. > :> > :> There are other ops repos out there right now. I would expect the > :> ownership of > :> those to move under this sig as well, but that is a seperate and less > :> pressing > :> concern at this point. > :> > :> Bug Tracking > :> ------------ > :> There should be some way to track tasks and needs for this documentation > :> and > :> any other repos that are moved under this sig. Since it is the currently > :> planned direction for all OpenStack projects (or at least there is a vocal > :> desire for it to be) I think a Storyboard project should be created for > :> this > :> SIG's activities. > :> > :> Plan > :> ---- > :> So to recap above, I would propose the following actions be taken: > :> > :> 1. Create sig-operators as a group to manage operator efforts at least > :> related > :> to what needs to be done in repos. > :> 2. Create an openstack/operations-guide repo to be the new home of the > :> operations documentation. > :> 3. Create a new StoryBoard project to help track work in these repos > :> x. Document all this. > :> 9. Profit! > :> > :> I'm willing to work through the steps to get these things set up. Please > :> give > :> feedback if this proposed plan makes sense or if there is anything > :> different > :> that would be preferred. > :> > :> Thanks, > :> Sean > :> > :> On Wed, May 23, 2018 at 06:38:32PM -0700, Chris Morgan wrote: > :> > Hello Everyone, > :> > > :> > In the Ops Community documentation working session today in Vancouver, we > :> > made some really good progress (etherpad here: > :> > https://etherpad.openstack.org/p/YVR-Ops-Community-Docs but not all of > :> the > :> > good stuff is yet written down). > :> > > :> > In short, we're going to course correct on maintaining the Operators > :> Guide, > :> > the HA Guide and Architecture Guide, not edit-in-place via the wiki and > :> > instead try still maintaining them as code, but with a different, new set > :> > of owners, possibly in a new Ops-focused repo. There was a strong > :> consensus > :> > that a) code workflow >> wiki workflow and that b) openstack core docs > :> > tools are just fine. > :> > > :> > There is a lot still to be decided on how where and when, but we do have > :> an > :> > offer of a rewrite of the HA Guide, as long as the changes will be > :> allowed > :> > to actually land, so we expect to actually start showing some progress. > :> > > :> > At the end of the session, people wanted to know how to follow along as > :> > various people work out how to do this... and so for now that place is > :> this > :> > very email thread. The idea is if the code for those documents goes to > :> live > :> > in a different repo, or if new contributors turn up, or if a new version > :> we > :> > will announce/discuss it here until such time as we have a better home > :> for > :> > this initiative. > :> > > :> > Cheers > :> > > :> > Chris > :> > > :> > -- > :> > Chris Morgan > :> > :> > _______________________________________________ > :> > OpenStack-operators mailing list > :> > OpenStack-operators at lists.openstack.org > :> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > :> > :> > :> _______________________________________________ > :> OpenStack-operators mailing list > :> OpenStack-operators at lists.openstack.org > :> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > :> > > :_______________________________________________ > :OpenStack-operators mailing list > :OpenStack-operators at lists.openstack.org > :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From ed at leafe.com Tue Aug 21 19:44:04 2018 From: ed at leafe.com (Ed Leafe) Date: Tue, 21 Aug 2018 14:44:04 -0500 Subject: [Openstack-operators] UC Elections will not be held Message-ID: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> As there were only 2 nominations for the 2 open seats, elections will not be needed. Congratulations to Matt Van Winkle and Joseph Sandoval! -- Ed Leafe From jon at csail.mit.edu Tue Aug 21 19:44:20 2018 From: jon at csail.mit.edu (Jonathan Proulx) Date: Tue, 21 Aug 2018 15:44:20 -0400 Subject: [Openstack-operators] Ops Community Documentation - first anchor point In-Reply-To: <20180821192743.GA31796@sm-workstation> References: <20180626164210.GA1445@sm-workstation> <20180821191448.odxe7sgrfpwbwd6x@csail.mit.edu> <20180821192743.GA31796@sm-workstation> Message-ID: <20180821194420.j4fmjunhqs7satq5@csail.mit.edu> On Tue, Aug 21, 2018 at 02:27:44PM -0500, Sean McGinnis wrote: :On Tue, Aug 21, 2018 at 03:14:48PM -0400, Jonathan Proulx wrote: :Hey Jon, : :Pretty much everything is in place now. There is one outstanding patch to :officially add things under the SIG governance here: : :https://review.openstack.org/#/c/591248/ : :That's a formality that needs to be done, but we do have the content being :published to the site: : :https://docs.openstack.org/operations-guide/ : :And we have the repo set up and ready for updates to be proposed: : :http://git.openstack.org/cgit/openstack/operations-guide : :Our next step is to start encouraging contributions from the community. This :would be a great things to discuss at the PTG, and I now realize that I didn't :add this obvious thing to our ops PTG planning etherpad. I will add it there :and hopefully we can go over some of the content and get some more interest in :contributing to it. Thanks for picking up my slack there :) -Jon From amy at demarco.com Tue Aug 21 20:26:44 2018 From: amy at demarco.com (Amy Marrich) Date: Tue, 21 Aug 2018 15:26:44 -0500 Subject: [Openstack-operators] [Openstack-sigs] UC Elections will not be held In-Reply-To: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> References: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> Message-ID: Congrats to VW and Joseph. Thank you to Saverio for his hard work. And lastly thank you to Ed, Chandan, and Mohamed for serving as our election officials! Amy (spotz) User Committee On Tue, Aug 21, 2018 at 2:44 PM, Ed Leafe wrote: > As there were only 2 nominations for the 2 open seats, elections will not > be needed. Congratulations to Matt Van Winkle and Joseph Sandoval! > > -- Ed Leafe > > > > > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mizuno.shintaro at lab.ntt.co.jp Wed Aug 22 07:04:40 2018 From: mizuno.shintaro at lab.ntt.co.jp (Shintaro Mizuno) Date: Wed, 22 Aug 2018 16:04:40 +0900 Subject: [Openstack-operators] Ops Meetup Agenda Planning - Denver Edition In-Reply-To: References: Message-ID: Erik, all, > The Ops Meetup will take place September 9th - 10th (Monday and > Tuesday) in a dedicated space at the PTG. You are welcome and It's 10th - 11th (Monday and Tuesday) in case someone is planning their travel :) Cheers. Shintaro On 2018/08/22 0:40, Erik McCormick wrote: > Hello Ops, > > As you are hopefully aware, the Ops meetup, now integrated as part of > the Project Team Gathering (PTG) is rapidly approaching. We are a bit > behind on session planning, and we need your help to create an agenda. > > Please insert your session ideas into this etherpad, add subtopics to > already proposed sessions, and +1 those that you are interested in. > Also please put your name, and maybe some contact info, at the bottom. > If you'd be willing to moderate a session, please add yourself to the > moderators list. > > https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018 > > The Ops Meetup will take place September 9th - 10th (Monday and > Tuesday) in a dedicated space at the PTG. You are welcome and > encouraged to participate in other PTG sessions throughout the rest of > the week as well. > > Also as a reminder, EARLY BIRD PRICING ENDS TOMORROW 8/22 at 11:59pm > PDT (06:59 UTC). The price will go from $399 to $599 > > While the price tag may seem a little high to some past Ops Meetup > attendees, remember that registration for the PTG includes passes to > the next two summits. For you regular summit-goers, that's a good > discount. Don't pass it up! > > Looking forward to seeing lots of new and familiar faces in Denver! > > Cheers, > Erik > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Shintaro MIZUNO (水野伸太郎) NTT Software Innovation Center TEL: 0422-59-4977 E-mail: mizuno.shintaro at lab.ntt.co.jp From emccormick at cirrusseven.com Wed Aug 22 13:24:53 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 22 Aug 2018 09:24:53 -0400 Subject: [Openstack-operators] Ops Meetup Agenda Planning - Denver Edition In-Reply-To: References: Message-ID: Good catch Shintaro! Yes, September 10-10 and 11. I must have flipped my arrival date with the start date. Thanks! -Erik On Wed, Aug 22, 2018, 3:06 AM Shintaro Mizuno wrote: > Erik, all, > > > The Ops Meetup will take place September 9th - 10th (Monday and > > Tuesday) in a dedicated space at the PTG. You are welcome and > > It's 10th - 11th (Monday and Tuesday) in case someone is planning their > travel :) > > Cheers. > Shintaro > > On 2018/08/22 0:40, Erik McCormick wrote: > > Hello Ops, > > > > As you are hopefully aware, the Ops meetup, now integrated as part of > > the Project Team Gathering (PTG) is rapidly approaching. We are a bit > > behind on session planning, and we need your help to create an agenda. > > > > Please insert your session ideas into this etherpad, add subtopics to > > already proposed sessions, and +1 those that you are interested in. > > Also please put your name, and maybe some contact info, at the bottom. > > If you'd be willing to moderate a session, please add yourself to the > > moderators list. > > > > https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018 > > > > The Ops Meetup will take place September 9th - 10th (Monday and > > Tuesday) in a dedicated space at the PTG. You are welcome and > > encouraged to participate in other PTG sessions throughout the rest of > > the week as well. > > > > Also as a reminder, EARLY BIRD PRICING ENDS TOMORROW 8/22 at 11:59pm > > PDT (06:59 UTC). The price will go from $399 to $599 > > > > While the price tag may seem a little high to some past Ops Meetup > > attendees, remember that registration for the PTG includes passes to > > the next two summits. For you regular summit-goers, that's a good > > discount. Don't pass it up! > > > > Looking forward to seeing lots of new and familiar faces in Denver! > > > > Cheers, > > Erik > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > -- > Shintaro MIZUNO (水野伸太郎) > NTT Software Innovation Center > TEL: 0422-59-4977 > E-mail: mizuno.shintaro at lab.ntt.co.jp > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Aug 22 13:57:47 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 22 Aug 2018 08:57:47 -0500 Subject: [Openstack-operators] [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: <20180822094620.kncry4ufbe6fwi5u@localhost> References: <20180822094620.kncry4ufbe6fwi5u@localhost> Message-ID: <20180822135747.GA27570@sm-workstation> > > The solution is conceptually simple. We add a new API microversion in > Cinder that adds and optional parameter called "generic_keep_source" > (defaults to False) to both migrate and retype operations. > > This means that if the driver optimized migration cannot do the > migration and the generic migration code is the one doing the migration, > then, instead of our final step being to swap the volume id's and > deleting the source volume, what we would do is to swap the volume id's > and move all the snapshots to reference the new volume. Then we would > create a user message with the new ID of the volume. > How would you propose to "move all the snapshots to reference the new volume"? Most storage does not allow a snapshot to be moved from one volume to another. really the only way a migration of a snapshot can work across all storage types would be to incrementally copy the data from a source to a destination up to the point of the oldest snapshot, create a new snapshot on the new volume, then proceed through until all snapshots have been rebuilt on the new volume. From rosmaita.fossdev at gmail.com Wed Aug 22 15:05:09 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 22 Aug 2018 11:05:09 -0400 Subject: [Openstack-operators] [glance] share image with domain In-Reply-To: <094CFFC9-1F54-4522-8178-1642F94724A0@betacloud-solutions.de> References: <094CFFC9-1F54-4522-8178-1642F94724A0@betacloud-solutions.de> Message-ID: On Tue, Jul 10, 2018 at 8:04 AM Christian Berendt wrote: > It is possible to add a domain as a member, however this is not taken in account. It should be mentioned that you can also add non-existing project ids as a member. Yes, you can add any string as an image member. Back when image sharing was implemented, it was thought that making an extra call to keystone to verify the member_id would be too expensive because people would be sharing so many images all the time, but AFAIK, that hasn't turned out to be the case. So it may be worth revisiting this issue. > For me it looks like it is not possible to share a image with visibility “shared” with a domain. That is correct, the items in the member-list are treated as project IDs. (Actually, that's not entirely true, but will be completely true very early in the Stein development cycle.[0]) > Are there known workarounds or scripts for that use case? I'm not aware of any. You could write a script that took a domain ID, got the list of projects in that domain, and shared the image with all those projects, but then you'd have a synchronization problem when projects were added to/removed from the domain, so that's probably not a good idea. If this is an important use case, please consider proposing a Glance spec [1] or proposing it as a topic for the upcoming PTG [2]. cheers, brian [0] https://specs.openstack.org/openstack/glance-specs/specs/rocky/implemented/glance/spec-lite-deprecate-owner_is_tenant.html [1] http://git.openstack.org/cgit/openstack/glance-specs [2] https://etherpad.openstack.org/p/stein-ptg-glance-planning > Christian. > > -- > Christian Berendt > Chief Executive Officer (CEO) > > Mail: berendt at betacloud-solutions.de > Web: https://www.betacloud-solutions.de > > Betacloud Solutions GmbH > Teckstrasse 62 / 70190 Stuttgart / Deutschland > > Geschäftsführer: Christian Berendt > Unternehmenssitz: Stuttgart > Amtsgericht: Stuttgart, HRB 756139 > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From mriedemos at gmail.com Thu Aug 23 01:23:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 22 Aug 2018 20:23:41 -0500 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration Message-ID: Hi everyone, I have started an etherpad for cells topics at the Stein PTG [1]. The main issue in there right now is dealing with cross-cell cold migration in nova. At a high level, I am going off these requirements: * Cells can shard across flavors (and hardware type) so operators would like to move users off the old flavors/hardware (old cell) to new flavors in a new cell. * There is network isolation between compute hosts in different cells, so no ssh'ing the disk around like we do today. But the image service is global to all cells. Based on this, for the initial support for cross-cell cold migration, I am proposing that we leverage something like shelve offload/unshelve masquerading as resize. We shelve offload from the source cell and unshelve in the target cell. This should work for both volume-backed and non-volume-backed servers (we use snapshots for shelved offloaded non-volume-backed servers). There are, of course, some complications. The main ones that I need help with right now are what happens with volumes and ports attached to the server. Today we detach from the source and attach at the target, but that's assuming the storage backend and network are available to both hosts involved in the move of the server. Will that be the case across cells? I am assuming that depends on the network topology (are routed networks being used?) and storage backend (routed storage?). If the network and/or storage backend are not available across cells, how do we migrate volumes and ports? Cinder has a volume migrate API for admins but I do not know how nova would know the proper affinity per-cell to migrate the volume to the proper host (cinder does not have a routed storage concept like routed provider networks in neutron, correct?). And as far as I know, there is no such thing as port migration in Neutron. Could Placement help with the volume/port migration stuff? Neutron routed provider networks rely on placement aggregates to schedule the VM to a compute host in the same network segment as the port used to create the VM, however, if that segment does not span cells we are kind of stuck, correct? To summarize the issues as I see them (today): * How to deal with the targeted cell during scheduling? This is so we can even get out of the source cell in nova. * How does the API deal with the same instance being in two DBs at the same time during the move? * How to handle revert resize? * How are volumes and ports handled? I can get feedback from my company's operators based on what their deployment will look like for this, but that does not mean it will work for others, so I need as much feedback from operators, especially those running with multiple cells today, as possible. Thanks in advance. [1] https://etherpad.openstack.org/p/nova-ptg-stein-cells -- Thanks, Matt From sorrison at gmail.com Thu Aug 23 02:14:28 2018 From: sorrison at gmail.com (Sam Morrison) Date: Thu, 23 Aug 2018 12:14:28 +1000 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: Message-ID: I think in our case we’d only migrate between cells if we know the network and storage is accessible and would never do it if not. Thinking moving from old to new hardware at a cell level. If storage and network isn’t available ideally it would fail at the api request. There is also ceph backed instances and so this is also something to take into account which nova would be responsible for. I’ll be in Denver so we can discuss more there too. Cheers, Sam > On 23 Aug 2018, at 11:23 am, Matt Riedemann wrote: > > Hi everyone, > > I have started an etherpad for cells topics at the Stein PTG [1]. The main issue in there right now is dealing with cross-cell cold migration in nova. > > At a high level, I am going off these requirements: > > * Cells can shard across flavors (and hardware type) so operators would like to move users off the old flavors/hardware (old cell) to new flavors in a new cell. > > * There is network isolation between compute hosts in different cells, so no ssh'ing the disk around like we do today. But the image service is global to all cells. > > Based on this, for the initial support for cross-cell cold migration, I am proposing that we leverage something like shelve offload/unshelve masquerading as resize. We shelve offload from the source cell and unshelve in the target cell. This should work for both volume-backed and non-volume-backed servers (we use snapshots for shelved offloaded non-volume-backed servers). > > There are, of course, some complications. The main ones that I need help with right now are what happens with volumes and ports attached to the server. Today we detach from the source and attach at the target, but that's assuming the storage backend and network are available to both hosts involved in the move of the server. Will that be the case across cells? I am assuming that depends on the network topology (are routed networks being used?) and storage backend (routed storage?). If the network and/or storage backend are not available across cells, how do we migrate volumes and ports? Cinder has a volume migrate API for admins but I do not know how nova would know the proper affinity per-cell to migrate the volume to the proper host (cinder does not have a routed storage concept like routed provider networks in neutron, correct?). And as far as I know, there is no such thing as port migration in Neutron. > > Could Placement help with the volume/port migration stuff? Neutron routed provider networks rely on placement aggregates to schedule the VM to a compute host in the same network segment as the port used to create the VM, however, if that segment does not span cells we are kind of stuck, correct? > > To summarize the issues as I see them (today): > > * How to deal with the targeted cell during scheduling? This is so we can even get out of the source cell in nova. > > * How does the API deal with the same instance being in two DBs at the same time during the move? > > * How to handle revert resize? > > * How are volumes and ports handled? > > I can get feedback from my company's operators based on what their deployment will look like for this, but that does not mean it will work for others, so I need as much feedback from operators, especially those running with multiple cells today, as possible. Thanks in advance. > > [1] https://etherpad.openstack.org/p/nova-ptg-stein-cells > > -- > > Thanks, > > Matt From dms at danplanet.com Thu Aug 23 13:29:23 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 23 Aug 2018 06:29:23 -0700 Subject: [Openstack-operators] [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <20180823104210.kgctxfjiq47uru34@localhost> (Gorka Eguileor's message of "Thu, 23 Aug 2018 12:42:10 +0200") References: <20180823104210.kgctxfjiq47uru34@localhost> Message-ID: > I think Nova should never have to rely on Cinder's hosts/backends > information to do migrations or any other operation. > > In this case even if Nova had that info, it wouldn't be the solution. > Cinder would reject migrations if there's an incompatibility on the > Volume Type (AZ, Referenced backend, capabilities...) I think I'm missing a bunch of cinder knowledge required to fully grok this situation and probably need to do some reading. Is there some reason that a volume type can't exist in multiple backends or something? I guess I think of volume type as flavor, and the same definition in two places would be interchangeable -- is that not the case? > I don't know anything about Nova cells, so I don't know the specifics of > how we could do the mapping between them and Cinder backends, but > considering the limited range of possibilities in Cinder I would say we > only have Volume Types and AZs to work a solution. I think the only mapping we need is affinity or distance. The point of needing to migrate the volume would purely be because moving cells likely means you moved physically farther away from where you were, potentially with different storage connections and networking. It doesn't *have* to mean that, but I think in reality it would. So the question I think Matt is looking to answer here is "how do we move an instance from a DC in building A to building C and make sure the volume gets moved to some storage local in the new building so we're not just transiting back to the original home for no reason?" Does that explanation help or are you saying that's fundamentally hard to do/orchestrate? Fundamentally, the cells thing doesn't even need to be part of the discussion, as the same rules would apply if we're just doing a normal migration but need to make sure that storage remains affined to compute. > I don't know how the Nova Placement works, but it could hold an > equivalency mapping of volume types to cells as in: > > Cell#1 Cell#2 > > VolTypeA <--> VolTypeD > VolTypeB <--> VolTypeE > VolTypeC <--> VolTypeF > > Then it could do volume retypes (allowing migration) and that would > properly move the volumes from one backend to another. The only way I can think that we could do this in placement would be if volume types were resource providers and we assigned them traits that had special meaning to nova indicating equivalence. Several of the words in that sentence are likely to freak out placement people, myself included :) So is the concern just that we need to know what volume types in one backend map to those in another so that when we do the migration we know what to ask for? Is "they are the same name" not enough? Going back to the flavor analogy, you could kinda compare two flavor definitions and have a good idea if they're equivalent or not... --Dan From jaypipes at gmail.com Thu Aug 23 14:30:59 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 23 Aug 2018 10:30:59 -0400 Subject: [Openstack-operators] [glance] share image with domain In-Reply-To: References: <094CFFC9-1F54-4522-8178-1642F94724A0@betacloud-solutions.de> Message-ID: On 08/22/2018 11:05 AM, Brian Rosmaita wrote: > On Tue, Jul 10, 2018 at 8:04 AM Christian Berendt > wrote: > >> It is possible to add a domain as a member, however this is not taken in account. It should be mentioned that you can also add non-existing project ids as a member. > > Yes, you can add any string as an image member. Back when image > sharing was implemented, it was thought that making an extra call to > keystone to verify the member_id would be too expensive because people > would be sharing so many images all the time, but AFAIK, that hasn't > turned out to be the case. So it may be worth revisiting this issue. It's not just that. It's that if someone then deletes the project from Keystone, it's not like Glance listens for that project delete notification and takes some remediation. Instead, Glance would just keep a now-orphaned project ID in its membership data store. Which is perfectly fine IMHO. Is it ideal? No, but does it hurt anything? Not really... You could make some cleanup out-of-band process that looked for orphaned information and cleaned it up later on. Such is the life of highly distributed services with no single data store that uses referential integrity... :) Best, -jay >> For me it looks like it is not possible to share a image with visibility “shared” with a domain. > > That is correct, the items in the member-list are treated as project > IDs. (Actually, that's not entirely true, but will be completely true > very early in the Stein development cycle.[0]) > >> Are there known workarounds or scripts for that use case? > > I'm not aware of any. You could write a script that took a domain ID, > got the list of projects in that domain, and shared the image with all > those projects, but then you'd have a synchronization problem when > projects were added to/removed from the domain, so that's probably not > a good idea. > > If this is an important use case, please consider proposing a Glance > spec [1] or proposing it as a topic for the upcoming PTG [2]. > > cheers, > brian > > [0] https://specs.openstack.org/openstack/glance-specs/specs/rocky/implemented/glance/spec-lite-deprecate-owner_is_tenant.html > [1] http://git.openstack.org/cgit/openstack/glance-specs > [2] https://etherpad.openstack.org/p/stein-ptg-glance-planning > >> Christian. >> >> -- >> Christian Berendt >> Chief Executive Officer (CEO) >> >> Mail: berendt at betacloud-solutions.de >> Web: https://www.betacloud-solutions.de >> >> Betacloud Solutions GmbH >> Teckstrasse 62 / 70190 Stuttgart / Deutschland >> >> Geschäftsführer: Christian Berendt >> Unternehmenssitz: Stuttgart >> Amtsgericht: Stuttgart, HRB 756139 >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From sean.mcginnis at gmx.com Thu Aug 23 15:22:43 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 23 Aug 2018 10:22:43 -0500 Subject: [Openstack-operators] [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: Message-ID: <20180823152242.GB23060@sm-workstation> On Wed, Aug 22, 2018 at 08:23:41PM -0500, Matt Riedemann wrote: > Hi everyone, > > I have started an etherpad for cells topics at the Stein PTG [1]. The main > issue in there right now is dealing with cross-cell cold migration in nova. > > At a high level, I am going off these requirements: > > * Cells can shard across flavors (and hardware type) so operators would like > to move users off the old flavors/hardware (old cell) to new flavors in a > new cell. > > * There is network isolation between compute hosts in different cells, so no > ssh'ing the disk around like we do today. But the image service is global to > all cells. > > Based on this, for the initial support for cross-cell cold migration, I am > proposing that we leverage something like shelve offload/unshelve > masquerading as resize. We shelve offload from the source cell and unshelve > in the target cell. This should work for both volume-backed and > non-volume-backed servers (we use snapshots for shelved offloaded > non-volume-backed servers). > > There are, of course, some complications. The main ones that I need help > with right now are what happens with volumes and ports attached to the > server. Today we detach from the source and attach at the target, but that's > assuming the storage backend and network are available to both hosts > involved in the move of the server. Will that be the case across cells? I am > assuming that depends on the network topology (are routed networks being > used?) and storage backend (routed storage?). If the network and/or storage > backend are not available across cells, how do we migrate volumes and ports? > Cinder has a volume migrate API for admins but I do not know how nova would > know the proper affinity per-cell to migrate the volume to the proper host > (cinder does not have a routed storage concept like routed provider networks > in neutron, correct?). And as far as I know, there is no such thing as port > migration in Neutron. > Just speaking to iSCSI storage, I know some deployments do not route their storage traffic. If this is the case, then both cells would need to have access to the same subnet to still access the volume. I'm also referring to the case where the migration is from one compute host to another compute host, and not from one storage backend to another storage backend. I haven't gone through the workflow, but I thought shelve/unshelve could detach the volume on shelving and reattach it on unshelve. In that workflow, assuming the networking is in place to provide the connectivity, the nova compute host would be connecting to the volume just like any other attach and should work fine. The unknown or tricky part is making sure that there is the network connectivity or routing in place for the compute host to be able to log in to the storage target. If it's the other scenario mentioned where the volume needs to be migrated from one storage backend to another storage backend, then that may require a little more work. The volume would need to be retype'd or migrated (storage migration) from the original backend to the new backend. Again, in this scenario at some point there needs to be network connectivity between cells to copy over that data. There is no storage-offloaded migration in this situation, so Cinder can't currently optimize how that data gets from the original volume backend to the new one. It would require a host copy of all the data on the volume (an often slow and expensive operation) and it would require that the host doing the data copy has access to both the original backend and then new backend. From anne at openstack.org Thu Aug 23 21:21:26 2018 From: anne at openstack.org (Anne Bertucio) Date: Thu, 23 Aug 2018 14:21:26 -0700 Subject: [Openstack-operators] [community][Rocky] Community Meeting: Rocky + project updates In-Reply-To: <87363388-E7B9-499B-AC96-D2751504DAEB@openstack.org> References: <87363388-E7B9-499B-AC96-D2751504DAEB@openstack.org> Message-ID: <50F06905-4D8D-4DFA-AC5E-AFEC5A234B89@openstack.org> Hi all, Updated meeting information below for the OpenStack Community Meeting on August 30 at 3pm UTC. We’ll cover what’s new in the Rocky release, hear updates from the Airship, Kata Containers, StarlingX and Zuul projects, and get a preview of the Berlin Summit. Hope you can join us, but if not, it will be recorded! When: Aug 30, 2018 8:00 AM Pacific Time (US and Canada) Topic: OpenStack Community Meeting Please click the link below to join the webinar: https://zoom.us/j/551803657 Or iPhone one-tap : US: +16699006833,,551803657# or +16468769923,,551803657# Or Telephone: Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 Webinar ID: 551 803 657 International numbers available: https://zoom.us/u/bh2jVweqf Cheers, Anne Bertucio OpenStack Foundation anne at openstack.org | irc: annabelleB > On Aug 16, 2018, at 9:46 AM, Anne Bertucio wrote: > > Hi all, > > Save the date for an OpenStack community meeting on August 30 at 3pm UTC. This is the evolution of the “Marketing Community Release Preview” meeting that we’ve had each cycle. While that meeting has always been open to all, we wanted to expand the topics and encourage anyone who was interested in getting updates on the Rocky release or the newer projects at OSF to attend. > > We’ll cover: > —What’s new in Rocky > (This info will still be at a fairly high level, so might not be new information if you’re someone who stays up to date in the dev ML or is actively involved in upstream work) > > —Updates from Airship, Kata Containers, StarlingX, and Zuul > > —What you can expect at the Berlin Summit in November > > This meeting will be run over Zoom (look for info closer to the 30th) and will be recorded, so if you can’t make the time, don’t panic! > > Cheers, > Anne Bertucio > OpenStack Foundation > anne at openstack.org | irc: annabelleB > > > > > > _______________________________________________ > Marketing mailing list > Marketing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/marketing -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas at lrasc.fr Fri Aug 24 15:11:34 2018 From: nicolas at lrasc.fr (nicolas at lrasc.fr) Date: Fri, 24 Aug 2018 17:11:34 +0200 Subject: [Openstack-operators] [openstack-ansible][queens] networking_sfc plugin not found - venv pb ? Message-ID: <4bb10e155bf01dc550dd20013e05358f@lrasc.fr> OpenStack version : stable/queens OSA version : 17.0.9.dev22 python env version: python2.7 operating system : Ubuntu Server 16.04 Hi all, I was trying to install the *networking_sfc* plugin on my openstack environment thanks to OSA, but it failed. But I may have found the problem. I think the problem comes from the python virtualenv and the networking_sfc python package that is not installed by OSA. Thanks to OSA, I have a python2.7 virtualenv on my neutron-server: "/openstack/venvs/neutron-17.0.9/lib/python2.7" I think its a bug in OSA. But maybe I missed something. These are my steps: 1. Following the inspiration from here (link below), but without installing ODL. I modified the OSA "user_variables.yml" like this. https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-opendaylight.html user at OSA: vim /etc/openstack_deploy/user_variables.yml ``` [...] neutron_plugin_base: - router - metering - networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin - networking_sfc.services.sfc.plugin.SfcPlugin [...] ``` 2. After the OSA deployment is finished with success (with 0 failed), I can see in the neutron-server log that plugins related to SFC are not found. user at neutron-serveur: less /var/log/neutron/neutron-server.log ``` [...] Plugin 'networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin' not found. [...] ``` 3. I decided to manually install [over my OSA deployment] and configure networking_sfc following these links: * https://docs.openstack.org/networking-sfc/latest/install/install.html * https://docs.openstack.org/releasenotes/networking-sfc/queens.html I install with pip (python2.7). BUT first, I must source the right venv (OSA seems to be prepared for that): user at neutron-serveur: source /openstack/venvs/neutron-17.0.9/bin/activate Then I install networking-sfc: (neutron-17.0.9) user at neutron-serveur: pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens networking-sfc==6.0.0 The install seems to be ok (no error, only Ignoring python3.x version of soft). Then, I modify the neutron config files like this: https://docs.openstack.org/networking-sfc/latest/install/configuration.html And then it seems to be good so far (`openstack network agent list` show all the agent needed, but I have not tested SFC feature yet. I can keep you updated next week). Also, I don't have the CLI "neutron-db-manage" and I don't find how to install/use it. And I don't know if this is important for OSA. Best regards, -- Nicolas From kennelson11 at gmail.com Fri Aug 24 18:15:26 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 24 Aug 2018 11:15:26 -0700 Subject: [Openstack-operators] Berlin Community Contributor Awards Message-ID: Hello Everyone! As we approach the Summit (still a ways away thankfully), I thought I would kick off the Community Contributor Award nominations early this round. For those of you that already know what they are, here is the form[1]. For those of you that have never heard of the CCA, I'll briefly explain what they are :) We all know people in the community that do the dirty jobs, we all know people that will bend over backwards trying to help someone new, we all know someone that is a savant in some area of the code we could never hope to understand. These people rarely get the thanks they deserve and the Community Contributor Awards are a chance to make sure they know that they are appreciated for the amazing work they do and skills they have. So go forth and nominate these amazing community members[1]! Nominations will close on October 21st at 7:00 UTC and winners will be announced at the OpenStack Summit in Berlin. -Kendall (diablo_rojo) [1] https://openstackfoundation.formstack.com/forms/berlin_stein_ccas -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Aug 24 21:10:07 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 16:10:07 -0500 Subject: [Openstack-operators] [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: <20180823152242.GB23060@sm-workstation> Message-ID: +operators On 8/24/2018 4:08 PM, Matt Riedemann wrote: > On 8/23/2018 10:22 AM, Sean McGinnis wrote: >> I haven't gone through the workflow, but I thought shelve/unshelve >> could detach >> the volume on shelving and reattach it on unshelve. In that workflow, >> assuming >> the networking is in place to provide the connectivity, the nova >> compute host >> would be connecting to the volume just like any other attach and >> should work >> fine. The unknown or tricky part is making sure that there is the network >> connectivity or routing in place for the compute host to be able to >> log in to >> the storage target. > > Yeah that's also why I like shelve/unshelve as a start since it's doing > volume detach from the source host in the source cell and volume attach > to the target host in the target cell. > > Host aggregates in Nova, as a grouping concept, are not restricted to > cells at all, so you could have hosts in the same aggregate which span > cells, so I'd think that's what operators would be doing if they have > network/storage spanning multiple cells. Having said that, host > aggregates are not exposed to non-admin end users, so again, if we rely > on a normal user to do this move operation via resize, the only way we > can restrict the instance to another host in the same aggregate is via > availability zones, which is the user-facing aggregate construct in > nova. I know Sam would care about this because NeCTAR sets > [cinder]/cross_az_attach=False in nova.conf so servers/volumes are > restricted to the same AZ, but that's not the default, and specifying an > AZ when you create a server is not required (although there is a config > option in nova which allows operators to define a default AZ for the > instance if the user didn't specify one). > > Anyway, my point is, there are a lot of "ifs" if it's not an > operator/admin explicitly telling nova where to send the server if it's > moving across cells. > >> >> If it's the other scenario mentioned where the volume needs to be >> migrated from >> one storage backend to another storage backend, then that may require >> a little >> more work. The volume would need to be retype'd or migrated (storage >> migration) >> from the original backend to the new backend. > > Yeah, the thing with retype/volume migration that isn't great is it > triggers the swap_volume callback to the source host in nova, so if nova > was orchestrating the volume retype/move, we'd need to wait for the swap > volume to be done (not impossible) before proceeding, and only the > libvirt driver implements the swap volume API. I've always wondered, > what the hell do non-libvirt deployments do with respect to the volume > retype/migration APIs in Cinder? Just disable them via policy? > -- Thanks, Matt From mriedemos at gmail.com Fri Aug 24 21:23:06 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 16:23:06 -0500 Subject: [Openstack-operators] [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration) In-Reply-To: <20180821103628.dk3ok76fdruwsaut@lyarwood.usersys.redhat.com> References: <20180821103628.dk3ok76fdruwsaut@lyarwood.usersys.redhat.com> Message-ID: <3a1ffb06-6b8d-883f-f1dd-21921c3066e5@gmail.com> On 8/21/2018 5:36 AM, Lee Yarwood wrote: > I'm definitely in favor of hiding this from users eventually but > wouldn't this require some form of deprecation cycle? > > Warnings within the API documentation would also be useful and even > something we could backport to stable to highlight just how fragile this > API is ahead of any policy change. The swap volume API in nova defaults to admin-only policy rules by default, so for any users that are using it directly, they are (1) admins knowingly shooting themselves, or their users, in the foot or (2) operators have opened up the policy to non-admins (or some other role of user) to hit the API directly. I would ask why that is. -- Thanks, Matt From mriedemos at gmail.com Fri Aug 24 23:37:20 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 24 Aug 2018 18:37:20 -0500 Subject: [Openstack-operators] [nova] Deprecating Core/Disk/RamFilter Message-ID: <40ffccc5-4410-18cc-2862-77d528889ec3@gmail.com> This is just an FYI that I have proposed that we deprecate the core/ram/disk filters [1]. We should have probably done this back in Pike when we removed them from the default enabled_filters list and also deprecated the CachingScheduler, which is the only in-tree scheduler driver that benefits from enabling these filters. With the heal_allocations CLI, added in Rocky, we can probably drop the CachingScheduler in Stein so the pieces are falling into place. As we saw in a recent bug [2], having these enabled in Stein now causes blatantly incorrect filtering on ironic nodes. Comments are welcome here, the review, or in IRC. [1] https://review.openstack.org/#/c/596502/ [2] https://bugs.launchpad.net/tripleo/+bug/1787910 -- Thanks, Matt From xliviux at gmail.com Sat Aug 25 10:26:06 2018 From: xliviux at gmail.com (Liviu Popescu) Date: Sat, 25 Aug 2018 13:26:06 +0300 Subject: [Openstack-operators] c Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: From kunzmann at docomolab-euro.com Mon Aug 27 12:37:28 2018 From: kunzmann at docomolab-euro.com (Kunzmann, Gerald) Date: Mon, 27 Aug 2018 12:37:28 +0000 Subject: [Openstack-operators] Ops Community Documentation - first anchor point In-Reply-To: <20180821192743.GA31796@sm-workstation> References: <20180626164210.GA1445@sm-workstation> <20180821191448.odxe7sgrfpwbwd6x@csail.mit.edu> <20180821192743.GA31796@sm-workstation> Message-ID: Hi everyone, I think this is a great step to pull together all Ops related discussions into one common repo and SIG. I noticed that there was a proposal in the Vancouver discussion [1] to create the repo without CLA. I am curious which way was the repo setup? Not requiring the CLA and also removing some steps from the "full openstack code workflow" might indeed remove a potential barrier for contributions. [1] https://etherpad.openstack.org/p/YVR-Ops-Community-Docs Best regards, Gerald -----Original Message----- From: Sean McGinnis Sent: Dienstag, 21. August 2018 21:28 To: Jonathan Proulx Cc: OpenStack Operators Subject: Re: [Openstack-operators] Ops Community Documentation - first anchor point On Tue, Aug 21, 2018 at 03:14:48PM -0400, Jonathan Proulx wrote: > Hi All... > > I'm still a little confused by the state of this :) > > I know I made some promises then got distracted the looks like Sean > stepped up and got things a bit further, but where is it now? Do we > have an active repo? > > It would be nice to have the repo in place before OPs meetup. > > -Jon > Hey Jon, Pretty much everything is in place now. There is one outstanding patch to officially add things under the SIG governance here: https://review.openstack.org/#/c/591248/ That's a formality that needs to be done, but we do have the content being published to the site: https://docs.openstack.org/operations-guide/ And we have the repo set up and ready for updates to be proposed: http://git.openstack.org/cgit/openstack/operations-guide Our next step is to start encouraging contributions from the community. This would be a great things to discuss at the PTG, and I now realize that I didn't add this obvious thing to our ops PTG planning etherpad. I will add it there and hopefully we can go over some of the content and get some more interest in contributing to it. Thanks! Sean _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5508 bytes Desc: not available URL: From doug at doughellmann.com Mon Aug 27 15:06:42 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 27 Aug 2018 11:06:42 -0400 Subject: [Openstack-operators] Ops Community Documentation - first anchor point In-Reply-To: References: <20180626164210.GA1445@sm-workstation> <20180821191448.odxe7sgrfpwbwd6x@csail.mit.edu> <20180821192743.GA31796@sm-workstation> Message-ID: <1535382377-sup-1787@lrrr.local> Excerpts from Kunzmann, Gerald's message of 2018-08-27 12:37:28 +0000: > Hi everyone, > > I think this is a great step to pull together all Ops related discussions into > one common repo and SIG. > > I noticed that there was a proposal in the Vancouver discussion [1] to create > the repo without CLA. I am curious which way was the repo setup? Not requiring > the CLA and also removing some steps from the "full openstack code workflow" > might indeed remove a potential barrier for contributions. > > [1] https://etherpad.openstack.org/p/YVR-Ops-Community-Docs > > Best regards, > Gerald I do not see the flag that says the CLA is required in the configuration for the repository: http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack/operations-guide.config Doug From eumel at arcor.de Mon Aug 27 15:54:42 2018 From: eumel at arcor.de (Frank Kloeker) Date: Mon, 27 Aug 2018 17:54:42 +0200 Subject: [Openstack-operators] [all] Berlin Hackathon: Hacking the Edge Message-ID: <53844c939fe81fb942b5eed2d8739985@arcor.de> Hello, For the weekend before the Berlin Summit we plan an additional OpenStack community event: "Hacking the Edge" Hackathon. The idea is to build together a community cloud based on Edge technology. We're looking for volunteers: Developers Try out the newest software version from your projects like Nova, Cinder, and Neutron. What's the requirements for Edge and what makes sense? Install different components on different devices and connects all of them. Operators Operation of Edge Cloud is also a challenge. Changes from 'must be online' to 'maybe online'. Which measuring methods are available for monitoring? Where are my backups? Do we need an operation center also in the Edge? Architects General Edge Cloud Architecture. How is the plan for connecting new devices with different connectivity. Scalable application and life cycle management. Bring your own devices like laptop, Raspberry PI, WIFI routers, which you would connect to the Edge Cloud. We host the event location and provide infrastructure, maybe together with a couple of 5G devices, because the venue has one of the first 5G antennas in Germany. Everybody is welcome to join and have fun. We are only limited on the event space. More details are also in the event description. Don't be afraid to ask me directly, via e-mail or IRC. kind regards Frank (eumel8) Registration: https://openstack-hackathon-berlin.eventbrite.com/ Collected ideas/workpad: https://etherpad.openstack.org/p/hacking_the_edge_hackathon_berlin From juliaashleykreger at gmail.com Mon Aug 27 16:53:49 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 27 Aug 2018 09:53:49 -0700 Subject: [Openstack-operators] [ironic][tripleo][edge] Discussing ironic federation and distributed deployments Message-ID: Greetings everyone! We in Ironic land would like to go into the PTG with some additional thoughts, requirements, and ideas as it relates to distributed and geographically distributed deployments. As you may or may not know, we did take a first step towards supporting some of the architectures needed with conductor_groups this past cycle, but we have two very distinct needs that have been expressed with-in the community. 1) Need to federate and share baremetal resources between ironic deployments. A specification[1] was proposed to try and begin to capture what this would look like ironic wise. At a high level, this would look like an ironic node that actually consumes and remotely manages a node via another ironic deployment. Largely this would be a stand-alone user/admin user deployment cases, where hardware inventory insight is needed. 2) Need to securely manage remote sites with different security postures, while not exposing control-plane components as an attack surface. Some early discussion into this would involve changing Conductor/IPA communication flow[2], or at least supporting a different model, and some sort of light weight intermediate middle-man service that helps facilitate the local site management. With that in mind, we would like to schedule a call for sometime next week where we can kind of talk through and discuss these thoughts and needs in real time in advance of the PTG so we can be even better prepared. We are attempting to identify a time with a doodle[3]. Please select a time and date, so we can schedule something for next week. Thanks, -Julia [1]: https://review.openstack.org/#/c/560152/ [2]: https://review.openstack.org/212206 [3]: https://doodle.com/poll/y355wt97heffvp3m From fungi at yuggoth.org Mon Aug 27 18:24:58 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 27 Aug 2018 18:24:58 +0000 Subject: [Openstack-operators] Ops Community Documentation - first anchor point In-Reply-To: <1535382377-sup-1787@lrrr.local> References: <20180626164210.GA1445@sm-workstation> <20180821191448.odxe7sgrfpwbwd6x@csail.mit.edu> <20180821192743.GA31796@sm-workstation> <1535382377-sup-1787@lrrr.local> Message-ID: <20180827182458.lso52srlwqfnu5pz@yuggoth.org> On 2018-08-27 11:06:42 -0400 (-0400), Doug Hellmann wrote: [...] > I do not see the flag that says the CLA is required in the configuration > for the repository: > > http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack/operations-guide.config You can also tell by looking at the general project properties: https://review.openstack.org/#/admin/projects/openstack/operations-guide "Require a valid contributor agreement to upload" there is "INHERIT (false)" which indicates it's not set. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kendall at openstack.org Mon Aug 27 18:27:06 2018 From: kendall at openstack.org (Kendall Waters) Date: Mon, 27 Aug 2018 13:27:06 -0500 Subject: [Openstack-operators] Early Bird Pricing Ends Tomorrow - OpenStack Summit Berlin Message-ID: Hi everyone, Friendly reminder that the early bird ticket price deadline for the OpenStack Summit Berlin is tomorrow, August 28 at 11:59pm PT (August 29, 6:59 UTC). In Berlin, there will be sessions and workshops around open infrastructure use cases, including CI/CD, container infrastructure, edge computing, HPC / AI / GPUs, private & hybrid cloud, public cloud and NFV. In case you haven’t seen it, the agenda is now live and includes sessions and workshops from Ocado Technology, Metronom, Oerlikon, and more! In addition, make sure to check out the Edge Hackathon hosted by Open Telekom Cloud the weekend prior to the Summit. Register NOW before the price increases to $999 USD! Interested in sponsoring the Summit? Find out more here or email summit at openstack.org. Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From durrani.anwar at gmail.com Tue Aug 28 11:22:26 2018 From: durrani.anwar at gmail.com (Anwar Durrani) Date: Tue, 28 Aug 2018 16:52:26 +0530 Subject: [Openstack-operators] Openstack Kilo - Danger: There was an error submitting the form. Please try again. Message-ID: Hi Team, I am using KILO, i am having issue while launching any instance, its ending up with following error *Danger: *There was an error submitting the form. Please try again. i have managed to capture the log, where i have found as tail -f /var/log/nova/nova-conductor.log 2018-08-28 16:51:07.298 6187 DEBUG nova.openstack.common.loopingcall [req-4f7a0004-ab9d-431e-8629-b0c4c15617e2 - - - - -] Dynamic looping call > sleeping for 60.00 seconds _inner /usr/lib/python2.7/site-packages/nova/openstack/common/loopingcall.py:132 Any clue why this is happening ? -- Thanks & regards, Anwar M. Durrani +91-9923205011 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at opensourcesolutions.co.uk Tue Aug 28 15:51:39 2018 From: dave at opensourcesolutions.co.uk (Dave Williams) Date: Tue, 28 Aug 2018 16:51:39 +0100 Subject: [Openstack-operators] [kolla][ceph] Message-ID: <20180828155139.GA3946@opensourcesolutions.co.uk> What is the best practice for adding more Ceph OSD's to kolla in a production environment. Does deploy do anything to the existing data or does it simply add the OSD's (and potentially increase the placement groups if re-configured)? reconfigure doesnt touch newly prepared disks from what I see from the code which is where I was expecting might have been done. I am running kolla-ansible queens. Thanks Dave From sorrison at gmail.com Wed Aug 29 00:57:05 2018 From: sorrison at gmail.com (Sam Morrison) Date: Wed, 29 Aug 2018 10:57:05 +1000 Subject: [Openstack-operators] Openstack Kilo - Danger: There was an error submitting the form. Please try again. In-Reply-To: References: Message-ID: <619E9375-36CC-4426-9CCB-41CACDC93663@gmail.com> Hi Anwar, The log message you posted below is not an error (it is at level DEBUG and can be ignored). I would look in your horizon logs and also. The API logs of nova/neutron/glance/cinder to see what’s going on. Good luck! Sam > On 28 Aug 2018, at 9:22 pm, Anwar Durrani wrote: > > Hi Team, > > I am using KILO, i am having issue while launching any instance, its ending up with following error > > Danger: There was an error submitting the form. Please try again. > > i have managed to capture the log, where i have found as > > tail -f /var/log/nova/nova-conductor.log > 2018-08-28 16:51:07.298 6187 DEBUG nova.openstack.common.loopingcall [req-4f7a0004-ab9d-431e-8629-b0c4c15617e2 - - - - -] Dynamic looping call > sleeping for 60.00 seconds _inner /usr/lib/python2.7/site-packages/nova/openstack/common/loopingcall.py:132 > > Any clue why this is happening ? > > -- > > Thanks & regards, > Anwar M. Durrani > +91-9923205011 > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Wed Aug 29 14:22:16 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 29 Aug 2018 10:22:16 -0400 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: Message-ID: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> Sorry for delayed response. Was on PTO when this came out. Comments inline... On 08/22/2018 09:23 PM, Matt Riedemann wrote: > Hi everyone, > > I have started an etherpad for cells topics at the Stein PTG [1]. The > main issue in there right now is dealing with cross-cell cold migration > in nova. > > At a high level, I am going off these requirements: > > * Cells can shard across flavors (and hardware type) so operators would > like to move users off the old flavors/hardware (old cell) to new > flavors in a new cell. So cell migrations are kind of the new release upgrade dance. Got it. > * There is network isolation between compute hosts in different cells, > so no ssh'ing the disk around like we do today. But the image service is > global to all cells. > > Based on this, for the initial support for cross-cell cold migration, I > am proposing that we leverage something like shelve offload/unshelve > masquerading as resize. We shelve offload from the source cell and > unshelve in the target cell. This should work for both volume-backed and > non-volume-backed servers (we use snapshots for shelved offloaded > non-volume-backed servers). shelve was and continues to be a hack in order for users to keep an IPv4 address while not consuming compute resources for some amount of time. [1] If cross-cell cold migration is similarly just about the user being able to keep their instance's IPv4 address while allowing an admin to move an instance's storage to another physical location, then my firm belief is that this kind of activity needs to be coordinated *externally to Nova*. Each deployment is going to be different, and in all cases of cross-cell migration, the admins doing these move operations are going to need to understand various network, storage and failure domains that are particular to that deployment (and not something we have the ability to discover in any automated fashion). Since we're not talking about live migration (thank all that is holy), I believe the safest and most effective way to perform such a cross-cell "migration" would be the following basic steps: 0. ensure that each compute node is associated with at least one nova host aggregate that is *only* in a single cell 1. shut down the instance (optionally snapshotting required local disk changes if the user is unfortunately using their root disk for application data) 2. "save" the instance's IP address by manually creating a port in Neutron and assigning the IP address manually to that port. this of course will be deployment-dependent since you will need to hope the saved IP address for the migrating instance is in a subnet range that is available in the target cell 3. migrate the volume manually. this will be entirely deployment and backend-dependent as smcginnis alluded to in a response to this thread 4. have the admin boot the instance in a host aggregate that is known to be in the target cell, passing --network port_id=$SAVED_PORT_WITH_IP and --volume $MIGRATED_VOLUME_UUID arguments as needed. the admin would need to do this because users don't know about host aggregates and, frankly, the user shouldn't know about host aggregates, cells, or any of this. Best, -jay [1] ok, shelve also lets a user keep their instance ID. I don't care much about that. From anne at openstack.org Wed Aug 29 14:44:05 2018 From: anne at openstack.org (Anne Bertucio) Date: Wed, 29 Aug 2018 07:44:05 -0700 Subject: [Openstack-operators] Aug 30, 1500 UTC: Community Meeting: Come learn what's new in Rocky! Message-ID: <732A908B-928A-47AA-9D50-D01E0CCFF8C8@openstack.org> A reminder that there’ll be a community meeting tomorrow August 30, at 1500UTC/8am Pacific, where you can learn about some of the new things in OpenStack Rocky, and get updates on the pilot projects Airship, Kata Containers, StarlingX, and Zuul. We’ll hear from PTLs (Julia Kreger, Ironic; Alex Schultz, TripleO) on what’s new in their projects, as well as pilot project technical contributors Eric Ernst (Kata), Bruce Jones (StarlingX), and OSF staff + contributors Chris Hoge (Airship) and Jeremy Stanley (Zuul). You can join using the webinar info below, but this session will be recorded if you can’t make it live! Learn what's new in the Rocky release, and get updates on Airship, Kata Containers, StarlingX, and Zuul. ————————— When: Aug 30, 2018 8:00 AM Pacific Time (US and Canada) Topic: OpenStack Community Meeting Please click the link below to join the webinar: https://zoom.us/j/551803657 Or iPhone one-tap : US: +16699006833,,551803657# or +16468769923,,551803657# Or Telephone: Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 Webinar ID: 551 803 657 International numbers available: https://zoom.us/u/bh2jVweqf Anne Bertucio OpenStack Foundation anne at openstack.org | irc: annabelleB -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Wed Aug 29 14:47:27 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 29 Aug 2018 07:47:27 -0700 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> (Jay Pipes's message of "Wed, 29 Aug 2018 10:22:16 -0400") References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> Message-ID: >> * Cells can shard across flavors (and hardware type) so operators >> would like to move users off the old flavors/hardware (old cell) to >> new flavors in a new cell. > > So cell migrations are kind of the new release upgrade dance. Got it. No, cell migrations are about moving instances between cells for whatever reason. If you have small cells for organization, then it's about not building arbitrary barriers between aisles. If you use it for hardware refresh, then it might be related to long term lifecycle. I'm not sure what any of this has to do with release upgrades or dancing. > shelve was and continues to be a hack in order for users to keep an > IPv4 address while not consuming compute resources for some amount of > time. [1] As we discussed in YVR most recently, it also may become an important thing for operators and users where expensive accelerators are committed to instances with part-time usage patterns. It has also come up more than once in the realm of "but I need to detach my root volume" scenarios. I love to hate on shelve as well, but recently a few more legit (than merely keeping an IPv4 address) use-cases have come out for it, and I don't think Matt is wrong that cross-cell migration *might* be easier as a shelve operation under the covers. > If cross-cell cold migration is similarly just about the user being > able to keep their instance's IPv4 address while allowing an admin to > move an instance's storage to another physical location, then my firm > belief is that this kind of activity needs to be coordinated > *externally to Nova*. I'm not sure how you could make that jump, but no, I don't think that's the case. In any sort of large cloud that uses cells to solve problems of scale, I think it's quite likely to expect that your IPv4 address physically can't be honored in the target cell, and/or requires some less-than-ideal temporary tunneling for bridging the gap. > Since we're not talking about live migration (thank all that is holy), Oh it's coming. Don't think it's not. > I believe the safest and most effective way to perform such a > cross-cell "migration" would be the following basic steps: > > 0. ensure that each compute node is associated with at least one nova > host aggregate that is *only* in a single cell > 1. shut down the instance (optionally snapshotting required local disk > changes if the user is unfortunately using their root disk for > application data) > 2. "save" the instance's IP address by manually creating a port in > Neutron and assigning the IP address manually to that port. this of > course will be deployment-dependent since you will need to hope the > saved IP address for the migrating instance is in a subnet range that > is available in the target cell > 3. migrate the volume manually. this will be entirely deployment and > backend-dependent as smcginnis alluded to in a response to this thread > 4. have the admin boot the instance in a host aggregate that is known > to be in the target cell, passing --network > port_id=$SAVED_PORT_WITH_IP and --volume $MIGRATED_VOLUME_UUID > arguments as needed. the admin would need to do this because users > don't know about host aggregates and, frankly, the user shouldn't know > about host aggregates, cells, or any of this. What you just described here is largely shelve, ignoring the volume migration part and the fact that such a manual process means the user loses the instance's uuid and various other elements about it (such as create time, action/event history, etc). Oh, and ignoring the fact that the user no longer owns their instance (the admin does) :) Especially given that migrating across a cell may mean "one aisle over, same storage provider and network" to a lot of people, the above being a completely manual process seems a little crazy to me. --Dan From jaypipes at gmail.com Wed Aug 29 16:02:25 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 29 Aug 2018 12:02:25 -0400 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> Message-ID: <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> I respect your opinion but respectfully disagree that this is something we need to spend our time on. Comments inline. On 08/29/2018 10:47 AM, Dan Smith wrote: >>> * Cells can shard across flavors (and hardware type) so operators >>> would like to move users off the old flavors/hardware (old cell) to >>> new flavors in a new cell. >> >> So cell migrations are kind of the new release upgrade dance. Got it. > > No, cell migrations are about moving instances between cells for > whatever reason. If you have small cells for organization, then it's > about not building arbitrary barriers between aisles. If you use it for > hardware refresh, then it might be related to long term lifecycle. I'm > not sure what any of this has to do with release upgrades or dancing. A release upgrade dance involves coordination of multiple moving parts. It's about as similar to this scenario as I can imagine. And there's a reason release upgrades are not done entirely within Nova; clearly an external upgrade tool or script is needed to orchestrate the many steps and components involved in the upgrade process. The similar dance for cross-cell migration is the coordination that needs to happen between Nova, Neutron and Cinder. It's called orchestration for a reason and is not what Nova is good at (as we've repeatedly seen) The thing that makes *this* particular scenario problematic is that cells aren't user-visible things. User-visible things could much more easily be orchestrated via external actors, as I still firmly believe this kind of thing should be done. >> shelve was and continues to be a hack in order for users to keep an >> IPv4 address while not consuming compute resources for some amount of >> time. [1] > > As we discussed in YVR most recently, it also may become an important > thing for operators and users where expensive accelerators are committed > to instances with part-time usage patterns. I don't think that's a valid use case in respect to this scenario of cross-cell migration. If the target cell compute doesn't have the same expensive accelerators on them, nobody would want or permit a move to that target cell anyway. Also, I'd love to hear from anyone in the real world who has successfully migrated (live or otherwise) an instance that "owns" expensive hardware (accelerators, SR-IOV PFs, GPUs or otherwise). The patterns that I have seen are one of the following: * Applications don't move. They are pets that stay on one or more VMs or baremetal nodes and they grow roots. * Applications are designed to *utilize* the expensive hardware. They don't "own" the hardware itself. In this latter case, the application is properly designed and stores its persistent data in a volume and doesn't keep state outside of the application volume. In these cases, the process of "migrating" an instance simply goes away. You just detach the application persistent volume, shut down the instance, start up a new one elsewhere (allowing the scheduler to select one that meets the resource constraints in the flavor/image), attach the volume again and off you go. No messing around with shelving, offloading, migrating, or any of that nonsense in Nova. We should not pretend that what we're discussing here is anything other than hacking orchestration workarounds into Nova to handle poorly-designed applications that have grown roots on some hardware and think they "own" hardware resources in a Nova deployment. > It has also come up more than once in the realm of "but I need to > detach my root volume" scenarios. I love to hate on shelve as well, > but recently a few more legit (than merely keeping an IPv4 address) > use-cases have come out for it, and I don't think Matt is wrong that > cross-cell migration *might* be easier as a shelve operation under > the covers. Matt may indeed be right, but I'm certainly allowed to express my opinion that I think shelve is a monstrosity that should be avoided at all costs and building additional orchestration functionality into Nova on top of an already-shaky foundation (shelve) isn't something I think is a long-term maintainable solution. >> If cross-cell cold migration is similarly just about the user being >> able to keep their instance's IPv4 address while allowing an admin to >> move an instance's storage to another physical location, then my firm >> belief is that this kind of activity needs to be coordinated >> *externally to Nova*. > > I'm not sure how you could make that jump, but no, I don't think that's > the case. In any sort of large cloud that uses cells to solve problems > of scale, I think it's quite likely to expect that your IPv4 address > physically can't be honored in the target cell, and/or requires some > less-than-ideal temporary tunneling for bridging the gap. If that's the case, why are we discussing shelve at all? Just stop the instance, copy/migrate the volume data (if needed, again it completely depends on the deployment, network topology and block storage backend), to a new location (new cell, new AZ, new host agg, does it really matter?) and start a new instance, attaching the volume after the instance starts or supplying the volume in the boot/create command. >> Since we're not talking about live migration (thank all that is holy), > > Oh it's coming. Don't think it's not. > >> I believe the safest and most effective way to perform such a >> cross-cell "migration" would be the following basic steps: >> >> 0. ensure that each compute node is associated with at least one nova >> host aggregate that is *only* in a single cell >> 1. shut down the instance (optionally snapshotting required local disk >> changes if the user is unfortunately using their root disk for >> application data) >> 2. "save" the instance's IP address by manually creating a port in >> Neutron and assigning the IP address manually to that port. this of >> course will be deployment-dependent since you will need to hope the >> saved IP address for the migrating instance is in a subnet range that >> is available in the target cell >> 3. migrate the volume manually. this will be entirely deployment and >> backend-dependent as smcginnis alluded to in a response to this thread >> 4. have the admin boot the instance in a host aggregate that is known >> to be in the target cell, passing --network >> port_id=$SAVED_PORT_WITH_IP and --volume $MIGRATED_VOLUME_UUID >> arguments as needed. the admin would need to do this because users >> don't know about host aggregates and, frankly, the user shouldn't know >> about host aggregates, cells, or any of this. > > What you just described here is largely shelve, ignoring the volume > migration part and the fact that such a manual process means the user > loses the instance's uuid and various other elements about it (such as > create time, action/event history, etc). Oh, and ignoring the fact that > the user no longer owns their instance (the admin does) :) The admin only "owns" the instance because we have no ability to transfer ownership of the instance and a cell isn't a user-visible thing. An external script that accomplishes this kind of orchestrated move from one cell to another could easily update the ownership of said instance in the DB. My point is that Nova isn't an orchestrator, and building functionality into Nova to do this type of cross-cell migration IMHO just will lead to even more unmaintainable code paths that few, if any, deployers will ever end up using because they will end up doing it externally anyway due to the need to integrate with backend inventory management systems and other things. Best, -jay > Especially given that migrating across a cell may mean "one aisle over, > same storage provider and network" to a lot of people, the above being a > completely manual process seems a little crazy to me. > > --Dan > From dms at danplanet.com Wed Aug 29 16:39:51 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 29 Aug 2018 09:39:51 -0700 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> (Jay Pipes's message of "Wed, 29 Aug 2018 12:02:25 -0400") References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> Message-ID: > A release upgrade dance involves coordination of multiple moving > parts. It's about as similar to this scenario as I can imagine. And > there's a reason release upgrades are not done entirely within Nova; > clearly an external upgrade tool or script is needed to orchestrate > the many steps and components involved in the upgrade process. I'm lost here, and assume we must be confusing terminology or something. > The similar dance for cross-cell migration is the coordination that > needs to happen between Nova, Neutron and Cinder. It's called > orchestration for a reason and is not what Nova is good at (as we've > repeatedly seen) Most other operations in Nova meet this criteria. Boot requires coordination between Nova, Cinder, and Neutron. As do migrate, start, stop, evacuate. We might decide that (for now) the volume migration thing is beyond the line we're willing to cross, and that's cool, but I think it's an arbitrary limitation we shouldn't assume is impossible. Moving instances around *is* what nova is (supposed to be) good at. > The thing that makes *this* particular scenario problematic is that > cells aren't user-visible things. User-visible things could much more > easily be orchestrated via external actors, as I still firmly believe > this kind of thing should be done. I'm having a hard time reconciling these: 1. Cells aren't user-visible, and shouldn't be (your words and mine). 2. Cross-cell migration should be done by an external service (your words). 3. External services work best when things are user-visible (your words). You say the user-invisible-ness makes orchestrating this externally difficult and I agree, but...is your argument here just that it shouldn't be done at all? >> As we discussed in YVR most recently, it also may become an important >> thing for operators and users where expensive accelerators are committed >> to instances with part-time usage patterns. > > I don't think that's a valid use case in respect to this scenario of > cross-cell migration. You're right, it has nothing to do with cross-cell migration at all. I was pointing to *other* legitimate use cases for shelve. > Also, I'd love to hear from anyone in the real world who has > successfully migrated (live or otherwise) an instance that "owns" > expensive hardware (accelerators, SR-IOV PFs, GPUs or otherwise). Again, the accelerator case has nothing to do with migrating across cells, but merely demonstrates another example of where shelve may be the thing operators actually desire. Maybe I shouldn't have confused the discussion by bringing it up. > The patterns that I have seen are one of the following: > > * Applications don't move. They are pets that stay on one or more VMs > or baremetal nodes and they grow roots. > > * Applications are designed to *utilize* the expensive hardware. They > don't "own" the hardware itself. > > In this latter case, the application is properly designed and stores > its persistent data in a volume and doesn't keep state outside of the > application volume. In these cases, the process of "migrating" an > instance simply goes away. You just detach the application persistent > volume, shut down the instance, start up a new one elsewhere (allowing > the scheduler to select one that meets the resource constraints in the > flavor/image), attach the volume again and off you go. No messing > around with shelving, offloading, migrating, or any of that nonsense > in Nova. Jay, you know I sympathize with the fully-ephemeral application case, right? Can we agree that pets are a thing and that migrations are not going to be leaving Nova's scope any time soon? If so, I think we can get back to the real discussion, and if not, I think we probably, er, can't :) > We should not pretend that what we're discussing here is anything > other than hacking orchestration workarounds into Nova to handle > poorly-designed applications that have grown roots on some hardware > and think they "own" hardware resources in a Nova deployment. I have no idea how we got to "own hardware resources" here. The point of this discussion is to make our instance-moving operations work across cells. We designed cellsv2 to be invisible and baked into the core of Nova. We intended for it to not fall into the trap laid by cellsv1, where the presence of multiple cells meant that a bunch of regular operations don't work like they would otherwise. If we're going to discuss removing move operations from Nova, we should do that in another thread. This one is about making existing operations work :) > If that's the case, why are we discussing shelve at all? Just stop the > instance, copy/migrate the volume data (if needed, again it completely > depends on the deployment, network topology and block storage > backend), to a new location (new cell, new AZ, new host agg, does it > really matter?) and start a new instance, attaching the volume after > the instance starts or supplying the volume in the boot/create > command. Because shelve potentially makes it less dependent on the answers to those questions and Matt suggested it as a first step to being able to move things around at all. It means that "copy the data" becomes "talk to glance" which compute nodes can already do. Requiring compute nodes across cells to talk to each other (which could be in different buildings, sites, or security domains) is a whole extra layer of complexity. I do think we'll go there (via resize/migrate at some point, but shelve going through glance for data and through a homeless phase in Nova does simplify a whole set of things. > The admin only "owns" the instance because we have no ability to > transfer ownership of the instance and a cell isn't a user-visible > thing. An external script that accomplishes this kind of orchestrated > move from one cell to another could easily update the ownership of > said instance in the DB. So step 5 was "do surgery on the database"? :) > My point is that Nova isn't an orchestrator, and building > functionality into Nova to do this type of cross-cell migration IMHO > just will lead to even more unmaintainable code paths that few, if > any, deployers will ever end up using because they will end up doing > it externally anyway due to the need to integrate with backend > inventory management systems and other things. On the contrary, per the original goal of cellsv2, I want to make the *existing* code paths in Nova work properly when multiple cells are present. Just like we had to make boot and list work properly with multiple cells, I think we need to do the same with migrate, shelve, etc. --Dan From jimmy at openstack.org Wed Aug 29 16:51:58 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 29 Aug 2018 11:51:58 -0500 Subject: [Openstack-operators] OpenStack Summit Forum in Berlin: Topic Selection Process Message-ID: <5B86CF2E.5010708@openstack.org> Hi all, Welcome to the topic selection process for our Forum in Berlin. This is not a classic conference track with speakers and presentations. OSF community members (participants in development teams, operators, working groups, SIGs, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is your opportunity to help shape the development of future project releases. For OpenStack Berlin marks the beginning of Stein’s release cycle, where ideas and requirements will be gathered. We should come armed with feedback from August's Rocky release if at all possible. We aim to ensure the broadest coverage of topics that will allow for multiple parts of the community getting together to discuss key areas within our community/projects. For OSF Projects (StarlingX, Zuul, Airship, Kata Containers) Welcome! Berlin is your first official opportunity to participate in a Forum. The idea is to gather ideas and requirements for your project’s upcoming release. Look to https://wiki.openstack.org/wiki/Forum for an idea of how to structure fishbowls and discussions for your project. The idea is to ensure the broadest coverage of topics, while allowing for the project community to discuss critical areas of concern. To make sure we are presenting the best topics for discussion, we have asked representatives of each of your projects to help us out in the Forum selection process. There are two stages to the brainstorming: 1. Starting today, set up an etherpad with your team and start discussing ideas you'd like to talk about at the Forum and work out which ones to submit. 2. Then, in a couple of weeks, we will open up a more formal web-based tool for you to submit abstracts for the most popular sessions that came out of your brainstorming. Make an etherpad and add it to the list at: https://wiki.openstack.org/wiki/Forum/Berlin2018 This is your opportunity to think outside the box and talk with other projects, groups, and individuals that you might not see during Summit sessions. Look for interested parties to collaborate with and share your ideas. Examples of typical sessions that make for a great Forum: Strategic, whole-of-community discussions, to think about the big picture, including beyond just one release cycle and new technologies e.g. OpenStack One Platform for containers/VMs/Bare Metal (Strategic session) the entire community congregates to share opinions on how to make OpenStack achieve its integration engine goal Cross-project sessions, in a similar vein to what has happened at past forums, but with increased emphasis on issues that are of relevant to all areas of the community e.g. Rolling Upgrades at Scale (Cross-Project session) – the Large Deployments Team collaborates with Nova, Cinder and Keystone to tackle issues that come up with rolling upgrades when there’s a large number of machines.
 Project-specific sessions, where community members most interested in a specific project can discuss their experience with the project over the last release and provide feedback, collaborate on priorities, and present or generate 'blue sky' ideas for the next release e.g. Neutron Pain Points (Project-Specific session) – Co-organized by neutron developers and users. Neutron developers bring some specific questions about implementation and usage. Neutron users bring feedback from the latest release. All community members interested in Neutron discuss ideas about the future. Think about what kind of session ideas might end up as: Project-specific, cross-project or strategic/whole-of-community discussions. There'll be more slots for the latter two, so do try and think outside the box! This part of the process is where we gather broad community consensus - in theory the second part is just about fitting in as many of the good ideas into the schedule as we can. Further details about the forum can be found at: https://wiki.openstack.org/wiki/Forum Thanks all! Jimmy McArthur, on behalf of the OpenStack Foundation, User Committee & Technical Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Wed Aug 29 17:23:48 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 29 Aug 2018 13:23:48 -0400 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> Message-ID: On 08/29/2018 12:39 PM, Dan Smith wrote: > If we're going to discuss removing move operations from Nova, we should > do that in another thread. This one is about making existing operations > work :) OK, understood. :) >> The admin only "owns" the instance because we have no ability to >> transfer ownership of the instance and a cell isn't a user-visible >> thing. An external script that accomplishes this kind of orchestrated >> move from one cell to another could easily update the ownership of >> said instance in the DB. > > So step 5 was "do surgery on the database"? :) Yep. You'd be surprised how often that ends up being the case. I'm currently sitting here looking at various integration tooling for doing just this kind of thing for our deployments of >150K baremetal compute nodes. The number of specific-to-an-environment variables that need to be considered and worked into the overall migration plan is breathtaking. And trying to do all of that inside of Nova just isn't feasible for the scale at which we run. At least, that's my point of view. I won't drag this conversation out any further on tangents. Best, -jay From jim at jimrollenhagen.com Wed Aug 29 18:08:45 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 29 Aug 2018 14:08:45 -0400 Subject: [Openstack-operators] [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: <5B86CF2E.5010708@openstack.org> References: <5B86CF2E.5010708@openstack.org> Message-ID: On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur wrote: > Examples of typical sessions that make for a great Forum: > > Strategic, whole-of-community discussions, to think about the big > picture, including beyond just one release cycle and new technologies > > e.g. OpenStack One Platform for containers/VMs/Bare Metal (Strategic > session) the entire community congregates to share opinions on how to make > OpenStack achieve its integration engine goal > Just to clarify some speculation going on in IRC: this is an example, right? Not a new thing being announced? // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Wed Aug 29 18:26:41 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 29 Aug 2018 12:26:41 -0600 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> Message-ID: <5B86E561.2060909@windriver.com> On 08/29/2018 10:02 AM, Jay Pipes wrote: > Also, I'd love to hear from anyone in the real world who has successfully > migrated (live or otherwise) an instance that "owns" expensive hardware > (accelerators, SR-IOV PFs, GPUs or otherwise). I thought cold migration of instances with such devices was supported upstream? Chris From jaypipes at gmail.com Wed Aug 29 18:27:35 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 29 Aug 2018 14:27:35 -0400 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <5B86E561.2060909@windriver.com> References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> <5B86E561.2060909@windriver.com> Message-ID: <1ff04107-11d5-3983-4de7-40323ac89495@gmail.com> On 08/29/2018 02:26 PM, Chris Friesen wrote: > On 08/29/2018 10:02 AM, Jay Pipes wrote: > >> Also, I'd love to hear from anyone in the real world who has successfully >> migrated (live or otherwise) an instance that "owns" expensive hardware >> (accelerators, SR-IOV PFs, GPUs or otherwise). > > I thought cold migration of instances with such devices was supported > upstream? That's not what I asked. :) -jay From jimmy at openstack.org Wed Aug 29 18:29:52 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 29 Aug 2018 13:29:52 -0500 Subject: [Openstack-operators] [openstack-dev] OpenStack Summit Forum in Berlin: Topic Selection Process In-Reply-To: References: <5B86CF2E.5010708@openstack.org> Message-ID: <5B86E620.7070707@openstack.org> 100% correct. Just a random example text that we've been reusing since early 2017. Next time, we will consider lorem ipsum ;) Jim Rollenhagen wrote: > On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur > wrote: > > > Examples of typical sessions that make for a great Forum: > > Strategic, whole-of-community discussions, to think about the big > picture, including beyond just one release cycle and new technologies > > e.g. OpenStack One Platform for containers/VMs/Bare Metal > (Strategic session) the entire community congregates to share > opinions on how to make OpenStack achieve its integration engine goal > > > Just to clarify some speculation going on in IRC: this is an example, > right? Not a new thing being announced? > > // jim > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Wed Aug 29 19:16:10 2018 From: ashlee at openstack.org (Ashlee Ferguson) Date: Wed, 29 Aug 2018 14:16:10 -0500 Subject: [Openstack-operators] Travel Support Deadline Tomorrow Message-ID: <48D19CD6-6A56-46EA-A33F-E955E2471735@openstack.org> Hi everyone, Reminder that the deadline to apply for Travel Support to the Berlin Summit closes tomorrow, Thursday, August 30 at 11:59pm PT. APPLY HERE The Travel Support Program's aim is to facilitate participation of active community members to the Summit by covering the costs for their travel and accommodation. If you are a key contributor to a project managed by the OpenStack Foundation, and your company does not cover the costs of your travel and accommodation to Berlin, you can apply for the Travel Support Program. Please email summit at openstack.org with any questions. Thanks, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Wed Aug 29 19:49:24 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 29 Aug 2018 19:49:24 +0000 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> Message-ID: I've not followed all the arguments here regarding internals but CERN's background usage of Cells v2 (and thoughts on impact of cross cell migration) is below. Some background at https://www.openstack.org/videos/vancouver-2018/moving-from-cellsv1-to-cellsv2-at-cern. Some rough parameters with the team providing more concrete numbers if needed.... - The VMs to be migrated are not generally not expensive configurations, just hardware lifecycles where boxes go out of warranty or computer centre rack/cooling needs re-organising. For CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a ~30% pet share) - We make a cell from identical hardware at a single location, this greatly simplifies working out hardware issues, provisioning and management - Some cases can be handled with the 'please delete and re-create'. Many other cases need much user support/downtime (and require significant effort or risk delaying retirements to get agreement) - When a new hardware delivery is made, we would hope to define a new cell (as it is a different configuration) - Depending on the facilities retirement plans, we would work out what needed to be moved to new resources - There are many different scenarios for migration (either live or cold) -- All instances in the old cell would be migrated to the new hardware which would have sufficient capacity -- All instances in a single cell would be migrated to several different cells such as the new cells being smaller -- Some instances would be migrated because those racks need to be retired but other servers in the cell would remain for a further year or two until retirement was mandatory With many cells and multiple locations, spreading the hypervisors across the cells in anticipation of potential migrations is unattractive. From my understanding, these models were feasible with Cells V1. We can discuss further, at the PTG or Summit, on the operational flexibility which we have taken advantage of so far and alternative models. Tim -----Original Message----- From: Dan Smith Date: Wednesday, 29 August 2018 at 18:47 To: Jay Pipes Cc: "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration > A release upgrade dance involves coordination of multiple moving > parts. It's about as similar to this scenario as I can imagine. And > there's a reason release upgrades are not done entirely within Nova; > clearly an external upgrade tool or script is needed to orchestrate > the many steps and components involved in the upgrade process. I'm lost here, and assume we must be confusing terminology or something. > The similar dance for cross-cell migration is the coordination that > needs to happen between Nova, Neutron and Cinder. It's called > orchestration for a reason and is not what Nova is good at (as we've > repeatedly seen) Most other operations in Nova meet this criteria. Boot requires coordination between Nova, Cinder, and Neutron. As do migrate, start, stop, evacuate. We might decide that (for now) the volume migration thing is beyond the line we're willing to cross, and that's cool, but I think it's an arbitrary limitation we shouldn't assume is impossible. Moving instances around *is* what nova is (supposed to be) good at. > The thing that makes *this* particular scenario problematic is that > cells aren't user-visible things. User-visible things could much more > easily be orchestrated via external actors, as I still firmly believe > this kind of thing should be done. I'm having a hard time reconciling these: 1. Cells aren't user-visible, and shouldn't be (your words and mine). 2. Cross-cell migration should be done by an external service (your words). 3. External services work best when things are user-visible (your words). You say the user-invisible-ness makes orchestrating this externally difficult and I agree, but...is your argument here just that it shouldn't be done at all? >> As we discussed in YVR most recently, it also may become an important >> thing for operators and users where expensive accelerators are committed >> to instances with part-time usage patterns. > > I don't think that's a valid use case in respect to this scenario of > cross-cell migration. You're right, it has nothing to do with cross-cell migration at all. I was pointing to *other* legitimate use cases for shelve. > Also, I'd love to hear from anyone in the real world who has > successfully migrated (live or otherwise) an instance that "owns" > expensive hardware (accelerators, SR-IOV PFs, GPUs or otherwise). Again, the accelerator case has nothing to do with migrating across cells, but merely demonstrates another example of where shelve may be the thing operators actually desire. Maybe I shouldn't have confused the discussion by bringing it up. > The patterns that I have seen are one of the following: > > * Applications don't move. They are pets that stay on one or more VMs > or baremetal nodes and they grow roots. > > * Applications are designed to *utilize* the expensive hardware. They > don't "own" the hardware itself. > > In this latter case, the application is properly designed and stores > its persistent data in a volume and doesn't keep state outside of the > application volume. In these cases, the process of "migrating" an > instance simply goes away. You just detach the application persistent > volume, shut down the instance, start up a new one elsewhere (allowing > the scheduler to select one that meets the resource constraints in the > flavor/image), attach the volume again and off you go. No messing > around with shelving, offloading, migrating, or any of that nonsense > in Nova. Jay, you know I sympathize with the fully-ephemeral application case, right? Can we agree that pets are a thing and that migrations are not going to be leaving Nova's scope any time soon? If so, I think we can get back to the real discussion, and if not, I think we probably, er, can't :) > We should not pretend that what we're discussing here is anything > other than hacking orchestration workarounds into Nova to handle > poorly-designed applications that have grown roots on some hardware > and think they "own" hardware resources in a Nova deployment. I have no idea how we got to "own hardware resources" here. The point of this discussion is to make our instance-moving operations work across cells. We designed cellsv2 to be invisible and baked into the core of Nova. We intended for it to not fall into the trap laid by cellsv1, where the presence of multiple cells meant that a bunch of regular operations don't work like they would otherwise. If we're going to discuss removing move operations from Nova, we should do that in another thread. This one is about making existing operations work :) > If that's the case, why are we discussing shelve at all? Just stop the > instance, copy/migrate the volume data (if needed, again it completely > depends on the deployment, network topology and block storage > backend), to a new location (new cell, new AZ, new host agg, does it > really matter?) and start a new instance, attaching the volume after > the instance starts or supplying the volume in the boot/create > command. Because shelve potentially makes it less dependent on the answers to those questions and Matt suggested it as a first step to being able to move things around at all. It means that "copy the data" becomes "talk to glance" which compute nodes can already do. Requiring compute nodes across cells to talk to each other (which could be in different buildings, sites, or security domains) is a whole extra layer of complexity. I do think we'll go there (via resize/migrate at some point, but shelve going through glance for data and through a homeless phase in Nova does simplify a whole set of things. > The admin only "owns" the instance because we have no ability to > transfer ownership of the instance and a cell isn't a user-visible > thing. An external script that accomplishes this kind of orchestrated > move from one cell to another could easily update the ownership of > said instance in the DB. So step 5 was "do surgery on the database"? :) > My point is that Nova isn't an orchestrator, and building > functionality into Nova to do this type of cross-cell migration IMHO > just will lead to even more unmaintainable code paths that few, if > any, deployers will ever end up using because they will end up doing > it externally anyway due to the need to integrate with backend > inventory management systems and other things. On the contrary, per the original goal of cellsv2, I want to make the *existing* code paths in Nova work properly when multiple cells are present. Just like we had to make boot and list work properly with multiple cells, I think we need to do the same with migrate, shelve, etc. --Dan _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From dms at danplanet.com Wed Aug 29 20:04:22 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 29 Aug 2018 13:04:22 -0700 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: (Tim Bell's message of "Wed, 29 Aug 2018 19:49:24 +0000") References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> Message-ID: > - The VMs to be migrated are not generally not expensive > configurations, just hardware lifecycles where boxes go out of > warranty or computer centre rack/cooling needs re-organising. For > CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a > ~30% pet share) > - We make a cell from identical hardware at a single location, this > greatly simplifies working out hardware issues, provisioning and > management > - Some cases can be handled with the 'please delete and > re-create'. Many other cases need much user support/downtime (and > require significant effort or risk delaying retirements to get > agreement) Yep, this is the "organizational use case" of cells I refer to. I assume that if one aisle (cell) is being replaced, it makes sense to stand up the new one as its own cell, migrate the pets from one to the other and then decommission the old one. Being only an aisle away, it's reasonable to think that *this* situation might not suffer from the complexity of needing to worry about heavyweight migrate network and storage. > From my understanding, these models were feasible with Cells V1. I don't think cellsv1 supported any notion of moving things between cells at all, unless you had some sort of external hack for doing it. Being able to migrate between cells at all was always one of the things we touted as a "future feature" for cellsv2. Unless of course you mean migration in terms of snapshot-to-glance-and-redeploy? --Dan From jaypipes at gmail.com Wed Aug 29 20:11:13 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 29 Aug 2018 16:11:13 -0400 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> Message-ID: <60e4654e-91ba-7f14-a6d9-7a588c17baee@gmail.com> On 08/29/2018 04:04 PM, Dan Smith wrote: >> - The VMs to be migrated are not generally not expensive >> configurations, just hardware lifecycles where boxes go out of >> warranty or computer centre rack/cooling needs re-organising. For >> CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a >> ~30% pet share) >> - We make a cell from identical hardware at a single location, this >> greatly simplifies working out hardware issues, provisioning and >> management >> - Some cases can be handled with the 'please delete and >> re-create'. Many other cases need much user support/downtime (and >> require significant effort or risk delaying retirements to get >> agreement) > > Yep, this is the "organizational use case" of cells I refer to. I assume > that if one aisle (cell) is being replaced, it makes sense to stand up > the new one as its own cell, migrate the pets from one to the other and > then decommission the old one. Being only an aisle away, it's reasonable > to think that *this* situation might not suffer from the complexity of > needing to worry about heavyweight migrate network and storage. For this use case, why not just add the new hardware directly into the existing cell and migrate the workloads onto the new hardware, then disable the old hardware and retire it? I mean, there might be a short period of time where the cell's DB and MQ would be congested due to lots of migration operations, but it seems a lot simpler to me than trying to do cross-cell migrations when cells have been designed pretty much from the beginning of cellsv2 to not talk to each other or allow any upcalls. Thoughts? -jay From Tim.Bell at cern.ch Wed Aug 29 20:21:29 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 29 Aug 2018 20:21:29 +0000 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <60e4654e-91ba-7f14-a6d9-7a588c17baee@gmail.com> References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> <60e4654e-91ba-7f14-a6d9-7a588c17baee@gmail.com> Message-ID: <8EF14CDA-F135-4ED9-A9B0-1654CDC08D64@cern.ch> Given the partial retirement scenario (i.e. only racks A-C retired due to cooling contrainsts, racks D-F still active with same old hardware but still useful for years), adding new hardware to old cells would not be non-optimal. I'm ignoring the long list of other things to worry such as preserving IP addresses etc. Sounds like a good topic for PTG/Forum? Tim -----Original Message----- From: Jay Pipes Date: Wednesday, 29 August 2018 at 22:12 To: Dan Smith , Tim Bell Cc: "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration On 08/29/2018 04:04 PM, Dan Smith wrote: >> - The VMs to be migrated are not generally not expensive >> configurations, just hardware lifecycles where boxes go out of >> warranty or computer centre rack/cooling needs re-organising. For >> CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a >> ~30% pet share) >> - We make a cell from identical hardware at a single location, this >> greatly simplifies working out hardware issues, provisioning and >> management >> - Some cases can be handled with the 'please delete and >> re-create'. Many other cases need much user support/downtime (and >> require significant effort or risk delaying retirements to get >> agreement) > > Yep, this is the "organizational use case" of cells I refer to. I assume > that if one aisle (cell) is being replaced, it makes sense to stand up > the new one as its own cell, migrate the pets from one to the other and > then decommission the old one. Being only an aisle away, it's reasonable > to think that *this* situation might not suffer from the complexity of > needing to worry about heavyweight migrate network and storage. For this use case, why not just add the new hardware directly into the existing cell and migrate the workloads onto the new hardware, then disable the old hardware and retire it? I mean, there might be a short period of time where the cell's DB and MQ would be congested due to lots of migration operations, but it seems a lot simpler to me than trying to do cross-cell migrations when cells have been designed pretty much from the beginning of cellsv2 to not talk to each other or allow any upcalls. Thoughts? -jay From mriedemos at gmail.com Wed Aug 29 21:39:50 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 29 Aug 2018 16:39:50 -0500 Subject: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <8EF14CDA-F135-4ED9-A9B0-1654CDC08D64@cern.ch> References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> <60e4654e-91ba-7f14-a6d9-7a588c17baee@gmail.com> <8EF14CDA-F135-4ED9-A9B0-1654CDC08D64@cern.ch> Message-ID: <3cae56bb-cca3-d251-e46f-63c328f254d2@gmail.com> On 8/29/2018 3:21 PM, Tim Bell wrote: > Sounds like a good topic for PTG/Forum? Yeah it's already on the PTG agenda [1][2]. I started the thread because I wanted to get the ball rolling as early as possible, and with people that won't attend the PTG and/or the Forum, to weigh in on not only the known issues with cross-cell migration but also the things I'm not thinking about. [1] https://etherpad.openstack.org/p/nova-ptg-stein [2] https://etherpad.openstack.org/p/nova-ptg-stein-cells -- Thanks, Matt From sean.mcginnis at gmx.com Thu Aug 30 16:33:39 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 30 Aug 2018 11:33:39 -0500 Subject: [Openstack-operators] [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration In-Reply-To: <20180830084608.g76maohgpxbmqvce@localhost> References: <2fd5feda-0399-6b24-7eb7-738f36e14e70@gmail.com> <9b6b87e3-024b-6035-11ec-3ab41f795a3f@gmail.com> <60e4654e-91ba-7f14-a6d9-7a588c17baee@gmail.com> <8EF14CDA-F135-4ED9-A9B0-1654CDC08D64@cern.ch> <3cae56bb-cca3-d251-e46f-63c328f254d2@gmail.com> <20180830084608.g76maohgpxbmqvce@localhost> Message-ID: <20180830163338.GB19523@sm-workstation> > > > > Yeah it's already on the PTG agenda [1][2]. I started the thread because I > > wanted to get the ball rolling as early as possible, and with people that > > won't attend the PTG and/or the Forum, to weigh in on not only the known > > issues with cross-cell migration but also the things I'm not thinking about. > > > > [1] https://etherpad.openstack.org/p/nova-ptg-stein > > [2] https://etherpad.openstack.org/p/nova-ptg-stein-cells > > > > -- > > > > Thanks, > > > > Matt > > > > Should we also add the topic to the Thursday Cinder-Nova slot in case > there are some questions where the Cinder team can assist? > > Cheers, > Gorka. > Good idea. That will be a good time to circle back between the teams to see if any Cinder needs come up that we can still have time to talk through and see if we can get work started. From fungi at yuggoth.org Thu Aug 30 17:03:50 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 17:03:50 +0000 Subject: [Openstack-operators] [all] Bringing the community together (combine the lists!) Message-ID: <20180830170350.wrz4wlanb276kncb@yuggoth.org> The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists on lists.openstack.org see an increasing amount of cross-posting and thread fragmentation as conversants attempt to reach various corners of our community with topics of interest to one or more (and sometimes all) of those overlapping groups of subscribers. For some time we've been discussing and trying ways to bring our developers, distributors, operators and end users together into a less isolated, more cohesive community. An option which keeps coming up is to combine these different but overlapping mailing lists into one single discussion list. As we covered[1] in Vancouver at the last Forum there are a lot of potential up-sides: 1. People with questions are no longer asking them in a different place than many of the people who have the answers to those questions (the "not for usage questions" in the openstack-dev ML title only serves to drive the wedge between developers and users deeper). 2. The openstack-sigs mailing list hasn't seem much uptake (an order of magnitude fewer subscribers and posts) compared to the other three lists, yet it was intended to bridge the communication gap between them; combining those lists would have been a better solution to the problem than adding yet another turned out to be. 3. At least one out of every ten messages to any of these lists is cross-posted to one or more of the others, because we have topics that span across these divided groups yet nobody is quite sure which one is the best venue for them; combining would eliminate the fragmented/duplicative/divergent discussion which results from participants following up on the different subsets of lists to which they're subscribed, 4. Half of the people who are actively posting to at least one of the four lists subscribe to two or more, and a quarter to three if not all four; they would no longer be receiving multiple copies of the various cross-posts if these lists were combined. The proposal is simple: create a new openstack-discuss mailing list to cover all the above sorts of discussion and stop using the other four. As the OpenStack ecosystem continues to mature and its software and services stabilize, the nature of our discourse is changing (becoming increasingly focused with fewer heated debates, distilling to a more manageable volume), so this option is looking much more attractive than in the past. That's not to say it's quiet (we're looking at roughly 40 messages a day across them on average, after deduplicating the cross-posts), but we've grown accustomed to tagging the subjects of these messages to make it easier for other participants to quickly filter topics which are relevant to them and so would want a good set of guidelines on how to do so for the combined list (a suggested set is already being brainstormed[2]). None of this is set in stone of course, and I expect a lot of continued discussion across these lists (oh, the irony) while we try to settle on a plan, so definitely please follow up with your questions, concerns, ideas, et cetera. As an aside, some of you have probably also seen me talking about experiments I've been doing with Mailman 3... I'm hoping new features in its Hyperkitty and Postorius WebUIs make some of this easier or more accessible to casual participants (particularly in light of the combined list scenario), but none of the plan above hinges on MM3 and should be entirely doable with the MM2 version we're currently using. Also, in case you were wondering, no the irony of cross-posting this message to four mailing lists is not lost on me. ;) [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community [2] https://etherpad.openstack.org/p/common-openstack-ml-topics -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rico.lin.guanyu at gmail.com Thu Aug 30 17:13:58 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 31 Aug 2018 01:13:58 +0800 Subject: [Openstack-operators] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: +1 on this idea, people been posting around for the exactly same topic and got feedback from ops or devs, but never together, this will help people do discussion on the same table. What needs to be done for this is full topic categories support under `options` page so people get to filter emails properly. On Fri, Aug 31, 2018 at 1:04 AM Jeremy Stanley wrote: > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. For some time we've been discussing and trying ways to > bring our developers, distributors, operators and end users together > into a less isolated, more cohesive community. An option which keeps > coming up is to combine these different but overlapping mailing > lists into one single discussion list. As we covered[1] in Vancouver > at the last Forum there are a lot of potential up-sides: > > 1. People with questions are no longer asking them in a different > place than many of the people who have the answers to those > questions (the "not for usage questions" in the openstack-dev ML > title only serves to drive the wedge between developers and users > deeper). > > 2. The openstack-sigs mailing list hasn't seem much uptake (an order > of magnitude fewer subscribers and posts) compared to the other > three lists, yet it was intended to bridge the communication gap > between them; combining those lists would have been a better > solution to the problem than adding yet another turned out to be. > > 3. At least one out of every ten messages to any of these lists is > cross-posted to one or more of the others, because we have topics > that span across these divided groups yet nobody is quite sure which > one is the best venue for them; combining would eliminate the > fragmented/duplicative/divergent discussion which results from > participants following up on the different subsets of lists to which > they're subscribed, > > 4. Half of the people who are actively posting to at least one of > the four lists subscribe to two or more, and a quarter to three if > not all four; they would no longer be receiving multiple copies of > the various cross-posts if these lists were combined. > > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. As the OpenStack ecosystem continues to mature and its > software and services stabilize, the nature of our discourse is > changing (becoming increasingly focused with fewer heated debates, > distilling to a more manageable volume), so this option is looking > much more attractive than in the past. That's not to say it's quiet > (we're looking at roughly 40 messages a day across them on average, > after deduplicating the cross-posts), but we've grown accustomed to > tagging the subjects of these messages to make it easier for other > participants to quickly filter topics which are relevant to them and > so would want a good set of guidelines on how to do so for the > combined list (a suggested set is already being brainstormed[2]). > None of this is set in stone of course, and I expect a lot of > continued discussion across these lists (oh, the irony) while we try > to settle on a plan, so definitely please follow up with your > questions, concerns, ideas, et cetera. > > As an aside, some of you have probably also seen me talking about > experiments I've been doing with Mailman 3... I'm hoping new > features in its Hyperkitty and Postorius WebUIs make some of this > easier or more accessible to casual participants (particularly in > light of the combined list scenario), but none of the plan above > hinges on MM3 and should be entirely doable with the MM2 version > we're currently using. > > Also, in case you were wondering, no the irony of cross-posting this > message to four mailing lists is not lost on me. ;) > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics > -- > Jeremy Stanley > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Aug 30 17:17:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 13:17:14 -0400 Subject: [Openstack-operators] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <1535649366-sup-1027@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-08-30 17:03:50 +0000: > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. For some time we've been discussing and trying ways to > bring our developers, distributors, operators and end users together > into a less isolated, more cohesive community. An option which keeps > coming up is to combine these different but overlapping mailing > lists into one single discussion list. As we covered[1] in Vancouver > at the last Forum there are a lot of potential up-sides: > > 1. People with questions are no longer asking them in a different > place than many of the people who have the answers to those > questions (the "not for usage questions" in the openstack-dev ML > title only serves to drive the wedge between developers and users > deeper). > > 2. The openstack-sigs mailing list hasn't seem much uptake (an order > of magnitude fewer subscribers and posts) compared to the other > three lists, yet it was intended to bridge the communication gap > between them; combining those lists would have been a better > solution to the problem than adding yet another turned out to be. > > 3. At least one out of every ten messages to any of these lists is > cross-posted to one or more of the others, because we have topics > that span across these divided groups yet nobody is quite sure which > one is the best venue for them; combining would eliminate the > fragmented/duplicative/divergent discussion which results from > participants following up on the different subsets of lists to which > they're subscribed, > > 4. Half of the people who are actively posting to at least one of > the four lists subscribe to two or more, and a quarter to three if > not all four; they would no longer be receiving multiple copies of > the various cross-posts if these lists were combined. > > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. As the OpenStack ecosystem continues to mature and its > software and services stabilize, the nature of our discourse is > changing (becoming increasingly focused with fewer heated debates, > distilling to a more manageable volume), so this option is looking > much more attractive than in the past. That's not to say it's quiet > (we're looking at roughly 40 messages a day across them on average, > after deduplicating the cross-posts), but we've grown accustomed to > tagging the subjects of these messages to make it easier for other > participants to quickly filter topics which are relevant to them and > so would want a good set of guidelines on how to do so for the > combined list (a suggested set is already being brainstormed[2]). > None of this is set in stone of course, and I expect a lot of > continued discussion across these lists (oh, the irony) while we try > to settle on a plan, so definitely please follow up with your > questions, concerns, ideas, et cetera. > > As an aside, some of you have probably also seen me talking about > experiments I've been doing with Mailman 3... I'm hoping new > features in its Hyperkitty and Postorius WebUIs make some of this > easier or more accessible to casual participants (particularly in > light of the combined list scenario), but none of the plan above > hinges on MM3 and should be entirely doable with the MM2 version > we're currently using. > > Also, in case you were wondering, no the irony of cross-posting this > message to four mailing lists is not lost on me. ;) > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics I fully support the idea of merging the lists. Doug From chris at openstack.org Thu Aug 30 17:19:50 2018 From: chris at openstack.org (Chris Hoge) Date: Thu, 30 Aug 2018 10:19:50 -0700 Subject: [Openstack-operators] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: I also propose that we merge the interop-wg mailing list also, as the volume on that list is small but topics posted to it are of general interest to the community. Chris Hoge (Interop WG Secretary, amongst other things) > On Aug 30, 2018, at 10:03 AM, Jeremy Stanley wrote: > > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. For some time we've been discussing and trying ways to > bring our developers, distributors, operators and end users together > into a less isolated, more cohesive community. An option which keeps > coming up is to combine these different but overlapping mailing > lists into one single discussion list. As we covered[1] in Vancouver > at the last Forum there are a lot of potential up-sides: > > 1. People with questions are no longer asking them in a different > place than many of the people who have the answers to those > questions (the "not for usage questions" in the openstack-dev ML > title only serves to drive the wedge between developers and users > deeper). > > 2. The openstack-sigs mailing list hasn't seem much uptake (an order > of magnitude fewer subscribers and posts) compared to the other > three lists, yet it was intended to bridge the communication gap > between them; combining those lists would have been a better > solution to the problem than adding yet another turned out to be. > > 3. At least one out of every ten messages to any of these lists is > cross-posted to one or more of the others, because we have topics > that span across these divided groups yet nobody is quite sure which > one is the best venue for them; combining would eliminate the > fragmented/duplicative/divergent discussion which results from > participants following up on the different subsets of lists to which > they're subscribed, > > 4. Half of the people who are actively posting to at least one of > the four lists subscribe to two or more, and a quarter to three if > not all four; they would no longer be receiving multiple copies of > the various cross-posts if these lists were combined. > > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. As the OpenStack ecosystem continues to mature and its > software and services stabilize, the nature of our discourse is > changing (becoming increasingly focused with fewer heated debates, > distilling to a more manageable volume), so this option is looking > much more attractive than in the past. That's not to say it's quiet > (we're looking at roughly 40 messages a day across them on average, > after deduplicating the cross-posts), but we've grown accustomed to > tagging the subjects of these messages to make it easier for other > participants to quickly filter topics which are relevant to them and > so would want a good set of guidelines on how to do so for the > combined list (a suggested set is already being brainstormed[2]). > None of this is set in stone of course, and I expect a lot of > continued discussion across these lists (oh, the irony) while we try > to settle on a plan, so definitely please follow up with your > questions, concerns, ideas, et cetera. > > As an aside, some of you have probably also seen me talking about > experiments I've been doing with Mailman 3... I'm hoping new > features in its Hyperkitty and Postorius WebUIs make some of this > easier or more accessible to casual participants (particularly in > light of the combined list scenario), but none of the plan above > hinges on MM3 and should be entirely doable with the MM2 version > we're currently using. > > Also, in case you were wondering, no the irony of cross-posting this > message to four mailing lists is not lost on me. ;) > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics > -- > Jeremy Stanley > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From jimmy at openstack.org Thu Aug 30 17:19:55 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 30 Aug 2018 12:19:55 -0500 Subject: [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <5B88273B.3000206@openstack.org> Absolutely support merging. Jeremy Stanley wrote: > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. For some time we've been discussing and trying ways to > bring our developers, distributors, operators and end users together > into a less isolated, more cohesive community. An option which keeps > coming up is to combine these different but overlapping mailing > lists into one single discussion list. As we covered[1] in Vancouver > at the last Forum there are a lot of potential up-sides: > > 1. People with questions are no longer asking them in a different > place than many of the people who have the answers to those > questions (the "not for usage questions" in the openstack-dev ML > title only serves to drive the wedge between developers and users > deeper). > > 2. The openstack-sigs mailing list hasn't seem much uptake (an order > of magnitude fewer subscribers and posts) compared to the other > three lists, yet it was intended to bridge the communication gap > between them; combining those lists would have been a better > solution to the problem than adding yet another turned out to be. > > 3. At least one out of every ten messages to any of these lists is > cross-posted to one or more of the others, because we have topics > that span across these divided groups yet nobody is quite sure which > one is the best venue for them; combining would eliminate the > fragmented/duplicative/divergent discussion which results from > participants following up on the different subsets of lists to which > they're subscribed, > > 4. Half of the people who are actively posting to at least one of > the four lists subscribe to two or more, and a quarter to three if > not all four; they would no longer be receiving multiple copies of > the various cross-posts if these lists were combined. > > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. As the OpenStack ecosystem continues to mature and its > software and services stabilize, the nature of our discourse is > changing (becoming increasingly focused with fewer heated debates, > distilling to a more manageable volume), so this option is looking > much more attractive than in the past. That's not to say it's quiet > (we're looking at roughly 40 messages a day across them on average, > after deduplicating the cross-posts), but we've grown accustomed to > tagging the subjects of these messages to make it easier for other > participants to quickly filter topics which are relevant to them and > so would want a good set of guidelines on how to do so for the > combined list (a suggested set is already being brainstormed[2]). > None of this is set in stone of course, and I expect a lot of > continued discussion across these lists (oh, the irony) while we try > to settle on a plan, so definitely please follow up with your > questions, concerns, ideas, et cetera. > > As an aside, some of you have probably also seen me talking about > experiments I've been doing with Mailman 3... I'm hoping new > features in its Hyperkitty and Postorius WebUIs make some of this > easier or more accessible to casual participants (particularly in > light of the combined list scenario), but none of the plan above > hinges on MM3 and should be entirely doable with the MM2 version > we're currently using. > > Also, in case you were wondering, no the irony of cross-posting this > message to four mailing lists is not lost on me. ;) > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From juliaashleykreger at gmail.com Thu Aug 30 17:20:33 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 30 Aug 2018 10:20:33 -0700 Subject: [Openstack-operators] [ironic][tripleo][edge] Discussing ironic federation and distributed deployments In-Reply-To: References: Message-ID: Greetings everyone, It looks like the most agreeable time on the doodle[1] seems to be Tuesday September 4th at 13:00 UTC. Are there any objections to using this time? If not, I'll go ahead and create an etherpad, and setup a bluejeans call for that time to enable high bandwidth discussion. -Julia [1]: https://doodle.com/poll/y355wt97heffvp3m On Mon, Aug 27, 2018 at 9:53 AM Julia Kreger wrote: > > Greetings everyone! > > We in Ironic land would like to go into the PTG with some additional > thoughts, requirements, and ideas as it relates to distributed and > geographically distributed deployments. > [trim] From emilien at redhat.com Thu Aug 30 17:29:12 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 30 Aug 2018 13:29:12 -0400 Subject: [Openstack-operators] [ironic][tripleo][edge] Discussing ironic federation and distributed deployments In-Reply-To: References: Message-ID: On Thu, Aug 30, 2018 at 1:21 PM Julia Kreger wrote: > Greetings everyone, > > It looks like the most agreeable time on the doodle[1] seems to be > Tuesday September 4th at 13:00 UTC. Are there any objections to using > this time? > > If not, I'll go ahead and create an etherpad, and setup a bluejeans > call for that time to enable high bandwidth discussion. > TripleO sessions start on Wednesday, so +1 from us (unless I missed something). -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Thu Aug 30 18:57:31 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 30 Aug 2018 12:57:31 -0600 Subject: [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <5B883E1B.2070101@windriver.com> On 08/30/2018 11:03 AM, Jeremy Stanley wrote: > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. Do we want to merge usage and development onto one list? That could be a busy list for someone who's just asking a simple usage question. Alternately, if we are going to merge everything then why not just use the "openstack" mailing list since it already exists and there are references to it on the web. (Or do you want to force people to move to something new to make them recognize that something has changed?) Chris From zigo at debian.org Thu Aug 30 20:49:26 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 30 Aug 2018 22:49:26 +0200 Subject: [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <5B883E1B.2070101@windriver.com> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> Message-ID: <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> On 08/30/2018 08:57 PM, Chris Friesen wrote: > On 08/30/2018 11:03 AM, Jeremy Stanley wrote: > >> The proposal is simple: create a new openstack-discuss mailing list >> to cover all the above sorts of discussion and stop using the other >> four. > > Do we want to merge usage and development onto one list? I really don't want this. I'm happy with things being sorted in multiple lists, even though I'm subscribed to multiples. Thomas From fungi at yuggoth.org Thu Aug 30 21:12:57 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 21:12:57 +0000 Subject: [Openstack-operators] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <20180830211257.oa6hxd4pningzqf4@yuggoth.org> On 2018-08-31 01:13:58 +0800 (+0800), Rico Lin wrote: [...] > What needs to be done for this is full topic categories support > under `options` page so people get to filter emails properly. [...] Unfortunately, topic filtering is one of the MM2 features the Mailman community decided nobody used (or at least not enough to warrant preserving it in MM3). I do think we need to be consistent about tagging subjects to make client-side filtering more effective for people who want that, but if we _do_ want to be able to upgrade we shouldn't continue to rely on server-side filtering support in Mailman unless we can somehow work with them to help in reimplementing it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Aug 30 21:25:37 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 21:25:37 +0000 Subject: [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <5B883E1B.2070101@windriver.com> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> Message-ID: <20180830212536.yzirmxzxiqhciyby@yuggoth.org> On 2018-08-30 12:57:31 -0600 (-0600), Chris Friesen wrote: [...] > Do we want to merge usage and development onto one list? That > could be a busy list for someone who's just asking a simple usage > question. A counterargument though... projecting the number of unique posts to all four lists combined for this year (both based on trending for the past several years and also simply scaling the count of messages this year so far based on how many days are left) comes out roughly equal to the number of posts which were made to the general openstack mailing list in 2012. > Alternately, if we are going to merge everything then why not just > use the "openstack" mailing list since it already exists and there > are references to it on the web. This was an option we discussed in the "One Community" forum session as well. There seemed to be a slight preference for making a new -disscuss list and retiring the old general one. I see either as an potential solution here. > (Or do you want to force people to move to something new to make them > recognize that something has changed?) That was one of the arguments made. Also I believe we have a *lot* of "black hole" subscribers who aren't actually following that list but whose addresses aren't bouncing new posts we send them for any of a number of possible reasons. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Aug 30 21:33:41 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 21:33:41 +0000 Subject: [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> Message-ID: <20180830213341.yuxyen2elx2c3is4@yuggoth.org> On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote: [...] > I really don't want this. I'm happy with things being sorted in > multiple lists, even though I'm subscribed to multiples. I understand where you're coming from, and I used to feel similarly. I was accustomed to communities where developers had one mailing list, users had another, and whenever a user asked a question on the developer mailing list they were told to go away and bother the user mailing list instead (not even a good, old-fashioned "RTFM" for their trouble). You're probably intimately familiar with at least one of these communities. ;) As the years went by, it's become apparent to me that this is actually an antisocial behavior pattern, and actively harmful to the user base. I believe OpenStack actually wants users to see the development work which is underway, come to understand it, and become part of that process. Requiring them to have their conversations elsewhere sends the opposite message. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jimmy at openstack.org Thu Aug 30 21:45:17 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 30 Aug 2018 16:45:17 -0500 Subject: [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830213341.yuxyen2elx2c3is4@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> Message-ID: <5B88656D.1020209@openstack.org> Jeremy Stanley wrote: > On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote: > [...] >> I really don't want this. I'm happy with things being sorted in >> multiple lists, even though I'm subscribed to multiples. IMO this is easily solved by tagging. If emails are properly tagged (which they typically are), most email clients will properly sort on rules and you can just auto-delete if you're 100% not interested in a particular topic. > SNIP > As the years went by, it's become apparent to me that this is > actually an antisocial behavior pattern, and actively harmful to the > user base. I believe OpenStack actually wants users to see the > development work which is underway, come to understand it, and > become part of that process. Requiring them to have their > conversations elsewhere sends the opposite message. I really and truly believe that it has become a blocker for our community. Conversations sent to multiple lists inherently splinter and we end up with different groups coming up with different solutions for a single problem. Literally the opposite desired result of sending things to multiple lists. I believe bringing these groups together, with tags, will solve a lot of immediate problems. It will also have an added bonus of allowing people "catching up" on the community to look to a single place for a thread i/o 1-5 separate lists. It's better in both the short and long term. Cheers, Jimmy > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Thu Aug 30 23:08:56 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 30 Aug 2018 18:08:56 -0500 Subject: [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <5B88656D.1020209@openstack.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> <5B88656D.1020209@openstack.org> Message-ID: I think the more we can reduce the ML sprawl the better. I also recall us discussing having some documentation or way of notifying net new signups of how to interact with the ML successfully. An example was having some general guidelines around tagging. Also as a maintainer for at least one of the mailing lists over the past 6+ months I have to inquire about how that will happen going forward which again could be part of this documentation/initial message. Also there are many times I miss messages that for one reason or another do not hit the proper mailing list. I mean we could dive into the minutia or start up the mountain of why keeping things the way they are is worst than making this change and vice versa but I am willing to bet there are more advantages than disadvantages. On Thu, Aug 30, 2018 at 4:45 PM Jimmy McArthur wrote: > > > Jeremy Stanley wrote: > > On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote: > [...] > > I really don't want this. I'm happy with things being sorted in > multiple lists, even though I'm subscribed to multiples. > > IMO this is easily solved by tagging. If emails are properly tagged > (which they typically are), most email clients will properly sort on rules > and you can just auto-delete if you're 100% not interested in a particular > topic. > Yes, there are definitely ways to go about discarding unwanted mail automagically or not seeing it at all. And to be honest I think if we are relying on so many separate MLs to do that for us it is better community wide for the responsibility for that to be on individuals. It becomes very tiring and inefficient time wise to have to go through the various issues of the way things are now; cross-posting is a great example that is steadily getting worse. > SNIP > > As the years went by, it's become apparent to me that this is > actually an antisocial behavior pattern, and actively harmful to the > user base. I believe OpenStack actually wants users to see the > development work which is underway, come to understand it, and > become part of that process. Requiring them to have their > conversations elsewhere sends the opposite message. > > I really and truly believe that it has become a blocker for our > community. Conversations sent to multiple lists inherently splinter and we > end up with different groups coming up with different solutions for a > single problem. Literally the opposite desired result of sending things to > multiple lists. I believe bringing these groups together, with tags, will > solve a lot of immediate problems. It will also have an added bonus of > allowing people "catching up" on the community to look to a single place > for a thread i/o 1-5 separate lists. It's better in both the short and > long term. > +1 > > Cheers, > Jimmy > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Aug 31 00:03:35 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 31 Aug 2018 10:03:35 +1000 Subject: [Openstack-operators] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830211257.oa6hxd4pningzqf4@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180830211257.oa6hxd4pningzqf4@yuggoth.org> Message-ID: <20180831000334.GR26778@thor.bakeyournoodle.com> On Thu, Aug 30, 2018 at 09:12:57PM +0000, Jeremy Stanley wrote: > On 2018-08-31 01:13:58 +0800 (+0800), Rico Lin wrote: > [...] > > What needs to be done for this is full topic categories support > > under `options` page so people get to filter emails properly. > [...] > > Unfortunately, topic filtering is one of the MM2 features the > Mailman community decided nobody used (or at least not enough to > warrant preserving it in MM3). I do think we need to be consistent > about tagging subjects to make client-side filtering more effective > for people who want that, but if we _do_ want to be able to upgrade > we shouldn't continue to rely on server-side filtering support in > Mailman unless we can somehow work with them to help in > reimplementing it. The suggestion is to implement it as a 3rd party plugin or work with the mm community to implement: https://wiki.mailman.psf.io/DEV/Dynamic%20Sublists So if we decide we really want that in mm3 we have options. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From fungi at yuggoth.org Fri Aug 31 00:21:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 31 Aug 2018 00:21:22 +0000 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> <5B88656D.1020209@openstack.org> Message-ID: <20180831002121.ch76mvqeskplqew2@yuggoth.org> On 2018-08-30 18:08:56 -0500 (-0500), Melvin Hillsman wrote: [...] > I also recall us discussing having some documentation or way of > notifying net new signups of how to interact with the ML > successfully. An example was having some general guidelines around > tagging. Also as a maintainer for at least one of the mailing > lists over the past 6+ months I have to inquire about how that > will happen going forward which again could be part of this > documentation/initial message. [...] Mailman supports customizable welcome messages for new subscribers, so the *technical* implementation there is easy. I do think (and failed to highlight it explicitly earlier I'm afraid) that this proposal comes with an expectation that we provide recommended guidelines for mailing list use/etiquette appropriate to our community. It could be contained entirely within the welcome message, or merely linked to a published document (and whether that's best suited for the Infra Manual or New Contributor Guide or somewhere else entirely is certainly up for debate), or even potentially both. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sfinucan at redhat.com Fri Aug 31 08:35:55 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 31 Aug 2018 09:35:55 +0100 Subject: [Openstack-operators] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180831000334.GR26778@thor.bakeyournoodle.com> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180830211257.oa6hxd4pningzqf4@yuggoth.org> <20180831000334.GR26778@thor.bakeyournoodle.com> Message-ID: On Fri, 2018-08-31 at 10:03 +1000, Tony Breeds wrote: > On Thu, Aug 30, 2018 at 09:12:57PM +0000, Jeremy Stanley wrote: > > On 2018-08-31 01:13:58 +0800 (+0800), Rico Lin wrote: > > [...] > > > What needs to be done for this is full topic categories support > > > under `options` page so people get to filter emails properly. > > > > [...] > > > > Unfortunately, topic filtering is one of the MM2 features the > > Mailman community decided nobody used (or at least not enough to > > warrant preserving it in MM3). I do think we need to be consistent > > about tagging subjects to make client-side filtering more effective > > for people who want that, but if we _do_ want to be able to upgrade > > we shouldn't continue to rely on server-side filtering support in > > Mailman unless we can somehow work with them to help in > > reimplementing it. > > The suggestion is to implement it as a 3rd party plugin or work with the > mm community to implement: > https://wiki.mailman.psf.io/DEV/Dynamic%20Sublists > > So if we decide we really want that in mm3 we have options. > > Yours Tony. I've tinked with mailman 3 before so I could probably take a shot at this over the next few week(end)s; however, I've no idea how this feature is supposed to work. Any chance an admin of the current list could send me a couple of screenshots of the feature in mailman 2 along with a brief description of the feature? Alternatively, maybe we could upload them to the wiki page Tony linked above or, better yet, to the technical details page for same: https://wiki.mailman.psf.io/DEV/Brief%20Technical%20Details Cheers, Stephen From james.page at canonical.com Fri Aug 31 10:22:42 2018 From: james.page at canonical.com (James Page) Date: Fri, 31 Aug 2018 11:22:42 +0100 Subject: [Openstack-operators] [upgrade][sig] Upgrade SIG/Stein PTG etherpad Message-ID: Hi Folks We have a half day planned on Monday afternoon in Denver for the customary discussion around OpenStack upgrades. I've started a pad here: https://etherpad.openstack.org/p/upgrade-sig-ptg-stein Please feel free to add ideas and indicate if you will be participating in the discussion. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Fri Aug 31 12:02:23 2018 From: zigo at debian.org (Thomas Goirand) Date: Fri, 31 Aug 2018 14:02:23 +0200 Subject: [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830213341.yuxyen2elx2c3is4@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> Message-ID: On 08/30/2018 11:33 PM, Jeremy Stanley wrote: > On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote: > [...] >> I really don't want this. I'm happy with things being sorted in >> multiple lists, even though I'm subscribed to multiples. > > I understand where you're coming from I'm coming from the time when OpenStack had a list on launchpad where everything was mixed. We did the split because it was really annoying to have everything mixed. > I was accustomed to communities where developers had one mailing > list, users had another, and whenever a user asked a question on the > developer mailing list they were told to go away and bother the user > mailing list instead (not even a good, old-fashioned "RTFM" for > their trouble). I don't think that's what we are doing. Usually, when someone does the mistake, we do reply to him/her, at the same time pointing to the correct list. > You're probably intimately familiar with at least > one of these communities. ;) I know what you have in mind! Indeed, in that list, it happens that some people are a bit harsh to users. Hopefully, the folks in OpenStack devel aren't like this. > As the years went by, it's become apparent to me that this is > actually an antisocial behavior pattern In the OpenStack lists, every day, some developers take the time to answer users. So I don't see what there is to fix. > I believe OpenStack actually wants users to see the > development work which is underway, come to understand it, and > become part of that process. Users are very much welcome in our -dev list. I don't think there's a problem here. > Requiring them to have their > conversations elsewhere sends the opposite message. In many places and occasion, we've sent the correct message. On 08/30/2018 11:45 PM, Jimmy McArthur wrote: > IMO this is easily solved by tagging. If emails are properly tagged > (which they typically are), most email clients will properly sort on > rules and you can just auto-delete if you're 100% not interested in a > particular topic. This topically works with folks used to send tags. It doesn't for new comers, which is what you see with newbies coming to ask questions. Cheers, Thomas Goirand (zigo) From fungi at yuggoth.org Fri Aug 31 16:17:26 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 31 Aug 2018 16:17:26 +0000 Subject: [Openstack-operators] Mailman topic filtering (was: Bringing the community together...) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180830211257.oa6hxd4pningzqf4@yuggoth.org> <20180831000334.GR26778@thor.bakeyournoodle.com> Message-ID: <20180831161726.wtjbzr6yvz2wgghv@yuggoth.org> On 2018-08-31 09:35:55 +0100 (+0100), Stephen Finucane wrote: [...] > I've tinked with mailman 3 before so I could probably take a shot at > this over the next few week(end)s; however, I've no idea how this > feature is supposed to work. Any chance an admin of the current list > could send me a couple of screenshots of the feature in mailman 2 along > with a brief description of the feature? Alternatively, maybe we could > upload them to the wiki page Tony linked above or, better yet, to the > technical details page for same: > > https://wiki.mailman.psf.io/DEV/Brief%20Technical%20Details Looks like this should be https://wiki.list.org/DEV/Brief%20Technical%20Details instead, however reading through it doesn't really sound like the topic filtering feature from MM2. The List Member Manual has a very brief description of the feature from the subscriber standpoint: http://www.list.org/mailman-member/node29.html The List Administration Manual unfortunately doesn't have any content for the feature, just a stubbed-out section heading: http://www.list.org/mailman-admin/node30.html Sending screenshots to the ML is a bit tough, but luckily MIT's listadmins have posted some so we don't need to: http://web.mit.edu/lists/mailman/topics.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Aug 31 16:45:24 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 31 Aug 2018 16:45:24 +0000 Subject: [Openstack-operators] [all] Bringing the community together (combine the lists!) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> Message-ID: <20180831164524.mlksltzbzey6tdyo@yuggoth.org> On 2018-08-31 14:02:23 +0200 (+0200), Thomas Goirand wrote: [...] > I'm coming from the time when OpenStack had a list on launchpad > where everything was mixed. We did the split because it was really > annoying to have everything mixed. [...] These days (just running stats for this calendar year) we've been averaging 4 messages a day on the general openstack at lists.o.o ML, so if it's volume you're worried about most of it would be the current -operators and -dev ML discussions anyway (many of which are general questions from users already, because as you also pointed out we don't usually tell them to take their questions elsewhere any more). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From codeology.lab at gmail.com Fri Aug 31 17:46:43 2018 From: codeology.lab at gmail.com (Cody) Date: Fri, 31 Aug 2018 13:46:43 -0400 Subject: [Openstack-operators] [tripleo]Pacemaker in split controller mode Message-ID: Hi folks, A quick question on TripleO. If I take any pacemaker managed services (e.g. database) from the monolithic controller role and put them onto another cluster, would that cluster be managed as a separate pacemaker cluster? Thank you very much. Regards, Cody From michele at acksyn.org Fri Aug 31 19:06:34 2018 From: michele at acksyn.org (Michele Baldessari) Date: Fri, 31 Aug 2018 21:06:34 +0200 Subject: [Openstack-operators] [tripleo]Pacemaker in split controller mode In-Reply-To: References: Message-ID: <20180831190634.GA2221@holtby> Hi, On Fri, Aug 31, 2018 at 01:46:43PM -0400, Cody wrote: > A quick question on TripleO. If I take any pacemaker managed services > (e.g. database) from the monolithic controller role and put them onto > another cluster, would that cluster be managed as a separate pacemaker > cluster? No, if you split off any pcmk-managed services to a separate role they will still be managed by a single pacemaker cluster. Since Ocata we have composable HA roles, so you can split off DB/messaging/etc to separate nodes (roles). They will be all part of a single cluster. cheers, Michele -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From codeology.lab at gmail.com Fri Aug 31 19:52:05 2018 From: codeology.lab at gmail.com (Cody) Date: Fri, 31 Aug 2018 15:52:05 -0400 Subject: [Openstack-operators] [tripleo]Pacemaker in split controller mode In-Reply-To: <20180831190634.GA2221@holtby> References: <20180831190634.GA2221@holtby> Message-ID: Got it! Thank you, Michele. Cheers, Cody On Fri, Aug 31, 2018 at 3:07 PM Michele Baldessari wrote: > > Hi, > > On Fri, Aug 31, 2018 at 01:46:43PM -0400, Cody wrote: > > A quick question on TripleO. If I take any pacemaker managed services > > (e.g. database) from the monolithic controller role and put them onto > > another cluster, would that cluster be managed as a separate pacemaker > > cluster? > > No, if you split off any pcmk-managed services to a separate role they > will still be managed by a single pacemaker cluster. Since Ocata we have > composable HA roles, so you can split off DB/messaging/etc to separate > nodes (roles). They will be all part of a single cluster. > > cheers, > Michele > -- > Michele Baldessari > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From dave at opensourcesolutions.co.uk Fri Aug 31 20:23:43 2018 From: dave at opensourcesolutions.co.uk (Dave Williams) Date: Fri, 31 Aug 2018 21:23:43 +0100 Subject: [Openstack-operators] [kolla][ceph] Adding OSD's to production Ceph In-Reply-To: <20180828155139.GA3946@opensourcesolutions.co.uk> References: <20180828155139.GA3946@opensourcesolutions.co.uk> Message-ID: <20180831202343.GA1036@opensourcesolutions.co.uk> On 16:51, Tue 28 Aug 18, Dave Williams wrote: Sorry the email subject got lost. > What is the best practice for adding more Ceph OSD's to kolla-ansible > in a production environment? Does "deploy" do anything to the existing > data or does it simply add the OSD's (and potentially increase the placement > groups if re-configured)? > > "reconfigure" doesnt touch newly prepared disks from what I see from the > code which is where I was expecting this might have been undertaken. > > I am running kolla-ansible queens. > > Thanks > Dave > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From jonmills at gmail.com Fri Aug 31 21:15:45 2018 From: jonmills at gmail.com (jonmills at gmail.com) Date: Fri, 31 Aug 2018 17:15:45 -0400 Subject: [Openstack-operators] [cloudkitty] Anyone running Cloudkitty with SSL? Message-ID: <27c8f7b395ef4b468dc790d7ffadb869d8be7fa0.camel@gmail.com> Anyone out there have Cloudkitty successfully working with SSL? By which I mean that Cloudkitty is able to talk to keystone over https without cert errors, and also talk to SSL'd rabbitmq? Oh, and the client tools also? Asking for a friend... Jonathan From christophe.sauthier at objectif-libre.com Fri Aug 31 21:20:18 2018 From: christophe.sauthier at objectif-libre.com (Christophe Sauthier) Date: Fri, 31 Aug 2018 23:20:18 +0200 Subject: [Openstack-operators] =?utf-8?q?=5Bcloudkitty=5D_Anyone_running_C?= =?utf-8?q?loudkitty_with_SSL=3F?= In-Reply-To: <27c8f7b395ef4b468dc790d7ffadb869d8be7fa0.camel@gmail.com> References: <27c8f7b395ef4b468dc790d7ffadb869d8be7fa0.camel@gmail.com> Message-ID: Hello Jonathan Can you describe a little more your setup (release/method of installation/linux distribution) /issues that you are facing ? Because we have deployed it/used it many times with SSL without issue... It could be great also that you step up on #cloudkitty to discuss it. Christophe ---- Christophe Sauthier CEO Objectif Libre : Au service de votre Cloud +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com https://www.objectif-libre.com | @objectiflibre Recevez la Pause Cloud Et DevOps : https://olib.re/abo-pause Le 2018-08-31 23:15, jonmills at gmail.com a écrit : > Anyone out there have Cloudkitty successfully working with SSL? By > which I mean that Cloudkitty is able to talk to keystone over https > without cert errors, and also talk to SSL'd rabbitmq? Oh, and the > client tools also? > > Asking for a friend... > > > > Jonathan > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From jonmills at gmail.com Fri Aug 31 21:40:48 2018 From: jonmills at gmail.com (jonmills at gmail.com) Date: Fri, 31 Aug 2018 17:40:48 -0400 Subject: [Openstack-operators] [cloudkitty] Anyone running Cloudkitty with SSL? In-Reply-To: References: <27c8f7b395ef4b468dc790d7ffadb869d8be7fa0.camel@gmail.com> Message-ID: On Fri, 2018-08-31 at 23:20 +0200, Christophe Sauthier wrote: > Hello Jonathan > > Can you describe a little more your setup (release/method of > installation/linux distribution) /issues that you are facing ? It is OpenStack Queens, on CentOS 7.5, using the packages from the centos-cloud repo (which I suppose is the same is RDO). # uname -msr Linux 3.10.0-862.3.2.el7.x86_64 x86_64 # rpm -qa |grep cloudkitty |sort openstack-cloudkitty-api-7.0.0-1.el7.noarch openstack-cloudkitty-common-7.0.0-1.el7.noarch openstack-cloudkitty-processor-7.0.0-1.el7.noarch openstack-cloudkitty-ui-7.0.0-1.el7.noarch python2-cloudkittyclient-1.2.0-1.el7.noarch It is 'deployed' with custom puppet code only. I follow exactly the installation guides posted here: https://docs.openstack.org/cloudkitty/queens/index.html I'd prefer not to post full config files, but my [keystone_authtoken] section of cloudkitty.conf is identical (aside from service credentials) to the ones found in my glance, nova, cinder, neutron, gnocchi, ceilometer, etc, all of those services are working perfectly. My processor.log file is full of 2018-08-31 16:38:04.086 30471 WARNING cloudkitty.orchestrator [-] Error while collecting service network.floating: SSL exception connecting to https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",): SSLError: SSL exception connecting to https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",) 2018-08-31 16:38:04.094 30471 WARNING cloudkitty.orchestrator [-] Error while collecting service image: SSL exception connecting to https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",): SSLError: SSL exception connecting to https://keystone.gpcprod:5000/v3/auth/tokens: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",) and so on But, I mean, there's other little things too. I can see from running 'openstack --debug rating info-config-get' that it never even loads the cacert from my env, so it fails talking to keystone trying to get a token; the request never even gets to the cloudkitty api endpoint. > > Because we have deployed it/used it many times with SSL without > issue... > > It could be great also that you step up on #cloudkitty to discuss it. > > Christophe > > ---- > Christophe Sauthier > CEO > > Objectif Libre : Au service de votre Cloud > > +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com > > https://www.objectif-libre.com | @objectiflibre > Recevez la Pause Cloud Et DevOps : https://olib.re/abo-pause > > Le 2018-08-31 23:15, jonmills at gmail.com a écrit : > > Anyone out there have Cloudkitty successfully working with SSL? By > > which I mean that Cloudkitty is able to talk to keystone over https > > without cert errors, and also talk to SSL'd rabbitmq? Oh, and the > > client tools also? > > > > Asking for a friend... > > > > > > > > Jonathan > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators