From ueha.ayumu at fujitsu.com Tue Dec 1 00:25:12 2020 From: ueha.ayumu at fujitsu.com (ueha.ayumu at fujitsu.com) Date: Tue, 1 Dec 2020 00:25:12 +0000 Subject: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance In-Reply-To: References: <9aedf122-5ebb-e29a-d977-b58635cabe51@nokia.com> <1760a4b795a.10e369974791414.187901784754730298@ghanshyammann.com> Message-ID: Hi Bob and Sahdev I’m Ueha from tacker team. Thank you for reviewing my patch on the Victria release. Excuse me during the discussion about maintenance. I posted a new bug fix patch for policies validate. Could you review it? Thanks! https://bugs.launchpad.net/tosca-parser/+bug/1903233 https://review.opendev.org/c/openstack/tosca-parser/+/763144 Best regards, Ueha From: TAKAHASHI TOSHIAKI(高橋 敏明) Sent: Monday, November 30, 2020 6:09 PM To: Rico Lin ; openstack-discuss Subject: RE: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance Hi Rico, Thanks. OK, we’ll discuss with Bob to proceed with development of the projects. Regards, Toshiaki From: Rico Lin > Sent: Monday, November 30, 2020 4:34 PM To: openstack-discuss > Subject: Re: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance On Mon, Nov 30, 2020 at 11:06 AM TAKAHASHI TOSHIAKI(高橋 敏明) > wrote: > > Need to discuss with Heat, tc, etc.? > > And I'd like to continue to discuss other points such as cooperation with other members(Heat, or is there any users of those?). I don't think you need further discussion with tc as there still are ways for your patch to get reviewed, release package, or for you to join heat-translator-core team As we treat heat translator as a separated team, I'm definitely +1 on any decision from Bob. So not necessary to discuss with heat core team unless you find it difficult to achieve above tasks. I'm more than happy to provide help if needed. -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian at fleio.com Tue Dec 1 00:34:02 2020 From: adrian at fleio.com (Adrian Andreias) Date: Tue, 1 Dec 2020 02:34:02 +0200 Subject: [kolla] cannot pull docker public hub images Message-ID: Hi, None of the Kolla images on the Docker public registry are tagged. E.g. https://hub.docker.com/r/kolla/centos-source-nova-api Images don't even have the "latest" tag, though they were updated 15 hours ago. And therefore they can't be pulled: $ docker pull kolla/centos-source-nova-api Using default tag: latest Error response from daemon: manifest for kolla/centos-source-nova-api:latest not found: manifest unknown: manifest unknown Is there another public registry where ready-built Kolla images are available? Thanks Regards, Adrian Andreias https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Tue Dec 1 02:11:34 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 30 Nov 2020 20:11:34 -0600 Subject: [qa][all] pip 20.3 breaks jobs In-Reply-To: References: <3KAMKQ.IXCSM9FNMKU52@est.tech> Message-ID: <1761c11c09c.ed85f577896040.508223221179299179@ghanshyammann.com> ---- On Mon, 30 Nov 2020 10:43:47 -0600 Neil Jerram wrote ---- > On Mon, Nov 30, 2020 at 4:18 PM Balázs Gibizer wrote: > Hi, > > Today pip 20.3 is released[1] and jobs started failing during keystone > install in devstack with: > > 2020-11-30 15:14:31.117 | + inc/python:pip_install:193 : > sudo -H LC_ALL=en_US.UTF-8 SETUPTOOLS_USE_DISTUTILS=stdlib http_proxy= > https_proxy= no_proxy= PIP_FIND_LINKS= > SETUPTOOLS_SYS_PATH_TECHNIQUE=rewrite python3.6 -m pip install -c > /opt/stack/old/requirements/upper-constraints.txt -e > /opt/stack/old/neutron > 2020-11-30 15:14:32.271 | Looking in indexes: > https://mirror.gra1.ovh.opendev.org/pypi/simple, > https://mirror.gra1.ovh.opendev.org/wheel/ubuntu-18.04-x86_64 > 2020-11-30 15:14:32.272 | DEPRECATION: Constraints are only allowed to > take the form of a package name and a version specifier. Other forms > were originally permitted as an accident of the implementation, but > were undocumented. The new implementation of the resolver no longer > supports these forms. A possible replacement is replacing the > constraint with a requirement.. You can find discussion regarding this > at https://github.com/pypa/pip/issues/8210. > 2020-11-30 15:14:32.272 | ERROR: Links are not allowed as constraints > > Ah, snap! I just started another thread about this; sorry for not seeing your thread first. > As a workaround to unblock the gate, Elod proposed to cap the pip in devstack for now as we are not sure of how to fix and how much work needed and I agree with that step. Due to grenade jobs failure, we need to apply this workaround on all stable branches until Stable/ussuri. Stable/train and lower branches are already capped with pip10[1]. These are the patches proposed: - https://review.opendev.org/q/I1feed4573820436f91f8f654cc189fa3a21956fd Also, I filled a bug in devstack to track this further: https://bugs.launchpad.net/devstack/+bug/1906322 [1] https://github.com/openstack/devstack/blob/1b35581bb096883ceafbfeea286153eaec184c17/tools/install_pip.sh#L94 -gmann From whayutin at redhat.com Tue Dec 1 05:35:19 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 30 Nov 2020 22:35:19 -0700 Subject: [tripleo][ci] cm2 pip failures In-Reply-To: <20201130165512.omzmoiyrk2o5ohyo@yuggoth.org> References: <20201130165512.omzmoiyrk2o5ohyo@yuggoth.org> Message-ID: On Mon, Nov 30, 2020 at 10:00 AM Jeremy Stanley wrote: > On 2020-11-30 08:51:21 -0700 (-0700), Wesley Hayutin wrote: > > FYI.. > > > https://bugs.lauhttps://docs.google.com/spreadsheets/u/0/nchpad.net/tripleo/+bug/1906265 > > > I've updated the bug as well, but this may be useful information for > other projects too: Be aware that pip 20.3 was uploaded to PyPI > roughly 4 hours ago, and is the first release to enable the new > dependency solver, so you may want to check whether this is > reproducible with earlier pip. > -- > Jeremy Stanley > Thanks Jeremy, I think for our purposes we've cleared this one, but we will be reviewing requirements etc. 
TripleO - * https://bugs.launchpad.net/tripleo/+bug/1906265 is fix-released Note: we're still tracking and working through the following issues: * Container build job: periodic-tripleo-ci-build-containers-ubi-8-push is failing with dependencies issues cannot install crypto-policies/cyrus-sasl-lib and cyrus-sasl-lib https://bugs.launchpad.net/tripleo/+bug/1902846 * Container build reports SystemError but don't fail: https://bugs.launchpad.net/tripleo/+bug/1905348 * Scenario 001 and 004 failing with wrong ceph role name https://bugs.launchpad.net/tripleo/+bug/1905536 Thanks all! -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Tue Dec 1 06:45:32 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 1 Dec 2020 14:45:32 +0800 Subject: [kolla] cannot pull docker public hub images In-Reply-To: References: Message-ID: Hi Adrian, IME, there's no image tag for latest. Most of kolla images are using Openstack release name as tag. If you want to pull image with specific Openstack release, you should pull them with release name tag. E.g docker pull kolla/centos.source-nova-api:victoria Adrian Andreias 於 2020年12月1日 週二 上午8:39寫道: > Hi, > > None of the Kolla images on the Docker public registry are tagged. > > E.g. https://hub.docker.com/r/kolla/centos-source-nova-api > > Images don't even have the "latest" tag, though they were updated 15 hours > ago. > > And therefore they can't be pulled: > > $ docker pull kolla/centos-source-nova-api > Using default tag: latest > Error response from daemon: manifest for > kolla/centos-source-nova-api:latest not found: manifest unknown: manifest > unknown > > Is there another public registry where ready-built Kolla images are > available? > > > Thanks > > > > Regards, > Adrian Andreias > https://fleio.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.parquet at gandi.net Tue Dec 1 08:53:53 2020 From: nicolas.parquet at gandi.net (Nicolas Parquet) Date: Tue, 1 Dec 2020 09:53:53 +0100 Subject: [cinder] Project ID in cinder endpoints and system scopes In-Reply-To: <197a1857-4719-9691-f486-118c77bfe699@gmail.com> References: <5ded0a92-b6c0-8e58-667e-9184f2629f19@gandi.net> <197a1857-4719-9691-f486-118c77bfe699@gmail.com> Message-ID: Thank you for the explanation! I have linked it in the bug ticket for anyone having the same question :) Cheers, Nicolas On 11/30/20 4:03 PM, Brian Rosmaita wrote: > From the cinder side of the bug, there are two things going on here. > > (1) There's an ambiguity in the term "endpoint": it could be (a) the > base URL where the service can be contacted, or (b) the JSON "url" > element that shows up in the "endpoints" list of a service object in the > "catalog" list in the service catalog.  In sense (a), a project_id does > not occur in the Block Storage API endpoint. > > The Block Storage REST API, however, does require a project_id in most > of the URLs it recognizes.  Thinking of an "endpoint" in sense (a), > these look like: >   > where the parts are defined in the Block Storage API reference: >   https://docs.openstack.org/api-ref/block-storage/ > > When you look at the API reference, you'll see that for almost all > calls, the Block Storage API requires a version indicator and project_id > in the path.  So if you leave these out, the service cannot resolve the > URL and returns a 404. > > (2) Cinder doesn't support recognition of token scope in any released > versions; we're working on it for Wallaby.  
(There's limited support in > Victoria, but only for the default-types API.) > > > cheers, > brian > > On 11/30/20 4:35 AM, Nicolas Parquet wrote: >> Hello there, >> >> Wondering if anyone has ongoing work or information on the following >> bug: https://bugs.launchpad.net/keystone/+bug/1745905 >> >> We tried removing the project_id from cinder endpoints in order to test >> system scopes and see if more is needed than oslo policy configuration, >> but cannot get anything else than 404 responses. >> >> The bug description suggests it is just a documentation issue, but I >> could not get a working setup. I also don't this mentioned in ptg >> documentation regarding system scopes support. >> >> Any information or hint is welcome! >> >> Regards, >> >> -- >> Nicolas Parquet >> Gandi >> nicolas.parquet at gandi.net >> > > -- Nicolas Parquet Gandi nicolas.parquet at gandi.net From stephenfin at redhat.com Tue Dec 1 10:30:54 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 01 Dec 2020 10:30:54 +0000 Subject: [ops][nova][designate] Does anyone rely on fully-qualified instance names? In-Reply-To: References: <20201130135830.q3fps4dwkxdiwngu@yuggoth.org> <20201130165039.gebc7aaxm4tw3czu@yuggoth.org> <20201130181655.tyzvbvizfbokw72q@yuggoth.org> Message-ID: On Mon, 2020-11-30 at 19:50 +0000, Sean Mooney wrote: > On Mon, 2020-11-30 at 18:16 +0000, Jeremy Stanley wrote: > > On 2020-11-30 10:13:00 -0800 (-0800), Michael Johnson wrote: > > [...] > > > So, I think I am in alignment with Sean. The hostname component should > > > return a 400 if there is an illegal character in the string, such as a > > > period. We also should use Punycode to store/handle unicode hostnames > > > correctly. > > [...] > > > > So to be clear, you're suggesting that if someone asks for an > > instance name which can't be converted to a (legal) hostname, then > > the nova boot call be rejected outright, even though there are > > likely plenty of people who 1. (manually) align their instance names > > with FQDNs, and 2. don't use the hostname metadata anyway? > unfortunetly we cant do that even if its the right thing to do technically. > > i think the path forward has to be something more like this. > > 1.) add a new workaround config option e.g. disable_strict_server_name_check. >     for upgrade reason it would default to true in wallaby >     when strict server name checking is disabled we will transform any malformed >     numeric tld by replacing the '.' with '-' or by replacing the hostname with server- as we do >     with unicode hostnames. This sounds like config-driven API behavior, in that the API will respond differently to the same request on different clouds (an 'ubuntu18.04' server name will work on one cloud and fail on the other). That's generally a big no- no. What makes this different? > 2.) add a new api micro versions and do one of: >    a.) reject servers with invlaid server names that contain '.' with a 400 >    b.) transform server names according to the RFEs (replace all '.' and other disallowed charaters with -) >    c.) add support for FQDNs to the api. > this coudl be something like adding a new field to store the fqdn as a top level filed on the server. >         make hostname contain jsut the host name with the full FQDN in the new fqdn field. >         if the server name is an fqdn the the fqdn field would just be the server name. 
>         if the server name is the a hostname then the nova dhcp_domain will be appended to servername >         this will allow the remvoal of dhcp_domain from the compute node for config driver generateion and >         we can generate the metadat form teh new fqdn filed. >         if designate is enabled then the fqdn will be taken form the port info. >         in the metadata we will store the instance.hostname which will never be an fqdn in all local hostname keys. >         we can store teh fqdn in the public_hostname key in the ec2 metadata and in a new fqdn filed. >         this will make the values consitent and useful. >         with the new microversion we will nolonger transform the hostname except for multi-create where it will be used >         as a template i.e. - >         TBD if the new micorversion will continue to transform unicode hostname to server- or allow them out of scope for now. This sounds great, and I agree(d) that it's ultimately the correct solution [1], but it doesn't solve anything for users that try the following on any release to date when Designate is configured: openstack server create ... ubuntu18.04 We need to fix this in a backportable manner, hence my suggestion to simply rewrite hostnames if we detect that they're still invalid after sanitization. Remember, we already do sanitization, and with this people that are using valid FQDN like Ruby or Jeremy can continue to do so and people that don't know about this "feature" are able to create instances. I don't see what the downside of that would be. Stephen [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-11-27.log.html#t2020-11-27T17:43:54 > 3.) change workaround config option default to false enforcing the new behavior for old micorverions but do not remove it. >     this will enable all vm even with old clinets to have to correct behavior. of requiring the hostname to be an actul hostname >     we will not remove this workaround option going forward allowing cloud that want the old behavior to contiue to have it but >     endusers can realy on the consitent behavior by opting in to the new microverion if they have a perference. > > this is a log way to say that i think we need a spec for the new api behavior that adds support for FQDN offically > whatever we decide we need to document the actually expected behavior in both the api refernce and general server careate > doumentation. > > if we backport any part of this i think the new behavior should be disabled by default e.g. transfroming the numeric top level domain > so that the api behavior continues to be unaffected unless operators opt in to enabling it. > > > toughts? > > From skaplons at redhat.com Tue Dec 1 10:45:22 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 1 Dec 2020 11:45:22 +0100 Subject: [neutron] bug deputy report Nov 23-29 In-Reply-To: <6f87beef29314934ba65186d42ccb8b6@huawei.com> References: <6f87beef29314934ba65186d42ccb8b6@huawei.com> Message-ID: <20201201104522.wlbtqzryl3z6kqnf@p1.localdomain> On Mon, Nov 30, 2020 at 02:17:47PM +0000, Oleg Bondarev wrote: > Please find new bugs summary for the week Nov 23-27. > Nothing critical. > 2 RFEs for neutron drivers team to discuss. > > High: > > - https://bugs.launchpad.net/neutron/+bug/1905700 - [OVN] ovn-metadata-agent: "RowNotFound: Cannot find Chassis with name..." 
when starting the agent > - Gate failure > - fix approved: https://review.opendev.org/c/openstack/neutron/+/764318 > > - https://bugs.launchpad.net/neutron/+bug/1905726 - Qos plugin performs too many queries > - fixes on review: > - https://review.opendev.org/c/openstack/neutron/+/764433 > - https://review.opendev.org/c/openstack/neutron/+/764454 > > Medium: > > - https://bugs.launchpad.net/neutron/+bug/1905271 - [OVS] Polling cycle continuously interrupted by L2 population (when enabled) > - fix on review: https://review.opendev.org/c/openstack/neutron/+/755313 > > - https://bugs.launchpad.net/neutron/+bug/1905551 - functional: test_gateway_chassis_rebalance fails > - Unassigned > > - https://bugs.launchpad.net/neutron/+bug/1905568 - Sanity checks missing port_name while adding tunnel port > - fix approved: https://review.opendev.org/c/openstack/neutron/+/764171 > > - https://bugs.launchpad.net/neutron/+bug/1905611 - OVN.ovsdb_probe_interval takes effect only after initial database dump > - Unassigned This one is actually in progress. Terry pushed patch for it: https://review.opendev.org/c/openstack/neutron/+/764235 I updated LP accordingly. > > Low: > > - https://bugs.launchpad.net/neutron/+bug/1905538 - Some OVS bridges may lack OpenFlow10 protocol > - fix on review: https://review.opendev.org/c/openstack/neutron/+/764150 > > Wishlist: > > - https://bugs.launchpad.net/neutron/+bug/1905276 - Overriding hypervisor name for resource provider always requires a complete list of interfaces/bridges > - fix on review - https://review.opendev.org/c/openstack/neutron/+/763563 > > - https://bugs.launchpad.net/neutron/+bug/1905268 - port list performance for trunks can be optimized > - fix on review - https://review.opendev.org/c/openstack/neutron/+/763777 > > RFEs: > > - https://bugs.launchpad.net/neutron/+bug/1905295 - [RFE] Allow multiple external gateways on a router > > - https://bugs.launchpad.net/neutron/+bug/1905391 - [RFE] VPNaaS support for OVN > > Won't fix: > > - https://bugs.launchpad.net/neutron/+bug/1905552 - neutron-fwaas netlink conntrack driver would catch error while conntrack rules protocol is 'unknown' > - fwaas is not supported > > Expired Bugs: > > - https://bugs.launchpad.net/bugs/1896592 - [neutron-tempest-plugin] test_dhcpv6_stateless_* clashing when creating a IPv6 subnet > > - https://bugs.launchpad.net/bugs/1853632 - designate dns driver does not use domain settings for auth > > > Thanks, > Oleg > --- > Advanced Software Technology Lab > Huawei -- Slawek Kaplonski Principal Software Engineer Red Hat From adrian at fleio.com Tue Dec 1 10:52:12 2020 From: adrian at fleio.com (Adrian Andreias) Date: Tue, 1 Dec 2020 12:52:12 +0200 Subject: [kolla] cannot pull docker public hub images In-Reply-To: References: Message-ID: Hi Eddie, That makes sense and works, indeed: docker pull kolla/centos-source-nova-api:victoria I was seeing no tags here the other day: https://hub.docker.com/r/kolla/centos-source-nova-api/tags?page=1&ordering=last_updated I can see the tags now. It probably just took too long for the tags to load. Thanks Regards, Adrian Andreias https://fleio.com On Tue, Dec 1, 2020 at 8:45 AM Eddie Yen wrote: > Hi Adrian, > > IME, there's no image tag for latest. Most of kolla images are using > Openstack release name as tag. > If you want to pull image with specific Openstack release, you should pull > them with release name tag. 
> > E.g docker pull kolla/centos.source-nova-api:victoria > > Adrian Andreias 於 2020年12月1日 週二 上午8:39寫道: > >> Hi, >> >> None of the Kolla images on the Docker public registry are tagged. >> >> E.g. https://hub.docker.com/r/kolla/centos-source-nova-api >> >> Images don't even have the "latest" tag, though they were updated 15 >> hours ago. >> >> And therefore they can't be pulled: >> >> $ docker pull kolla/centos-source-nova-api >> Using default tag: latest >> Error response from daemon: manifest for >> kolla/centos-source-nova-api:latest not found: manifest unknown: manifest >> unknown >> >> Is there another public registry where ready-built Kolla images are >> available? >> >> >> Thanks >> >> >> >> Regards, >> Adrian Andreias >> https://fleio.com >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Dec 1 10:59:00 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 1 Dec 2020 11:59:00 +0100 Subject: [neutron] Team meeting agenda Message-ID: <20201201105900.uihargeac7hd2brr@p1.localdomain> Hi neutrinos, Just quick reminder that today at 2pm UTC we will have out weekly team meeting. Agenda is available at [1]. If You have anything to discuss there, please add it to the "On demand" section there. [1] https://wiki.openstack.org/wiki/Network/Meetings -- Slawek Kaplonski Principal Software Engineer Red Hat From smooney at redhat.com Tue Dec 1 12:21:15 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 01 Dec 2020 12:21:15 +0000 Subject: [ops][nova][designate] Does anyone rely on fully-qualified instance names? In-Reply-To: References: <20201130135830.q3fps4dwkxdiwngu@yuggoth.org> <20201130165039.gebc7aaxm4tw3czu@yuggoth.org> <20201130181655.tyzvbvizfbokw72q@yuggoth.org> Message-ID: On Tue, 2020-12-01 at 10:30 +0000, Stephen Finucane wrote: > On Mon, 2020-11-30 at 19:50 +0000, Sean Mooney wrote: > > On Mon, 2020-11-30 at 18:16 +0000, Jeremy Stanley wrote: > > > On 2020-11-30 10:13:00 -0800 (-0800), Michael Johnson wrote: > > > [...] > > > > So, I think I am in alignment with Sean. The hostname component should > > > > return a 400 if there is an illegal character in the string, such as a > > > > period. We also should use Punycode to store/handle unicode hostnames > > > > correctly. > > > [...] > > > > > > So to be clear, you're suggesting that if someone asks for an > > > instance name which can't be converted to a (legal) hostname, then > > > the nova boot call be rejected outright, even though there are > > > likely plenty of people who 1. (manually) align their instance names > > > with FQDNs, and 2. don't use the hostname metadata anyway? > > unfortunetly we cant do that even if its the right thing to do technically. > > > > i think the path forward has to be something more like this. > > > > 1.) add a new workaround config option e.g. disable_strict_server_name_check. > >     for upgrade reason it would default to true in wallaby > >     when strict server name checking is disabled we will transform any malformed > >     numeric tld by replacing the '.' with '-' or by replacing the hostname with server- as we do > >     with unicode hostnames. > > This sounds like config-driven API behavior, in that the API will respond > differently to the same request on different clouds (an 'ubuntu18.04' server > name will work on one cloud and fail on the other). That's generally a big no- > no. What makes this different? this is the backportable bit where we allow the numeric tlds if you opt into it. 
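to make that concrete, the opt-in would be something along these lines in nova.conf on the api nodes (the option name here is just the one proposed in this thread, it does not exist in any release today):

[workarounds]
# proposed opt-in only, not an existing nova option
disable_strict_server_name_check = true
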
it is config driven api behavior but it is the only way i think its valid to backport a behavioral api change. we should not do that without a way to disable it hence a workaround config option. > > > 2.) add a new api micro versions and do one of: > >    a.) reject servers with invlaid server names that contain '.' with a 400 > >    b.) transform server names according to the RFEs (replace all '.' and other disallowed charaters with -) > >    c.) add support for FQDNs to the api. > > this coudl be something like adding a new field to store the fqdn as a top level filed on the server. > >         make hostname contain jsut the host name with the full FQDN in the new fqdn field. > >         if the server name is an fqdn the the fqdn field would just be the server name. > >         if the server name is the a hostname then the nova dhcp_domain will be appended to servername > >         this will allow the remvoal of dhcp_domain from the compute node for config driver generateion and > >         we can generate the metadat form teh new fqdn filed. > >         if designate is enabled then the fqdn will be taken form the port info. > >         in the metadata we will store the instance.hostname which will never be an fqdn in all local hostname keys. > >         we can store teh fqdn in the public_hostname key in the ec2 metadata and in a new fqdn filed. > >         this will make the values consitent and useful. > >         with the new microversion we will nolonger transform the hostname except for multi-create where it will be used > >         as a template i.e. - > >         TBD if the new micorversion will continue to transform unicode hostname to server- or allow them out of scope for now. > > This sounds great, and I agree(d) that it's ultimately the correct solution [1], > but it doesn't solve anything for users that try the following on any release to > date when Designate is configured: > >   openstack server create ... ubuntu18.04 > > We need to fix this in a backportable manner, hence my suggestion to simply > rewrite hostnames if we detect that they're still invalid after sanitization. well im not sure we do. ubuntu18.04 would have always failed regardelss of if you have designate or not. numeric tld have never worked it was one of the first things i learned when i started working on hevana so i dont think we have to "fix" that in a backportable way, that is not new with designate, but if we must then we can do so with the workaround option. althouht melanie is right we should name it enable_strict_server_name_checking=false rather than disable_strict_server_name_check=true to follow convention. > Remember, we already do sanitization, and with this people that are using valid > FQDN like Ruby or Jeremy can continue to do so and people that don't know about > this "feature" are able to create instances. I don't see what the downside of > that would be. un less we also adress the fact that the FQDN you enter will not be available to the instance in any way i really dont see the point in just allowing it to be set. the server will be presented with 4 different host names none of which will be the one that you entered as i pointed out in my previous mail. the only reason you can even ping it is because of the dns search domain if that is not congired the the server name will not resolve at all unless you also register that dns name manually or set it in /etc/hosts after teh fact. 
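for example, roughly (the zone, record name and address below are made up purely for illustration):

openstack recordset create example.org. ubuntu18.04.example.org. --type A --record 203.0.113.10

or simply:

echo "203.0.113.10 ubuntu18.04" >> /etc/hosts
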
you can do both of those without actully setting the server name to match so im not seeing a compleing reason to allow numeric tlds Ruby's and Jeremy's use cases condered given numeric tlds never worked it should not affect them provided we contiue to allow fqdns in the server name while enable_strict_server_name_checking=false > > Stephen > > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-11-27.log.html#t2020-11-27T17:43:54 > > > 3.) change workaround config option default to false enforcing the new behavior for old micorverions but do not remove it. > >     this will enable all vm even with old clinets to have to correct behavior. of requiring the hostname to be an actul hostname > >     we will not remove this workaround option going forward allowing cloud that want the old behavior to contiue to have it but > >     endusers can realy on the consitent behavior by opting in to the new microverion if they have a perference. > > > > this is a log way to say that i think we need a spec for the new api behavior that adds support for FQDN offically > > whatever we decide we need to document the actually expected behavior in both the api refernce and general server careate > > doumentation. > > > > if we backport any part of this i think the new behavior should be disabled by default e.g. transfroming the numeric top level domain > > so that the api behavior continues to be unaffected unless operators opt in to enabling it. > > > > > > toughts? > > > > > > > From mnaser at vexxhost.com Tue Dec 1 15:24:01 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 1 Dec 2020 10:24:01 -0500 Subject: [tc] weekly update + meeting Message-ID: Hi everyone, Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # Patches ## Open Reviews - Remove Qinling project team https://review.opendev.org/c/openstack/governance/+/764523 - Clarify the requirements for supports-api-interoperability https://review.opendev.org/c/openstack/governance/+/760562 - Add Resolution of TC stance on the OpenStackClient https://review.opendev.org/c/openstack/governance/+/759904 - Remove Searchlight project team https://review.opendev.org/c/openstack/governance/+/764530 - Update example and oslo code usage in JSON->YAML goal https://review.opendev.org/c/openstack/governance/+/764261 - Remove already done use-builtin-mock from goal https://review.opendev.org/c/openstack/governance/+/764262 - Generate the TC liaisons assignments https://review.opendev.org/c/openstack/governance/+/763810 - Add assert:supports-standalone https://review.opendev.org/c/openstack/governance/+/722399 - Remove assert_supports-zero-downtime-upgrade tag https://review.opendev.org/c/openstack/governance/+/761975 - Clarify impact on releases for SIGs https://review.opendev.org/c/openstack/governance/+/752699## General Changes - Correct Dan Smith IRC nick in TC liaisons https://review.opendev.org/c/openstack/governance/+/763809 - Propose Kendall Nelson for vice chair https://review.opendev.org/c/openstack/governance/+/762014 - Add election schedule exceptions in charter https://review.opendev.org/c/openstack/governance/+/751941 ## Project Updates - Add Magpie charm to OpenStack charms https://review.opendev.org/c/openstack/governance/+/762820 ## Other Reminders - [TC] Weekly meeting December 03. 
If you would like to add topics for discussion, please go to https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting and fill out your suggestions by Wednesday, December 02, at 2100 UTC. Thanks for reading! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. From ankelezhang at gmail.com Tue Dec 1 06:53:56 2020 From: ankelezhang at gmail.com (Ankele zhang) Date: Tue, 1 Dec 2020 14:53:56 +0800 Subject: nova novnc timeout In-Reply-To: <5f1f6998-48ef-e065-bd81-9b92ccdcf4f3@gmail.com> References: <5f1f6998-48ef-e065-bd81-9b92ccdcf4f3@gmail.com> Message-ID: Thank you very much! Melanie I should read the Nova installation documentation carefully. melanie witt 于2020年11月20日周五 上午5:25写道: > On 11/19/20 01:35, Ankele zhang wrote: > > Hello~ > > I have a OpenStack Rocky platform. My nova.cfg has configured > > "[consoleauth] token_ttl=360000 [workarounds] enable_consoleauth=true", > > I get the console url and access my VM console in web. the console url > > invalid after two or one minutes not 360000s. > > How can I resolve this? > > Look forward to hearing from you. > > Hi Ankele, > > I'm sure you have already read this but for reference, this is the blurb > in the release notes around the console proxy changes [1]. Note that the > [workarounds]enable_consoleauth option has been removed in the Train > release, so to avoid interruptions in consoles during an upgrade to > Train, you must ensure your deployment has fully migrated to the new > per-cell console proxy model in Rocky or Stein. > > In Rocky, console token auths are stored in the cell database(s) (new > way) and if [workarounds]enable_consoleauth=true on the nova-api nodes, > they are additionally stored in the nova-consoleauth service (old way). > Then, on the console proxy side, if [workarounds]enable_consoleauth=true > on the nova-novncproxy nodes, the proxy will first try to validate the > token in the nova-consoleauth service (old way) and if that's not > successful, it will fall back to contacting the cell database to > validate the token (new way). In order for it to succeed at validating > the token in the cell database, the nova-novncproxy needs to be deployed > per cell and have access to the cell database [database]connection. > > If you need to use nova-consoleauth to transition to the > database-backend model, you must set > [workarounds]enable_consoleauth=true on both the nova-novncproxy nodes > (for token validation) and the nova-api nodes (for token auth storage in > the old way). The [consoleauth]token_ttl option needs to be set to the > value you desire on both the nova-consoleauth nodes (old way) and > nova-compute nodes (new way). > > So, I suspect the issue is you need to set the aforementioned config > options on nodes where you don't yet have them set. > > To transition to the new way without console interruption, you will need > to (1) deploy nova-novncproxy services to each of your cells and make > sure they have [database]connection set to the corresponding cell > database, (2) wait until all token auths generated before Rocky are > expired, (3) set [workarounds]enable_consoleauth=false on > nova-novncproxy and nova-api nodes, (4) remove the nova-consoleauth > service from your deployment. > > Hope this helps, > -melanie > > [1] > > https://docs.openstack.org/releasenotes/nova/rocky.html#relnotes-18-0-0-stable-rocky-upgrade-notes > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Akshay.346 at hsc.com Tue Dec 1 12:16:09 2020 From: Akshay.346 at hsc.com (Akshay 346) Date: Tue, 1 Dec 2020 12:16:09 +0000 Subject: IRONIC BAREMETAL partitioned image not spawning. Message-ID: Hello Team, I hope you all are keeping safe and doing good. I have a OpenStack setup up and running with ironic enabled. I am able to spawn a bare metal node with centos8.2 whole-disk image but when I spawn bare metal with partitioned image, it fails to spawn it stating the following trace ( Full trace back attached in the email): "2020-12-01 12:35:34.603 1 ERROR ironic.conductor.utils [req-0d3a375e-90a3-485d-a7c3-895c97c88006 - - - - -] Deploy failed for instance 463bd844-7f80-4c57-98dd-6ca8cfdb6121. Error: [Errno 2] No such file or directory: 'sgdisk': 'sgdisk': FileNotFoundError: [Errno 2] No such file or directory: 'sgdisk': 'sgdisk'". I have also installed sgdisk (yum install gdisk) on the qcow2 from which I build the partitioned image. Can anyone please guide how to debug it? Regards Akshay DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ironic_openstack_partitioned_image_error Type: application/octet-stream Size: 7400 bytes Desc: ironic_openstack_partitioned_image_error URL: From pkliczew at redhat.com Tue Dec 1 17:40:11 2020 From: pkliczew at redhat.com (Piotr Kliczewski) Date: Tue, 1 Dec 2020 18:40:11 +0100 Subject: [Openstack][CFP] Virtualization & IaaS Devroom Message-ID: We are excited to announce that the call for proposals is now open for the Virtualization & IaaS devroom at the upcoming FOSDEM 2021, to be hosted virtually on February 6th 2021. This year will mark FOSDEM’s 21th anniversary as one of the longest-running free and open source software developer events, attracting thousands of developers and users from all over the world. Due to Covid-19, FOSDEM will be held virtually this year on February 6th & 7th, 2021. About the Devroom The Virtualization & IaaS devroom will feature session topics such as open source hypervisors and virtual machine managers such as Xen Project, KVM, bhyve, and VirtualBox, and Infrastructure-as-a-Service projects such as KubeVirt, Apache CloudStack, Foreman, OpenStack, oVirt, QEMU and OpenNebula. This devroom will host presentations that focus on topics of shared interest, such as KVM; libvirt; shared storage; virtualized networking; cloud security; clustering and high availability; interfacing with multiple hypervisors; hyperconverged deployments; and scaling across hundreds or thousands of servers. Presentations in this devroom will be aimed at users or developers working on these platforms who are looking to collaborate and improve shared infrastructure or solve common problems. 
We seek topics that encourage dialog between projects and continued work post-FOSDEM. Important Dates Submission deadline: 20th of December Acceptance notifications: 25th of December Final schedule announcement: 31st of December Recorded presentations upload deadline: 15th of January Devroom: 6th February 2021 Submit Your Proposal All submissions must be made via the Pentabarf event planning site[1]. If you have not used Pentabarf before, you will need to create an account. If you submitted proposals for FOSDEM in previous years, you can use your existing account. After creating the account, select Create Event to start the submission process. Make sure to select Virtualization and IaaS devroom from the Track list. Please fill out all the required fields, and provide a meaningful abstract and description of your proposed session. Submission Guidelines We expect more proposals than we can possibly accept, so it is vitally important that you submit your proposal on or before the deadline. Late submissions are unlikely to be considered. All presentation slots are 30 minutes, with 20 minutes planned for presentations, and 10 minutes for Q&A. All presentations will need to be pre-recorded and put into our system at least a couple of weeks before the event. The presentations should be uploaded by 15th of January and made available under Creative Commons licenses. In the Submission notes field, please indicate that you agree that your presentation will be licensed under the CC-By-SA-4.0 or CC-By-4.0 license and that you agree to have your presentation recorded. For example: "If my presentation is accepted for FOSDEM, I hereby agree to license all recordings, slides, and other associated materials under the Creative Commons Attribution Share-Alike 4.0 International License. Sincerely, ." In the Submission notes field, please also confirm that if your talk is accepted, you will be able to attend the virtual FOSDEM event for the Q&A. We will not consider proposals from prospective speakers who are unsure whether they will be able to attend the FOSDEM virtual event. If you are experiencing problems with Pentabarf, the proposal submission interface, or have other questions, you can email our devroom mailing list[2] and we will try to help you. Code of Conduct Following the release of the updated code of conduct for FOSDEM, we'd like to remind all speakers and attendees that all of the presentations and discussions in our devroom are held under the guidelines set in the CoC and we expect attendees, speakers, and volunteers to follow the CoC at all times. If you submit a proposal and it is accepted, you will be required to confirm that you accept the FOSDEM CoC. If you have any questions about the CoC or wish to have one of the devroom organizers review your presentation slides or any other content for CoC compliance, please email us and we will do our best to assist you. Call for Volunteers We are also looking for volunteers to help run the devroom. We need assistance with helping speakers to record the presentation as well as helping with streaming and chat moderation for the devroom. Please contact devroom mailing list [2] for more information. Questions? If you have any questions about this devroom, please send your questions to our devroom mailing list. You can also subscribe to the list to receive updates about important dates, session announcements, and to connect with other attendees. See you all at FOSDEM! 
[1] https://penta.fosdem .org/submission/FOSDEM21 [2] iaas-virt-devroom at lists.fosdem.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Dec 1 22:55:41 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 1 Dec 2020 14:55:41 -0800 Subject: IRONIC BAREMETAL partitioned image not spawning. In-Reply-To: References: Message-ID: Greetings, If your deploy_interface is set to "iscsi", then sgdisk needs to be installed and available to the ironic-conductor process and user. If your deploy_interface is set to "direct", then sgdisk needs to be present with-in the ironic-python-agent ramdisk. The error you pasted into your email suggests, and the logs confirm that you're using the iscsi deployment interface. You will need to ensure your conductor process can execute sgdisk and it is available with-in the environment PATH that the ironic-conductor process is executing with-in. This is not with-in the qcow you're deploying as the commands are executed remotely before the image is written in this case. -Julia On Tue, Dec 1, 2020 at 10:19 AM Akshay 346 wrote: > > Hello Team, > > > > I hope you all are keeping safe and doing good. > > > > I have a OpenStack setup up and running with ironic enabled. > > I am able to spawn a bare metal node with centos8.2 whole-disk image but when I spawn bare metal with partitioned image, it fails to spawn it stating the following trace ( Full trace back attached in the email): > > > > “2020-12-01 12:35:34.603 1 ERROR ironic.conductor.utils [req-0d3a375e-90a3-485d-a7c3-895c97c88006 - - - - -] Deploy failed for instance 463bd844-7f80-4c57-98dd-6ca8cfdb6121. Error: [Errno 2] No such file or directory: 'sgdisk': 'sgdisk': FileNotFoundError: [Errno 2] No such file or directory: 'sgdisk': 'sgdisk'”. > > > > I have also installed sgdisk (yum install gdisk) on the qcow2 from which I build the partitioned image. > > > > Can anyone please guide how to debug it? > > > > Regards > > Akshay > > > > DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. From helena at openstack.org Tue Dec 1 22:57:21 2020 From: helena at openstack.org (helena at openstack.org) Date: Tue, 1 Dec 2020 17:57:21 -0500 (EST) Subject: [ptl] Victoria Release Community Meeting Videos Message-ID: <1606863441.140329242@apps.rackspace.com> Hi Everyone, Thank you to all the PTLs who presented and all the community members who attended the Victoria Release Community Meeting last November! 
As mentioned before we uploaded the presentations from the community meeting to the project navigator for each project that was presented on ([ Glance ]( https://www.openstack.org/software/releases/victoria/components/glance ), [ Cinder ]( https://www.openstack.org/software/releases/victoria/components/cinder ), [ Manila ]( https://www.openstack.org/software/releases/victoria/components/manila ), [ Nova ]( https://www.openstack.org/software/releases/victoria/components/nova ), [ Masakri ]( https://www.openstack.org/software/releases/victoria/components/masakari ), [ Neutron ]( https://www.openstack.org/software/releases/victoria/components/neutron )). You can also find the full community meeting and all the individual videos on the “[ Community Meetings ]( https://www.youtube.com/playlist?list=PLKqaoAnDyfgpYADSiOfIVwgKb5zbL0GJE )” playlist on YouTube. If you are a PTL interested in still creating a video to add to YouTube and the Project Navigator, you may do so and email it to me. Cheers, Helena -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Wed Dec 2 00:20:41 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Wed, 2 Dec 2020 08:20:41 +0800 Subject: [kolla] cannot pull docker public hub images In-Reply-To: References: Message-ID: The moment that you couldn't find any tags in kolla images might be the generator is pushing. Kolla will generate then push images to Docker Hub. And because this mechanism will working in multiple releases at the same time, it's normal that not showing up image tags in Docker Tag web site when in progress. Adrian Andreias 於 2020年12月1日 週二 下午6:58寫道: > Hi Eddie, > > That makes sense and works, indeed: > > docker pull kolla/centos-source-nova-api:victoria > > I was seeing no tags here the other day: > > https://hub.docker.com/r/kolla/centos-source-nova-api/tags?page=1&ordering=last_updated > > I can see the tags now. It probably just took too long for the tags to > load. > > Thanks > > > Regards, > Adrian Andreias > https://fleio.com > > > > > On Tue, Dec 1, 2020 at 8:45 AM Eddie Yen wrote: > >> Hi Adrian, >> >> IME, there's no image tag for latest. Most of kolla images are using >> Openstack release name as tag. >> If you want to pull image with specific Openstack release, you should >> pull them with release name tag. >> >> E.g docker pull kolla/centos.source-nova-api:victoria >> >> Adrian Andreias 於 2020年12月1日 週二 上午8:39寫道: >> >>> Hi, >>> >>> None of the Kolla images on the Docker public registry are tagged. >>> >>> E.g. https://hub.docker.com/r/kolla/centos-source-nova-api >>> >>> Images don't even have the "latest" tag, though they were updated 15 >>> hours ago. >>> >>> And therefore they can't be pulled: >>> >>> $ docker pull kolla/centos-source-nova-api >>> Using default tag: latest >>> Error response from daemon: manifest for >>> kolla/centos-source-nova-api:latest not found: manifest unknown: manifest >>> unknown >>> >>> Is there another public registry where ready-built Kolla images are >>> available? >>> >>> >>> Thanks >>> >>> >>> >>> Regards, >>> Adrian Andreias >>> https://fleio.com >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yoshito.itou.dr at hco.ntt.co.jp Wed Dec 2 07:20:23 2020 From: yoshito.itou.dr at hco.ntt.co.jp (Yoshito Ito) Date: Wed, 02 Dec 2020 16:20:23 +0900 Subject: [tacker] Please check my slide for the discussion with ETSI-NFV Message-ID: <38ccb7b9-52bd-b95a-b2f9-5509165d49ee@hco.ntt.co.jp_1> Hi tacker team, As I mentioned in the previous IRC meeting, I uploaded my slide [1] for the discussion with ETSI NFV members. The meeting will be held on 4th Dec so I hope you to put your comments or questions on the etherpad [2] soon. [1] https://www2.slideshare.net/secret/zQLlzi7wwJ5FGQ [2] https://etherpad.opendev.org/p/tacker-meeting Thanks, Yoshito Ito From radoslaw.piliszek at gmail.com Wed Dec 2 07:26:22 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 2 Dec 2020 08:26:22 +0100 Subject: [ptl] Victoria Release Community Meeting Videos In-Reply-To: <1606863441.140329242@apps.rackspace.com> References: <1606863441.140329242@apps.rackspace.com> Message-ID: On Wed, Dec 2, 2020 at 12:00 AM helena at openstack.org wrote: > > Hi Everyone, > Hi Helena, > > As mentioned before we uploaded the presentations from the community meeting to the project navigator for each project that was presented on (Glance, Cinder, Manila, Nova, Masakri, Neutron). You can also find the full community meeting and all the individual videos on the “Community Meetings” playlist on YouTube. > Thank you, I can see them on YouTube. However, I am unable to find them in the project navigator. Where should I be looking? Kind regards, -yoctozepto From radoslaw.piliszek at gmail.com Wed Dec 2 07:27:53 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 2 Dec 2020 08:27:53 +0100 Subject: [kolla] cannot pull docker public hub images In-Reply-To: References: Message-ID: It is not really that. It seems Docker Hub has its worse days and I found myself confused on our or others images from time to time (that they presumably lack expected tags). -yoctozepto On Wed, Dec 2, 2020 at 1:28 AM Eddie Yen wrote: > > The moment that you couldn't find any tags in kolla images might be the > generator is pushing. Kolla will generate then push images to Docker > Hub. And because this mechanism will working in multiple releases at > the same time, it's normal that not showing up image tags in Docker Tag > web site when in progress. > > > Adrian Andreias 於 2020年12月1日 週二 下午6:58寫道: >> >> Hi Eddie, >> >> That makes sense and works, indeed: >> >> docker pull kolla/centos-source-nova-api:victoria >> >> I was seeing no tags here the other day: >> https://hub.docker.com/r/kolla/centos-source-nova-api/tags?page=1&ordering=last_updated >> >> I can see the tags now. It probably just took too long for the tags to load. >> >> Thanks >> >> >> Regards, >> Adrian Andreias >> https://fleio.com >> >> >> >> >> On Tue, Dec 1, 2020 at 8:45 AM Eddie Yen wrote: >>> >>> Hi Adrian, >>> >>> IME, there's no image tag for latest. Most of kolla images are using Openstack release name as tag. >>> If you want to pull image with specific Openstack release, you should pull them with release name tag. >>> >>> E.g docker pull kolla/centos.source-nova-api:victoria >>> >>> Adrian Andreias 於 2020年12月1日 週二 上午8:39寫道: >>>> >>>> Hi, >>>> >>>> None of the Kolla images on the Docker public registry are tagged. >>>> >>>> E.g. https://hub.docker.com/r/kolla/centos-source-nova-api >>>> >>>> Images don't even have the "latest" tag, though they were updated 15 hours ago. 
>>>> >>>> And therefore they can't be pulled: >>>> >>>> $ docker pull kolla/centos-source-nova-api >>>> Using default tag: latest >>>> Error response from daemon: manifest for kolla/centos-source-nova-api:latest not found: manifest unknown: manifest unknown >>>> >>>> Is there another public registry where ready-built Kolla images are available? >>>> >>>> Thanks >>>> >>>> Regards, >>>> Adrian Andreias >>>> https://fleio.com >>>> >>>> From pbasaras at gmail.com Wed Dec 2 09:21:12 2020 From: pbasaras at gmail.com (Pavlos Basaras) Date: Wed, 2 Dec 2020 11:21:12 +0200 Subject: [ussuri] [neutron] deploy an additional compute node that resides in a different network Message-ID: Dear community, I am new to OpenStack, please excuse all newbie questions. I am using Ubuntu 18 for all elements. I followed the steps for installing OpenStack from https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-ussuri . My setup is based on VirtualBox, with mgmt at 10.0.0.0/24 and provider at 203.0.113.0/24 (host-only adapters), as per the instructions. The VirtualBox host is NATing those IPs to the network 192.168.111.0/24 (gw to internet etc.). When I deployed the compute VM in VirtualBox (e.g., 10.0.0.31), the VMs are deployed successfully, and I can successfully launch an instance on the provider (203.0.113.0/24), internal (192.168.10.0/24), and self-service (172.16.1.0/24) networks, with associated floating IPs, internet access etc. I want to add a new compute node that resides on a different network for deploying VMs, i.e., 192.168.111.0/24. The VirtualBox host is on 192.168.111.15 (this is where the controller VM 10.0.0.11 is deployed) and the new compute is 192.168.111.17, directly visible from the VirtualBox host. For this new node to see the controller I added an iptables rule at 192.168.111.15 (the VirtualBox host) to forward all traffic from 192.168.111.17 to the controller 10.0.0.11.
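For reference, the forwarding on the VirtualBox host is roughly along the lines of the following (only an illustration, not the exact rules/interfaces I used):

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -s 192.168.111.17 -j DNAT --to-destination 10.0.0.11
iptables -t nat -A POSTROUTING -s 192.168.111.17 -d 10.0.0.11 -j MASQUERADE
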
Probably this is the wrong way to do it even though the following output seems ok (5g-cpn1=192.168.111.17) and from horizon i can see the hypervisor info, and relevant total and used resources when i deploy vms in 192.168.111.17 (the 5g-cpn1 node) openstack network agent list +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ | 2d8c3a89-32c4-4b97-aa4f-ca19db53b24f | L3 agent | controller | nova | :-) | UP | neutron-l3-agent | | 35a6b463-7571-4f41-85bc-4c26ef255012 | Linux bridge agent |* 5g-cpn1 * | None | :-) | UP | neutron-linuxbridge-agent | | 413cd13d-88d7-45ce-8b2e-26fdb265740f | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent | | 42f57bee-63b3-44e6-9392-939ece98719d | Linux bridge agent | compute | None | :-) | UP | neutron-linuxbridge-agent | | 4a787a09-04aa-4350-bd32-0c0177ed06a1 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent | | 9069e26e-6fef-4b69-9c35-c30ca08377ff | Linux bridge agent | nrUE | None | XXX | UP | neutron-linuxbridge-agent | | fdafc337-7581-4ecd-b175-810713a25e1f | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent | +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ openstack compute service list +----+----------------+------------+----------+---------+-------+----------------------------+ | ID | Binary | Host | Zone | Status | State | Updated At | +----+----------------+------------+----------+---------+-------+----------------------------+ | 3 | nova-scheduler | controller | internal | enabled | up | 2020-12-02T07:21:56.000000 | | 4 | nova-conductor | controller | internal | enabled | up | 2020-12-02T07:22:06.000000 | | 5 | nova-compute | compute | nova | enabled | up | 2020-12-02T07:22:00.000000 | | 6 | nova-compute | nrUE | nova | enabled | down | 2020-11-26T15:59:24.000000 | | 7 | nova-compute |* 5g-cpn1* | nova | enabled | up | 2020-12-02T07:22:06.000000 | +----+----------------+------------+----------+---------+-------+----------------------------+ My current setup does not include the installation of openvswitch so far (at either the controller or the new compute node), so the vms (although deployed successfully) failed to set up networks. For setting up openvswitch correct for my setup is this the guilde that i need to follow?? https://docs.openstack.org/neutron/ussuri/install/ovn/manual_install.html ? Again, please excuse all newbie (in process of understanding) questions so far. Any advice/directions/guides? all the best, Pavlos. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Wed Dec 2 10:57:48 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 2 Dec 2020 11:57:48 +0100 Subject: [tripleo] Deployment update (node addition) after changing aggregate groups/zones Message-ID: Hi all. After changing the host aggregate group and zone, I cannot run OpenStack deploy command successfully again, even after updating deployment environment files according to my setup. 
I receive error bigger one in [0]: 2020-12-02 10:16:18.532419 | 52540000-0001-cf95-492f-0000000003ca | FATAL | Nova: Manage aggregate and availability zone and add hosts to the zone | undercloud | error={"changed": false, "msg": "ConflictException: 409: Client Error for url: http://10.120.129.199:8774/v2.1/os-aggregates/1/action, Cannot add host to aggregate 1. Reason: One or more hosts already in availability zone(s) ['Alpha01']."} I was following this link [1] instructions for "Configuring Availability Zones (AZ)" steps to modify with OpenStack commands. And zone was created successfully, but when I needed to add additional nodes, executed deployment again with increased numbers it was complaining about an incorrect aggregate zone, and now it is complaining about not empty zone with error [0] mentioned above. I have added aggregate zones into deployment files even role file... any ideas? Also, I think, this should be mentioned, that added it after install, you lose the possibility to update using tripleo tool and you will need to modify environment files with. [0] http://paste.openstack.org/show/800622/ [1] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html#configuring-availability-zones-az -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Wed Dec 2 11:08:31 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 2 Dec 2020 12:08:31 +0100 Subject: [tripleo] Deployment update (node addition) after changing aggregate groups/zones In-Reply-To: References: Message-ID: Hi all, removed dev rdo mailing list, added by mistake. On Wed, 2 Dec 2020 at 11:57, Ruslanas Gžibovskis wrote: > Hi all. > > After changing the host aggregate group and zone, I cannot run OpenStack > deploy command successfully again, even after updating deployment > environment files according to my setup. > > I receive error bigger one in [0]: > 2020-12-02 10:16:18.532419 | 52540000-0001-cf95-492f-0000000003ca | > FATAL | Nova: Manage aggregate and availability zone and add hosts to the > zone | undercloud | error={"changed": false, "msg": "ConflictException: > 409: Client Error for url: > http://10.120.129.199:8774/v2.1/os-aggregates/1/action, Cannot add host > to aggregate 1. Reason: One or more hosts already in availability zone(s) > ['Alpha01']."} > > I was following this link [1] instructions for "Configuring Availability > Zones (AZ)" steps to modify with OpenStack commands. And zone was created > successfully, but when I needed to add additional nodes, executed > deployment again with increased numbers it was complaining about an > incorrect aggregate zone, and now it is complaining about not empty zone > with error [0] mentioned above. I have added aggregate zones into > deployment files even role file... any ideas? > > Also, I think, this should be mentioned, that added it after install, you > lose the possibility to update using tripleo tool and you will need to > modify environment files with. > > > > [0] http://paste.openstack.org/show/800622/ > [1] > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html#configuring-availability-zones-az > > > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... 
From iwienand at redhat.com Wed Dec 2 11:14:23 2020
From: iwienand at redhat.com (Ian Wienand)
Date: Wed, 2 Dec 2020 22:14:23 +1100
Subject: [all][infra] CI test result table in the new gerrit review UI
In-Reply-To: <87lfemubxy.tristanC@fedora>
References: <20201126143706.ejhc5u2qhtzg2qnr@yuggoth.org> <87tutctar2.tristanC@fedora> <20201126151906.2573xhh35dhblmnh@yuggoth.org> <716a66de7a2cb45ded010d01f6c344ba74dbd7de.camel@redhat.com> <87r1ogt49n.tristanC@fedora> <20201126174517.akyuykwmlwc2z6ei@yuggoth.org> <87o8jkt2ed.tristanC@fedora> <20201127032004.GC522326@fedora19.localdomain> <87lfemubxy.tristanC@fedora>
Message-ID: <20201202111423.GA819047@fedora19.localdomain>

On Fri, Nov 27, 2020 at 02:11:21PM +0000, Tristan Cacqueray wrote:
> From what I understand, we can either use java and the polymer template
> system, or the javascript api to implement the zuul results table.

This was discussed today in the infra meeting @ [1]

There were a couple of broad conclusions. There's no dispute that it works, for now. The maintenance of this in perpetuity is the major concern.

Some points:

There are already two Zuul plugins in the upstream repos

zuul : https://gerrit.googlesource.com/plugins/zuul --
  This is rather badly namespaced, and is focused on showing the status of Depends-On: changes and cyclic dependencies

zuul-status : https://gerrit.googlesource.com/plugins/zuul-status/ --
  This is also using a broad namespace, and appears to be related to showing the ongoing job status in the UI directly, but does not show final results. An example I found at [2] from an old wikimedia thread

Upstream has shown some interest in these plugins when undertaking upgrades to polymer, etc. and thus integrating there is seen as the best practice. Unless we have a strong reason not to, we would like to consume any plugins from upstream for this reason.

It would be in our best interest to work with these existing plugins to clarify what they do more directly and come up with some better namespacing to make it more obvious as the potential number of plugins grows.

The current proposed implementation does not look like any of the upstream plugins. Admittedly, [3] shows this is not a strong eco-system; the documentation is mostly TODO and examples are thin. Implementing this with Reason [4] adds one more significant hurdle; we have no track record of maintaining this JavaScript dialect and it is not used upstream in any way. It also doesn't use the same build and test frameworks; this may not be a blocker but again creates hurdles by being different to upstream. One more concrete comment is that I'm pretty sure walking the shadow DOM to update things [5] is not quite as intended and we should be using endpoints.

In trying to find the best place to start, I've pulled out bits of the checks plugin, image-diff plugin and codeeditor plugin into the simplest thing I could get to a tab with a table in it @ [6]. You can see this for now @ [7]. I think we're going to get more potential eyes using polymer components and making things look like the existing plugins. It's a matter of taste but I think the blank canvas of a separate tab serves the purpose well. I think with the Bazel build bits, we can ultimately write an upstream Zuul job to build this for us in gerrit's Zuul (corvus might know if that's right?)

Tristan -- maybe as step one, we could try integrating what you have into that framework? Maybe we rewrite the ReasonML ... but having something is step 1.

Overall I think these are the broad strokes of what we would look at sending upstream. If they think it's too specific, the Zuul tenant seems a logical home.

We would also like to expand the gate testing to better handle testing plugins. This will involve us automatically adding a repo, sample change, and Zuul user comments during the gate test. A logical extension of this would be to take samples using a headless browser for reporting as artifacts; held nodes can also be used for advanced debugging. This will give us better confidence as we keep our Gerrit up-to-date.

I'd like to solicit some feedback on the checks plugin/API, which Zuul added support for with [8]. My understanding is that this has never really consolidated upstream and is under redevelopment [9]. I don't think there's much there for us at this point; even so, seeing as we are so Zuul-centric we might be able to do things with a specific plugin that this API will never do.

-i

[1] http://eavesdrop.openstack.org/meetings/infra/2020/infra.2020-12-01-19.01.log.html#l-62
[2] https://imgur.com/a/uBk2oxQ
[3] https://gerrit-review.googlesource.com/Documentation/pg-plugin-dev.html
[4] https://reasonml.github.io/
[5] https://github.com/softwarefactory-project/zuul-results-gerrit-plugin/blob/master/src/ZuulResultsPlugin.re#L25
[6] https://github.com/ianw/gerrit-zuul-summary-status
[7] https://104.130.172.52/c/openstack/diskimage-builder/+/554002
[8] https://opendev.org/zuul/zuul/commit/e78e948284392477d385d493fc9ec194d544483f
[9] https://www.gerritcodereview.com/design-docs/ci-reboot.html

From kchamart at redhat.com Wed Dec 2 11:21:54 2020
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Wed, 2 Dec 2020 12:21:54 +0100
Subject: [FOSDEM] Re: [Openstack][CFP] Virtualization & IaaS Devroom
In-Reply-To: 
References: 
Message-ID: <20201202112154.GA664995@paraplu>
Hi, Piotr — I've added the keyword "FOSDEM" to the subject, as "Devroom" might not ring a bell for everyone :-)

On Tue, Dec 01, 2020 at 06:40:11PM +0100, Piotr Kliczewski wrote:
> We are excited to announce that the call for proposals is now open for the
> Virtualization & IaaS devroom at the upcoming FOSDEM 2021, to be hosted
> virtually on February 6th 2021.
>
> This year will mark FOSDEM's 21st anniversary as one of the longest-running
> free and open source software developer events, attracting thousands of
> developers and users from all over the world. Due to Covid-19, FOSDEM will
> be held virtually this year on February 6th & 7th, 2021.
>
> About the Devroom
>
> The Virtualization & IaaS devroom will feature session topics such as open
> source hypervisors and virtual machine managers such as Xen Project, KVM,
> bhyve, and VirtualBox, and Infrastructure-as-a-Service projects such as
> KubeVirt, Apache CloudStack, Foreman, OpenStack, oVirt, QEMU and OpenNebula.
>
> This devroom will host presentations that focus on topics of shared
> interest, such as KVM; libvirt; shared storage; virtualized networking;
> cloud security; clustering and high availability; interfacing with multiple
> hypervisors; hyperconverged deployments; and scaling across hundreds or
> thousands of servers.
>
> Presentations in this devroom will be aimed at users or developers working
> on these platforms who are looking to collaborate and improve shared
> infrastructure or solve common problems. We seek topics that encourage
> dialog between projects and continued work post-FOSDEM.
> > Important Dates > > Submission deadline: 20th of December > > Acceptance notifications: 25th of December > > Final schedule announcement: 31st of December > > Recorded presentations upload deadline: 15th of January > > Devroom: 6th February 2021 > > Submit Your Proposal > > All submissions must be made via the Pentabarf event planning site[1]. If > you have not used Pentabarf before, you will need to create an account. If > you submitted proposals for FOSDEM in previous years, you can use your > existing account. > > After creating the account, select Create Event to start the submission > process. Make sure to select Virtualization and IaaS devroom from the Track > list. Please fill out all the required fields, and provide a meaningful > abstract and description of your proposed session. > > Submission Guidelines > > We expect more proposals than we can possibly accept, so it is vitally > important that you submit your proposal on or before the deadline. Late > submissions are unlikely to be considered. > > All presentation slots are 30 minutes, with 20 minutes planned for > presentations, and 10 minutes for Q&A. > > All presentations will need to be pre-recorded and put into our system at > least a couple of weeks before the event. > > The presentations should be uploaded by 15th of January and made available > under Creative > > Commons licenses. In the Submission notes field, please indicate that you > agree that your presentation will be licensed under the CC-By-SA-4.0 or > CC-By-4.0 license and that you agree to have your presentation recorded. > For example: > > "If my presentation is accepted for FOSDEM, I hereby agree to license all > recordings, slides, and other associated materials under the Creative > Commons Attribution Share-Alike 4.0 International License. Sincerely, > ." > > In the Submission notes field, please also confirm that if your talk is > accepted, you will be able to attend the virtual FOSDEM event for the Q&A. > We will not consider proposals from prospective speakers who are unsure > whether they will be able to attend the FOSDEM virtual event. > > If you are experiencing problems with Pentabarf, the proposal submission > interface, or have other questions, you can email our devroom mailing > list[2] and we will try to help you. > > > Code of Conduct > > Following the release of the updated code of conduct for FOSDEM, we'd like > to remind all speakers and attendees that all of the presentations and > discussions in our devroom are held under the guidelines set in the CoC and > we expect attendees, speakers, and volunteers to follow the CoC at all > times. > > If you submit a proposal and it is accepted, you will be required to > confirm that you accept the FOSDEM CoC. If you have any questions about the > CoC or wish to have one of the devroom organizers review your presentation > slides or any other content for CoC compliance, please email us and we will > do our best to assist you. > > Call for Volunteers > > We are also looking for volunteers to help run the devroom. We need > assistance with helping speakers to record the presentation as well as > helping with streaming and chat moderation for the devroom. Please contact > devroom mailing list [2] for more information. > > Questions? > > If you have any questions about this devroom, please send your questions to > our devroom mailing list. You can also subscribe to the list to receive > updates about important dates, session announcements, and to connect with > other attendees. 
> > See you all at FOSDEM! > > [1] https://penta.fosdem > .org/submission/FOSDEM21 > > [2] iaas-virt-devroom at lists.fosdem.org -- /kashyap From oliver.wenz at dhbw-mannheim.de Wed Dec 2 11:48:42 2020 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Wed, 2 Dec 2020 12:48:42 +0100 Subject: [neutron][openstack-ansible] Instances can only connect to provider-net via tenant-net but not directly In-Reply-To: References: Message-ID: > Neutron sends this notification to the nova-api so You should check in nova-api > logs. > Whole workflow for that process of spawning vm is more or less like below: > 1. nova-compute asks neutron for port, > 2. neutron creates port and binds it with some mechanism driver - so it has > vif_type e.g. "ovs" or "linuxbridge" or some other, > 3. nova, based on that vif details plugs port to the proper bridge on host and > pauses instance until neutron will not do its job, > 4. neutron-l2-agent (linuxbrige or ovs) starts provisioning port and reports > to neutron-server when it is done, > 5. if there is no any provisioning blocks for that port in neutron db (can be > also one from the dhcp agent), neutron sends notification to nova-api that port > is ready, > 6. nova unpauses vm. > > In Your case it seems that on step 5 nova reports some error and that You > should IMO check. > Hi Slawek, thanks for the information and suggestion! I checked the nova-api-os-compute logs and it seems like there are network problems and an issue with wrong token scope: Dec 02 11:15:49 infra1-nova-api-container-83af52a6 nova-api-wsgi[84]: 2020-12-02 11:15:49.074 84 INFO nova.api.openstack.requestlog [req-1d56ad1c-03ea-4dcc-b9da-e4f6e73ccc52 920e739127a14018a55fb4422b0885e7 0f14905dab5546e0adec2b56c0f6be88 - default default] 192.168.110.201 "GET /v2.1/servers/2a1f6fbf-48f8-4e7f-bc36-7162bdaf7e20" status: 200 len: 1358 microversion: 2.1 time: 0.500669 Dec 02 11:15:49 infra1-nova-api-container-83af52a6 nova-api-wsgi[86]: 2020-12-02 11:15:49.256 86 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 104] Connection reset by peer Dec 02 11:15:49 infra1-nova-api-container-83af52a6 uwsgi[48]: Wed Dec 2 11:15:49 2020 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /v2.1/flavors/7c20746e-f6db-4344-9aa1-be926696ecf4 (ip 192.168.110.201) !!! 
Dec 02 11:15:49 infra1-nova-api-container-83af52a6 nova-api-wsgi[86]: 2020-12-02 11:15:49.318 86 INFO nova.api.openstack.requestlog [req-832f794d-db9b-4690-bb0c-8320e7beed47 920e739127a14018a55fb4422b0885e7 0f14905dab5546e0adec2b56c0f6be88 - default default] 192.168.110.201 "GET /v2.1/flavors/7c20746e-f6db-4344-9aa1-be926696ecf4" status: 200 len: 473 microversion: 2.1 time: 0.095833 Dec 02 11:15:57 infra1-nova-api-container-83af52a6 nova-api-wsgi[79]: 2020-12-02 11:15:57.000 79 INFO nova.api.openstack.requestlog [req-2c061260-88f4-4183-a74a-ea59d1ff1ae0 b044fe42a4644837a3bd40beec378876 226e9c9f1fb94c8ab271490ad79c6873 - default default] 192.168.110.201 "HEAD /" status: 200 len: 417 microversion: - time: 0.001837 Dec 02 11:15:57 infra1-nova-api-container-83af52a6 nova-api-wsgi[88]: 2020-12-02 11:15:57.214 88 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 104] Connection reset by peer Dec 02 11:15:57 infra1-nova-api-container-83af52a6 uwsgi[48]: Wed Dec 2 11:15:57 2020 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /v2.1/os-server-external-events (ip 192.168.110.201) !!! Dec 02 11:15:57 infra1-nova-api-container-83af52a6 uwsgi[48]: /openstack/venvs/nova-21.1.0/lib/python3.6/site-packages/oslo_policy/policy.py:1007: UserWarning: Policy os_compute_api:os-server-external-events:create failed scope check. The token used to make the request was project scoped but the policy requires ['system'] scope. This behavior may change in the future where using the intended scope is required Dec 02 11:15:57 infra1-nova-api-container-83af52a6 uwsgi[48]: warnings.warn(msg) Dec 02 11:15:57 infra1-nova-api-container-83af52a6 nova-api-wsgi[88]: 2020-12-02 11:15:57.715 88 INFO nova.api.openstack.compute.server_external_events [req-759c5752-c80b-46f2-b3df-0fa126605895 d9a2e96567ec4670bc60dbcc8f66305f 474b7aa9b7894d6782402135c6ef4c2a - default default] Creating event network-changed:ed4a9455-a33c-454c-b74f-b314751cce3d for instance 2a1f6fbf-48f8-4e7f-bc36-7162bdaf7e20 on bc1blade15 Dec 02 11:15:57 infra1-nova-api-container-83af52a6 nova-api-wsgi[88]: 2020-12-02 11:15:57.734 88 INFO nova.api.openstack.requestlog [req-759c5752-c80b-46f2-b3df-0fa126605895 d9a2e96567ec4670bc60dbcc8f66305f 474b7aa9b7894d6782402135c6ef4c2a - default default] 192.168.110.201 "POST /v2.1/os-server-external-events" status: 200 len: 179 microversion: 2.1 time: 0.533910 Dec 02 11:16:09 infra1-nova-api-container-83af52a6 nova-api-wsgi[82]: 2020-12-02 11:16:09.007 82 INFO nova.api.openstack.requestlog [req-064600c9-196e-4388-978c-00b0f5560b2e 745ecbee83d041c6ba92f9e07ba744d5 a2fd11a3bfdd47c8893e18a6f3b7dfa3 - default default] 192.168.110.201 "HEAD /" status: 200 len: 417 microversion: - time: 0.001638 Dec 02 11:16:16 infra1-nova-api-container-83af52a6 nova-api-wsgi[77]: 2020-12-02 11:16:16.432 77 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 104] Connection reset by peer Dec 02 11:16:16 infra1-nova-api-container-83af52a6 uwsgi[48]: Wed Dec 2 11:16:16 2020 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /v2.1/os-server-external-events (ip 192.168.110.201) !!! Dec 02 11:16:16 infra1-nova-api-container-83af52a6 uwsgi[48]: /openstack/venvs/nova-21.1.0/lib/python3.6/site-packages/oslo_policy/policy.py:1007: UserWarning: Policy os_compute_api:os-server-external-events:create failed scope check. 
The token used to make the request was project scoped but the policy requires ['system'] scope. This behavior may change in the future where using the intended scope is required Dec 02 11:16:16 infra1-nova-api-container-83af52a6 uwsgi[48]: warnings.warn(msg) Dec 02 11:16:16 infra1-nova-api-container-83af52a6 nova-api-wsgi[77]: 2020-12-02 11:16:16.939 77 INFO nova.api.openstack.compute.server_external_events [req-71a40ea1-cf64-46d5-9675-d00d539f1922 d9a2e96567ec4670bc60dbcc8f66305f 474b7aa9b7894d6782402135c6ef4c2a - default default] Creating event network-vif-plugged:5cbc8608-8545-4eb3-a745-23ca912347ec for instance ea72988b-83eb-4640-adaa-e24e0c3d018d on bc1blade15 Dec 02 11:16:16 infra1-nova-api-container-83af52a6 nova-api-wsgi[77]: 2020-12-02 11:16:16.958 77 INFO nova.api.openstack.requestlog [req-71a40ea1-cf64-46d5-9675-d00d539f1922 d9a2e96567ec4670bc60dbcc8f66305f 474b7aa9b7894d6782402135c6ef4c2a - default default] 192.168.110.201 "POST /v2.1/os-server-external-events" status: 200 len: 183 microversion: 2.1 time: 0.526920 Dec 02 11:16:17 infra1-nova-api-container-83af52a6 nova-api-wsgi[71]: 2020-12-02 11:16:17.422 71 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 104] Connection reset by peer Kind regards, Oliver From smooney at redhat.com Wed Dec 2 13:50:37 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 02 Dec 2020 13:50:37 +0000 Subject: [kolla] cannot pull docker public hub images In-Reply-To: References: Message-ID: <06831afaf37dca3a8bb12f500e0ed6bb45dcc73e.camel@redhat.com> On Wed, 2020-12-02 at 08:27 +0100, Radosław Piliszek wrote: > It is not really that. > It seems Docker Hub has its worse days and I found myself confused on > our or others images from time to time (that they presumably lack > expected tags). you could just tag master with latest too the issue with latest is people have different expectaions. i expect it to be the tip of master e.g. the latest version of the commited code where as other might expect ti to track the latest stable release e.g. victoria kolla chose not to provide a latest tag since that terminology is not used in the kolla project but that could alway be changed if it was useful or provided better ux you just need to decided what latest means in the context of kolla > > -yoctozepto > > On Wed, Dec 2, 2020 at 1:28 AM Eddie Yen wrote: > > > > The moment that you couldn't find any tags in kolla images might be the > > generator is pushing. Kolla will generate then push images to Docker > > Hub. And because this mechanism will working in multiple releases at > > the same time, it's normal that not showing up image tags in Docker Tag > > web site when in progress. > > > > > > Adrian Andreias 於 2020年12月1日 週二 下午6:58寫道: > > > > > > Hi Eddie, > > > > > > That makes sense and works, indeed: > > > > > > docker pull kolla/centos-source-nova-api:victoria > > > > > > I was seeing no tags here the other day: > > > https://hub.docker.com/r/kolla/centos-source-nova-api/tags?page=1&ordering=last_updated > > > > > > I can see the tags now. It probably just took too long for the tags to load. > > > > > > Thanks > > > > > > > > > Regards, > > > Adrian Andreias > > > https://fleio.com > > > > > > > > > > > > > > > On Tue, Dec 1, 2020 at 8:45 AM Eddie Yen wrote: > > > > > > > > Hi Adrian, > > > > > > > > IME, there's no image tag for latest. Most of kolla images are using Openstack release name as tag. 
> > > > If you want to pull image with specific Openstack release, you should pull them with release name tag. > > > > > > > > E.g docker pull kolla/centos.source-nova-api:victoria > > > > > > > > Adrian Andreias 於 2020年12月1日 週二 上午8:39寫道: > > > > > > > > > > Hi, > > > > > > > > > > None of the Kolla images on the Docker public registry are tagged. > > > > > > > > > > E.g. https://hub.docker.com/r/kolla/centos-source-nova-api > > > > > > > > > > Images don't even have the "latest" tag, though they were updated 15 hours ago. > > > > > > > > > > And therefore they can't be pulled: > > > > > > > > > > $ docker pull kolla/centos-source-nova-api > > > > > Using default tag: latest > > > > > Error response from daemon: manifest for kolla/centos-source-nova-api:latest not found: manifest unknown: manifest unknown > > > > > > > > > > Is there another public registry where ready-built Kolla images are available? > > > > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > > > Regards, > > > > > Adrian Andreias > > > > > https://fleio.com > > > > > > > > > > > From Akshay.346 at hsc.com Wed Dec 2 13:22:17 2020 From: Akshay.346 at hsc.com (Akshay 346) Date: Wed, 2 Dec 2020 13:22:17 +0000 Subject: IRONIC BAREMETAL partitioned image not spawning. In-Reply-To: References: Message-ID: Hi Julia, Thanks for your response. I have used "iscsi" as deploy_interface and installed sgdisk (gdisk) on both ironic_conductors and then tried to spawn a BM node. Then it failed stating about "parted is missing". Then I installed "parted" on both ironic_conductors and then tried again. This time it failed stating: 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Command: parted -a optimal -s /dev/disk/by-path/ip-10.10.20.62:3260-iscsi-iqn.2008-10.org.openstack:b40ef46f-e754-401f-b414-2c8ce05cdfec-lun-1 -- unit MiB mklabel msdos mkpart primary 1 481281 set 1 boot on 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Exit code: 1 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Stdout: '' 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Stderr: 'Error: The location 481281 is outside of the device /dev/sdc.\n' About my BM node: It has 2 physical disks only. Do you have any idea about this ? Thanks Akshay -----Original Message----- From: Julia Kreger Sent: Wednesday, December 2, 2020 4:26 AM To: Akshay 346 Cc: openstack-discuss at lists.openstack.org Subject: Re: IRONIC BAREMETAL partitioned image not spawning. ** External Email- Treat hyperlink and attachment with caution. ** Greetings, If your deploy_interface is set to "iscsi", then sgdisk needs to be installed and available to the ironic-conductor process and user. If your deploy_interface is set to "direct", then sgdisk needs to be present with-in the ironic-python-agent ramdisk. The error you pasted into your email suggests, and the logs confirm that you're using the iscsi deployment interface. You will need to ensure your conductor process can execute sgdisk and it is available with-in the environment PATH that the ironic-conductor process is executing with-in. This is not with-in the qcow you're deploying as the commands are executed remotely before the image is written in this case. -Julia On Tue, Dec 1, 2020 at 10:19 AM Akshay 346 wrote: > > Hello Team, > > > > I hope you all are keeping safe and doing good. > > > > I have a OpenStack setup up and running with ironic enabled. 
> > I am able to spawn a bare metal node with centos8.2 whole-disk image but when I spawn bare metal with partitioned image, it fails to spawn it stating the following trace ( Full trace back attached in the email): > > > > “2020-12-01 12:35:34.603 1 ERROR ironic.conductor.utils [req-0d3a375e-90a3-485d-a7c3-895c97c88006 - - - - -] Deploy failed for instance 463bd844-7f80-4c57-98dd-6ca8cfdb6121. Error: [Errno 2] No such file or directory: 'sgdisk': 'sgdisk': FileNotFoundError: [Errno 2] No such file or directory: 'sgdisk': 'sgdisk'”. > > > > I have also installed sgdisk (yum install gdisk) on the qcow2 from which I build the partitioned image. > > > > Can anyone please guide how to debug it? > > > > Regards > > Akshay > > > > DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. From Akshay.346 at hsc.com Wed Dec 2 13:27:18 2020 From: Akshay.346 at hsc.com (Akshay 346) Date: Wed, 2 Dec 2020 13:27:18 +0000 Subject: IRONIC BAREMETAL partitioned image not spawning. In-Reply-To: References: Message-ID: ++ log file. Regards Akshay -----Original Message----- From: Akshay 346 Sent: Wednesday, December 2, 2020 6:52 PM To: Julia Kreger Cc: openstack-discuss at lists.openstack.org Subject: RE: IRONIC BAREMETAL partitioned image not spawning. Hi Julia, Thanks for your response. I have used "iscsi" as deploy_interface and installed sgdisk (gdisk) on both ironic_conductors and then tried to spawn a BM node. Then it failed stating about "parted is missing". Then I installed "parted" on both ironic_conductors and then tried again. 
This time it failed stating: 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Command: parted -a optimal -s /dev/disk/by-path/ip-10.10.20.62:3260-iscsi-iqn.2008-10.org.openstack:b40ef46f-e754-401f-b414-2c8ce05cdfec-lun-1 -- unit MiB mklabel msdos mkpart primary 1 481281 set 1 boot on 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Exit code: 1 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Stdout: '' 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Stderr: 'Error: The location 481281 is outside of the device /dev/sdc.\n' About my BM node: It has 2 physical disks only. Do you have any idea about this ? Thanks Akshay -----Original Message----- From: Julia Kreger Sent: Wednesday, December 2, 2020 4:26 AM To: Akshay 346 Cc: openstack-discuss at lists.openstack.org Subject: Re: IRONIC BAREMETAL partitioned image not spawning. ** External Email- Treat hyperlink and attachment with caution. ** Greetings, If your deploy_interface is set to "iscsi", then sgdisk needs to be installed and available to the ironic-conductor process and user. If your deploy_interface is set to "direct", then sgdisk needs to be present with-in the ironic-python-agent ramdisk. The error you pasted into your email suggests, and the logs confirm that you're using the iscsi deployment interface. You will need to ensure your conductor process can execute sgdisk and it is available with-in the environment PATH that the ironic-conductor process is executing with-in. This is not with-in the qcow you're deploying as the commands are executed remotely before the image is written in this case. -Julia On Tue, Dec 1, 2020 at 10:19 AM Akshay 346 wrote: > > Hello Team, > > > > I hope you all are keeping safe and doing good. > > > > I have a OpenStack setup up and running with ironic enabled. > > I am able to spawn a bare metal node with centos8.2 whole-disk image but when I spawn bare metal with partitioned image, it fails to spawn it stating the following trace ( Full trace back attached in the email): > > > > “2020-12-01 12:35:34.603 1 ERROR ironic.conductor.utils [req-0d3a375e-90a3-485d-a7c3-895c97c88006 - - - - -] Deploy failed for instance 463bd844-7f80-4c57-98dd-6ca8cfdb6121. Error: [Errno 2] No such file or directory: 'sgdisk': 'sgdisk': FileNotFoundError: [Errno 2] No such file or directory: 'sgdisk': 'sgdisk'”. > > > > I have also installed sgdisk (yum install gdisk) on the qcow2 from which I build the partitioned image. > > > > Can anyone please guide how to debug it? > > > > Regards > > Akshay > > > > DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. 
The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- A non-text attachment was scrubbed... Name: ironic_openstack_partitioned_image_error_parted Type: application/octet-stream Size: 17046 bytes Desc: ironic_openstack_partitioned_image_error_parted URL: From noonedeadpunk at ya.ru Wed Dec 2 14:20:18 2020 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 02 Dec 2020 16:20:18 +0200 Subject: [neutron][openstack-ansible] Instances can only connect to provider-net via tenant-net but not directly In-Reply-To: References: Message-ID: <249271606918628@mail.yandex.ru> An HTML attachment was scrubbed... URL: From adrian at fleio.com Wed Dec 2 14:23:38 2020 From: adrian at fleio.com (Adrian Andreias) Date: Wed, 2 Dec 2020 16:23:38 +0200 Subject: [kolla] cannot pull docker public hub images In-Reply-To: <06831afaf37dca3a8bb12f500e0ed6bb45dcc73e.camel@redhat.com> References: <06831afaf37dca3a8bb12f500e0ed6bb45dcc73e.camel@redhat.com> Message-ID: Yes, I totally agree. Docker "latest" tag is a bad pattern. We're also not using "latest" in other projects where we choose tags. I was pulling the (default) latest tag because I couldn't see any other tag on hub.docker .com. Based on this and above feedback my conclusion is that the public registry is not reliable enough and we should build our own images and run our own registry. Thanks everyone. Regards, Adrian Andreias https://fleio.com On Wed, Dec 2, 2020 at 3:56 PM Sean Mooney wrote: > On Wed, 2020-12-02 at 08:27 +0100, Radosław Piliszek wrote: > > It is not really that. > > It seems Docker Hub has its worse days and I found myself confused on > > our or others images from time to time (that they presumably lack > > expected tags). > you could just tag master with latest too > > the issue with latest is people have different expectaions. > i expect it to be the tip of master e.g. the latest version of the commited > code where as other might expect ti to track the latest stable release > e.g. victoria > > kolla chose not to provide a latest tag since that terminology is not used > in > the kolla project but that could alway be changed if it was useful or > provided > better ux you just need to decided what latest means in the context of > kolla > > > > -yoctozepto > > > > On Wed, Dec 2, 2020 at 1:28 AM Eddie Yen wrote: > > > > > > The moment that you couldn't find any tags in kolla images might be the > > > generator is pushing. Kolla will generate then push images to Docker > > > Hub. And because this mechanism will working in multiple releases at > > > the same time, it's normal that not showing up image tags in Docker Tag > > > web site when in progress. 
> > > > > > > > > Adrian Andreias 於 2020年12月1日 週二 下午6:58寫道: > > > > > > > > Hi Eddie, > > > > > > > > That makes sense and works, indeed: > > > > > > > > docker pull kolla/centos-source-nova-api:victoria > > > > > > > > I was seeing no tags here the other day: > > > > > https://hub.docker.com/r/kolla/centos-source-nova-api/tags?page=1&ordering=last_updated > > > > > > > > I can see the tags now. It probably just took too long for the tags > to load. > > > > > > > > Thanks > > > > > > > > > > > > Regards, > > > > Adrian Andreias > > > > https://fleio.com > > > > > > > > > > > > > > > > > > > > On Tue, Dec 1, 2020 at 8:45 AM Eddie Yen > wrote: > > > > > > > > > > Hi Adrian, > > > > > > > > > > IME, there's no image tag for latest. Most of kolla images are > using Openstack release name as tag. > > > > > If you want to pull image with specific Openstack release, you > should pull them with release name tag. > > > > > > > > > > E.g docker pull kolla/centos.source-nova-api:victoria > > > > > > > > > > Adrian Andreias 於 2020年12月1日 週二 上午8:39寫道: > > > > > > > > > > > > Hi, > > > > > > > > > > > > None of the Kolla images on the Docker public registry are > tagged. > > > > > > > > > > > > E.g. https://hub.docker.com/r/kolla/centos-source-nova-api > > > > > > > > > > > > Images don't even have the "latest" tag, though they were > updated 15 hours ago. > > > > > > > > > > > > And therefore they can't be pulled: > > > > > > > > > > > > $ docker pull kolla/centos-source-nova-api > > > > > > Using default tag: latest > > > > > > Error response from daemon: manifest for > kolla/centos-source-nova-api:latest not found: manifest unknown: manifest > unknown > > > > > > > > > > > > Is there another public registry where ready-built Kolla images > are available? > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > > > Adrian Andreias > > > > > > https://fleio.com > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Wed Dec 2 14:26:30 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 2 Dec 2020 06:26:30 -0800 Subject: IRONIC BAREMETAL partitioned image not spawning. In-Reply-To: References: Message-ID: You may want to consult the bindep file for binary dependencies[0] since it sounds like you installed Ironic manually. That or the dependencies you encountered are just not expressed for some odd reason. Anyhow, Unfortunately what is happening is the iscsi device is being offered by the agent to the conductor, the system partition utilities is somehow addressing beyond the size of the volume on the system which is a little surprising. This could also mean that the disk in the system is defective or faulty. What you're ultimately going to need is the logging data from the agent. If you look in /var/log/ironic/deploy_logs (this is the standard location, you may have defined elsewhere in ironic.conf), you should be able to identify the compress tar archive of what the agent uploaded, in that should be a journal/log file which should provide all of the details needed from the agent logs. That log will indicate what disk was chosen and offered to the conductor and so on and so forth. Ultimately if you're already using a root device hint[1], you may have the wrong hint and it should be noted that the hints can override automatic device selection and the exclusion of invalid devices. 
You likely ought to re-evaluate one if one is present, or add one pointing to the required disk on the remote system, or possibly remove any existing hint. Depending on your boot mode and disk image it may work out just fine. Hope that helps! -Julia [0]: https://opendev.org/openstack/ironic/src/branch/master/bindep.txt [1] https://docs.openstack.org/ironic/pike/install/include/root-device-hints.html On Wed, Dec 2, 2020 at 5:22 AM Akshay 346 wrote: > > Hi Julia, > > Thanks for your response. > > I have used "iscsi" as deploy_interface and installed sgdisk (gdisk) on both ironic_conductors and then tried to spawn a BM node. > Then it failed stating about "parted is missing". Then I installed "parted" on both ironic_conductors and then tried again. > > This time it failed stating: > > 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Command: parted -a optimal -s /dev/disk/by-path/ip-10.10.20.62:3260-iscsi-iqn.2008-10.org.openstack:b40ef46f-e754-401f-b414-2c8ce05cdfec-lun-1 -- unit MiB mklabel msdos mkpart primary 1 481281 set 1 boot on > 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Exit code: 1 > 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Stdout: '' > 2020-12-02 17:08:39.508 1 ERROR ironic.drivers.modules.agent_base Stderr: 'Error: The location 481281 is outside of the device /dev/sdc.\n' > > About my BM node: It has 2 physical disks only. > > Do you have any idea about this ? > > Thanks > Akshay > > -----Original Message----- > From: Julia Kreger > Sent: Wednesday, December 2, 2020 4:26 AM > To: Akshay 346 > Cc: openstack-discuss at lists.openstack.org > Subject: Re: IRONIC BAREMETAL partitioned image not spawning. > > ** External Email- Treat hyperlink and attachment with caution. ** > > Greetings, > > If your deploy_interface is set to "iscsi", then sgdisk needs to be installed and available to the ironic-conductor process and user. > If your deploy_interface is set to "direct", then sgdisk needs to be present with-in the ironic-python-agent ramdisk. > > The error you pasted into your email suggests, and the logs confirm that you're using the iscsi deployment interface. You will need to ensure your conductor process can execute sgdisk and it is available with-in the environment PATH that the ironic-conductor process is executing with-in. This is not with-in the qcow you're deploying as the commands are executed remotely before the image is written in this case. > > -Julia > > On Tue, Dec 1, 2020 at 10:19 AM Akshay 346 wrote: > > > > Hello Team, > > > > > > > > I hope you all are keeping safe and doing good. > > > > > > > > I have a OpenStack setup up and running with ironic enabled. > > > > I am able to spawn a bare metal node with centos8.2 whole-disk image but when I spawn bare metal with partitioned image, it fails to spawn it stating the following trace ( Full trace back attached in the email): > > > > > > > > “2020-12-01 12:35:34.603 1 ERROR ironic.conductor.utils [req-0d3a375e-90a3-485d-a7c3-895c97c88006 - - - - -] Deploy failed for instance 463bd844-7f80-4c57-98dd-6ca8cfdb6121. Error: [Errno 2] No such file or directory: 'sgdisk': 'sgdisk': FileNotFoundError: [Errno 2] No such file or directory: 'sgdisk': 'sgdisk'”. > > > > > > > > I have also installed sgdisk (yum install gdisk) on the qcow2 from which I build the partitioned image. > > > > > > > > Can anyone please guide how to debug it? 
> > > > > > > > Regards > > > > Akshay > > > > > > > > DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. > DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. From thierry at openstack.org Wed Dec 2 16:10:01 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 2 Dec 2020 17:10:01 +0100 Subject: [largescale-sig] Next meeting: December 2, 15utc In-Reply-To: <79158b30-cca1-e64c-db28-66e5637bb1fe@openstack.org> References: <79158b30-cca1-e64c-db28-66e5637bb1fe@openstack.org> Message-ID: We held our meeting today and discussed how to best collect feedback from experienced operators. Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-12-02-15.00.html TODOs: - all to review pages under https://wiki.openstack.org/wiki/Large_Scale_SIG in preparation for next meeting - ttx to add 5th stage around upgrade and maintain scaled out systems in operation - ttx to make sure oslo.metrics 0.1 is released - all to help in filling out https://etherpad.opendev.org/p/large-scale-sig-scaling-videos - ttx to check out Ops meetups future plans Our next meeting will be Wednesday, December 16 at 15utc in #openstack-meeting-3 on Freenode IRC. We will be reviewing all scaling stages, and identifying simple tasks to do a first pass at improving those pages. -- Thierry Carrez (ttx) From fsbiz at yahoo.com Wed Dec 2 16:31:19 2020 From: fsbiz at yahoo.com (fsbiz at yahoo.com) Date: Wed, 2 Dec 2020 16:31:19 +0000 (UTC) Subject: [ironic] : Third-party ML2 plugins for Openstack ironic with virtual networks In-Reply-To: <1287635714.3401651.1606865544414@mail.yahoo.com> References: <1287635714.3401651.1606865544414@mail.yahoo.com> Message-ID: <2096960188.3638662.1606926679144@mail.yahoo.com> Is anyone using virtual networks for their Openstack Ironic installations? Our flat network is now past 3000 nodes and I am investigating Arista's ML2 plugin and / or Mellanox's NEO as the ML2 plugin. In addition to scaling we also have additional requirements like provisioning a bare-metal serverin a conference room away from the DC for demo purposes. 
I have general questions on whether anyone is actually using the above two (or any other ML2 plugins) with their OpenStack Ironic installations?

Thanks,
Fred.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From gael.therond at bitswalk.com Wed Dec 2 17:17:30 2020
From: gael.therond at bitswalk.com (Gaël THEROND)
Date: Wed, 2 Dec 2020 18:17:30 +0100
Subject: [CLOUDKITTY] - Prometheus metrics.yml sample?
Message-ID: 
Hi everyone,

I'm currently finishing our cloudkitty implementation, but I'm facing an issue.

For now, our cloudkitty-processor process raises a traceback like this:
http://paste.openstack.org/show/Y2ZiYPtZSKzvZk93yNWC/

I can't find any reliable documentation describing how to use prometheus with cloudkitty, so I'm kind of using the guess/read-the-code way to integrate it, but I'm a bit confused so far.

My current metric.yml looks like this:
http://paste.openstack.org/show/6X23ZfVm1iDMruGy3Gpp/

I've created the appropriate group/service/threshold etc. but it still continues to raise the same error.

I'm using this openstack_exporter:
https://github.com/openstack-exporter/openstack-exporter/releases/tag/v1.1.0

I'm running a Train release OpenStack platform, if that helps.

So, if anyone has a successful cloudkitty/prometheus integration and is willing to give a hand, I'll be more than happy ;-)

Thanks!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From skaplons at redhat.com Wed Dec 2 17:51:54 2020
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Wed, 02 Dec 2020 18:51:54 +0100
Subject: [neutron][stable] Backport of the patch which bumps RPC version
Message-ID: <22574695.as6R6ROz1X@p1>
Hi,

Some time ago we backported in Neutron patch [1] which caused bug [2]. Patch [1] was merged in the Ussuri development cycle so it is already in Ussuri and Victoria. Our mistake was that we merged it then without a bump of the RPC version and without code which would provide backward compatibility between an old agent and a new neutron-server, and that's why [2] happens.

Now, as we know that, we have proposed patch [3] to fix it in the master branch. And my question is - can we backport fix [3] to stable/victoria and stable/ussuri to fix the original issue caused by [1] there? In general I know that we shouldn't do that, but here are the reasons why we would like to do it in this specific case:

- the RPC version which we want to change now wasn't changed since Train at least, so there will not be any conflict with that,
- bumping the RPC version now and providing backward compatibility on the neutron-server side will make upgrades Train->Ussuri and minor updates in Ussuri easier, as there will be no similar issue like the one described in [2] anymore,
- patch [1] was already included in Ussuri and Victoria from the master branch, it wasn't really cherry-picked there so it actually should have been there since the beginning.

Based on those reasons mentioned above, and also because in general it is forbidden by the stable policy to backport changes like that to stable branches, I would like to know the opinion of the wider community, especially the stable-core-maint team, about what would be the best approach to fix that issue: backport of [3] to stable/{ussuri,victoria} or revert of [1] in stable/{ussuri,victoria}.

[1] https://review.opendev.org/c/openstack/neutron/+/712632
[2] https://bugs.launchpad.net/neutron/+bug/1903531
[3] https://review.opendev.org/c/openstack/neutron/+/764108

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat
From tdecacqu at redhat.com Wed Dec 2 19:15:33 2020
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Wed, 02 Dec 2020 19:15:33 +0000
Subject: [all][infra] CI test result table in the new gerrit review UI
In-Reply-To: <20201202111423.GA819047@fedora19.localdomain>
References: <20201126143706.ejhc5u2qhtzg2qnr@yuggoth.org> <87tutctar2.tristanC@fedora> <20201126151906.2573xhh35dhblmnh@yuggoth.org> <716a66de7a2cb45ded010d01f6c344ba74dbd7de.camel@redhat.com> <87r1ogt49n.tristanC@fedora> <20201126174517.akyuykwmlwc2z6ei@yuggoth.org> <87o8jkt2ed.tristanC@fedora> <20201127032004.GC522326@fedora19.localdomain> <87lfemubxy.tristanC@fedora> <20201202111423.GA819047@fedora19.localdomain>
Message-ID: <87a6uwt3xm.tristanC@fedora>

On Wed, Dec 02, 2020 at 22:14 Ian Wienand wrote:
> On Fri, Nov 27, 2020 at 02:11:21PM +0000, Tristan Cacqueray wrote:
>> From what I understand, we can either use java and the polymer template
>> system, or the javascript api to implement the zuul results table.
>
> This was discussed today in the infra meeting @ [1]
>
> There were a couple of broad conclusions. There's no dispute that it
> works, for now. The maintenance of this in perpetuity is the major
> concern.
>
> Some points:
>
> There are already two Zuul plugins in the upstream repos
>
> zuul : https://gerrit.googlesource.com/plugins/zuul --
>  This is rather badly namespaced, and is focused on showing the
>  status of Depends-On: changes and cyclic dependencies
>
> zuul-status : https://gerrit.googlesource.com/plugins/zuul-status/ --
>  This is also using a broad namespace, and appears to be related to
>  showing the ongoing job status in the UI directly, but does not show
>  final results. An example I found at [2] from an old wikimedia
>  thread
>
You can > see this for now @ [7]. I think we're going to get more potential > eyes using polymer components and making things look like the existing > plugins. It's a matter of taste but I think the blank canvas of a > separate tab serves the purpose well. I think with the Bazel build > bits, we can ultimately write an upstream Zuul job to build this for > us in gerrit's Zuul (corvus might know if that's right?) > > Tristan -- maybe as step one, we could try integrating what you have > into that framework? Maybe we rewrite the ReasonML ... but having it > something is step 1. Overall I think this is the broad strokes of > what we would look at sending upstream. If they think it's too > specific, Zuul tenant seems a logical home. > This is looking great, thank you for looking into that. I guess we can directly re-use the showResult function in-place of the gr-zuul-summary-status-view_html.js content. If using polymer template is required, then we can still re-use the the `CI.fromMessages` function exported by the re-gerrit library to count the rechecks and extract build results from the message array. > We would also like to expand the gate testing to better handle testing > plugins. This will involve us automatically adding a repo, sample > change, and Zuul user comments during the gate test. A logical > extension of this would be to take samples using a headless browser > for reporting as artifacts; held nodes can also be used for advanced > debugging. This will give us better confidence as we keep our Gerrit > up-to-date. > > I'd like to solict some feedback on the checks plugin/API, which Zuul > added support for with [8]. My understanding is that this has never > really consolidated upstream and is under redevelopment [9]. I don't > think there's much there for us at this point; even still seeing as we > are so Zuul centric we might be able to do things with a specific > plugin this API will never do. > It seems like the checks API enables a similar user interface where the CI build details are integrated in the tablist along with a chip-view under the commit message. It's unclear what are the requirements but that looks like a good long term solution since we wouldn't need a custom plugin for Zuul. -Tristan > > [1] http://eavesdrop.openstack.org/meetings/infra/2020/infra.2020-12-01-19.01.log.html#l-62 > [2] https://imgur.com/a/uBk2oxQ > [3] https://gerrit-review.googlesource.com/Documentation/pg-plugin-dev.html > [4] https://reasonml.github.io/ > [5] https://github.com/softwarefactory-project/zuul-results-gerrit-plugin/blob/master/src/ZuulResultsPlugin.re#L25 > [6] https://github.com/ianw/gerrit-zuul-summary-status > [7] https://104.130.172.52/c/openstack/diskimage-builder/+/554002 > [8] https://opendev.org/zuul/zuul/commit/e78e948284392477d385d493fc9ec194d544483f > [9] https://www.gerritcodereview.com/design-docs/ci-reboot.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 515 bytes Desc: not available URL: From Arkady.Kanevsky at dell.com Wed Dec 2 19:14:10 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Wed, 2 Dec 2020 19:14:10 +0000 Subject: [Interop] Message-ID: Team, I looked at https://www.openstack.org/brand/interop/. Looks like it had not been touched for a while. It does not list any add-on powered program. But it does list latest approved 2020.06 guidelines. Should we add it a verbiage for orchestration and DNS add-ons? And shortly about file system. 
Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell EMC office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Wed Dec 2 19:32:55 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Wed, 2 Dec 2020 19:32:55 +0000 Subject: [Interop] In-Reply-To: References: Message-ID: A few more things. We have *.next files in interop directory. Does anybody knows what they are used for? It does not look like they have been updated for a while. Thanks, Arkady From: Kanevsky, Arkady Sent: Wednesday, December 2, 2020 1:14 PM To: OpenStack Discuss Subject: [Interop] Team, I looked at https://www.openstack.org/brand/interop/. Looks like it had not been touched for a while. It does not list any add-on powered program. But it does list latest approved 2020.06 guidelines. Should we add it a verbiage for orchestration and DNS add-ons? And shortly about file system. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell EMC office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Wed Dec 2 19:48:52 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 2 Dec 2020 20:48:52 +0100 Subject: [rdo][centos8][repos] CentOS8 RDO-USSURI repo missing openvswitch2.13 Message-ID: Hi all, I have an issue, in CentOS8 repos for RDO. It is refering to openvswitch2.13 name, but in the repos I can see openvswitch packages with version in version field, looks like in dependency field it is missing dash (-) or smth... here is a paste of repos and error message [0]. Or any other way how this could be fixed? thanks. and packages I can see [1] here. [0] http://paste.openstack.org/show/xEalilL1rRJSDTzNCGt1/ [1] http://paste.openstack.org/show/VLWrNFpRLatwq5qXP2kD/ -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed Dec 2 20:21:14 2020 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 2 Dec 2020 13:21:14 -0700 Subject: [rdo][centos8][repos] CentOS8 RDO-USSURI repo missing openvswitch2.13 In-Reply-To: References: Message-ID: It appears to be available in the centos8 nfv repo. Have you tried that? http://mirror.centos.org/centos/8/nfv/ On Wed, Dec 2, 2020 at 12:54 PM Ruslanas Gžibovskis wrote: > > Hi all, > > I have an issue, in CentOS8 repos for RDO. It is refering to openvswitch2.13 name, but in the repos I can see openvswitch packages with version in version field, looks like in dependency field it is missing dash (-) or smth... here is a paste of repos and error message [0]. Or any other way how this could be fixed? thanks. > > and packages I can see [1] here. > > [0] http://paste.openstack.org/show/xEalilL1rRJSDTzNCGt1/ > [1] http://paste.openstack.org/show/VLWrNFpRLatwq5qXP2kD/ > > -- > Ruslanas Gžibovskis > +370 6030 7030 From gmann at ghanshyammann.com Wed Dec 2 22:11:33 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 02 Dec 2020 16:11:33 -0600 Subject: [Interop] In-Reply-To: References: Message-ID: <1762582bca9.b017f5361015014.8504468265850212521@ghanshyammann.com> ---- On Wed, 02 Dec 2020 13:32:55 -0600 Kanevsky, Arkady wrote ---- > > A few more things. > We have *.next files in interop directory. Does anybody knows what they are used for? 
> It does not look like they have been updated for a while. The *.next file is used to prepare the next version of the guidelines. The idea is that whenever new guidelines are approved by the board, we update the latest approved guidelines in *.next so that they can be seeded into the next new guidelines. It seems we missed updating it when the 2020.06.json guidelines were approved. I will update it once 2020.11.json is approved. We should add the new add-ons in https://www.openstack.org/brand/interop/ I also see we need to update a lot of information on the documentation side as well, which is out of date. -gmann > Thanks, > Arkady > > From: Kanevsky, Arkady > Sent: Wednesday, December 2, 2020 1:14 PM > To: OpenStack Discuss > Subject: [Interop] > > Team, > I looked at https://www.openstack.org/brand/interop/. > > Looks like it had not been touched for a while. > > It does not list any add-on powered program. > > But it does list latest approved 2020.06 guidelines. > > > > Should we add it a verbiage for orchestration and DNS add-ons? And shortly about file system. > > > > Thanks, > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell EMC office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > From ykarel at redhat.com Thu Dec 3 05:13:06 2020 From: ykarel at redhat.com (Yatin Karel) Date: Thu, 3 Dec 2020 10:43:06 +0530 Subject: [rdo][centos8][repos] CentOS8 RDO-USSURI repo missing openvswitch2.13 In-Reply-To: References: Message-ID: Hi Ruslanas, On Thu, Dec 3, 2020 at 1:20 AM Ruslanas Gžibovskis wrote: > > Hi all, > > I have an issue, in CentOS8 repos for RDO. It is refering to openvswitch2.13 name, but in the repos I can see openvswitch packages with version in version field, looks like in dependency field it is missing dash (-) or smth... here is a paste of repos and error message [0]. Or any other way how this could be fixed? thanks. > > and packages I can see [1] here. With Ussuri, RDO has switched to the openvswitch2.13 builds from the NFV SIG [1]. You likely have an older centos-release-openstack-ussuri installed; you can confirm with "rpm -q centos-release-openstack-ussuri" (centos-release-openstack-ussuri-1-4.el8.noarch is needed to consume openvswitch2.13 from the NFV builds). So what you need to do is run "sudo dnf update centos-release-openstack-ussuri -y" before triggering "sudo dnf update"; that pulls in centos-release-nfv-openvswitch, which provides openvswitch2.13. Note that the subsequent "sudo dnf update" upgrades openvswitch-2.12 to openvswitch2.13 and leaves the openvswitch service stopped, so an additional step to restart openvswitch is needed. The special treatment of Open vSwitch during a TripleO update is taken care of by [2], so make sure [2] is available in your environment before running a TripleO update or upgrade. The latest tagged release of tripleo-ansible for Ussuri is missing the needed patch [2], so you can apply it manually or request a new release like [3]. If you use any tool other than TripleO, do the update and restart of openvswitch manually or adjust the tool to handle it.
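For a node that is not managed by TripleO, the manual path would be roughly the following untested sketch (the systemd unit name openvswitch is an assumption here, adjust to whatever your environment actually ships):

  # refresh the release package first so that centos-release-nfv-openvswitch gets pulled in
  sudo dnf update -y centos-release-openstack-ussuri
  # now update the rest; openvswitch-2.12 is replaced by openvswitch2.13 during this step
  sudo dnf update -y
  # the package swap leaves the service stopped, so restart it and verify
  sudo systemctl restart openvswitch
  systemctl is-active openvswitch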
[1] https://review.rdoproject.org/r/#/c/30774/ [2] https://review.opendev.org/c/openstack/tripleo-ansible/+/761856 [3] https://review.opendev.org/c/openstack/releases/+/755718/ > > [0] http://paste.openstack.org/show/xEalilL1rRJSDTzNCGt1/ > [1] http://paste.openstack.org/show/VLWrNFpRLatwq5qXP2kD/ > > -- > Ruslanas Gžibovskis > +370 6030 7030 Thanks and Regards Yatin Karel From radoslaw.piliszek at gmail.com Thu Dec 3 09:22:29 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 3 Dec 2020 10:22:29 +0100 Subject: [all] Dynamic Zuul results table in Gerrit 3 Message-ID: Hello Fellow OpenStack and OpenDev Folks! TL;DR click on [3] and enjoy. I am starting this thread to not hijack the discussion happening on [1]. First of all, I would like to thank gibi (Balazs Gibizer) for hacking a way to get the place to render the table in the first place (pun intended). I have been a long-time-now user of [2]. I have improved and customised it for myself but never really got to share back the changes I made. The new Gerrit obviously broke the whole script so it was of no use to share at that particular state. However, inspired by gibi's work, I decided to finally sit down and fix it to work with Gerrit 3 and here it comes: [3]. Works well on Chrome with Tampermonkey. Not tested others. I hope you will enjoy this little helper (I do). I know the script looks super fugly but it generally boils down to a mix of styles of 3 people and Gerrit having funky UI rendering. Finally, I'd also like to thank hrw (Marcin Juszkiewicz) for linking me to the original Michel's script in 2019. [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-November/019051.html [2] https://opendev.org/x/coats/src/commit/444c95738677593dcfed0cfd9667d4c4f0d596a3/coats/openstack_gerrit_zuul_status.user.js [3] https://gist.github.com/yoctozepto/7ea1271c299d143388b7c1b1802ee75e Kind regards, -yoctozepto From ykarel at redhat.com Thu Dec 3 09:43:51 2020 From: ykarel at redhat.com (Yatin Karel) Date: Thu, 3 Dec 2020 15:13:51 +0530 Subject: [all] Dynamic Zuul results table in Gerrit 3 In-Reply-To: References: Message-ID: Hi, On Thu, Dec 3, 2020 at 2:57 PM Radosław Piliszek wrote: > > Hello Fellow OpenStack and OpenDev Folks! > > TL;DR click on [3] and enjoy. > > I am starting this thread to not hijack the discussion happening on [1]. > > First of all, I would like to thank gibi (Balazs Gibizer) for hacking > a way to get the place to render the table in the first place (pun > intended). > > I have been a long-time-now user of [2]. > I have improved and customised it for myself but never really got to > share back the changes I made. > The new Gerrit obviously broke the whole script so it was of no use to > share at that particular state. > However, inspired by gibi's work, I decided to finally sit down and > fix it to work with Gerrit 3 and here it comes: [3]. > Works well on Chrome with Tampermonkey. Not tested others. Thanks, Works for me in firefox with Greasemonkey. The only difference I noticed wrt previous gerrit one is that the current one displays both already available zuul results(from previous run) and the current running one, which is fine. > > I hope you will enjoy this little helper (I do). > > I know the script looks super fugly but it generally boils down to a > mix of styles of 3 people and Gerrit having funky UI rendering. > > Finally, I'd also like to thank hrw (Marcin Juszkiewicz) for linking > me to the original Michel's script in 2019. 
> > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-November/019051.html > [2] https://opendev.org/x/coats/src/commit/444c95738677593dcfed0cfd9667d4c4f0d596a3/coats/openstack_gerrit_zuul_status.user.js > [3] https://gist.github.com/yoctozepto/7ea1271c299d143388b7c1b1802ee75e > > Kind regards, > -yoctozepto > Thanks and Regards Yatin Karel From radoslaw.piliszek at gmail.com Thu Dec 3 09:46:37 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 3 Dec 2020 10:46:37 +0100 Subject: [all] Dynamic Zuul results table in Gerrit 3 In-Reply-To: References: Message-ID: On Thu, Dec 3, 2020 at 10:33 AM Marcin Juszkiewicz wrote: > > W dniu 03.12.2020 o 10:22, Radosław Piliszek pisze: > > Hello Fellow OpenStack and OpenDev Folks! > > > > TL;DR click on [3] and enjoy. > > Rename to *.user.js so Tampermonkey will recognize it as userscript and > add window around with Install button. > I, personally, hate this "feature" of Tampermonkey but sure - if it makes it easier for you. ;-) Done. Anyone - just click on "Raw" and you will be pestered by Tampermonkey to install the script. > > However, inspired by gibi's work, I decided to finally sit down and > > fix it to work with Gerrit 3 and here it comes: [3]. > > Works well on Chrome with Tampermonkey. Not tested others. > > Firefox with Tampermonkey - script works. > Yay, thanks for confirming! > > I hope you will enjoy this little helper (I do). > > Nice job! > Well, thank you! :-) > It does not catch all Zuul jobs. For Kolla patches AArch64 jobs are not > listed. I have already replied on IRC but will repeat here for others. The dynamic part includes all the running queues. However, the static part parses only a single comment and AArch64 is in a different one. I am obviously biased (being x86-only folk) but I find it more problematic than worth it to hack it any further. I hope a proper solution arrives sooner. :-) > > I know the script looks super fugly but it generally boils down to a > > mix of styles of 3 people and Gerrit having funky UI rendering. > > > Finally, I'd also like to thank hrw (Marcin Juszkiewicz) for linking > > me to the original Michel's script in 2019. > > YW. > -yoctozepto From radoslaw.piliszek at gmail.com Thu Dec 3 10:03:42 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 3 Dec 2020 11:03:42 +0100 Subject: [all] Dynamic Zuul results table in Gerrit 3 In-Reply-To: References: Message-ID: On Thu, Dec 3, 2020 at 10:44 AM Yatin Karel wrote: > > Hi, > > On Thu, Dec 3, 2020 at 2:57 PM Radosław Piliszek > wrote: > > > > Hello Fellow OpenStack and OpenDev Folks! > > > > TL;DR click on [3] and enjoy. > > > > I am starting this thread to not hijack the discussion happening on [1]. > > > > First of all, I would like to thank gibi (Balazs Gibizer) for hacking > > a way to get the place to render the table in the first place (pun > > intended). > > > > I have been a long-time-now user of [2]. > > I have improved and customised it for myself but never really got to > > share back the changes I made. > > The new Gerrit obviously broke the whole script so it was of no use to > > share at that particular state. > > However, inspired by gibi's work, I decided to finally sit down and > > fix it to work with Gerrit 3 and here it comes: [3]. > > Works well on Chrome with Tampermonkey. Not tested others. > > Thanks, Works for me in firefox with Greasemonkey. 
The only difference > I noticed wrt previous gerrit one is that the current one displays > both already available zuul results(from previous run) and the current > running one, which is fine. Yay, thanks for confirming. Yes, I obviously forgot all the little changes I made but this is one of them. I somehow like it better like this. The styling is not perfect now to differentiate between the two tables but it can be figured out based on the contents. > > > > I hope you will enjoy this little helper (I do). > > > > > I know the script looks super fugly but it generally boils down to a > > mix of styles of 3 people and Gerrit having funky UI rendering. > > > > Finally, I'd also like to thank hrw (Marcin Juszkiewicz) for linking > > me to the original Michel's script in 2019. > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-November/019051.html > > [2] https://opendev.org/x/coats/src/commit/444c95738677593dcfed0cfd9667d4c4f0d596a3/coats/openstack_gerrit_zuul_status.user.js > > [3] https://gist.github.com/yoctozepto/7ea1271c299d143388b7c1b1802ee75e > > > > Kind regards, > > -yoctozepto > > > > Thanks and Regards > Yatin Karel > -yoctozepto From radoslaw.piliszek at gmail.com Thu Dec 3 12:16:46 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 3 Dec 2020 13:16:46 +0100 Subject: [all] Dynamic Zuul results table in Gerrit 3 In-Reply-To: References: Message-ID: On Thu, Dec 3, 2020 at 11:03 AM Radosław Piliszek wrote: > ... I obviously forgot all the little changes I made ... Just remembered one so writing down for posterity. One of the enhancements is that it supports multiple tenants, i.e. it works for non-openstack/ namespaces, e.g. x/, opendev/ or zuul/ too. -yoctozepto From v at prokofev.me Thu Dec 3 12:47:48 2020 From: v at prokofev.me (Vladimir Prokofev) Date: Thu, 3 Dec 2020 15:47:48 +0300 Subject: [neutron] default(ish) firewall rules Message-ID: Hello. I'm running Queens private cloud with few separate projects inside. Guests in those projects have 2 networks - public, which is a provider network with public IP addresses, and private which is a VXLAN overlay network specific to the project. That's the setup, now here's the issue. They're mostly Windows guests there, and they tend to have browser service enabled on both public and private networks. This leads to situations where guests from one project can see guests in other projects over a public network via NetBIOS/SMB protocols, which is undesirable. I have two partial solutions in mind. Create some default firewall rule, similar to that exists by default for DHCP protocol that prohibit guests to act as DHCP server, but for UDP 137-139 port range. But not only I completely forgot how to do this(I think I saw some documentation about it ~2 years ago), but this will also block said protocol over private networks too, which is not an ideal solution. I would still love it if someone could point me to a proper documentation here. Second option is to add similar entries to security group rules. This will allow public/private interface differentiation by applying different security group to different interfaces, but introduces the possibility for cloud operator to delete those entries(either by mistake, or explicitly) which will lead to protocol being allowed once again. Anyone has any idea of a better solution here? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Thu Dec 3 13:10:29 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 3 Dec 2020 14:10:29 +0100 Subject: [neutron] default(ish) firewall rules In-Reply-To: References: Message-ID: <20201203131029.mbuc5mtzn7opqd4w@p1.localdomain> Hi, On Thu, Dec 03, 2020 at 03:47:48PM +0300, Vladimir Prokofev wrote: > Hello. > > I'm running Queens private cloud with few separate projects inside. Guests > in those projects have 2 networks - public, which is a provider > network with public IP addresses, and private which is a VXLAN overlay > network specific to the project. > > That's the setup, now here's the issue. > They're mostly Windows guests there, and they tend to have browser service > enabled on both public and private networks. This leads to situations where > guests from one project can see guests in other projects over a public > network via NetBIOS/SMB protocols, which is undesirable. > > I have two partial solutions in mind. > Create some default firewall rule, similar to that exists by default for > DHCP protocol that prohibit guests to act as DHCP server, but for UDP > 137-139 port range. > But not only I completely forgot how to do this(I think I saw some > documentation about it ~2 years ago), but this will also block said > protocol over private networks too, which is not an ideal solution. I would > still love it if someone could point me to a proper documentation here. > Second option is to add similar entries to security group rules. This will > allow public/private interface differentiation by applying different > security group to different interfaces, but introduces the possibility for > cloud operator to delete those entries(either by mistake, or explicitly) > which will lead to protocol being allowed once again. If You will add rule to SG as an admin user, then regular users (owners of the SG) will not be able to remove it. But they will still be able to stop using this SG completly. > > Anyone has any idea of a better solution here? What if You would plug those VMs only to the private networks and use Floating IPs to have public connectivity? Would that work for You? -- Slawek Kaplonski Principal Software Engineer Red Hat From mnaser at vexxhost.com Thu Dec 3 13:23:20 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 3 Dec 2020 08:23:20 -0500 Subject: [all] Dynamic Zuul results table in Gerrit 3 In-Reply-To: References: Message-ID: On Thu, Dec 3, 2020 at 4:22 AM Radosław Piliszek wrote: > > Hello Fellow OpenStack and OpenDev Folks! > > TL;DR click on [3] and enjoy. I just set it up and it's awesome. Perhaps this could somehow make it's way into a Gerrit plugin so it can be available for all users. > I am starting this thread to not hijack the discussion happening on [1]. > > First of all, I would like to thank gibi (Balazs Gibizer) for hacking > a way to get the place to render the table in the first place (pun > intended). > > I have been a long-time-now user of [2]. > I have improved and customised it for myself but never really got to > share back the changes I made. > The new Gerrit obviously broke the whole script so it was of no use to > share at that particular state. > However, inspired by gibi's work, I decided to finally sit down and > fix it to work with Gerrit 3 and here it comes: [3]. > Works well on Chrome with Tampermonkey. Not tested others. > > I hope you will enjoy this little helper (I do). 
> > I know the script looks super fugly but it generally boils down to a > mix of styles of 3 people and Gerrit having funky UI rendering. > > Finally, I'd also like to thank hrw (Marcin Juszkiewicz) for linking > me to the original Michel's script in 2019. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-November/019051.html > [2] https://opendev.org/x/coats/src/commit/444c95738677593dcfed0cfd9667d4c4f0d596a3/coats/openstack_gerrit_zuul_status.user.js > [3] https://gist.github.com/yoctozepto/7ea1271c299d143388b7c1b1802ee75e > > Kind regards, > -yoctozepto > -- Mohammed Naser VEXXHOST, Inc. From tdecacqu at redhat.com Thu Dec 3 13:38:48 2020 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 03 Dec 2020 13:38:48 +0000 Subject: [all] Dynamic Zuul results table in Gerrit 3 In-Reply-To: References: Message-ID: <877dpzt3fb.tristanC@fedora> On Thu, Dec 03, 2020 at 10:22 Radosław Piliszek wrote: > Hello Fellow OpenStack and OpenDev Folks! > > TL;DR click on [3] and enjoy. > Hello It seems like this script is injecting build details directly using the innerHTML attribute without filtering html entities, please see the `Security considerations` section of https://developer.mozilla.org/en-US/docs/Web/API/Element/innerHTML -Tristan > I am starting this thread to not hijack the discussion happening on [1]. > > First of all, I would like to thank gibi (Balazs Gibizer) for hacking > a way to get the place to render the table in the first place (pun > intended). > > I have been a long-time-now user of [2]. > I have improved and customised it for myself but never really got to > share back the changes I made. > The new Gerrit obviously broke the whole script so it was of no use to > share at that particular state. > However, inspired by gibi's work, I decided to finally sit down and > fix it to work with Gerrit 3 and here it comes: [3]. > Works well on Chrome with Tampermonkey. Not tested others. > > I hope you will enjoy this little helper (I do). > > I know the script looks super fugly but it generally boils down to a > mix of styles of 3 people and Gerrit having funky UI rendering. > > Finally, I'd also like to thank hrw (Marcin Juszkiewicz) for linking > me to the original Michel's script in 2019. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-November/019051.html > [2] https://opendev.org/x/coats/src/commit/444c95738677593dcfed0cfd9667d4c4f0d596a3/coats/openstack_gerrit_zuul_status.user.js > [3] https://gist.github.com/yoctozepto/7ea1271c299d143388b7c1b1802ee75e > > Kind regards, > -yoctozepto -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 515 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Thu Dec 3 13:52:45 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 3 Dec 2020 14:52:45 +0100 Subject: [all] Dynamic Zuul results table in Gerrit 3 In-Reply-To: <877dpzt3fb.tristanC@fedora> References: <877dpzt3fb.tristanC@fedora> Message-ID: On Thu, Dec 3, 2020 at 2:38 PM Tristan Cacqueray wrote: > > > On Thu, Dec 03, 2020 at 10:22 Radosław Piliszek wrote: > > Hello Fellow OpenStack and OpenDev Folks! > > > > TL;DR click on [3] and enjoy. 
> > > > Hello > > It seems like this script is injecting build details directly using > the innerHTML attribute without filtering html entities, > please see the `Security considerations` section of > > https://developer.mozilla.org/en-US/docs/Web/API/Element/innerHTML Yes, it is a generally valid remark but I consider both Gerrit and Zuul (both of OpenDev) to have the exact same level of trust so did not modify the approach. But yes, for anyone trying to learn best practices from this snippet - please do not, it is far from them. :-) In general this approach is very wasteful as it causes rebuilding (or rather rejoining) and reparsing of html, instead of DOM manipulations. For such a simple table it does not hurt but please do not do it at home. -yoctozepto From skaplons at redhat.com Thu Dec 3 14:01:15 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 3 Dec 2020 15:01:15 +0100 Subject: [neutron] Drivers meeting agenda for 04.12.2020 Message-ID: <20201203140115.r2ojdwxso4ods6kp@p1.localdomain> Hi, Agenda for drivers meeting [1] this week have to new RFEs to discuss: * https://bugs.launchpad.net/neutron/+bug/1905115 * https://bugs.launchpad.net/neutron/+bug/1905295 And there is also "on demand" topic added by Rodolfo: "Smart-Nic Management Overall Design" (Nova spec). Neutron needs to add a new extension to provide to Nova the device profile information. This is a new string parameter added to the port blob and should be included when the port is created. Reference: https://review.opendev.org/c/openstack/nova-specs/+/742785/12/specs/wallaby/approved/support-sriov-smartnic.rst#263 Rodolfo - is Your topic related to the RFE https://bugs.launchpad.net/neutron/+bug/1906602 ? See You tomorrow on the meeting :) [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda -- Slawek Kaplonski Principal Software Engineer Red Hat From ralonsoh at redhat.com Thu Dec 3 14:11:24 2020 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Thu, 3 Dec 2020 15:11:24 +0100 Subject: [neutron] Drivers meeting agenda for 04.12.2020 In-Reply-To: <20201203140115.r2ojdwxso4ods6kp@p1.localdomain> References: <20201203140115.r2ojdwxso4ods6kp@p1.localdomain> Message-ID: Hi Slawek: Yes, [1] is related to this spec. It wasn't created when I added this topic to the agenda. Thanks! [1]https://bugs.launchpad.net/neutron/+bug/1906602 On Thu, Dec 3, 2020 at 3:01 PM Slawek Kaplonski wrote: > Hi, > > Agenda for drivers meeting [1] this week have to new RFEs to discuss: > > * https://bugs.launchpad.net/neutron/+bug/1905115 > * https://bugs.launchpad.net/neutron/+bug/1905295 > > And there is also "on demand" topic added by Rodolfo: > > "Smart-Nic Management Overall Design" (Nova spec). Neutron needs to add a > new > extension to provide to Nova the device profile information. This is a new > string parameter added to the port blob and should be included when the > port is > created. Reference: > > https://review.opendev.org/c/openstack/nova-specs/+/742785/12/specs/wallaby/approved/support-sriov-smartnic.rst#263 > > Rodolfo - is Your topic related to the RFE > https://bugs.launchpad.net/neutron/+bug/1906602 ? > > See You tomorrow on the meeting :) > > [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From v at prokofev.me Thu Dec 3 14:12:31 2020 From: v at prokofev.me (Vladimir Prokofev) Date: Thu, 3 Dec 2020 17:12:31 +0300 Subject: [neutron] default(ish) firewall rules In-Reply-To: <20201203131029.mbuc5mtzn7opqd4w@p1.localdomain> References: <20201203131029.mbuc5mtzn7opqd4w@p1.localdomain> Message-ID: > If You will add rule to SG as an admin user, then regular users (owners of the SG) will not be able to remove it. > But they will still be able to stop using this SG completly. That's a neat trick, didn't know about it, thank you. > What if You would plug those VMs only to the private networks and use Floating IPs to have public connectivity? Would that work for You? That is an excellent solution, seeing as almost every big public cloud provider does it, and it did come to my mind. This was also our initial cloud design back a few years ago. Unfortunately, we had some issues with DDOS attacks back then, that flooded a single IP address, and that attack would completely overwhelm the network node that was terminating that floating IP. This, in turn, led to multiple other projects losing connectivity for the duration of the attack. At the time we looked into other solutions, particularly the one where floating IP terminates on compute node instead of a network node, but were unable to implement it, and switched to a more direct approach with public IPs being assigned directly to guests via a provider network. So this is the best practice, yes, but this will require to rethink and redesign whole cloud, which is not possible at the moment. So I'm looking at some simpler, quick-fix style solution. чт, 3 дек. 2020 г. в 16:10, Slawek Kaplonski : > Hi, > > On Thu, Dec 03, 2020 at 03:47:48PM +0300, Vladimir Prokofev wrote: > > Hello. > > > > I'm running Queens private cloud with few separate projects inside. > Guests > > in those projects have 2 networks - public, which is a provider > > network with public IP addresses, and private which is a VXLAN overlay > > network specific to the project. > > > > That's the setup, now here's the issue. > > They're mostly Windows guests there, and they tend to have browser > service > > enabled on both public and private networks. This leads to situations > where > > guests from one project can see guests in other projects over a public > > network via NetBIOS/SMB protocols, which is undesirable. > > > > I have two partial solutions in mind. > > Create some default firewall rule, similar to that exists by default for > > DHCP protocol that prohibit guests to act as DHCP server, but for UDP > > 137-139 port range. > > But not only I completely forgot how to do this(I think I saw some > > documentation about it ~2 years ago), but this will also block said > > protocol over private networks too, which is not an ideal solution. I > would > > still love it if someone could point me to a proper documentation here. > > Second option is to add similar entries to security group rules. This > will > > allow public/private interface differentiation by applying different > > security group to different interfaces, but introduces the possibility > for > > cloud operator to delete those entries(either by mistake, or explicitly) > > which will lead to protocol being allowed once again. > > If You will add rule to SG as an admin user, then regular users (owners of > the > SG) will not be able to remove it. > But they will still be able to stop using this SG completly. > > > > > Anyone has any idea of a better solution here? 
> > What if You would plug those VMs only to the private networks and use > Floating > IPs to have public connectivity? Would that work for You? > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Thu Dec 3 14:51:05 2020 From: marios at redhat.com (Marios Andreou) Date: Thu, 3 Dec 2020 16:51:05 +0200 Subject: [tripleo] wallaby-1 milestone and open specs Message-ID: Hi TripleO a reminder that this week is the Wallaby-1 milestone. As discussed at the last tripleo irc meeting [1] we should merge all specs by wallaby-1 if we are going to realistically deliver those things during W. Some specs have now merged and if you didn't already do so you can read them at [2]. There are still a few that are in review [3]. Call to all core reviewers please try and find time to review those over the next few days, in particular: network ports v2 @ [4], frrouter/bgp @ [5] and ephemeral heat [6]. Proposers of those specs please try respond quickly to reviewer comments so we can merge asap. thank you marios [1] http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-11-24-14.00.log.html [2] https://opendev.org/openstack/tripleo-specs/src/branch/master/specs/wallaby [3] https://review.opendev.org/q/project:openstack/tripleo-specs+status:open [4] https://review.opendev.org/c/openstack/tripleo-specs/+/760536 [5] https://review.opendev.org/c/openstack/tripleo-specs/+/758249 [6] https://review.opendev.org/c/openstack/tripleo-specs/+/765000 -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Thu Dec 3 15:00:31 2020 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 3 Dec 2020 16:00:31 +0100 Subject: [neutron][stable] Backport of the patch which bumps RPC version In-Reply-To: <22574695.as6R6ROz1X@p1> References: <22574695.as6R6ROz1X@p1> Message-ID: Hi Slawek, Thanks for turning to the mailing list with this issue and for the explanation! In short: from stable core viewpoint I think this case could get an exception and we could merge back patch [3] to Victoria and Ussuri. Here is my thinking: - If I understand correctly, the critical code change ([1]) is already there in many release, in Ussuri (16.0.0, 16.1.0, 16.2.0) and Victoria (17.0.0). So I think it would not be fortunate to revert the code change, because it could cause some similar issue. (RPC change. Am I right?) - The code change is there from Ussuri, which means actually the RPC has changed already, only the versioning is missed. - Also, as you wrote, it would mean easier upgrades to backport patch [3]. - Those, who already upgraded from Train to Ussuri probably bumped into the issue, so in case the change is reverted they will face an issue again. Based on these and what you wrote I think now the less problematic way forward if we take an exception and propose + merge the backports. If anyone has any other opinion or objection or sees any technical issue we haven't thought of, then let us know! Thanks, Előd On 2020. 12. 02. 18:51, Slawek Kaplonski wrote: > Hi, > > Some time ago we backported in Neutron patch [1] which caused bug [2]. > Patch [1] was merged in the Ussuri development cycle so it is already in > Ussuri and Victoria. Our mistake was that we merged it then without bump of > the RPC version and without code which would provide backward compatybility > between old agent and new neutron-server and that's why [2] happens. 
> > Now, as we know that, we have proposed patch [3] to fix it in master branch. > And my question is - can we backport fix [3] to the stable/victoria and stable/ > ussuri to fix the original issue caused by [1] there? > In general I know that we shouldn't do that but here are the reasons why we > would like to do it in this specific case: > - rpc version which we want to change now wasn't changed since Train at least > so there will be no any conflict with that, > - bump rpc version now and provide backward compatybility on neutron-server > will make upgrades Train->Ussuri and minor updates in Ussuri easier as there > will be no similar issue like is described in [2] anymore, > - patch [1] was already included in Ussuri and Victoria from master branch, it > wasn't really cherry-picked there so it actually should be there since the > beginning. > > Based on those reasons mentioned about and also becuase in general it is > forbiden by stable policy to backport changes like that to stable branches I > would like to know opinion from wider community, especially stable-core-maint > team, about what would be the best approach to fix that issue: backport of [3] > to stable/{ussuri,victoria} or revert [1] in stable/{ussuri,victoria}. > > [1] https://review.opendev.org/c/openstack/neutron/+/712632 > [2] https://bugs.launchpad.net/neutron/+bug/1903531 > [3] https://review.opendev.org/c/openstack/neutron/+/764108 > From pbasaras at gmail.com Thu Dec 3 15:00:42 2020 From: pbasaras at gmail.com (Pavlos Basaras) Date: Thu, 3 Dec 2020 17:00:42 +0200 Subject: [ussuri] [neutron] deploy an additional compute node that resides in a different network In-Reply-To: References: Message-ID: Hello, with regard to this issue, the dhcp discover does not reach the dnsmasq at the controller, as the controller is on 10.0.0.11 and the compute node is on 192.168.111.17. With iptables i forward all unicast all traffic to the 10.0.0.11 from the 192.168.111.17 network, so the compute node is visible to the controller, however, since dhcp discover (offer, request,ack) is broadcast, this does not reach the controller network space, and thus there is no ip allocated to the vm on host 192.168.111.17. Is there a way to go around this issue? Do you see any problem with my general setup? all the best, Pavlos On Wed, Dec 2, 2020 at 11:21 AM Pavlos Basaras wrote: > Dear community, > > I am new to openstack, please excuse all newbie questions. > > I am using ubuntu 18 for all elements. > I followed the steps for installing openstack from > https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-ussuri > . > > My setup is based on virtualbox, with mgmt at 10.0.0.0/24 and provider at > 203.0.113.0/24 (host only adapters), as per instructions. > The host of the virtualbox is nating those IPs to the network > 192.168.111.0/24 (gw to internet etc.) > > When i deployed the compute vm at the virtual box e.g., 10.0.0.31, the vms > are deployed successfully, and can successfully launch an instance > at provider(203.0.113.0/24), internal (192.168.10.0/24), and self service > (172.16.1.0/24) networks, with associated floating ips, internet > access etc. > > I want to add a new compute node that resides on a different network for > deploying vms, i.e., 192.168.111.0/24. The virtual box host is on > 192.168.111.15 (this is where the controller vm 10.0.0.11 is deployed ) > and the new compute is 192.168.111.17 directly visible from the virtualbox > host. 
> > For this new node to see the controller i added an iptables rule at > 192.168.111.15 (host of the virtualbox) to forward all traffic from > 192.168.111.17 to the controller 10.0.0.0.11. > Probably this is the wrong way to do it even though the following output > seems ok (5g-cpn1=192.168.111.17) and from horizon i can see the hypervisor > info, and relevant total and used resources when i deploy vms > in 192.168.111.17 (the 5g-cpn1 node) > > openstack network agent list > > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > | ID | Agent Type | Host | > Availability Zone | Alive | State | Binary | > > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > | 2d8c3a89-32c4-4b97-aa4f-ca19db53b24f | L3 agent | controller | > nova | :-) | UP | neutron-l3-agent | > | 35a6b463-7571-4f41-85bc-4c26ef255012 | Linux bridge agent |* 5g-cpn1 * > | None | :-) | UP | neutron-linuxbridge-agent | > | 413cd13d-88d7-45ce-8b2e-26fdb265740f | Metadata agent | controller | > None | :-) | UP | neutron-metadata-agent | > | 42f57bee-63b3-44e6-9392-939ece98719d | Linux bridge agent | compute | > None | :-) | UP | neutron-linuxbridge-agent | > | 4a787a09-04aa-4350-bd32-0c0177ed06a1 | DHCP agent | controller | > nova | :-) | UP | neutron-dhcp-agent | > | 9069e26e-6fef-4b69-9c35-c30ca08377ff | Linux bridge agent | nrUE | > None | XXX | UP | neutron-linuxbridge-agent | > | fdafc337-7581-4ecd-b175-810713a25e1f | Linux bridge agent | controller | > None | :-) | UP | neutron-linuxbridge-agent | > > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > > openstack compute service list > > +----+----------------+------------+----------+---------+-------+----------------------------+ > | ID | Binary | Host | Zone | Status | State | Updated > At | > > +----+----------------+------------+----------+---------+-------+----------------------------+ > | 3 | nova-scheduler | controller | internal | enabled | up | > 2020-12-02T07:21:56.000000 | > | 4 | nova-conductor | controller | internal | enabled | up | > 2020-12-02T07:22:06.000000 | > | 5 | nova-compute | compute | nova | enabled | up | > 2020-12-02T07:22:00.000000 | > | 6 | nova-compute | nrUE | nova | enabled | down | > 2020-11-26T15:59:24.000000 | > | 7 | nova-compute |* 5g-cpn1* | nova | enabled | up | > 2020-12-02T07:22:06.000000 | > > +----+----------------+------------+----------+---------+-------+----------------------------+ > > > My current setup does not include the installation of openvswitch so far > (at either the controller or the new compute node), so the vms (although > deployed successfully) failed to set up networks. > > For setting up openvswitch correct for my setup is this the guilde that i > need to follow?? > https://docs.openstack.org/neutron/ussuri/install/ovn/manual_install.html > ? > > Again, please excuse all newbie (in process of understanding) questions so > far. > > Any advice/directions/guides? > > > all the best, > Pavlos. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Thu Dec 3 15:05:57 2020 From: marios at redhat.com (Marios Andreou) Date: Thu, 3 Dec 2020 17:05:57 +0200 Subject: [tripleo] next meeting Tuesday Dec 08 @ 1400 UTC in #tripleo Message-ID: The next scheduled TripleO irc meeting is ** Tuesday 08th December at 1400 UTC in #tripleo. ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to raise at https://etherpad.opendev.org/p/tripleo-meeting-items This could be anything including review requests, blocking issues or to socialise ongoing or planned work. Our last meeting was held on Nov 24th - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-11-24-14.00.log.html thanks, marios -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Dec 3 15:33:50 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 3 Dec 2020 16:33:50 +0100 Subject: [neutron] default(ish) firewall rules In-Reply-To: References: <20201203131029.mbuc5mtzn7opqd4w@p1.localdomain> Message-ID: <20201203153350.chq2gvgnl34mnmc6@p1.localdomain> Hi, On Thu, Dec 03, 2020 at 05:12:31PM +0300, Vladimir Prokofev wrote: > > If You will add rule to SG as an admin user, then regular users (owners > of the > SG) will not be able to remove it. > > But they will still be able to stop using this SG completly. > > That's a neat trick, didn't know about it, thank you. > > > What if You would plug those VMs only to the private networks and use > Floating IPs to have public connectivity? Would that work for You? > > That is an excellent solution, seeing as almost every big public cloud > provider does it, and it did come to my mind. This was also our initial > cloud design back a few years ago. > Unfortunately, we had some issues with DDOS attacks back then, that flooded > a single IP address, and that attack would completely overwhelm the network > node that was terminating that floating IP. This, in turn, led to multiple > other projects losing connectivity for the duration of the attack. > At the time we looked into other solutions, particularly the one where > floating IP terminates on compute node instead of a network node, but were > unable to implement it, and switched to a more direct approach with public > IPs being assigned directly to guests via a provider network. There is DVR solution in Neutron which distributes traffic which uses Floating IPs to the compute nodes. And now we have also OVN driver which provides distributed routers by default :) > So this is the best practice, yes, but this will require to rethink and > redesign whole cloud, which is not possible at the moment. So I'm looking > at some simpler, quick-fix style solution. I don't know what else I could propose You. Sorry :/ > > чт, 3 дек. 2020 г. в 16:10, Slawek Kaplonski : > > > Hi, > > > > On Thu, Dec 03, 2020 at 03:47:48PM +0300, Vladimir Prokofev wrote: > > > Hello. > > > > > > I'm running Queens private cloud with few separate projects inside. > > Guests > > > in those projects have 2 networks - public, which is a provider > > > network with public IP addresses, and private which is a VXLAN overlay > > > network specific to the project. > > > > > > That's the setup, now here's the issue. > > > They're mostly Windows guests there, and they tend to have browser > > service > > > enabled on both public and private networks. 
This leads to situations > > where > > > guests from one project can see guests in other projects over a public > > > network via NetBIOS/SMB protocols, which is undesirable. > > > > > > I have two partial solutions in mind. > > > Create some default firewall rule, similar to that exists by default for > > > DHCP protocol that prohibit guests to act as DHCP server, but for UDP > > > 137-139 port range. > > > But not only I completely forgot how to do this(I think I saw some > > > documentation about it ~2 years ago), but this will also block said > > > protocol over private networks too, which is not an ideal solution. I > > would > > > still love it if someone could point me to a proper documentation here. > > > Second option is to add similar entries to security group rules. This > > will > > > allow public/private interface differentiation by applying different > > > security group to different interfaces, but introduces the possibility > > for > > > cloud operator to delete those entries(either by mistake, or explicitly) > > > which will lead to protocol being allowed once again. > > > > If You will add rule to SG as an admin user, then regular users (owners of > > the > > SG) will not be able to remove it. > > But they will still be able to stop using this SG completly. > > > > > > > > Anyone has any idea of a better solution here? > > > > What if You would plug those VMs only to the private networks and use > > Floating > > IPs to have public connectivity? Would that work for You? > > > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > > > -- Slawek Kaplonski Principal Software Engineer Red Hat From skaplons at redhat.com Thu Dec 3 15:36:40 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 3 Dec 2020 16:36:40 +0100 Subject: [neutron][stable] Backport of the patch which bumps RPC version In-Reply-To: References: <22574695.as6R6ROz1X@p1> Message-ID: <20201203153640.nlecmopvezschijm@p1.localdomain> Hi, Thx a lot Előd for Your input on this topic. On Thu, Dec 03, 2020 at 04:00:31PM +0100, Előd Illés wrote: > Hi Slawek, > > Thanks for turning to the mailing list with this issue and for the > explanation! > > In short: from stable core viewpoint I think this case could get an > exception and we could merge back patch [3] to Victoria and Ussuri. > > Here is my thinking: > > - If I understand correctly, the critical code change ([1]) is already there > in many release, in Ussuri (16.0.0, 16.1.0, 16.2.0) and Victoria (17.0.0). > So I think it would not be fortunate to revert the code change, because it > could cause some similar issue. (RPC change. Am I right?) > - The code change is there from Ussuri, which means actually the RPC has > changed already, only the versioning is missed. > - Also, as you wrote, it would mean easier upgrades to backport patch [3]. > - Those, who already upgraded from Train to Ussuri probably bumped into the > issue, so in case the change is reverted they will face an issue again. Yes. All those points are correct. > > Based on these and what you wrote I think now the less problematic way > forward if we take an exception and propose + merge the backports. That is our understanding too and that's why I asked for that here :) > > If anyone has any other opinion or objection or sees any technical issue we > haven't thought of, then let us know! > > Thanks, > > Előd > > > On 2020. 12. 02. 18:51, Slawek Kaplonski wrote: > > Hi, > > > > Some time ago we backported in Neutron patch [1] which caused bug [2]. 
> > Patch [1] was merged in the Ussuri development cycle so it is already in > > Ussuri and Victoria. Our mistake was that we merged it then without bump of > > the RPC version and without code which would provide backward compatybility > > between old agent and new neutron-server and that's why [2] happens. > > > > Now, as we know that, we have proposed patch [3] to fix it in master branch. > > And my question is - can we backport fix [3] to the stable/victoria and stable/ > > ussuri to fix the original issue caused by [1] there? > > In general I know that we shouldn't do that but here are the reasons why we > > would like to do it in this specific case: > > - rpc version which we want to change now wasn't changed since Train at least > > so there will be no any conflict with that, > > - bump rpc version now and provide backward compatybility on neutron-server > > will make upgrades Train->Ussuri and minor updates in Ussuri easier as there > > will be no similar issue like is described in [2] anymore, > > - patch [1] was already included in Ussuri and Victoria from master branch, it > > wasn't really cherry-picked there so it actually should be there since the > > beginning. > > > > Based on those reasons mentioned about and also becuase in general it is > > forbiden by stable policy to backport changes like that to stable branches I > > would like to know opinion from wider community, especially stable-core-maint > > team, about what would be the best approach to fix that issue: backport of [3] > > to stable/{ussuri,victoria} or revert [1] in stable/{ussuri,victoria}. > > > > [1] https://review.opendev.org/c/openstack/neutron/+/712632 > > [2] https://bugs.launchpad.net/neutron/+bug/1903531 > > [3] https://review.opendev.org/c/openstack/neutron/+/764108 > > > -- Slawek Kaplonski Principal Software Engineer Red Hat From gfidente at redhat.com Thu Dec 3 15:46:16 2020 From: gfidente at redhat.com (Giulio Fidente) Date: Thu, 3 Dec 2020 16:46:16 +0100 Subject: [tripleo] wallaby-1 milestone and open specs In-Reply-To: References: Message-ID: <4c2993e5-d40c-5b9b-caac-5d789fbf2497@redhat.com> On 12/3/20 3:51 PM, Marios Andreou wrote: > Hi TripleO > > a reminder that this week is the Wallaby-1 milestone. As discussed at > the last tripleo irc meeting [1] we should merge all specs by wallaby-1 > if we are going to realistically deliver those things during W. > > Some specs have now merged and if you didn't already do so you can read > them at [2]. > > There are still a few that are in review [3]. thanks Marios, I'll update the tripleo-ceph-ganesha spec tomorrow; we're excited we'll have the option to deploy ganesha standalone using cephadm directly, this should make integration in tripleo a lot nicer -- Giulio Fidente GPG KEY: 08D733BA From Arkady.Kanevsky at dell.com Thu Dec 3 15:58:21 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 3 Dec 2020 15:58:21 +0000 Subject: [Interop] In-Reply-To: <1762582bca9.b017f5361015014.8504468265850212521@ghanshyammann.com> References: <1762582bca9.b017f5361015014.8504468265850212521@ghanshyammann.com> Message-ID: Glad we agree. Ghanshayam will you update *.next files? Or do you want me to submit these? Suggest we put doc update to this Friday agenda. -----Original Message----- From: Ghanshyam Mann Sent: Wednesday, December 2, 2020 4:12 PM To: Kanevsky, Arkady Cc: OpenStack Discuss Subject: RE: [Interop] [EXTERNAL EMAIL] ---- On Wed, 02 Dec 2020 13:32:55 -0600 Kanevsky, Arkady wrote ---- > > A few more things. 
> We have *.next files in interop directory. Does anybody knows what they are used for? > It does not look like they have been updated for a while. *.next file is used to prepare the next version guidelines. The idea here is whenever the new guidelines are approved by the board then we need to update the latest approved guidelines n *.next so that these can be seeded to the next new guidelines. It seems we missed to update it when 2020.06.json guidelines are approved. I will update it once 2020.11.json is approved. We should add the new adds-on in https://www.openstack.org/brand/interop/ I also see we need to update a lot of information on the documentation side also which is out of date. -gmann > Thanks, > Arkady > > From: Kanevsky, Arkady > Sent: Wednesday, December 2, 2020 1:14 PM > To: OpenStack Discuss > Subject: [Interop] > > Team, > I looked at https://www.openstack.org/brand/interop/. > Looks like it had not been touched for a while. > It does not list any add-on powered program. > But it does list latest approved 2020.06 guidelines. > > Should we add it a verbiage for orchestration and DNS add-ons? And shortly about file system. > > Thanks, > > Arkady Kanevsky, Ph.D. > SP Chief Technologist & DE > Dell EMC office of CTO > Dell Inc. One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 > > From gmann at ghanshyammann.com Thu Dec 3 16:30:58 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 03 Dec 2020 10:30:58 -0600 Subject: [Interop] In-Reply-To: References: <1762582bca9.b017f5361015014.8504468265850212521@ghanshyammann.com> Message-ID: <17629714747.fd183bf018689.4591668109258388891@ghanshyammann.com> ---- On Thu, 03 Dec 2020 09:58:21 -0600 Kanevsky, Arkady wrote ---- > Glad we agree. > Ghanshayam will you update *.next files? Or do you want me to submit these? I can update once 2020.11.json guidelines are approved by the board. As these guidelines are currently in draft state we need to wait to fill these in *.next files. -gmann > > Suggest we put doc update to this Friday agenda. > > > -----Original Message----- > From: Ghanshyam Mann > Sent: Wednesday, December 2, 2020 4:12 PM > To: Kanevsky, Arkady > Cc: OpenStack Discuss > Subject: RE: [Interop] > > > [EXTERNAL EMAIL] > > ---- On Wed, 02 Dec 2020 13:32:55 -0600 Kanevsky, Arkady wrote ---- > > A few more things. > > We have *.next files in interop directory. Does anybody knows what they are used for? > > It does not look like they have been updated for a while. > > *.next file is used to prepare the next version guidelines. The idea here is whenever the new guidelines are approved by the board then we need to update the latest approved guidelines n *.next so that these can be seeded to the next new guidelines. > > It seems we missed to update it when 2020.06.json guidelines are approved. I will update it once 2020.11.json is approved. > > We should add the new adds-on in https://www.openstack.org/brand/interop/ > I also see we need to update a lot of information on the documentation side also which is out of date. > > -gmann > > > Thanks, > > Arkady > > > > From: Kanevsky, Arkady > > Sent: Wednesday, December 2, 2020 1:14 PM > To: OpenStack Discuss > Subject: [Interop] > > Team, > I looked at https://www.openstack.org/brand/interop/. > > Looks like it had not been touched for a while. > > It does not list any add-on powered program. > > But it does list latest approved 2020.06 guidelines. > > > > Should we add it a verbiage for orchestration and DNS add-ons? And shortly about file system. 
> > > > Thanks, > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell EMC office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > > > From stephenfin at redhat.com Thu Dec 3 17:16:47 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 03 Dec 2020 17:16:47 +0000 Subject: [ops][nova][designate] Does anyone rely on fully-qualified instance names? In-Reply-To: References: Message-ID: <15ee1cfee4ec8ac6b108ac49abadb05890bb01eb.camel@redhat.com> On Mon, 2020-11-30 at 11:51 +0000, Stephen Finucane wrote: > When attaching a port to an instance, nova will check for DNS support in neutron > and set a 'dns_name' attribute if found. To populate this attribute, nova uses a > sanitised version of the instance name, stored in the instance.hostname > attribute. This sanitisation simply strips out any unicode characters and > replaces underscores and spaces with dashes, before truncating to 63 characters. > It does not currently replace periods and this is the cause of bug 1581977 [1], > where an instance name such as 'ubuntu20.04' will fail to schedule since neutron > identifies '04' as an invalid TLD. > > The question now is what to do to resolve this. There are two obvious paths > available to us. The first is to simply catch these invalid hostnames and > replace them with an arbitrary hostname of format 'Server-{serverUUID}'. This is > what we currently do for purely unicode instance names and is what I've proposed > at [2]. The other option is to strip all periods, or rather replace them with > hyphens, when sanitizing the instance name. This is more predictable but breaks > the ability to use the instance name as a FQDN. Such usage is something I'm told > we've never supported, but I'm concerned that there are users out there who are > relying on this all the same and I'd like to get a feel for whether this is the > case first. > > So, the question: does anyone currently rely on this inadvertent "feature"? Thanks to everyone who replied to this. We discussed this in today's nova meeting [1] and decided we're okay with changing how we generate instance names, and that we can backport this since there are no guarantees made in either the documentation or API reference as to what a instance's hostname will be and existing instance's won't see their hostname change. There are two options available to us: * Replace periods with dashes This has the best results for people that are naming their instance with FQDNs, since the hostname looks sane. 'test-instance.mydomain.org' -> 'test-instance-mydomain-org' 'ubuntu18.04' -> 'ubuntu18-04' * Strip everything after the first period This has the best results for everyone else, since the hostname better reflects the original display name. 'test-instance.mydomain.org' -> 'test-instance' 'ubuntu18.04' -> 'ubuntu18' If anyone has strong feeling on either approach, please let us know. If not, we'll duke this out ourselves on #openstack-nova next week. Also, as an aside, I think we all realize that long term, the best solution for this would probably be a API change. This would allow us to add an 'openstack server create --hostname' parameter that is correctly validated against the various RFCs. I'm not currently planning to work on this but I'd be happy to assist anyone that was interested in doing so. 
Cheers, Stephen [1] http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-12-03-16.00.log.txt > Cheers, > Stephen > > [1] https://launchpad.net/bugs/1581977 > [2] https://review.opendev.org/c/openstack/nova/+/764482 > From alifshit at redhat.com Thu Dec 3 17:29:57 2020 From: alifshit at redhat.com (Artom Lifshitz) Date: Thu, 3 Dec 2020 12:29:57 -0500 Subject: [ops][nova][designate] Does anyone rely on fully-qualified instance names? In-Reply-To: <15ee1cfee4ec8ac6b108ac49abadb05890bb01eb.camel@redhat.com> References: <15ee1cfee4ec8ac6b108ac49abadb05890bb01eb.camel@redhat.com> Message-ID: So coming in very late here with a more... radical? idea. This is just brainstorming, but here it goes: Neutron explodes when we update the port with an invalid `dns_name` here [1]. The request we send in [1] is populated here [2]. So... why not just *not* do that? IOW, the port will not have a `dns_name` set at all by Nova when we create the VM, and users can use Neutron's port-update API [3] to set the hostname they desire. That way, if Neutron does return a BadRequest, it will really be because the fqdn is invalid, not because Nova tried to be "smart". [1] https://github.com/openstack/nova/blob/stable/train/nova/network/neutronv2/api.py#L1138 [2] https://github.com/openstack/nova/blob/stable/train/nova/network/neutronv2/api.py#L1494 [3] https://docs.openstack.org/api-ref/network/v2/index.html?expanded=update-port-detail#update-port On Thu, Dec 3, 2020 at 12:20 PM Stephen Finucane wrote: > > On Mon, 2020-11-30 at 11:51 +0000, Stephen Finucane wrote: > > When attaching a port to an instance, nova will check for DNS support in neutron > > and set a 'dns_name' attribute if found. To populate this attribute, nova uses a > > sanitised version of the instance name, stored in the instance.hostname > > attribute. This sanitisation simply strips out any unicode characters and > > replaces underscores and spaces with dashes, before truncating to 63 characters. > > It does not currently replace periods and this is the cause of bug 1581977 [1], > > where an instance name such as 'ubuntu20.04' will fail to schedule since neutron > > identifies '04' as an invalid TLD. > > > > The question now is what to do to resolve this. There are two obvious paths > > available to us. The first is to simply catch these invalid hostnames and > > replace them with an arbitrary hostname of format 'Server-{serverUUID}'. This is > > what we currently do for purely unicode instance names and is what I've proposed > > at [2]. The other option is to strip all periods, or rather replace them with > > hyphens, when sanitizing the instance name. This is more predictable but breaks > > the ability to use the instance name as a FQDN. Such usage is something I'm told > > we've never supported, but I'm concerned that there are users out there who are > > relying on this all the same and I'd like to get a feel for whether this is the > > case first. > > > > So, the question: does anyone currently rely on this inadvertent "feature"? > > Thanks to everyone who replied to this. We discussed this in today's nova > meeting [1] and decided we're okay with changing how we generate instance names, > and that we can backport this since there are no guarantees made in either the > documentation or API reference as to what a instance's hostname will be and > existing instance's won't see their hostname change. 
There are two options > available to us: > > * Replace periods with dashes > > This has the best results for people that are naming their instance with > FQDNs, since the hostname looks sane. > > 'test-instance.mydomain.org' -> 'test-instance-mydomain-org' > 'ubuntu18.04' -> 'ubuntu18-04' > > * Strip everything after the first period > > This has the best results for everyone else, since the hostname better > reflects the original display name. > > 'test-instance.mydomain.org' -> 'test-instance' > 'ubuntu18.04' -> 'ubuntu18' > > If anyone has strong feeling on either approach, please let us know. If not, > we'll duke this out ourselves on #openstack-nova next week. > > Also, as an aside, I think we all realize that long term, the best solution for > this would probably be a API change. This would allow us to add an 'openstack > server create --hostname' parameter that is correctly validated against the > various RFCs. I'm not currently planning to work on this but I'd be happy to > assist anyone that was interested in doing so. > > Cheers, > Stephen > > [1] http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-12-03-16.00.log.txt > > > Cheers, > > Stephen > > > > [1] https://launchpad.net/bugs/1581977 > > [2] https://review.opendev.org/c/openstack/nova/+/764482 > > > > > From zigo at debian.org Thu Dec 3 21:09:26 2020 From: zigo at debian.org (Thomas Goirand) Date: Thu, 3 Dec 2020 22:09:26 +0100 Subject: [ops][nova][designate] Does anyone rely on fully-qualified instance names? In-Reply-To: References: Message-ID: On 11/30/20 12:51 PM, Stephen Finucane wrote: > The other option is to strip all periods, or rather replace them with > hyphens, when sanitizing the instance name. This is more predictable but breaks > the ability to use the instance name as a FQDN. Such usage is something I'm told > we've never supported, but I'm concerned that there are users out there who are > relying on this all the same and I'd like to get a feel for whether this is the > case first. Hi, We don't use Designate *yet*, but we're planning to. Using an FQDN for the instance name is what we used to do so far. Even if that's not something that *was* supported, it would IMO be desirable to support it, at least in the future. Just my 2 cents, hoping to help, Cheers, Thomas Goirand (zigo) From tonyliu0592 at hotmail.com Thu Dec 3 21:43:38 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 3 Dec 2020 21:43:38 +0000 Subject: [rdo][centos8] OVS and OVN packages Message-ID: Hi, I see OVN package ovn2.13-central-20.09.0-2.el8.x86_64.rpm and OVS in openvswitch2.13-2.13.0-60.el8.x86_64.rpm [1]. But I can't find them in [2]. Where are they built? Which source package is it based, [3] and [4]? [1] http://mirror.centos.org/centos/8/nfv/x86_64/openvswitch-2/Packages/o/ [2] https://cbs.centos.org/koji/packages [3] https://github.com/ovn-org/ovn/archive/v20.09.0.tar.gz [4] https://github.com/openvswitch/ovs/archive/v2.13.1.tar.gz Thanks! Tony From ykarel at redhat.com Fri Dec 4 07:26:14 2020 From: ykarel at redhat.com (Yatin Karel) Date: Fri, 4 Dec 2020 12:56:14 +0530 Subject: [rdo][centos8] OVS and OVN packages In-Reply-To: References: Message-ID: Hi Tony, These are built in CBS only. ovn2.13:- https://cbs.centos.org/koji/packageinfo?packageID=8195 openvswitch2.13:- https://cbs.centos.org/koji/packageinfo?packageID=8194 These are rebuilts from FDP SRPMS:- ftp://ftp.redhat.com/pub/redhat/linux/enterprise/8Base/en/Fast-Datapath/SRPMS/. 
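(Side note for anyone verifying this on a deployed node: which repository an installed build came from can be checked with something along these lines, assuming the CentOS NFV SIG repo is enabled; exact field names may differ slightly between dnf versions:

    $ dnf info --installed openvswitch2.13 ovn2.13-central | grep -E '^(Name|Version|From repo)'
)
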
Wrt Sources, [3] is correct, and [4] should be https://github.com/openvswitch/ovs/archive/v2.13.0.tar.gz The RPM SPECS can be found in:- https://git.centos.org/rpms/openvswitch/tree/c8-sig-nfv-openvswitch-2.13 https://git.centos.org/rpms/ovn/tree/c8-sig-nfv-openvswitch-2.13 Thanks and Regards Yatin Karel On Fri, Dec 4, 2020 at 3:15 AM Tony Liu wrote: > > Hi, > > I see OVN package ovn2.13-central-20.09.0-2.el8.x86_64.rpm and > OVS in openvswitch2.13-2.13.0-60.el8.x86_64.rpm [1]. > But I can't find them in [2]. > Where are they built? > Which source package is it based, [3] and [4]? > > [1] http://mirror.centos.org/centos/8/nfv/x86_64/openvswitch-2/Packages/o/ > [2] https://cbs.centos.org/koji/packages > [3] https://github.com/ovn-org/ovn/archive/v20.09.0.tar.gz > [4] https://github.com/openvswitch/ovs/archive/v2.13.1.tar.gz > > > Thanks! > Tony > > From hberaud at redhat.com Fri Dec 4 07:56:04 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 4 Dec 2020 08:56:04 +0100 Subject: [release] Release countdown for week R-19, Nov 30 - Dec 04 Message-ID: Greetings Everyone! Development Focus ----------------- We are now past the Wallaby-1 milestone. Teams should now be focused on feature development and completion of release cycle goals [0]. [0] https://governance.openstack.org/tc/goals/selected/wallaby/index.html General Information ------------------- Our next milestone in this development cycle will be Wallaby-2, on 21 January, 2021. This milestone is when we freeze the list of deliverables that will be included in the Wallaby final release, so if you plan to introduce new deliverables in this release, please propose a change to add an empty deliverable file in the deliverables/wallaby directory of the openstack/releases repository. Now is also generally a good time to look at bugfixes that were introduced in the master branch that might make sense to be backported and released in a stable release. If you have any question around the OpenStack release process, feel free to ask on this mailing-list or on the #openstack-release channel on IRC. Upcoming Deadlines & Dates -------------------------- Cinder Spec Freeze: 18 December, 2020 Manila Spec Freeze: 25 December, 2020 Wallaby-2 Milestone: 21 January, 2021 Hervé Beraud (hberaud) and the Release Management Team -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Fri Dec 4 08:20:16 2020 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 4 Dec 2020 09:20:16 +0100 Subject: [ops][nova][designate] Does anyone rely on fully-qualified instance names? 
In-Reply-To: References: Message-ID: To be clear, I think there are confusions in between three names : #1 the instance display name #2 the /etc/hostname #3 the related /etc/hosts name For #1, having a FQDN [1] is OK. Also, as it's an API field, we can't change it or it would need a new microversion. For #2, as Jeremy said, in general you just have the short instance name, not the whole FQDN, so I think it's totally fine to strip the name after the first period (and AFAICT, that's why you see the short name already as the OS cuts it already) For #3, you can *either* have short names or FQDNs but given we see problems with Designate, I'd be telling that we should also strip the name instead of having the whole FQDN, as anyway the domain is not verified by Nova. -Sylvain [1] By FQDN, I mean a name like "instance.tld" where "tld" is "domain[\..*]+" On Thu, Dec 3, 2020 at 10:16 PM Thomas Goirand wrote: > On 11/30/20 12:51 PM, Stephen Finucane wrote: > > The other option is to strip all periods, or rather replace them with > > hyphens, when sanitizing the instance name. This is more predictable but > breaks > > the ability to use the instance name as a FQDN. Such usage is something > I'm told > > we've never supported, but I'm concerned that there are users out there > who are > > relying on this all the same and I'd like to get a feel for whether this > is the > > case first. > > Hi, > > We don't use Designate *yet*, but we're planning to. Using an FQDN for > the instance name is what we used to do so far. Even if that's not > something that *was* supported, it would IMO be desirable to support it, > at least in the future. > > Just my 2 cents, hoping to help, > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arne.wiebalck at cern.ch Fri Dec 4 16:37:31 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Fri, 4 Dec 2020 17:37:31 +0100 Subject: [baremetal-sig][ironic] Tue Dec 8, 2pm UTC: Ironic and Redfish Interop Profiles Message-ID: Dear all, The Bare Metal SIG will meet on Tue Dec 8 at 2pm UTC. This time there will be a 10 minute "topic-of-the-day" presentation by Richard Pioso: 'An Introduction to Ironic Redfish Interoperability Profiles' So, if you would like to learn how to validate if hardware is fit for use with Redfish and Ironic, find all the details for this meeting on the SIG's etherpad: https://etherpad.opendev.org/p/bare-metal-sig Everyone is welcome, don't miss out! Cheers, Arne From juliaashleykreger at gmail.com Fri Dec 4 16:42:49 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 4 Dec 2020 08:42:49 -0800 Subject: [ironic] : Third-party ML2 plugins for Openstack ironic with virtual networks In-Reply-To: <2096960188.3638662.1606926679144@mail.yahoo.com> References: <1287635714.3401651.1606865544414@mail.yahoo.com> <2096960188.3638662.1606926679144@mail.yahoo.com> Message-ID: Greetings Fred, Good to hear from you! I've not heard of anyone using the Mellanox ML2 driver, but it does seem to have VNIC_BAREMETAL support. I have heard some people use the Arista ML2 driver, but haven't really gotten any feedback as of recent. Most operators I speak to have wound up using networking-generic-switch and in some cases even contributing back to it. This is what we use for testing, and while the documentation says not for production use, people do seem to use it for such and seem to be generally happy with it from the feedback I've gotten over the last couple of years. 
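To give a feel for what that looks like in practice, a minimal and purely illustrative ml2 configuration for networking-generic-switch is along these lines (the switch name, device_type and credentials below are placeholders; see the project docs for the device types it actually supports):

    [ml2]
    mechanism_drivers = openvswitch,genericswitch

    [genericswitch:tor-switch-1]
    device_type = netmiko_arista_eos
    ip = 192.0.2.10
    username = neutron
    password = secret
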
Speaking of flat network size, you may also want to explore using conductor groups to possibly consider delineating pools of conductors, which could also increase your operational security if your spanning beyond a single facility. Then again, you may already be doing that. :) Let us know if you have any other questions we can assist with. -Julia On Wed, Dec 2, 2020 at 8:33 AM fsbiz at yahoo.com wrote: > > Is anyone using virtual networks for their Openstack Ironic installations? > > Our flat network is now past 3000 nodes and I am investigating Arista's ML2 > plugin and / or Mellanox's NEO as the ML2 plugin. > > In addition to scaling we also have additional requirements like provisioning a bare-metal server > in a conference room away from the DC for demo purposes. > > I have general questions on whether anyone is actually using the above two (or any other ML2 plugins) > with their Openstack ironic installations ? > > Thanks, > Fred. From tonyliu0592 at hotmail.com Fri Dec 4 17:08:22 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Fri, 4 Dec 2020 17:08:22 +0000 Subject: [rdo][centos8] OVS and OVN packages In-Reply-To: References: Message-ID: Thank you Yatin! Tony > -----Original Message----- > From: Yatin Karel > Sent: Thursday, December 3, 2020 11:26 PM > To: Tony Liu > Cc: centos at centos.org; OpenStack Discuss discuss at lists.openstack.org> > Subject: Re: [rdo][centos8] OVS and OVN packages > > Hi Tony, > > These are built in CBS only. > ovn2.13:- https://cbs.centos.org/koji/packageinfo?packageID=8195 > openvswitch2.13:- https://cbs.centos.org/koji/packageinfo?packageID=8194 > > These are rebuilts from FDP SRPMS:- > ftp://ftp.redhat.com/pub/redhat/linux/enterprise/8Base/en/Fast- > Datapath/SRPMS/. > > Wrt Sources, [3] is correct, and [4] should be > https://github.com/openvswitch/ovs/archive/v2.13.0.tar.gz > > The RPM SPECS can be found in:- > https://git.centos.org/rpms/openvswitch/tree/c8-sig-nfv-openvswitch-2.13 > https://git.centos.org/rpms/ovn/tree/c8-sig-nfv-openvswitch-2.13 > > Thanks and Regards > Yatin Karel > > On Fri, Dec 4, 2020 at 3:15 AM Tony Liu wrote: > > > > Hi, > > > > I see OVN package ovn2.13-central-20.09.0-2.el8.x86_64.rpm and OVS in > > openvswitch2.13-2.13.0-60.el8.x86_64.rpm [1]. > > But I can't find them in [2]. > > Where are they built? > > Which source package is it based, [3] and [4]? > > > > [1] > > http://mirror.centos.org/centos/8/nfv/x86_64/openvswitch-2/Packages/o/ > > [2] https://cbs.centos.org/koji/packages > > [3] https://github.com/ovn-org/ovn/archive/v20.09.0.tar.gz > > [4] https://github.com/openvswitch/ovs/archive/v2.13.1.tar.gz > > > > > > Thanks! > > Tony > > > > From Arkady.Kanevsky at dell.com Fri Dec 4 17:24:42 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 4 Dec 2020 17:24:42 +0000 Subject: [baremetal-sig][ironic] Tue Dec 8, 2pm UTC: Ironic and Redfish Interop Profiles In-Reply-To: References: Message-ID: Dell Customer Communication - Confidential I will miss it due to Foundation board meeting. Looking forward to minutes on it. Best luck to Richard -----Original Message----- From: Arne Wiebalck Sent: Friday, December 4, 2020 10:38 AM To: openstack-discuss Subject: [baremetal-sig][ironic] Tue Dec 8, 2pm UTC: Ironic and Redfish Interop Profiles [EXTERNAL EMAIL] Dear all, The Bare Metal SIG will meet on Tue Dec 8 at 2pm UTC. 
This time there will be a 10 minute "topic-of-the-day" presentation by Richard Pioso: 'An Introduction to Ironic Redfish Interoperability Profiles' So, if you would like to learn how to validate if hardware is fit for use with Redfish and Ironic, find all the details for this meeting on the SIG's etherpad: https://etherpad.opendev.org/p/bare-metal-sig Everyone is welcome, don't miss out! Cheers, Arne From amy at demarco.com Fri Dec 4 18:17:22 2020 From: amy at demarco.com (Amy Marrich) Date: Fri, 4 Dec 2020 12:17:22 -0600 Subject: [Diversity] Diversity & Inclusion Meeting 12/7 Message-ID: The Diversity & Inclusion WG invites members of all OSF projects to our next meeting Monday, December 7th, at 17:00 UTC in the #openstack-diversity channel. The agenda can be found at https://etherpad.openstack.org/p/diversity -wg-agenda. Please feel free to add any other topics you wish to discuss at the meeting. Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From yatindra.shashi at intel.com Fri Dec 4 09:19:02 2020 From: yatindra.shashi at intel.com (Shashi, Yatindra) Date: Fri, 4 Dec 2020 09:19:02 +0000 Subject: devstack multinode: MTU tag not sent in Libvirt XML created from Nova Message-ID: Hi All, I am using devstack-train version. Can you guys tell me which source file or function should I look when I get mtu size in the network-info but not in the libvirtxml parsed like below log Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: DEBUG nova.network.os_vif_util [None req-31d5b683-9e85-455e-95fd-b110de636fe4 admin admin] Converting VIF {"id": "d0358b23-d96b-4394-969d-32858b60e261", "addre ss": "fa:16:3e:46:97:e0", "network": {"id": "991289bb-6794-4673-b32c-e0f2296a8949", "bridge": "br-int", "label": "shared", "subnets": [{"cidr": "192.168.233.0/24", "dns": [], "gateway": {"address": "192.168.233.1 ", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.233.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.16 8.233.2"}}], "meta": {"injected": false, "tenant_id": "ac158184024043faba836ec3d347d0d7", "mtu": 1450, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"connectivity": "l2", "port_filter": true, "ovs_hybrid_plug": false, "datapath_type": "system", "bridge_name": "br-int"}, "devname": "tapd0358b23-d9", "ovs_interfaceid": "d0358b23-d96b-4394-969d-32858b60e261", "qbh_params": null, "qbg_params": null , "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "meta": {}} {{(pid=8579) nova_to_osvif_vif /opt/stack/nova/nova/network/os_vif_util.py:516}} Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: DEBUG nova.network.os_vif_util [None req-31d5b683-9e85-455e-95fd-b110de636fe4 admin admin] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:97: e0,bridge_name='br-int',has_traffic_filtering=True,id=d0358b23-d96b-4394-969d-32858b60e261,network=Network(991289bb-6794-4673-b32c-e0f2296a8949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_del ete=False,vif_name='tapd0358b23-d9') {{(pid=8579) nova_to_osvif_vif /opt/stack/nova/nova/network/os_vif_util.py:553}} Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: DEBUG nova.objects.instance [None req-31d5b683-9e85-455e-95fd-b110de636fe4 admin admin] Lazy-loading 'pci_devices' on Instance uuid 997103d3-0a60-4ce2-a85d-c42 eb24cfd8b {{(pid=8579) obj_load_attr /opt/stack/nova/nova/objects/instance.py:1090}} Dez 04 10:11:22 
acrn1-NUC7i7DNHE nova-compute[8579]: DEBUG nova.virt.libvirt.driver [None req-31d5b683-9e85-455e-95fd-b110de636fe4 admin admin] [instance: 997103d3-0a60-4ce2-a85d-c42eb24cfd8b] End _get_guest_xml xml= Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: 997103d3-0a60-4ce2-a85d-c42eb24cfd8b Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: instance-00000001 Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: 1048576 Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: 1 Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: cl_acrn Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: 2020-12-04 09:11:22 Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: 1024 Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: 10 Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: 0 Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: 0 Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: 1 Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: admin Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: admin Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: hvm Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: /usr/share/acrn/bios/OVMF.fd Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: --> Why no mtu set in xml Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: Dez 04 10:11:22 acrn1-NUC7i7DNHE nova-compute[8579]: : I checked the solution mentioned in [ https://bugs.launchpad.net/nova/+bug/1747496] and the source code which it said to add is already in my nova code. Anyone have any suggestions? 
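For reference, when the MTU does get propagated, libvirt expresses it as a per-interface element of this shape inside the <interface> block (values illustrative):

    <interface type="...">
      ...
      <mtu size="1450"/>
    </interface>

A quick way to check a running guest:

    $ sudo virsh dumpxml instance-00000001 | grep -B2 -A2 mtu
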
Mit freundlichen Grüßen/ with best regards, Yatindra Shashi IoTG DE- Intel Corporation Munich, Germany Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhanesh1212 at gmail.com Fri Dec 4 11:23:24 2020 From: dhanesh1212 at gmail.com (dhanesh1212121212) Date: Fri, 4 Dec 2020 16:53:24 +0530 Subject: openstack read only user Message-ID: Hi Team, Please let me know the steps to create a read only user in openstack. (My version is Rocky) Regards, Dhanesh M. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Fri Dec 4 19:33:00 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Fri, 4 Dec 2020 20:33:00 +0100 Subject: [TripleO] how to make that inspection IP is given only to known hosts Message-ID: Hi all, I have a situation, when in my network, I have loads of equipment, which I do not control. and Inspection range gets occupied quite fast. and in TCP dump I get such messages: DHCP-Message Option 53, length 1: NACK Server-ID Option 54, length 4: DHCPD-IP MSG Option 56, length 21: "address not available" I have disabled: enabled_node_discovery = false Anything else? maybe additional environment options for undercloud I could provide? Than kyou in advance, have a good $day_time -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rfolco at redhat.com Fri Dec 4 20:31:26 2020 From: rfolco at redhat.com (Rafael Folco) Date: Fri, 4 Dec 2020 18:31:26 -0200 Subject: [tripleo] TripleO CI Summary: Unified Sprint 36 Message-ID: Greetings, The TripleO CI team has just completed **Unified Sprint 36** (Nov 13 thru Dec 03). The following is a summary of completed work during this sprint cycle: - Added Tempest cleanup support: https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/762405/ - Ported jobs running against tripleo-ansible stable/train to centos-8 - ansible-role-collect-logs role is packaged now and available for train onwards - All the jobs are migrated to vexxhost and rdo-cloud nodeset is - removed - Continued merging changes to wire-up content provider jobs across TripleO related projects to remove docker.io dependencies: https://review.opendev.org/#/q/topic:new-ci-job - Designed a dependency pipeline to early detect breakages in the base OS: https://hackmd.io/14KFQiyWSBCRsfJmBZNARw?both - component standalone-upgrade - Enable the component-ci-testing repo if it is installed: https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/764767 - https://review.rdoproject.org/r/#/c/31200/ Adds tripleo component standalone-upgrade-master/victoria/ussuri - Redundant content providers for the upgrade jobs to reduce resource usage: https://review.opendev.org/761188 - Tempest scenario manager improvements: 1. https://review.opendev.org/c/openstack/tempest/+/753689 2. https://review.opendev.org/c/openstack/tempest/+/754081 3. 
https://review.opendev.org/c/openstack/tempest/+/755072 - Promoter configuration fix: https://review.rdoproject.org/r/#/c/28014/ - Created a new job in downstream for 16.2 component pipeline which deploys multiple overcloud stack using a single undercloud node: - https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/763786 - https://review.opendev.org/c/openstack/tripleo-quickstart/+/761061 - Added OSP 13 ansible openstack collection tests job - Wrote test cases for cockpit (telegraf container bump to python3): https://review.rdoproject.org/r/#/c/30492/ - Removed ubi-8 based jobs: https://review.opendev.org/760426 - Ansible-collections-openstack improvements: 1. ttps://review.opendev.org/c/openstack/ansible-collections-openstack/+/764379 2. https://review.opendev.org/c/openstack/ansible-collections-openstack/+/764381 3. https://review.opendev.org/c/openstack/ansible-collections-openstack/+/764411 4. https://review.opendev.org/c/openstack/ansible-collections-openstack/+/764060 Upstream promotions [1] Ruck/Rover recorded notes [2]. NEXT SPRINT =========== The planned work for the next sprint is to continue work started in the previous sprint: The Ruck and Rover shifts for the upcoming weeks are scheduled as follows: Dec 7: Sorin Sbarnea (zbr), Marios Andreou (marios) Dec 14: Chandan Kumar (chandankumar), Amol Kahat (akahat) Dec 21: Bhagyashri Shewale (bhagyashris), Soniya Vyas (soniya29) Dec 28: Sandeep Yadav (ysandeep), Sagi Shnaidman (sshnaidm) Jan 4: Wesley Hayutin (weshay), Poja Jadhav (pojadhav) Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in hackmd [2]. Thanks, --rfolco [1] http://dashboard-ci.tripleo.org/d/HkOLImOMk/upstream-and-rdo-promotions?orgId=1&fullscreen&panelId=33 [2] https://hackmd.io/R0kCgz_7SHSix_cNgoC9pg -- Folco -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sat Dec 5 02:52:06 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sat, 5 Dec 2020 02:52:06 +0000 Subject: tox -e pep8 Message-ID: Do we still using it? If not, what have we replaced it with? Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell EMC office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Sat Dec 5 03:06:08 2020 From: iurygregory at gmail.com (Iury Gregory) Date: Sat, 5 Dec 2020 04:06:08 +0100 Subject: tox -e pep8 In-Reply-To: References: Message-ID: Hi Arkady, Yes the projects are using pep8, you don't need to specify in the zuul file if you have the template `openstack-python3--jobs`. In ironic for example: https://opendev.org/openstack/ironic/src/branch/master/zuul.d/project.yaml#L6 The template is defined in https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/project-templates.yaml#L479-L498 Em sáb., 5 de dez. de 2020 às 03:55, Kanevsky, Arkady < Arkady.Kanevsky at dell.com> escreveu: > Do we still using it? > > If not, what have we replaced it with? > > > > Thanks, > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell EMC office of CTO > > Dell Inc. 
One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Dec 5 03:44:01 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 5 Dec 2020 03:44:01 +0000 Subject: tox -e pep8 In-Reply-To: References: Message-ID: <20201205034400.rmnohg3z3tfkuiyn@yuggoth.org> On 2020-12-05 02:52:06 +0000 (+0000), Kanevsky, Arkady wrote: > Do we still using it? > If not, what have we replaced it with? Most projects do still have a "pep8" tox testenv, however these days it usually invokes the flake8 utility which calls pycodestyle (the successor of the old pep8 utility) as one of multiple plugins. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ssbarnea at redhat.com Sat Dec 5 07:39:23 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Sat, 5 Dec 2020 07:39:23 +0000 Subject: tox -e pep8 In-Reply-To: <20201205034400.rmnohg3z3tfkuiyn@yuggoth.org> References: <20201205034400.rmnohg3z3tfkuiyn@yuggoth.org> Message-ID: My impression was that the newer recommended tox environment was “linters’ and it would decouple the implementation from the process name, making easy for each project too adapt their linters based on their needs. A grep on codesearch could show how popular is each. I think that one of the reasons many projects were not converted is because job is defined by a shared template and making a bulk transition requires a lot of effort. I am wondering if we could use a trick to easy this kind of migration: make zuul job detect which environment is present and call it. Basically we can have a generic zuul linter that calls either pep8 or linters tox end. We can go even further and make it call “yarn lint” if found. On Sat, 5 Dec 2020 at 03:48 Jeremy Stanley wrote: > On 2020-12-05 02:52:06 +0000 (+0000), Kanevsky, Arkady wrote: > > Do we still using it? > > If not, what have we replaced it with? > > Most projects do still have a "pep8" tox testenv, however these days > it usually invokes the flake8 utility which calls pycodestyle (the > successor of the old pep8 utility) as one of multiple plugins. > -- > Jeremy Stanley > -- -- /sorin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Sat Dec 5 17:04:08 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sat, 5 Dec 2020 18:04:08 +0100 Subject: openstack read only user In-Reply-To: References: Message-ID: hi Dhanesh. At least in Newton you had to create a new role, and create a new policy.json for each service (nova, neutron, glance, and so on) for that role, and assign user to that group. but in Queens , I saw it was looking like working, and itm ight have something like that by default (I mean role). On Fri, 4 Dec 2020 at 20:01, dhanesh1212121212 wrote: > Hi Team, > > Please let me know the steps to create a read only user in openstack. (My > version is Rocky) > > Regards, > Dhanesh M. > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ruslanas at lpic.lt Sat Dec 5 17:06:21 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sat, 5 Dec 2020 18:06:21 +0100 Subject: [rdo][centos8][repos] CentOS8 RDO-USSURI repo missing openvswitch2.13 In-Reply-To: References: Message-ID: Yes, Thank you Alex and Yatin, Yes I have installed it a while ago. :) Yup it works now with what you have advised me. On Thu, 3 Dec 2020 at 06:13, Yatin Karel wrote: > Hi Ruslanas, > > On Thu, Dec 3, 2020 at 1:20 AM Ruslanas Gžibovskis > wrote: > > > > Hi all, > > > > I have an issue, in CentOS8 repos for RDO. It is refering to > openvswitch2.13 name, but in the repos I can see openvswitch packages with > version in version field, looks like in dependency field it is missing dash > (-) or smth... here is a paste of repos and error message [0]. Or any other > way how this could be fixed? thanks. > > > > and packages I can see [1] here. > > With Ussuri, RDO has switched to utilize openvswitch2.13 builds from > NFV SIG[1], You likely have older centos-release-openstack-ussuri > installed, can confirm with rpm -q > > centos-release-openstack-ussuri(centos-release-openstack-ussuri-1-4.el8.noarch > is needed to utilize openvswitch2.13 from NFV builds), > > So what you need to do is to run "sudo dnf update > centos-release-openstack-ussuri -y" before triggering sudo dnf update, > which will pull centos-release-nfv-openvswitch which provides > openswitch2.13. > > Also with sudo dnf update, openvswitch-2.12 will be upgraded to > openvswitch2.13 and will result in openvswitch service to be stopped, > so additional steps to restart of openvswitch would be needed. > Special Treatment for OpenvSwitch as part of TripleO update is being > taken care with [2], so you need to ensure if [2] is available in your > environment before running TripleO update or upgrade. The latest > tagged release of tripleo-ansible for ussuri is missing the needed > patch [2], so you can patch that manually or request new releases like > [3]. > > If any other tool apart from TripleO is used then you can use manual > update/restart of openvswitch or adjust the tool to handle that. > > > > [1] https://review.rdoproject.org/r/#/c/30774/ > [2] https://review.opendev.org/c/openstack/tripleo-ansible/+/761856 > [3] https://review.opendev.org/c/openstack/releases/+/755718/ > > > > > [0] http://paste.openstack.org/show/xEalilL1rRJSDTzNCGt1/ > > [1] http://paste.openstack.org/show/VLWrNFpRLatwq5qXP2kD/ > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > Thanks and Regards > Yatin Karel > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Sat Dec 5 17:08:08 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sat, 5 Dec 2020 18:08:08 +0100 Subject: [tripleo] Deployment update (node addition) after changing aggregate groups/zones In-Reply-To: References: Message-ID: Hi all, Any thoughts on this one? > Hi all. >> >> After changing the host aggregate group and zone, I cannot run OpenStack >> deploy command successfully again, even after updating deployment >> environment files according to my setup. 
>> >> I receive error bigger one in [0]: >> 2020-12-02 10:16:18.532419 | 52540000-0001-cf95-492f-0000000003ca | >> FATAL | Nova: Manage aggregate and availability zone and add hosts to the >> zone | undercloud | error={"changed": false, "msg": "ConflictException: >> 409: Client Error for url: >> http://10.120.129.199:8774/v2.1/os-aggregates/1/action, Cannot add host >> to aggregate 1. Reason: One or more hosts already in availability zone(s) >> ['Alpha01']."} >> >> I was following this link [1] instructions for "Configuring Availability >> Zones (AZ)" steps to modify with OpenStack commands. And zone was created >> successfully, but when I needed to add additional nodes, executed >> deployment again with increased numbers it was complaining about an >> incorrect aggregate zone, and now it is complaining about not empty zone >> with error [0] mentioned above. I have added aggregate zones into >> deployment files even role file... any ideas? >> >> Also, I think, this should be mentioned, that added it after install, you >> lose the possibility to update using tripleo tool and you will need to >> modify environment files with. >> >> >> >> [0] http://paste.openstack.org/show/800622/ >> [1] >> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html#configuring-availability-zones-az >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Sat Dec 5 17:24:18 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 5 Dec 2020 12:24:18 -0500 Subject: openstack read only user In-Reply-To: References: Message-ID: As far as I know, the support for a read only user is not complete in Queens or Rocky. http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017556.html On Sat, Dec 5, 2020 at 12:16 PM Ruslanas Gžibovskis wrote: > hi Dhanesh. > > At least in Newton you had to create a new role, and create a new > policy.json for each service (nova, neutron, glance, and so on) for that > role, and assign user to that group. > > but in Queens , I saw it was looking like working, and itm ight have > something like that by default (I mean role). > > > On Fri, 4 Dec 2020 at 20:01, dhanesh1212121212 > wrote: > >> Hi Team, >> >> Please let me know the steps to create a read only user in openstack. (My >> version is Rocky) >> >> Regards, >> Dhanesh M. >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sat Dec 5 19:18:27 2020 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 5 Dec 2020 14:18:27 -0500 Subject: octavia flavor question Message-ID: Folks, I am deployed Octavia and now trying to understand how does flavor play there role using this guide https://docs.openstack.org/octavia/latest/admin/flavors.html In my octavia.conf i have statically defined nova flavor so when i spin up LB it always use m1.amphora (UUID: 8651951f-e478-4efb-b298-cf3150d38472) which has 1vCPU/2GB memory. # grep amp_flavor_id /etc/octavia/octavia.conf amp_flavor_id = 8651951f-e478-4efb-b298-cf3150d38472 Question: how do I create or tell octavia to use multiple nova flavors for example, I want 3 flavors m1.amphora - 1vCPU/2GB/20GB disk m2.amphora - 4vCPU/4GB/20GB disk m3.amphora - 8vCPU/8GB/40GB disk (with HugePage and CPUpinning support) I want my users to have the option to select whatever flavor they like. how does that fit here? 
I didn't find any document explaining that feature to select your desired nova flavor. Did I miss something? From anlin.kong at gmail.com Sun Dec 6 00:40:29 2020 From: anlin.kong at gmail.com (Lingxian Kong) Date: Sun, 6 Dec 2020 13:40:29 +1300 Subject: octavia flavor question In-Reply-To: References: Message-ID: I'm on a devstack host with Octavia master. By running "openstack loadbalancer provider capability list amphora", I can see "compute_flavor" of type "flavor", so that I can specify a nova flavor in "--flavor-data" parameter when creating flavorprofile. A full command line example: https://dpaste.com/4CB8GE2RN --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz On Sun, Dec 6, 2020 at 8:23 AM Satish Patel wrote: > Folks, > > I am deployed Octavia and now trying to understand how does flavor > play there role using this guide > https://docs.openstack.org/octavia/latest/admin/flavors.html > > > In my octavia.conf i have statically defined nova flavor so when i > spin up LB it always use m1.amphora (UUID: > 8651951f-e478-4efb-b298-cf3150d38472) which has 1vCPU/2GB memory. > > # grep amp_flavor_id /etc/octavia/octavia.conf > amp_flavor_id = 8651951f-e478-4efb-b298-cf3150d38472 > > Question: how do I create or tell octavia to use multiple nova flavors > for example, I want 3 flavors > > m1.amphora - 1vCPU/2GB/20GB disk > m2.amphora - 4vCPU/4GB/20GB disk > m3.amphora - 8vCPU/8GB/40GB disk (with HugePage and CPUpinning support) > > I want my users to have the option to select whatever flavor they > like. how does that fit here? I didn't find any document explaining > that feature to select your desired nova flavor. Did I miss something? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sun Dec 6 03:01:00 2020 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 5 Dec 2020 22:01:00 -0500 Subject: octavia flavor question In-Reply-To: References: Message-ID: Thank you so much, that is what I was looking for. On Sat, Dec 5, 2020 at 7:40 PM Lingxian Kong wrote: > > I'm on a devstack host with Octavia master. By running "openstack loadbalancer provider capability list amphora", I can see "compute_flavor" of type "flavor", so that I can specify a nova flavor in "--flavor-data" parameter when creating flavorprofile. > > A full command line example: https://dpaste.com/4CB8GE2RN > > --- > Lingxian Kong > Senior Software Engineer > Catalyst Cloud > www.catalystcloud.nz > > > On Sun, Dec 6, 2020 at 8:23 AM Satish Patel wrote: >> >> Folks, >> >> I am deployed Octavia and now trying to understand how does flavor >> play there role using this guide >> https://docs.openstack.org/octavia/latest/admin/flavors.html >> >> >> In my octavia.conf i have statically defined nova flavor so when i >> spin up LB it always use m1.amphora (UUID: >> 8651951f-e478-4efb-b298-cf3150d38472) which has 1vCPU/2GB memory. >> >> # grep amp_flavor_id /etc/octavia/octavia.conf >> amp_flavor_id = 8651951f-e478-4efb-b298-cf3150d38472 >> >> Question: how do I create or tell octavia to use multiple nova flavors >> for example, I want 3 flavors >> >> m1.amphora - 1vCPU/2GB/20GB disk >> m2.amphora - 4vCPU/4GB/20GB disk >> m3.amphora - 8vCPU/8GB/40GB disk (with HugePage and CPUpinning support) >> >> I want my users to have the option to select whatever flavor they >> like. how does that fit here? I didn't find any document explaining >> that feature to select your desired nova flavor. Did I miss something? 
>> From skaplons at redhat.com Sun Dec 6 09:42:57 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sun, 6 Dec 2020 10:42:57 +0100 Subject: [ci] Kernel panics in the guest vm Message-ID: <20201206094257.emilk2n7j6rgdcyi@p1.localdomain> Hi, Since some time I noticed that quite often some scenario jobs are failing due to issue with SSH to the guest vm and when I was checking the reason of this SSH failure, it seems that it's due to Kernel panic in the guest vm, like e.g. [1]: [ 0.000000] Console: colour VGA+ 80x25 [ 0.000000] printk: console [tty1] enabled [ 0.000000] printk: console [ttyS0] enabled [ 0.000000] ACPI: Core revision 20190703 [ 0.000000] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns [ 0.000000] APIC: Switch to symmetric I/O mode setup [ 0.000000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 0.000000] ..MP-BIOS bug: 8254 timer not connected to IO-APIC [ 0.000000] ...trying to set up timer (IRQ0) through the 8259A ... [ 0.000000] ..... (found apic 0 pin 2) ... [ 0.000000] ....... failed. [ 0.000000] ...trying to set up timer as Virtual Wire IRQ... [ 0.000000] ..... failed. [ 0.000000] ...trying to set up timer as ExtINT IRQ... [ 0.000000] ..... failed :(. [ 0.000000] Kernel panic - not syncing: IO-APIC + timer doesn't work! Boot with apic=debug and send a report. Then try booting with the 'noapic' option. [ 0.000000] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.3.0-26-generic #28~18.04.1-Ubuntu [ 0.000000] Hardware name: OpenStack Foundation OpenStack Nova, BIOS 1.13.0-1ubuntu1 04/01/2014 [ 0.000000] Call Trace: [ 0.000000] dump_stack+0x6d/0x95 [ 0.000000] panic+0xfe/0x2d4 [ 0.000000] check_timer+0x5e8/0x685 [ 0.000000] ? radix_tree_lookup+0xd/0x10 [ 0.000000] setup_IO_APIC+0x182/0x1ca [ 0.000000] apic_intr_mode_init+0x1f5/0x1f8 [ 0.000000] x86_late_time_init+0x1b/0x22 [ 0.000000] start_kernel+0x4cb/0x58b [ 0.000000] x86_64_start_reservations+0x24/0x26 [ 0.000000] x86_64_start_kernel+0x74/0x77 [ 0.000000] secondary_startup_64+0xa4/0xb0 [ 0.000000] ---[ end Kernel panic - not syncing: IO-APIC + timer doesn't work! Boot with apic=debug and send a report. Then try booting with the 'noapic' option. ]--- Logstash [2] is telling me that it is problem not only in neutron related jobs. Maybe someone of You was already trying to investigate such issue and maybe You have some ideas what we can do with it? In this specific example above [1], it was Cirros 0.5.1 image used. But I didn't check if that is the case in all other cases TBH. [1] https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c50/764921/1/gate/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid/c501b2c/testr_results.html [2] http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Kernel%20panic%20-%20not%20syncing%3A%20IO-APIC%20%2B%20timer%20doesn't%20work!%20%20Boot%20with%20apic%3Ddebug%20and%20send%20a%20report.%20%20Then%20try%20booting%20with%20the%20'noapic'%20option.%5C%22 -- Slawek Kaplonski Principal Software Engineer Red Hat From ruslanas at lpic.lt Sun Dec 6 11:24:51 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sun, 6 Dec 2020 12:24:51 +0100 Subject: [tripleo][ussuri] overcloud deploy increase timeout/wai time to load initramfs to deploy os Message-ID: Hi all, To deploy initramfs takes longer time (around 17 min). And while it is loading, overcloud deploy sais: ' Went to status ERROR due to "Message: No valid host was found. , Code: 500" '. 
the remote compute is in a different continent, so speed is quite slow. is it possible to increase wait time? Thank you. Regards. -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sun Dec 6 17:31:03 2020 From: zigo at debian.org (Thomas Goirand) Date: Sun, 6 Dec 2020 18:31:03 +0100 Subject: octavia flavor question In-Reply-To: References: Message-ID: On 12/5/20 8:18 PM, Satish Patel wrote: > Folks, > > I am deployed Octavia and now trying to understand how does flavor > play there role using this guide > https://docs.openstack.org/octavia/latest/admin/flavors.html > > > In my octavia.conf i have statically defined nova flavor so when i > spin up LB it always use m1.amphora (UUID: > 8651951f-e478-4efb-b298-cf3150d38472) which has 1vCPU/2GB memory. > > # grep amp_flavor_id /etc/octavia/octavia.conf > amp_flavor_id = 8651951f-e478-4efb-b298-cf3150d38472 > > Question: how do I create or tell octavia to use multiple nova flavors > for example, I want 3 flavors > > m1.amphora - 1vCPU/2GB/20GB disk > m2.amphora - 4vCPU/4GB/20GB disk > m3.amphora - 8vCPU/8GB/40GB disk (with HugePage and CPUpinning support) > > I want my users to have the option to select whatever flavor they > like. how does that fit here? I didn't find any document explaining > that feature to select your desired nova flavor. Did I miss something? Hi, I'm not sure how to do what you wrote above, but what I know, is that what Octavia recommends isn't enough. For very busy sites, I would recommend 4GB of RAM, and I would strongly recommend having logrotate to rotate the logs every hours, plus not keep too many logs. I've done that because otherwise, we saw failures in the Amphora (no remaining HDD space, etc.). Though, in such setup, 4GB of HDD space is enough (instead of 2GB, which is what was recommended). You're pushing the limit to 20GB, that's really too much. I hope this helps, Cheers, Thomas Goirand (zigo) From Arkady.Kanevsky at dell.com Sun Dec 6 19:58:08 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Sun, 6 Dec 2020 19:58:08 +0000 Subject: tox -e pep8 In-Reply-To: References: <20201205034400.rmnohg3z3tfkuiyn@yuggoth.org> Message-ID: Thanks all for help. From: Sorin Sbarnea Sent: Saturday, December 5, 2020 1:39 AM To: Jeremy Stanley Cc: openstack-discuss at lists.openstack.org Subject: Re: tox -e pep8 [EXTERNAL EMAIL] My impression was that the newer recommended tox environment was “linters’ and it would decouple the implementation from the process name, making easy for each project too adapt their linters based on their needs. A grep on codesearch could show how popular is each. I think that one of the reasons many projects were not converted is because job is defined by a shared template and making a bulk transition requires a lot of effort. I am wondering if we could use a trick to easy this kind of migration: make zuul job detect which environment is present and call it. Basically we can have a generic zuul linter that calls either pep8 or linters tox end. We can go even further and make it call “yarn lint” if found. On Sat, 5 Dec 2020 at 03:48 Jeremy Stanley > wrote: On 2020-12-05 02:52:06 +0000 (+0000), Kanevsky, Arkady wrote: > Do we still using it? > If not, what have we replaced it with? Most projects do still have a "pep8" tox testenv, however these days it usually invokes the flake8 utility which calls pycodestyle (the successor of the old pep8 utility) as one of multiple plugins. 
-- Jeremy Stanley -- -- /sorin -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcafarel at redhat.com Mon Dec 7 09:25:36 2020 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 7 Dec 2020 10:25:36 +0100 Subject: [neutron] Bug deputy report (week starting on 2020-11-30) Message-ID: Hi neutrinos, time to start last bug deputy rotation for 2020, here are the bugs reported in week 49. A pretty normal range of bugs from CI fixes to scaling enhancements, most in progress. Two are unassigned right now: OVN tempest failure ( https://bugs.launchpad.net/neutron/+bug/1906490) and DHCP agent leases fix with segments (https://bugs.launchpad.net/neutron/+bug/1906406) New RFE: * [RFE] add new extension "device-profile" for port - https://bugs.launchpad.net/neutron/+bug/1906602 New extension for device_profile notion of Cyborg project - already discussed and approved Critical * oom killer kills mysqld process on the node running fullstack tests - https://bugs.launchpad.net/neutron/+bug/1906366 Patch to limit resources used in FT tests: https://review.opendev.org/c/openstack/neutron/+/764907 * SSH failures in the neutron-ovn-tempest-ovs-release-ipv6-only job - https://bugs.launchpad.net/neutron/+bug/1906490 Unassigned * neutron_tempest_plugin.api.admin.test_dhcp_agent_scheduler.DHCPAgentSchedulersTestJSON.test_dhcp_port_status_active is failing often - https://bugs.launchpad.net/neutron/+bug/1906654 https://review.opendev.org/c/openstack/neutron/+/755313 may help here, to confirm High * [ovn] Tempest tests failing while creating security group driver with KeyError: 'remote_address_group_id' - https://bugs.launchpad.net/neutron/+bug/1906500 Broke after a recent merge (and not tested in gates), fix at https://review.opendev.org/c/openstack/neutron/+/765101 * [OVN Octavia Provider] OVN provider not setting member offline correctly on create when admin_state_up=False - https://bugs.launchpad.net/neutron/+bug/1906568 Fix for tempest test in https://review.opendev.org/c/openstack/ovn-octavia-provider/+/765213 Medium * [segments] dnsmasq can't delete lease for instance due to mismatch between client ip and local addr - https://bugs.launchpad.net/neutron/+bug/1906406 With routed provider networks, DHCP agent at startup receives all leases, not only those from its segment Unassigned * MTU for networks with enabled vlan transparency should be 4 bytes lower - https://bugs.launchpad.net/neutron/+bug/1906318 Assigned to slaweq * [L2][scale] add a trunk size config option for bundle flow install - https://bugs.launchpad.net/neutron/+bug/1906487 New option to help with large set of flows installation: https://review.opendev.org/c/openstack/neutron/+/765072 Low * dns-integration api extension shouldn't be enabled by ovn_l3 plugin if there is no corresponding ML2 extension driver enabled - https://bugs.launchpad.net/neutron/+bug/1906311 Patch https://review.opendev.org/c/openstack/neutron/+/764831 * [L3] router HA port concurrently deleting - https://bugs.launchpad.net/neutron/+bug/1906375 Happened on rocky but code did not change since, patch in progress https://review.opendev.org/c/openstack/neutron/+/764913 * [L2] current binding port get type errors - https://bugs.launchpad.net/neutron/+bug/1906381 Patch in progress at https://review.opendev.org/c/openstack/neutron/+/764917 * Wrong error message when removing interface from router - https://bugs.launchpad.net/neutron/+bug/1906508 Suggested fix https://review.opendev.org/c/openstack/neutron/+/765129 -- Bernard Cafarelli 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Mon Dec 7 11:32:22 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 7 Dec 2020 12:32:22 +0100 Subject: [tripleo] Deployment update (node addition) after changing aggregate groups/zones In-Reply-To: References: Message-ID: anyone know, how to bypass aggregation group error? thank you. On Sat, 5 Dec 2020 at 18:08, Ruslanas Gžibovskis wrote: > Hi all, > > Any thoughts on this one? > > >> Hi all. >>> >>> After changing the host aggregate group and zone, I cannot run OpenStack >>> deploy command successfully again, even after updating deployment >>> environment files according to my setup. >>> >>> I receive error bigger one in [0]: >>> 2020-12-02 10:16:18.532419 | 52540000-0001-cf95-492f-0000000003ca | >>> FATAL | Nova: Manage aggregate and availability zone and add hosts to the >>> zone | undercloud | error={"changed": false, "msg": "ConflictException: >>> 409: Client Error for url: >>> http://10.120.129.199:8774/v2.1/os-aggregates/1/action, Cannot add host >>> to aggregate 1. Reason: One or more hosts already in availability zone(s) >>> ['Alpha01']."} >>> >>> I was following this link [1] instructions for "Configuring Availability >>> Zones (AZ)" steps to modify with OpenStack commands. And zone was created >>> successfully, but when I needed to add additional nodes, executed >>> deployment again with increased numbers it was complaining about an >>> incorrect aggregate zone, and now it is complaining about not empty zone >>> with error [0] mentioned above. I have added aggregate zones into >>> deployment files even role file... any ideas? >>> >>> Also, I think, this should be mentioned, that added it after install, >>> you lose the possibility to update using tripleo tool and you will need to >>> modify environment files with. >>> >>> >>> >>> [0] http://paste.openstack.org/show/800622/ >>> [1] >>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html#configuring-availability-zones-az >>> >>> >>> > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Dec 7 13:01:14 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 07 Dec 2020 13:01:14 +0000 Subject: [ci] Kernel panics in the guest vm In-Reply-To: <20201206094257.emilk2n7j6rgdcyi@p1.localdomain> References: <20201206094257.emilk2n7j6rgdcyi@p1.localdomain> Message-ID: <876bd728548d6b9cc35d8c80d0f6e5ec37bfc5c7.camel@redhat.com> On Sun, 2020-12-06 at 10:42 +0100, Slawek Kaplonski wrote: > Hi, > > Since some time I noticed that quite often some scenario jobs are failing due to > issue with SSH to the guest vm and when I was checking the reason of this SSH > failure, it seems that it's due to Kernel panic in the guest vm, like e.g. [1]: > > [ 0.000000] Console: colour VGA+ 80x25 > [ 0.000000] printk: console [tty1] enabled > [ 0.000000] printk: console [ttyS0] enabled > [ 0.000000] ACPI: Core revision 20190703 > [ 0.000000] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns > [ 0.000000] APIC: Switch to symmetric I/O mode setup > [ 0.000000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 > [ 0.000000] ..MP-BIOS bug: 8254 timer not connected to IO-APIC > [ 0.000000] ...trying to set up timer (IRQ0) through the 8259A ... > [ 0.000000] ..... (found apic 0 pin 2) ... > [ 0.000000] ....... failed. 
> [ 0.000000] ...trying to set up timer as Virtual Wire IRQ... > [ 0.000000] ..... failed. > [ 0.000000] ...trying to set up timer as ExtINT IRQ... > [ 0.000000] ..... failed :(. > [ 0.000000] Kernel panic - not syncing: IO-APIC + timer doesn't work! Boot with apic=debug and send a report. Then try booting with the 'noapic' option. > [ 0.000000] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.3.0-26-generic #28~18.04.1-Ubuntu > [ 0.000000] Hardware name: OpenStack Foundation OpenStack Nova, BIOS 1.13.0-1ubuntu1 04/01/2014 > [ 0.000000] Call Trace: > [ 0.000000] dump_stack+0x6d/0x95 > [ 0.000000] panic+0xfe/0x2d4 > [ 0.000000] check_timer+0x5e8/0x685 > [ 0.000000] ? radix_tree_lookup+0xd/0x10 > [ 0.000000] setup_IO_APIC+0x182/0x1ca > [ 0.000000] apic_intr_mode_init+0x1f5/0x1f8 > [ 0.000000] x86_late_time_init+0x1b/0x22 > [ 0.000000] start_kernel+0x4cb/0x58b > [ 0.000000] x86_64_start_reservations+0x24/0x26 > [ 0.000000] x86_64_start_kernel+0x74/0x77 > [ 0.000000] secondary_startup_64+0xa4/0xb0 > [ 0.000000] ---[ end Kernel panic - not syncing: IO-APIC + timer doesn't work! Boot with apic=debug and send a report. Then try booting with the 'noapic' option. ]--- > > Logstash [2] is telling me that it is problem not only in neutron related jobs. > Maybe someone of You was already trying to investigate such issue and maybe You > have some ideas what we can do with it? > In this specific example above [1], it was Cirros 0.5.1 image used. But I didn't > check if that is the case in all other cases TBH. this has been happening for months its not new. this might be an issue with the ci providers qemu verion or the kernel in the cirros image we could provide a way to disabel the io apic via nova likely via an image property which we would set on the cirros image via devstack. byond that i dont know what we can do other then move to something like alpine whihc is maintained instead of cirros rhel https://bugzilla.redhat.com/show_bug.cgi?id=221658 and ubuntu https://bugs.launchpad.net/ubuntu/+source/linux/+bug/52553 have both hit this issue in the past in the ~2.6 kernel timeframe cirros uses a ubuntu 18.04 kernel so i think its more likely to be https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1856387 that is in theory fix in the 4.15 kernel that 18.04 default too but cirros is using a 5.3 which i think is form the cloud arche that might not be patched. > > [1] https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c50/764921/1/gate/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid/c501b2c/testr_results.html > [2] http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Kernel%20panic%20-%20not%20syncing%3A%20IO-APIC%20%2B%20timer%20doesn't%20work!%20%20Boot%20with%20apic%3Ddebug%20and%20send%20a%20report.%20%20Then%20try%20booting%20with%20the%20'noapic'%20option.%5C%22 > From whayutin at redhat.com Mon Dec 7 13:38:22 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 7 Dec 2020 06:38:22 -0700 Subject: [tripleo][ci] CI jobs failures on RDO project In-Reply-To: References: Message-ID: FYI ---------- Forwarded message --------- From: Daniel Pawlik Date: Mon, Dec 7, 2020 at 6:13 AM Subject: [rdo-users] CI jobs failures on RDO project To: Hello, We are facing an outage in our cloud providers. That's why from time to time you can have problems validating CI jobs in RDO project. We are doing our best to fix the issues as soon as possible. 
Kindly regards, Dan _______________________________________________ users mailing list users at lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/users To unsubscribe: users-unsubscribe at lists.rdoproject.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Dec 7 14:04:00 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 7 Dec 2020 15:04:00 +0100 Subject: [ci] Kernel panics in the guest vm In-Reply-To: <876bd728548d6b9cc35d8c80d0f6e5ec37bfc5c7.camel@redhat.com> References: <20201206094257.emilk2n7j6rgdcyi@p1.localdomain> <876bd728548d6b9cc35d8c80d0f6e5ec37bfc5c7.camel@redhat.com> Message-ID: I wonder why we have not seen this in Kolla CIs. We always spawn one cirros instance. Could it be related to doing this concurrently? As in, some qemu/kvm component has an ugly race condition? -yoctozepto On Mon, Dec 7, 2020 at 2:02 PM Sean Mooney wrote: > > On Sun, 2020-12-06 at 10:42 +0100, Slawek Kaplonski wrote: > > Hi, > > > > Since some time I noticed that quite often some scenario jobs are failing due to > > issue with SSH to the guest vm and when I was checking the reason of this SSH > > failure, it seems that it's due to Kernel panic in the guest vm, like e.g. [1]: > > > > [ 0.000000] Console: colour VGA+ 80x25 > > [ 0.000000] printk: console [tty1] enabled > > [ 0.000000] printk: console [ttyS0] enabled > > [ 0.000000] ACPI: Core revision 20190703 > > [ 0.000000] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns > > [ 0.000000] APIC: Switch to symmetric I/O mode setup > > [ 0.000000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 > > [ 0.000000] ..MP-BIOS bug: 8254 timer not connected to IO-APIC > > [ 0.000000] ...trying to set up timer (IRQ0) through the 8259A ... > > [ 0.000000] ..... (found apic 0 pin 2) ... > > [ 0.000000] ....... failed. > > [ 0.000000] ...trying to set up timer as Virtual Wire IRQ... > > [ 0.000000] ..... failed. > > [ 0.000000] ...trying to set up timer as ExtINT IRQ... > > [ 0.000000] ..... failed :(. > > [ 0.000000] Kernel panic - not syncing: IO-APIC + timer doesn't work! Boot with apic=debug and send a report. Then try booting with the 'noapic' option. > > [ 0.000000] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.3.0-26-generic #28~18.04.1-Ubuntu > > [ 0.000000] Hardware name: OpenStack Foundation OpenStack Nova, BIOS 1.13.0-1ubuntu1 04/01/2014 > > [ 0.000000] Call Trace: > > [ 0.000000] dump_stack+0x6d/0x95 > > [ 0.000000] panic+0xfe/0x2d4 > > [ 0.000000] check_timer+0x5e8/0x685 > > [ 0.000000] ? radix_tree_lookup+0xd/0x10 > > [ 0.000000] setup_IO_APIC+0x182/0x1ca > > [ 0.000000] apic_intr_mode_init+0x1f5/0x1f8 > > [ 0.000000] x86_late_time_init+0x1b/0x22 > > [ 0.000000] start_kernel+0x4cb/0x58b > > [ 0.000000] x86_64_start_reservations+0x24/0x26 > > [ 0.000000] x86_64_start_kernel+0x74/0x77 > > [ 0.000000] secondary_startup_64+0xa4/0xb0 > > [ 0.000000] ---[ end Kernel panic - not syncing: IO-APIC + timer doesn't work! Boot with apic=debug and send a report. Then try booting with the 'noapic' option. ]--- > > > > Logstash [2] is telling me that it is problem not only in neutron related jobs. > > Maybe someone of You was already trying to investigate such issue and maybe You > > have some ideas what we can do with it? > > In this specific example above [1], it was Cirros 0.5.1 image used. But I didn't > > check if that is the case in all other cases TBH. 
> this has been happening for months its not new. > this might be an issue with the ci providers qemu verion or the kernel in the cirros image > we could provide a way to disabel the io apic via nova likely via an image property which we would set on the cirros image via devstack. > byond that i dont know what we can do other then move to something like alpine whihc is maintained instead of cirros > > rhel https://bugzilla.redhat.com/show_bug.cgi?id=221658 and ubuntu https://bugs.launchpad.net/ubuntu/+source/linux/+bug/52553 > have both hit this issue in the past in the ~2.6 kernel timeframe > > cirros uses a ubuntu 18.04 kernel so i think its more likely to be https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1856387 > > that is in theory fix in the 4.15 kernel that 18.04 default too but cirros is using a 5.3 which i think is form the cloud arche that might not be > patched. > > > > > [1] https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c50/764921/1/gate/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid/c501b2c/testr_results.html > > [2] http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Kernel%20panic%20-%20not%20syncing%3A%20IO-APIC%20%2B%20timer%20doesn't%20work!%20%20Boot%20with%20apic%3Ddebug%20and%20send%20a%20report.%20%20Then%20try%20booting%20with%20the%20'noapic'%20option.%5C%22 > > > > > From dpawlik at redhat.com Mon Dec 7 14:26:37 2020 From: dpawlik at redhat.com (Daniel Pawlik) Date: Mon, 7 Dec 2020 15:26:37 +0100 Subject: [tripleo] CI jobs failures on RDO project Message-ID: Hello, We are facing an outage in our cloud providers. That's why from time to time you can have problems validating CI jobs in RDO project. We are doing our best to fix the issues as soon as possible. Kindly regards, Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Dec 7 15:35:11 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 07 Dec 2020 15:35:11 +0000 Subject: [ci] Kernel panics in the guest vm In-Reply-To: References: <20201206094257.emilk2n7j6rgdcyi@p1.localdomain> <876bd728548d6b9cc35d8c80d0f6e5ec37bfc5c7.camel@redhat.com> Message-ID: On Mon, 2020-12-07 at 15:04 +0100, Radosław Piliszek wrote: > I wonder why we have not seen this in Kolla CIs. > We always spawn one cirros instance. > Could it be related to doing this concurrently? > As in, some qemu/kvm component has an ugly race condition? it is an intermitent failure. gennerally one vm out of the entire tempest run will hit this and the rest will be fine i think this is just a guest kernel issue. concurrancy may be a factor as might load but if you are only spawning one vm i would guess its just much less likely to fail in this manner. > > -yoctozepto > > On Mon, Dec 7, 2020 at 2:02 PM Sean Mooney wrote: > > > > On Sun, 2020-12-06 at 10:42 +0100, Slawek Kaplonski wrote: > > > Hi, > > > > > > Since some time I noticed that quite often some scenario jobs are failing due to > > > issue with SSH to the guest vm and when I was checking the reason of this SSH > > > failure, it seems that it's due to Kernel panic in the guest vm, like e.g. 
[1]: > > > > > > [ 0.000000] Console: colour VGA+ 80x25 > > > [ 0.000000] printk: console [tty1] enabled > > > [ 0.000000] printk: console [ttyS0] enabled > > > [ 0.000000] ACPI: Core revision 20190703 > > > [ 0.000000] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns > > > [ 0.000000] APIC: Switch to symmetric I/O mode setup > > > [ 0.000000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 > > > [ 0.000000] ..MP-BIOS bug: 8254 timer not connected to IO-APIC > > > [ 0.000000] ...trying to set up timer (IRQ0) through the 8259A ... > > > [ 0.000000] ..... (found apic 0 pin 2) ... > > > [ 0.000000] ....... failed. > > > [ 0.000000] ...trying to set up timer as Virtual Wire IRQ... > > > [ 0.000000] ..... failed. > > > [ 0.000000] ...trying to set up timer as ExtINT IRQ... > > > [ 0.000000] ..... failed :(. > > > [ 0.000000] Kernel panic - not syncing: IO-APIC + timer doesn't work! Boot with apic=debug and send a report. Then try booting with the 'noapic' option. > > > [ 0.000000] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.3.0-26-generic #28~18.04.1-Ubuntu > > > [ 0.000000] Hardware name: OpenStack Foundation OpenStack Nova, BIOS 1.13.0-1ubuntu1 04/01/2014 > > > [ 0.000000] Call Trace: > > > [ 0.000000] dump_stack+0x6d/0x95 > > > [ 0.000000] panic+0xfe/0x2d4 > > > [ 0.000000] check_timer+0x5e8/0x685 > > > [ 0.000000] ? radix_tree_lookup+0xd/0x10 > > > [ 0.000000] setup_IO_APIC+0x182/0x1ca > > > [ 0.000000] apic_intr_mode_init+0x1f5/0x1f8 > > > [ 0.000000] x86_late_time_init+0x1b/0x22 > > > [ 0.000000] start_kernel+0x4cb/0x58b > > > [ 0.000000] x86_64_start_reservations+0x24/0x26 > > > [ 0.000000] x86_64_start_kernel+0x74/0x77 > > > [ 0.000000] secondary_startup_64+0xa4/0xb0 > > > [ 0.000000] ---[ end Kernel panic - not syncing: IO-APIC + timer doesn't work! Boot with apic=debug and send a report. Then try booting with the 'noapic' option. ]--- > > > > > > Logstash [2] is telling me that it is problem not only in neutron related jobs. > > > Maybe someone of You was already trying to investigate such issue and maybe You > > > have some ideas what we can do with it? > > > In this specific example above [1], it was Cirros 0.5.1 image used. But I didn't > > > check if that is the case in all other cases TBH. > > this has been happening for months its not new. > > this might be an issue with the ci providers qemu verion or the kernel in the cirros image > > we could provide a way to disabel the io apic via nova likely via an image property which we would set on the cirros image via devstack. > > byond that i dont know what we can do other then move to something like alpine whihc is maintained instead of cirros > > > > rhel https://bugzilla.redhat.com/show_bug.cgi?id=221658 and ubuntu https://bugs.launchpad.net/ubuntu/+source/linux/+bug/52553 > > have both hit this issue in the past in the ~2.6 kernel timeframe > > > > cirros uses a ubuntu 18.04 kernel so i think its more likely to be https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1856387 > > > > that is in theory fix in the 4.15 kernel that 18.04 default too but cirros is using a 5.3 which i think is form the cloud arche that might not be > > patched. 
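For anyone who wants to confirm that a given failed guest hit the same signature without going through logstash, the guest console log is usually enough. A rough example (the server name is just a placeholder):

  $ openstack console log show test-vm | grep -B2 -A2 "Kernel panic - not syncing: IO-APIC"

As far as I can tell there is no existing nova image property today to turn the IO-APIC off, so the 'noapic' hint from the panic message can only be tried by booting the image by hand with a modified kernel command line.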
> > > > > > > > [1] https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c50/764921/1/gate/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid/c501b2c/testr_results.html > > > [2] http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Kernel%20panic%20-%20not%20syncing%3A%20IO-APIC%20%2B%20timer%20doesn't%20work!%20%20Boot%20with%20apic%3Ddebug%20and%20send%20a%20report.%20%20Then%20try%20booting%20with%20the%20'noapic'%20option.%5C%22 > > > > > > > > > > From amy at demarco.com Mon Dec 7 18:04:53 2020 From: amy at demarco.com (Amy Marrich) Date: Mon, 7 Dec 2020 12:04:53 -0600 Subject: [Diversity] Diversity and Inclusion WG Meeting Change for January Message-ID: Due to the end of year holidays and the fact some people may still be out we will be moving our January meeting. As our backup meeting date is on Martin Luther King Day, the WG has selected January 11 to meet in IRC. As always I'll send out a reminder in the days leading up to the meeting. Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.parquet at gandi.net Mon Dec 7 18:24:49 2020 From: nicolas.parquet at gandi.net (Nicolas Parquet) Date: Mon, 7 Dec 2020 19:24:49 +0100 Subject: [ironic][masakari] Should masakari rely on Ironic to power fence failing hosts? Message-ID: <1d78ede5-b9ea-659a-87d0-5bf6751df502@gandi.net> Hello Ironic people! I am coming to you with a question from one of the points we discussed during Masakari's PTG [1]. When a host fails and Masakari gets notified, there are some cases in which we would like Masakari to power fence the host before evacuating its instances to other hosts. That way it is safe to start the instances from the failed host on another one. Given that Ironic has the responsability of managing hosts, we would be interested to know how Masakari should do that, in your view? We discussed implementing IPMI power on / off directly in masakari, but maybe an integration where Masakari calls some Ironic API would be better? Or maybe we should implement it relying on the pyghmi library, or any other library? A positive side of relying on Ironic would be that Masakari does not have to store IPMI information about hosts; however that would create a dependency between the 2 projects as some deployments might use masakari without managing their hosts through Ironic. Any insight is welcome! Regards, Nicolas [1] https://etherpad.opendev.org/p/masakari-wallaby-vptg -- Nicolas Parquet Gandi nicolas.parquet at gandi.net From owalsh at redhat.com Mon Dec 7 19:02:10 2020 From: owalsh at redhat.com (Oliver Walsh) Date: Mon, 7 Dec 2020 19:02:10 +0000 Subject: [rdo-users] [tripleo] Deployment update (node addition) after changing aggregate groups/zones In-Reply-To: References: Message-ID: Hi, You will need to manually remove the hosts from the old zone ("Alpha01") before adding them to the new zone. A host can only belong to one AZ. Thanks, Ollie On Mon, 7 Dec 2020 at 11:32, Ruslanas Gžibovskis wrote: > anyone know, how to bypass aggregation group error? thank you. > > On Sat, 5 Dec 2020 at 18:08, Ruslanas Gžibovskis wrote: > >> Hi all, >> >> Any thoughts on this one? >> >> >>> Hi all. >>>> >>>> After changing the host aggregate group and zone, I cannot run >>>> OpenStack deploy command successfully again, even after updating deployment >>>> environment files according to my setup. 
>>>> >>>> I receive error bigger one in [0]: >>>> 2020-12-02 10:16:18.532419 | 52540000-0001-cf95-492f-0000000003ca | >>>> FATAL | Nova: Manage aggregate and availability zone and add hosts to the >>>> zone | undercloud | error={"changed": false, "msg": "ConflictException: >>>> 409: Client Error for url: >>>> http://10.120.129.199:8774/v2.1/os-aggregates/1/action, Cannot add >>>> host to aggregate 1. Reason: One or more hosts already in availability >>>> zone(s) ['Alpha01']."} >>>> >>>> I was following this link [1] instructions for "Configuring >>>> Availability Zones (AZ)" steps to modify with OpenStack commands. And zone >>>> was created successfully, but when I needed to add additional nodes, >>>> executed deployment again with increased numbers it was complaining about >>>> an incorrect aggregate zone, and now it is complaining about not empty zone >>>> with error [0] mentioned above. I have added aggregate zones into >>>> deployment files even role file... any ideas? >>>> >>>> Also, I think, this should be mentioned, that added it after install, >>>> you lose the possibility to update using tripleo tool and you will need to >>>> modify environment files with. >>>> >>>> >>>> >>>> [0] http://paste.openstack.org/show/800622/ >>>> [1] >>>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html#configuring-availability-zones-az >>>> >>>> >>>> >> > > -- > Ruslanas Gžibovskis > +370 6030 7030 > _______________________________________________ > users mailing list > users at lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/users > > To unsubscribe: users-unsubscribe at lists.rdoproject.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Mon Dec 7 19:05:16 2020 From: helena at openstack.org (helena at openstack.org) Date: Mon, 7 Dec 2020 14:05:16 -0500 (EST) Subject: [ptl] Victoria Release Community Meeting Videos In-Reply-To: References: <1606863441.140329242@apps.rackspace.com> Message-ID: <1607367916.682432235@apps.rackspace.com> Hi Radoslaw and others, Sorry about that! The upload got overwritten, however, they are now in the project navigator ([ Glance ]( https://www.openstack.org/software/releases/victoria/components/glance ), [ Cinder ]( https://www.openstack.org/software/releases/victoria/components/cinder ), [ Manila ]( https://www.openstack.org/software/releases/victoria/components/manila ), [ Nova ]( https://www.openstack.org/software/releases/victoria/components/nova ), [ Masakri ]( https://www.openstack.org/software/releases/victoria/components/masakari ), [ Neutron ]( https://www.openstack.org/software/releases/victoria/components/neutron ))! Again, thank you to all the PTLs who presented and all the community members who attended the Victoria Release Community Meeting! If you are a PTL interested in still creating a video to add to [ YouTube ]( https://www.youtube.com/playlist?list=PLKqaoAnDyfgpYADSiOfIVwgKb5zbL0GJE ) and the Project Navigator, you may do so and email it to me. 
Cheers, Helena -----Original Message----- From: "Radosław Piliszek" Sent: Wednesday, December 2, 2020 2:26am To: "helena at openstack.org" Cc: "OpenStack Discuss" Subject: Re: [ptl] Victoria Release Community Meeting Videos On Wed, Dec 2, 2020 at 12:00 AM helena at openstack.org wrote: > > Hi Everyone, > Hi Helena, > > As mentioned before we uploaded the presentations from the community meeting to the project navigator for each project that was presented on (Glance, Cinder, Manila, Nova, Masakri, Neutron). You can also find the full community meeting and all the individual videos on the “Community Meetings” playlist on YouTube. > Thank you, I can see them on YouTube. However, I am unable to find them in the project navigator. Where should I be looking? Kind regards, -yoctozepto -------------- next part -------------- An HTML attachment was scrubbed... URL: From slav.vdimov at gmail.com Sun Dec 6 23:34:13 2020 From: slav.vdimov at gmail.com (Stanislav Dimov) Date: Sun, 6 Dec 2020 23:34:13 +0000 Subject: [security] Auto-renewing trusted user certificates for Openstack services using the ACME protocol Message-ID: Hello all, I have recently started exploring Openstack with the goal of using it to replace my current private cloud infrastructure. I have been reading the docs about security and I noticed that there isn't really a (straight forward) way of securing Openstack services communication with user provided, trusted, SSL certificates. I believe this should not be the case. My current infrastructure uses a privately hosted CA, that supports the ACME protocol. All my hosts submit CSRs to it, and respond to the ACME challenges in order to get it signed. All certificates are short-lived (1h), but never expire thanks to the ACME automation. I have achieved this through an open source project called Smallstep Step CA and Smallstep Step CLI tools. It is dead easy to set up. All of the tools needed to achieve this can also be containerized, for simplicity. Thus, I propose the following solution (keep in mind I am not an Openstack developer): Addition of an ACME client, with a configurable ACME URL, to all (or as many as possible) Openstack services, that can submit CSRs to an ACME server (basically almost identical to the already implemented Openstack Let's Encrypt functionality for public endpoints). Also, optionally, the creation of a new Openstack service, using the Smallstep Step CA, which can sign the CSRs, and thus eliminate the need for a manual setup of a separate Smallstep CA host. I am providing some links to the Smallstep repositories and documentation for easier access: https://github.com/smallstep/certificates https://github.com/smallstep/cli https://github.com/smallstep/hello-mtls https://smallstep.com/docs/ Thank you for your time and consideration. Kind regards, Stanislav -------------- next part -------------- An HTML attachment was scrubbed... URL: From velugubantla.praveen at Ltts.com Mon Dec 7 07:28:55 2020 From: velugubantla.praveen at Ltts.com (Velugubantla Praveen) Date: Mon, 7 Dec 2020 07:28:55 +0000 Subject: [Neutron] How to enable the SCTP protal support in security-groups for Openstack-Rocky release Message-ID: Hi Team, In OPENSTACK security groups how to add the SMTP protocol rules, can some-one please point me to the right documentation of how to add those required configurations to enable SMTP and other protocols to my openstack-rocky release neutron setup. Can we add some additional protocol's above the default provided security rules? 
Any help or suggestion is highly appreciated. Thanks in advance. Regards, ________________________________________________________ Velugubantla Praveen Engineer - Non-Media Solutions Communications & Media L&T TECHNOLOGY SERVICES LIMITED L3 Building, Manyata Embassy Business Park, Nagawara Hobli, Bengaluru-560045 ________________________________________________________ Mobile: +91 9154111420 www.LTTS.com [cid:e997ce03-4d35-4f33-ad56-ad4facc1f804] L&T Technology Services Ltd www.LTTS.com L&T Technology Services Limited (LTTS) is committed to safeguard your data privacy. For more information to view our commitment towards data privacy under GDPR, please visit the privacy policy on our website www.Ltts.com. This Email may contain confidential or privileged information for the intended recipient (s). If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Outlook-bdu41mgs.jpg Type: image/jpeg Size: 3655 bytes Desc: Outlook-bdu41mgs.jpg URL: From elanchezian.settu at gigamon.com Mon Dec 7 13:02:04 2020 From: elanchezian.settu at gigamon.com (Elanchezian Settu) Date: Mon, 7 Dec 2020 13:02:04 +0000 Subject: [networking-ovs-dpdk] Message-ID: Hi, Tried ovs-dpdk installation in below setup and noticing it fails with OVS-DPDK initialization timeout problem. Herewith I had attached the local.conf file, which I had used for my setup bring up. Server: Ubuntu 18.04 latest OpenStack: Stein OVS-DPDK: Master byte-compiling /usr/local/lib/python3.6/dist-packages/networking_ovs_dpdk/agent/ovs_dpdk_firewall.py to ovs_dpdk_firewall.cpython-36.pyc running install_egg_info Copying networking_ovs_dpdk.egg-info to /usr/local/lib/python3.6/dist-packages/networking_ovs_dpdk-2015.1.1.dev275-py3.6.egg-info running install_scripts /usr/local/lib/python3.6/dist-packages/pbr/packaging.py:410: EasyInstallDeprecationWarning: Use get_header header = easy_install.get_script_header("", executable, is_wininst) ++/opt/stack/networking-ovs-dpdk/devstack/plugin.sh:source:35 popd /opt/stack/ovs ++/opt/stack/networking-ovs-dpdk/devstack/plugin.sh:source:36 start_ovs_dpdk ++/opt/stack/networking-ovs-dpdk/devstack/libs/ovs-dpdk:start_ovs_dpdk:19 '[' -e /etc/init.d/ovs-dpdk ']' ++/opt/stack/networking-ovs-dpdk/devstack/libs/ovs-dpdk:start_ovs_dpdk:20 sudo service ovs-dpdk start Job for ovs-dpdk.service failed because a timeout was exceeded. See "systemctl status ovs-dpdk.service" and "journalctl -xe" for details. +/opt/stack/networking-ovs-dpdk/devstack/libs/ovs-dpdk:start_ovs_dpdk:1 exit_trap +./stack.sh:exit_trap:528 local r=1 ++./stack.sh:exit_trap:529 jobs -p +./stack.sh:exit_trap:529 jobs= +./stack.sh:exit_trap:532 [[ -n '' ]] +./stack.sh:exit_trap:538 '[' -f '' ']' +./stack.sh:exit_trap:543 kill_spinner +./stack.sh:kill_spinner:438 '[' '!' 
-z '' ']' +./stack.sh:exit_trap:545 [[ 1 -ne 0 ]] +./stack.sh:exit_trap:546 echo 'Error on exit' Error on exit +./stack.sh:exit_trap:548 type -p generate-subunit +./stack.sh:exit_trap:549 generate-subunit 1607341696 1042 fail +./stack.sh:exit_trap:551 [[ -z /opt/stack/logs/screen ]] +./stack.sh:exit_trap:554 /home/ubuntu/devstack/tools/worlddump.py -d /opt/stack/logs/screen Traceback (most recent call last): File "/home/ubuntu/devstack/tools/worlddump.py", line 255, in sys.exit(main()) File "/home/ubuntu/devstack/tools/worlddump.py", line 239, in main sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0) File "/usr/lib/python3.6/os.py", line 1017, in fdopen return io.open(fd, *args, **kwargs) ValueError: can't have unbuffered text I/O Exception ignored in: <_io.TextIOWrapper name='' mode='w' encoding='UTF-8'> OSError: [Errno 9] Bad file descriptor ubuntu at ubuntu:~/devstack$ ubuntu at ubuntu:~/devstack$ systemctl status ovs-dpdk.service ● ovs-dpdk.service - LSB: Open vSwitch DPDK Loaded: loaded (/etc/init.d/ovs-dpdk; generated) Active: failed (Result: timeout) since Mon 2020-12-07 12:05:38 UTC; 15s ago Docs: man:systemd-sysv-generator(8) Process: 21377 ExecStart=/etc/init.d/ovs-dpdk start (code=killed, signal=TERM) Tasks: 2 (limit: 9830) CGroup: /system.slice/ovs-dpdk.service ├─21984 sudo ovs-vsctl --may-exist add-port br-enp24s0f1 enp24s0f1 -- set Interface enp24s0f1 type=dpdk options:dpdk-devargs=0000:18:00.1 other_config:pci_address=0000:18:00.1 other_config:driver=igb_uio other_config:p └─21985 ovs-vsctl --may-exist add-port br-enp24s0f1 enp24s0f1 -- set Interface enp24s0f1 type=dpdk options:dpdk-devargs=0000:18:00.1 other_config:pci_address=0000:18:00.1 other_config:driver=igb_uio other_config:previo Dec 07 12:00:41 ubuntu sudo[21775]: root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/opt/stack/DPDK-v18.11/usertools/dpdk-devbind.py -b igb_uio 0000:5e:00.1 Dec 07 12:00:41 ubuntu sudo[21775]: pam_unix(sudo:session): session opened for user root by (uid=0) Dec 07 12:00:42 ubuntu sudo[21775]: pam_unix(sudo:session): session closed for user root Dec 07 12:00:42 ubuntu ovs-dpdk[21377]: add-port command: ovs-vsctl --may-exist add-port br-enp24s0f1 enp24s0f1 -- set Interface enp24s0f1 type=dpdk options:dpdk-devargs=0000:18:00.1 other_config:pci_address=0000:18:00.1 Dec 07 12:00:42 ubuntu sudo[21984]: root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/ovs-vsctl --may-exist add-port br-enp24s0f1 enp24s0f1 -- set Interface enp24s0f1 type=dpdk options:dpdk-devargs=0000:18:00.1 other_ Dec 07 12:00:42 ubuntu sudo[21984]: pam_unix(sudo:session): session opened for user root by (uid=0) Dec 07 12:00:42 ubuntu ovs-vsctl[21985]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --may-exist add-port br-enp24s0f1 enp24s0f1 -- set Interface enp24s0f1 type=dpdk options:dpdk-devargs=0000:18:00.1 other_config:pci_address=0000:18 Dec 07 12:05:38 ubuntu systemd[1]: ovs-dpdk.service: Start operation timed out. Terminating. Dec 07 12:05:38 ubuntu systemd[1]: ovs-dpdk.service: Failed with result 'timeout'. Dec 07 12:05:38 ubuntu systemd[1]: Failed to start LSB: Open vSwitch DPDK. ubuntu at ubuntu:~/devstack$ Thanks & Regards, Elan This message may contain confidential and privileged information. If it has been sent to you in error, please reply to advise the sender of the error and then immediately delete it. If you are not the intended recipient, do not read, copy, disclose or otherwise use this message. The sender disclaims any liability for such unauthorized use. 
NOTE that all incoming emails sent to Gigamon email accounts will be archived and may be scanned by us and/or by external service providers to detect and prevent threats to our systems, investigate illegal or inappropriate behavior, and/or eliminate unsolicited promotional emails (“spam”). -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyunwoo18 at gmail.com Mon Dec 7 16:11:19 2020 From: hyunwoo18 at gmail.com (Hyunwoo KIM) Date: Mon, 7 Dec 2020 10:11:19 -0600 Subject: [ops] In a compute node, outbound traffic blocked when a VM running Message-ID: Summary of the problem This problem is in a compute node, not in a VM. Once a VM is running in a compute node, all outbound connections in a compute node (not VM) are blocked. For example: # telnet www.google.com 80 Trying 172.217.5.4... Technical Details: We only use provider network. These 4 services are running in each compute node: - neutron-linuxbridge-agent.service - neutron-dhcp-agent.service - neutron-metadata-agent.service - openstack-nova-compute.service Detailed description of the problem: In a compute node, the following is the result of iptables -L when no VM is running: Chain INPUT (policy ACCEPT) target prot opt source destination neutron-linuxbri-INPUT all -- anywhere anywhere And our usual rules Chain FORWARD (policy ACCEPT) target prot opt source destination neutron-filter-top all -- anywhere anywhere neutron-linuxbri-FORWARD all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination neutron-filter-top all -- anywhere anywhere neutron-linuxbri-OUTPUT all -- anywhere anywhere Chain neutron-filter-top (2 references) target prot opt source destination neutron-linuxbri-local all -- anywhere anywhere Chain neutron-linuxbri-FORWARD (1 references) target prot opt source destination ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out tapb --physdev-is-bridged ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tapb --physdev-is-bridged ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out tap9 --physdev-is-bridged ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tap9 --physdev-is-bridged Chain neutron-linuxbri-INPUT (1 references) target prot opt source destination Chain neutron-linuxbri-OUTPUT (1 references) target prot opt source destination Chain neutron-linuxbri-local (1 references) target prot opt source destination Chain neutron-linuxbri-sg-chain (0 references) target prot opt source destination ACCEPT all -- anywhere anywhere Chain neutron-linuxbri-sg-fallback (0 references) target prot opt source destination DROP all -- anywhere anywhere In the same compute node, when a VM is running, the following is the result of iptables -L: Chain INPUT (policy ACCEPT) target prot opt source destination neutron-linuxbri-INPUT all -- anywhere anywhere And our usual rules Chain FORWARD (policy ACCEPT) target prot opt source destination neutron-filter-top all -- anywhere anywhere neutron-linuxbri-FORWARD all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination neutron-filter-top all -- anywhere anywhere neutron-linuxbri-OUTPUT all -- anywhere anywhere Chain neutron-filter-top (2 references) target prot opt source destination neutron-linuxbri-local all -- anywhere anywhere Chain neutron-linuxbri-FORWARD (1 references) target prot opt source destination neutron-linuxbri-sg-chain all -- anywhere anywhere PHYSDEV match --physdev-out tap8 --physdev-is-bridged neutron-linuxbri-sg-chain all -- anywhere anywhere PHYSDEV 
match --physdev-in tap8 --physdev-is-bridged ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out tapb --physdev-is-bridged ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tapb --physdev-is-bridged ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out tap9 --physdev-is-bridged ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tap9 --physdev-is-bridged Chain neutron-linuxbri-INPUT (1 references) target prot opt source destination neutron-linuxbri-o8 all -- anywhere anywhere PHYSDEV match --physdev-in tap8 --physdev-is-bridged Chain neutron-linuxbri-OUTPUT (1 references) target prot opt source destination Chain neutron-linuxbri-i8 (1 references) target prot opt source destination RETURN all -- anywhere anywhere state RELATED,ESTABLISHED RETURN udp -- anywhere fermicloud248.fnal.gov udp spt:bootps dpt:bootpc RETURN udp -- anywhere 255.255.255.255 udp spt:bootps dpt:bootpc RETURN icmp -- anywhere anywhere RETURN tcp -- fermilab-net.fnal.gov/16 anywhere tcp dpt:ssh RETURN all -- anywhere anywhere match-set NIPv41d69ba3c-68e3-414f-8f1b- src DROP all -- anywhere anywhere state INVALID neutron-linuxbri-sg-fallback all -- anywhere anywhere Chain neutron-linuxbri-local (1 references) target prot opt source destination Chain neutron-linuxbri-o8 (2 references) target prot opt source destination RETURN udp -- default 255.255.255.255 udp spt:bootpc dpt:bootps neutron-linuxbri-s8 all -- anywhere anywhere RETURN udp -- anywhere anywhere udp spt:bootpc dpt:bootps DROP udp -- anywhere anywhere udp spt:bootps dpt:bootpc RETURN all -- anywhere anywhere state RELATED,ESTABLISHED RETURN tcp -- anywhere anywhere tcp dpt:https RETURN all -- anywhere anywhere RETURN tcp -- anywhere anywhere tcp dpt:http DROP all -- anywhere anywhere state INVALID neutron-linuxbri-sg-fallback all -- anywhere anywhere Chain neutron-linuxbri-s8 (1 references) target prot opt source destination RETURN all -- fermicloud248.fnal.gov anywhere MAC FA:16: DROP all -- anywhere anywhere Chain neutron-linuxbri-sg-chain (2 references) target prot opt source destination neutron-linuxbri-i8 all -- anywhere anywhere PHYSDEV match --physdev-out tap8 --physdev-is-bridged neutron-linuxbri-o8 all -- anywhere anywhere PHYSDEV match --physdev-in tap8 --physdev-is-bridged ACCEPT all -- anywhere anywhere Chain neutron-linuxbri-sg-fallback (2 references) target prot opt source destination DROP all -- anywhere anywhere Let me summarize the differences from when no VM running: Chain INPUT : no change Chain FORWARD: no change Chain OUTPUT : no change Chain neutron-filter-top: no change Chain neutron-linuxbri-FORWARD: Two new rules are added neutron-linuxbri-sg-chain neutron-linuxbri-sg-chain Chain neutron-linuxbri-INPUT: One new rule is added neutron-linuxbri-o8ae816b0-f Chain neutron-linuxbri-sg-chain: Two new rules are added neutron-linuxbri-i8 neutron-linuxbri-o8 Chain neutron-linuxbri-OUTPUT: no change Chain neutron-linuxbri-local: no change Chain neutron-linuxbri-sg-fallback: no change Chain neutron-linuxbri-i8: A new chain with multiple rules Chain neutron-linuxbri-o8: A new chain with multiple rules Chain neutron-linuxbri-s8: A new chain with multiple rules But now a problem arises here: All outbound connections are blocked (remember this is in a compute node, not VM): For example: # telnet www.google.com 80 Trying 172.217.5.4... When there isn't any VM running, We don't see this problem. I was wondering if I needed to create a new security group rule for the port 80 (for example) but that didn't solve the issue. 
Any technical advice will be appreciated, Thanks, Hyunwoo -------------- next part -------------- An HTML attachment was scrubbed... URL: From owalsh at redhat.com Mon Dec 7 19:27:20 2020 From: owalsh at redhat.com (Oliver Walsh) Date: Mon, 7 Dec 2020 19:27:20 +0000 Subject: [TripleO] how to make that inspection IP is given only to known hosts In-Reply-To: References: Message-ID: Hi, The provisioning network needs to be isolated, typically by using VLANs on the switch: https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/environments/baremetal.html#networking In general, you can only have one DHCP server on an L2 network (ignoring high-availability DHCP setups). Thanks, Ollie On Fri, 4 Dec 2020 at 19:34, Ruslanas Gžibovskis wrote: > Hi all, > > I have a situation, when in my network, I have loads of equipment, which I > do not control. and Inspection range gets occupied quite fast. > > and in TCP dump I get such messages: > DHCP-Message Option 53, length 1: NACK > Server-ID Option 54, length 4: DHCPD-IP > MSG Option 56, length 21: "address not available" > > I have disabled: enabled_node_discovery = false > > Anything else? > > maybe additional environment options for undercloud I could provide? > > Than kyou in advance, have a good $day_time > -- > Ruslanas Gžibovskis > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haleyb.dev at gmail.com Mon Dec 7 19:53:42 2020 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 7 Dec 2020 14:53:42 -0500 Subject: [Neutron] How to enable the SCTP protal support in security-groups for Openstack-Rocky release In-Reply-To: References: Message-ID: <75399943-504d-22b3-757c-08fce2ee4789@gmail.com> Hi, On 12/7/20 2:28 AM, Velugubantla Praveen wrote: > Hi Team, > > In OPENSTACK security groups how to add the SMTP protocol rules, can > some-one please point me to the right documentation of how to add those > required configurations to enable SMTP and other protocols to my > openstack-rocky release neutron setup. > > Can we add some additional protocol's above the default provided > security rules? > > Any help or suggestion is highly appreciated. Thanks in advance. So the email subject says SCTP, but the body says SMTP. Assuming you meant SCTP it should be as simple as this to allow all SCTP in: $ openstack security group rule create --ingress --protocol sctp default +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | created_at | 2020-12-07T19:45:59Z | | description | | | direction | ingress | | ether_type | IPv4 | | id | 5d2183a0-8779-44e2-a170-8cfe21352606 | | name | None | | port_range_max | None | | port_range_min | None | | project_id | 573ef6e9362c43599e1faf26029de056 | | protocol | sctp | | remote_group_id | None | | remote_ip_prefix | 0.0.0.0/0 | | revision_number | 0 | | security_group_id | 63cc50d6-ace7-4575-9224-458cd8751228 | | tags | [] | | updated_at | 2020-12-07T19:45:59Z | +-------------------+--------------------------------------+ Of course you can also specify ports, etc. If that gives an error you can try with the protocol number, "--protocol 132". 
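For example, the numeric form would look like this (same "default" group as above, adjust to your own group):

$ openstack security group rule create --ingress --protocol 132 default

An egress rule can be created the same way with --egress, though most deployments already allow all egress traffic through the default group's built-in rules.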
-Brian > > Regards, > > ________________________________________________________ > > *Velugubantla Praveen * > > Engineer - Non-Media Solutions > > *Communications & Media* > > *L&T TECHNOLOGY SERVICES LIMITED* > > L3 Building, Manyata Embassy Business Park, > Nagawara Hobli,Bengaluru-560045 > > ________________________________________________________ From gmann at ghanshyammann.com Mon Dec 7 20:30:46 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 07 Dec 2020 14:30:46 -0600 Subject: [tc][all][qinling] Retiring the Qinling project In-Reply-To: <175bd48122d.ee0d0ddb120064.3093467998927637704@ghanshyammann.com> References: <175b3963a49.115662abe16508.6386586116324619581@ghanshyammann.com> <3dd69b5e9a3ac32c4f6f7bae633fa509a6230325.camel@redhat.com> <175bd48122d.ee0d0ddb120064.3093467998927637704@ghanshyammann.com> Message-ID: <1763ee6415e.b436cd1883905.5875166078655575151@ghanshyammann.com> ---- On Thu, 12 Nov 2020 10:26:58 -0600 Ghanshyam Mann wrote ---- > ---- On Thu, 12 Nov 2020 09:23:41 -0600 Stephen Finucane wrote ---- > > On Tue, 2020-11-10 at 13:16 -0600, Ghanshyam Mann wrote: > > Hello Everyone, > > > > As you know, Qinling is a leaderless project for the Wallaby cycle, > > which means there is no PTL > > candidate to lead it in the Wallaby cycle. 'No PTL' and no liaisons for > > DPL model is one of the criteria > > which triggers TC to start checking the health, maintainers of the > > project for dropping the project > > from OpenStack Governance[1]. > > > > TC discussed the leaderless project in PTG[2] and checked if the > > project has maintainers and what > > activities are done in the Victoria development cycle. It seems no > > functional changes in Qinling repos > > except few gate fixes or community goal commits[3]. > > > > Based on all these checks and no maintainer for Qinling, TC decided to > > drop this project from OpenStack > > governance in the Wallaby cycle. Ref: Mandatory Repository Retirement > > resolution [4] and the detailed process > > is in the project guide docs [5]. > > > > If your organization product/customer use/rely on this project then > > this is the right time to step forward to > > maintain it otherwise from the Wallaby cycle, Qinling will move out of > > OpenStack governance by keeping > > their repo under OpenStack namespace with an empty master branch with > > 'Not Maintained' message in README. > > If someone from old or new maintainers shows interest to continue its > > development then it can be re-added > > to OpenStack governance. > > > > With that thanks to Qinling contributors and PTLs (especially lxkong ) > > for maintaining this project. > > > > No comments on the actual retirement, but don't forget that someone > > from the Foundation will need to update the OpenStack Map available at > > https://www.openstack.org/openstack-map and included on > > https://www.openstack.org/software/. > > Yes, that is part of removing the dependencies/usage of the retiring projects step. Updates: Most of the retirement patches are merged now (next is to merge the project-config and governance patch), please merge all the dependencies removal patches to avoid any break in respective project gate/features. 
https://review.opendev.org/q/topic:%22retire-qinling%22+(status:open%20OR%20status:merged) -gmann > > -gmann > > > > > Stephen > > > > [1] > > https://governance.openstack.org/tc/reference/dropping-projects.html > > [2] https://etherpad.opendev.org/p/tc-wallaby-ptg > > [3] > > https://www.stackalytics.com/?release=victoria&module=qinling-group&metric=commits > > [4] > > https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html > > [5] > > https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository > > > > -gmann > > > > > > > > > > > > From gmann at ghanshyammann.com Mon Dec 7 20:31:29 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 07 Dec 2020 14:31:29 -0600 Subject: [tc][all][searchlight ] Retiring the Searchlight project In-Reply-To: <8554915e-0ee1-7d69-1192-1a043b71f742@debian.org> References: <175b3960913.120cf390316504.5945799248975474230@ghanshyammann.com> <297cf22b-2172-cb51-f282-4c0e674a9a07@debian.org> <175b44e80fd.f4c0ee8420790.4317890968302424842@ghanshyammann.com> <7e1e40f2-afeb-ebcb-6faa-b3b7534a8039@openstack.org> <8554915e-0ee1-7d69-1192-1a043b71f742@debian.org> Message-ID: <1763ee6eafe.11c3a7b1083930.1008218442884362294@ghanshyammann.com> ---- On Thu, 12 Nov 2020 10:43:57 -0600 Thomas Goirand wrote ---- > On 11/12/20 11:29 AM, Thierry Carrez wrote: > > Ghanshyam Mann wrote: > >> Yes, as part of the retirement process all deliverables under the > >> project needs to be removed > >> and before removal we do: > >> 1. Remove all dependencies. > >> 2. Refactor/remove the gate job dependency also. > >> 3. Remove the code from the retiring repo. > > > > I think Thomas's point was that some of those retired deliverables are > > required by non-retired deliverables, like: > > > > - python-qinlingclient being required by mistral-extra > > > > - python-searchlightclient and python-karborclient being required by > > openstackclient and python-openstackclient > > > > We might need to remove those features/dependencies first, which might > > take time... > > Exactly, thanks for correcting my (very) poor wording. Updates: Most of the retirement patches are merged now (next is to merge the project-config and governance patch), please merge all the dependencies removal patches to avoid any break in respective project gate/features. https://review.opendev.org/q/topic:%22retire-searchlight%22+(status:open%20OR%20status:merged) -gmann > > Cheers, > > Thomas Goirand (zigo) > > From gouthampravi at gmail.com Mon Dec 7 20:53:07 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 7 Dec 2020 12:53:07 -0800 Subject: [interop][refstack] Revival/Maintenance proposal Message-ID: Hi, As many of you are aware, the interop working group defines test advisories for evaluating implementations of OpenStack for interoperability. The implementers themselves - whether they are OpenStack cloud operators, distributions or vendors of specific products aligning with OpenStack use refstack-client [1] to submit their results. Refstack client is a bunch of helpful scripts to stand up tempest, run the tests and compile results. The results compiled from refstack-client are posted to https://refstack.openstack.org - code for which exists in the "refstack" repository [2] For a few releases now, gmann has pointed out to the Interop WG that refstack needs maintainers. 
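(For anyone not familiar with the client, the workflow is roughly the two commands below; the tempest.conf path, guideline URL and result file are only illustrative, so please check the refstack-client README for the exact options:

$ refstack-client test -c ~/tempest.conf -v --test-list "https://refstack.openstack.org/api/v1/guidelines/2020.06/tests?target=platform&type=required"
$ refstack-client upload <results json produced by the test step> --url https://refstack.openstack.org/api

)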
In our last meeting, Martin Kopec (mkopec) volunteered to maintain refstack-client, and help coordinate changes to adopt the latest advisories published by the interop working group. Martin's been running interop testing for Red Hat OpenStack, and he has contributed ansible automation for installing, running tests and submitting results [3]. So he'll bring valuable interop perspective to this project and maintain it effectively. The interop WG also suggested that Martin's ansible scripts [3] be moved under the "osf/" organization on opendev.org, and he's currently pursuing that [4]. I see a number of names on the refstack-core group on Gerrit [5]. Is it possible there's anyone in the team that's still involved in the maintenance of this project, and can add Martin? Thanks, Goutham [1] https://opendev.org/osf/refstack-client [2] https://opendev.org/osf/refstack [3] https://opendev.org/x/ansible-role-refstack-client [4] https://review.opendev.org/c/openstack/project-config/+/765787 [5] https://review.opendev.org/admin/groups/8cd7203820004ccdb67c999ca3b811534bf76d6f,members -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Dec 7 21:28:10 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 07 Dec 2020 15:28:10 -0600 Subject: [interop][refstack] Revival/Maintenance proposal In-Reply-To: References: Message-ID: <1763f1ad0ba.ada65cde85178.8593886632553736471@ghanshyammann.com> ---- On Mon, 07 Dec 2020 14:53:07 -0600 Goutham Pacha Ravi wrote ---- > Hi, > As many of you are aware, the interop working group defines test advisories for evaluating implementations of OpenStack for interoperability. The implementers themselves - whether they are OpenStack cloud operators, distributions or vendors of specific products aligning with OpenStack use refstack-client [1] to submit their results. Refstack client is a bunch of helpful scripts to stand up tempest, run the tests and compile results. The results compiled from refstack-client are posted to https://refstack.openstack.org - code for which exists in the "refstack" repository [2] > For a few releases now, gmann has pointed out to the Interop WG that refstack needs maintainers. In our last meeting, Martin Kopec (mkopec) volunteered to maintain refstack-client, and help coordinate changes to adopt the latest advisories published by the interop working group. Martin's been running interop testing for Red Hat OpenStack, and he has contributed ansible automation for installing, running tests and submitting results [3]. So he'll bring valuable interop perspective to this project and maintain it effectively. The interop WG also suggested that Martin's ansible scripts [3] be moved under the "osf/" organization on opendev.org, and he's currently pursuing that [4]. > I see a number of names on the refstack-core group on Gerrit [5]. Is it possible there's anyone in the team that's still involved in the maintenance of this project, and can add Martin? +1 on adding Martin in refstack-core group. I think Thierry can add him. Also, we can have interop chair (Prakash currently) also to be part of these groups to make such future changes? 
-gmann > Thanks, Goutham > [1] https://opendev.org/osf/refstack-client[2] https://opendev.org/osf/refstack[3] https://opendev.org/x/ansible-role-refstack-client[4] https://review.opendev.org/c/openstack/project-config/+/765787[5] https://review.opendev.org/admin/groups/8cd7203820004ccdb67c999ca3b811534bf76d6f,members > > From skaplons at redhat.com Mon Dec 7 21:33:12 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 7 Dec 2020 22:33:12 +0100 Subject: [ops] In a compute node, outbound traffic blocked when a VM running In-Reply-To: References: Message-ID: <20201207213312.ipkm6xj2n7i6gzpu@p1.localdomain> Hi, On Mon, Dec 07, 2020 at 10:11:19AM -0600, Hyunwoo KIM wrote: > Summary of the problem > > This problem is in a compute node, not in a VM. > > Once a VM is running in a compute node, > > all outbound connections in a compute node (not VM) are blocked. > > For example: > > # telnet www.google.com 80 > > Trying 172.217.5.4... > > > > Technical Details: > > We only use provider network. > > These 4 services are running in each compute node: > > - neutron-linuxbridge-agent.service > > - neutron-dhcp-agent.service > > - neutron-metadata-agent.service > > - openstack-nova-compute.service > > > > Detailed description of the problem: > > > In a compute node, the following is the result of iptables -L when no VM is > running: > > > > > Chain INPUT (policy ACCEPT) > > target prot opt source destination > > neutron-linuxbri-INPUT all -- anywhere anywhere > > And our usual rules > > > Chain FORWARD (policy ACCEPT) > > target prot opt source destination > > neutron-filter-top all -- anywhere anywhere > > neutron-linuxbri-FORWARD all -- anywhere anywhere > > > Chain OUTPUT (policy ACCEPT) > > target prot opt source destination > > neutron-filter-top all -- anywhere anywhere > > neutron-linuxbri-OUTPUT all -- anywhere anywhere > > > Chain neutron-filter-top (2 references) > > target prot opt source destination > > neutron-linuxbri-local all -- anywhere anywhere > > > Chain neutron-linuxbri-FORWARD (1 references) > > target prot opt source destination > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out tapb > --physdev-is-bridged > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tapb > --physdev-is-bridged > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out tap9 > --physdev-is-bridged > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tap9 > --physdev-is-bridged > > > Chain neutron-linuxbri-INPUT (1 references) > > target prot opt source destination > > > Chain neutron-linuxbri-OUTPUT (1 references) > > target prot opt source destination > > > Chain neutron-linuxbri-local (1 references) > > target prot opt source destination > > > Chain neutron-linuxbri-sg-chain (0 references) > > target prot opt source destination > > ACCEPT all -- anywhere anywhere > > > Chain neutron-linuxbri-sg-fallback (0 references) > > target prot opt source destination > > DROP all -- anywhere anywhere > > > > > > In the same compute node, when a VM is running, > > the following is the result of iptables -L: > > > > > > Chain INPUT (policy ACCEPT) > > target prot opt source destination > > neutron-linuxbri-INPUT all -- anywhere anywhere > > And our usual rules > > > Chain FORWARD (policy ACCEPT) > > target prot opt source destination > > neutron-filter-top all -- anywhere anywhere > > neutron-linuxbri-FORWARD all -- anywhere anywhere > > > Chain OUTPUT (policy ACCEPT) > > target prot opt source destination > > neutron-filter-top all -- anywhere anywhere > > 
neutron-linuxbri-OUTPUT all -- anywhere anywhere > > > Chain neutron-filter-top (2 references) > > target prot opt source destination > > neutron-linuxbri-local all -- anywhere anywhere > > > Chain neutron-linuxbri-FORWARD (1 references) > > target prot opt source destination > > neutron-linuxbri-sg-chain all -- anywhere anywhere PHYSDEV match > --physdev-out tap8 --physdev-is-bridged > > neutron-linuxbri-sg-chain all -- anywhere anywhere PHYSDEV match > --physdev-in tap8 --physdev-is-bridged > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out tapb > --physdev-is-bridged > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tapb > --physdev-is-bridged > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out tap9 > --physdev-is-bridged > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tap9 > --physdev-is-bridged > > > Chain neutron-linuxbri-INPUT (1 references) > > target prot opt source destination > > neutron-linuxbri-o8 all -- anywhere anywhere PHYSDEV match --physdev-in > tap8 --physdev-is-bridged > > > Chain neutron-linuxbri-OUTPUT (1 references) > > target prot opt source destination > > > Chain neutron-linuxbri-i8 (1 references) > > target prot opt source destination > > RETURN all -- anywhere anywhere state RELATED,ESTABLISHED > > RETURN udp -- anywhere fermicloud248.fnal.gov udp spt:bootps > dpt:bootpc > > RETURN udp -- anywhere 255.255.255.255 udp spt:bootps dpt:bootpc > > RETURN icmp -- anywhere anywhere > > RETURN tcp -- fermilab-net.fnal.gov/16 anywhere tcp dpt:ssh > > RETURN all -- anywhere anywhere match-set > NIPv41d69ba3c-68e3-414f-8f1b- src > > DROP all -- anywhere anywhere state INVALID > > neutron-linuxbri-sg-fallback all -- anywhere anywhere > > > Chain neutron-linuxbri-local (1 references) > > target prot opt source destination > > > Chain neutron-linuxbri-o8 (2 references) > > target prot opt source destination > > RETURN udp -- default 255.255.255.255 udp spt:bootpc > dpt:bootps > > neutron-linuxbri-s8 all -- anywhere anywhere > > RETURN udp -- anywhere anywhere udp spt:bootpc dpt:bootps > > DROP udp -- anywhere anywhere udp spt:bootps dpt:bootpc > > RETURN all -- anywhere anywhere state RELATED,ESTABLISHED > > RETURN tcp -- anywhere anywhere tcp dpt:https > > RETURN all -- anywhere anywhere > > RETURN tcp -- anywhere anywhere tcp dpt:http > > DROP all -- anywhere anywhere state INVALID > > neutron-linuxbri-sg-fallback all -- anywhere anywhere > > > Chain neutron-linuxbri-s8 (1 references) > > target prot opt source destination > > RETURN all -- fermicloud248.fnal.gov anywhere MAC FA:16: > > DROP all -- anywhere anywhere > > > > Chain neutron-linuxbri-sg-chain (2 references) > > target prot opt source destination > > neutron-linuxbri-i8 all -- anywhere anywhere PHYSDEV match --physdev-out > tap8 --physdev-is-bridged > > neutron-linuxbri-o8 all -- anywhere anywhere PHYSDEV match --physdev-in tap8 > --physdev-is-bridged > > ACCEPT all -- anywhere anywhere > > > Chain neutron-linuxbri-sg-fallback (2 references) > > target prot opt source destination > > DROP all -- anywhere anywhere > > > > > > Let me summarize the differences from when no VM running: > > > Chain INPUT : no change > > Chain FORWARD: no change > > Chain OUTPUT : no change > > Chain neutron-filter-top: no change > > > Chain neutron-linuxbri-FORWARD: Two new rules are added > > neutron-linuxbri-sg-chain > > neutron-linuxbri-sg-chain > > > Chain neutron-linuxbri-INPUT: One new rule is added > > neutron-linuxbri-o8ae816b0-f > > > Chain neutron-linuxbri-sg-chain: Two 
new rules are added > > neutron-linuxbri-i8 > > neutron-linuxbri-o8 Those are chains which represents rules from Your Security Group used by a VM > > > Chain neutron-linuxbri-OUTPUT: no change > > Chain neutron-linuxbri-local: no change > > Chain neutron-linuxbri-sg-fallback: no change > > > Chain neutron-linuxbri-i8: A new chain with multiple rules > > Chain neutron-linuxbri-o8: A new chain with multiple rules In those 2 chains there are ingress and egress SG rules implemented > > Chain neutron-linuxbri-s8: A new chain with multiple rules And in this one there are antispoofing rules for Your port added. > > > > But now a problem arises here: > > All outbound connections are blocked (remember this is in a compute node, > not VM): > > For example: > > # telnet www.google.com 80 > > Trying 172.217.5.4... > > > When there isn't any VM running, We don't see this problem. > > > I was wondering if I needed to create a new security group rule for the > port 80 (for example) > > but that didn't solve the issue. > > > Any technical advice will be appreciated, You should check where exactly Your packets are dropped. Also, You didn't tell us what is the type of the Neutron network to which Your VM is plugged and how bridges are done on Your compute node. > > Thanks, > > Hyunwoo -- Slawek Kaplonski Principal Software Engineer Red Hat From rosmaita.fossdev at gmail.com Mon Dec 7 21:49:27 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 7 Dec 2020 16:49:27 -0500 Subject: openstack read only user In-Reply-To: References: Message-ID: <4715c72f-82d4-0376-6b75-056e7c898565@gmail.com> On 12/5/20 12:24 PM, Laurent Dumont wrote: > As far as I know, the support for a read only user is not complete in > Queens or Rocky. > > http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017556.html > I believe that's correct, you have to configure it on your own as Ruslanas suggested. You can see if this helps: https://docs.openstack.org/cinder/latest/configuration/block-storage/policy-config-HOWTO.html > > On Sat, Dec 5, 2020 at 12:16 PM Ruslanas Gžibovskis > wrote: > > hi Dhanesh. > > At least in Newton you had to create a new role, and create a new > policy.json for each service (nova, neutron, glance, and so on) for > that role, and assign user to that group. > > but in Queens , I saw it was looking like working, and itm ight have > something like that by default (I mean role). > > > On Fri, 4 Dec 2020 at 20:01, dhanesh1212121212 > > wrote: > > Hi Team, > > Please let me know the steps to create a read only user in > openstack. (My version is Rocky) > > Regards, > Dhanesh M. > > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > From gagehugo at gmail.com Mon Dec 7 22:41:54 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 7 Dec 2020 16:41:54 -0600 Subject: [OSSA-2020-008] horizon: Open redirect in workflow forms Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 ============================================== OSSA-2020-008: Open redirect in workflow forms ============================================== :Date: December 03, 2020 :CVE: CVE-2020-29565 Affects ~~~~~~~ - - Horizon: <15.3.2, >=16.0.0 <16.2.1, >=17.0.0 <18.3.3, >=18.4.0 <18.6.0 Description ~~~~~~~~~~~ Pritam Singh (Red Hat) reported a vulnerability in Horizon's workflow forms. Previously there was a lack of validation on the "next" parameter, which would allow someone to supply a malicious URL in Horizon that can cause an automatic redirect to the provided malicious URL. 
Patches ~~~~~~~ - - https://review.opendev.org/758843 (Stein) - - https://review.opendev.org/758841 (Train) Credits ~~~~~~~ - - Pritam Singh from Red Hat (CVE-2020-29565) References ~~~~~~~~~~ - - https://launchpad.net/bugs/1865026 - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29565 -----BEGIN PGP SIGNATURE----- iQIzBAEBCgAdFiEEWa125cLHIuv6ekof56j9K3b+vREFAl/OrjwACgkQ56j9K3b+ vRG/Gg//Tyj5La8eFwIrwhpDbV/tKNFS+t3NzuhJzLS24WNS9cLf5yDronRdBPdT Ow2OegTZ7K5GyoRARpycTjtE66RIizX9I8Kx27FXPc83hLYYOs/MButYpqcp0swM 687RXZGFcZ5HZtPuRuTcclEcyhzvcUX7HXmznOCmVOHchr+RXzmp6cXC7tyCuNkV cGuuMtptDfkFmn2MpGmiTWEiMusMRbV5HqeyY39jg5dwph0kbMCcuzkX6c2WHubE T+rjVKbmqHr+v7og6mkZoK+pVk6Ulta/lGsYh/0NlszdQw3poN4FIt//TIwJZVwx WSlbMt6IwBW5XiPXvjpX9Awis6CT0jxlIV5XBq+klr3Jo+YnDsChElIPQs3CRKoM vqXVextHCk3LK1Evs3FkBns2Taro4tWOlkGYKR6INT4F1TJKNIzIUiF08673uF3B 8zXDfnVEb7tEMqwu6OdVnfQQ4SRu7uyrN1sHhtwIyfK10AAI7gfJL/wbItJy21Om SQahTfDnikEY5gYYU+NH0LBMXkE0I/T+uvPh4LgP7wUxCMR9uI8+iA0711Gp/aPD WUdm3pUfIJYE7Gq6sT7BJQftHyMPcxOBj+MIrmFDFOxyPV70Mub+f34zxdu3Qoda tZNpy/BGL19VqrlRa9R8H65tzzNy7k5GqkaUYEF5/LegfUgZOTo= =jr+k -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Tue Dec 8 02:24:37 2020 From: hjensas at redhat.com (Harald Jensas) Date: Tue, 8 Dec 2020 03:24:37 +0100 Subject: [TripleO] how to make that inspection IP is given only to known hosts In-Reply-To: References: Message-ID: <879157b1-95b7-cfe1-4694-591900882db1@redhat.com> On 12/7/20 8:27 PM, Oliver Walsh wrote: > Hi, > > The provisioning network needs to be isolated, typically by using VLANs > on the switch: > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/environments/baremetal.html#networking > > > In general, you can only have one DHCP server on an L2 network (ignoring > high-availability DHCP setups). > > Thanks, > Ollie > I fully agree with Ollie here, you should have the provisioning leg of the undercloud on a isolated VLAN. However, if you cant get an isolated network segment, and are on Victoria release ironic inspector has a new option that can be used to make the inspector DHCP server only answer requests from known MAC addresses, see: https://review.opendev.org/c/openstack/ironic-inspector/+/753435 // Harald > > On Fri, 4 Dec 2020 at 19:34, Ruslanas Gžibovskis > wrote: > > Hi all, > > I have a situation, when in my network, I have loads of equipment, > which I do not control. and Inspection range gets occupied quite fast. > > and in TCP dump I get such messages: >    DHCP-Message Option 53, length 1: NACK >    Server-ID Option 54, length 4: DHCPD-IP >    MSG Option 56, length 21: "address not available" > > I have disabled: enabled_node_discovery = false > > Anything else? > > maybe additional environment options for undercloud I could provide? > > Than kyou in advance, have a good $day_time > -- > Ruslanas Gžibovskis > +370 6030 7030 > From ruslanas at lpic.lt Tue Dec 8 06:53:53 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 8 Dec 2020 07:53:53 +0100 Subject: [TripleO] how to make that inspection IP is given only to known hosts In-Reply-To: <879157b1-95b7-cfe1-4694-591900882db1@redhat.com> References: <879157b1-95b7-cfe1-4694-591900882db1@redhat.com> Message-ID: yeah, same here, I would like to have a dedicated network :) but (as now popular to say) #reallife :D Thank you. Will take a look at the upgrade. 
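As a rough illustration of the mechanism Harald mentions, dnsmasq itself can be told to ignore DHCP requests from clients it does not know about. The snippet below is generic dnsmasq configuration with example values only; it is not the exact ironic-inspector option added by the linked review, so check that review for the real knob and how inspector wires it in.

    # Only answer clients matching a dhcp-host entry (dnsmasq tags those
    # clients as "known"); requests from any other MAC are silently ignored.
    dhcp-ignore=tag:!known
    # One entry per known node MAC (example MAC and IP values).
    dhcp-host=52:54:00:aa:bb:cc,192.0.2.10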
On Tue, 8 Dec 2020 at 03:32, Harald Jensas wrote: > On 12/7/20 8:27 PM, Oliver Walsh wrote: > > Hi, > > > > The provisioning network needs to be isolated, typically by using VLANs > > on the switch: > > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/environments/baremetal.html#networking > > < > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/environments/baremetal.html#networking > > > > > > In general, you can only have one DHCP server on an L2 network (ignoring > > high-availability DHCP setups). > > > > Thanks, > > Ollie > > > > I fully agree with Ollie here, you should have the provisioning leg of > the undercloud on a isolated VLAN. > > However, if you cant get an isolated network segment, and are on > Victoria release ironic inspector has a new option that can be used to > make the inspector DHCP server only answer requests from known MAC > addresses, see: > https://review.opendev.org/c/openstack/ironic-inspector/+/753435 > > > // > Harald > > > > > On Fri, 4 Dec 2020 at 19:34, Ruslanas Gžibovskis > > wrote: > > > > Hi all, > > > > I have a situation, when in my network, I have loads of equipment, > > which I do not control. and Inspection range gets occupied quite > fast. > > > > and in TCP dump I get such messages: > > DHCP-Message Option 53, length 1: NACK > > Server-ID Option 54, length 4: DHCPD-IP > > MSG Option 56, length 21: "address not available" > > > > I have disabled: enabled_node_discovery = false > > > > Anything else? > > > > maybe additional environment options for undercloud I could provide? > > > > Than kyou in advance, have a good $day_time > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > > > > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Tue Dec 8 06:59:02 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Tue, 8 Dec 2020 07:59:02 +0100 Subject: [rdo-users] [tripleo] Deployment update (node addition) after changing aggregate groups/zones In-Reply-To: References: Message-ID: The first deployment set all computes to zone named according to stack name, but later I have created Alpha01, Alpha02. And set according in node-info.yaml file. But still, it fails with message, that some compute is already present in zone Alpha01... like it cannot create such zone. And I say, yes captain, I know, I have created and added YOU into that zone... Maybe I need to do some "tweaks" to DB? just now thought about it. On Mon, 7 Dec 2020 at 20:02, Oliver Walsh wrote: > Hi, > > You will need to manually remove the hosts from the old zone ("Alpha01") > before adding them to the new zone. A host can only belong to one AZ. > > Thanks, > Ollie > > On Mon, 7 Dec 2020 at 11:32, Ruslanas Gžibovskis wrote: > >> anyone know, how to bypass aggregation group error? thank you. >> >> On Sat, 5 Dec 2020 at 18:08, Ruslanas Gžibovskis >> wrote: >> >>> Hi all, >>> >>> Any thoughts on this one? >>> >>> >>>> Hi all. >>>>> >>>>> After changing the host aggregate group and zone, I cannot run >>>>> OpenStack deploy command successfully again, even after updating deployment >>>>> environment files according to my setup. 
>>>>> >>>>> I receive error bigger one in [0]: >>>>> 2020-12-02 10:16:18.532419 | 52540000-0001-cf95-492f-0000000003ca | >>>>> FATAL | Nova: Manage aggregate and availability zone and add hosts to >>>>> the zone | undercloud | error={"changed": false, "msg": "ConflictException: >>>>> 409: Client Error for url: >>>>> http://10.120.129.199:8774/v2.1/os-aggregates/1/action, Cannot add >>>>> host to aggregate 1. Reason: One or more hosts already in availability >>>>> zone(s) ['Alpha01']."} >>>>> >>>>> I was following this link [1] instructions for "Configuring >>>>> Availability Zones (AZ)" steps to modify with OpenStack commands. And zone >>>>> was created successfully, but when I needed to add additional nodes, >>>>> executed deployment again with increased numbers it was complaining about >>>>> an incorrect aggregate zone, and now it is complaining about not empty zone >>>>> with error [0] mentioned above. I have added aggregate zones into >>>>> deployment files even role file... any ideas? >>>>> >>>>> Also, I think, this should be mentioned, that added it after install, >>>>> you lose the possibility to update using tripleo tool and you will need to >>>>> modify environment files with. >>>>> >>>>> >>>>> >>>>> [0] http://paste.openstack.org/show/800622/ >>>>> [1] >>>>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html#configuring-availability-zones-az >>>>> >>>>> >>>>> >>> >> >> -- >> Ruslanas Gžibovskis >> +370 6030 7030 >> _______________________________________________ >> users mailing list >> users at lists.rdoproject.org >> http://lists.rdoproject.org/mailman/listinfo/users >> >> To unsubscribe: users-unsubscribe at lists.rdoproject.org >> > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue Dec 8 08:26:11 2020 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 8 Dec 2020 08:26:11 +0000 Subject: [ironic][masakari] Should masakari rely on Ironic to power fence failing hosts? In-Reply-To: <1d78ede5-b9ea-659a-87d0-5bf6751df502@gandi.net> References: <1d78ede5-b9ea-659a-87d0-5bf6751df502@gandi.net> Message-ID: On Mon, 7 Dec 2020 at 18:25, Nicolas Parquet wrote: > > Hello Ironic people! > > I am coming to you with a question from one of the points we discussed > during Masakari's PTG [1]. > > When a host fails and Masakari gets notified, there are some cases in > which we would like Masakari to power fence the host before evacuating > its instances to other hosts. That way it is safe to start the instances > from the failed host on another one. > > Given that Ironic has the responsability of managing hosts, we would be > interested to know how Masakari should do that, in your view? > We discussed implementing IPMI power on / off directly in masakari, but > maybe an integration where Masakari calls some Ironic API would be better? > Or maybe we should implement it relying on the pyghmi library, or any > other library? > > A positive side of relying on Ironic would be that Masakari does not > have to store IPMI information about hosts; however that would create a > dependency between the 2 projects as some deployments might use masakari > without managing their hosts through Ironic. > > Any insight is welcome! Hi Nicolas, I would suggest defining a power management plugin interface, with the first implementation being for Ironic. If in future someone wants to add support for MAAS, IPMI, or any other tool, it should be possible. 
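A very rough sketch of what such a plugin interface could look like is below. It is purely hypothetical: masakari has no such interface today, the names PowerFencer, IronicPowerFencer and ironic_node_uuid are invented for illustration, and the only real calls assumed are python-ironicclient's node.get() and node.set_power_state().

    import abc

    class PowerFencer(abc.ABC):
        """Driver interface for power-fencing a failed hypervisor."""

        @abc.abstractmethod
        def fence(self, host):
            """Power the host off before its instances are evacuated."""

    class IronicPowerFencer(PowerFencer):
        """Fence by asking Ironic to power off the backing bare metal node."""

        def __init__(self, ironic_client):
            self.ironic = ironic_client

        def fence(self, host):
            # Map the hypervisor to its bare metal node, then request power
            # off through the Ironic API rather than speaking IPMI directly.
            node = self.ironic.node.get(host.ironic_node_uuid)
            self.ironic.node.set_power_state(node.uuid, 'off')

A MAAS or plain-IPMI fencer would then simply be another PowerFencer implementation selected per host, which keeps IPMI credentials out of masakari for the Ironic case.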
Any per-host configuration would need to be extensible enough to support this. > > Regards, > Nicolas > > [1] https://etherpad.opendev.org/p/masakari-wallaby-vptg > > > -- > Nicolas Parquet > Gandi > nicolas.parquet at gandi.net > From honjo.rikimaru at ntt-tx.co.jp Tue Dec 8 09:36:39 2020 From: honjo.rikimaru at ntt-tx.co.jp (Rikimaru Honjo) Date: Tue, 08 Dec 2020 18:36:39 +0900 Subject: [Keystone]Question about access rules for identity API Message-ID: <38ee02a3-cb58-9231-1717-7b284a373a38@ntt-tx.co.jp_1> Hi, Are the access rules not applied for identity API? I created an application credential with a access rule that allows only project list API. But the application credential allows all identity APIs. Is this correct? Are there any documents that explains about this? By the way, the application credential denied all other service's APIs. I think this behavior is correct. I use OpenStack Ubuntu. Best regards, -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at ntt-tx.co.jp From skaplons at redhat.com Tue Dec 8 11:16:15 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 08 Dec 2020 12:16:15 +0100 Subject: [neutron] Team meeting - Tuesday 08.12.2020 Message-ID: <8584326.hv565dTyh4@p1> Hi, It's just quick reminder that today at 1400 UTC on the #openstack-meeting-3 channel there will be neutron team meeting. Agenda is on [1]. If You have any topics which You would like to discuss, please add it to the "On demand section" in the agenda page. Just after that, at 1500 UTC in the same channel there will be also Neutron CI team meeting. [1] https://wiki.openstack.org/wiki/Network/Meetings -- Slawek Kaplonski Principal Software Engineer Red Hat From pbasaras at gmail.com Tue Dec 8 12:14:23 2020 From: pbasaras at gmail.com (Pavlos Basaras) Date: Tue, 8 Dec 2020 14:14:23 +0200 Subject: [development] Installation of Openstack for development using OVS Message-ID: Hello, I would like to install openstack for testing/development (not production) with OVS, with a controller node and multiple compute nodes, with the option to use ironic. I have followed the documentation https://docs.openstack.org/install-guide/ and i have a successful installation with linux bridges instead of OVS, with all the features working as mentioned in the tutorial. Any directions in how to have an openstack installation with OVS, e.g., a similar manual? Any advice welcome. all the best Pavlos. -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Tue Dec 8 12:36:09 2020 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 8 Dec 2020 13:36:09 +0100 Subject: [infra][magnum][ci] Issues installing bashate and coverage Message-ID: Hello infra, openstack-tox-lower-constraints fails for bashate and coverage. (Maybe more, I bumped bashate and it failed for coverage. I don;t want to waste more resources on our CI) eg https://review.opendev.org/c/openstack/magnum/+/765881 https://review.opendev.org/c/openstack/magnum/+/765979 Do we miss something? Thanks, Spyros -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ts-takahashi at nec.com Tue Dec 8 13:20:19 2020 From: ts-takahashi at nec.com (=?utf-8?B?VEFLQUhBU0hJIFRPU0hJQUtJKOmrmOapi+OAgOaVj+aYjik=?=) Date: Tue, 8 Dec 2020 13:20:19 +0000 Subject: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance In-Reply-To: References: <9aedf122-5ebb-e29a-d977-b58635cabe51@nokia.com> <1760a4b795a.10e369974791414.187901784754730298@ghanshyammann.com> Message-ID: Hi Bob and Sahdev, Tacker team has started to discuss, and at least 5 members want to participate in the maintenance of heat-translator and tosca-parser. In my understanding, heat-translator and tosca-parser are different projects and core team is different. We’d like to different members to participate each core team. Is it OK? Should I send the name list to you? Best regards, Toshiaki From: ueha.ayumu at fujitsu.com Sent: Tuesday, December 1, 2020 9:25 AM To: openstack-discuss Subject: RE: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance Hi Bob and Sahdev I’m Ueha from tacker team. Thank you for reviewing my patch on the Victria release. Excuse me during the discussion about maintenance. I posted a new bug fix patch for policies validate. Could you review it? Thanks! https://bugs.launchpad.net/tosca-parser/+bug/1903233 https://review.opendev.org/c/openstack/tosca-parser/+/763144 Best regards, Ueha From: TAKAHASHI TOSHIAKI(高橋 敏明) > Sent: Monday, November 30, 2020 6:09 PM To: Rico Lin >; openstack-discuss > Subject: RE: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance Hi Rico, Thanks. OK, we’ll discuss with Bob to proceed with development of the projects. Regards, Toshiaki From: Rico Lin > Sent: Monday, November 30, 2020 4:34 PM To: openstack-discuss > Subject: Re: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance On Mon, Nov 30, 2020 at 11:06 AM TAKAHASHI TOSHIAKI(高橋 敏明) > wrote: > > Need to discuss with Heat, tc, etc.? > > And I'd like to continue to discuss other points such as cooperation with other members(Heat, or is there any users of those?). I don't think you need further discussion with tc as there still are ways for your patch to get reviewed, release package, or for you to join heat-translator-core team As we treat heat translator as a separated team, I'm definitely +1 on any decision from Bob. So not necessary to discuss with heat core team unless you find it difficult to achieve above tasks. I'm more than happy to provide help if needed. -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue Dec 8 13:33:41 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 8 Dec 2020 08:33:41 -0500 Subject: [cinder] reminder: virtual mid-cycle wednesday 1400 utc Message-ID: <09ecdaee-afc0-2e06-2afc-60c90eacd60e@gmail.com> The Cinder Wallaby R-18 virtual mid-cycle will be held: DATE: WEDNESDAY 9 DECEMBER 2020 TIME: 1400-1600 UTC LOCATION: https://bluejeans.com/3228528973 The meeting will be recorded. 
Please add topics to the mid-cycle etherpad: https://etherpad.opendev.org/p/cinder-wallaby-mid-cycles cheers, brian From mnaser at vexxhost.com Tue Dec 8 14:11:17 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 8 Dec 2020 09:11:17 -0500 Subject: [tc] weekly update Message-ID: Hi everyone, Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # Patches ## Open Reviews - Revive os_monasca https://review.opendev.org/c/openstack/governance/+/765800 - Deprecate openstack-ansible-galera_client role https://review.opendev.org/c/openstack/governance/+/765784 - Clarify impact on releases for SIGs https://review.opendev.org/c/openstack/governance/+/752699 - Remove Searchlight project team https://review.opendev.org/c/openstack/governance/+/764530 - Remove Qinling project team https://review.opendev.org/c/openstack/governance/+/764523 - Add Resolution of TC stance on the OpenStackClient https://review.opendev.org/c/openstack/governance/+/759904 ## General Changes - Remove already done use-builtin-mock from goal https://review.opendev.org/c/openstack/governance/+/764262 - Propose Kendall Nelson for vice chair https://review.opendev.org/c/openstack/governance/+/762014 - Add election schedule exceptions in charter https://review.opendev.org/c/openstack/governance/+/751941 - Generate the TC liaisons assignments https://review.opendev.org/c/openstack/governance/+/763810 - Update example and oslo code usage in JSON->YAML goal https://review.opendev.org/c/openstack/governance/+/764261 - Clarify the requirements for supports-api-interoperability https://review.opendev.org/c/openstack/governance/+/760562 - Remove assert_supports-zero-downtime-upgrade tag https://review.opendev.org/c/openstack/governance/+/761975 - Add assert:supports-standalone https://review.opendev.org/c/openstack/governance/+/722399 ## Project Updates - Add Magpie charm to OpenStack charms https://review.opendev.org/c/openstack/governance/+/762820 ## Other Reminders - [TC] Weekly meeting, December 10 at 1500 UTC. If you would like to add topics for discussion, please go to https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting and fill out your suggestions by Wednesday, December 09, at 2100 UTC. Thanks for reading! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue Dec 8 14:35:44 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 8 Dec 2020 15:35:44 +0100 Subject: [all][qa][kolla][ops][tc] CentOS Project shifts focus to CentOS Stream Message-ID: TL;DR After 2021 there will be no CentOS 8 point releases. The focus is on CentOS 8 Stream. For more information, please read [1]. As the QA team, we have Stream testing proposed already [2]. We actually had Victoria from Manila ask us about it quite a while ago. Ghanshyam promised to coordinate this topic with the TC. As the Kolla team, we made switch to CentOS 8 Stream a priority of Wallaby and we will likely discuss this option for stable branches which will outlive the 2021 deadline. PS: Don't kill me. I'm just passing the information on. 
:-) [1] https://lists.centos.org/pipermail/centos-devel/2020-December/075451.html [2] https://review.opendev.org/c/openstack/devstack/+/759122 Kind regards, -yoctozepto From timsateroy at gmail.com Tue Dec 8 15:12:30 2020 From: timsateroy at gmail.com (=?UTF-8?B?VGltIFPDpnRlcsO4eQ==?=) Date: Tue, 8 Dec 2020 16:12:30 +0100 Subject: [neutron] Status on BGP support when using OVN in Neutron Message-ID: Hi, I'm trying to explore BGP support when using OVN in Neutron, but haven't been able to find much info on the topic. According to [1] committed in march 2020, OVN seems to lack similar functionality as is seen in ML2/OVS, but there's no further references: > Currently ML2/OVS supports making a tenant subnet routable via BGP, > and can announce host routes for both floating and fixed IP > addresses. I've tried searching for 'bgp' on Neutrons launchpad for issues tagged with 'ovn', in the 'ovn-org/ovn' repo or on docs.ovn.org, but I'm getting blanks. Does anyone happen to know where OVN stands on this, and especially in the context of Neutron? Or perhaps able to point me in the right direction where I can find out more? Thanks. [1]: https://docs.openstack.org/neutron/victoria/ovn/gaps.html -- Tim From dpawlik at redhat.com Tue Dec 8 16:00:30 2020 From: dpawlik at redhat.com (Daniel Pawlik) Date: Tue, 8 Dec 2020 17:00:30 +0100 Subject: [tripleo][ci] CI jobs failures on RDO project In-Reply-To: References: Message-ID: Hi, The CI jobs should be fine now. Regards, Dan On Mon, Dec 7, 2020 at 2:44 PM Wesley Hayutin wrote: > FYI > > ---------- Forwarded message --------- > From: Daniel Pawlik > Date: Mon, Dec 7, 2020 at 6:13 AM > Subject: [rdo-users] CI jobs failures on RDO project > To: > > > Hello, > > We are facing an outage in our cloud providers. That's why from time to > time you can have problems validating CI jobs in RDO project. > > We are doing our best to fix the issues as soon as possible. > > Kindly regards, > Dan > _______________________________________________ > users mailing list > users at lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/users > > To unsubscribe: users-unsubscribe at lists.rdoproject.org > -- Regards, Daniel Pawlik -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Dec 8 16:16:33 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 08 Dec 2020 10:16:33 -0600 Subject: [all][qa][kolla][ops][tc] CentOS Project shifts focus to CentOS Stream In-Reply-To: References: Message-ID: <1764323e0c1.fdda33c8129426.7268906329031471915@ghanshyammann.com> ---- On Tue, 08 Dec 2020 08:35:44 -0600 Radosław Piliszek wrote ---- > TL;DR > > After 2021 there will be no CentOS 8 point releases. > The focus is on CentOS 8 Stream. > > For more information, please read [1]. > > As the QA team, we have Stream testing proposed already [2]. > We actually had Victoria from Manila ask us about it quite a while ago. > > Ghanshyam promised to coordinate this topic with the TC. Added in TC's next meeting agenda happening on Thursday, Dec 3rd. Please join us to discuss the plan and volunteer for help as this will require more work as stable branches since stable/ussuri might need updates. - https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions -gmann > > As the Kolla team, we made switch to CentOS 8 Stream a priority of > Wallaby and we will likely discuss this option for stable branches > which will outlive the 2021 deadline. > > PS: Don't kill me. 
I'm just passing the information on. :-) > > [1] https://lists.centos.org/pipermail/centos-devel/2020-December/075451.html > [2] https://review.opendev.org/c/openstack/devstack/+/759122 > > Kind regards, > -yoctozepto > > From gmann at ghanshyammann.com Tue Dec 8 16:19:23 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 08 Dec 2020 10:19:23 -0600 Subject: [all][qa][kolla][ops][tc] CentOS Project shifts focus to CentOS Stream In-Reply-To: <1764323e0c1.fdda33c8129426.7268906329031471915@ghanshyammann.com> References: <1764323e0c1.fdda33c8129426.7268906329031471915@ghanshyammann.com> Message-ID: <17643267900.dc1eac52129573.5740193542955543393@ghanshyammann.com> ---- On Tue, 08 Dec 2020 10:16:33 -0600 Ghanshyam Mann wrote ---- > ---- On Tue, 08 Dec 2020 08:35:44 -0600 Radosław Piliszek wrote ---- > > TL;DR > > > > After 2021 there will be no CentOS 8 point releases. > > The focus is on CentOS 8 Stream. > > > > For more information, please read [1]. > > > > As the QA team, we have Stream testing proposed already [2]. > > We actually had Victoria from Manila ask us about it quite a while ago. > > > > Ghanshyam promised to coordinate this topic with the TC. > > Added in TC's next meeting agenda happening on Thursday, Dec 3rd. It's on Dec 10th, 15 UTC: http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann > > Please join us to discuss the plan and volunteer for help as this will require more work > as stable branches since stable/ussuri might need updates. > > - https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions > > -gmann > > > > > As the Kolla team, we made switch to CentOS 8 Stream a priority of > > Wallaby and we will likely discuss this option for stable branches > > which will outlive the 2021 deadline. > > > > PS: Don't kill me. I'm just passing the information on. :-) > > > > [1] https://lists.centos.org/pipermail/centos-devel/2020-December/075451.html > > [2] https://review.opendev.org/c/openstack/devstack/+/759122 > > > > Kind regards, > > -yoctozepto > > > > > From hyunwoo18 at gmail.com Mon Dec 7 22:14:24 2020 From: hyunwoo18 at gmail.com (Hyunwoo KIM) Date: Mon, 7 Dec 2020 16:14:24 -0600 Subject: [ops] In a compute node, outbound traffic blocked when a VM running In-Reply-To: <20201207213312.ipkm6xj2n7i6gzpu@p1.localdomain> References: <20201207213312.ipkm6xj2n7i6gzpu@p1.localdomain> Message-ID: Hi, Thanks for the messages. > You should check where exactly Your packets are dropped. I am communicating with our Linux systems engineers to learn how I can find out which rule is dropping the packets. In the meantime, which method would you recommend for this purpose? > Also, You didn't tell us what is the type of the Neutron network to which > Your VM is plugged and The VM is plugged to a provider network. > how bridges are done on Your compute node. We are using Linux Bridges. Are there any more/other information that I should provide? Thanks, Hyunwoo Application developer at Fermilab. On Mon, Dec 7, 2020 at 3:33 PM Slawek Kaplonski wrote: > Hi, > > On Mon, Dec 07, 2020 at 10:11:19AM -0600, Hyunwoo KIM wrote: > > Summary of the problem > > > > This problem is in a compute node, not in a VM. > > > > Once a VM is running in a compute node, > > > > all outbound connections in a compute node (not VM) are blocked. > > > > For example: > > > > # telnet www.google.com 80 > > > > Trying 172.217.5.4... > > > > > > > > Technical Details: > > > > We only use provider network. 
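Regarding the question above about how to find which rule drops the packets, one quick method is to watch the per-rule packet counters while reproducing the failure. The commands below are standard iptables usage; the chain name and rule position in the LOG example are placeholders taken from the listing and would need adjusting.

    # Zero the counters, reproduce the failing connection, then look for
    # DROP rules whose packet count increased.
    iptables -Z
    iptables -L -n -v --line-numbers

    # Optionally insert a temporary LOG rule just before a suspect DROP so
    # matching packets show up in the kernel log (example chain/position).
    iptables -I neutron-linuxbri-o8 10 -j LOG --log-prefix "nl-o8-drop: "

Any temporary LOG rules should be removed afterwards, since neutron-linuxbridge-agent may also rewrite these chains on its own.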
> > > > These 4 services are running in each compute node: > > > > - neutron-linuxbridge-agent.service > > > > - neutron-dhcp-agent.service > > > > - neutron-metadata-agent.service > > > > - openstack-nova-compute.service > > > > > > > > Detailed description of the problem: > > > > > > In a compute node, the following is the result of iptables -L when no VM > is > > running: > > > > > > > > > > Chain INPUT (policy ACCEPT) > > > > target prot opt source destination > > > > neutron-linuxbri-INPUT all -- anywhere anywhere > > > > And our usual rules > > > > > > Chain FORWARD (policy ACCEPT) > > > > target prot opt source destination > > > > neutron-filter-top all -- anywhere anywhere > > > > neutron-linuxbri-FORWARD all -- anywhere anywhere > > > > > > Chain OUTPUT (policy ACCEPT) > > > > target prot opt source destination > > > > neutron-filter-top all -- anywhere anywhere > > > > neutron-linuxbri-OUTPUT all -- anywhere anywhere > > > > > > Chain neutron-filter-top (2 references) > > > > target prot opt source destination > > > > neutron-linuxbri-local all -- anywhere anywhere > > > > > > Chain neutron-linuxbri-FORWARD (1 references) > > > > target prot opt source destination > > > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out tapb > > --physdev-is-bridged > > > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tapb > > --physdev-is-bridged > > > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out tap9 > > --physdev-is-bridged > > > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tap9 > > --physdev-is-bridged > > > > > > Chain neutron-linuxbri-INPUT (1 references) > > > > target prot opt source destination > > > > > > Chain neutron-linuxbri-OUTPUT (1 references) > > > > target prot opt source destination > > > > > > Chain neutron-linuxbri-local (1 references) > > > > target prot opt source destination > > > > > > Chain neutron-linuxbri-sg-chain (0 references) > > > > target prot opt source destination > > > > ACCEPT all -- anywhere anywhere > > > > > > Chain neutron-linuxbri-sg-fallback (0 references) > > > > target prot opt source destination > > > > DROP all -- anywhere anywhere > > > > > > > > > > > > In the same compute node, when a VM is running, > > > > the following is the result of iptables -L: > > > > > > > > > > > > Chain INPUT (policy ACCEPT) > > > > target prot opt source destination > > > > neutron-linuxbri-INPUT all -- anywhere anywhere > > > > And our usual rules > > > > > > Chain FORWARD (policy ACCEPT) > > > > target prot opt source destination > > > > neutron-filter-top all -- anywhere anywhere > > > > neutron-linuxbri-FORWARD all -- anywhere anywhere > > > > > > Chain OUTPUT (policy ACCEPT) > > > > target prot opt source destination > > > > neutron-filter-top all -- anywhere anywhere > > > > neutron-linuxbri-OUTPUT all -- anywhere anywhere > > > > > > Chain neutron-filter-top (2 references) > > > > target prot opt source destination > > > > neutron-linuxbri-local all -- anywhere anywhere > > > > > > Chain neutron-linuxbri-FORWARD (1 references) > > > > target prot opt source destination > > > > neutron-linuxbri-sg-chain all -- anywhere anywhere PHYSDEV match > > --physdev-out tap8 --physdev-is-bridged > > > > neutron-linuxbri-sg-chain all -- anywhere anywhere PHYSDEV match > > --physdev-in tap8 --physdev-is-bridged > > > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-out tapb > > --physdev-is-bridged > > > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tapb > > --physdev-is-bridged > > > > ACCEPT 
all -- anywhere anywhere PHYSDEV match --physdev-out tap9 > > --physdev-is-bridged > > > > ACCEPT all -- anywhere anywhere PHYSDEV match --physdev-in tap9 > > --physdev-is-bridged > > > > > > Chain neutron-linuxbri-INPUT (1 references) > > > > target prot opt source destination > > > > neutron-linuxbri-o8 all -- anywhere anywhere PHYSDEV match --physdev-in > > tap8 --physdev-is-bridged > > > > > > Chain neutron-linuxbri-OUTPUT (1 references) > > > > target prot opt source destination > > > > > > Chain neutron-linuxbri-i8 (1 references) > > > > target prot opt source destination > > > > RETURN all -- anywhere anywhere state RELATED,ESTABLISHED > > > > RETURN udp -- anywhere fermicloud248.fnal.gov udp spt:bootps > > dpt:bootpc > > > > RETURN udp -- anywhere 255.255.255.255 udp spt:bootps dpt:bootpc > > > > RETURN icmp -- anywhere anywhere > > > > RETURN tcp -- fermilab-net.fnal.gov/16 anywhere tcp dpt:ssh > > > > RETURN all -- anywhere anywhere match-set > > NIPv41d69ba3c-68e3-414f-8f1b- src > > > > DROP all -- anywhere anywhere state INVALID > > > > neutron-linuxbri-sg-fallback all -- anywhere anywhere > > > > > > Chain neutron-linuxbri-local (1 references) > > > > target prot opt source destination > > > > > > Chain neutron-linuxbri-o8 (2 references) > > > > target prot opt source destination > > > > RETURN udp -- default 255.255.255.255 udp spt:bootpc > > dpt:bootps > > > > neutron-linuxbri-s8 all -- anywhere anywhere > > > > RETURN udp -- anywhere anywhere udp spt:bootpc dpt:bootps > > > > DROP udp -- anywhere anywhere udp spt:bootps dpt:bootpc > > > > RETURN all -- anywhere anywhere state RELATED,ESTABLISHED > > > > RETURN tcp -- anywhere anywhere tcp dpt:https > > > > RETURN all -- anywhere anywhere > > > > RETURN tcp -- anywhere anywhere tcp dpt:http > > > > DROP all -- anywhere anywhere state INVALID > > > > neutron-linuxbri-sg-fallback all -- anywhere anywhere > > > > > > Chain neutron-linuxbri-s8 (1 references) > > > > target prot opt source destination > > > > RETURN all -- fermicloud248.fnal.gov anywhere MAC FA:16: > > > > DROP all -- anywhere anywhere > > > > > > > > Chain neutron-linuxbri-sg-chain (2 references) > > > > target prot opt source destination > > > > neutron-linuxbri-i8 all -- anywhere anywhere PHYSDEV match > --physdev-out > > tap8 --physdev-is-bridged > > > > neutron-linuxbri-o8 all -- anywhere anywhere PHYSDEV match > --physdev-in tap8 > > --physdev-is-bridged > > > > ACCEPT all -- anywhere anywhere > > > > > > Chain neutron-linuxbri-sg-fallback (2 references) > > > > target prot opt source destination > > > > DROP all -- anywhere anywhere > > > > > > > > > > > > Let me summarize the differences from when no VM running: > > > > > > Chain INPUT : no change > > > > Chain FORWARD: no change > > > > Chain OUTPUT : no change > > > > Chain neutron-filter-top: no change > > > > > > Chain neutron-linuxbri-FORWARD: Two new rules are added > > > > neutron-linuxbri-sg-chain > > > > neutron-linuxbri-sg-chain > > > > > > Chain neutron-linuxbri-INPUT: One new rule is added > > > > neutron-linuxbri-o8ae816b0-f > > > > > > Chain neutron-linuxbri-sg-chain: Two new rules are added > > > > neutron-linuxbri-i8 > > > > neutron-linuxbri-o8 > > Those are chains which represents rules from Your Security Group used by a > VM > > > > > > > Chain neutron-linuxbri-OUTPUT: no change > > > > Chain neutron-linuxbri-local: no change > > > > Chain neutron-linuxbri-sg-fallback: no change > > > > > > Chain neutron-linuxbri-i8: A new chain with multiple rules > > > > Chain neutron-linuxbri-o8: A 
new chain with multiple rules > > In those 2 chains there are ingress and egress SG rules implemented > > > > > Chain neutron-linuxbri-s8: A new chain with multiple rules > > And in this one there are antispoofing rules for Your port added. > > > > > > > > > But now a problem arises here: > > > > All outbound connections are blocked (remember this is in a compute node, > > not VM): > > > > For example: > > > > # telnet www.google.com 80 > > > > Trying 172.217.5.4... > > > > > > When there isn't any VM running, We don't see this problem. > > > > > > I was wondering if I needed to create a new security group rule for the > > port 80 (for example) > > > > but that didn't solve the issue. > > > > > > Any technical advice will be appreciated, > > You should check where exactly Your packets are dropped. > Also, You didn't tell us what is the type of the Neutron network to which > Your VM is plugged and how bridges are done on Your compute node. > > > > > Thanks, > > > > Hyunwoo > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Dec 8 17:12:48 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 8 Dec 2020 17:12:48 +0000 Subject: [infra][magnum][ci] Issues installing bashate and coverage In-Reply-To: References: Message-ID: <20201208171248.6dffedoymqj7dgkr@yuggoth.org> On 2020-12-08 13:36:09 +0100 (+0100), Spyros Trigazis wrote: > openstack-tox-lower-constraints fails for bashate and coverage. > (Maybe more, I bumped bashate and it failed for coverage. I don;t > want to waste more resources on our CI) > eg https://review.opendev.org/c/openstack/magnum/+/765881 > https://review.opendev.org/c/openstack/magnum/+/765979 > > Do we miss something? Pip 20.3.0, released 8 days ago, turned on a new and much more thorough dependency resolver. Earlier versions of pip did not try particularly hard to make sure the dependencies claimed by packages were all satisfied. Virtualenv 20.2.2 released yesterday and increased the version of pip it's vendoring to a version which uses the new solver as well. These changes mean that latent version conflicts are now being correctly identified as bugs, and these jobs will do a far better job of actually confirming the declared versions of dependencies are able to be tested. One thing which looks really weird and completely contradictory to me is that your lower-constraints job on change 765881 is applying both upper and lower constraints lists to the pip install command. Maybe the lower constraints list is expected to override the earlier upper constraints, but is that really going to represent a compatible set? That aside, trying to reproduce locally I run into yet a third error: Could not find a version that satisfies the requirement warlock!=1.3.0,<2,>=1.0.1 (from python-glanceclient) And indeed, python-glanceclient insists warlock 1.3.0 should be skipped, while magnum's lower-constraints.txt says you must install warlock==1.3.0 so that's a clear contradiction as well. My recommendation is to work on reproducing this locally first and play a bit of whack-a-mole with the entries in your lower-constraints.txt to find versions of things which will actually be coinstallable with current versions of pip. 
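For the specific warlock conflict above, one possible fix (assuming magnum itself has no hard need for exactly 1.3.0) is simply to move the lower-constraints entry to a release that python-glanceclient's warlock!=1.3.0,<2,>=1.0.1 specifier accepts, for example:

    # lower-constraints.txt, illustrative change only
    -warlock==1.3.0
    +warlock==1.3.1

The same pattern applies to any other entry the new resolver complains about: adjust the pinned version until the whole set is mutually satisfiable.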
You don't need to run the full tox testenv, just try installing your constrainted deps into a venv with upgraded pip like so: python3.8 -m venv foo foo/bin/pip install -U pip foo/bin/pip install -c lower-constraints.txt \ -r test-requirements.txt -r requirements.txt You'll also likely want to delete and recreate the venv each time you try, since pip will now also try to take the requirements of already installed packages into account, and that might further change the behavior you see. Hope that helps! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From whayutin at redhat.com Tue Dec 8 18:37:29 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 8 Dec 2020 11:37:29 -0700 Subject: [tripleo][ci] update - CI jobs failures on RDO and Software Factory Project In-Reply-To: References: <20201208132529.3mctmb5ougn6jzpo@lyarwood-laptop.usersys.redhat.com> Message-ID: forwarded from rdo.... Hi, The CI jobs that are running on Vexxhost should be fine now. Regards, Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Dec 8 19:34:36 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 8 Dec 2020 11:34:36 -0800 Subject: [ironic] No Weekly Meeting - December 21st, 28th, and January 4th. Message-ID: Greetings fellow humans! As we have in years past, we are approaching the end of year holiday season where people begin to disappear to use time off, rest, and relax their brains. As such, we're cancelling[0] our December 21st, 28th, and January 4th weekly meetings. As in past years, contributors tend to remain connected to IRC, but expect people to be heads-down if they are working, so replies will be sporadic. This also means December 14th will be our last weekly meeting of 2020. In accordance with the prophecy of end of year time off, we'll be in auto-pilot of sorts during the weeks we do not meet. Contributors should feel free to update the review priorities section on the etherpad[1] and look to it for what is going on or what needs additional reviews/feedback. One bit of warning, CI breakages during this time of year are common. Single core approvals to fix CI issues are encouraged in this case. Have a wonderful remainder of the year everyone! -Julia [0]: http://eavesdrop.openstack.org/meetings/ironic/2020/ironic.2020-12-07-15.00.log.html [1]: https://etherpad.opendev.org/p/IronicWhiteBoard From honjo.rikimaru at ntt-tx.co.jp Wed Dec 9 09:05:30 2020 From: honjo.rikimaru at ntt-tx.co.jp (Rikimaru Honjo) Date: Wed, 09 Dec 2020 18:05:30 +0900 Subject: [Keystone]Question about access rules for identity API In-Reply-To: <38ee02a3-cb58-9231-1717-7b284a373a38@ntt-tx.co.jp_1> References: <38ee02a3-cb58-9231-1717-7b284a373a38@ntt-tx.co.jp_1> Message-ID: Sorry, On 2020/12/08 18:36, Rikimaru Honjo wrote: > Hi, > > Are the access rules not applied for identity API? > > I created an application credential with a access rule that allows > only project list API. > But the application credential allows all identity APIs. > > Is this correct? Are there any documents that explains about this? > > By the way, the application credential denied all other service's > APIs. I think this behavior is correct. > > I use OpenStack Ubuntu. The last sentence was typo. "I use OpenStack Ussuri." 
> > Best regards, -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at ntt-tx.co.jp From zigo at debian.org Wed Dec 9 09:50:04 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 9 Dec 2020 10:50:04 +0100 Subject: [neutron] Status on BGP support when using OVN in Neutron In-Reply-To: References: Message-ID: <89aacc2d-2fc1-e9b2-2398-4000133f134d@debian.org> On 12/8/20 4:12 PM, Tim Sæterøy wrote: > Hi, > > I'm trying to explore BGP support when using OVN in Neutron, but > haven't been able to find much info on the topic. According to [1] > committed in march 2020, OVN seems to lack similar functionality as is > seen in ML2/OVS, but there's no further references: > >> Currently ML2/OVS supports making a tenant subnet routable via BGP, >> and can announce host routes for both floating and fixed IP >> addresses. > > I've tried searching for 'bgp' on Neutrons launchpad for issues tagged > with 'ovn', in the 'ovn-org/ovn' repo or on docs.ovn.org, but I'm > getting blanks. > > Does anyone happen to know where OVN stands on this, and especially in > the context of Neutron? Or perhaps able to point me in the right > direction where I can find out more? Thanks. > > [1]: https://docs.openstack.org/neutron/victoria/ovn/gaps.html > > -- > Tim Hi, As much as I know, there's no upstream (ie: OpenStack without downstream modification) for BGP-to-the-host support, appart from the non-mainstream callico driver, which I haven't tested. However, I have been able to do a kind of "bgp-to-the-rack" where the L2 networking is limited to segments on a single rack. This can be done adding this patch to Neutron: https://review.opendev.org/c/openstack/neutron/+/669395 As you can see, I did a few iteration of this patch, but now, I've been told to add more db tests, which I don't have the skills for. I asked for more help from the Neutron team, but so far, everyone seems busy. However, I did test this, and it works kind of well. :) If you wish to help to get this patch going in, please do! :) Note that this doesn't use OVN. I have no idea if it would work with OVN or not (I never did such a setup, only used plain OVS). Now, this doesn't replace a full BGP-to-the-host setup, but hopefully, the Neutron team will be able to have this done "soon" (whatever that means), as interest for it is growing. Cheers, Thomas Goirand (zigo) From dalvarez at redhat.com Wed Dec 9 12:33:09 2020 From: dalvarez at redhat.com (Daniel Alvarez Sanchez) Date: Wed, 9 Dec 2020 13:33:09 +0100 Subject: [neutron] Status on BGP support when using OVN in Neutron In-Reply-To: <89aacc2d-2fc1-e9b2-2398-4000133f134d@debian.org> References: <89aacc2d-2fc1-e9b2-2398-4000133f134d@debian.org> Message-ID: Hi all, On Wed, Dec 9, 2020 at 10:56 AM Thomas Goirand wrote: > On 12/8/20 4:12 PM, Tim Sæterøy wrote: > > Hi, > > > > I'm trying to explore BGP support when using OVN in Neutron, but > > haven't been able to find much info on the topic. According to [1] > > committed in march 2020, OVN seems to lack similar functionality as is > > seen in ML2/OVS, but there's no further references: > > > >> Currently ML2/OVS supports making a tenant subnet routable via BGP, > >> and can announce host routes for both floating and fixed IP > >> addresses. > > > > I've tried searching for 'bgp' on Neutrons launchpad for issues tagged > > with 'ovn', in the 'ovn-org/ovn' repo or on docs.ovn.org, but I'm > > getting blanks. > > > > Does anyone happen to know where OVN stands on this, and especially in > > the context of Neutron? 
Or perhaps able to point me in the right > > direction where I can find out more? Thanks. Precisely, there is a talk scheduled for today in the OVS/OVN con about introducing BGP support in OVN [0][1] by Nutanix folks. This is more about EVPN and advertising /32 (or /128) to the router. So far, there is nothing specific in Neutron for the ML2/OVN driver. However, neutron-dynamic-routing project would work for you with ML2/OVN to advertise tenant networks but advertising FIPs is currently relying on having 'agent gateway' ports which consume one IP per L3 agent per compute node. This is something that ML2/OVN doesn't do so currently there's no way with neutron-dynamic-routing to advertise FIP host routes directly to the compute node. Can you describe the use cases that you'd have in mind or you just wanted to know the current status? Thanks a lot! daniel [0] https://www.openvswitch.org/support/ovscon2020/ [1] http://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf > > > > > [1]: https://docs.openstack.org/neutron/victoria/ovn/gaps.html > > > > -- > > Tim > > Hi, > > As much as I know, there's no upstream (ie: OpenStack without downstream > modification) for BGP-to-the-host support, appart from the > non-mainstream callico driver, which I haven't tested. > > However, I have been able to do a kind of "bgp-to-the-rack" where the L2 > networking is limited to segments on a single rack. This can be done > adding this patch to Neutron: > > https://review.opendev.org/c/openstack/neutron/+/669395 > > As you can see, I did a few iteration of this patch, but now, I've been > told to add more db tests, which I don't have the skills for. I asked > for more help from the Neutron team, but so far, everyone seems busy. > However, I did test this, and it works kind of well. :) > > If you wish to help to get this patch going in, please do! :) > > Note that this doesn't use OVN. I have no idea if it would work with OVN > or not (I never did such a setup, only used plain OVS). > > Now, this doesn't replace a full BGP-to-the-host setup, but hopefully, > the Neutron team will be able to have this done "soon" (whatever that > means), as interest for it is growing. > > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalvarez at redhat.com Wed Dec 9 12:34:04 2020 From: dalvarez at redhat.com (Daniel Alvarez Sanchez) Date: Wed, 9 Dec 2020 13:34:04 +0100 Subject: [neutron] Status on BGP support when using OVN in Neutron In-Reply-To: References: <89aacc2d-2fc1-e9b2-2398-4000133f134d@debian.org> Message-ID: On Wed, Dec 9, 2020 at 1:33 PM Daniel Alvarez Sanchez wrote: > Hi all, > > On Wed, Dec 9, 2020 at 10:56 AM Thomas Goirand wrote: > >> On 12/8/20 4:12 PM, Tim Sæterøy wrote: >> > Hi, >> > >> > I'm trying to explore BGP support when using OVN in Neutron, but >> > haven't been able to find much info on the topic. According to [1] >> > committed in march 2020, OVN seems to lack similar functionality as is >> > seen in ML2/OVS, but there's no further references: >> > >> >> Currently ML2/OVS supports making a tenant subnet routable via BGP, >> >> and can announce host routes for both floating and fixed IP >> >> addresses. >> > >> > I've tried searching for 'bgp' on Neutrons launchpad for issues tagged >> > with 'ovn', in the 'ovn-org/ovn' repo or on docs.ovn.org, but I'm >> > getting blanks. >> > >> > Does anyone happen to know where OVN stands on this, and especially in >> > the context of Neutron? 
Or perhaps able to point me in the right >> > direction where I can find out more? Thanks. > > > Precisely, there is a talk scheduled for today in the OVS/OVN con about > introducing BGP support in OVN [0][1] by Nutanix folks. This is more about > EVPN and advertising /32 (or /128) to the router. > > So far, there is nothing specific in Neutron for the ML2/OVN driver. > However, neutron-dynamic-routing project would work for you with ML2/OVN to > advertise tenant networks but advertising FIPs is currently relying on > having 'agent gateway' ports which consume one IP per L3 agent per compute > node. > s/per L3 agent per compute node/per L3 agent per provider network > This is something that ML2/OVN doesn't do so currently there's no way with > neutron-dynamic-routing to advertise FIP host routes directly to the > compute node. > > Can you describe the use cases that you'd have in mind or you just wanted > to know the current status? > > Thanks a lot! > daniel > > [0] https://www.openvswitch.org/support/ovscon2020/ > [1] > http://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf > > >> >> > >> > [1]: https://docs.openstack.org/neutron/victoria/ovn/gaps.html >> > >> > -- >> > Tim >> >> Hi, >> >> As much as I know, there's no upstream (ie: OpenStack without downstream >> modification) for BGP-to-the-host support, appart from the >> non-mainstream callico driver, which I haven't tested. >> >> However, I have been able to do a kind of "bgp-to-the-rack" where the L2 >> networking is limited to segments on a single rack. This can be done >> adding this patch to Neutron: >> >> https://review.opendev.org/c/openstack/neutron/+/669395 >> >> As you can see, I did a few iteration of this patch, but now, I've been >> told to add more db tests, which I don't have the skills for. I asked >> for more help from the Neutron team, but so far, everyone seems busy. >> However, I did test this, and it works kind of well. :) >> >> If you wish to help to get this patch going in, please do! :) >> >> Note that this doesn't use OVN. I have no idea if it would work with OVN >> or not (I never did such a setup, only used plain OVS). >> >> Now, this doesn't replace a full BGP-to-the-host setup, but hopefully, >> the Neutron team will be able to have this done "soon" (whatever that >> means), as interest for it is growing. >> >> Cheers, >> >> Thomas Goirand (zigo) >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Wed Dec 9 13:59:04 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 9 Dec 2020 13:59:04 +0000 Subject: [all][stable] bandit 1.6.3 drops py2 support Message-ID: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> Hello all, $subject [1][2] is breaking various <= stable/train jobs where we attempt to pull bandit in while still using py2. This has been reported upstream and it looks like the 1.6.3 release may end up being yanked. 
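Where individual projects list bandit directly in their own test-requirements.txt, the cap can also be expressed with an environment marker so that only Python 2 environments are affected; the bounds below are illustrative (1.6.2 being the last release that still supports Python 2):

    bandit>=1.1.0,<1.6.3;python_version<'3.0'
    bandit>=1.1.0;python_version>='3.0'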
If it isn't I've proposed the following requirements change to try to cap bandit to the 1.6.2 release, assuming this is safe to do on stable: Cap bandit at 1.6.2 when using py2 https://review.opendev.org/c/openstack/requirements/+/766170 Cheers, [1] https://github.com/PyCQA/bandit/releases/tag/1.6.3 [2] https://github.com/PyCQA/bandit/pull/615 [3] https://github.com/PyCQA/bandit/issues/663 -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From C-Albert.Braden at charter.com Wed Dec 9 14:23:53 2020 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Wed, 9 Dec 2020 14:23:53 +0000 Subject: [kolla] Ussuri Horizon containers fail after Centos 8.3 release Message-ID: <0fa875fd0a90458ca2216c333ed855f4@NCEMEXGP009.CORP.CHARTERCOM.com> We are running Train on Centos 7, but I'm experimenting with Adjutant under Ussuri on Centos 8. Until yesterday my Ussuri Horizon containers were ok, but after the Centos 8.3 release they started failing with "{"log":"CommandError: An error occurred during rendering /var/lib/kolla/venv/lib/python3.6/site-packages/openstack_dashboard/templates/horizon/_scripts.html: Couldn't find any precompiler in COMPRESS_PRECOMPILERS setting for mimetype '\\'text/javascript\\''.\n","stream":"stderr","time":"2020-12-08T20:08:03.267208752Z"}" I dug through the Centos release notes and didn't find anything there. [1] I already have the "lowercasing" issue fixed [2]. I see the javascript reference in _scripts.html [3] but it's not obvious what is going wrong. How can I work around this? Is there a variable I can set to build the container on Centos 8.2? Or is there a better way? More logs at [4] [1] https://wiki.centos.org/Manuals/ReleaseNotes/CentOS8.2011 [2] https://review.opendev.org/c/openstack/kolla/+/765854 [3] http://paste.openstack.org/show/800896/ [4] http://paste.openstack.org/show/800897/ I apologize for the nonsense below. So far I have not been able to stop it from being attached to my external emails. I'm working on it. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Dec 9 14:40:06 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 9 Dec 2020 14:40:06 +0000 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> Message-ID: <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> On 2020-12-09 13:59:04 +0000 (+0000), Lee Yarwood wrote: > Hello all, > > $subject [1][2] is breaking various <= stable/train jobs where we > attempt to pull bandit in while still using py2. This has been reported > upstream and it looks like the 1.6.3 release may end up being yanked. 
> > If it isn't I've proposed the following requirements change to try to > cap bandit to the 1.6.2 release, assuming this is safe to do on stable: > > Cap bandit at 1.6.2 when using py2 > https://review.opendev.org/c/openstack/requirements/+/766170 [...] It's typically recommended to pin static analysis tools strictly less than the next major release in (test-)requirements lists of individual projects. Part of why it's blacklisted in the global requirements repository is so that the central upper-constraints.txt won't override project level decisions on what versions of these tools to run. Granted, it would also have made more sense if bandit uprevved to 2.0.0 when dropping Python 2.x support, so that in-project requirements in the form bandit<2 could have prevented the impact. But all that's to say, pinning bandit in stable branches of individual projects using it would be the more expected fix here. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From lyarwood at redhat.com Wed Dec 9 15:41:33 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 9 Dec 2020 15:41:33 +0000 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> Message-ID: <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> On 09-12-20 14:40:06, Jeremy Stanley wrote: > On 2020-12-09 13:59:04 +0000 (+0000), Lee Yarwood wrote: > > Hello all, > > > > $subject [1][2] is breaking various <= stable/train jobs where we > > attempt to pull bandit in while still using py2. This has been reported > > upstream and it looks like the 1.6.3 release may end up being yanked. > > > > If it isn't I've proposed the following requirements change to try to > > cap bandit to the 1.6.2 release, assuming this is safe to do on stable: > > > > Cap bandit at 1.6.2 when using py2 > > https://review.opendev.org/c/openstack/requirements/+/766170 > [...] > > It's typically recommended to pin static analysis tools strictly > less than the next major release in (test-)requirements lists of > individual projects. Part of why it's blacklisted in the global > requirements repository is so that the central upper-constraints.txt > won't override project level decisions on what versions of these > tools to run. Granted, it would also have made more sense if bandit > uprevved to 2.0.0 when dropping Python 2.x support, so that > in-project requirements in the form bandit<2 could have prevented > the impact. But all that's to say, pinning bandit in stable branches > of individual projects using it would be the more expected fix here. ACK thanks Jeremy, I had started that below before going back to an earlier attempt with requirements. I'll reopen these now and test things in the Nova change. https://review.opendev.org/q/topic:bug/1907438 Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From DHilsbos at performair.com Wed Dec 9 15:48:17 2020 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Wed, 9 Dec 2020 15:48:17 +0000 Subject: [all]OpenStack + CentOS Message-ID: <0670B960225633449A24709C291A52524F9D2FC5@COM01.performair.local> All; As you may or may not know; yesterday morning RedHat announced the end of CentOS as a rebuild distribution[1]. "CentOS" will be retired in favor of the recently announced "CentOS Stream." Can OpenStack be installed on CentOS Stream? Since CentOS Stream is currently at 8, the question really is: Can OpenStack Victoria be installed on CentOS Stream 8? How about Ussuri? Thank you, Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com [1: https://blog.centos.org/2020/12/future-is-centos-stream/] Dominic L. Hilsbos, MBA Director - Information Technology [Perform Air International, Inc.] DHilsbos at PerformAir.com 300 S. Hamilton Pl. Gilbert, AZ 85233 Phone: (480) 610-3500 Fax: (480) 610-3501 www.PerformAir.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 17150 bytes Desc: image001.png URL: From owalsh at redhat.com Wed Dec 9 16:06:45 2020 From: owalsh at redhat.com (Oliver Walsh) Date: Wed, 9 Dec 2020 16:06:45 +0000 Subject: [rdo-users] [tripleo] Deployment update (node addition) after changing aggregate groups/zones In-Reply-To: References: Message-ID: Ok, that will not work. If you want to manually configure the AZs after deployment then just stop using [1]. Cheers, Ollie On Tue, 8 Dec 2020 at 06:59, Ruslanas Gžibovskis wrote: > The first deployment set all computes to zone named according to stack > name, but later I have created Alpha01, Alpha02. And set according in > node-info.yaml file. But still, it fails with message, that some compute is > already present in zone Alpha01... like it cannot create such zone. And I > say, yes captain, I know, I have created and added YOU into that zone... > Maybe I need to do some "tweaks" to DB? just now thought about it. > > On Mon, 7 Dec 2020 at 20:02, Oliver Walsh wrote: > >> Hi, >> >> You will need to manually remove the hosts from the old zone ("Alpha01") >> before adding them to the new zone. A host can only belong to one AZ. >> >> Thanks, >> Ollie >> >> On Mon, 7 Dec 2020 at 11:32, Ruslanas Gžibovskis >> wrote: >> >>> anyone know, how to bypass aggregation group error? thank you. >>> >>> On Sat, 5 Dec 2020 at 18:08, Ruslanas Gžibovskis >>> wrote: >>> >>>> Hi all, >>>> >>>> Any thoughts on this one? >>>> >>>> >>>>> Hi all. >>>>>> >>>>>> After changing the host aggregate group and zone, I cannot run >>>>>> OpenStack deploy command successfully again, even after updating deployment >>>>>> environment files according to my setup. >>>>>> >>>>>> I receive error bigger one in [0]: >>>>>> 2020-12-02 10:16:18.532419 | 52540000-0001-cf95-492f-0000000003ca | >>>>>> FATAL | Nova: Manage aggregate and availability zone and add hosts to >>>>>> the zone | undercloud | error={"changed": false, "msg": "ConflictException: >>>>>> 409: Client Error for url: >>>>>> http://10.120.129.199:8774/v2.1/os-aggregates/1/action, Cannot add >>>>>> host to aggregate 1. 
Reason: One or more hosts already in availability >>>>>> zone(s) ['Alpha01']."} >>>>>> >>>>>> I was following this link [1] instructions for "Configuring >>>>>> Availability Zones (AZ)" steps to modify with OpenStack commands. And zone >>>>>> was created successfully, but when I needed to add additional nodes, >>>>>> executed deployment again with increased numbers it was complaining about >>>>>> an incorrect aggregate zone, and now it is complaining about not empty zone >>>>>> with error [0] mentioned above. I have added aggregate zones into >>>>>> deployment files even role file... any ideas? >>>>>> >>>>>> Also, I think, this should be mentioned, that added it after install, >>>>>> you lose the possibility to update using tripleo tool and you will need to >>>>>> modify environment files with. >>>>>> >>>>>> >>>>>> >>>>>> [0] http://paste.openstack.org/show/800622/ >>>>>> [1] >>>>>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html#configuring-availability-zones-az >>>>>> >>>>>> >>>>>> >>>> >>> >>> -- >>> Ruslanas Gžibovskis >>> +370 6030 7030 >>> _______________________________________________ >>> users mailing list >>> users at lists.rdoproject.org >>> http://lists.rdoproject.org/mailman/listinfo/users >>> >>> To unsubscribe: users-unsubscribe at lists.rdoproject.org >>> >> > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Dec 9 16:09:15 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 9 Dec 2020 17:09:15 +0100 Subject: [kolla] Ussuri Horizon containers fail after Centos 8.3 release In-Reply-To: <0fa875fd0a90458ca2216c333ed855f4@NCEMEXGP009.CORP.CHARTERCOM.com> References: <0fa875fd0a90458ca2216c333ed855f4@NCEMEXGP009.CORP.CHARTERCOM.com> Message-ID: On Wed, Dec 9, 2020 at 3:26 PM Braden, Albert wrote: > > We are running Train on Centos 7, but I’m experimenting with Adjutant under Ussuri on Centos 8. Until yesterday my Ussuri Horizon containers were ok, but after the Centos 8.3 release they started failing with “{"log":"CommandError: An error occurred during rendering /var/lib/kolla/venv/lib/python3.6/site-packages/openstack_dashboard/templates/horizon/_scripts.html: Couldn't find any precompiler in COMPRESS_PRECOMPILERS setting for mimetype '\\'text/javascript\\''.\n","stream":"stderr","time":"2020-12-08T20:08:03.267208752Z"}” > > Are they failing as in "not starting" or fail to render properly to browser? -yoctozepto From juliaashleykreger at gmail.com Wed Dec 9 16:12:38 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 9 Dec 2020 08:12:38 -0800 Subject: [all]OpenStack + CentOS In-Reply-To: <0670B960225633449A24709C291A52524F9D2FC5@COM01.performair.local> References: <0670B960225633449A24709C291A52524F9D2FC5@COM01.performair.local> Message-ID: I suspect it is too early to tell. I know some projects have already started switching over continuous integration testing jobs over to Centos Stream where applicable, however, that is largely going to be only for new releases moving forward at this immediate time. Some projects are also fighting issues in python package resolution at this time, so they may not even have gotten to the point of evaluating Centos Stream and the resulting impact. On Wed, Dec 9, 2020 at 7:51 AM wrote: > All; > > > > As you may or may not know; yesterday morning RedHat announced the end of > CentOS as a rebuild distribution[1]. 
"CentOS" will be retired in favor of > the recently announced "CentOS Stream." > > > > Can OpenStack be installed on CentOS Stream? > > > > Since CentOS Stream is currently at 8, the question really is: Can > OpenStack Victoria be installed on CentOS Stream 8? How about Ussuri? > > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Director - Information Technology > > Perform Air International Inc. > > DHilsbos at PerformAir.com > > www.PerformAir.com > > > > [1: https://blog.centos.org/2020/12/future-is-centos-stream/] > > > > > > Dominic L. Hilsbos, MBA > > Director – Information Technology > > [image: Perform Air International, Inc.] > > DHilsbos at PerformAir.com > > 300 S. Hamilton Pl. > > Gilbert, AZ 85233 > > Phone: (480) 610-3500 > > Fax: (480) 610-3501 > > www.PerformAir.com > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 17150 bytes Desc: not available URL: From aschultz at redhat.com Wed Dec 9 16:19:24 2020 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 9 Dec 2020 09:19:24 -0700 Subject: [all]OpenStack + CentOS In-Reply-To: <0670B960225633449A24709C291A52524F9D2FC5@COM01.performair.local> References: <0670B960225633449A24709C291A52524F9D2FC5@COM01.performair.local> Message-ID: On Wed, Dec 9, 2020 at 8:58 AM wrote: > All; > > > > As you may or may not know; yesterday morning RedHat announced the end of > CentOS as a rebuild distribution[1]. "CentOS" will be retired in favor of > the recently announced "CentOS Stream." > > > > Can OpenStack be installed on CentOS Stream? > > > > Since CentOS Stream is currently at 8, the question really is: Can > OpenStack Victoria be installed on CentOS Stream 8? How about Ussuri? > > >From a TripleO perspective we have tested it in CI and it has worked fairly consistently against master[0]. We will be working to align on Stream in the current release so if you are using RDO + Stream it should work. If it doesn't, it's something we will be addressing. I cannot vouch for the cloud sig version of OpenStack that comes straight from CentOS but that should also work and will likely get any fixes in the near future if required. CentOS 8 support has been available since Ussuri and we've backported support for Train to ensure that folks can upgrade from CentOS 7 to 8 using Train. CentOS 8 Stream isn't that much different than CentOS 8 classic except it'll likely get newer versions of packaging first but those packages should be backwards compatible. [0] https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-8-standalone-centos8stream-master > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Director - Information Technology > > Perform Air International Inc. > > DHilsbos at PerformAir.com > > www.PerformAir.com > > > > [1: https://blog.centos.org/2020/12/future-is-centos-stream/] > > > > > > Dominic L. Hilsbos, MBA > > Director – Information Technology > > [image: Perform Air International, Inc.] > > DHilsbos at PerformAir.com > > 300 S. Hamilton Pl. > > Gilbert, AZ 85233 > > Phone: (480) 610-3500 > > Fax: (480) 610-3501 > > www.PerformAir.com > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 17150 bytes Desc: not available URL: From ruslanas at lpic.lt Wed Dec 9 16:23:44 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Wed, 9 Dec 2020 18:23:44 +0200 Subject: [rdo-users] [tripleo] Deployment update (node addition) after changing aggregate groups/zones In-Reply-To: References: Message-ID: Hi Oliver, Can you please confirm then the following: Remove NovaAZ from roles file. Remove novaAZ from environment files, and it should be able to update without this step? Maybe i will need to ensure they are in same cell and so on, and assign zone manually. Is it correct? On Wed, 9 Dec 2020, 18:07 Oliver Walsh, wrote: > Ok, that will not work. If you want to manually configure the AZs after > deployment then just stop using [1]. > > Cheers, > Ollie > > On Tue, 8 Dec 2020 at 06:59, Ruslanas Gžibovskis wrote: > >> The first deployment set all computes to zone named according to stack >> name, but later I have created Alpha01, Alpha02. And set according in >> node-info.yaml file. But still, it fails with message, that some compute is >> already present in zone Alpha01... like it cannot create such zone. And I >> say, yes captain, I know, I have created and added YOU into that zone... >> Maybe I need to do some "tweaks" to DB? just now thought about it. >> >> On Mon, 7 Dec 2020 at 20:02, Oliver Walsh wrote: >> >>> Hi, >>> >>> You will need to manually remove the hosts from the old zone ("Alpha01") >>> before adding them to the new zone. A host can only belong to one AZ. >>> >>> Thanks, >>> Ollie >>> >>> On Mon, 7 Dec 2020 at 11:32, Ruslanas Gžibovskis >>> wrote: >>> >>>> anyone know, how to bypass aggregation group error? thank you. >>>> >>>> On Sat, 5 Dec 2020 at 18:08, Ruslanas Gžibovskis >>>> wrote: >>>> >>>>> Hi all, >>>>> >>>>> Any thoughts on this one? >>>>> >>>>> >>>>>> Hi all. >>>>>>> >>>>>>> After changing the host aggregate group and zone, I cannot run >>>>>>> OpenStack deploy command successfully again, even after updating deployment >>>>>>> environment files according to my setup. >>>>>>> >>>>>>> I receive error bigger one in [0]: >>>>>>> 2020-12-02 10:16:18.532419 | 52540000-0001-cf95-492f-0000000003ca | >>>>>>> FATAL | Nova: Manage aggregate and availability zone and add hosts to >>>>>>> the zone | undercloud | error={"changed": false, "msg": "ConflictException: >>>>>>> 409: Client Error for url: >>>>>>> http://10.120.129.199:8774/v2.1/os-aggregates/1/action, Cannot add >>>>>>> host to aggregate 1. Reason: One or more hosts already in availability >>>>>>> zone(s) ['Alpha01']."} >>>>>>> >>>>>>> I was following this link [1] instructions for "Configuring >>>>>>> Availability Zones (AZ)" steps to modify with OpenStack commands. And zone >>>>>>> was created successfully, but when I needed to add additional nodes, >>>>>>> executed deployment again with increased numbers it was complaining about >>>>>>> an incorrect aggregate zone, and now it is complaining about not empty zone >>>>>>> with error [0] mentioned above. I have added aggregate zones into >>>>>>> deployment files even role file... any ideas? >>>>>>> >>>>>>> Also, I think, this should be mentioned, that added it after >>>>>>> install, you lose the possibility to update using tripleo tool and you will >>>>>>> need to modify environment files with. 
>>>>>>> >>>>>>> >>>>>>> >>>>>>> [0] http://paste.openstack.org/show/800622/ >>>>>>> [1] >>>>>>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html#configuring-availability-zones-az >>>>>>> >>>>>>> >>>>>>> >>>>> >>>> >>>> -- >>>> Ruslanas Gžibovskis >>>> +370 6030 7030 >>>> _______________________________________________ >>>> users mailing list >>>> users at lists.rdoproject.org >>>> http://lists.rdoproject.org/mailman/listinfo/users >>>> >>>> To unsubscribe: users-unsubscribe at lists.rdoproject.org >>>> >>> >> >> -- >> Ruslanas Gžibovskis >> +370 6030 7030 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Dec 9 16:55:13 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 09 Dec 2020 16:55:13 +0000 Subject: [all]OpenStack + CentOS In-Reply-To: References: <0670B960225633449A24709C291A52524F9D2FC5@COM01.performair.local> Message-ID: On Wed, 2020-12-09 at 09:19 -0700, Alex Schultz wrote: > On Wed, Dec 9, 2020 at 8:58 AM wrote: > > > All; > > > > > > > > As you may or may not know; yesterday morning RedHat announced the end of > > CentOS as a rebuild distribution[1]. "CentOS" will be retired in favor of > > the recently announced "CentOS Stream." > > > > > > > > Can OpenStack be installed on CentOS Stream? > > > > > > > > Since CentOS Stream is currently at 8, the question really is: Can > > OpenStack Victoria be installed on CentOS Stream 8? How about Ussuri? > > > > > From a TripleO perspective we have tested it in CI and it has worked fairly > consistently against master[0]. > i have been using centos-stream for my sriov thest host with devstack for most of the last year. i have not hit any issues with it that were specific to centos stream. so anicdotally yes it has worked fine form me when installing form source. installing form packages might be a different story. bu the ooo responce is promising in that regard. > We will be working to align on Stream in > the current release so if you are using RDO + Stream it should work. If it > doesn't, it's something we will be addressing. I cannot vouch for the cloud > sig version of OpenStack that comes straight from CentOS but that should > also work and will likely get any fixes in the near future if required. > CentOS 8 support has been available since Ussuri and we've backported > support for Train to ensure that folks can upgrade from CentOS 7 to 8 using > Train. CentOS 8 Stream isn't that much different than CentOS 8 classic > except it'll likely get newer versions of packaging first but those > packages should be backwards compatible. > > [0] > https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-8-standalone-centos8stream-master > > > > > > Thank you, > > > > > > > > Dominic L. Hilsbos, MBA > > > > Director - Information Technology > > > > Perform Air International Inc. > > > > DHilsbos at PerformAir.com > > > > www.PerformAir.com > > > > > > > > [1: https://blog.centos.org/2020/12/future-is-centos-stream/] > > > > > > > > > > > > Dominic L. Hilsbos, MBA > > > > Director – Information Technology > > > > [image: Perform Air International, Inc.] > > > > DHilsbos at PerformAir.com > > > > 300 S. Hamilton Pl. 
> > > > Gilbert, AZ 85233 > > > > Phone: (480) 610-3500 > > > > Fax: (480) 610-3501 > > > > www.PerformAir.com > > > > > > > > > > From mihalis68 at gmail.com Wed Dec 9 18:17:18 2020 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 9 Dec 2020 13:17:18 -0500 Subject: review.opendev.org issue? Message-ID: Seems down for me? -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Dec 9 18:22:35 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 9 Dec 2020 18:22:35 +0000 Subject: [infra] review.opendev.org issue? In-Reply-To: References: Message-ID: <20201209182235.boaocebmvc3hveak@yuggoth.org> On 2020-12-09 13:17:18 -0500 (-0500), Chris Morgan wrote: > Seems down for me? We're experiencing some periodic resource starvation in the service and attempting to troubleshoot it, should be responding again as soon as we work out what's going on. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dvd at redhat.com Wed Dec 9 18:22:54 2020 From: dvd at redhat.com (David Vallee Delisle) Date: Wed, 9 Dec 2020 13:22:54 -0500 Subject: review.opendev.org issue? In-Reply-To: References: Message-ID: Same here. David Vallee Delisle Senior Software Engineer Red Hat Canada dvd at redhat.com T: 438.796.0254 IM: dvd On Wed, Dec 9, 2020 at 1:20 PM Chris Morgan wrote: > Seems down for me? > > -- > Chris Morgan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Dec 9 18:25:20 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 9 Dec 2020 13:25:20 -0500 Subject: review.opendev.org issue? In-Reply-To: References: Message-ID: Hi Chris, The team just announced over IRC the following: NOTICE: The Gerrit service on review.opendev.org is currently responding slowly or timing out due to resource starvation, investigation is underway They are working on it :) Thanks, Mohammed On Wed, Dec 9, 2020 at 1:22 PM Chris Morgan wrote: > > Seems down for me? > > -- > Chris Morgan -- Mohammed Naser VEXXHOST, Inc. From ces.eduardo98 at gmail.com Wed Dec 9 18:27:41 2020 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Wed, 9 Dec 2020 15:27:41 -0300 Subject: [manila] Graduation of share migration APIs Message-ID: Hello, fellow Zorillas and stackers! During the Wallaby PTG, the Manila team discussed the need to remove the experimental flags from the share migration feature APIs. Then, we have realized that there aren't many third party drivers that have implemented "driver-assisted/optimized share migration", so we wanted to get some input from vendors community members on this. Do you want/have plans to implement the driver optimized migration? It'd also be helpful to know if you have tested this feature (even using host assisted migration) or intend to test host assisted migration and can give us some feedback/thoughts on this? In order to graduate the share migration APIs, we must make sure that they are stable. We haven't had bugs opened against it but gathering feedback from the wider community would be helpful to us. Cheers, Carlos -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Dec 9 18:34:25 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 9 Dec 2020 18:34:25 +0000 Subject: review.opendev.org issue? 
In-Reply-To: References: Message-ID: <20201209183425.l4px52z7lvwmqk2w@yuggoth.org> On 2020-12-09 13:25:20 -0500 (-0500), Mohammed Naser wrote: > The team just announced over IRC the following: > > NOTICE: The Gerrit service on review.opendev.org is currently > responding slowly or timing out due to resource starvation, > investigation is underway > > They are working on it :) [...] For future reference, those notices also get published here: https://wiki.openstack.org/wiki/Infrastructure_Status -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mihalis68 at gmail.com Wed Dec 9 18:36:01 2020 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 9 Dec 2020 13:36:01 -0500 Subject: [infra] review.opendev.org issue? In-Reply-To: <20201209182235.boaocebmvc3hveak@yuggoth.org> References: <20201209182235.boaocebmvc3hveak@yuggoth.org> Message-ID: thanks and good luck! On Wed, Dec 9, 2020 at 1:28 PM Jeremy Stanley wrote: > On 2020-12-09 13:17:18 -0500 (-0500), Chris Morgan wrote: > > Seems down for me? > > We're experiencing some periodic resource starvation in the service > and attempting to troubleshoot it, should be responding again as > soon as we work out what's going on. > -- > Jeremy Stanley > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Dec 9 19:43:43 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 09 Dec 2020 19:43:43 +0000 Subject: [all]OpenStack + CentOS In-Reply-To: References: <0670B960225633449A24709C291A52524F9D2FC5@COM01.performair.local> Message-ID: <8ca753d50350dc9c3f6cc496674dabd3e4b8ea7a.camel@redhat.com> On Wed, 2020-12-09 at 16:55 +0000, Sean Mooney wrote: > On Wed, 2020-12-09 at 09:19 -0700, Alex Schultz wrote: > > On Wed, Dec 9, 2020 at 8:58 AM wrote: > > > > > All; > > > > > > > > > > > > As you may or may not know; yesterday morning RedHat announced the end of > > > CentOS as a rebuild distribution[1]. "CentOS" will be retired in favor of > > > the recently announced "CentOS Stream." > > > > > > > > > > > > Can OpenStack be installed on CentOS Stream? > > > > > > > > > > > > Since CentOS Stream is currently at 8, the question really is: Can > > > OpenStack Victoria be installed on CentOS Stream 8? How about Ussuri? > > > > > > > > From a TripleO perspective we have tested it in CI and it has worked fairly > > consistently against master[0]. > > > > i have been using centos-stream for my sriov thest host with devstack for most of the last > year. i have not hit any issues with it that were specific to centos stream. > so anicdotally yes it has worked fine form me when installing form source. actully to that end i tried deploying to day and it looks like the output of lsb_release -i -s change form CentOS to CentOSStream at some point over the last few months [centos at sriov-1 devstack]$ lsb_release -i -s CentOSStream so https://github.com/openstack/devstack/blame/97f3100c4f6cc8ae4f7059b5099654ef8b13b0d4/functions-common#L455 now fails its a small change butthere could be other cases. > > installing form packages might be a different story. > bu the ooo responce is promising in that regard. > > >  We will be working to align on Stream in > > the current release so if you are using RDO + Stream it should work. If it > > doesn't, it's something we will be addressing. 
I cannot vouch for the cloud > > sig version of OpenStack that comes straight from CentOS but that should > > also work and will likely get any fixes in the near future if required. > > CentOS 8 support has been available since Ussuri and we've backported > > support for Train to ensure that folks can upgrade from CentOS 7 to 8 using > > Train. CentOS 8 Stream isn't that much different than CentOS 8 classic > > except it'll likely get newer versions of packaging first but those > > packages should be backwards compatible. > > > > [0] > > https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-8-standalone-centos8stream-master > > > > > > > > > > Thank you, > > > > > > > > > > > > Dominic L. Hilsbos, MBA > > > > > > Director - Information Technology > > > > > > Perform Air International Inc. > > > > > > DHilsbos at PerformAir.com > > > > > > www.PerformAir.com > > > > > > > > > > > > [1: https://blog.centos.org/2020/12/future-is-centos-stream/] > > > > > > > > > > > > > > > > > > Dominic L. Hilsbos, MBA > > > > > > Director – Information Technology > > > > > > [image: Perform Air International, Inc.] > > > > > > DHilsbos at PerformAir.com > > > > > > 300 S. Hamilton Pl. > > > > > > Gilbert, AZ 85233 > > > > > > Phone: (480) 610-3500 > > > > > > Fax: (480) 610-3501 > > > > > > www.PerformAir.com > > > > > > > > > > > > > > > > > From owalsh at redhat.com Wed Dec 9 19:46:21 2020 From: owalsh at redhat.com (Oliver Walsh) Date: Wed, 9 Dec 2020 19:46:21 +0000 Subject: [rdo-users] [tripleo] Deployment update (node addition) after changing aggregate groups/zones In-Reply-To: References: Message-ID: Hi, On Wed, 9 Dec 2020 at 16:24, Ruslanas Gžibovskis wrote: > Hi Oliver, > > Can you please confirm then the following: > > Remove NovaAZ from roles file. > That is not necessary. The tripleo service is included in the compute roles by default but it's also disabled by default. > Remove novaAZ from environment files, and it should be able to update > without this step? > Yes, remove the nova-az-config.yaml environment file from the deploy command line. I think that should just revert back to default (service disabled). If not then you can explicitly disable it with: resource_registry: OS::TripleO::Services::NovaAZConfig: OS::Heat::None Cheers, Ollie > > Maybe i will need to ensure they are in same cell and so on, and assign > zone manually. > > Is it correct? > > On Wed, 9 Dec 2020, 18:07 Oliver Walsh, wrote: > >> Ok, that will not work. If you want to manually configure the AZs after >> deployment then just stop using [1]. >> >> Cheers, >> Ollie >> >> On Tue, 8 Dec 2020 at 06:59, Ruslanas Gžibovskis >> wrote: >> >>> The first deployment set all computes to zone named according to stack >>> name, but later I have created Alpha01, Alpha02. And set according in >>> node-info.yaml file. But still, it fails with message, that some compute is >>> already present in zone Alpha01... like it cannot create such zone. And I >>> say, yes captain, I know, I have created and added YOU into that zone... >>> Maybe I need to do some "tweaks" to DB? just now thought about it. >>> >>> On Mon, 7 Dec 2020 at 20:02, Oliver Walsh wrote: >>> >>>> Hi, >>>> >>>> You will need to manually remove the hosts from the old zone >>>> ("Alpha01") before adding them to the new zone. A host can only belong to >>>> one AZ. >>>> >>>> Thanks, >>>> Ollie >>>> >>>> On Mon, 7 Dec 2020 at 11:32, Ruslanas Gžibovskis >>>> wrote: >>>> >>>>> anyone know, how to bypass aggregation group error? thank you. 
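(A minimal sketch of the manual AZ handling described above, assuming the NovaAZConfig service is disabled and the zones are then managed by hand; the environment file name and the compute host name below are only illustrative examples, not anything taken from this deployment.)

cat > disable-nova-az.yaml <<'EOF'
resource_registry:
  OS::TripleO::Services::NovaAZConfig: OS::Heat::None
EOF

openstack overcloud deploy --templates \
    -e disable-nova-az.yaml   # plus the usual environment files for the stack

# then move hosts between zones manually (a host can only be in one AZ at a time):
openstack aggregate remove host Alpha01 overcloud-novacompute-0
openstack aggregate create --zone Alpha02 Alpha02
openstack aggregate add host Alpha02 overcloud-novacompute-0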
>>>>> >>>>> On Sat, 5 Dec 2020 at 18:08, Ruslanas Gžibovskis >>>>> wrote: >>>>> >>>>>> Hi all, >>>>>> >>>>>> Any thoughts on this one? >>>>>> >>>>>> >>>>>>> Hi all. >>>>>>>> >>>>>>>> After changing the host aggregate group and zone, I cannot run >>>>>>>> OpenStack deploy command successfully again, even after updating deployment >>>>>>>> environment files according to my setup. >>>>>>>> >>>>>>>> I receive error bigger one in [0]: >>>>>>>> 2020-12-02 10:16:18.532419 | 52540000-0001-cf95-492f-0000000003ca | >>>>>>>> FATAL | Nova: Manage aggregate and availability zone and add hosts to >>>>>>>> the zone | undercloud | error={"changed": false, "msg": "ConflictException: >>>>>>>> 409: Client Error for url: >>>>>>>> http://10.120.129.199:8774/v2.1/os-aggregates/1/action, Cannot add >>>>>>>> host to aggregate 1. Reason: One or more hosts already in availability >>>>>>>> zone(s) ['Alpha01']."} >>>>>>>> >>>>>>>> I was following this link [1] instructions for "Configuring >>>>>>>> Availability Zones (AZ)" steps to modify with OpenStack commands. And zone >>>>>>>> was created successfully, but when I needed to add additional nodes, >>>>>>>> executed deployment again with increased numbers it was complaining about >>>>>>>> an incorrect aggregate zone, and now it is complaining about not empty zone >>>>>>>> with error [0] mentioned above. I have added aggregate zones into >>>>>>>> deployment files even role file... any ideas? >>>>>>>> >>>>>>>> Also, I think, this should be mentioned, that added it after >>>>>>>> install, you lose the possibility to update using tripleo tool and you will >>>>>>>> need to modify environment files with. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> [0] http://paste.openstack.org/show/800622/ >>>>>>>> [1] >>>>>>>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html#configuring-availability-zones-az >>>>>>>> >>>>>>>> >>>>>>>> >>>>>> >>>>> >>>>> -- >>>>> Ruslanas Gžibovskis >>>>> +370 6030 7030 >>>>> _______________________________________________ >>>>> users mailing list >>>>> users at lists.rdoproject.org >>>>> http://lists.rdoproject.org/mailman/listinfo/users >>>>> >>>>> To unsubscribe: users-unsubscribe at lists.rdoproject.org >>>>> >>>> >>> >>> -- >>> Ruslanas Gžibovskis >>> +370 6030 7030 >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Wed Dec 9 19:57:48 2020 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Wed, 9 Dec 2020 19:57:48 +0000 Subject: [EXTERNAL] Re: [kolla] Ussuri Horizon containers fail after Centos 8.3 release In-Reply-To: References: <0fa875fd0a90458ca2216c333ed855f4@NCEMEXGP009.CORP.CHARTERCOM.com> Message-ID: <53c97da1b62d4865b41582f91fc1d2cb@NCEMEXGP009.CORP.CHARTERCOM.com> The container restarts every few seconds. It looks like this may be specific to my unusual setup. I'm running Centos8/Ussuri containers on a Centos7/Train cluster to add Adjutant to our existing clusters. When the Horizon container is running Centos 8.3 I have to restart the memcached container on the Centos 7 box after replacing the old Centos 7 container. I didn't see this on Centos 8.2. It looks like restarting the memcached container is a successful workaround. -----Original Message----- From: Radosław Piliszek Sent: Wednesday, December 9, 2020 11:09 AM To: Braden, Albert Cc: OpenStack Discuss Subject: [EXTERNAL] Re: [kolla] Ussuri Horizon containers fail after Centos 8.3 release CAUTION: The e-mail below is from an external source. 
Please exercise caution before opening attachments, clicking links, or following guidance. On Wed, Dec 9, 2020 at 3:26 PM Braden, Albert wrote: > > We are running Train on Centos 7, but I’m experimenting with Adjutant under Ussuri on Centos 8. Until yesterday my Ussuri Horizon containers were ok, but after the Centos 8.3 release they started failing with “{"log":"CommandError: An error occurred during rendering /var/lib/kolla/venv/lib/python3.6/site-packages/openstack_dashboard/templates/horizon/_scripts.html: Couldn't find any precompiler in COMPRESS_PRECOMPILERS setting for mimetype '\\'text/javascript\\''.\n","stream":"stderr","time":"2020-12-08T20:08:03.267208752Z"}” > > Are they failing as in "not starting" or fail to render properly to browser? -yoctozepto E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From lbragstad at gmail.com Wed Dec 9 20:04:57 2020 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 9 Dec 2020 14:04:57 -0600 Subject: Secure RBAC work Message-ID: Hey everyone, I wanted to take an opportunity to clarify some work we have been doing upstream, specifically modifying the default policies across projects. These changes are the next phase of an initiative that’s been underway since Queens to fix some long-standing security concerns in OpenStack [0]. For context, we have been gradually improving policy enforcement for years. We started by improving policy formats, registering default policies into code [1], providing better documentation for policy writers, implementing necessary identity concepts in keystone [2], developing support for those concepts in libraries [3][4][5][6][7][8], and consuming all of those changes to provide secure default policies in a way operators can consume and roll out to their users [9][10]. All of this work is in line with some high-level documentation we started writing about three years ago [11][12][13]. There are a handful of services that have implemented the goals that define secure RBAC by default, but a community-wide goal is still out-of-reach. To help with that, the community formed a pop-up team with a focused objective and disbanding criteria [14]. The work we currently have in progress [15] is an attempt to start applying what we have learned from existing implementations to other projects. The hope is that we can complete the work for even more projects in Wallaby. Most deployers looking for this functionality won't be able to use it effectively until all services in their deployment support it. I hope this helps clarify or explain the patches being proposed. As always, I'm happy to elaborate on specific concerns if folks have them. 
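(For readers following the scope discussion above, a minimal sketch of what a system-scoped role assignment and token request look like from the CLI; the user and role names are just the common defaults and are not specific to the patches being discussed.)

# assign the admin role on the system rather than on a project
openstack role add --user admin --user-domain Default --system all admin

# ask keystone for a system-scoped token instead of a project-scoped one
export OS_SYSTEM_SCOPE=all
unset OS_PROJECT_NAME OS_PROJECT_DOMAIN_NAME
openstack role assignment list --system all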
Thanks, Lance [0] https://bugs.launchpad.net/keystone/+bug/968696/ [1] https://governance.openstack.org/tc/goals/selected/queens/policy-in-code.html [2] https://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html [3] https://review.opendev.org/c/openstack/keystoneauth/+/529665 [4] https://review.opendev.org/c/openstack/python-keystoneclient/+/524415 [5] https://review.opendev.org/c/openstack/oslo.context/+/530509 [6] https://review.opendev.org/c/openstack/keystonemiddleware/+/564072 [7] https://review.opendev.org/c/openstack/oslo.policy/+/578995 [8] https://review.opendev.org/q/topic:%22system-scope%22+(status:open%20OR%20status:merged) [9] https://review.opendev.org/q/status:merged+topic:bp/policy-defaults-refresh+branch:master [10] https://review.opendev.org/q/topic:%22implement-default-roles%22+(status:open%20OR%20status:merged) [11] https://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/policy-goals-and-roadmap.html [12] https://docs.openstack.org/keystone/latest/admin/service-api-protection.html [13] https://docs.openstack.org/keystone/latest/contributor/services.html#authorization-scopes [14] https://governance.openstack.org/tc/reference/popup-teams.html#secure-default-policies [15] https://review.opendev.org/q/topic:%2522secure-rbac%2522+status:open -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Dec 9 20:53:08 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 9 Dec 2020 15:53:08 -0500 Subject: Secure RBAC work In-Reply-To: References: Message-ID: Hi Lance, This is amazing work. I think this has been one of the biggest pain points in OpenStack and this initiative is great. Super excited to flip this switch on our public/private clouds soon(tm) as the progress happens. Thank you and I hope the projects take a chance at reviewing those changes. Regards, Mohammed On Wed, Dec 9, 2020 at 3:08 PM Lance Bragstad wrote: > > Hey everyone, > > > I wanted to take an opportunity to clarify some work we have been doing upstream, specifically modifying the default policies across projects. > > > These changes are the next phase of an initiative that’s been underway since Queens to fix some long-standing security concerns in OpenStack [0]. For context, we have been gradually improving policy enforcement for years. We started by improving policy formats, registering default policies into code [1], providing better documentation for policy writers, implementing necessary identity concepts in keystone [2], developing support for those concepts in libraries [3][4][5][6][7][8], and consuming all of those changes to provide secure default policies in a way operators can consume and roll out to their users [9][10]. > > > All of this work is in line with some high-level documentation we started writing about three years ago [11][12][13]. > > > There are a handful of services that have implemented the goals that define secure RBAC by default, but a community-wide goal is still out-of-reach. To help with that, the community formed a pop-up team with a focused objective and disbanding criteria [14]. > > > The work we currently have in progress [15] is an attempt to start applying what we have learned from existing implementations to other projects. The hope is that we can complete the work for even more projects in Wallaby. Most deployers looking for this functionality won't be able to use it effectively until all services in their deployment support it. 
> > > I hope this helps clarify or explain the patches being proposed. > > > As always, I'm happy to elaborate on specific concerns if folks have them. > > > Thanks, > > > Lance > > > [0] https://bugs.launchpad.net/keystone/+bug/968696/ > > [1] https://governance.openstack.org/tc/goals/selected/queens/policy-in-code.html > > [2] https://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html > > [3] https://review.opendev.org/c/openstack/keystoneauth/+/529665 > > [4] https://review.opendev.org/c/openstack/python-keystoneclient/+/524415 > > [5] https://review.opendev.org/c/openstack/oslo.context/+/530509 > > [6] https://review.opendev.org/c/openstack/keystonemiddleware/+/564072 > > [7] https://review.opendev.org/c/openstack/oslo.policy/+/578995 > > [8] https://review.opendev.org/q/topic:%22system-scope%22+(status:open%20OR%20status:merged) > > [9] https://review.opendev.org/q/status:merged+topic:bp/policy-defaults-refresh+branch:master > > [10] https://review.opendev.org/q/topic:%22implement-default-roles%22+(status:open%20OR%20status:merged) > > [11] https://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/policy-goals-and-roadmap.html > > [12] https://docs.openstack.org/keystone/latest/admin/service-api-protection.html > > [13] https://docs.openstack.org/keystone/latest/contributor/services.html#authorization-scopes > > [14] https://governance.openstack.org/tc/reference/popup-teams.html#secure-default-policies > > [15] https://review.opendev.org/q/topic:%2522secure-rbac%2522+status:open -- Mohammed Naser VEXXHOST, Inc. From mnaser at vexxhost.com Wed Dec 9 21:20:04 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 9 Dec 2020 16:20:04 -0500 Subject: [tc] weekly meeting Message-ID: Hi everyone, Here’s the agenda for our weekly TC meeting. It will happen tomorrow (Thursday the 10th) at 1500 UTC in #openstack-tc, and I will be your chair. If you can’t attend, please put your name in the “Apologies for Absence” section. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting # ACTIVE INITIATIVES * Follow up on past action items * W cycle goal selection start * Audit and clean-up tags (gmann) * X cycle release name vote recording (gmann) * CentOS 8 releases are discontinued / switch to CentOS 8 Stream (gmann/yoctozepto) * Open Reviews Thank you, Mohammed -- Mohammed Naser VEXXHOST, Inc. From tonyliu0592 at hotmail.com Wed Dec 9 21:38:45 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Wed, 9 Dec 2020 21:38:45 +0000 Subject: [nova-compute] Failed to publish message to topic 'scheduler_fanout' Message-ID: Hi, With Ussuri, failed to launch VM and saw such error from nova-compute. Didn't see any error in nova-api, nova-conductor and nova-scheduler. Did some search and not found any similar cases. Wondering anyone has ever hit this issue? ================================================= 2020-12-09 13:24:29.333 8 ERROR oslo.messaging._drivers.impl_rabbit [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] [3830bc00-1541-4951-b4ed-e73faafaa78c] AMQP server on 10.6.20.21:5672 is unreachable: connection already closed. Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: connection already closed 2020-12-09 13:24:30.361 8 INFO oslo.messaging._drivers.impl_rabbit [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] [3830bc00-1541-4951-b4ed-e73faafaa78c] Reconnected to AMQP server on 10.6.20.21:5672 via [amqp] client with port 55524. 
2020-12-09 13:26:28.377 8 ERROR oslo.messaging._drivers.impl_rabbit [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] Failed to publish message to topic 'scheduler_fanout': : amqp.exceptions.MessageNacked 2020-12-09 13:26:28.377 8 ERROR oslo.messaging._drivers.impl_rabbit [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] Unable to connect to AMQP server on 10.6.20.21:5672 after inf tries: : amqp.exceptions.MessageNacked 2020-12-09 13:26:28.378 8 ERROR oslo_service.periodic_task [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] Error during ComputeManager._sync_scheduler_instance_info: oslo_messaging.exceptions.MessageDeliveryFailure: Unable to connect to AMQP server on 10.6.20.21:5672 after inf tries: ================================================= Thanks! Tony From smooney at redhat.com Thu Dec 10 00:08:05 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 10 Dec 2020 00:08:05 +0000 Subject: [all]OpenStack + CentOS In-Reply-To: <8ca753d50350dc9c3f6cc496674dabd3e4b8ea7a.camel@redhat.com> References: <0670B960225633449A24709C291A52524F9D2FC5@COM01.performair.local> <8ca753d50350dc9c3f6cc496674dabd3e4b8ea7a.camel@redhat.com> Message-ID: <325f84f5dfa1400eb090778e366bf6f5c145f506.camel@redhat.com> On Wed, 2020-12-09 at 19:43 +0000, Sean Mooney wrote: > On Wed, 2020-12-09 at 16:55 +0000, Sean Mooney wrote: > > On Wed, 2020-12-09 at 09:19 -0700, Alex Schultz wrote: > > > On Wed, Dec 9, 2020 at 8:58 AM wrote: > > > > > > > All; > > > > > > > > > > > > > > > > As you may or may not know; yesterday morning RedHat announced the end of > > > > CentOS as a rebuild distribution[1]. "CentOS" will be retired in favor of > > > > the recently announced "CentOS Stream." > > > > > > > > > > > > > > > > Can OpenStack be installed on CentOS Stream? > > > > > > > > > > > > > > > > Since CentOS Stream is currently at 8, the question really is: Can > > > > OpenStack Victoria be installed on CentOS Stream 8? How about Ussuri? > > > > > > > > > > > From a TripleO perspective we have tested it in CI and it has worked fairly > > > consistently against master[0]. > > > > > > > i have been using centos-stream for my sriov thest host with devstack for most of the last > > year. i have not hit any issues with it that were specific to centos stream. > > so anicdotally yes it has worked fine form me when installing form source. > > actully to that end i tried deploying to day and it looks like the output of > lsb_release -i -s change form CentOS to CentOSStream at some point over the last few months > > [centos at sriov-1 devstack]$ lsb_release -i -s > CentOSStream > so https://github.com/openstack/devstack/blame/97f3100c4f6cc8ae4f7059b5099654ef8b13b0d4/functions-common#L455 > now fails > its a small change butthere could be other cases. This is the fix for those that are interested https://review.opendev.org/c/openstack/devstack/+/766366 i have tested that locally in a 2 node devstack deployment and did not need to make any other changes. 
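(A rough illustration of the kind of distro check affected, assuming only that the host has lsb_release installed; this is a sketch, not the content of the devstack fix linked above.)

# treat both vendor strings as the same family when branching on distro
os_vendor=$(lsb_release -i -s)
case "$os_vendor" in
    CentOS|CentOSStream)
        echo "CentOS-family host detected: $os_vendor"
        ;;
    *)
        echo "some other distro: $os_vendor"
        ;;
esac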
========================= DevStack Component Timing (times are in seconds) ========================= run_process 32 test_with_retry 5 osc 312 wait_for_service 20 yum_install 30 git_timed 40 dbsync 25 pip_install 150 ------------------------- Unaccounted time 497 ========================= Total runtime 1111 This is your host IP address: 192.168.3.198 This is your host IPv6 address: ************************** Horizon is now available at http://192.168.3.198/dashboard Keystone is serving at http://192.168.3.198/identity/ The default users are: admin and demo The password: password Services are running under systemd unit files. For more information see: https://docs.openstack.org/devstack/latest/systemd.html DevStack Version: wallaby Change: 97f3100c4f6cc8ae4f7059b5099654ef8b13b0d4 Merge "zuul: Remove nova-live-migration from check queue" 2020-12-06 21:44:11 +0000 OS Version: CentOSStream 8 n/a also 18 mins for an 11 year old server with 2 xeon E5520 and sata3 ssds connected over sata 2... compute nodes stilll takes about 7 mins about half of which is pip > > > > installing form packages might be a different story. > > bu the ooo responce is promising in that regard. > > > > >  We will be working to align on Stream in > > > the current release so if you are using RDO + Stream it should work. If it > > > doesn't, it's something we will be addressing. I cannot vouch for the cloud > > > sig version of OpenStack that comes straight from CentOS but that should > > > also work and will likely get any fixes in the near future if required. > > > CentOS 8 support has been available since Ussuri and we've backported > > > support for Train to ensure that folks can upgrade from CentOS 7 to 8 using > > > Train. CentOS 8 Stream isn't that much different than CentOS 8 classic > > > except it'll likely get newer versions of packaging first but those > > > packages should be backwards compatible. > > > > > > [0] > > > https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-8-standalone-centos8stream-master > > > > > > > > > > > > > > Thank you, > > > > > > > > > > > > > > > > Dominic L. Hilsbos, MBA > > > > > > > > Director - Information Technology > > > > > > > > Perform Air International Inc. > > > > > > > > DHilsbos at PerformAir.com > > > > > > > > www.PerformAir.com > > > > > > > > > > > > > > > > [1: https://blog.centos.org/2020/12/future-is-centos-stream/] > > > > > > > > > > > > > > > > > > > > > > > > Dominic L. Hilsbos, MBA > > > > > > > > Director – Information Technology > > > > > > > > [image: Perform Air International, Inc.] > > > > > > > > DHilsbos at PerformAir.com > > > > > > > > 300 S. Hamilton Pl. > > > > > > > > Gilbert, AZ 85233 > > > > > > > > Phone: (480) 610-3500 > > > > > > > > Fax: (480) 610-3501 > > > > > > > > www.PerformAir.com > > > > > > > > > > > > > > > > > > > > > > > > > > From yumeng_bao at yahoo.com Thu Dec 10 02:56:01 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Thu, 10 Dec 2020 10:56:01 +0800 Subject: [cyborg] No weekly meeting - December 10 References: Message-ID: Hello guys! As this week, cores of the team are either out of office, or busy with their company’s internal things before the new year holiday. As such, we are cancelling the December 10th meeting. If no other cancelling ML shows up, you can expect that meeting will be resumed next week on December 17th. 
Regards, Yumeng From melwittt at gmail.com Thu Dec 10 03:33:44 2020 From: melwittt at gmail.com (melanie witt) Date: Wed, 9 Dec 2020 19:33:44 -0800 Subject: [nova][gate] nova-multi-cell job failing test_*_with_qos_min_bw_allocation Message-ID: Howdy all, FYI we have gate failures of the recently added test_*_with_qos_min_bw_allocation tests [1] in the nova-multi-cell job on the master, stable/victoria, and stable/ussuri branches. The failures occur during cross cell migrations. I have opened a bug for the failure on the master branch: * https://bugs.launchpad.net/nova/+bug/1907522 The issue here is that we fail to create port bindings in neutron during a cross cell migration in the superconductor: nova.exception.PortBindingFailed: Binding failed for port and that corresponds to a failure in the neutron server log where it fails the port binding with: neutron_lib.exceptions.placement.UnknownResourceProvider: No such resource provider known by Neutron I don't yet know what is going on here ^. For the bug on stable/victoria and stable/ussuri I have opened this bug: * https://bugs.launchpad.net/nova/+bug/1907511 and have a WIP stable-only patch proposed that needs tests: https://review.opendev.org/c/openstack/nova/+/766364 I just wanted to see ASAP if the nova-multi-cell job will pass on it. The issue here ^ is that during a cross cell migration, we aren't targeting the cell database for the target host when we attempt to lookup the service record of the target host. For the stable branch failures I think the failure rate is 100% and it looks like it might also be 100% for the master branch failures. Cheers, -melanie [1] https://review.opendev.org/c/openstack/tempest/+/694539 From rosmaita.fossdev at gmail.com Thu Dec 10 03:53:51 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 9 Dec 2020 22:53:51 -0500 Subject: [cinder] wallaby R-18 mid-cycle summary available Message-ID: <6c9140c0-6851-6fad-15b8-157e5e025cf5@gmail.com> In case you missed the exciting cinder R-18 virtual mid-cycle session, I've posted a summary: https://wiki.openstack.org/wiki/CinderWallabyMidCycleSummary It includes a link to the recording (in case you want to see what you missed or if you want to re-live the excitement). We're planning to have another mid-cycle session the week of R-9. cheers, brian From eblock at nde.ag Thu Dec 10 10:11:05 2020 From: eblock at nde.ag (Eugen Block) Date: Thu, 10 Dec 2020 10:11:05 +0000 Subject: [nova-compute] Failed to publish message to topic 'scheduler_fanout' In-Reply-To: Message-ID: <20201210101105.Horde.4IFVBjJCmSqRjCED6jR-qQJ@webmail.nde.ag> Hi, apparently nova can't reach rabbitmq, I would check rabbit logs and 'rabbitmqctl cluster_status'. Regards, Eugen Zitat von Tony Liu : > Hi, > > With Ussuri, failed to launch VM and saw such error from nova-compute. > Didn't see any error in nova-api, nova-conductor and nova-scheduler. > Did some search and not found any similar cases. > Wondering anyone has ever hit this issue? > ================================================= > 2020-12-09 13:24:29.333 8 ERROR oslo.messaging._drivers.impl_rabbit > [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] > [3830bc00-1541-4951-b4ed-e73faafaa78c] AMQP server on > 10.6.20.21:5672 is unreachable: connection already closed. 
Trying > again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: > connection already closed > 2020-12-09 13:24:30.361 8 INFO oslo.messaging._drivers.impl_rabbit > [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] > [3830bc00-1541-4951-b4ed-e73faafaa78c] Reconnected to AMQP server on > 10.6.20.21:5672 via [amqp] client with port 55524. > 2020-12-09 13:26:28.377 8 ERROR oslo.messaging._drivers.impl_rabbit > [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] Failed to > publish message to topic 'scheduler_fanout': : > amqp.exceptions.MessageNacked > 2020-12-09 13:26:28.377 8 ERROR oslo.messaging._drivers.impl_rabbit > [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] Unable to > connect to AMQP server on 10.6.20.21:5672 after inf tries: : > amqp.exceptions.MessageNacked > 2020-12-09 13:26:28.378 8 ERROR oslo_service.periodic_task > [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] Error during > ComputeManager._sync_scheduler_instance_info: > oslo_messaging.exceptions.MessageDeliveryFailure: Unable to connect > to AMQP server on 10.6.20.21:5672 after inf tries: > ================================================= > > Thanks! > Tony From tkajinam at redhat.com Thu Dec 10 11:48:08 2020 From: tkajinam at redhat.com (Takashi Kajinami) Date: Thu, 10 Dec 2020 20:48:08 +0900 Subject: [puppet] Broken CI because of missing powertools repo Message-ID: Hello, Some of you might have noticed this, but currently integration jobs and litmus jobs for puppet projects are failing because of missing powertools repo, which was caused by recent change in delorean-deps.repo provided by RDO (IIUC). I filed a bug[1] to track this issue and submitted a short-term fix which looks promising as per results in CIs. [1] https://bugs.launchpad.net/puppet-openstack-integration/+bug/1907323 [2] https://review.opendev.org/c/openstack/puppet-openstack-integration/+/766213 Please refrain from rechecking any puppet patches (I guess it also affects Victoria and Ussuri), until we land the fix. Thank you for your understanding and patience. Thank you, Takashi Kajinami -------------- next part -------------- An HTML attachment was scrubbed... 
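(For anyone wanting to unblock a local CentOS 8 environment in the meantime, a rough sketch of how the powertools repository is normally enabled; the repo id can be spelled PowerTools on some point releases, and this is only a local workaround, not the proposed fix.)

sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --set-enabled powertools
sudo dnf repolist --enabled | grep -i powertools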
URL: From Ronald.Stone at windriver.com Thu Dec 10 12:14:11 2020 From: Ronald.Stone at windriver.com (Stone, Ronald) Date: Thu, 10 Dec 2020 12:14:11 +0000 Subject: [OpenStack-discuss] doc-build-failures Message-ID: Hi, Starting Tuesday several contributors to StarlingX docs can no longer successfully perform local doc builds: $ tox -e docs docs installed: alabaster==0.7.12,Babel==2.9.0,certifi==2020.11.8,chardet==3.0.4,colorama==0.4.4,docutils==0.15.2,dulwich==0.20.14,idna==2.10,imagesize==1.2.0,Jinja2==2.11.2,MarkupSafe==1.1.1,openstackdocstheme==2.2.7,os-api-ref==2.1.0,packaging==20.7,pbr==5.5.1,Pygments==2.7.2,pyparsing==2.4.7,pytz==2020.4,PyYAML==5.3.1,requests==2.25.0,six==1.15.0,snowballstemmer==2.0.0,Sphinx==3.3.1,sphinxcontrib-applehelp==1.0.2,sphinxcontrib-devhelp==1.0.2,sphinxcontrib-htmlhelp==1.0.3,sphinxcontrib-jsmath==1.0.1,sphinxcontrib-qthelp==1.0.3,sphinxcontrib-serializinghtml==1.1.4,urllib3==1.26.2 docs run-test-pre: PYTHONHASHSEED='662' docs run-test: commands[0] | sphinx-build -a -E -W -d doc/build/doctrees -b html doc/source doc/build/html ERROR: InvocationError for command could not find executable sphinx-build ___________________________________ summary ___________________________________ ERROR: docs: commands failed The executable sphinx-build.exe is missing from docs/.tox/docs/Scripts/ If manually restored, it is deleted the next time tox -e docs is run. Deleting .tox before rerunning does not help. Wondering if anyone is seeing this behavior with OpenStack docs and if it might be related to recent pip and virtualenv changes. Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Thu Dec 10 12:18:17 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Thu, 10 Dec 2020 13:18:17 +0100 Subject: [nova][gate] nova-multi-cell job =?UTF-8?Q?failing=0D=0A?= test_*_with_qos_min_bw_allocation In-Reply-To: References: Message-ID: On Wed, Dec 9, 2020 at 19:33, melanie witt wrote: > Howdy all, > > FYI we have gate failures of the recently added > test_*_with_qos_min_bw_allocation tests [1] in the nova-multi-cell > job on the master, stable/victoria, and stable/ussuri branches. The > failures occur during cross cell migrations. > > I have opened a bug for the failure on the master branch: > > * https://bugs.launchpad.net/nova/+bug/1907522 > > The issue here is that we fail to create port bindings in neutron > during a cross cell migration in the superconductor: > > nova.exception.PortBindingFailed: Binding failed for port > > and that corresponds to a failure in the neutron server log where it > fails the port binding with: > > neutron_lib.exceptions.placement.UnknownResourceProvider: No such > resource provider known by Neutron > > I don't yet know what is going on here ^. > > For the bug on stable/victoria and stable/ussuri I have opened this > bug: > > * https://bugs.launchpad.net/nova/+bug/1907511 > > and have a WIP stable-only patch proposed that needs tests: > > https://review.opendev.org/c/openstack/nova/+/766364 > > I just wanted to see ASAP if the nova-multi-cell job will pass on it. > > The issue here ^ is that during a cross cell migration, we aren't > targeting the cell database for the target host when we attempt to > lookup the service record of the target host. > > For the stable branch failures I think the failure rate is 100% and > it looks like it might also be 100% for the master branch failures. Thanks Melanie! A sort update. 
The test result in https://review.opendev.org/c/openstack/nova/+/766364 shows that after fixing the stable only https://bugs.launchpad.net/nova/+bug/1907511 we now hit the same failure on stable that is seen on master https://bugs.launchpad.net/nova/+bug/1907522 Both master and stable branches are blocked at the moment. Cheers, gibi > > Cheers, > -melanie > > [1] https://review.opendev.org/c/openstack/tempest/+/694539 > From dwakefi2 at gmu.edu Thu Dec 10 13:46:31 2020 From: dwakefi2 at gmu.edu (Thomas Wakefield) Date: Thu, 10 Dec 2020 13:46:31 +0000 Subject: New Openstack Deployment questions Message-ID: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> OpenStack deployment questions: If you were starting a new deployment of OpenStack today what OS would you use, and what tools would you use for deployment? We were thinking CentOS with Kayobe, but then CentOS changed their support plans, and I am hesitant to start a new project with CentOS. We do have access to RHEL licensing so that might be an option. We have also looked at OpenStack-Ansible for deployment. Thoughts? Thanks in advance. -Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Dec 10 14:27:40 2020 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 10 Dec 2020 09:27:40 -0500 Subject: New Openstack Deployment questions In-Reply-To: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> Message-ID: I just built a new openstack using openstack-ansible on CentOS 8.2 last month before news broke out. I have no choice so i am going to stick with CentOS. What is the future of RDO and EPEL repo if centOS going away. ? On Thu, Dec 10, 2020 at 8:56 AM Thomas Wakefield wrote: > > OpenStack deployment questions: > > > > If you were starting a new deployment of OpenStack today what OS would you use, and what tools would you use for deployment? We were thinking CentOS with Kayobe, but then CentOS changed their support plans, and I am hesitant to start a new project with CentOS. We do have access to RHEL licensing so that might be an option. We have also looked at OpenStack-Ansible for deployment. Thoughts? > > > > Thanks in advance. -Tom From bcafarel at redhat.com Thu Dec 10 14:42:13 2020 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Thu, 10 Dec 2020 15:42:13 +0100 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> Message-ID: On Wed, 9 Dec 2020 at 16:46, Lee Yarwood wrote: > On 09-12-20 14:40:06, Jeremy Stanley wrote: > > On 2020-12-09 13:59:04 +0000 (+0000), Lee Yarwood wrote: > > > Hello all, > > > > > > $subject [1][2] is breaking various <= stable/train jobs where we > > > attempt to pull bandit in while still using py2. This has been reported > > > upstream and it looks like the 1.6.3 release may end up being yanked. > > > > > > If it isn't I've proposed the following requirements change to try to > > > cap bandit to the 1.6.2 release, assuming this is safe to do on stable: > > > > > > Cap bandit at 1.6.2 when using py2 > > > https://review.opendev.org/c/openstack/requirements/+/766170 > > [...] 
> > > > It's typically recommended to pin static analysis tools strictly > > less than the next major release in (test-)requirements lists of > > individual projects. Part of why it's blacklisted in the global > > requirements repository is so that the central upper-constraints.txt > > won't override project level decisions on what versions of these > > tools to run. Granted, it would also have made more sense if bandit > > uprevved to 2.0.0 when dropping Python 2.x support, so that > > in-project requirements in the form bandit<2 could have prevented > > the impact. But all that's to say, pinning bandit in stable branches > > of individual projects using it would be the more expected fix here. > > ACK thanks Jeremy, I had started that below before going back to an > earlier attempt with requirements. I'll reopen these now and test things > in the Nova change. > > https://review.opendev.org/q/topic:bug/1907438 > > This may get complicated to sort out, checking neutron cap [1], it failed in grenade job when checking out bandit per swift requirements. So it seems this one will need to be backported from the oldest affected stable to train, with some "correct order" on packages - though if we need it on 2 packages at same time to pass gates it may need overall capping? [1] https://review.opendev.org/c/openstack/neutron/+/766218 -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Dec 10 14:44:26 2020 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 10 Dec 2020 15:44:26 +0100 Subject: [release] weekly meeting skip Message-ID: Hello everyone, Since we don't have specific actions to manage for this week and nothing took place in our corresponding agenda, we will be skipping week's meeting. Also notice that the meeting of the next 3 weeks will be skipped too as we haven't specific actions to manage. If an emergency happens I'll let you know and I'll trigger a meeting next week. It's a good opportunity for us to take a look to see what action items defined during our PTG session remains to be done. At first glance more or less everything seems already done. If you plan to take PTO during the next few weeks then please let us know by updating the 'Team availability notes' that correspond to the given weeks, it could help us to avoid the "Bystander effect". Thanks for reading, Regards, -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Thu Dec 10 14:45:27 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 10 Dec 2020 15:45:27 +0100 Subject: [neutron] Drivers meeting agenda - Friday 11.12.2020 Message-ID: <20201210144527.bxqed7xwvsmmzypv@p1.localdomain> Hi, For tomorrow drivers meeting we have 3 RFEs to discuss: * https://bugs.launchpad.net/neutron/+bug/1905295 * https://bugs.launchpad.net/neutron/+bug/1905391 * https://bugs.launchpad.net/neutron/+bug/1907089 If You have anything else to discuss with the drivers team, please add it to the "On Demand" agenda at https://wiki.openstack.org/wiki/Meetings/NeutronDrivers Please read them and see You tomorrow on the meeting :) -- Slawek Kaplonski Principal Software Engineer Red Hat From lyarwood at redhat.com Thu Dec 10 15:10:40 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 10 Dec 2020 15:10:40 +0000 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> Message-ID: <20201210151040.tj5mie3ejtjv2w25@lyarwood-laptop.usersys.redhat.com> On 10-12-20 15:42:13, Bernard Cafarelli wrote: > On Wed, 9 Dec 2020 at 16:46, Lee Yarwood wrote: > > > On 09-12-20 14:40:06, Jeremy Stanley wrote: > > > On 2020-12-09 13:59:04 +0000 (+0000), Lee Yarwood wrote: > > > > Hello all, > > > > > > > > $subject [1][2] is breaking various <= stable/train jobs where we > > > > attempt to pull bandit in while still using py2. This has been reported > > > > upstream and it looks like the 1.6.3 release may end up being yanked. > > > > > > > > If it isn't I've proposed the following requirements change to try to > > > > cap bandit to the 1.6.2 release, assuming this is safe to do on stable: > > > > > > > > Cap bandit at 1.6.2 when using py2 > > > > https://review.opendev.org/c/openstack/requirements/+/766170 > > > [...] > > > > > > It's typically recommended to pin static analysis tools strictly > > > less than the next major release in (test-)requirements lists of > > > individual projects. Part of why it's blacklisted in the global > > > requirements repository is so that the central upper-constraints.txt > > > won't override project level decisions on what versions of these > > > tools to run. Granted, it would also have made more sense if bandit > > > uprevved to 2.0.0 when dropping Python 2.x support, so that > > > in-project requirements in the form bandit<2 could have prevented > > > the impact. But all that's to say, pinning bandit in stable branches > > > of individual projects using it would be the more expected fix here. > > > > ACK thanks Jeremy, I had started that below before going back to an > > earlier attempt with requirements. I'll reopen these now and test things > > in the Nova change. > > > > https://review.opendev.org/q/topic:bug/1907438 > > > > This may get complicated to sort out, checking neutron cap [1], it failed > in grenade job when checking out bandit per swift requirements. > So it seems this one will need to be backported from the oldest affected > stable to train, with some "correct order" on packages - though if we need > it on 2 packages at same time to pass gates it may need overall capping? 
> > [1] https://review.opendev.org/c/openstack/neutron/+/766218 Yeah indeed, Elod is going to try to land things in reverse from stable/pike under the above bug topic but even then we will need to force land changes across multiple projects for gates to work again. An overall cap landing in the same order from stable/pike forward to stable/train might be better approach. -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From lyarwood at redhat.com Thu Dec 10 15:32:00 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 10 Dec 2020 15:32:00 +0000 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: <20201210151040.tj5mie3ejtjv2w25@lyarwood-laptop.usersys.redhat.com> References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> <20201210151040.tj5mie3ejtjv2w25@lyarwood-laptop.usersys.redhat.com> Message-ID: <20201210153200.xrprq2n35fstd4a4@lyarwood-laptop.usersys.redhat.com> On 10-12-20 15:10:40, Lee Yarwood wrote: > On 10-12-20 15:42:13, Bernard Cafarelli wrote: > > On Wed, 9 Dec 2020 at 16:46, Lee Yarwood wrote: > > > > > On 09-12-20 14:40:06, Jeremy Stanley wrote: > > > > On 2020-12-09 13:59:04 +0000 (+0000), Lee Yarwood wrote: > > > > > Hello all, > > > > > > > > > > $subject [1][2] is breaking various <= stable/train jobs where we > > > > > attempt to pull bandit in while still using py2. This has been reported > > > > > upstream and it looks like the 1.6.3 release may end up being yanked. > > > > > > > > > > If it isn't I've proposed the following requirements change to try to > > > > > cap bandit to the 1.6.2 release, assuming this is safe to do on stable: > > > > > > > > > > Cap bandit at 1.6.2 when using py2 > > > > > https://review.opendev.org/c/openstack/requirements/+/766170 > > > > [...] > > > > > > > > It's typically recommended to pin static analysis tools strictly > > > > less than the next major release in (test-)requirements lists of > > > > individual projects. Part of why it's blacklisted in the global > > > > requirements repository is so that the central upper-constraints.txt > > > > won't override project level decisions on what versions of these > > > > tools to run. Granted, it would also have made more sense if bandit > > > > uprevved to 2.0.0 when dropping Python 2.x support, so that > > > > in-project requirements in the form bandit<2 could have prevented > > > > the impact. But all that's to say, pinning bandit in stable branches > > > > of individual projects using it would be the more expected fix here. > > > > > > ACK thanks Jeremy, I had started that below before going back to an > > > earlier attempt with requirements. I'll reopen these now and test things > > > in the Nova change. > > > > > > https://review.opendev.org/q/topic:bug/1907438 > > > > > > > This may get complicated to sort out, checking neutron cap [1], it failed > > in grenade job when checking out bandit per swift requirements. > > So it seems this one will need to be backported from the oldest affected > > stable to train, with some "correct order" on packages - though if we need > > it on 2 packages at same time to pass gates it may need overall capping? 
> > > > [1] https://review.opendev.org/c/openstack/neutron/+/766218 > > Yeah indeed, Elod is going to try to land things in reverse from > stable/pike under the above bug topic but even then we will need to > force land changes across multiple projects for gates to work again. > > An overall cap landing in the same order from stable/pike forward to > stable/train might be better approach. That said it looks like bandit 1.6.3 might be yanked and replaced by a non-universal 1.6.4 release that might resolve this issue for us: https://github.com/PyCQA/bandit/issues/663 -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From elod.illes at est.tech Thu Dec 10 15:33:42 2020 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 10 Dec 2020 16:33:42 +0100 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: <20201210151040.tj5mie3ejtjv2w25@lyarwood-laptop.usersys.redhat.com> References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> <20201210151040.tj5mie3ejtjv2w25@lyarwood-laptop.usersys.redhat.com> Message-ID: <15843921-892b-4bd5-696e-65d2f4c75ac5@est.tech> I've pushed all the necessary changes for nova, neutron and swift [1]. Though as you say, we need to merge these in reverse order (from pike, till train), so it will take some time (even if we don't hit another gate breakage). Anyway, you are right that doing it in requirements repository would be quicker, but let's see first if we can start sorting this out without that. Cheers, Előd [1] https://review.opendev.org/q/topic:bug%252F1907438 On 2020. 12. 10. 16:10, Lee Yarwood wrote: > On 10-12-20 15:42:13, Bernard Cafarelli wrote: >> On Wed, 9 Dec 2020 at 16:46, Lee Yarwood wrote: >> >>> On 09-12-20 14:40:06, Jeremy Stanley wrote: >>>> On 2020-12-09 13:59:04 +0000 (+0000), Lee Yarwood wrote: >>>>> Hello all, >>>>> >>>>> $subject [1][2] is breaking various <= stable/train jobs where we >>>>> attempt to pull bandit in while still using py2. This has been reported >>>>> upstream and it looks like the 1.6.3 release may end up being yanked. >>>>> >>>>> If it isn't I've proposed the following requirements change to try to >>>>> cap bandit to the 1.6.2 release, assuming this is safe to do on stable: >>>>> >>>>> Cap bandit at 1.6.2 when using py2 >>>>> https://review.opendev.org/c/openstack/requirements/+/766170 >>>> [...] >>>> >>>> It's typically recommended to pin static analysis tools strictly >>>> less than the next major release in (test-)requirements lists of >>>> individual projects. Part of why it's blacklisted in the global >>>> requirements repository is so that the central upper-constraints.txt >>>> won't override project level decisions on what versions of these >>>> tools to run. Granted, it would also have made more sense if bandit >>>> uprevved to 2.0.0 when dropping Python 2.x support, so that >>>> in-project requirements in the form bandit<2 could have prevented >>>> the impact. But all that's to say, pinning bandit in stable branches >>>> of individual projects using it would be the more expected fix here. >>> ACK thanks Jeremy, I had started that below before going back to an >>> earlier attempt with requirements. I'll reopen these now and test things >>> in the Nova change. 
>>> >>> https://review.opendev.org/q/topic:bug/1907438 >>> >> This may get complicated to sort out, checking neutron cap [1], it failed >> in grenade job when checking out bandit per swift requirements. >> So it seems this one will need to be backported from the oldest affected >> stable to train, with some "correct order" on packages - though if we need >> it on 2 packages at same time to pass gates it may need overall capping? >> >> [1] https://review.opendev.org/c/openstack/neutron/+/766218 > Yeah indeed, Elod is going to try to land things in reverse from > stable/pike under the above bug topic but even then we will need to > force land changes across multiple projects for gates to work again. > > An overall cap landing in the same order from stable/pike forward to > stable/train might be better approach. > From gagehugo at gmail.com Thu Dec 10 15:57:37 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 10 Dec 2020 09:57:37 -0600 Subject: [security] Security SIG meetings cancelled until 2021 Message-ID: Hello everyone, With holidays approaching and people being out, we are going to cancel the security sig meetings for the remainder of the year. We will meet again in 2021 on Jan 7th. Have a Happy Holidays and Happy New Year! Stay Safe! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Thu Dec 10 15:59:56 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 10 Dec 2020 09:59:56 -0600 Subject: [openstack-helm] No meetings for the rest of 2020 Message-ID: Hello everyone, With holidays approaching and people being out, we are going to cancel the openstack-helm meetings for the remainder of the year. We will meet again in 2021 on Jan 5th. Have a Happy Holidays and Happy New Year! Stay Safe! -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Dec 10 16:37:11 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 10 Dec 2020 17:37:11 +0100 Subject: New Openstack Deployment questions In-Reply-To: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> Message-ID: If you don't need baremetal provisioning (which OpenStack-Ansible also does not provide), then you can use Kolla-Ansible directly (instead of via Kayobe) which allows you to use Ubuntu and Debian as well. -yoctozepto On Thu, Dec 10, 2020 at 2:57 PM Thomas Wakefield wrote: > > OpenStack deployment questions: > > > > If you were starting a new deployment of OpenStack today what OS would you use, and what tools would you use for deployment? We were thinking CentOS with Kayobe, but then CentOS changed their support plans, and I am hesitant to start a new project with CentOS. We do have access to RHEL licensing so that might be an option. We have also looked at OpenStack-Ansible for deployment. Thoughts? > > > > Thanks in advance. 
-Tom From balazs.gibizer at est.tech Thu Dec 10 16:50:32 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Thu, 10 Dec 2020 17:50:32 +0100 Subject: [nova][gate] nova-multi-cell job =?UTF-8?Q?failing=0D=0A=0D=0A?= test_*_with_qos_min_bw_allocation In-Reply-To: References: Message-ID: <8SU4LQ.UJMXSYCLLIYN3@est.tech> On Thu, Dec 10, 2020 at 13:18, Balázs Gibizer wrote: > > > On Wed, Dec 9, 2020 at 19:33, melanie witt wrote: >> Howdy all, >> >> FYI we have gate failures of the recently added >> test_*_with_qos_min_bw_allocation tests [1] in the nova-multi-cell >> job on the master, stable/victoria, and stable/ussuri branches. The >> failures occur during cross cell migrations. >> >> I have opened a bug for the failure on the master branch: >> >> * https://bugs.launchpad.net/nova/+bug/1907522 >> >> The issue here is that we fail to create port bindings in neutron >> during a cross cell migration in the superconductor: >> >> nova.exception.PortBindingFailed: Binding failed for port >> >> and that corresponds to a failure in the neutron server log where it >> fails the port binding with: >> >> neutron_lib.exceptions.placement.UnknownResourceProvider: No such >> resource provider known by Neutron >> >> I don't yet know what is going on here ^. >> >> For the bug on stable/victoria and stable/ussuri I have opened this >> bug: >> >> * https://bugs.launchpad.net/nova/+bug/1907511 >> >> and have a WIP stable-only patch proposed that needs tests: >> >> https://review.opendev.org/c/openstack/nova/+/766364 >> >> I just wanted to see ASAP if the nova-multi-cell job will pass on it. >> >> The issue here ^ is that during a cross cell migration, we aren't >> targeting the cell database for the target host when we attempt to >> lookup the service record of the target host. >> >> For the stable branch failures I think the failure rate is 100% and >> it looks like it might also be 100% for the master branch failures. > > Thanks Melanie! > > A sort update. The test result in > https://review.opendev.org/c/openstack/nova/+/766364 shows that after > fixing the stable only https://bugs.launchpad.net/nova/+bug/1907511 > we now hit the same failure on stable that is seen on master > https://bugs.launchpad.net/nova/+bug/1907522 > > Both master and stable branches are blocked at the moment. Now we have patches to unblock master and stable/victoria, we just need to push the through the gate: * master: https://review.opendev.org/c/openstack/nova/+/766471 * stable/victoria: https://review.opendev.org/c/openstack/nova/+/765749 Cheers, gibi > > Cheers, > gibi >> >> Cheers, >> -melanie >> >> [1] https://review.opendev.org/c/openstack/tempest/+/694539 >> > > > From sean.mcginnis at gmx.com Thu Dec 10 16:53:00 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 10 Dec 2020 10:53:00 -0600 Subject: [TC][all] X Release name polling Message-ID: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> Hey everyone, We recently collected naming suggestions for the X release name. A lot of great suggestions by the community! Much more than I had expected for this letter. As a reminder, starting with the W release we had changed the process for selecting the name [1]. We collected suggestions from the community, then the members of the TC voted in a poll [2] to select which name(s) out of the suggestions to go with. The vetting of the top choices from that process is happening now, and we should have a official result soon. 
This is a bit of a mea culpa from me about an issue with how this was conducted though. The naming process specifically states: "the poll should be run in a manner that allows members of the community to see what each TC member voted for." When I set up the CIVS poll, I failed to check the box that would allow seeing the detailed results of the poll. So while we do have the winning names, we are not able to see which TC members voted and how. I apologize for missing this step (and I've noted that we really should add some detailed process for future coordinators to follow!). I believe the intent with that part of the process was to allow the community to see how your elected TC members voted as one factor to consider when reelecting anyone. Also transparency to show that no one is pushing through their own choices, circumventing any process. The two options I see at this point would be to either redo the entire naming poll, or just try to capture what TC members voted for somewhere so we have a record of that. It's been long enough now since taking the poll that I don't expect TC members to remember exactly how they ranked things. But we've also started the vetting process through the Foundation (lawyers engaged, etc) so I'd really rather not start over if we can avoid it. If TC members could respond here with what they remember voting for, I hope that is enough to satisfy the spirit of the defined process. If there are any members of the community that have a strong objection to this, please say so. I leave it up to the TC then to decide how to proceed. Again, apologies for missing this step. Otherwise, I think the process has worked well, and I hope we can declare an official X name shortly. Thanks! Sean [1] https://governance.openstack.org/tc/reference/release-naming.html#release-naming-process [2] https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7e6e96070af39fe7 -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Thu Dec 10 17:01:15 2020 From: marios at redhat.com (Marios Andreou) Date: Thu, 10 Dec 2020 19:01:15 +0200 Subject: [TripleO] moving stable/rocky for tripleo repos to unmaintained (+ then EOL) OK? Message-ID: Hello TripleO I would like to propose that we move all tripleo stable/rocky repos [1] to "unmaintained", with a view to tagging as end-of-life in due course. This will allow us to focus our efforts on keeping the check and gate queues green and continue to deliver weekly promotions for the more recent and active stable/* branches train ussuri victoria and master. The stable/rocky repos have not had much action in the last few months - I collected some info at [2] about the most recent stable/rocky commits for each of the tripleo repos. For many of those there are no commits in the last 6 months and for some even longer. The tripleo stable/rocky repos were tagged as "extended maintenance" (rocky-em) [2] in April 2020 with [3]. We have already reduced our CI commitment for rocky - these [4] are the current check/gate jobs and these [5] are the jobs that run for promotion to current-tripleo. However maintaining this doesn’t make sense if we are not even using it e.g. merging things into tripleo-* stable/rocky. Please raise your objections or any other comments or thoughts about this. Unless there are any blockers raised here, the plan is to put this into motion early in January. 
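For anyone who wants to reproduce that stable/rocky activity check locally, a rough sketch along the following lines is enough (the clone path and repo globs are only an example, not the exact commands used to build the paste in [2]):

  # print the newest stable/rocky commit for every tripleo repo cloned under ~/src
  for repo in ~/src/tripleo-* ~/src/python-tripleoclient; do
      [ -d "$repo/.git" ] || continue
      echo "== $(basename "$repo")"
      git -C "$repo" log origin/stable/rocky -1 --date=short --format='%cd %h %s'
  done

That only prints the date, hash and subject of the most recent commit on each stable/rocky branch, which is the same information summarised in [2].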
One still unanswered question I have is that since there is no ‘unmaintained’ tag, in the same way as we have the -em or for extended maintenance and end-of-life, do we simply _declare_ that the repos are unmaintained? Then after a period of “0 to 6 months” per [6] we can tag the tripleo repos with rocky-eol. If any one reading this knows please tell us! Thanks for reading! regards, marios [1] https://releases.openstack.org/teams/tripleo.html#rocky [2] http://paste.openstack.org/raw/800464/ [3] https://review.opendev.org/c/openstack/releases/+/709912 [4] http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&var-influxdb_filter=branch%7C%3D%7Cstable%2Frocky [5] http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&fullscreen&panelId=9&var-influxdb_filter=type%7C%3D%7Crdo&var-influxdb_filter=job_name%7C%3D~%7C%2Fperiodic.*-rocky%2F [6] https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Dec 10 17:04:00 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 10 Dec 2020 17:04:00 +0000 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> Message-ID: <20201210170400.ih7kjl7zwpvetz3y@yuggoth.org> On 2020-12-10 15:42:13 +0100 (+0100), Bernard Cafarelli wrote: [...] > This may get complicated to sort out, checking neutron cap [1], it failed > in grenade job when checking out bandit per swift requirements. > So it seems this one will need to be backported from the oldest affected > stable to train, with some "correct order" on packages - though if we need > it on 2 packages at same time to pass gates it may need overall capping? > > [1] https://review.opendev.org/c/openstack/neutron/+/766218 Oh wow, this is the first I've realized devstack installed test-requirements.txt for every project. That's a total mess since projects are totally encouraged to use different versions of test requirements where things like linters and static analyzers are concerned. Can't https://review.opendev.org/715469 be backported? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tonyliu0592 at hotmail.com Thu Dec 10 17:12:42 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 10 Dec 2020 17:12:42 +0000 Subject: [nova-compute] Failed to publish message to topic 'scheduler_fanout' In-Reply-To: <20201210101105.Horde.4IFVBjJCmSqRjCED6jR-qQJ@webmail.nde.ag> References: <20201210101105.Horde.4IFVBjJCmSqRjCED6jR-qQJ@webmail.nde.ag> Message-ID: Connection went down, not sure about the cause, no error from server side. Then successful reconnection followed by nack. Enabled debug in logging for rabbitmq, but restart rabbitmq cluster brought everything back to normal. Will keep logging level on debug and wait for the issue happens again. Thanks! Tony > -----Original Message----- > From: Eugen Block > Sent: Thursday, December 10, 2020 2:11 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: [nova-compute] Failed to publish message to topic > 'scheduler_fanout' > > Hi, > > apparently nova can't reach rabbitmq, I would check rabbit logs and > 'rabbitmqctl cluster_status'. 
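(As a concrete starting point, the checks Eugen suggests, plus a couple of related listings, look something like this on one of the RabbitMQ nodes; the columns listed and the grep filter are only an example, not a prescribed procedure:

  rabbitmqctl cluster_status                          # node membership, partitions, alarms
  rabbitmqctl list_connections user peer_host state   # which clients are connected or reconnecting
  rabbitmqctl list_queues name messages consumers | grep -i scheduler   # depth and consumers of the scheduler fanout queues

together with the RabbitMQ server logs around the time of the MessageNacked errors.)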
> > Regards, > Eugen > > > Zitat von Tony Liu : > > > Hi, > > > > With Ussuri, failed to launch VM and saw such error from nova-compute. > > Didn't see any error in nova-api, nova-conductor and nova-scheduler. > > Did some search and not found any similar cases. > > Wondering anyone has ever hit this issue? > > ================================================= > > 2020-12-09 13:24:29.333 8 ERROR oslo.messaging._drivers.impl_rabbit > > [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] > > [3830bc00-1541-4951-b4ed-e73faafaa78c] AMQP server on > > 10.6.20.21:5672 is unreachable: connection already closed. Trying > > again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: > > connection already closed > > 2020-12-09 13:24:30.361 8 INFO oslo.messaging._drivers.impl_rabbit > > [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] > > [3830bc00-1541-4951-b4ed-e73faafaa78c] Reconnected to AMQP server on > > 10.6.20.21:5672 via [amqp] client with port 55524. > > 2020-12-09 13:26:28.377 8 ERROR oslo.messaging._drivers.impl_rabbit > > [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] Failed to publish > > message to topic 'scheduler_fanout': : > > amqp.exceptions.MessageNacked > > 2020-12-09 13:26:28.377 8 ERROR oslo.messaging._drivers.impl_rabbit > > [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] Unable to connect > > to AMQP server on 10.6.20.21:5672 after inf tries: : > > amqp.exceptions.MessageNacked > > 2020-12-09 13:26:28.378 8 ERROR oslo_service.periodic_task > > [req-4226ac03-a5b7-446a-a74b-2785f9818927 - - - - -] Error during > > ComputeManager._sync_scheduler_instance_info: > > oslo_messaging.exceptions.MessageDeliveryFailure: Unable to connect to > > AMQP server on 10.6.20.21:5672 after inf tries: > > ================================================= > > > > Thanks! > > Tony > > > From elod.illes at est.tech Thu Dec 10 17:29:48 2020 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 10 Dec 2020 18:29:48 +0100 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: <20201210170400.ih7kjl7zwpvetz3y@yuggoth.org> References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> <20201210170400.ih7kjl7zwpvetz3y@yuggoth.org> Message-ID: <8bf19cf4-aaec-61d8-3364-9691ef60c10b@est.tech> That patch looks promising! Thanks Jeremy! We need to be careful though as that could involve some new errors. I've found this mail [1] related to the mentioned patch with some errors and fixes. If that's all, then maybe that is the best way forward to backport these changes. @QA Team, what do you think? Are you aware of other possible issues? Thanks, Előd [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013681.html On 2020. 12. 10. 18:04, Jeremy Stanley wrote: > On 2020-12-10 15:42:13 +0100 (+0100), Bernard Cafarelli wrote: > [...] >> This may get complicated to sort out, checking neutron cap [1], it failed >> in grenade job when checking out bandit per swift requirements. >> So it seems this one will need to be backported from the oldest affected >> stable to train, with some "correct order" on packages - though if we need >> it on 2 packages at same time to pass gates it may need overall capping? >> >> [1] https://review.opendev.org/c/openstack/neutron/+/766218 > Oh wow, this is the first I've realized devstack installed > test-requirements.txt for every project. 
That's a total mess since > projects are totally encouraged to use different versions of test > requirements where things like linters and static analyzers are > concerned. Can't https://review.opendev.org/715469 be backported? From mnaser at vexxhost.com Thu Dec 10 17:48:41 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 10 Dec 2020 12:48:41 -0500 Subject: [TC][all] X Release name polling In-Reply-To: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> References: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> Message-ID: Hi Sean, Thanks for taking care of this entire process. Personally, I recall selecting "Xanadu" as my first option, picking 2 others (which I *honestly* don't remember, to be honest) and leaving the rest ranked 30 equally. Thanks Mohammed On Thu, Dec 10, 2020 at 11:57 AM Sean McGinnis wrote: > > Hey everyone, > > We recently collected naming suggestions for the X release name. A lot of great suggestions by the community! Much more than I had expected for this letter. > > As a reminder, starting with the W release we had changed the process for selecting the name [1]. We collected suggestions from the community, then the members of the TC voted in a poll [2] to select which name(s) out of the suggestions to go with. The vetting of the top choices from that process is happening now, and we should have a official result soon. > > This is a bit of a mea culpa from me about an issue with how this was conducted though. The naming process specifically states: "the poll should be run in a manner that allows members of the community to see what each TC member voted for." When I set up the CIVS poll, I failed to check the box that would allow seeing the detailed results of the poll. So while we do have the winning names, we are not able to see which TC members voted and how. I apologize for missing this step (and I've noted that we really should add some detailed process for future coordinators to follow!). > > I believe the intent with that part of the process was to allow the community to see how your elected TC members voted as one factor to consider when reelecting anyone. Also transparency to show that no one is pushing through their own choices, circumventing any process. > > The two options I see at this point would be to either redo the entire naming poll, or just try to capture what TC members voted for somewhere so we have a record of that. > > It's been long enough now since taking the poll that I don't expect TC members to remember exactly how they ranked things. But we've also started the vetting process through the Foundation (lawyers engaged, etc) so I'd really rather not start over if we can avoid it. If TC members could respond here with what they remember voting for, I hope that is enough to satisfy the spirit of the defined process. > > If there are any members of the community that have a strong objection to this, please say so. I leave it up to the TC then to decide how to proceed. > > Again, apologies for missing this step. Otherwise, I think the process has worked well, and I hope we can declare an official X name shortly. > > Thanks! > > Sean > > [1] https://governance.openstack.org/tc/reference/release-naming.html#release-naming-process > [2] https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7e6e96070af39fe7 -- Mohammed Naser VEXXHOST, Inc. 
From gmann at ghanshyammann.com Thu Dec 10 18:00:38 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 10 Dec 2020 12:00:38 -0600 Subject: [TC][all] X Release name polling In-Reply-To: References: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> Message-ID: <1764dcfe51c.d7ffc31450758.2072165022096449443@ghanshyammann.com> Thanks, Sean for taking care of release naming things, and much appreciate your effort and time for this. I have voted equal rank (30th I think which is abstaining from the vote) to all option as I still stand towards giving voting rights to all community members than just TC. -gmann ---- On Thu, 10 Dec 2020 11:48:41 -0600 Mohammed Naser wrote ---- > Hi Sean, > > Thanks for taking care of this entire process. Personally, I recall > selecting "Xanadu" as my first option, picking 2 others (which I > *honestly* don't remember, to be honest) and leaving the rest ranked > 30 equally. > > Thanks > Mohammed > > On Thu, Dec 10, 2020 at 11:57 AM Sean McGinnis wrote: > > > > Hey everyone, > > > > We recently collected naming suggestions for the X release name. A lot of great suggestions by the community! Much more than I had expected for this letter. > > > > As a reminder, starting with the W release we had changed the process for selecting the name [1]. We collected suggestions from the community, then the members of the TC voted in a poll [2] to select which name(s) out of the suggestions to go with. The vetting of the top choices from that process is happening now, and we should have a official result soon. > > > > This is a bit of a mea culpa from me about an issue with how this was conducted though. The naming process specifically states: "the poll should be run in a manner that allows members of the community to see what each TC member voted for." When I set up the CIVS poll, I failed to check the box that would allow seeing the detailed results of the poll. So while we do have the winning names, we are not able to see which TC members voted and how. I apologize for missing this step (and I've noted that we really should add some detailed process for future coordinators to follow!). > > > > I believe the intent with that part of the process was to allow the community to see how your elected TC members voted as one factor to consider when reelecting anyone. Also transparency to show that no one is pushing through their own choices, circumventing any process. > > > > The two options I see at this point would be to either redo the entire naming poll, or just try to capture what TC members voted for somewhere so we have a record of that. > > > > It's been long enough now since taking the poll that I don't expect TC members to remember exactly how they ranked things. But we've also started the vetting process through the Foundation (lawyers engaged, etc) so I'd really rather not start over if we can avoid it. If TC members could respond here with what they remember voting for, I hope that is enough to satisfy the spirit of the defined process. > > > > If there are any members of the community that have a strong objection to this, please say so. I leave it up to the TC then to decide how to proceed. > > > > Again, apologies for missing this step. Otherwise, I think the process has worked well, and I hope we can declare an official X name shortly. > > > > Thanks! 
> > > > Sean > > > > [1] https://governance.openstack.org/tc/reference/release-naming.html#release-naming-process > > [2] https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7e6e96070af39fe7 > > > > -- > Mohammed Naser > VEXXHOST, Inc. > > From jungleboyj at gmail.com Thu Dec 10 18:07:03 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Thu, 10 Dec 2020 12:07:03 -0600 Subject: [TC][all] X Release name polling In-Reply-To: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> References: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> Message-ID: Sean, I echo the other TC members in thanks for you leading up this process and doing it with transparency! I voted for three top options, I believe, leaving the rest ranked as 30th.  I believe they were:     1.  Xenoblast     2.  Xenomorph     3.  Xenith Why not Xanadu from me, people may ask?  Well, honestly, because I didn't want the song stuck in my head for 6 months. Thanks! Jay On 12/10/2020 10:53 AM, Sean McGinnis wrote: > > Hey everyone, > > We recently collected naming suggestions for the X release name. A lot > of great suggestions by the community! Much more than I had expected > for this letter. > > As a reminder, starting with the W release we had changed the process > for selecting the name [1]. We collected suggestions from the > community, then the members of the TC voted in a poll [2] to select > which name(s) out of the suggestions to go with. The vetting of the > top choices from that process is happening now, and we should have a > official result soon. > > This is a bit of a mea culpa from me about an issue with how this was > conducted though. The naming process specifically states: "the poll > should be run in a manner that allows members of the community to see > what each TC member voted for." When I set up the CIVS poll, I failed > to check the box that would allow seeing the detailed results of the > poll. So while we do have the winning names, we are not able to see > which TC members voted and how. I apologize for missing this step (and > I've noted that we really should add some detailed process for future > coordinators to follow!). > > I believe the intent with that part of the process was to allow the > community to see how your elected TC members voted as one factor to > consider when reelecting anyone. Also transparency to show that no one > is pushing through their own choices, circumventing any process. > > The two options I see at this point would be to either redo the entire > naming poll, or just try to capture what TC members voted for > somewhere so we have a record of that. > > It's been long enough now since taking the poll that I don't expect TC > members to remember exactly how they ranked things. But we've also > started the vetting process through the Foundation (lawyers engaged, > etc) so I'd really rather not start over if we can avoid it. If TC > members could respond here with what they remember voting for, I hope > that is enough to satisfy the spirit of the defined process. > > If there are any members of the community that have a strong objection > to this, please say so. I leave it up to the TC then to decide how to > proceed. > > Again, apologies for missing this step. Otherwise, I think the process > has worked well, and I hope we can declare an official X name shortly. > > Thanks! 
> > Sean > > [1] > https://governance.openstack.org/tc/reference/release-naming.html#release-naming-process > [2] https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7e6e96070af39fe7 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Dec 10 18:35:47 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 10 Dec 2020 18:35:47 +0000 Subject: [TC][all] X Release name polling In-Reply-To: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> References: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> Message-ID: <9bae66325c414e33b6b62d8217dad1e4a23caf6e.camel@redhat.com> On Thu, 2020-12-10 at 10:53 -0600, Sean McGinnis wrote: > Hey everyone, > > We recently collected naming suggestions for the X release name. A lot > of great suggestions by the community! Much more than I had expected for > this letter. > > As a reminder, starting with the W release we had changed the process > for selecting the name [1]. We collected suggestions from the community, > then the members of the TC voted in a poll [2] to select which name(s) > out of the suggestions to go with. > wait i tough that we still had a comuntiy poll where every one could vote on the name is that coming next? i was waitign for a vote link. i was assumign i had missed it as i have and issue with not reciving vote link in the past for tc votes or ptl elections that is partly due to the fact that i dont think the curernt process looks at all the emales on my account and since i submit code with a different email then the first one listed it does not track that properly eventhough the one i use for code is in the alternative emails. in anycase i tought there was still a comuntiy poll afgter the inital list is narrowed down. delegatign this to the TC feels like a regression from what we previously did. > The vetting of the top choices from > that process is happening now, and we should have a official result soon. > > This is a bit of a mea culpa from me about an issue with how this was > conducted though. The naming process specifically states: "the poll > should be run in a manner that allows members of the community to see > what each TC member voted for." When I set up the CIVS poll, I failed to > check the box that would allow seeing the detailed results of the poll. > So while we do have the winning names, we are not able to see which TC > members voted and how. I apologize for missing this step (and I've noted > that we really should add some detailed process for future coordinators > to follow!). > > I believe the intent with that part of the process was to allow the > community to see how your elected TC members voted as one factor to > consider when reelecting anyone. Also transparency to show that no one > is pushing through their own choices, circumventing any process. > > The two options I see at this point would be to either redo the entire > naming poll, or just try to capture what TC members voted for somewhere > so we have a record of that. > > It's been long enough now since taking the poll that I don't expect TC > members to remember exactly how they ranked things. But we've also > started the vetting process through the Foundation (lawyers engaged, > etc) so I'd really rather not start over if we can avoid it. If TC > members could respond here with what they remember voting for, I hope > that is enough to satisfy the spirit of the defined process. > > If there are any members of the community that have a strong objection > to this, please say so. 
I leave it up to the TC then to decide how to > proceed. > > Again, apologies for missing this step. Otherwise, I think the process > has worked well, and I hope we can declare an official X name shortly. > > Thanks! > > Sean > > [1] > https://governance.openstack.org/tc/reference/release-naming.html#release-naming-process > [2] https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7e6e96070af39fe7 > From smooney at redhat.com Thu Dec 10 18:41:04 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 10 Dec 2020 18:41:04 +0000 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: <20201210170400.ih7kjl7zwpvetz3y@yuggoth.org> References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> <20201210170400.ih7kjl7zwpvetz3y@yuggoth.org> Message-ID: On Thu, 2020-12-10 at 17:04 +0000, Jeremy Stanley wrote: > On 2020-12-10 15:42:13 +0100 (+0100), Bernard Cafarelli wrote: > [...] > > This may get complicated to sort out, checking neutron cap [1], it failed > > in grenade job when checking out bandit per swift requirements. > > So it seems this one will need to be backported from the oldest affected > > stable to train, with some "correct order" on packages - though if we need > > it on 2 packages at same time to pass gates it may need overall capping? > > > > [1] https://review.opendev.org/c/openstack/neutron/+/766218 > > Oh wow, this is the first I've realized devstack installed > test-requirements.txt for every project. > Yep, I have tried to stop it doing that a few times, but apparently some projects rely on that, which causes issues. Eventually https://review.opendev.org/c/openstack/devstack/+/715469/ did make that change, and where we can backport it I would be in favor of that, but this is not the first time that installing test requirements has broken deployment due to linters. In particular it has broken the compilation of DPDK and OVS, where the default linter configuration broke make since it ran the tests and the style check failed. > That's a total mess since > projects are totally encouraged to use different versions of test > requirements where things like linters and static analyzers are > concerned. Can't https://review.opendev.org/715469 be backported? From gouthampravi at gmail.com Thu Dec 10 18:42:47 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 10 Dec 2020 10:42:47 -0800 Subject: [manila] No IRC meeting on 24th and 31st Dec 2020 Message-ID: Hi there zorillas, As discussed during the weekly IRC meeting today, we're not going to meet on 24th and 31st December due to these dates coinciding with holidays in many parts of the world where our contributors live. If you have concerns in the meantime, please let us know on this mailing list, or on the #openstack-manila IRC channel on freenode. We're meeting on IRC as usual at 1500 UTC next Thursday, 17th Dec 2020 on #openstack-meeting-alt. [1] Since we'll be in a holiday mood, we'd love to see you all in ugly sweaters and mismatched socks during this meeting. Thanks, Goutham [1] https://wiki.openstack.org/wiki/Manila/Meetings -------------- next part -------------- An HTML attachment was scrubbed...
URL: From smooney at redhat.com Thu Dec 10 18:50:06 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 10 Dec 2020 18:50:06 +0000 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> <20201210170400.ih7kjl7zwpvetz3y@yuggoth.org> Message-ID: <4b905ec864267e6b0e2f73f74de7b83370f84e97.camel@redhat.com> On Thu, 2020-12-10 at 18:41 +0000, Sean Mooney wrote: > On Thu, 2020-12-10 at 17:04 +0000, Jeremy Stanley wrote: > > On 2020-12-10 15:42:13 +0100 (+0100), Bernard Cafarelli wrote: > > [...] > > > This may get complicated to sort out, checking neutron cap [1], it failed > > > in grenade job when checking out bandit per swift requirements. > > > So it seems this one will need to be backported from the oldest affected > > > stable to train, with some "correct order" on packages - though if we need > > > it on 2 packages at same time to pass gates it may need overall capping? > > > > > > [1] https://review.opendev.org/c/openstack/neutron/+/766218 > > > > Oh wow, this is the first I've realized devstack installed > > test-requirements.txt for every project. > > > Yep, I have tried to stop it doing that a few times, but apparently some projects > rely on that, which causes issues. Eventually https://review.opendev.org/c/openstack/devstack/+/715469/ > did make that change, and where we can backport it I would be in favor of that, but > this is not the first time that installing test requirements has broken deployment due to linters. > In particular it has broken the compilation of DPDK and OVS, where the default linter configuration > broke make since it ran the tests and the style check failed. https://review.opendev.org/c/openstack/nova/+/445622 is what I was referring to. When we added flake8-import-order to nova's test-requirements.txt it broke networking-ovs-dpdk, and it would have broken the neutron OVN jobs if they had existed at that time. It broke the compilation of OVS, since they don't enforce the same import ordering, and that caused the build test to fail. > > > That's a total mess since > > projects are totally encouraged to use different versions of test > > requirements where things like linters and static analyzers are > > concerned. Can't https://review.opendev.org/715469 be backported? > > From zigo at debian.org Thu Dec 10 19:47:34 2020 From: zigo at debian.org (Thomas Goirand) Date: Thu, 10 Dec 2020 20:47:34 +0100 Subject: New Openstack Deployment questions In-Reply-To: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> Message-ID: <994dd459-1370-4cc4-9065-6b718e3bc552@debian.org> On 12/10/20 2:46 PM, Thomas Wakefield wrote: > OpenStack deployment questions: > > If you were starting a new deployment of OpenStack today what OS would > you use, and what tools would you use for deployment? We were thinking > CentOS with Kayobe, but then CentOS changed their support plans, and I > am hesitant to start a new project with CentOS. We do have access to > RHEL licensing so that might be an option. We have also looked at > OpenStack-Ansible for deployment. Thoughts? > > Thanks in advance. -Tom Hi Thomas, Did you consider using Debian and OCI [1]? I've just deployed my 8th cluster in production with it, this time using floating IP for routed networks [2].
I'm of course biased in my answer because I'm the package maintainer and the main author of OCI, but he... please give it a try! One of the main point is that what happened with CentOS has no chance to happen in Debian (no vendor lock-in). Cheers, Thomas Goirand (zigo) [1] https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer [2] https://review.opendev.org/c/openstack/neutron/+/669395 From kennelson11 at gmail.com Thu Dec 10 20:21:26 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 10 Dec 2020 12:21:26 -0800 Subject: Fwd: [SAVE THE DATE] End of Year Contributor Celebration Event In-Reply-To: References: Message-ID: Hello Everyone! Wanted to take a moment to extend this invite to the k8s end of year celebration to the OpenStack community :) Their TOC asked me to invite our TC, but since they call out wanting both friends and family to attend, mnaser and I thought to invite all of you as well :) Hope to see you there! -Kendall Nelson (diablo_rojo) ---------- Forwarded message --------- From: Alison Dowdney Date: Fri, Nov 13, 2020 at 1:53 PM Subject: [SAVE THE DATE] End of Year Contributor Celebration Event To: , Hey there everyone, Thanks for your feedback! When we asked what you wanted to do for a “contributor summit”, most of you overwhelmingly chose to do something social to bring the community together. With that, we’d like to announce that this year’s “summit” will be a Contributor Celebration. There will be 🎂 TL;DR - “Hallway track” event after 1.20 release, December 10-13th - Event will use Discord [1] - Games and other social activities (taking suggestions) - Hangouts and hackathons - Friends and family are invited - Event website [2] - Registration form [3] It's been a rough year, and without any in person events we've lost the best part of the Contributor Summits - the Hallway Track. The Kubernetes Contributor Celebration is an attempt to reclaim that and celebrate our accomplishments. It's a time for us to relax, chat, and do something fun with your fellow contributors! 🏖 We realize that some of you may not be into games/trivia, so this year we’re going full casual. This isn’t meant to be stressful or yet-another-virtual event. If you just want to come and hang out in chat, or listen in on the trivia, then that’s ok. We’re also leaving enough open options to do fun things, for example someone wanted to play a saxophone for the community, that’s totally an option 🎷. The event website [2] will be the source of truth for all event information with updates being sent to the list. [1]: https://discord.com [2]: https://k8s.dev/celebration [3]: https://forms.gle/51tqQgxuHxLaeU1P8 Many Thanks, Alison on behalf of the Contributor Celebration Team -- You received this message because you are subscribed to the Google Groups "Kubernetes developer/contributor discussion" group. To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-dev+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-dev/CALLkKiDfDah-YykTkyxKsC-nOs9pap0Tp0Hx2LhS2w8EAvNYDw%40mail.gmail.com . -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Thu Dec 10 20:23:00 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 10 Dec 2020 20:23:00 +0000 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> <20201210170400.ih7kjl7zwpvetz3y@yuggoth.org> Message-ID: <20201210202259.va5lrtpemb3jyfqt@yuggoth.org> On 2020-12-10 18:41:04 +0000 (+0000), Sean Mooney wrote: [...] > yep i have tried to stop it doing that a few times but apparently > some project rely on that which causes issue. eventually > https://review.opendev.org/c/openstack/devstack/+/715469/ did make > that change and where we can backport it i would be in favor of > that but this is not the first time that installing test > requiremetn has broken dpeloyment due to linters. in partical it > has broken the compliation of dpdk and ovs where the default > linter configruution broke make sicne it ran the test and style > check failed. [...] Maybe an alternative would be to have DevStack reuse the filter from openstack/requirements which is what we use to prevent including these tools in the upper-constraints.txt set? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sean.mcginnis at gmx.com Thu Dec 10 20:23:39 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 10 Dec 2020 14:23:39 -0600 Subject: [TC][all] X Release name polling In-Reply-To: <9bae66325c414e33b6b62d8217dad1e4a23caf6e.camel@redhat.com> References: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> <9bae66325c414e33b6b62d8217dad1e4a23caf6e.camel@redhat.com> Message-ID: On 12/10/20 12:35 PM, Sean Mooney wrote: > On Thu, 2020-12-10 at 10:53 -0600, Sean McGinnis wrote: >> Hey everyone, >> >> We recently collected naming suggestions for the X release name. A lot >> of great suggestions by the community! Much more than I had expected for >> this letter. >> >> As a reminder, starting with the W release we had changed the process >> for selecting the name [1]. We collected suggestions from the community, >> then the members of the TC voted in a poll [2] to select which name(s) >> out of the suggestions to go with. >> > wait i tough that we still had a comuntiy poll where every one could vote > on the name is that coming next? i was waitign for a vote link. > > i was assumign i had missed it as i have and issue with not reciving vote > link in the past for tc votes or ptl elections > > that is partly due to the fact that i dont think the curernt process looks at all > the emales on my account and since i submit code with a different email then the first > one listed it does not track that properly eventhough the one i use for code is in the alternative > emails. > > in anycase i tought there was still a comuntiy poll afgter the inital list is narrowed down. > delegatign this to the TC feels like a regression from what we previously did. 
> That process was changed last year with this TC resolution change: https://review.opendev.org/c/openstack/governance/+/695071 And announced along with the start of the W release naming: http://lists.openstack.org/pipermail/openstack-discuss/2020-January/012123.html From fungi at yuggoth.org Thu Dec 10 20:32:45 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 10 Dec 2020 20:32:45 +0000 Subject: [TC][all] X Release name polling In-Reply-To: <9bae66325c414e33b6b62d8217dad1e4a23caf6e.camel@redhat.com> References: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> <9bae66325c414e33b6b62d8217dad1e4a23caf6e.camel@redhat.com> Message-ID: <20201210203245.6uqjzci3v6ol7uc4@yuggoth.org> On 2020-12-10 18:35:47 +0000 (+0000), Sean Mooney wrote: [...] > wait i tough that we still had a comuntiy poll where every one > could vote on the name is that coming next? i was waitign for a > vote link. [...] The current process can be found here: https://governance.openstack.org/tc/reference/release-naming.html This is the same process used for the "W" cycle (which selected Wallaby). The change in process came at the end of a lengthy and heated debate of various potential replacement processes, and was implemented with https://review.opendev.org/695071 which merged a year ago tomorrow. I won't attempt to summarize the challenges and issues here, but the primary reason to change it was that the TC had a hard time sticking to the previous process for a variety of reasons. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kennelson11 at gmail.com Thu Dec 10 20:33:14 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 10 Dec 2020 12:33:14 -0800 Subject: [TC][all] X Release name polling In-Reply-To: References: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> Message-ID: Hello! Continuing the transparency theme since that was a big part of why we were okay with just the TC voting. I voted for Xenon in first and I *think* I put Xerxes as second and left the rest at the default lowest value. -Kendall (diablo_rojo) On Thu, Dec 10, 2020 at 10:08 AM Jay Bryant wrote: > Sean, > > I echo the other TC members in thanks for you leading up this process and > doing it with transparency! > > I voted for three top options, I believe, leaving the rest ranked as > 30th. I believe they were: > > 1. Xenoblast > > 2. Xenomorph > > 3. Xenith > > Why not Xanadu from me, people may ask? Well, honestly, because I didn't > want the song stuck in my head for 6 months. > > Thanks! > > Jay > > > On 12/10/2020 10:53 AM, Sean McGinnis wrote: > > Hey everyone, > > We recently collected naming suggestions for the X release name. A lot of > great suggestions by the community! Much more than I had expected for this > letter. > > As a reminder, starting with the W release we had changed the process for > selecting the name [1]. We collected suggestions from the community, then > the members of the TC voted in a poll [2] to select which name(s) out of > the suggestions to go with. The vetting of the top choices from that > process is happening now, and we should have a official result soon. > > This is a bit of a mea culpa from me about an issue with how this was > conducted though. The naming process specifically states: "the poll > should be run in a manner that allows members of the community to see what > each TC member voted for." 
When I set up the CIVS poll, I failed to check > the box that would allow seeing the detailed results of the poll. So while > we do have the winning names, we are not able to see which TC members voted > and how. I apologize for missing this step (and I've noted that we really > should add some detailed process for future coordinators to follow!). > > I believe the intent with that part of the process was to allow the > community to see how your elected TC members voted as one factor to > consider when reelecting anyone. Also transparency to show that no one is > pushing through their own choices, circumventing any process. > > The two options I see at this point would be to either redo the entire > naming poll, or just try to capture what TC members voted for somewhere so > we have a record of that. > > It's been long enough now since taking the poll that I don't expect TC > members to remember exactly how they ranked things. But we've also started > the vetting process through the Foundation (lawyers engaged, etc) so I'd > really rather not start over if we can avoid it. If TC members could > respond here with what they remember voting for, I hope that is enough to > satisfy the spirit of the defined process. > > If there are any members of the community that have a strong objection to > this, please say so. I leave it up to the TC then to decide how to proceed. > > Again, apologies for missing this step. Otherwise, I think the process has > worked well, and I hope we can declare an official X name shortly. > > Thanks! > > Sean > > [1] > https://governance.openstack.org/tc/reference/release-naming.html#release-naming-process > [2] https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7e6e96070af39fe7 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Thu Dec 10 20:51:26 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 10 Dec 2020 21:51:26 +0100 Subject: New Openstack Deployment questions In-Reply-To: References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> Message-ID: <4677519.0VBMTVartN@whitebase.usersys.redhat.com> On Thursday, 10 December 2020 15:27:40 CET Satish Patel wrote: > I just built a new openstack using openstack-ansible on CentOS 8.2 > last month before news broke out. I have no choice so i am going to > stick with CentOS. > > What is the future of RDO and EPEL repo if centOS going away. ? Continue as before on CentOS Stream. -- Luigi From lyarwood at redhat.com Thu Dec 10 22:21:34 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 10 Dec 2020 22:21:34 +0000 Subject: [all][stable] bandit 1.6.3 drops py2 support In-Reply-To: References: <20201209135904.3npvtzwzldsgot6c@lyarwood-laptop.usersys.redhat.com> <20201209144006.d4yxdyv5sng5bl5l@yuggoth.org> <20201209154133.fr5js3b5yow73aue@lyarwood-laptop.usersys.redhat.com> <20201210170400.ih7kjl7zwpvetz3y@yuggoth.org> Message-ID: <20201210222134.weh4billpiomrxoy@lyarwood-laptop.usersys.redhat.com> On 10-12-20 18:41:04, Sean Mooney wrote: > On Thu, 2020-12-10 at 17:04 +0000, Jeremy Stanley wrote: > > On 2020-12-10 15:42:13 +0100 (+0100), Bernard Cafarelli wrote: > > [...] > > > This may get complicated to sort out, checking neutron cap [1], it failed > > > in grenade job when checking out bandit per swift requirements. > > > So it seems this one will need to be backported from the oldest affected > > > stable to train, with some "correct order" on packages - though if we need > > > it on 2 packages at same time to pass gates it may need overall capping? 
> > > > > > [1] https://review.opendev.org/c/openstack/neutron/+/766218 > > > > Oh wow, this is the first I've realized devstack installed > > test-requirements.txt for every project. > > > > Yep, I have tried to stop it doing that a few times, but apparently some > projects rely on that, which causes issues. Eventually > https://review.opendev.org/c/openstack/devstack/+/715469/ did make > that change, and where we can backport it I would be in favor of that, > but this is not the first time that installing test requirements has > broken deployment due to linters. In particular it has broken the > compilation of DPDK and OVS, where the default linter configuration > broke make since it ran the tests and the style check failed. > > > That's a total mess since projects are totally encouraged to use > > different versions of test requirements where things like linters > > and static analyzers are concerned. Can't > > https://review.opendev.org/715469 be backported? Thanks for the pointer Jeremy! I've started that below; we will need to land this in reverse from stable/pike, however, to appease the grenade jobs: https://review.opendev.org/q/I8f24b839bf42e2fb9803dc7df3a30ae20cf264eb Cheers -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From emiller at genesishosting.com Thu Dec 10 22:41:37 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Thu, 10 Dec 2020 16:41:37 -0600 Subject: Multiple VM ports on provider networks / dhcp fails Message-ID: <046E9C0290DD9149B106B72FC9156BEA04814DAD@gmsxchsvr01.thecreation.com> Hi, We ran into an issue recently that I haven't been able to figure out (this environment is on Stein, which uses DVR). It involves an environment with many provider networks that connect to a legacy environment, as well as many tenant networks. We create a server on a tenant network and add ports to it afterwards, with each port attached to its respective provider network. The provider networks have "None" for the "gateway" property, so we don't have multiple default routes added to the routing table. We have a single subnet which has DHCP enabled and a proper allocation pool. Note that this is a CentOS 8 server we are testing with, which has "no" ifcfg files for the additional ports, so we rely on CentOS using DHCP by default. After adding the first port, the DHCP client does its job and requests an IP, which succeeds - so the DHCP server in the respective network namespace is responding fine. However, after adding the second port, the DHCP client is sending requests out the network (we can see this traffic when tcpdump'ing the respective tap interface on the host), but the DHCP server is not replying. After looking at Open vSwitch flows, it appears that DHCP broadcast traffic is being dropped for the second network. This does NOT happen with tenant networks. I can add multiple ports, each connected to its respective tenant network, and an IP is assigned to each interface that appears in CentOS immediately after the port has been added. Is there something special that is blocking the creation of the DHCP flow for subsequent provider network ports? NOTE that non-DHCP traffic flows fine if we create an ifcfg file with a static IP set to the address that DHCP should have set the interface to - it just appears that DHCP traffic is not flowing to the DHCP namespace for second and subsequent ports/networks. Thanks! Eric
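One rough way to narrow down where the DHCP requests are being dropped is to watch both ends of the path; the network name, namespace prefix and VLAN filter below are generic placeholders, not values taken from the environment described above:

  # On the controller/network node: find the DHCP namespace for the second
  # provider network and check whether the discover/request packets ever
  # arrive inside it (dnsmasq listens in the qdhcp-<network-id> namespace)
  $ NET_ID=$(openstack network show provider-net-2 -f value -c id)
  $ sudo ip netns exec qdhcp-$NET_ID ip addr
  $ sudo ip netns exec qdhcp-$NET_ID tcpdump -ni any port 67 or port 68

  # On the compute host: dump the integration bridge flows and look at the
  # entries for that provider network's VLAN tag to see where broadcasts die
  $ sudo ovs-ofctl dump-flows br-int | grep dl_vlan

If the requests never show up inside the qdhcp namespace, the drop is happening in the bridge flows between the instance and the DHCP agent rather than in dnsmasq itself, which would match the behaviour described above.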
URL: From gmann at ghanshyammann.com Fri Dec 11 01:13:06 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 10 Dec 2020 19:13:06 -0600 Subject: [all][tc][policy] Progress report of consistent and secure default policies pop-up team Message-ID: <1764f5bd2b9.12845045060055.3550713092336622502@ghanshyammann.com> Hello Everyone, Please find this month's progress on 'Consistent and Secure Default Policies Popup Team'. Meeting notes: ============ * We discussed progress on policy format migration from JSON->YAML and updates required in oslo.upgradecheck is merged now. * As you saw in another email thread by Lance[1], he has started the work on many projects. We will be adding the test coverage also in those, I will be able to help lance on some of them in next month. * We will define the common personas in oslo.policy for reusing it on the service side[2] * Below are the 'Action items, by person': ** gmann to check with abhishekk on glance point in meeting agenda ** gmann to push common persona on oslo policy and release 3.6.1 and lbragstad to review that ** lbragstad/gmann to push common persona on Oslo policy and release 3.6.1 and lbragstad to review that ** lbragstad to finish placement as first ** raildo to update https://review.opendev.org/#/c/743318/ Progress so far: ============ * Popup team meet twice in a month and discuss and work on progress and pre-work to do. - https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting * Pre-work to provide a smooth migration path to the new policy ** Migrate Default Policy Format from JSON to YAML - This is now a community-wide goal, refer my separate ML thread for progress - https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) ** Improving documentation about target resources (oslo.policy) - https://bugs.launchpad.net/oslo.policy/+bug/1886857 - raildo pushed the patch which is under review: https://review.opendev.org/#/c/743318/ * Team Progress: (list of a team interested or have volunteer to work) ** Keystone (COMPLETED; use as a reference) ** Nova (COMPLETED; use as a reference) ** Cyborg (COMPLETED) ** Work started in other projects *** https://review.opendev.org/q/topic:%22secure-rbac%22+(status:open%20OR%20status:merged) [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019313.html [2] https://review.opendev.org/c/openstack/oslo.policy/+/766536 -gmann From gmann at ghanshyammann.com Fri Dec 11 01:15:13 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 10 Dec 2020 19:15:13 -0600 Subject: Secure RBAC work In-Reply-To: References: Message-ID: <1764f5dc30b.ce68476360070.2800550259733297462@ghanshyammann.com> ---- On Wed, 09 Dec 2020 14:04:57 -0600 Lance Bragstad wrote ---- > Hey everyone, > > I wanted to take an opportunity to clarify some work we have been doing upstream, specifically modifying the default policies across projects. > > These changes are the next phase of an initiative that’s been underway since Queens to fix some long-standing security concerns in OpenStack [0]. For context, we have been gradually improving policy enforcement for years. 
We started by improving policy formats, registering default policies into code [1], providing better documentation for policy writers, implementing necessary identity concepts in keystone [2], developing support for those concepts in libraries [3][4][5][6][7][8], and consuming all of those changes to provide secure default policies in a way operators can consume and roll out to their users [9][10]. > > All of this work is in line with some high-level documentation we started writing about three years ago [11][12][13]. > > There are a handful of services that have implemented the goals that define secure RBAC by default, but a community-wide goal is still out-of-reach. To help with that, the community formed a pop-up team with a focused objective and disbanding criteria [14]. > > The work we currently have in progress [15] is an attempt to start applying what we have learned from existing implementations to other projects. The hope is that we can complete the work for even more projects in Wallaby. Most deployers looking for this functionality won't be able to use it effectively until all services in their deployment support it. Thanks, Lance for pushing this work forwards. I completely agree and that is what we get feedback in forum sessions also that we should implement this in all the services first before we ask operators to move their cloud to the new RBAC. We discussed these in today's policy-popup meeting also and encourage every project to help in those patches to add tests and review. This will help to finish the work on priority and we can provide better RBAC experience to the deployer. -gmann > > > I hope this helps clarify or explain the patches being proposed. > > > As always, I'm happy to elaborate on specific concerns if folks have them. 
> > > Thanks, > > > Lance > > > [0] https://bugs.launchpad.net/keystone/+bug/968696/ > [1] https://governance.openstack.org/tc/goals/selected/queens/policy-in-code.html > [2] https://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html > [3] https://review.opendev.org/c/openstack/keystoneauth/+/529665 > [4] https://review.opendev.org/c/openstack/python-keystoneclient/+/524415 > [5] https://review.opendev.org/c/openstack/oslo.context/+/530509 > [6] https://review.opendev.org/c/openstack/keystonemiddleware/+/564072 > [7] https://review.opendev.org/c/openstack/oslo.policy/+/578995 > [8] https://review.opendev.org/q/topic:%22system-scope%22+(status:open%20OR%20status:merged) > [9] https://review.opendev.org/q/status:merged+topic:bp/policy-defaults-refresh+branch:master > [10] https://review.opendev.org/q/topic:%22implement-default-roles%22+(status:open%20OR%20status:merged) > [11] https://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/policy-goals-and-roadmap.html > [12] https://docs.openstack.org/keystone/latest/admin/service-api-protection.html > [13] https://docs.openstack.org/keystone/latest/contributor/services.html#authorization-scopes > [14] https://governance.openstack.org/tc/reference/popup-teams.html#secure-default-policies > [15] https://review.opendev.org/q/topic:%2522secure-rbac%2522+status:open > From gmann at ghanshyammann.com Fri Dec 11 01:33:04 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 10 Dec 2020 19:33:04 -0600 Subject: [all][policy] Canceling policy-popup's 24th Dec meeting Message-ID: <1764f6e194e.10c10e75060175.1955510518561504701@ghanshyammann.com> Hello Everyone, Due to new year vacations, we decided to cancel the policy popup 24th Dec meeting and will resume meeting from 7th Jan onwards. -gmann From balazs.gibizer at est.tech Fri Dec 11 08:59:44 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 11 Dec 2020 09:59:44 +0100 Subject: [nova] weekly meeting is canceled during the holidays Message-ID: Hi, As we agreed on the weekly meeting the nova meeting schedule will look like the following during the next weeks: Dec 17 2020 16:00 UTC, #openstack-meeting-3 Dec 24 2020 16:00 UTC, Cancelled Dec 31 2020 16:00 UTC, Cancelled Jan 7 2021 16:00 UTC, #openstack-meeting-3 Cheers, gibi From bcafarel at redhat.com Fri Dec 11 09:20:35 2020 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Fri, 11 Dec 2020 10:20:35 +0100 Subject: [stable][requirements][neutron] Capping pip in stable branches or not Message-ID: Hi, now that master branches have recovered from pip new resolver use, I started looking at stable branches status. tl;dr for those with open pending backports, all branches are broken at the moment so please do not recheck. Thinking about fixing gates for these branches, older EM branches may be fine once the bandit 1.6.3 issue [1] is sorted out, but most need a fix against the new pip resolver. pip has a flag to switch back to old resolver, but this is a temporary one that will only be there for a few weeks [2] >From a quick IRC chat, the general guidance for us was always to leave pip uncapped, and the new resolver issues are actually broken requirements. But looking at master fixes, these indicate large and complicated changes on requirements and lower-contraints. Neutron fix [3] required a few major linter bumps and major version bumps in l-c. I guess it may be doable as victoria backport, but this will be messy for previous branches. 
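As a rough way to reproduce these failures locally before proposing a backport (the project and branch below are only examples, not taken from the gate logs), the same kind of pip invocation the jobs run can be done by hand:

  $ git clone https://opendev.org/openstack/neutron -b stable/victoria
  $ cd neutron
  $ python3 -m venv .venv && . .venv/bin/activate
  $ pip install --upgrade 'pip>=20.3'
  # pip 20.3+ errors out on pins that genuinely conflict, where the old
  # resolver silently installed whichever version it happened to pick first
  $ pip install -c lower-constraints.txt -r requirements.txt -r test-requirements.txt

Running the same install in a second venv with 'pip<20.3' shows which constraints were only ever satisfied by accident.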
ovn-octavia-provider is a scarier example [4], from stable point of view the change by itself does not look good for backport, even just for victoria. Also, in master, some fixes were possible by bumping versions on dependencies, but how to fix them if the max possible versions have broken deps themselves? So, how do we proceed to fix stable gates? Ideas and feedback will be most welcome [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019292.html [2] http://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html [3] https://review.opendev.org/c/openstack/neutron/+/766000 [4] https://review.opendev.org/c/openstack/ovn-octavia-provider/+/765872/32/lower-constraints.txt -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Fri Dec 11 09:37:04 2020 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Fri, 11 Dec 2020 09:37:04 +0000 Subject: New Openstack Deployment questions In-Reply-To: <994dd459-1370-4cc4-9065-6b718e3bc552@debian.org> References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> <994dd459-1370-4cc4-9065-6b718e3bc552@debian.org> Message-ID: <20201211093704.GH8890@sync> Hey, Yes, debian packages are very nice, thank you zigo for all you did! Moreover, you can also take a look at what canonical did with ubuntu cloud archives. Packages are working very well and openstack documentation is explaining quite easily how to deploy them. I dont know, but maybe it's based on what you did for debian zigo? Cheers, -- Arnaud Morin On 10.12.20 - 20:47, Thomas Goirand wrote: > On 12/10/20 2:46 PM, Thomas Wakefield wrote: > > OpenStack deployment questions:  > > > > If you were starting a new deployment of OpenStack today what OS would > > you use, and what tools would you use for deployment?  We were thinking > > CentOS with Kayobe, but then CentOS changed their support plans, and I > > am hesitant to start a new project with CentOS.  We do have access to > > RHEL licensing so that might be an option.  We have also looked at > > OpenStack-Ansible for deployment.  Thoughts?  > > > > Thanks in advance.  -Tom > > Hi Thomas, > > Did you consider using Debian and OCI [1] ? I've just deployed my 8th > cluster in production with it, this time using floating IP for routed > networks [2]. I'm of course biased in my answer because I'm the package > maintainer and the main author of OCI, but he... please give it a try! > One of the main point is that what happened with CentOS has no chance to > happen in Debian (no vendor lock-in). > > Cheers, > > Thomas Goirand (zigo) > > [1] > https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer > [2] https://review.opendev.org/c/openstack/neutron/+/669395 > From stephenfin at redhat.com Fri Dec 11 10:42:55 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 11 Dec 2020 10:42:55 +0000 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: References: Message-ID: <8c411723c56ff5ee48b238b16882b009e0ae5a2a.camel@redhat.com> On Fri, 2020-12-11 at 10:20 +0100, Bernard Cafarelli wrote: > Hi, > > now that master branches have recovered from pip new resolver use, I started > looking at stable branches status. tl;dr for those with open pending > backports, all branches are broken at the moment so please do not recheck. > > Thinking about fixing gates for these branches, older EM branches may be fine > once the bandit 1.6.3 issue [1] is sorted out, but most need a fix against the > new pip resolver. 
> > pip has a flag to switch back to old resolver, but this is a temporary one > that will only be there for a few weeks [2] > > From a quick IRC chat, the general guidance for us was always to leave pip > uncapped, and the new resolver issues are actually broken requirements. > > But looking at master fixes, these indicate large and complicated changes on > requirements and lower-contraints. Neutron fix [3] required a few major linter > bumps and major version bumps in l-c. I guess it may be doable as victoria > backport, but this will be messy for previous branches. To make this effort slightly simpler, is there any reason we couldn't drag linters out of 'test-requirements.txt' across the board? Those seem to be the most problematic from what I've seen and they're generally not required to use the project nor to run tests. The exception to this rule is projects that have custom hacking plugins and tests for same, in which case I'm not yet sure what to do. > ovn-octavia-provider is a scarier example [4], from stable point of view the > change by itself does not look good for backport, even just for victoria. > > Also, in master, some fixes were possible by bumping versions on dependencies, > but how to fix them if the max possible versions have broken deps themselves? > > So, how do we proceed to fix stable gates? Ideas and feedback will be most > welcome This isn't a proper answer, but are there any circumstances where it would be possible to get a functioning deployment using the supposedly incorrect dependencies in lower-constraints.txt right now? For example, considering [4], would the deployment actually work with 'amqp==2.1.1' rather than 'amqp==5.0.2'? In fact, would pip < 20.3, in all its apparent brokenness, truly constrain amqp like this? I'm going to guess that in many cases it wouldn't be an issue, since these minimum dependencies were most likely selected arbitrarily, however, I also suspect there are cases where this would be an issue and we simply hadn't noticed. Assuming this to be the case, I think the question is more do we want to continue to rely on this known broken feature (by sticking to pip < 20.3) because it's "good enough" for these older branches, or do we want to spend our valuable time going through the dull but necessary work of fixing the dependencies? All of this assumes we find a way to work around dependencies that have broken dependencies themselves. That might well force our hand. Cheers, Stephen > > [1]  > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019292.html > [2] http://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html > [3] https://review.opendev.org/c/openstack/neutron/+/766000 > [4]  > https://review.opendev.org/c/openstack/ovn-octavia-provider/+/765872/32/lower-constraints.txt > From zigo at debian.org Fri Dec 11 12:02:51 2020 From: zigo at debian.org (Thomas Goirand) Date: Fri, 11 Dec 2020 13:02:51 +0100 Subject: New Openstack Deployment questions In-Reply-To: <20201211093704.GH8890@sync> References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> <994dd459-1370-4cc4-9065-6b718e3bc552@debian.org> <20201211093704.GH8890@sync> Message-ID: On 12/11/20 10:37 AM, Arnaud Morin wrote: > Hey, > > Yes, debian packages are very nice, thank you zigo for all you did! > > Moreover, you can also take a look at what canonical did with ubuntu > cloud archives. Packages are working very well and openstack > documentation is explaining quite easily how to deploy them. 
> I don't know, but maybe it's based on what you did for Debian, zigo? > > Cheers, Hi, No, Canonical packages aren't the same as the ones in Debian (at least, not the core service packages). They are developed in a separate way, even though we share some of them (mainly, Ubuntu imports the dependency packages from Debian, where for many of them, I'm the package maintainer). Cheers, Thomas Goirand (zigo) From lokendrarathour at gmail.com Fri Dec 11 12:40:28 2020 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Fri, 11 Dec 2020 18:10:28 +0530 Subject: port Groups (Bonds) Configuration in Openstack Baremetal Provisioning. Message-ID: Hello, I am trying to provision a bare metal node on an existing OpenStack setup. At deployment time, is it possible to have the bonds already set up when the bare metal node comes up? I was trying to follow the port groups configuration in the Bare Metal service. *Documents referred:* 1. https://docs.openstack.org/ironic/pike/admin/portgroups.html#:~:text=A%20port%20group%20can%20also,attached%20to%20the%20port%20group 2. https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/14/pdf/bare_metal_provisioning/Red_Hat_OpenStack_Platform-14-Bare_Metal_Provisioning-en-US.pdf *Setup used:* Baremetal on StarlingX setup: https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/ironic_install.html -- ~ Lokendra From zigo at debian.org Fri Dec 11 14:04:21 2020 From: zigo at debian.org (Thomas Goirand) Date: Fri, 11 Dec 2020 15:04:21 +0100 Subject: New Openstack Deployment questions In-Reply-To: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> Message-ID: <4034b812-837a-6e51-f5a9-794bddb202b6@debian.org> On 12/10/20 2:46 PM, Thomas Wakefield wrote: > OpenStack deployment questions: > > If you were starting a new deployment of OpenStack today what OS would > you use, and what tools would you use for deployment? We were thinking > CentOS with Kayobe, but then CentOS changed their support plans, and I > am hesitant to start a new project with CentOS. We do have access to > RHEL licensing so that might be an option. We have also looked at > OpenStack-Ansible for deployment. Thoughts? > > Thanks in advance. -Tom > I would recommend reading Jonathan Carter's (our super cool Debian Project Leader) blog entry about what's happening with CentOS: https://jonathancarter.org/2020/12/10/centos-stream-or-debian/ I agree with all of what he wrote, all of it from beginning to end, and that's why I've been using and contributing to Debian, and advocating that OpenStack users and operators move to it. No, neither Red Hat nor Canonical have "your best interests in mind", and they "ultimately support [their] selfish eco-system" and their own corporate greedy interests, to quote Jonathan. Cheers, Thomas Goirand (zigo) From C-Albert.Braden at charter.com Fri Dec 11 14:09:34 2020 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Fri, 11 Dec 2020 14:09:34 +0000 Subject: [EXTERNAL] Re: New Openstack Deployment questions In-Reply-To: <4677519.0VBMTVartN@whitebase.usersys.redhat.com> References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> <4677519.0VBMTVartN@whitebase.usersys.redhat.com> Message-ID: Centos Stream is fine for those who were using Centos for testing or development.
It's not at all suitable for production, because rolling release doesn't provide the stability that production clusters need. Switching to Centos Stream would require significant resources to be expended to setup local mirrors and then perform exhaustive testing before each upgrade. The old Centos did this work for us; Centos was built on RHEL source that had already been tested by paying customers, and bugs fixed with the urgency that paying customers require. Adding an upstream build (Stream) to the existing downstream (Centos 8.x) was fine, but I'm disappointed by the decision to kill Centos 8. I don't want to wax eloquent about how we were betrayed; suffice it to say that even for a free operating system, suddenly changing the EOL from 2029 to 2021 is unprecedented, and places significant burdens on companies that are using Centos in production. I can understand why IBM/RH made this decision, but there's no denying that it puts production Centos users in a difficult position. I hope that Rocky Linux [1], under Gregory Kurtzer (founder of the Centos project) will turn out to be a useful alternative. {1} https://github.com/rocky-linux/rocky -----Original Message----- From: Luigi Toscano Sent: Thursday, December 10, 2020 3:51 PM To: Thomas Wakefield ; openstack-discuss at lists.openstack.org Cc: Satish Patel Subject: [EXTERNAL] Re: New Openstack Deployment questions CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Thursday, 10 December 2020 15:27:40 CET Satish Patel wrote: > I just built a new openstack using openstack-ansible on CentOS 8.2 > last month before news broke out. I have no choice so i am going to > stick with CentOS. > > What is the future of RDO and EPEL repo if centOS going away. ? Continue as before on CentOS Stream. -- Luigi E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From Arkady.Kanevsky at dell.com Fri Dec 11 14:25:05 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 11 Dec 2020 14:25:05 +0000 Subject: [TC][all] X Release name polling In-Reply-To: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> References: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> Message-ID: So are we formally announcing Xanadu as the next release name? From: Sean McGinnis Sent: Thursday, December 10, 2020 10:53 AM To: openstack-discuss Subject: [TC][all] X Release name polling [EXTERNAL EMAIL] Hey everyone, We recently collected naming suggestions for the X release name. A lot of great suggestions by the community! Much more than I had expected for this letter. As a reminder, starting with the W release we had changed the process for selecting the name [1]. We collected suggestions from the community, then the members of the TC voted in a poll [2] to select which name(s) out of the suggestions to go with. The vetting of the top choices from that process is happening now, and we should have a official result soon. 
This is a bit of a mea culpa from me about an issue with how this was conducted though. The naming process specifically states: "the poll should be run in a manner that allows members of the community to see what each TC member voted for." When I set up the CIVS poll, I failed to check the box that would allow seeing the detailed results of the poll. So while we do have the winning names, we are not able to see which TC members voted and how. I apologize for missing this step (and I've noted that we really should add some detailed process for future coordinators to follow!). I believe the intent with that part of the process was to allow the community to see how your elected TC members voted as one factor to consider when reelecting anyone. Also transparency to show that no one is pushing through their own choices, circumventing any process. The two options I see at this point would be to either redo the entire naming poll, or just try to capture what TC members voted for somewhere so we have a record of that. It's been long enough now since taking the poll that I don't expect TC members to remember exactly how they ranked things. But we've also started the vetting process through the Foundation (lawyers engaged, etc) so I'd really rather not start over if we can avoid it. If TC members could respond here with what they remember voting for, I hope that is enough to satisfy the spirit of the defined process. If there are any members of the community that have a strong objection to this, please say so. I leave it up to the TC then to decide how to proceed. Again, apologies for missing this step. Otherwise, I think the process has worked well, and I hope we can declare an official X name shortly. Thanks! Sean [1] https://governance.openstack.org/tc/reference/release-naming.html#release-naming-process [2] https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7e6e96070af39fe7 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Dec 11 14:38:18 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 11 Dec 2020 14:38:18 +0000 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: References: Message-ID: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> On 2020-12-11 10:20:35 +0100 (+0100), Bernard Cafarelli wrote: > now that master branches have recovered from pip new resolver use, > I started looking at stable branches status. tl;dr for those with > open pending backports, all branches are broken at the moment so > please do not recheck. Was there significant breakage on master branches (aside from lower-constraints jobs, which I've always argued are inherently broken for this very reason)? If so, it didn't come to my attention. Matthew did some fairly extensive testing with the new algorithm across our coordinated dependency set well in advance of the pip release to actually turn it on by default. > Thinking about fixing gates for these branches, older EM branches > may be fine once the bandit 1.6.3 issue [1] is sorted out, but > most need a fix against the new pip resolver. > > pip has a flag to switch back to old resolver, but this is a > temporary one that will only be there for a few weeks [2] > > From a quick IRC chat, the general guidance for us was always to > leave pip uncapped, and the new resolver issues are actually > broken requirements. [...] 
Yes, it bears repeating that anywhere the new dep solver is breaking represents a situation where we were previously testing/building things with a different version of some package than we meant to. This is exposing latent bugs in our declared dependencies within those branches. If we decide to use "older" pip, that's basically admitting we don't care because it's easier to ignore those problems than actually fix them (which, yes, might turn out to be effectively impossible). I'm not trying to be harsh, it's certainly a valid approach, but let's be clear that this is the compromise we're making in that case. My proposal: actually come to terms with the reality that lower-constraints jobs are a fundamentally broken concept, unless someone does the hard work to implement an inverse version sort in pip itself. If pretty much all the struggle is with those jobs, then dropping them can't hurt because they failed at testing exactly the thing they were created for in the first place. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Fri Dec 11 14:43:29 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 11 Dec 2020 14:43:29 +0000 Subject: [TC][all] X Release name polling In-Reply-To: References: <3ffbc8d6-1d28-7575-ff9b-87969284a871@gmx.com> Message-ID: <20201211144329.ue6b2e42luqtzkm4@yuggoth.org> On 2020-12-11 14:25:05 +0000 (+0000), Kanevsky, Arkady wrote: > So are we formally announcing > Xanadu as the next release > name? [...] No name preference gets formally announced until the OIF legal folks perform trademark searches and assess risk of the top choices, which ultimately help inform the final decision. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From tobias.urdin at binero.com Fri Dec 11 14:58:13 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Fri, 11 Dec 2020 14:58:13 +0000 Subject: [EXTERNAL] Re: New Openstack Deployment questions In-Reply-To: References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> <4677519.0VBMTVartN@whitebase.usersys.redhat.com>, Message-ID: <3e75ebf38ce144f7bd1ff948f5c54cd6@binero.com> Hello, We are running solid on CentOS and will continue to do so. But this just reaffirms my ideas that OpenStack should be packaged and distributed as an application by upstream and not by downstream. One of the best ideas so far is on Mohammed Naser's line, which is a shame that there isn't more colaboration on already, is ready-to-use container images for running OpenStack services which would make the layer beneath more "not important". Seeing as a lot of projects already try to work on deploying OpenStack in containers but is working on their own fronts (except some Kolla <-> TripleO relationship, that I think is getting scaled down as well). StarlingX, TripleO, Kolla, OpenStack-Helm, all these container-related deployment tools but no common goal. /end of random post, sorry. Best regards Tobias ________________________________ From: Braden, Albert Sent: Friday, December 11, 2020 3:09:34 PM To: openstack-discuss at lists.openstack.org Subject: RE: [EXTERNAL] Re: New Openstack Deployment questions Centos Stream is fine for those who were using Centos for testing or development. 
It's not at all suitable for production, because rolling release doesn't provide the stability that production clusters need. Switching to Centos Stream would require significant resources to be expended to setup local mirrors and then perform exhaustive testing before each upgrade. The old Centos did this work for us; Centos was built on RHEL source that had already been tested by paying customers, and bugs fixed with the urgency that paying customers require. Adding an upstream build (Stream) to the existing downstream (Centos 8.x) was fine, but I'm disappointed by the decision to kill Centos 8. I don't want to wax eloquent about how we were betrayed; suffice it to say that even for a free operating system, suddenly changing the EOL from 2029 to 2021 is unprecedented, and places significant burdens on companies that are using Centos in production. I can understand why IBM/RH made this decision, but there's no denying that it puts production Centos users in a difficult position. I hope that Rocky Linux [1], under Gregory Kurtzer (founder of the Centos project) will turn out to be a useful alternative. {1} https://github.com/rocky-linux/rocky -----Original Message----- From: Luigi Toscano Sent: Thursday, December 10, 2020 3:51 PM To: Thomas Wakefield ; openstack-discuss at lists.openstack.org Cc: Satish Patel Subject: [EXTERNAL] Re: New Openstack Deployment questions CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Thursday, 10 December 2020 15:27:40 CET Satish Patel wrote: > I just built a new openstack using openstack-ansible on CentOS 8.2 > last month before news broke out. I have no choice so i am going to > stick with CentOS. > > What is the future of RDO and EPEL repo if centOS going away. ? Continue as before on CentOS Stream. -- Luigi E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Fri Dec 11 15:23:19 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 11 Dec 2020 07:23:19 -0800 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> Message-ID: On Fri, Dec 11, 2020 at 6:41 AM Jeremy Stanley wrote: > > On 2020-12-11 10:20:35 +0100 (+0100), Bernard Cafarelli wrote: > > now that master branches have recovered from pip new resolver use, > > I started looking at stable branches status. tl;dr for those with > > open pending backports, all branches are broken at the moment so > > please do not recheck. > > Was there significant breakage on master branches (aside from > lower-constraints jobs, which I've always argued are inherently > broken for this very reason)? If so, it didn't come to my attention. 
> Matthew did some fairly extensive testing with the new algorithm > across our coordinated dependency set well in advance of the pip > release to actually turn it on by default. > > > Thinking about fixing gates for these branches, older EM branches > > may be fine once the bandit 1.6.3 issue [1] is sorted out, but > > most need a fix against the new pip resolver. > > > > pip has a flag to switch back to old resolver, but this is a > > temporary one that will only be there for a few weeks [2] > > > > From a quick IRC chat, the general guidance for us was always to > > leave pip uncapped, and the new resolver issues are actually > > broken requirements. > [...] > > Yes, it bears repeating that anywhere the new dep solver is breaking > represents a situation where we were previously testing/building > things with a different version of some package than we meant to. > This is exposing latent bugs in our declared dependencies within > those branches. If we decide to use "older" pip, that's basically > admitting we don't care because it's easier to ignore those problems > than actually fix them (which, yes, might turn out to be effectively > impossible). I'm not trying to be harsh, it's certainly a valid > approach, but let's be clear that this is the compromise we're > making in that case. > > My proposal: actually come to terms with the reality that > lower-constraints jobs are a fundamentally broken concept, unless > someone does the hard work to implement an inverse version sort in > pip itself. If pretty much all the struggle is with those jobs, then > dropping them can't hurt because they failed at testing exactly the > thing they were created for in the first place. I completely agree with Jeremy's proposal. And sentiment in ironic seems to be leaning in this direction as well. The bottom line is WE as a community have one of two options: Constantly track and increment l-c, or try to roll forward with the most recent and attempt to identify issues as we go. The original push of g-r updates out seemed to be far less painful and gave us visibility to future breakages. Now we're looking at yet another round where we need to fix CI jobs on every repository and branch we maintain. This impinges on our ability to deliver new features and cripples our ability to deliver upstream bug fixes when we are constantly fighting stable CI breakages. I guess it is kind of obvious that I'm frustrated with breaking stable CI as it seems to be a giant time sink for myself. -Julia > -- > Jeremy Stanley From sean.mcginnis at gmx.com Fri Dec 11 15:58:19 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 11 Dec 2020 09:58:19 -0600 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> Message-ID: > Yes, it bears repeating that anywhere the new dep solver is breaking > represents a situation where we were previously testing/building > things with a different version of some package than we meant to. > This is exposing latent bugs in our declared dependencies within > those branches. If we decide to use "older" pip, that's basically > admitting we don't care because it's easier to ignore those problems > than actually fix them (which, yes, might turn out to be effectively > impossible). I'm not trying to be harsh, it's certainly a valid > approach, but let's be clear that this is the compromise we're > making in that case. 
+1 > > My proposal: actually come to terms with the reality that > lower-constraints jobs are a fundamentally broken concept, unless > someone does the hard work to implement an inverse version sort in > pip itself. If pretty much all the struggle is with those jobs, then > dropping them can't hurt because they failed at testing exactly the > thing they were created for in the first place. As someone that has spent some time working on l-c jobs/issues, I kind of have to agree with this. For historical reference, here's the initial proposal for performing lower constraint testing: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html I wasn't part of most of the requirements team discussions around the need for this, but my understanding was it was to provide a set range of package versions that are expected to work. So anyone packaging OpenStack projects downstream would have an easy way to filter out options to figure out what versions they can include that will minimize conflicts between all the various packages included in a distro. I'm not a downstream packager, so I don't have any direct experience to go on here, but my assumption is that the value of providing this is pretty low. I don't think the time the community has put in to trying to maintain (or not maintain) their lower-constraints.txt files and making sure the jobs are configured properly to apply those constraints has been worth the effort for the value anyone downstream might get out of them. My vote would be to get rid of these jobs. Distros will need to perform testing of the versions they ultimately package together anyway, so I don't think it is worth the community's time to repeatedly struggle with keeping these things updated. I do think one useful bit can be when we're tracking our own direct dependencies. One thing that comes to mind from the recent past is we've had cases where something new has been added to something like oslo.config. Lower constraints updates were a good way to make it explicit that we needed at least the newer version of that lib so that we could safely assume the expected functionality would be present. There is some value to that. So if we keep lower-constraints, maybe we just limit it to those specific instances where we have things like that and not try to constrain the entire world. Sean From balazs.gibizer at est.tech Fri Dec 11 16:01:22 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 11 Dec 2020 17:01:22 +0100 Subject: [nova][gate] nova-multi-cell job =?UTF-8?Q?failing=0D=0A=0D=0A=0D=0A?= test_*_with_qos_min_bw_allocation In-Reply-To: <8SU4LQ.UJMXSYCLLIYN3@est.tech> References: <8SU4LQ.UJMXSYCLLIYN3@est.tech> Message-ID: On Thu, Dec 10, 2020 at 17:50, Balázs Gibizer wrote: > > > On Thu, Dec 10, 2020 at 13:18, Balázs Gibizer > wrote: >> >> >> On Wed, Dec 9, 2020 at 19:33, melanie witt >> wrote: >>> Howdy all, >>> >>> FYI we have gate failures of the recently added >>> test_*_with_qos_min_bw_allocation tests [1] in the >>> nova-multi-cell job on the master, stable/victoria, and >>> stable/ussuri branches. The failures occur during cross cell >>> migrations. 
>>> >>> I have opened a bug for the failure on the master branch: >>> >>> * https://bugs.launchpad.net/nova/+bug/1907522 >>> >>> The issue here is that we fail to create port bindings in neutron >>> during a cross cell migration in the superconductor: >>> >>> nova.exception.PortBindingFailed: Binding failed for port >> uuid> >>> >>> and that corresponds to a failure in the neutron server log where >>> it fails the port binding with: >>> >>> neutron_lib.exceptions.placement.UnknownResourceProvider: No such >>> resource provider known by Neutron >>> >>> I don't yet know what is going on here ^. >>> >>> For the bug on stable/victoria and stable/ussuri I have opened this >>> bug: >>> >>> * https://bugs.launchpad.net/nova/+bug/1907511 >>> >>> and have a WIP stable-only patch proposed that needs tests: >>> >>> https://review.opendev.org/c/openstack/nova/+/766364 >>> >>> I just wanted to see ASAP if the nova-multi-cell job will pass on >>> it. >>> >>> The issue here ^ is that during a cross cell migration, we aren't >>> targeting the cell database for the target host when we attempt >>> to lookup the service record of the target host. >>> >>> For the stable branch failures I think the failure rate is 100% and >>> it looks like it might also be 100% for the master branch >>> failures. >> >> Thanks Melanie! >> >> A sort update. The test result in >> https://review.opendev.org/c/openstack/nova/+/766364 shows that >> after fixing the stable only >> https://bugs.launchpad.net/nova/+bug/1907511 we now hit the same >> failure on stable that is seen on master >> https://bugs.launchpad.net/nova/+bug/1907522 >> >> Both master and stable branches are blocked at the moment. > > Now we have patches to unblock master and stable/victoria, we just > need to push the through the gate: > > * master: https://review.opendev.org/c/openstack/nova/+/766471 This patch has been merged so the master branch is unblocked now and you can recheck your patches. > * stable/victoria: > https://review.opendev.org/c/openstack/nova/+/765749 This still needs to land before stable/victoria can be used Cheers, gibi > > Cheers, > gibi > >> >> Cheers, >> gibi >>> >>> Cheers, >>> -melanie >>> >>> [1] https://review.opendev.org/c/openstack/tempest/+/694539 >>> >> >> >> > > > From gmann at ghanshyammann.com Fri Dec 11 16:15:37 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 11 Dec 2020 10:15:37 -0600 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> Message-ID: <17652961b73.11c186083102611.2301739328973440930@ghanshyammann.com> ---- On Fri, 11 Dec 2020 09:58:19 -0600 Sean McGinnis wrote ---- > > > Yes, it bears repeating that anywhere the new dep solver is breaking > > represents a situation where we were previously testing/building > > things with a different version of some package than we meant to. > > This is exposing latent bugs in our declared dependencies within > > those branches. If we decide to use "older" pip, that's basically > > admitting we don't care because it's easier to ignore those problems > > than actually fix them (which, yes, might turn out to be effectively > > impossible). I'm not trying to be harsh, it's certainly a valid > > approach, but let's be clear that this is the compromise we're > > making in that case. 
> > +1 > > > > > My proposal: actually come to terms with the reality that > > lower-constraints jobs are a fundamentally broken concept, unless > > someone does the hard work to implement an inverse version sort in > > pip itself. If pretty much all the struggle is with those jobs, then > > dropping them can't hurt because they failed at testing exactly the > > thing they were created for in the first place. > > As someone that has spent some time working on l-c jobs/issues, I kind > of have to agree with this. > > For historical reference, here's the initial proposal for performing > lower constraint testing: > > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html > > I wasn't part of most of the requirements team discussions around the > need for this, but my understanding was it was to provide a set range of > package versions that are expected to work. So anyone packaging > OpenStack projects downstream would have an easy way to filter out > options to figure out what versions they can include that will minimize > conflicts between all the various packages included in a distro. > > I'm not a downstream packager, so I don't have any direct experience to > go on here, but my assumption is that the value of providing this is > pretty low. I don't think the time the community has put in to trying to > maintain (or not maintain) their lower-constraints.txt files and making > sure the jobs are configured properly to apply those constraints has > been worth the effort for the value anyone downstream might get out of them. > > My vote would be to get rid of these jobs. Distros will need to perform > testing of the versions they ultimately package together anyway, so I > don't think it is worth the community's time to repeatedly struggle with > keeping these things updated. > > I do think one useful bit can be when we're tracking our own direct > dependencies. One thing that comes to mind from the recent past is we've > had cases where something new has been added to something like > oslo.config. Lower constraints updates were a good way to make it > explicit that we needed at least the newer version of that lib so that > we could safely assume the expected functionality would be present. > There is some value to that. So if we keep lower-constraints, maybe we > just limit it to those specific instances where we have things like that > and not try to constrain the entire world. I agree. One of the big chunks of work and time we spent on this was during the move of testing from Ubuntu Bionic to Focal[1], where we had to fix the lower-constraints in nearly all of the repos (~400) in OpenStack. For knowing the compatible versions of deps, we already have the information in the requirements.txt file, which tells us what versions of the deps are known to work. Finding the minimum working version is not hard either (basically, the env breaks for any incompatible one). Maintaining it up to date is not worth the effort it takes, so I will also suggest removing this.
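To illustrate the duplication being discussed (the package and version numbers here are made up for the example): a project's requirements.txt already declares the floor that matters, and lower-constraints.txt has to repeat it as an exact pin and be kept in sync by hand, together with pins for every transitive dependency:

  # requirements.txt - the declared minimum the code actually depends on
  oslo.config>=6.8.0
  # lower-constraints.txt - the same floor again, pinned exactly
  oslo.config==6.8.0

Whenever the code starts relying on something newer, both files (and often the transitive pins around them) have to move together, which is exactly the maintenance cost being questioned here.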
[1] https://storyboard.openstack.org/#!/story/2007865 -gmann > > Sean > > > From zigo at debian.org Fri Dec 11 16:32:22 2020 From: zigo at debian.org (Thomas Goirand) Date: Fri, 11 Dec 2020 17:32:22 +0100 Subject: [EXTERNAL] Re: New Openstack Deployment questions In-Reply-To: References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> <4677519.0VBMTVartN@whitebase.usersys.redhat.com> Message-ID: On 12/11/20 3:09 PM, Braden, Albert wrote: > The old Centos did this work for us; Centos was built on RHEL source that had already been tested by paying customers, and bugs fixed with the urgency that paying customers require. This sounds like free beer instead of free speach... > Adding an upstream build (Stream) to the existing downstream (Centos 8.x) was fine, but I'm disappointed by the decision to kill Centos 8. I don't want to wax eloquent about how we were betrayed; I'm surprised that you're surprised... > suffice it to say that even for a free operating system, suddenly changing the EOL from 2029 to 2021 is unprecedented, and places significant burdens on companies that are using Centos in production. I can understand why IBM/RH made this decision Simple answer: it's a commercial company that has, as first interest, making money. It's goal is not having happy non-paying users. > but there's no denying that it puts production Centos users in a difficult position. It just forces you to buy a service from a company that was previously giving it for free (as in free beer). > I hope that Rocky Linux [1], under Gregory Kurtzer (founder of the Centos project) will turn out to be a useful alternative. > > {1} https://github.com/rocky-linux/rocky So you haven't learned from this event, it seems... On 12/11/20 3:58 PM, Tobias Urdin wrote: > But this just reaffirms my ideas that OpenStack should be packaged and > distributed as an application by upstream I attempted this (ie: doing the packaging in upstream OpenStack) in 2014. The release of OpenStack in Jessie was built this way. However, nobody had interest in contributing, not even Canonical who turned away from the initiative (after they gave the initial idea and initially agreed to do so). I wont do it again unless there's strong interest and contribution. Also, you might know that this was how OpenStack started in the very beginning, where the CI was even using packages. However, the recent event about CentOS redefinition is orthogonal to this. This is the underlying distribution that we're talking about, not OpenStack that runs on top of it. I don't see how the fall of CentOS has a relation to OpenStack being packaged upstream. > One of the best ideas so far is on Mohammed Naser's line, which is a > shame that there isn't more colaboration on already, is ready-to-use > container images for running OpenStack services which would make the > layer beneath more "not important". I strongly disagree with this. If you aren't using packages, you end up reinventing them in a different context (ie: the one of a container), and rewrite all of what they do in a different way. I know I'm swimming against the tide, but eventually, the tide will change direction... :) Besides this, there's all sorts of important components that are maintained within distros that OpenStack can't work without: - qemu - openvswitch - rabbitmq - ceph - haproxy - mariadb/galera - you-name-it... (the list goes on, and on, and on... and I suppose you know this list as much as I do) Yes, you can use containers for the Python bits. But what about the rest? 
You will certainly end up using a distribution as a base for building (and running) the other bits, even if that's within a container. Denying that the underneath distribution is important wont drive you very far in such a context. Choosing carefully what distribution you're using (and contributing) is probably more important than everyone thought, finally... :) Cheers, Thomas Goirand (zigo) From mnaser at vexxhost.com Fri Dec 11 17:40:35 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 11 Dec 2020 12:40:35 -0500 Subject: [tc] weekly meeting summary Message-ID: Hi everyone, Here's a summary of what happened in our TC weekly meeting last Thursday, Dec 10. # ATTENDEES (LINES SAID) - mnaser (115) - apevec (60) - gmann (57) - yoctozepto (26) - mugsie (26) - fungi (20) - jungleboyj (14) - noonedeadpunk (9) - diablo_rojo__ (6) - marios|rover (6) - ricolin (6) - zbr (4) - weshay|ruck (2) - jrosser (2) - akahat (1) - gouthamr (1) # MEETING SUMMARY 1. Rollcall 2. Follow up on past action items DONE: mnaser change all reference of meeting time to go towards eavesdrop for single source of truth DONE: mnaser remove openstacksdk discussions for future meetings DONE: mnaser remove project retirement from agenda 3. X cycle goal selection start It was discussed that due to the extraordinary situation this year, perhaps we could have a stabilization goal instead. As a reminder that we don't have to do a community goal for every release, we are leaving this open and will ask folks to update the proposed goal page with something they worked on. Some ideas discussed for this "stabilization" cycle: 1) Use this time to rest and relax, no need to do something. 2) Use this time to finish up this really cool thing you've been trying to find time to, and share with us to recognize. 3) Check out these existing goals/popup teams that might interest you if you want to invest time in something different/new We will keep this as an open discussion item to track the progress and will keep collecting new ideas for goals. 4. Audit and clean-up tags (gmann) For API interoperability tag, patch is merged; gmann will start the ML to encourage projects to start applying for that tag. We will keep this action to discuss the progress of getting projects in, and we will have each tag one by. 5. X cycle release name vote recording (gmann) Because the votes were not recorded, we are asking TC members to list what they voted for on the mailing list http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019337.html 6. CentOS 8 releases are discontinued / switch to CentOS 8 Stream (gmann/yoctozepto) Community members from RH were present to help talk through the changes that are occurring with the introduction of CentOS Stream 8. mnaser work to find time for community deployment projects + centos/rdo team to meet to help teams get more information about the upcoming change # ACTION ITEMS 1. mnaser send email to ML to find volunteers to help drive goal selection 2. gmann complete retirement of searchlight & qinling 3. diablo_rojo complete retirement of karbor 4. mnaser work to find time for community deployment projects + centos/rdo team To read the full logs of the meeting, please refer to http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-12-10-15.01.log.html -- Mohammed Naser VEXXHOST, Inc. 
From mnaser at vexxhost.com Fri Dec 11 17:42:23 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 11 Dec 2020 12:42:23 -0500 Subject: [tc][goals] community feedback on 'stabilization' goal Message-ID: Hi there, Over the last TC meeting, we exchanged opinions about the new goal selection. It was discussed that due to the extraordinary situation this year, we could have a loose goal instead. Therefore, we would like to propose having a stabilization cycle this release. Some ideas discussed for this “stabilization” cycle (would be all of those): 1. Use this time to rest and relax, no need to do something. 2. Use this time to finish up this really cool thing you've been trying to find time to, and share with us to recognize. 3. Check out these existing goals/popup teams that might interest you if you want to invest time in something different/new. As a reminder that we don't have to do a community goal for every release, we are leaving this open and would like to know what the community thinks. Feel free to send over your thoughts, Regards, -- Mohammed Naser VEXXHOST, Inc. From C-Albert.Braden at charter.com Fri Dec 11 18:06:50 2020 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Fri, 11 Dec 2020 18:06:50 +0000 Subject: [EXTERNAL] Re: New Openstack Deployment questions In-Reply-To: References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> <4677519.0VBMTVartN@whitebase.usersys.redhat.com> Message-ID: <6dd684f90cf049b0b369ec444e50c550@NCEMEXGP009.CORP.CHARTERCOM.com> ----Original Message----- From: Thomas Goirand Sent: Friday, December 11, 2020 11:32 AM To: openstack-discuss at lists.openstack.org Subject: Re: [EXTERNAL] Re: New Openstack Deployment questions CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On 12/11/20 3:09 PM, Braden, Albert wrote: >> The old Centos did this work for us; Centos was built on RHEL source that had already been tested by paying customers, and bugs fixed with the urgency that paying customers require. >This sounds like free beer instead of free speach... Yes. If you, the community, help us brew the beer, you can drink it for free, or you can pay us to serve it to you on a silver platter. As an incentive to motivate you to help brew the beer, we promise to leave the tap open until 2029. >> I hope that Rocky Linux [1], under Gregory Kurtzer (founder of the Centos project) will turn out to be a useful alternative. >> >> {1} https://github.com/rocky-linux/rocky >So you haven't learned from this event, it seems... If we get another 16-year run from Rocky, that would be acceptable. I have hope that the community will remember the history, when an altruistic corporation offers to take over Rocky and make it more wonderful for everyone. Quotes from 2014: Brian Stevens, executive vice president and chief technology officer, Red Hat "It is core to our beliefs that when people who share goals or problems are free to connect and work together, their pooled innovations can change the world. We believe the open source development process produces better code, and a community of users creates an audience that makes code impactful. Cloud technologies are moving quickly, and increasingly, that code is first landing in Red Hat Enterprise Linux. Today is an exciting day for the open source community; by joining forces with the CentOS Project, we aim to build a vehicle to get emerging technologies like OpenStack and big data into the hands of millions of developers." 
Karanbir Singh, lead developer, CentOS Project "CentOS owes its success not just to the source code it's built from, but to the hard work and enthusiasm of its user community. Now that we are able to count Red Hat among the active contributors to the CentOS Project, we have access to the resources and expertise we'll need to expand the scope and reach of the CentOS community while remaining committed to our current and new users." I apologize for the nonsense below. So far I have not been able to stop it from being attached to my external emails. I'm working on it. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From radoslaw.piliszek at gmail.com Fri Dec 11 18:08:20 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 11 Dec 2020 19:08:20 +0100 Subject: [EXTERNAL] Re: New Openstack Deployment questions In-Reply-To: References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> <4677519.0VBMTVartN@whitebase.usersys.redhat.com> Message-ID: Sorry for top posting but I just wanted to mention that Kolla supports also Debian and Ubuntu, in both binary (meaning using distro packages) and source (meaning using upstream sources) flavours. The only Kolla project the above is not true about is Kayobe, and that is where the misconception that we support only CentOS comes from. -yoctozepto On Fri, Dec 11, 2020 at 5:33 PM Thomas Goirand wrote: > > On 12/11/20 3:09 PM, Braden, Albert wrote: > > The old Centos did this work for us; Centos was built on RHEL source that had already been tested by paying customers, and bugs fixed with the urgency that paying customers require. > > This sounds like free beer instead of free speach... > > > Adding an upstream build (Stream) to the existing downstream (Centos 8.x) was fine, but I'm disappointed by the decision to kill Centos 8. I don't want to wax eloquent about how we were betrayed; > > I'm surprised that you're surprised... > > > suffice it to say that even for a free operating system, suddenly changing the EOL from 2029 to 2021 is unprecedented, and places significant burdens on companies that are using Centos in production. I can understand why IBM/RH made this decision > > Simple answer: it's a commercial company that has, as first interest, > making money. It's goal is not having happy non-paying users. > > > but there's no denying that it puts production Centos users in a difficult position. > > It just forces you to buy a service from a company that was previously > giving it for free (as in free beer). > > > I hope that Rocky Linux [1], under Gregory Kurtzer (founder of the Centos project) will turn out to be a useful alternative. > > > > {1} https://github.com/rocky-linux/rocky > > So you haven't learned from this event, it seems... > > On 12/11/20 3:58 PM, Tobias Urdin wrote: > > But this just reaffirms my ideas that OpenStack should be packaged and > > distributed as an application by upstream > > I attempted this (ie: doing the packaging in upstream OpenStack) in > 2014. 
The release of OpenStack in Jessie was built this way. However, > nobody had interest in contributing, not even Canonical who turned away > from the initiative (after they gave the initial idea and initially > agreed to do so). I wont do it again unless there's strong interest and > contribution. > > Also, you might know that this was how OpenStack started in the very > beginning, where the CI was even using packages. > > However, the recent event about CentOS redefinition is orthogonal to > this. This is the underlying distribution that we're talking about, not > OpenStack that runs on top of it. I don't see how the fall of CentOS has > a relation to OpenStack being packaged upstream. > > > One of the best ideas so far is on Mohammed Naser's line, which is a > > shame that there isn't more colaboration on already, is ready-to-use > > container images for running OpenStack services which would make the > > layer beneath more "not important". > > I strongly disagree with this. If you aren't using packages, you end up > reinventing them in a different context (ie: the one of a container), > and rewrite all of what they do in a different way. I know I'm swimming > against the tide, but eventually, the tide will change direction... :) > > Besides this, there's all sorts of important components that are > maintained within distros that OpenStack can't work without: > - qemu > - openvswitch > - rabbitmq > - ceph > - haproxy > - mariadb/galera > - you-name-it... (the list goes on, and on, and on... and I suppose you > know this list as much as I do) > > Yes, you can use containers for the Python bits. But what about the > rest? You will certainly end up using a distribution as a base for > building (and running) the other bits, even if that's within a container. > > Denying that the underneath distribution is important wont drive you > very far in such a context. > > Choosing carefully what distribution you're using (and contributing) is > probably more important than everyone thought, finally... :) > > Cheers, > > Thomas Goirand (zigo) > From radoslaw.piliszek at gmail.com Fri Dec 11 18:13:02 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 11 Dec 2020 19:13:02 +0100 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: <17652961b73.11c186083102611.2301739328973440930@ghanshyammann.com> References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> <17652961b73.11c186083102611.2301739328973440930@ghanshyammann.com> Message-ID: On Fri, Dec 11, 2020 at 5:16 PM Ghanshyam Mann wrote: > > Maintaining it up to date is not so worth compare to the effort it is taking. I will also suggest to > remove this. > Kolla dropped lower-constraints from all the branches. -yoctozepto From nate.johnston at redhat.com Fri Dec 11 18:14:58 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Fri, 11 Dec 2020 13:14:58 -0500 Subject: [tc][goals] community feedback on 'stabilization' goal In-Reply-To: References: Message-ID: <20201211181458.xqugrc2fxiiqxeng@grind.home> I think this is a really excellent idea. Each of us has gone through things this year that are quite unlike any other. I think it is really decent of us to be kind to each other and to ourselves in this way. One nit: your email does not specify, but I believe this is in relation to goal selection for the X(anadu) cycle? 
Thanks, Nate On Fri, Dec 11, 2020 at 12:42:23PM -0500, Mohammed Naser wrote: > Hi there, > > Over the last TC meeting, we exchanged opinions about the new goal > selection. It was discussed that due to the extraordinary situation > this year, we could have a loose goal instead. Therefore, we would > like to propose having a stabilization cycle this release. > > Some ideas discussed for this “stabilization” cycle (would be all of those): > 1. Use this time to rest and relax, no need to do something. > 2. Use this time to finish up this really cool thing you've been > trying to find time to, and share with us to recognize. > 3. Check out these existing goals/popup teams that might interest you > if you want to invest time in something different/new. > > As a reminder that we don't have to do a community goal for every > release, we are leaving this open and would like to know what the > community thinks. > > Feel free to send over your thoughts, > > Regards, > > -- > Mohammed Naser > VEXXHOST, Inc. > From smooney at redhat.com Fri Dec 11 18:19:25 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 11 Dec 2020 18:19:25 +0000 Subject: [tc][goals] community feedback on 'stabilization' goal In-Reply-To: References: Message-ID: On Fri, 2020-12-11 at 12:42 -0500, Mohammed Naser wrote: > Hi there, > > Over the last TC meeting, we exchanged opinions about the new goal > selection. It was discussed that due to the extraordinary situation > this year, we could have a loose goal instead. Therefore, we would > like to propose having a stabilization cycle this release. > > Some ideas discussed for this “stabilization” cycle (would be all of those): > 1. Use this time to rest and relax, no need to do something. > 2. Use this time to finish up this really cool thing you've been > trying to find time to, and share with us to recognize. > 3. Check out these existing goals/popup teams that might interest you > if you want to invest time in something different/new. > > As a reminder that we don't have to do a community goal for every > release, we are leaving this open and would like to know what the > community thinks. > > Feel free to send over your thoughts, i like this but i would suggest it for X rather then W we are far enough in to wallaby that i think its realticlaly too late to declar it a stablisation release now but i do see merrit in doing that for X. it would give us time to get by in internally and plan to actully focus on stablisation both upstream and downstream and would be eaiser to comunicate that to stake holders. > > Regards, > From 15005176312 at 163.com Fri Dec 11 08:09:44 2020 From: 15005176312 at 163.com (sunkai) Date: Fri, 11 Dec 2020 16:09:44 +0800 (CST) Subject: About fuel Message-ID: <3638e596.51f4.17650d94400.Coremail.15005176312@163.com> Hello, I have a question to ask you。How do I modify the disk configuration of fuel for the node? I need to add some configuration items。How can I fulfill this requirement?Looking forward to your reply,Thankyou very much!!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkliczew at redhat.com Fri Dec 11 15:04:02 2020 From: pkliczew at redhat.com (Piotr Kliczewski) Date: Fri, 11 Dec 2020 16:04:02 +0100 Subject: [Openstack][FOSDEM][CFP] Virtualization & IaaS Devroom Message-ID: Friendly reminder that submission deadline for Virtualization & IaaS dev room at Fosdem is on 20th of December. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Fri Dec 11 19:02:34 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 11 Dec 2020 13:02:34 -0600 Subject: [tc][goals] community feedback on 'stabilization' goal In-Reply-To: References: Message-ID: <176532ef37b.bb5f8b16109337.1905895354962738312@ghanshyammann.com> ---- On Fri, 11 Dec 2020 12:19:25 -0600 Sean Mooney wrote ---- > On Fri, 2020-12-11 at 12:42 -0500, Mohammed Naser wrote: > > Hi there, > > > > Over the last TC meeting, we exchanged opinions about the new goal > > selection. It was discussed that due to the extraordinary situation > > this year, we could have a loose goal instead. Therefore, we would > > like to propose having a stabilization cycle this release. > > > > Some ideas discussed for this “stabilization” cycle (would be all of those): > > 1. Use this time to rest and relax, no need to do something. > > 2. Use this time to finish up this really cool thing you've been > > trying to find time to, and share with us to recognize. > > 3. Check out these existing goals/popup teams that might interest you > > if you want to invest time in something different/new. > > > > As a reminder that we don't have to do a community goal for every > > release, we are leaving this open and would like to know what the > > community thinks. > > > > Feel free to send over your thoughts, > i like this but i would suggest it for X rather then W > we are far enough in to wallaby that i think its realticlaly too late to declar it a > stablisation release now but i do see merrit in doing that for X. Yes, this plan is for X cycle only. For W cycle, we already have two goals selected and in-progress - https://governance.openstack.org/tc/goals/selected/wallaby/index.html -gmann > > it would give us time to get by in internally and plan to actully focus on stablisation > both upstream and downstream and would be eaiser to comunicate that to stake holders. > > > > Regards, > > > > > > From gmann at ghanshyammann.com Fri Dec 11 19:23:19 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 11 Dec 2020 13:23:19 -0600 Subject: [all][interop] Reforming the refstack maintainers team Message-ID: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> Hello Everyone, As Goutham mentioned in a separate ML thread[2] that there is no active maintainer for refstack repo which we discussed in today's interop meeting[1]. We had a few volunteers who can help to maintain the refstack and other interop repo which is good news. I would like to call for more volunteers (new or existing ones), if you are interested to help please do reply to this email. The role is to maintain the source code of the below repos. I will propose the ACL changes in infra sometime next Friday (18th dec) or so. For easy maintenance, we thought of merging the below repo core group into a single group called 'refstack-core' - openstack/python-tempestconf - openstack/refstack - openstack/refstack-client - x/ansible-role-refstack-client (moving to osf/ via https://review.opendev.org/765787) Current Volunteers: - martin (mkopec at redhat.com) - gouthamr (gouthampravi at gmail.com) - gmann (gmann at ghanshyammann.com) - Vida (vhariria at redhat.com) - interop-core (we will add this group also which has interop WG chairs so that it will be easy to maintain in the future changes) NOTE: there is no change in the 'interop' repo group which has interop guidelines and doc etc. 
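For anyone curious what the ACL change itself looks like: it is a small edit to the Gerrit ACL files kept in openstack/project-config, pointing each of the repos above at the shared group. This is a rough sketch only; the file name and group follow the proposal above, and the exact stanza is best copied from a neighbouring ACL file rather than from here:

  # gerrit/acls/openstack/refstack.config (sketch)
  [access "refs/heads/*"]
  abandon = group refstack-core
  label-Code-Review = -2..+2 group refstack-core
  label-Workflow = -1..+1 group refstack-core

The same refstack-core group would then be referenced from the ACL files of the other repositories listed above.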
[1] https://etherpad.opendev.org/p/interop [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019263.html -gmann From ssbarnea at redhat.com Fri Dec 11 20:38:30 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Fri, 11 Dec 2020 20:38:30 +0000 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> <17652961b73.11c186083102611.2301739328973440930@ghanshyammann.com> Message-ID: Jeremy nailed it very well. Tripleo already removed lower-constraints from most places (some changes may be still waiting to be gated). Regarding decoupling linting from test-requirements: yes! This was already done by some when conflicts appeared. For old branches I personally do not care much even if maintainers decide to disable linting, their main benefit is on main branches. On Fri, 11 Dec 2020 at 18:14, Radosław Piliszek wrote: > On Fri, Dec 11, 2020 at 5:16 PM Ghanshyam Mann > wrote: > > > > Maintaining it up to date is not so worth compare to the effort it is > taking. I will also suggest to > > remove this. > > > > Kolla dropped lower-constraints from all the branches. > > -yoctozepto > > -- -- /sorin -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Fri Dec 11 21:06:31 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 11 Dec 2020 13:06:31 -0800 Subject: [all] Dropping lower constraints testing (WAS: Re: [stable][requirements][neutron] Capping pip in stable branches or not) In-Reply-To: References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> <17652961b73.11c186083102611.2301739328973440930@ghanshyammann.com> Message-ID: Hi, I hope you won't mind me shifting this discussion to [all] - many projects have had to make changes due to the dependency resolver catching some of our uncaught lies. In manila, i've pushed up three changes to fix the CI on the main, stable/victoria and stable/ussuri [1] branches. I used fungi's method of installing things and playing whack-a-mole [2] and Brain Rosmaita's approach [3] of taking the opportunity to raise the minimum required packages for Wallaby. However, this all seems kludgy maintenance - and possibly no-one is benefitting from the effort we're putting into this as called out. Can more distributors and deployment tooling folks comment? [1] https://review.opendev.org/q/project:openstack/manila+topic:update-requirements [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019285.html [3] https://review.opendev.org/c/openstack/cinder/+/766085 On Fri, Dec 11, 2020 at 12:51 PM Sorin Sbarnea wrote: > Jeremy nailed it very well. > > Tripleo already removed lower-constraints from most places (some changes > may be still waiting to be gated). > > Regarding decoupling linting from test-requirements: yes! This was already > done by some when conflicts appeared. For old branches I personally do not > care much even if maintainers decide to disable linting, their main benefit > is on main branches. > > On Fri, 11 Dec 2020 at 18:14, Radosław Piliszek < > radoslaw.piliszek at gmail.com> wrote: > >> On Fri, Dec 11, 2020 at 5:16 PM Ghanshyam Mann >> wrote: >> > >> > Maintaining it up to date is not so worth compare to the effort it is >> taking. I will also suggest to >> > remove this. >> > >> >> Kolla dropped lower-constraints from all the branches. >> >> -yoctozepto >> >> -- > -- > /sorin > -------------- next part -------------- An HTML attachment was scrubbed... 
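For anyone who wants to run the same exercise on their own project, the loop can be approximated locally. This is a sketch of the general approach rather than a description of the method referenced in [2], and the error text shown is only an example of what the new resolver prints:

  $ python3 -m venv lc && . lc/bin/activate
  $ pip install --upgrade 'pip>=20.3'   # new dependency resolver
  $ pip install -c lower-constraints.txt \
        -r requirements.txt -r test-requirements.txt
  # the resolver now rejects combinations the old pip silently accepted,
  # e.g. "Cannot install X and Y because these package versions have
  # conflicting dependencies"; bump the offending minimums and re-run
  $ pip check                           # verify the final set is consistent

Repeat until both the install and pip check pass, then mirror the raised minimums in requirements.txt so the two files stay in agreement.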
URL: From ltoscano at redhat.com Fri Dec 11 21:20:20 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Fri, 11 Dec 2020 22:20:20 +0100 Subject: [all][interop] Reforming the refstack maintainers team In-Reply-To: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> References: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> Message-ID: <3078203.oiGErgHkdL@whitebase.usersys.redhat.com> On Friday, 11 December 2020 20:23:19 CET Ghanshyam Mann wrote: > I would like to call for more volunteers (new or existing ones), if you are > interested to help please do reply to this email. The role is to maintain > the source code of the below repos. I will propose the ACL changes in infra > sometime next Friday (18th dec) or so. > > For easy maintenance, we thought of merging the below repo core group into a > single group called 'refstack-core' > > - openstack/python-tempestconf > - openstack/refstack > - openstack/refstack-client > - x/ansible-role-refstack-client (moving to osf/ via > https://review.opendev.org/765787) I'm still around, and while I haven't done too much work on refstack itself, I've helped merging several patches lately. I'm also definitely very active on python-tempestconf, and I still plan to be around. -- Luigi From gouthampravi at gmail.com Fri Dec 11 21:56:40 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 11 Dec 2020 13:56:40 -0800 Subject: [all][interop] Reforming the refstack maintainers team In-Reply-To: <3078203.oiGErgHkdL@whitebase.usersys.redhat.com> References: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> <3078203.oiGErgHkdL@whitebase.usersys.redhat.com> Message-ID: On Fri, Dec 11, 2020 at 1:27 PM Luigi Toscano wrote: > On Friday, 11 December 2020 20:23:19 CET Ghanshyam Mann wrote: > > I would like to call for more volunteers (new or existing ones), if you > are > > interested to help please do reply to this email. The role is to maintain > > the source code of the below repos. I will propose the ACL changes in > infra > > sometime next Friday (18th dec) or so. > > > > For easy maintenance, we thought of merging the below repo core group > into a > > single group called 'refstack-core' > > > > - openstack/python-tempestconf > > - openstack/refstack > > - openstack/refstack-client > > - x/ansible-role-refstack-client (moving to osf/ via > > https://review.opendev.org/765787) > > I'm still around, and while I haven't done too much work on refstack > itself, > I've helped merging several patches lately. I'm also definitely very > active on > python-tempestconf, and I still plan to be around. > That's great, thanks Luigi. Since you're in the refstack-core group [1], maybe you can add gmann to adjust memberships? [1] https://review.opendev.org/admin/groups/8cd7203820004ccdb67c999ca3b811534bf76d6f,members > > -- > Luigi > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ltoscano at redhat.com Fri Dec 11 22:43:08 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Fri, 11 Dec 2020 23:43:08 +0100 Subject: [all][interop] Reforming the refstack maintainers team In-Reply-To: References: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> <3078203.oiGErgHkdL@whitebase.usersys.redhat.com> Message-ID: <6259911.4vTCxPXJkl@whitebase.usersys.redhat.com> On Friday, 11 December 2020 22:56:40 CET Goutham Pacha Ravi wrote: > On Fri, Dec 11, 2020 at 1:27 PM Luigi Toscano wrote: > > On Friday, 11 December 2020 20:23:19 CET Ghanshyam Mann wrote: > > > I would like to call for more volunteers (new or existing ones), if you > > > > are > > > > > interested to help please do reply to this email. The role is to > > > maintain > > > the source code of the below repos. I will propose the ACL changes in > > > > infra > > > > > sometime next Friday (18th dec) or so. > > > > > > For easy maintenance, we thought of merging the below repo core group > > > > into a > > > > > single group called 'refstack-core' > > > > > > - openstack/python-tempestconf > > > - openstack/refstack > > > - openstack/refstack-client > > > - x/ansible-role-refstack-client (moving to osf/ via > > > https://review.opendev.org/765787) > > > > I'm still around, and while I haven't done too much work on refstack > > itself, > > I've helped merging several patches lately. I'm also definitely very > > active on > > python-tempestconf, and I still plan to be around. > > That's great, thanks Luigi. Since you're in the refstack-core group [1], > maybe you can add gmann to adjust memberships? I've added gmann, but also Martin, Vida and you, and added the interop-core group to refstack-core. -- Luigi From fungi at yuggoth.org Fri Dec 11 23:12:36 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 11 Dec 2020 23:12:36 +0000 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> <17652961b73.11c186083102611.2301739328973440930@ghanshyammann.com> Message-ID: <20201211231236.6moz4evzigvctwsh@yuggoth.org> On 2020-12-11 20:38:30 +0000 (+0000), Sorin Sbarnea wrote: [...] > Regarding decoupling linting from test-requirements: yes! This was > already done by some when conflicts appeared. For old branches I > personally do not care much even if maintainers decide to disable > linting, their main benefit is on main branches. [...] To be honest, if I had my way, test-requirements.txt files would die in a fire. Sure it's a little more work to be specific about the individual requirements for each of your testenvs in tox.ini, but the payoff is that people aren't needlessly installing bandit when they run flake8 (for example). The thing we got into the PTI about using a separate doc/requirements.txt is a nice compromise in that direction, at least. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
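To make the alternative concrete: instead of one shared test-requirements.txt, each tox environment declares only the tools it needs. A minimal sketch, with the tool list and project name purely illustrative:

  [testenv:pep8]
  skip_install = True
  deps =
    flake8
    hacking
  commands = flake8 {posargs}

  [testenv:bandit]
  skip_install = True
  deps = bandit
  commands = bandit -r myproject -x tests

  [testenv:docs]
  deps = -r{toxinidir}/doc/requirements.txt
  commands = sphinx-build -W -b html doc/source doc/build/html

With skip_install the linting environments never build or install the project at all, which is where most of the setup cost goes today, and the docs environment keeps the separate doc/requirements.txt compromise mentioned above.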
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gouthampravi at gmail.com Sat Dec 12 01:07:46 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 11 Dec 2020 17:07:46 -0800 Subject: [all][interop] Reforming the refstack maintainers team In-Reply-To: <6259911.4vTCxPXJkl@whitebase.usersys.redhat.com> References: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> <3078203.oiGErgHkdL@whitebase.usersys.redhat.com> <6259911.4vTCxPXJkl@whitebase.usersys.redhat.com> Message-ID: On Fri, Dec 11, 2020 at 2:43 PM Luigi Toscano wrote: > On Friday, 11 December 2020 22:56:40 CET Goutham Pacha Ravi wrote: > > On Fri, Dec 11, 2020 at 1:27 PM Luigi Toscano > wrote: > > > On Friday, 11 December 2020 20:23:19 CET Ghanshyam Mann wrote: > > > > I would like to call for more volunteers (new or existing ones), if > you > > > > > > are > > > > > > > interested to help please do reply to this email. The role is to > > > > maintain > > > > the source code of the below repos. I will propose the ACL changes in > > > > > > infra > > > > > > > sometime next Friday (18th dec) or so. > > > > > > > > For easy maintenance, we thought of merging the below repo core group > > > > > > into a > > > > > > > single group called 'refstack-core' > > > > > > > > - openstack/python-tempestconf > > > > - openstack/refstack > > > > - openstack/refstack-client > > > > - x/ansible-role-refstack-client (moving to osf/ via > > > > https://review.opendev.org/765787) > > > > > > I'm still around, and while I haven't done too much work on refstack > > > itself, > > > I've helped merging several patches lately. I'm also definitely very > > > active on > > > python-tempestconf, and I still plan to be around. > > > > That's great, thanks Luigi. Since you're in the refstack-core group [1], > > maybe you can add gmann to adjust memberships? > > I've added gmann, but also Martin, Vida and you, and added the > interop-core > group to refstack-core. > Awesome, thanks! > > > -- > Luigi > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Sat Dec 12 13:36:22 2020 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Sat, 12 Dec 2020 14:36:22 +0100 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> Message-ID: Seeing the caused problems by lower-constraint jobs (not only now), and reading the opinions, I also vote for removing them. Though, the intention of lower-constraints job was good, it seems to be clearly broken and it would be quite resource (and time) consuming to fix properly every issue in every project and on every branch. (The other way is to constrain pip - or its behavior -, which does not solve really the issue just hides it). Előd On 2020. 12. 11. 16:58, Sean McGinnis wrote: > >> Yes, it bears repeating that anywhere the new dep solver is breaking >> represents a situation where we were previously testing/building >> things with a different version of some package than we meant to. >> This is exposing latent bugs in our declared dependencies within >> those branches. If we decide to use "older" pip, that's basically >> admitting we don't care because it's easier to ignore those problems >> than actually fix them (which, yes, might turn out to be effectively >> impossible). 
I'm not trying to be harsh, it's certainly a valid >> approach, but let's be clear that this is the compromise we're >> making in that case. > > +1 > >> >> My proposal: actually come to terms with the reality that >> lower-constraints jobs are a fundamentally broken concept, unless >> someone does the hard work to implement an inverse version sort in >> pip itself. If pretty much all the struggle is with those jobs, then >> dropping them can't hurt because they failed at testing exactly the >> thing they were created for in the first place. > > As someone that has spent some time working on l-c jobs/issues, I kind > of have to agree with this. > > For historical reference, here's the initial proposal for performing > lower constraint testing: > > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html > > I wasn't part of most of the requirements team discussions around the > need for this, but my understanding was it was to provide a set range of > package versions that are expected to work. So anyone packaging > OpenStack projects downstream would have an easy way to filter out > options to figure out what versions they can include that will minimize > conflicts between all the various packages included in a distro. > > I'm not a downstream packager, so I don't have any direct experience to > go on here, but my assumption is that the value of providing this is > pretty low. I don't think the time the community has put in to trying to > maintain (or not maintain) their lower-constraints.txt files and making > sure the jobs are configured properly to apply those constraints has > been worth the effort for the value anyone downstream might get out of > them. > > My vote would be to get rid of these jobs. Distros will need to perform > testing of the versions they ultimately package together anyway, so I > don't think it is worth the community's time to repeatedly struggle with > keeping these things updated. > > I do think one useful bit can be when we're tracking our own direct > dependencies. One thing that comes to mind from the recent past is we've > had cases where something new has been added to something like > oslo.config. Lower constraints updates were a good way to make it > explicit that we needed at least the newer version of that lib so that > we could safely assume the expected functionality would be present. > There is some value to that. So if we keep lower-constraints, maybe we > just limit it to those specific instances where we have things like that > and not try to constrain the entire world. > > Sean > > From lokendrarathour at gmail.com Sat Dec 12 19:24:53 2020 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Sun, 13 Dec 2020 00:54:53 +0530 Subject: port Groups (Bonds) Configuration in Openstack Baremetal Provisioning. In-Reply-To: References: Message-ID: Hi , Any support here. Any input would help. Best Regards, Lokendra On Fri, 11 Dec 2020, 18:10 Lokendra Rathour, wrote: > Hello, > I am trying to install a baremetal on existing openstack setup. During the > time of installation, is it possible to have bonds already setup when the > baremetal nodes comes up. > > I was trying to work on the : > Port groups configuration in the Bare Metal service > > *Documents referred :* > > 1. > https://docs.openstack.org/ironic/pike/admin/portgroups.html#:~:text=A%20port%20group%20can%20also,attached%20to%20the%20port%20group > . > 2. 
> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/14/pdf/bare_metal_provisioning/Red_Hat_OpenStack_Platform-14-Bare_Metal_Provisioning-en-US.pdf > > *Setup used:* > Baremetal on StarlingX Setup : > > https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/ironic_install.html > > ¶ > > > -- > ~ Lokendra > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Dec 12 22:19:14 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 12 Dec 2020 16:19:14 -0600 Subject: [all][interop] Reforming the refstack maintainers team In-Reply-To: <6259911.4vTCxPXJkl@whitebase.usersys.redhat.com> References: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> <3078203.oiGErgHkdL@whitebase.usersys.redhat.com> <6259911.4vTCxPXJkl@whitebase.usersys.redhat.com> Message-ID: <17659095daf.cb57c9c113640.7300642118348305795@ghanshyammann.com> ---- On Fri, 11 Dec 2020 16:43:08 -0600 Luigi Toscano wrote ---- > On Friday, 11 December 2020 22:56:40 CET Goutham Pacha Ravi wrote: > > On Fri, Dec 11, 2020 at 1:27 PM Luigi Toscano wrote: > > > On Friday, 11 December 2020 20:23:19 CET Ghanshyam Mann wrote: > > > > I would like to call for more volunteers (new or existing ones), if you > > > > > > are > > > > > > > interested to help please do reply to this email. The role is to > > > > maintain > > > > the source code of the below repos. I will propose the ACL changes in > > > > > > infra > > > > > > > sometime next Friday (18th dec) or so. > > > > > > > > For easy maintenance, we thought of merging the below repo core group > > > > > > into a > > > > > > > single group called 'refstack-core' > > > > > > > > - openstack/python-tempestconf > > > > - openstack/refstack > > > > - openstack/refstack-client > > > > - x/ansible-role-refstack-client (moving to osf/ via > > > > https://review.opendev.org/765787) > > > > > > I'm still around, and while I haven't done too much work on refstack > > > itself, > > > I've helped merging several patches lately. I'm also definitely very > > > active on > > > python-tempestconf, and I still plan to be around. > > > > That's great, thanks Luigi. Since you're in the refstack-core group [1], > > maybe you can add gmann to adjust memberships? > > I've added gmann, but also Martin, Vida and you, and added the interop-core > group to refstack-core. > Thanks Luigi, I will wait for a few more days if any volunteer shows up and propose the changes accordingly. -gmann > > -- > Luigi > > > > From juliaashleykreger at gmail.com Sat Dec 12 23:58:42 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Sat, 12 Dec 2020 15:58:42 -0800 Subject: port Groups (Bonds) Configuration in Openstack Baremetal Provisioning. In-Reply-To: References: Message-ID: Greetings Lokendra, Portgroups are a bit of a complex item, unfortunately. They hold a dual purpose of representing what is desired and what exists. The delineation between those states largely being what tooling is being loaded into Neutron in the form of a portgroup supporting ML2 drivers. If portgroups are pre-configured on a switch side, they can be represented in Ironic and the virtual port (VIF) binding information can be transmitted to Neutron with this information. If an ML2 driver is loaded that understands the portgroup configuration, then it can also configure the switch to represent this port. I can't tell if you're asking about pre-configured bonds or if you're asking about ML2 enabled bonds. 
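To make the pre-configured (static) case concrete, the ironic side of it is roughly the following; the commands track the ironic admin documentation for recent releases, but the MACs, UUIDs and bonding mode are placeholders and the exact flags are worth checking against the client version in your deployment:

  # 1) create the portgroup on the node
  $ openstack baremetal port group create --node $NODE_UUID \
      --name bond0 --address 52:54:00:aa:bb:cc \
      --mode 802.3ad --support-standalone-ports

  # 2) create (or update) the physical ports that belong to it
  $ openstack baremetal port create 52:54:00:aa:bb:01 \
      --node $NODE_UUID --port-group $PORTGROUP_UUID
  $ openstack baremetal port create 52:54:00:aa:bb:02 \
      --node $NODE_UUID --port-group $PORTGROUP_UUID

With the flat network_interface this only records the bond in ironic and exposes it through the instance network metadata; the matching LAG still has to exist on the switch side already.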
I don't know if StarlingX ships with/uses any of the Neutron ML2 which support such functionality, and that may be a good question for the StarlingX community specifically. If you're just trying to express pre-configured portgroups, it looks like Cloud-init since 0.7.7 has apparently supported parsing and setting up the portgroup within the deployed operating system. That is, if it is present in that operating system image you deploy. Please note, caution should be taken with LACP[0] and various switch configuration tunables. They make network booting a bit complicated and each switch vendor has somewhat different behavior and configuration available to help navigate such situations. -Julia [0] https://docs.openstack.org/ironic/latest/admin/troubleshooting.html#why-does-x-issue-occur-when-i-am-using-lacp-bonding-with-ipxe On Sat, Dec 12, 2020 at 11:28 AM Lokendra Rathour wrote: > Hi , > Any support here. > Any input would help. > > Best Regards, > Lokendra > > On Fri, 11 Dec 2020, 18:10 Lokendra Rathour, > wrote: > >> Hello, >> I am trying to install a baremetal on existing openstack setup. During >> the time of installation, is it possible to have bonds already setup when >> the baremetal nodes comes up. >> >> I was trying to work on the : >> Port groups configuration in the Bare Metal service >> >> *Documents referred :* >> >> 1. >> https://docs.openstack.org/ironic/pike/admin/portgroups.html#:~:text=A%20port%20group%20can%20also,attached%20to%20the%20port%20group >> . >> 2. >> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/14/pdf/bare_metal_provisioning/Red_Hat_OpenStack_Platform-14-Bare_Metal_Provisioning-en-US.pdf >> >> *Setup used:* >> Baremetal on StarlingX Setup : >> >> https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/ironic_install.html >> >> ¶ >> >> >> -- >> ~ Lokendra >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From lokendrarathour at gmail.com Sun Dec 13 04:39:46 2020 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Sun, 13 Dec 2020 10:09:46 +0530 Subject: port Groups (Bonds) Configuration in Openstack Baremetal Provisioning. In-Reply-To: References: Message-ID: Thanks Julia, My aim here to chk the for any kind of port binding or grouping inorder to achived redundancy at the baremetal port level. So scenarion as desired or which i am trying to bring up is : I have an openstack controller computer setup(simple openstack) On top of this I am provisioning baremetal node . On this baremetal node i have one cable connected to one of the port from the switch. We inform neutron about it before we go for server create. It works and we are able to see the nodes getting added. So on the same server we are trying to enable port binding , which we are not able to do it. If using cloud init which shall be passed during the server created then what can be the standard configuration for it ? Or how to move forward? -lokendra On Sun, 13 Dec 2020, 05:28 Julia Kreger, wrote: > Greetings Lokendra, > > Portgroups are a bit of a complex item, unfortunately. > > They hold a dual purpose of representing what is desired and what exists. > The delineation between those states largely being what tooling is being > loaded into Neutron in the form of a portgroup supporting ML2 drivers. If > portgroups are pre-configured on a switch side, they can be represented in > Ironic and the virtual port (VIF) binding information can be transmitted to > Neutron with this information. 
If an ML2 driver is loaded that understands > the portgroup configuration, then it can also configure the switch to > represent this port. > > I can't tell if you're asking about pre-configured bonds or if you're > asking about ML2 enabled bonds. I don't know if StarlingX ships with/uses > any of the Neutron ML2 which support such functionality, and that may be a > good question for the StarlingX community specifically. If you're just > trying to express pre-configured portgroups, it looks like Cloud-init since > 0.7.7 has apparently supported parsing and setting up the portgroup within > the deployed operating system. That is, if it is present in that operating > system image you deploy. Please note, caution should be taken with LACP[0] > and various switch configuration tunables. They make network booting a bit > complicated and each switch vendor has somewhat different behavior and > configuration available to help navigate such situations. > > -Julia > > [0] > https://docs.openstack.org/ironic/latest/admin/troubleshooting.html#why-does-x-issue-occur-when-i-am-using-lacp-bonding-with-ipxe > > On Sat, Dec 12, 2020 at 11:28 AM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Hi , >> Any support here. >> Any input would help. >> >> Best Regards, >> Lokendra >> >> On Fri, 11 Dec 2020, 18:10 Lokendra Rathour, >> wrote: >> >>> Hello, >>> I am trying to install a baremetal on existing openstack setup. During >>> the time of installation, is it possible to have bonds already setup when >>> the baremetal nodes comes up. >>> >>> I was trying to work on the : >>> Port groups configuration in the Bare Metal service >>> >>> *Documents referred :* >>> >>> 1. >>> https://docs.openstack.org/ironic/pike/admin/portgroups.html#:~:text=A%20port%20group%20can%20also,attached%20to%20the%20port%20group >>> . >>> 2. >>> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/14/pdf/bare_metal_provisioning/Red_Hat_OpenStack_Platform-14-Bare_Metal_Provisioning-en-US.pdf >>> >>> *Setup used:* >>> Baremetal on StarlingX Setup : >>> >>> https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/ironic_install.html >>> >>> ¶ >>> >>> >>> -- >>> ~ Lokendra >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sun Dec 13 10:02:04 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 13 Dec 2020 11:02:04 +0100 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: <20201211231236.6moz4evzigvctwsh@yuggoth.org> References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> <17652961b73.11c186083102611.2301739328973440930@ghanshyammann.com> <20201211231236.6moz4evzigvctwsh@yuggoth.org> Message-ID: On Sat, Dec 12, 2020 at 12:13 AM Jeremy Stanley wrote: > To be honest, if I had my way, test-requirements.txt files would die > in a fire. You have my full support in this endeavor. -yoctozepto From ltoscano at redhat.com Sun Dec 13 13:39:58 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Sun, 13 Dec 2020 14:39:58 +0100 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: <20201211231236.6moz4evzigvctwsh@yuggoth.org> References: <20201211231236.6moz4evzigvctwsh@yuggoth.org> Message-ID: <3326204.V25eIC5XRa@whitebase.usersys.redhat.com> On Saturday, 12 December 2020 00:12:36 CET Jeremy Stanley wrote: > On 2020-12-11 20:38:30 +0000 (+0000), Sorin Sbarnea wrote: > [...] 
> > > Regarding decoupling linting from test-requirements: yes! This was > > already done by some when conflicts appeared. For old branches I > > personally do not care much even if maintainers decide to disable > > linting, their main benefit is on main branches. > > [...] > > To be honest, if I had my way, test-requirements.txt files would die > in a fire. Sure it's a little more work to be specific about the > individual requirements for each of your testenvs in tox.ini, but > the payoff is that people aren't needlessly installing bandit when > they run flake8 (for example). The thing we got into the PTI about > using a separate doc/requirements.txt is a nice compromise in that > direction, at least. Wouldn't this mean tracking requirements into two different kind of places:the main requirements.txt file, which is still going to be needed even for tests, and the tox environment definitions? -- Luigi From fungi at yuggoth.org Sun Dec 13 16:33:39 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 13 Dec 2020 16:33:39 +0000 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: <3326204.V25eIC5XRa@whitebase.usersys.redhat.com> References: <20201211231236.6moz4evzigvctwsh@yuggoth.org> <3326204.V25eIC5XRa@whitebase.usersys.redhat.com> Message-ID: <20201213163338.ynkd7mrxqbok5eos@yuggoth.org> On 2020-12-13 14:39:58 +0100 (+0100), Luigi Toscano wrote: > On Saturday, 12 December 2020 00:12:36 CET Jeremy Stanley wrote: > > On 2020-12-11 20:38:30 +0000 (+0000), Sorin Sbarnea wrote: > > [...] > > > Regarding decoupling linting from test-requirements: yes! This was > > > already done by some when conflicts appeared. For old branches I > > > personally do not care much even if maintainers decide to disable > > > linting, their main benefit is on main branches. > > [...] > > > > To be honest, if I had my way, test-requirements.txt files would die > > in a fire. Sure it's a little more work to be specific about the > > individual requirements for each of your testenvs in tox.ini, but > > the payoff is that people aren't needlessly installing bandit when > > they run flake8 (for example). The thing we got into the PTI about > > using a separate doc/requirements.txt is a nice compromise in that > > direction, at least. > > Wouldn't this mean tracking requirements into two different kind > of places:the main requirements.txt file, which is still going to > be needed even for tests, and the tox environment definitions? Technically we already do. The requirements.txt file contains actual runtime Python dependencies of the software (technically setup_requires in Setuptools parlance). Then we have this vague test-requirements.txt file which installs everything under the sun a test might want, including the kitchen sink. Tox doesn't reuse one virtualenv for multiple testenv definitions, it creates a separate one for each, so for example... In the nova repo, if you `tox -e bandit` or `tox -e pep8` it's going to install coverage, psycopg2, PyMySQL, requests, python-barbicanclient, python-ironicclient, and a whole host of other stuff, including the entire transitive dependency set for everything in there, rather than just the one tool it needs to run. I can't even run the pep8 testenv locally because to do that I apparently need a Python package named zVMCloudConnector which wants root access to create files like /lib/systemd/system/sdkserver.service and /etc/sudoers.d/sudoers-zvmsdk and /var/lib/zvmsdk/* and /etc/zvmsdk/* in my system. WHAT?!? 
Do nova's developers actually ever run any of this themselves? Okay, so that one's actually in requirements.txt (might be a good candidate for a separate extras in the setup.cfg instead), but seriously, it's trying to install 182 packages (present count on master) just to do a "quick" style check, and the resulting .tox created from that is 319MB in size. How is that in any way sane? If I tweak the testenv:pep8 definition in tox.ini to set deps=flake8,hacking,mypy and and usedevelop=False, and set skipsdist=True in the general tox section, it installs a total of 9 packages for a 36MB .tox directory. It's an extreme example, sure, but remember this is also happening in CI for each patch uploaded, and this setup cost is incurred every time in that context. This is already solved in a few places in the nova repo, in different ways. One is the docs testenv, which installs doc/requirements.txt (currently 10 mostly Sphinx-related entries) instead of combining all that into test-requirements.txt too. Another is the osprofiler extra in setup.cfg allowing you to `pip install nova[osprofiler]` to get that specific dependency. Yet still another is the bindep testenv, which explicitly declares deps=bindep and so installs absolutely nothing else (save bindep's own dependencies)... or, well, it would except skipsdist got set to False by https://review.openstack.org/622972 making that testenv effectively pointless because now `tox -e bindep` has to install nova before it can tell you what packages you're missing to be able to install nova. *sigh* So anyway, there's a lot of opportunity for improvement, and that's just in nova, I'm sure there are similar situations throughout many of our projects. Using a test-requirements.txt file as a dumping ground for every last package any tox testenv could want may be convenient for tracking things, but it's far from convenient to actually use. The main thing we risk losing is that the requirements-check job currently reports whether entries in test-requirements.txt are compatible with the global upper-constraints.txt in openstack/requirements, so extending that to check dependencies declared in tox.ini or in package extras or additional external requirements lists would be needed if we wanted to preserve that capability. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jlandersen at imada.sdu.dk Sun Dec 13 10:43:35 2020 From: jlandersen at imada.sdu.dk (Jakob Lykke Andersen) Date: Sun, 13 Dec 2020 11:43:35 +0100 Subject: bindep, pacman, and PyPI release Message-ID: Dear developers, I'm trying to use bindep on Arch but hitting a problem with the output handling. It seems that 'pacman -Q' may output warning lines if you have a file with the same name as the package. E.g.,: $ mkdir cmake $ pacman -Q cmake error: package 'cmake' was not found warning: 'cmake' is a file, you might want to use -p/--file. The current detection assumes the "was not found" is the last part of the output. Applying the patch below seems to fix it. After applying this patch, or whichever change you deem reasonable to fix the issue, it would be great if you could make a new release on PyPI. 
Thanks, Jakob diff --git a/bindep/depends.py b/bindep/depends.py index bb9553f..cacc863 100644 --- a/bindep/depends.py +++ b/bindep/depends.py @@ -558,7 +558,8 @@ class Pacman(Platform): stderr=subprocess.STDOUT).decode(getpreferredencoding(False))          except subprocess.CalledProcessError as e:              eoutput = e.output.decode(getpreferredencoding(False)) -            if e.returncode == 1 and eoutput.strip().endswith('was not found'): +            s = "error: package '{}' was not found".format(pkg_name) +            if e.returncode == 1 and s in eoutput:                  return None              raise          # output looks like From juliaashleykreger at gmail.com Sun Dec 13 18:37:41 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Sun, 13 Dec 2020 10:37:41 -0800 Subject: port Groups (Bonds) Configuration in Openstack Baremetal Provisioning. In-Reply-To: References: Message-ID: On Sat, Dec 12, 2020 at 8:39 PM Lokendra Rathour wrote: > Thanks Julia, > My aim here to chk the for any kind of port binding or grouping inorder to > achived redundancy at the baremetal port level. > So scenarion as desired or which i am trying to bring up is : > I have an openstack controller computer setup(simple openstack) > On top of this I am provisioning baremetal node . On this baremetal node i > have one cable connected to one of the port from the switch. We inform > neutron about it before we go for server create. It works and we are able > to see the nodes getting added. > Added to what? So on the same server we are trying to enable port binding , which we are > not able to do it. > So you would need a second cable attached to the same switch. If already connected, the node, and the switch side is pre-configured, you should: 1) Create the portgroup definition in ironic 2) create/update the physical ports in ironic attached to the portgroup. If using cloud init which shall be passed during the server created then > what can be the standard configuration for it ? Or how to move forward? > When ironic goes to perform the port bind, it provides the binding information to Neutron. This behavior is slightly different between the ironic node network_interface driver, ``flat`` or ``neutron``. You likely will be using flat if your portgroup is pre-configured. Some Fujitsu[0] folks put together a slide deck to share back in 2017, which walks through the steps. The commands are a little outdated, but you will hopefully get the idea for a static configuration. [0]: https://www.slideshare.net/vietstack/portgroups-support-in-ironic > > -lokendra > > > On Sun, 13 Dec 2020, 05:28 Julia Kreger, > wrote: > >> Greetings Lokendra, >> >> Portgroups are a bit of a complex item, unfortunately. >> >> They hold a dual purpose of representing what is desired and what exists. >> The delineation between those states largely being what tooling is being >> loaded into Neutron in the form of a portgroup supporting ML2 drivers. If >> portgroups are pre-configured on a switch side, they can be represented in >> Ironic and the virtual port (VIF) binding information can be transmitted to >> Neutron with this information. If an ML2 driver is loaded that understands >> the portgroup configuration, then it can also configure the switch to >> represent this port. >> >> I can't tell if you're asking about pre-configured bonds or if you're >> asking about ML2 enabled bonds. 
I don't know if StarlingX ships with/uses >> any of the Neutron ML2 which support such functionality, and that may be a >> good question for the StarlingX community specifically. If you're just >> trying to express pre-configured portgroups, it looks like Cloud-init since >> 0.7.7 has apparently supported parsing and setting up the portgroup within >> the deployed operating system. That is, if it is present in that operating >> system image you deploy. Please note, caution should be taken with LACP[0] >> and various switch configuration tunables. They make network booting a bit >> complicated and each switch vendor has somewhat different behavior and >> configuration available to help navigate such situations. >> >> -Julia >> >> [0] >> https://docs.openstack.org/ironic/latest/admin/troubleshooting.html#why-does-x-issue-occur-when-i-am-using-lacp-bonding-with-ipxe >> >> On Sat, Dec 12, 2020 at 11:28 AM Lokendra Rathour < >> lokendrarathour at gmail.com> wrote: >> >>> Hi , >>> Any support here. >>> Any input would help. >>> >>> Best Regards, >>> Lokendra >>> >>> On Fri, 11 Dec 2020, 18:10 Lokendra Rathour, >>> wrote: >>> >>>> Hello, >>>> I am trying to install a baremetal on existing openstack setup. During >>>> the time of installation, is it possible to have bonds already setup when >>>> the baremetal nodes comes up. >>>> >>>> I was trying to work on the : >>>> Port groups configuration in the Bare Metal service >>>> >>>> *Documents referred :* >>>> >>>> 1. >>>> https://docs.openstack.org/ironic/pike/admin/portgroups.html#:~:text=A%20port%20group%20can%20also,attached%20to%20the%20port%20group >>>> . >>>> 2. >>>> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/14/pdf/bare_metal_provisioning/Red_Hat_OpenStack_Platform-14-Bare_Metal_Provisioning-en-US.pdf >>>> >>>> *Setup used:* >>>> Baremetal on StarlingX Setup : >>>> >>>> https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/ironic_install.html >>>> >>>> ¶ >>>> >>>> >>>> -- >>>> ~ Lokendra >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From lokendrarathour at gmail.com Sun Dec 13 18:50:15 2020 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Mon, 14 Dec 2020 00:20:15 +0530 Subject: port Groups (Bonds) Configuration in Openstack Baremetal Provisioning. In-Reply-To: References: Message-ID: HI Julia, Thanks for your email. I have replied inline with [loke] tag. Will try the shared inputs and will let you know. -Lokendra On Mon, 14 Dec 2020, 00:07 Julia Kreger, wrote: > > > On Sat, Dec 12, 2020 at 8:39 PM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Thanks Julia, >> My aim here to chk the for any kind of port binding or grouping inorder >> to achived redundancy at the baremetal port level. >> So scenarion as desired or which i am trying to bring up is : >> I have an openstack controller computer setup(simple openstack) >> On top of this I am provisioning baremetal node . On this baremetal node >> i have one cable connected to one of the port from the switch. We inform >> neutron about it before we go for server create. It works and we are able >> to see the nodes getting added. >> > > Added to what? > [Loke] - added to the environment of openstack as a baremetal server. As said it works well. New that we are looking for is what you have answered i think. Will try the same and let you know. > So on the same server we are trying to enable port binding , which we are >> not able to do it. 
>> > > So you would need a second cable attached to the same switch. If already > connected, the node, and the switch side is pre-configured, you should: > > 1) Create the portgroup definition in ironic > 2) create/update the physical ports in ironic attached to the portgroup. > > If using cloud init which shall be passed during the server created then >> what can be the standard configuration for it ? Or how to move forward? >> > > When ironic goes to perform the port bind, it provides the binding > information to Neutron. This behavior is slightly different between the > ironic node network_interface driver, ``flat`` or ``neutron``. You likely > will be using flat if your portgroup is pre-configured. Some Fujitsu[0] > folks put together a slide deck to share back in 2017, which walks through > the steps. The commands are a little outdated, but you will hopefully get > the idea for a static configuration. > > [0]: https://www.slideshare.net/vietstack/portgroups-support-in-ironic > >> >> -lokendra >> >> >> On Sun, 13 Dec 2020, 05:28 Julia Kreger, >> wrote: >> >>> Greetings Lokendra, >>> >>> Portgroups are a bit of a complex item, unfortunately. >>> >>> They hold a dual purpose of representing what is desired and what >>> exists. The delineation between those states largely being what tooling is >>> being loaded into Neutron in the form of a portgroup supporting ML2 >>> drivers. If portgroups are pre-configured on a switch side, they can be >>> represented in Ironic and the virtual port (VIF) binding information can be >>> transmitted to Neutron with this information. If an ML2 driver is loaded >>> that understands the portgroup configuration, then it can also configure >>> the switch to represent this port. >>> >>> I can't tell if you're asking about pre-configured bonds or if you're >>> asking about ML2 enabled bonds. I don't know if StarlingX ships with/uses >>> any of the Neutron ML2 which support such functionality, and that may be a >>> good question for the StarlingX community specifically. If you're just >>> trying to express pre-configured portgroups, it looks like Cloud-init since >>> 0.7.7 has apparently supported parsing and setting up the portgroup within >>> the deployed operating system. That is, if it is present in that operating >>> system image you deploy. Please note, caution should be taken with LACP[0] >>> and various switch configuration tunables. They make network booting a bit >>> complicated and each switch vendor has somewhat different behavior and >>> configuration available to help navigate such situations. >>> >>> -Julia >>> >>> [0] >>> https://docs.openstack.org/ironic/latest/admin/troubleshooting.html#why-does-x-issue-occur-when-i-am-using-lacp-bonding-with-ipxe >>> >>> On Sat, Dec 12, 2020 at 11:28 AM Lokendra Rathour < >>> lokendrarathour at gmail.com> wrote: >>> >>>> Hi , >>>> Any support here. >>>> Any input would help. >>>> >>>> Best Regards, >>>> Lokendra >>>> >>>> On Fri, 11 Dec 2020, 18:10 Lokendra Rathour, >>>> wrote: >>>> >>>>> Hello, >>>>> I am trying to install a baremetal on existing openstack setup. During >>>>> the time of installation, is it possible to have bonds already setup when >>>>> the baremetal nodes comes up. >>>>> >>>>> I was trying to work on the : >>>>> Port groups configuration in the Bare Metal service >>>>> >>>>> *Documents referred :* >>>>> >>>>> 1. >>>>> https://docs.openstack.org/ironic/pike/admin/portgroups.html#:~:text=A%20port%20group%20can%20also,attached%20to%20the%20port%20group >>>>> . >>>>> 2. 
>>>>> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/14/pdf/bare_metal_provisioning/Red_Hat_OpenStack_Platform-14-Bare_Metal_Provisioning-en-US.pdf >>>>> >>>>> *Setup used:* >>>>> Baremetal on StarlingX Setup : >>>>> >>>>> https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/ironic_install.html >>>>> >>>>> ¶ >>>>> >>>>> >>>>> -- >>>>> ~ Lokendra >>>>> >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Mon Dec 14 09:43:24 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Mon, 14 Dec 2020 10:43:24 +0100 Subject: [all][SDK][zuul][DIB] CentOS-8 Message-ID: <80E31302-CDB1-4ABD-93D7-29D791933875@gmail.com> Hi everybody, Over around a week we have patches in SDK being blocked by functional test of the nodepool (https://123b844dd0bc940cd071-d2975a8f065f970ca29e743c021b8a36.ssl.cf1.rackcdn.com/766757/1/check/nodepool-functional-container-openstack-siblings/5f6f3f5/nodepool/builds/test-image-0000000004.log ) trying to build CentOS-8 image blocked by https://review.opendev.org/c/openstack/diskimage-builder/+/765963 caused by CentOS rename their packages (the DIB change helps to fix the issue - just verified locally). Normally I would say OK, let’s merge DIB fix and everything is fine, but due to a recent announcement of RedHat (IBM) to discontinue CentOS (non stream) I wonder, whether the DIB change as such is useful at all. At least for the moment I see a perhaps more appropriate fix in nodepool to build some other image (or Fedora or at least to use CentOS stream). DIB should be fixed as well, that’s clear, but maybe differently. Any way we need to come up with solution quickly, since our CI is blocked affecting quite some projects. P.S. Actually that issue is not only affecting our tests, but every Zuul installation trying to build CentOS-8 images. Any opinions? Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Mon Dec 14 09:54:53 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Mon, 14 Dec 2020 09:54:53 +0000 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: <20201213163338.ynkd7mrxqbok5eos@yuggoth.org> References: <20201211231236.6moz4evzigvctwsh@yuggoth.org> <3326204.V25eIC5XRa@whitebase.usersys.redhat.com> <20201213163338.ynkd7mrxqbok5eos@yuggoth.org> Message-ID: <20201214095453.5wvb7xoucifagudl@lyarwood-laptop.usersys.redhat.com> On 13-12-20 16:33:39, Jeremy Stanley wrote: > On 2020-12-13 14:39:58 +0100 (+0100), Luigi Toscano wrote: > > On Saturday, 12 December 2020 00:12:36 CET Jeremy Stanley wrote: > > > On 2020-12-11 20:38:30 +0000 (+0000), Sorin Sbarnea wrote: > > > [...] > > > > Regarding decoupling linting from test-requirements: yes! This was > > > > already done by some when conflicts appeared. For old branches I > > > > personally do not care much even if maintainers decide to disable > > > > linting, their main benefit is on main branches. > > > [...] > > > > > > To be honest, if I had my way, test-requirements.txt files would die > > > in a fire. Sure it's a little more work to be specific about the > > > individual requirements for each of your testenvs in tox.ini, but > > > the payoff is that people aren't needlessly installing bandit when > > > they run flake8 (for example). The thing we got into the PTI about > > > using a separate doc/requirements.txt is a nice compromise in that > > > direction, at least. 
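(For anyone following along, the pattern being discussed looks roughly like this in tox.ini; this is a sketch only, and the env names, pins and commands here are illustrative rather than copied from any particular repo:

[testenv:docs]
# keep the documentation toolchain out of test-requirements.txt
deps = -r{toxinidir}/doc/requirements.txt
commands = sphinx-build -W -b html doc/source doc/build/html

[testenv:pep8]
# a pure style check only needs the linters, not the project's runtime deps
skip_install = true
deps =
  flake8
  hacking
commands = flake8 {posargs}

With skip_install = true the project itself is never installed into that venv, so nothing from requirements.txt gets pulled in just to run a linter.)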
> > > > Wouldn't this mean tracking requirements into two different kind > > of places:the main requirements.txt file, which is still going to > > be needed even for tests, and the tox environment definitions? > > Technically we already do. The requirements.txt file contains actual > runtime Python dependencies of the software (technically > setup_requires in Setuptools parlance). Then we have this vague > test-requirements.txt file which installs everything under the sun > a test might want, including the kitchen sink. Tox doesn't reuse one > virtualenv for multiple testenv definitions, it creates a separate > one for each, so for example... That isn't technically true within Nova, multiple tox envs use the {toxworkdir}/shared envdir for the virtualenv. mypy, pep8, fast8, genconfig, genpolicy, cover, debug and bandit. > In the nova repo, if you `tox -e bandit` or `tox -e pep8` it's going > to install coverage, psycopg2, PyMySQL, requests, > python-barbicanclient, python-ironicclient, and a whole host of > other stuff, including the entire transitive dependency set for > everything in there, rather than just the one tool it needs to run. Yup that's pointless. > I can't even run the pep8 testenv locally because to do that I > apparently need a Python package named zVMCloudConnector which wants > root access to create files like > /lib/systemd/system/sdkserver.service and > /etc/sudoers.d/sudoers-zvmsdk and /var/lib/zvmsdk/* and > /etc/zvmsdk/* in my system. WHAT?!? Do nova's developers actually > ever run any of this themselves? ... Which version of that package is the pep8 env pulling in for you? I don't see any such issues with zVMCloudConnector==1.4.1 locally on Fedora 33, tox 3.19.0, pip 20.2.2 etc. Would you mind writing up a launchpad bug for this? > Okay, so that one's actually in requirements.txt (might be a good > candidate for a separate extras in the setup.cfg instead), but > seriously, it's trying to install 182 packages (present count on > master) just to do a "quick" style check, and the resulting .tox > created from that is 319MB in size. How is that in any way sane? If > I tweak the testenv:pep8 definition in tox.ini to set > deps=flake8,hacking,mypy and and usedevelop=False, and set > skipsdist=True in the general tox section, it installs a total of 9 > packages for a 36MB .tox directory. It's an extreme example, sure, > but remember this is also happening in CI for each patch uploaded, > and this setup cost is incurred every time in that context. EWww yeah this is awful. > This is already solved in a few places in the nova repo, in > different ways. One is the docs testenv, which installs > doc/requirements.txt (currently 10 mostly Sphinx-related entries) > instead of combining all that into test-requirements.txt too. > Another is the osprofiler extra in setup.cfg allowing you to `pip > install nova[osprofiler]` to get that specific dependency. Yet still > another is the bindep testenv, which explicitly declares deps=bindep > and so installs absolutely nothing else (save bindep's own > dependencies)... or, well, it would except skipsdist got set to > False by https://review.openstack.org/622972 making that testenv > effectively pointless because now `tox -e bindep` has to install > nova before it can tell you what packages you're missing to be able > to install nova. *sigh* > > So anyway, there's a lot of opportunity for improvement, and that's > just in nova, I'm sure there are similar situations throughout many > of our projects. 
Using a test-requirements.txt file as a dumping > ground for every last package any tox testenv could want may be > convenient for tracking things, but it's far from convenient to > actually use. The main thing we risk losing is that the > requirements-check job currently reports whether entries in > test-requirements.txt are compatible with the global > upper-constraints.txt in openstack/requirements, so extending that > to check dependencies declared in tox.ini or in package extras or > additional external requirements lists would be needed if we wanted > to preserve that capability. Gibi, should we track all of this in a few launchpad bugs for Nova? Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From thierry at openstack.org Mon Dec 14 09:58:47 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 14 Dec 2020 10:58:47 +0100 Subject: [all][interop] Reforming the refstack maintainers team In-Reply-To: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> References: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> Message-ID: <013e3ef9-82a8-213c-9bc2-a9ac13d835b5@openstack.org> Ghanshyam Mann wrote: > As Goutham mentioned in a separate ML thread[2] that there is no active maintainer for refstack repo > which we discussed in today's interop meeting[1]. We had a few volunteers who can help to maintain the > refstack and other interop repo which is good news. > > I would like to call for more volunteers (new or existing ones), if you are interested to help please do reply > to this email. The role is to maintain the source code of the below repos. I will propose the ACL changes in infra sometime > next Friday (18th dec) or so. > [...] Thanks Ghanshyam, Goutham and all volunteers for taking this over! Last time we discussed the future of RefStack it was pretty clear that our ecosystem values its existence and how it helps asserting products compatibility with a set of APIs. While I can't carve the time to take a core reviewer role on this, I intend to post patches wherever necessary when issues are reported by RefStack users trying to submit results. Cheers, -- Thierry Carrez (ttx) From balazs.gibizer at est.tech Mon Dec 14 10:07:01 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 14 Dec 2020 11:07:01 +0100 Subject: [stable][requirements][neutron] Capping pip in stable =?UTF-8?Q?branches=0D=0A?= or not In-Reply-To: <20201214095453.5wvb7xoucifagudl@lyarwood-laptop.usersys.redhat.com> References: <20201211231236.6moz4evzigvctwsh@yuggoth.org> <3326204.V25eIC5XRa@whitebase.usersys.redhat.com> <20201213163338.ynkd7mrxqbok5eos@yuggoth.org> <20201214095453.5wvb7xoucifagudl@lyarwood-laptop.usersys.redhat.com> Message-ID: On Mon, Dec 14, 2020 at 09:54, Lee Yarwood wrote: > On 13-12-20 16:33:39, Jeremy Stanley wrote: >> On 2020-12-13 14:39:58 +0100 (+0100), Luigi Toscano wrote: >> > On Saturday, 12 December 2020 00:12:36 CET Jeremy Stanley wrote: >> > > On 2020-12-11 20:38:30 +0000 (+0000), Sorin Sbarnea wrote: >> > > [...] >> > > > Regarding decoupling linting from test-requirements: yes! >> This was >> > > > already done by some when conflicts appeared. For old >> branches I >> > > > personally do not care much even if maintainers decide to >> disable >> > > > linting, their main benefit is on main branches. >> > > [...] 
>> > > >> > > To be honest, if I had my way, test-requirements.txt files >> would die >> > > in a fire. Sure it's a little more work to be specific about the >> > > individual requirements for each of your testenvs in tox.ini, >> but >> > > the payoff is that people aren't needlessly installing bandit >> when >> > > they run flake8 (for example). The thing we got into the PTI >> about >> > > using a separate doc/requirements.txt is a nice compromise in >> that >> > > direction, at least. >> > >> > Wouldn't this mean tracking requirements into two different kind >> > of places:the main requirements.txt file, which is still going to >> > be needed even for tests, and the tox environment definitions? >> >> Technically we already do. The requirements.txt file contains actual >> runtime Python dependencies of the software (technically >> setup_requires in Setuptools parlance). Then we have this vague >> test-requirements.txt file which installs everything under the sun >> a test might want, including the kitchen sink. Tox doesn't reuse one >> virtualenv for multiple testenv definitions, it creates a separate >> one for each, so for example... > > That isn't technically true within Nova, multiple tox envs use the > {toxworkdir}/shared envdir for the virtualenv. > > mypy, pep8, fast8, genconfig, genpolicy, cover, debug and bandit. > >> In the nova repo, if you `tox -e bandit` or `tox -e pep8` it's going >> to install coverage, psycopg2, PyMySQL, requests, >> python-barbicanclient, python-ironicclient, and a whole host of >> other stuff, including the entire transitive dependency set for >> everything in there, rather than just the one tool it needs to run. > > Yup that's pointless. > >> I can't even run the pep8 testenv locally because to do that I >> apparently need a Python package named zVMCloudConnector which wants >> root access to create files like >> /lib/systemd/system/sdkserver.service and >> /etc/sudoers.d/sudoers-zvmsdk and /var/lib/zvmsdk/* and >> /etc/zvmsdk/* in my system. WHAT?!? Do nova's developers actually >> ever run any of this themselves? > > ... > > Which version of that package is the pep8 env pulling in for you? > > I don't see any such issues with zVMCloudConnector==1.4.1 locally on > Fedora 33, tox 3.19.0, pip 20.2.2 etc. > > Would you mind writing up a launchpad bug for this? > >> Okay, so that one's actually in requirements.txt (might be a good >> candidate for a separate extras in the setup.cfg instead), but >> seriously, it's trying to install 182 packages (present count on >> master) just to do a "quick" style check, and the resulting .tox >> created from that is 319MB in size. How is that in any way sane? If >> I tweak the testenv:pep8 definition in tox.ini to set >> deps=flake8,hacking,mypy and and usedevelop=False, and set >> skipsdist=True in the general tox section, it installs a total of 9 >> packages for a 36MB .tox directory. It's an extreme example, sure, >> but remember this is also happening in CI for each patch uploaded, >> and this setup cost is incurred every time in that context. > > EWww yeah this is awful. > >> This is already solved in a few places in the nova repo, in >> different ways. One is the docs testenv, which installs >> doc/requirements.txt (currently 10 mostly Sphinx-related entries) >> instead of combining all that into test-requirements.txt too. >> Another is the osprofiler extra in setup.cfg allowing you to `pip >> install nova[osprofiler]` to get that specific dependency. 
Yet still >> another is the bindep testenv, which explicitly declares deps=bindep >> and so installs absolutely nothing else (save bindep's own >> dependencies)... or, well, it would except skipsdist got set to >> False by https://review.openstack.org/622972 making that testenv >> effectively pointless because now `tox -e bindep` has to install >> nova before it can tell you what packages you're missing to be able >> to install nova. *sigh* >> >> So anyway, there's a lot of opportunity for improvement, and that's >> just in nova, I'm sure there are similar situations throughout many >> of our projects. Using a test-requirements.txt file as a dumping >> ground for every last package any tox testenv could want may be >> convenient for tracking things, but it's far from convenient to >> actually use. The main thing we risk losing is that the >> requirements-check job currently reports whether entries in >> test-requirements.txt are compatible with the global >> upper-constraints.txt in openstack/requirements, so extending that >> to check dependencies declared in tox.ini or in package extras or >> additional external requirements lists would be needed if we wanted >> to preserve that capability. > > Gibi, should we track all of this in a few launchpad bugs for Nova? Sure, we can open couple of low prio low-hanging-fruit bugs for these. Cheers, gibi > > Cheers, > > -- > Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 > F672 2D76 From thierry at openstack.org Mon Dec 14 10:19:31 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 14 Dec 2020 11:19:31 +0100 Subject: [largescale-sig] Next meeting: December 16, 15utc Message-ID: <3353850c-ba28-408d-b8ff-ec175dc6de4f@openstack.org> Hi everyone, We'll have our last 2020 Large Scale SIG meeting this Wednesday in #openstack-meeting-3 on IRC, at 15UTC. You can doublecheck how it translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20201216T15 Our main topic will be to review the various scaling stages[1] and identify low-hanging-fruit tasks to do a first pass at improving those pages. [1] https://wiki.openstack.org/wiki/Large_Scale_SIG Feel free to add other topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting Talk to you all later, -- Thierry Carrez From chkumar246 at gmail.com Mon Dec 14 10:56:21 2020 From: chkumar246 at gmail.com (Chandan kumar) Date: Mon, 14 Dec 2020 16:26:21 +0530 Subject: [all][interop] Reforming the refstack maintainers team In-Reply-To: <013e3ef9-82a8-213c-9bc2-a9ac13d835b5@openstack.org> References: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> <013e3ef9-82a8-213c-9bc2-a9ac13d835b5@openstack.org> Message-ID: Hello Ghanshyam, On Mon, Dec 14, 2020 at 3:31 PM Thierry Carrez wrote: > > Ghanshyam Mann wrote: > > As Goutham mentioned in a separate ML thread[2] that there is no active maintainer for refstack repo > > which we discussed in today's interop meeting[1]. We had a few volunteers who can help to maintain the > > refstack and other interop repo which is good news. > > > > I would like to call for more volunteers (new or existing ones), if you are interested to help please do reply > > to this email. The role is to maintain the source code of the below repos. I will propose the ACL changes in infra sometime > > next Friday (18th dec) or so. > > [...] > > Thanks Ghanshyam, Goutham and all volunteers for taking this over! 
> > Last time we discussed the future of RefStack it was pretty clear that > our ecosystem values its existence and how it helps asserting products > compatibility with a set of APIs. > > While I can't carve the time to take a core reviewer role on this, I > intend to post patches wherever necessary when issues are reported by > RefStack users trying to submit results. > I am already a part of the refstack-core group. Let me know how I can help there? Thanks, Chandan Kumar From dtantsur at redhat.com Mon Dec 14 12:58:47 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 14 Dec 2020 13:58:47 +0100 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: <20201213163338.ynkd7mrxqbok5eos@yuggoth.org> References: <20201211231236.6moz4evzigvctwsh@yuggoth.org> <3326204.V25eIC5XRa@whitebase.usersys.redhat.com> <20201213163338.ynkd7mrxqbok5eos@yuggoth.org> Message-ID: On Sun, Dec 13, 2020 at 5:36 PM Jeremy Stanley wrote: > On 2020-12-13 14:39:58 +0100 (+0100), Luigi Toscano wrote: > > On Saturday, 12 December 2020 00:12:36 CET Jeremy Stanley wrote: > > > On 2020-12-11 20:38:30 +0000 (+0000), Sorin Sbarnea wrote: > > > [...] > > > > Regarding decoupling linting from test-requirements: yes! This was > > > > already done by some when conflicts appeared. For old branches I > > > > personally do not care much even if maintainers decide to disable > > > > linting, their main benefit is on main branches. > > > [...] > > > > > > To be honest, if I had my way, test-requirements.txt files would die > > > in a fire. Sure it's a little more work to be specific about the > > > individual requirements for each of your testenvs in tox.ini, but > > > the payoff is that people aren't needlessly installing bandit when > > > they run flake8 (for example). The thing we got into the PTI about > > > using a separate doc/requirements.txt is a nice compromise in that > > > direction, at least. > > > > Wouldn't this mean tracking requirements into two different kind > > of places:the main requirements.txt file, which is still going to > > be needed even for tests, and the tox environment definitions? > > Technically we already do. The requirements.txt file contains actual > runtime Python dependencies of the software (technically > setup_requires in Setuptools parlance). Then we have this vague > test-requirements.txt file which installs everything under the sun > a test might want, including the kitchen sink. Tox doesn't reuse one > virtualenv for multiple testenv definitions, it creates a separate > one for each, so for example... > > In the nova repo, if you `tox -e bandit` or `tox -e pep8` it's going > to install coverage, psycopg2, PyMySQL, requests, > python-barbicanclient, python-ironicclient, and a whole host of > other stuff, including the entire transitive dependency set for > everything in there, rather than just the one tool it needs to run. > I can't even run the pep8 testenv locally because to do that I > apparently need a Python package named zVMCloudConnector which wants > root access to create files like > /lib/systemd/system/sdkserver.service and > /etc/sudoers.d/sudoers-zvmsdk and /var/lib/zvmsdk/* and > /etc/zvmsdk/* in my system. WHAT?!? Do nova's developers actually > ever run any of this themselves? 
> > Okay, so that one's actually in requirements.txt (might be a good > candidate for a separate extras in the setup.cfg instead), but > seriously, it's trying to install 182 packages (present count on > master) just to do a "quick" style check, and the resulting .tox > created from that is 319MB in size. How is that in any way sane? If > I tweak the testenv:pep8 definition in tox.ini to set > deps=flake8,hacking,mypy and and usedevelop=False, and set > skipsdist=True in the general tox section, it installs a total of 9 > packages for a 36MB .tox directory. It's an extreme example, sure, > but remember this is also happening in CI for each patch uploaded, > and this setup cost is incurred every time in that context. > Thanks for the hint btw, I'll apply it to our repos. > > This is already solved in a few places in the nova repo, in > different ways. One is the docs testenv, which installs > doc/requirements.txt (currently 10 mostly Sphinx-related entries) > instead of combining all that into test-requirements.txt too. > Another is the osprofiler extra in setup.cfg allowing you to `pip > install nova[osprofiler]` to get that specific dependency. Yet still > another is the bindep testenv, which explicitly declares deps=bindep > and so installs absolutely nothing else (save bindep's own > dependencies)... or, well, it would except skipsdist got set to > False by https://review.openstack.org/622972 making that testenv > effectively pointless because now `tox -e bindep` has to install > nova before it can tell you what packages you're missing to be able > to install nova. *sigh* > > So anyway, there's a lot of opportunity for improvement, and that's > just in nova, I'm sure there are similar situations throughout many > of our projects. Using a test-requirements.txt file as a dumping > ground for every last package any tox testenv could want may be > convenient for tracking things, but it's far from convenient to > actually use. The main thing we risk losing is that the > requirements-check job currently reports whether entries in > test-requirements.txt are compatible with the global > upper-constraints.txt in openstack/requirements, so extending that > to check dependencies declared in tox.ini or in package extras or > additional external requirements lists would be needed if we wanted > to preserve that capability. > -- > Jeremy Stanley > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Dec 14 13:04:25 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 14 Dec 2020 14:04:25 +0100 Subject: [neutron] Bug deputy - 7.12 - 14.12.2020 Message-ID: <20201214122048.4gusnjbcses27cr7@p1.localdomain> Hi, I was bug deputy this last week. Below is summary of new bugs reported during this week: **Critical** https://bugs.launchpad.net/neutron/+bug/1907491 - Patch proposed https://review.opendev.org/c/openstack/neutron/+/766775 https://bugs.launchpad.net/neutron/+bug/1907068 - fixed already https://bugs.launchpad.net/neutron/+bug/1907242 - fixed already by bcafarel **High** https://bugs.launchpad.net/neutron/+bug/1907232 - *unassigned*, related to neutron-dynamic-routing and its API https://bugs.launchpad.net/neutron/+bug/1907411 - only for stable branches. 
Patch proposed https://review.opendev.org/c/openstack/neutron/+/766167 **Medium** https://bugs.launchpad.net/neutron/+bug/1908057 - unassigned https://bugs.launchpad.net/neutron/+bug/1907548 - vlan transparency for Linuxbridge - we needs someone to take a look into that https://bugs.launchpad.net/nova/+bug/1907438 - patch already proposed https://review.opendev.org/c/openstack/neutron/+/766508 https://bugs.launchpad.net/neutron/+bug/1907695 - assigned to ralonsoh, patch https://review.opendev.org/c/openstack/neutron/+/766508 **Not decided yet** https://bugs.launchpad.net/neutron/+bug/1907175 - still triaging, waiting for more info now... - help from the L3 subteam is wlcome :) **Others** https://bugs.launchpad.net/neutron/+bug/1907710 - fix already done https://bugs.launchpad.net/horizon/+bug/1907843 - I marked it as Invalid in Neutron and added Horizon as affected project ## Old bug revived recently https://bugs.launchpad.net/neutron/+bug/1907710 - ralonsoh is on it, patch https://review.opendev.org/c/openstack/neutron/+/766277 -- Slawek Kaplonski Principal Software Engineer Red Hat From bcafarel at redhat.com Mon Dec 14 14:16:43 2020 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 14 Dec 2020 15:16:43 +0100 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: References: <20201211231236.6moz4evzigvctwsh@yuggoth.org> <3326204.V25eIC5XRa@whitebase.usersys.redhat.com> <20201213163338.ynkd7mrxqbok5eos@yuggoth.org> Message-ID: Top-posting to recap all the interesting answers and answer my initial mail. The overall feeling I get is that even with the changes that may be needed to satisfy the new resolver, we should be fine to apply these to stable branches: * lower-constraints was discussed a lot, this is where largest changes were spotted but they are OK given the current use/effectiveness of these jobs (or maybe even dropped soon) * linters can be extracted from test-requirements, to limit linters version bumps. I had quickly tried that for the neutron fix and it had failed in some other job, but I will take another look in a separate patch. Then if needed this change can be squashed with pip requirements fixes in stable branches. * For some recent branches (victoria for example), style fixes are small so this can be just cherry-picked from master to have a working branch * Other requirements bumps should be OK as they actually indicate the proper needed versions now * If we ever hit a change (old third-pary dependency) that cannot be fixed without going over upper-constraints, then we may have to cap pip. Hopefully, this will not be hit. * https://review.opendev.org/q/I8f24b839bf42e2fb9803dc7df3a30ae20cf264eb fix for bandit 1.6.3 may help to limit the impact (I did not retest yet) If all of this sounds good, then I guess it will be time to play whack-a-stable-mole On Mon, 14 Dec 2020 at 14:03, Dmitry Tantsur wrote: > > > On Sun, Dec 13, 2020 at 5:36 PM Jeremy Stanley wrote: > >> On 2020-12-13 14:39:58 +0100 (+0100), Luigi Toscano wrote: >> > On Saturday, 12 December 2020 00:12:36 CET Jeremy Stanley wrote: >> > > On 2020-12-11 20:38:30 +0000 (+0000), Sorin Sbarnea wrote: >> > > [...] >> > > > Regarding decoupling linting from test-requirements: yes! This was >> > > > already done by some when conflicts appeared. For old branches I >> > > > personally do not care much even if maintainers decide to disable >> > > > linting, their main benefit is on main branches. >> > > [...] 
>> > > >> > > To be honest, if I had my way, test-requirements.txt files would die >> > > in a fire. Sure it's a little more work to be specific about the >> > > individual requirements for each of your testenvs in tox.ini, but >> > > the payoff is that people aren't needlessly installing bandit when >> > > they run flake8 (for example). The thing we got into the PTI about >> > > using a separate doc/requirements.txt is a nice compromise in that >> > > direction, at least. >> > >> > Wouldn't this mean tracking requirements into two different kind >> > of places:the main requirements.txt file, which is still going to >> > be needed even for tests, and the tox environment definitions? >> >> Technically we already do. The requirements.txt file contains actual >> runtime Python dependencies of the software (technically >> setup_requires in Setuptools parlance). Then we have this vague >> test-requirements.txt file which installs everything under the sun >> a test might want, including the kitchen sink. Tox doesn't reuse one >> virtualenv for multiple testenv definitions, it creates a separate >> one for each, so for example... >> >> In the nova repo, if you `tox -e bandit` or `tox -e pep8` it's going >> to install coverage, psycopg2, PyMySQL, requests, >> python-barbicanclient, python-ironicclient, and a whole host of >> other stuff, including the entire transitive dependency set for >> everything in there, rather than just the one tool it needs to run. >> I can't even run the pep8 testenv locally because to do that I >> apparently need a Python package named zVMCloudConnector which wants >> root access to create files like >> /lib/systemd/system/sdkserver.service and >> /etc/sudoers.d/sudoers-zvmsdk and /var/lib/zvmsdk/* and >> /etc/zvmsdk/* in my system. WHAT?!? Do nova's developers actually >> ever run any of this themselves? >> >> Okay, so that one's actually in requirements.txt (might be a good >> candidate for a separate extras in the setup.cfg instead), but >> seriously, it's trying to install 182 packages (present count on >> master) just to do a "quick" style check, and the resulting .tox >> created from that is 319MB in size. How is that in any way sane? If >> I tweak the testenv:pep8 definition in tox.ini to set >> deps=flake8,hacking,mypy and and usedevelop=False, and set >> skipsdist=True in the general tox section, it installs a total of 9 >> packages for a 36MB .tox directory. It's an extreme example, sure, >> but remember this is also happening in CI for each patch uploaded, >> and this setup cost is incurred every time in that context. >> > > Thanks for the hint btw, I'll apply it to our repos. > I will have to check that too, making these jobs lighter for CI is always nice! > > >> >> This is already solved in a few places in the nova repo, in >> different ways. One is the docs testenv, which installs >> doc/requirements.txt (currently 10 mostly Sphinx-related entries) >> instead of combining all that into test-requirements.txt too. >> Another is the osprofiler extra in setup.cfg allowing you to `pip >> install nova[osprofiler]` to get that specific dependency. Yet still >> another is the bindep testenv, which explicitly declares deps=bindep >> and so installs absolutely nothing else (save bindep's own >> dependencies)... 
or, well, it would except skipsdist got set to >> False by https://review.openstack.org/622972 making that testenv >> effectively pointless because now `tox -e bindep` has to install >> nova before it can tell you what packages you're missing to be able >> to install nova. *sigh* >> >> So anyway, there's a lot of opportunity for improvement, and that's >> just in nova, I'm sure there are similar situations throughout many >> of our projects. Using a test-requirements.txt file as a dumping >> ground for every last package any tox testenv could want may be >> convenient for tracking things, but it's far from convenient to >> actually use. The main thing we risk losing is that the >> requirements-check job currently reports whether entries in >> test-requirements.txt are compatible with the global >> upper-constraints.txt in openstack/requirements, so extending that >> to check dependencies declared in tox.ini or in package extras or >> additional external requirements lists would be needed if we wanted >> to preserve that capability. >> -- >> Jeremy Stanley >> > > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Mon Dec 14 16:24:12 2020 From: mkopec at redhat.com (Martin Kopec) Date: Mon, 14 Dec 2020 17:24:12 +0100 Subject: [opendev][interop] How to edit Issues link in opendev.org Message-ID: Hi everyone, I have noticed that 'Issues' link right under the repo title (osf/refstack-client) [1] redirects to the old location [2] while it should be [3] after the project got moved to the osf/ namespace by [4]. The same problem is with osf/refstack, osf/python-tempestconf and osf/interop projects. I've found this change [5] in opendev/project-config which tracks the renaming process however it seems it didn't do the trick. What is the process to edit the 'Issues' link in opendev.org [1] https://opendev.org/osf/refstack-client [2] https://storyboard.openstack.org/#!/project/openstack/refstack-client [3] https://storyboard.openstack.org/#!/project/osf/refstack-client [4] https://review.opendev.org/c/openstack/project-config/+/734669 [5] https://review.opendev.org/c/opendev/project-config/+/735211 Thank you, -- Martin Kopec Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Mon Dec 14 16:49:38 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 14 Dec 2020 17:49:38 +0100 Subject: [TripleO] when running cloud deploy again to add more nodes: Database schema file with version 139 doesn't exist Message-ID: Hi all. I am getting some failing components on controller host: bb44eef40a91 10.120.129.222:8787/tripleou/centos-binary-cinder-api:current-tripleo /usr/bin/bootstra... 18 minutes ago Exited (1) 18 minutes ago cinder_api_db_sync and in controller:/var/log/containers/stdouts/ 2020-12-14T16:15:20.025430534+00:00 stdout F Error during database migration: "Database schema file with version 139 doesn't exist." how to bypass this? Thanks -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peljasz at yahoo.co.uk Mon Dec 14 18:02:32 2020 From: peljasz at yahoo.co.uk (lejeczek) Date: Mon, 14 Dec 2020 18:02:32 +0000 Subject: unable to use/assign specific fixed IP - reserved_dhcp_port. References: <185ce2d6-a20e-2540-2c6f-dcd08d228d55.ref@yahoo.co.uk> Message-ID: <185ce2d6-a20e-2540-2c6f-dcd08d228d55@yahoo.co.uk> Hi guys. I'm try to spin up and instance: $ openstack server create midway --flavor C1.vss.tiny --volume midway --nic net-id=fd8659cf-8723-4254-bf88-837b37a50a24,v4-fixed-ip=10.0.0.4 Fixed IP address 10.0.0.4 is already in use on instance reserved_dhcp_port. (HTTP 400) (Request-ID: req-bf34b06c-d0b7-4798-8eeb-9f5bb87348e2) Having no admin access to the stack - how to troubleshoot and ideally resolve it before I have to admins demanding action :) many thanks, L. From gokul.kalal at sooktha.com Mon Dec 14 05:59:23 2020 From: gokul.kalal at sooktha.com (Gokul Kalal) Date: Mon, 14 Dec 2020 11:29:23 +0530 Subject: Openstack related quieries Message-ID: Hi All, here are the few questionnaires for which I am looking for answers. It would be of great help if anybody provides their suggestions on the below questions. 1. Is RT enabled KVM available for Openstack? It would be helpful if anyone provides a link for the same? 2. Is Openstack using devstack preferable in production or it is meant only for the dev setup? 3. For RT KVM, Is hyper-threading preferred? 4. What are the minimum system requirements for openstack setup? (Currently, I am equipped with Quad-Core i7-8559U with 16 GB RAM, will this work for Openstack setup? My requirements are 3 VMs, out of which 2 VM will be having dedicated 3 CPU cores allocated(hyperthreading enabled) and host ubuntu and another VM will be running on the remaining 2 cores - isolcpu used to isolate cores). -- Thanks & Regards, Gokul G Kalal -------------- next part -------------- An HTML attachment was scrubbed... URL: From mariusz.karpiarz at vscaler.com Mon Dec 14 16:53:42 2020 From: mariusz.karpiarz at vscaler.com (Mariusz Karpiarz) Date: Mon, 14 Dec 2020 16:53:42 +0000 Subject: [CLOUDKITTY] - Prometheus metrics.yml sample? Message-ID: <10af774a-632b-e62d-6236-670575f64a3f@vscaler.com> Try this config: https://github.com/mkarpiarz/cloudkitty-playground/tree/prometheus-collector/kolla/config Mariusz Karpiarz From rafaelweingartner at gmail.com Mon Dec 14 20:12:05 2020 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 14 Dec 2020 17:12:05 -0300 Subject: [cloudkitty] December 18th meeting is canceled Message-ID: Hello everybody, As discussed in today's meeting [1], the 18th December meeting is canceled. Therefore, our next meeting will be on January 11th. Meanwhile, let's try to review and test as many patches as possible :) [1] http://eavesdrop.openstack.org/meetings/cloudkitty/2020/cloudkitty.2020-12-14-14.00.log.html Cheers, -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From haleyb.dev at gmail.com Mon Dec 14 20:54:40 2020 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 14 Dec 2020 15:54:40 -0500 Subject: unable to use/assign specific fixed IP - reserved_dhcp_port. In-Reply-To: <185ce2d6-a20e-2540-2c6f-dcd08d228d55@yahoo.co.uk> References: <185ce2d6-a20e-2540-2c6f-dcd08d228d55.ref@yahoo.co.uk> <185ce2d6-a20e-2540-2c6f-dcd08d228d55@yahoo.co.uk> Message-ID: On 12/14/20 1:02 PM, lejeczek wrote: > Hi guys. 
> > I'm try to spin up and instance: > > $ openstack server create midway --flavor C1.vss.tiny --volume midway > --nic net-id=fd8659cf-8723-4254-bf88-837b37a50a24,v4-fixed-ip=10.0.0.4 > Fixed IP address 10.0.0.4 is already in use on instance > reserved_dhcp_port. (HTTP 400) (Request-ID: > req-bf34b06c-d0b7-4798-8eeb-9f5bb87348e2) > > Having no admin access to the stack - how to troubleshoot and ideally > resolve it before I have to admins demanding action :) > many thanks, L. When the device ID is set to "reserved_dhcp_port" someone has done it manually, so you might be out of luck if you don't have admin rights to the subnet. And if there are already instances booted on it the DHCP IP change could cause some connectivity issues (e.g. DNS). On a related note, if you can create your own subnet, the best way to make sure an IP is not assigned is by specifying the allocation pool when you create it, for example: $ openstack subnet create --network private --subnet-range 10.0.0.0/24 --allocation-pool start=10.0.0.5,end=10.0.0.254 --gateway 10.0.0.1 private-subnet That way it shouldn't allocate 10.0.0.4 to an instance. Without looking into the code further I can't remember if the DHCP port has to follow the allocation pool restrictions though. -Brian From whayutin at redhat.com Mon Dec 14 21:05:11 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 14 Dec 2020 14:05:11 -0700 Subject: [tripleo][ci] current status Message-ID: Greetings, *Master*: OVB jobs are working \0/ again [1] Master promoted today, logged a few tempest failures and moved to skiplist *Victoria*: Promoted today OVB jobs all RED waiting [2] *Ussuri*: RED, we need to land 65077 757836 757821 OVB RED *Train*: Most of Upstream is OK [3], update and upgrade jobs need to move to nv imho until fixed [3] non-passing jobs should just be removed [4] *ALL: *We've only merged 17 patches in the last 24 hours, we're usually closer to 30-40+ per day. I'll keep an eye on it, it's only monday. weee [5] [1] https://review.rdoproject.org/zuul/builds?job_name=tripleo-ci-centos-8-ovb-1ctlr_1comp-featureset001&job_name=tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001&branch=master [2] https://review.opendev.org/c/openstack/tripleo-heat-templates/+/766797 [3] http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&var-influxdb_filter=branch%7C%3D%7Cstable%2Ftrain [4] https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-7-standalone-upgrade-train https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-7-containerized-undercloud-upgrades https://review.opendev.org/c/openstack/tripleo-ci/+/766621 [5] http://paste.openstack.org/show/801030/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Dec 14 21:14:08 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 14 Dec 2020 21:14:08 +0000 Subject: Openstack related quieries In-Reply-To: References: Message-ID: On Mon, 2020-12-14 at 11:29 +0530, Gokul Kalal wrote: > Hi All, here are the few questionnaires for which I am looking for answers. > It would be of great help if anybody provides their suggestions on the > below questions. > > 1. Is RT enabled KVM available for Openstack? It would be helpful if anyone > provides a link for the same? 
Yes, nova supports enabling realtime instances on KVM via libvirt as of the Mitaka release: https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/libvirt-real-time.html
This can be enabled via nova flavor extra_specs (a rough flavor sketch follows a little further down): https://docs.openstack.org/nova/latest/user/flavors.html#extra-specs-realtime-policy

> 2. Is Openstack using devstack preferable in production or it is meant only
> for the dev setup?

Devstack is intended solely for development and should not be used outside of a test environment; it does not directly support upgrades or other maintenance operations like host reboots :) In the distant past of OpenStack there were some who used devstack in production, but unless you are a developer working on OpenStack you should not use devstack.

> 3. For RT KVM, Is hyper-threading preferred?

Generally it is not, but it depends on your workload.

> 4. What are the minimum system requirements for openstack setup?

Our VMs that are available in the CI system have 8 cores, 8GB of RAM and 80GB of disk. I have run a 4 node test deployment with devstack on my laptop, which has 32GB of RAM and a quad core Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz.

> (Currently, I am equipped with Quad-Core i7-8559U with 16 GB RAM, will this
> work for Openstack setup? My requirements are 3 VMs, out of which 2 VM will
> be having dedicated 3 CPU cores allocated(hyperthreading enabled) and host
> ubuntu and another VM will be running on the remaining 2 cores - isolcpu
> used to isolate cores).

Sure, for a dev setup that should be fine. Depending on what services you want to run, you can run a controller/all-in-one node on 6GB of RAM (8G is recommended); compute nodes can run on 2-4G depending on how much space you want to keep for nested VMs. For a production deployment you will want something more substantial, but for development you can get by with a few Intel NUCs or old laptops. You could, with some work, even get OpenStack running on a few Raspberry Pi 4 boards, but the lack of hardware virtualisation support means the VMs you boot would be quite slow. If you are planning to work on projects other than nova, that might not be a concern; OpenStack scales down pretty well, so you don't need a datacenter to develop it. Personally I do most of my dev in 8G VMs on a home OpenStack deployment.

By the way, we have 80G of space on the CI VMs for logs and other reasons. You can install on an Ubuntu VM with 20G of disk or less in some cases; if you want cinder you will need more space for it to manage, and similarly for swift, but as I said the requirements tend to scale based on what you want to deploy and use.
>

From nrajaraman at unitedlayer.com Mon Dec 14 21:28:22 2020
From: nrajaraman at unitedlayer.com (Nagarjun Rajaraman)
Date: Mon, 14 Dec 2020 21:28:22 +0000
Subject: Is Microsoft Windows 2019, 2016 HyperV supported for nova ?
Message-ID:

Hi all,

Does anyone know if Microsoft Windows 2019 / 2016 Hyper-V is supported for OpenStack nova? I could only see support documentation for 2012 & 2012 R2; what are the steps needed to integrate a Hyper-V host that runs Windows Server 2019?

Best Regards,
Arjun
-------------- next part --------------
An HTML attachment was scrubbed...
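Coming back to the realtime KVM question answered above, a minimal flavor sketch of the extra specs involved could look like the following (the flavor name and sizes are made up, the values are untested here, and the realtime mask has to match your host CPU layout; the flavors document linked above has the exact semantics):

$ openstack flavor create rt.small --vcpus 4 --ram 4096 --disk 20
$ openstack flavor set rt.small \
    --property hw:cpu_policy=dedicated \
    --property hw:cpu_realtime=yes \
    --property hw:cpu_realtime_mask=^0-1

hw:cpu_policy=dedicated pins the guest vCPUs to host cores, and the mask keeps vCPUs 0-1 as ordinary housekeeping cores while the remaining vCPUs get the realtime scheduling policy.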
URL: From fungi at yuggoth.org Mon Dec 14 21:37:52 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 14 Dec 2020 21:37:52 +0000 Subject: [stable][requirements][neutron] Capping pip in stable branches or not In-Reply-To: <20201214095453.5wvb7xoucifagudl@lyarwood-laptop.usersys.redhat.com> References: <20201211231236.6moz4evzigvctwsh@yuggoth.org> <3326204.V25eIC5XRa@whitebase.usersys.redhat.com> <20201213163338.ynkd7mrxqbok5eos@yuggoth.org> <20201214095453.5wvb7xoucifagudl@lyarwood-laptop.usersys.redhat.com> Message-ID: <20201214213751.p57jt73hhqvgg4fo@yuggoth.org> On 2020-12-14 09:54:53 +0000 (+0000), Lee Yarwood wrote: > On 13-12-20 16:33:39, Jeremy Stanley wrote: [...] > > Tox doesn't reuse one virtualenv for multiple testenv > > definitions, it creates a separate one for each, so for > > example... > > That isn't technically true within Nova, multiple tox envs use the > {toxworkdir}/shared envdir for the virtualenv. > > mypy, pep8, fast8, genconfig, genpolicy, cover, debug and bandit. [...] Neat, I suppose that's not a terrible workaround for some of this, though I wonder if we'll see it cause problems over time with the new dep solver in pip if some of those tools grow any conflicting transitive dependencies. > > I can't even run the pep8 testenv locally because to do that I > > apparently need a Python package named zVMCloudConnector which wants > > root access to create files like > > /lib/systemd/system/sdkserver.service and > > /etc/sudoers.d/sudoers-zvmsdk and /var/lib/zvmsdk/* and > > /etc/zvmsdk/* in my system. WHAT?!? Do nova's developers actually > > ever run any of this themselves? > > ... > > Which version of that package is the pep8 env pulling in for you? > > I don't see any such issues with zVMCloudConnector==1.4.1 locally on > Fedora 33, tox 3.19.0, pip 20.2.2 etc. > > Would you mind writing up a launchpad bug for this? [...] I think I've worked out why you're not seeing it. My tox is installed with the tox-venv plugin so that it will use the venv module instead of virtualenv, and that doesn't seed a copy of the wheel library into the testenv by default. Apparently if you try to install zVMCloudConnector via `setup.py install` instead of making a wheel (which is what happens by default if the wheel library is absent), this is the result. For the curious, a simpler reproducer is: rm -rf ~/.cache/pip # in case you have a wheel cached for it python3 -m venv foo # simple venv without the wheel module foo/bin/pip install zVMCloudConnector Install wheel into the venv first and then zVMCloudConnector installs cleanly, and indeed if I test with just plain tox (no tox-venv plugin installed) it ends up getting a wheel for zVMCloudConnector so doesn't hit the root-only build steps from its sdist. A bit of research indicates tox-venv was deprecated earlier this year once virtualenv gained the ability to delegate creation to the venv module itself, and even if you set VIRTUALENV_CREATOR=venv in the setenv list in tox.ini or passenv it from the calling environment you're not going to run into this because venvs as built from virtualenv get wheel seeded in them by default. Now that I've gotten to the bottom of this, given it's a bit of a corner case, I'm on the fence about filing a bug about it against python-zvm-sdk in LP and likely won't unless folks actually think it'll be useful to relate. > Gibi, should we track all of this in a few launchpad bugs for Nova? 
To be clear, I wasn't trying to single out nova, it was simply a convenient example of where the idea of using a single test-requirements.txt file may have tipped over from convenience into inconvenience. (And then the zVMCloudConnector tangent of course.) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mnaser at vexxhost.com Mon Dec 14 21:41:14 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 14 Dec 2020 16:41:14 -0500 Subject: [tc] weekly update Message-ID: Hi everyone, Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # Patches ## Open Reviews - Deprecate openstack-ansible-galera_client role https://review.opendev.org/c/openstack/governance/+/765784 - Add Resolution of TC stance on the OpenStackClient https://review.opendev.org/c/openstack/governance/+/759904 - Improve check-review-status https://review.opendev.org/c/openstack/governance/+/766249 - Clarify impact on releases for SIGs https://review.opendev.org/c/openstack/governance/+/752699 ## General Changes - Add checks for repo not to be in both project and legacy data https://review.opendev.org/c/openstack/governance/+/766642 - Remove Searchlight project team https://review.opendev.org/c/openstack/governance/+/764530 - Remove Qinling project team https://review.opendev.org/c/openstack/governance/+/764523 ## Project Updates - Revive os_monasca https://review.opendev.org/c/openstack/governance/+/765800 - Add Magpie charm to OpenStack charms https://review.opendev.org/c/openstack/governance/+/762820 ## Other Reminders - Our next [TC] Weekly meeting is scheduled on December 17 at 1500 UTC. If you would like to add topics for discussion, please go to https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting and fill out your suggestions by Wednesday, December 16, at 2100 UTC. Thanks for reading! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. From ekuvaja at redhat.com Mon Dec 14 22:09:42 2020 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Mon, 14 Dec 2020 22:09:42 +0000 Subject: [all] Dropping lower constraints testing (WAS: Re: [stable][requirements][neutron] Capping pip in stable branches or not) In-Reply-To: References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> <17652961b73.11c186083102611.2301739328973440930@ghanshyammann.com> Message-ID: On Fri, Dec 11, 2020 at 9:10 PM Goutham Pacha Ravi wrote: > Hi, > > I hope you won't mind me shifting this discussion to [all] - many projects > have had to make changes due to the dependency resolver catching some of > our uncaught lies. > In manila, i've pushed up three changes to fix the CI on the main, > stable/victoria and stable/ussuri [1] branches. I used fungi's method of > installing things and playing whack-a-mole [2] and Brain > Rosmaita's approach [3] of taking the opportunity to raise the minimum > required packages for Wallaby. However, this all seems kludgy maintenance - > and possibly no-one is benefitting from the effort we're putting into this > as called out. > > Can more distributors and deployment tooling folks comment? 
> > [1] > https://review.opendev.org/q/project:openstack/manila+topic:update-requirements > > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019285.html > [3] https://review.opendev.org/c/openstack/cinder/+/766085 > > > > On Fri, Dec 11, 2020 at 12:51 PM Sorin Sbarnea > wrote: > >> Jeremy nailed it very well. >> >> Tripleo already removed lower-constraints from most places (some changes >> may be still waiting to be gated). >> >> Regarding decoupling linting from test-requirements: yes! This was >> already done by some when conflicts appeared. For old branches I personally >> do not care much even if maintainers decide to disable linting, their main >> benefit is on main branches. >> >> On Fri, 11 Dec 2020 at 18:14, Radosław Piliszek < >> radoslaw.piliszek at gmail.com> wrote: >> >>> On Fri, Dec 11, 2020 at 5:16 PM Ghanshyam Mann >>> wrote: >>> > >>> > Maintaining it up to date is not so worth compare to the effort it is >>> taking. I will also suggest to >>> > remove this. >>> > >>> >>> Kolla dropped lower-constraints from all the branches. >>> >>> -yoctozepto >>> >>> -- >> -- >> /sorin >> > Hello all, While I was frustrated to the point of wanting to throw away the check-requirements job to get around what I thought was blocking my efforts to fix the lower-constraints job (due to my misreading what actually failed in the check-requirements job), I think scrapping the lower-constraints job would be very counterproductive. We in Glance have had our hands full for the past few cycles, and assuming the lower-constraints job was actually working as intended has led us to neglect some of our requirements housekeeping quite a bit. If it had not broken now, we likely would have neglected it for quite a few cycles more. While fixing the said job I had to fix the minimums in our requirements.txt too. While I'm not sure maintaining the lower-constraints.txt has a direct benefit for many, it actually keeps us honest with our requirements and prevents stuff breaking down the line. (Expecting that the lower-constraints job actually works from now on and highlights when we start breaking our dependency chain.) Yes, it's a hideous task to get up to date once you have neglected it for a long time, but I see it as a very valuable tool to highlight that I should pay more attention to the requirements and what versions of dependencies we claim to work with. - jokke -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Mon Dec 14 22:55:20 2020 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 15 Dec 2020 09:55:20 +1100 Subject: [all][SDK][zuul][DIB] CentOS-8 In-Reply-To: <80E31302-CDB1-4ABD-93D7-29D791933875@gmail.com> References: <80E31302-CDB1-4ABD-93D7-29D791933875@gmail.com> Message-ID: <20201214225520.GA1555154@fedora19.localdomain> On Mon, Dec 14, 2020 at 10:43:24AM +0100, Artem Goncharov wrote: > Over around a week we have patches in SDK being blocked by > functional test of the nodepool ... trying to build CentOS-8 image > blocked by > https://review.opendev.org/c/openstack/diskimage-builder/+/765963 Sorry about this; it's mostly a confluence of this change getting stuck in the gate due to a circular dependency on [1] and the people who usually debug this type of thing also being the same people who have been distracted by the production Gerrit work recently. Thankfully some others stepped up and squashed the changes, which should fix things.
> Normally I would say OK, let’s merge DIB fix and everything is fine, > but due to a recent announcement of RedHat (IBM) to discontinue > CentOS (non stream) I wonder, whether the DIB change as such is > useful at all. > At least for the moment I see a perhaps more appropriate fix in > nodepool to build some other image (or Fedora or at least to use > CentOS stream). DIB should be fixed as well, that’s clear, but maybe > differently. The dib-nodepool-functional-openstack- tests in general should be kept in sync with what we're building as OpenDev production images. They're the full end-to-end test that (hopefully) keeps production image builds stable. So I would not want to remove the CentOS 8 tests until we don't need to build that in OpenDev any more. I guess this probably has to happen, but I think we also have about a 1 year runway (Dec 2021) to move to streams so we don't have to take drastic action. [1] https://review.opendev.org/c/openstack/diskimage-builder/+/766447 From rosmaita.fossdev at gmail.com Tue Dec 15 04:53:39 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 14 Dec 2020 23:53:39 -0500 Subject: [oslo][nova][glance][cinder] move cursive library to oslo? Message-ID: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> Hello Oslo Team, Nova, Glance, and Cinder all make use of the 'cursive' library for image-signature-validation. The library is currently in the 'x' namespace: https://opendev.org/x/cursive The current cursive-core team entirely consists of members of the Johns Hopkins University Applied Physics Laboratory, which ended its involvement with OpenStack in July 2018 [0]. This leaves us in a position where three of the major openstack projects depend on a library to which no one currently around can approve code changes. I'd like to propose that the cursive library be moved back to the 'openstack' namespace and be put under Oslo governance with the consuming teams sharing the maintenance of the library. I don't think this will make much new work for the Oslo team--the library has been very stable and hasn't changed in over 2 years--but it will ensure that should any bugfixes be required, there will be oslo team members who can approve the patches. Thanks for thinking this over, brian [0] http://lists.openstack.org/pipermail/openstack-dev/2018-July/131978.html From amotoki at gmail.com Tue Dec 15 05:58:00 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 15 Dec 2020 14:58:00 +0900 Subject: [all][stable] rocky tempest based jobs are broken now Message-ID: Hi, All jobs based on devstack-tempest in stable/rocky are now broken. The cause is that stackviz requirements are not compatible with python 3.5 (example [1]) gmann is testing a fix in stackviz [2]. It looks like the change itself works well but we need to handle another failure in the nodejs job. gmann and I will follow-up the fix and keep you updated. 
Thanks, Akihiro Motoki (amotoki) [1] https://zuul.opendev.org/t/openstack/build/acf6ccdf1b304cb29ab41baa0d80ec55 [2] https://review.opendev.org/c/openstack/stackviz/+/767063 From malik.obaid at rapidcompute.com Tue Dec 15 06:42:00 2020 From: malik.obaid at rapidcompute.com (Malik Obaid) Date: Tue, 15 Dec 2020 11:42:00 +0500 Subject: [ovn][neutron] OVN Production Installation Guide Message-ID: <000001d6d2ad$649dd9d0$2dd98d70$@rapidcompute.com> Hi, I was wondering if there is any comprehensive OVN production installation guide available. If anyone who has implemented OVN for OpenStack Neutron in an Ubuntu-based production environment is willing to share their guide here, that would be a big help. Waiting for your response. Thank you and best regards, Malik Obaid -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Tue Dec 15 10:29:25 2020 From: hberaud at redhat.com (hberaud) Date: Tue, 15 Dec 2020 11:29:25 +0100 Subject: [oslo] No meeting during the next 2 weeks Message-ID: Hello Osloers, I'm on PTO during the next 2 weeks so I'll be AFK. I won't be running the next 2 meetings. If someone else from the Oslo team wants to run them, please feel free. Let me wish you happy end-of-the-year parties! Hervé -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Tue Dec 15 10:41:52 2020 From: hberaud at redhat.com (hberaud) Date: Tue, 15 Dec 2020 11:41:52 +0100 Subject: [oslo][nova][glance][cinder] move cursive library to oslo? In-Reply-To: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> References: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> Message-ID: +1 from my side. As discussed yesterday with Luigi (tosky) it makes sense to me to host that under the Oslo scope; however, I would appreciate feedback from other Oslo team members before doing anything. Even if this project seems stable, we still need to continue to maintain the current code base to keep the code up to date and compatible with the next Python versions. Concerning the "release" point of view of this topic, if this project is stable enough I think we can adopt the release independent model [1] directly. It would help us to reduce the maintenance related to stable branches (backport fixes etc...). Do you have any opinion on this? Hervé [1] https://releases.openstack.org/reference/release_models.html#independent Le mar. 15 déc. 2020 à 06:01, Brian Rosmaita a écrit : > Hello Oslo Team, > > Nova, Glance, and Cinder all make use of the 'cursive' library for > image-signature-validation.
The library is currently in the 'x' > namespace: https://opendev.org/x/cursive > > The current cursive-core team entirely consists of members of the Johns > Hopkins University Applied Physics Laboratory, which ended its > involvement with OpenStack in July 2018 [0]. > > This leaves us in a position where three of the major openstack projects > depend on a library to which no one currently around can approve code > changes. > > I'd like to propose that the cursive library be moved back to the > 'openstack' namespace and be put under Oslo governance with the > consuming teams sharing the maintenance of the library. I don't think > this will make much new work for the Oslo team--the library has been > very stable and hasn't changed in over 2 years--but it will ensure that > should any bugfixes be required, there will be oslo team members who can > approve the patches. > > Thanks for thinking this over, > brian > > > [0] > http://lists.openstack.org/pipermail/openstack-dev/2018-July/131978.html > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Dec 15 12:37:45 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 15 Dec 2020 12:37:45 +0000 Subject: [oslo][nova][glance][cinder] move cursive library to oslo? In-Reply-To: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> References: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> Message-ID: <2f4de69d8930413bbb5b2156d9984e4f3f72f6af.camel@redhat.com> On Mon, 2020-12-14 at 23:53 -0500, Brian Rosmaita wrote: > Hello Oslo Team, > > Nova, Glance, and Cinder all make use of the 'cursive' library for > image-signature-validation. The library is currently in the 'x' > namespace: https://opendev.org/x/cursive > > The current cursive-core team entirely consists of members of the Johns > Hopkins University Applied Physics Laboratory, which ended its > involvement with OpenStack in July 2018 [0]. > > This leaves us in a position where three of the major openstack projects > depend on a library to which no one currently around can approve code > changes. > > I'd like to propose that the cursive library be moved back to the > 'openstack' namespace and be put under Oslo governance with the > consuming teams sharing the maintenance of the library. I don't think > this will make much new work for the Oslo team--the library has been > very stable and hasn't changed in over 2 years--but it will ensure that > should any bugfixes be required, there will be oslo team members who can > approve the patches. > > Thanks for thinking this over, > brian No issues from my perspective, fwiw. 
Stephen > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-July/131978.html > > From rosmaita.fossdev at gmail.com Tue Dec 15 13:25:22 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 15 Dec 2020 08:25:22 -0500 Subject: [oslo][nova][glance][cinder] move cursive library to oslo? In-Reply-To: References: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> Message-ID: On 12/15/20 5:41 AM, hberaud wrote: > +1 from my side. > > As discussed yesterday with Luigi (tosky) it makes sense to me to host > that under the Oslo scope, however I would appreciate to get feedback > from other Oslo team members before doing anything. > > Even if this project seems stable we still need to continue to maintain > the current code base to keep the code up-to-date and compatible with > the next Python versions. > > Concerning the "release" point of view of this topic, if this project is > stable enough I think we can adopt directly the release independent > model [1]. It would help us to reduce the maintenance related to stable > branches (backport fixes etc...). Do you have any opinion on this? I agree that the release independent model makes sense for this library. > > Hervé > > [1] > https://releases.openstack.org/reference/release_models.html#independent > > > Le mar. 15 déc. 2020 à 06:01, Brian Rosmaita > a écrit : > > Hello Oslo Team, > > Nova, Glance, and Cinder all make use of the 'cursive' library for > image-signature-validation.  The library is currently in the 'x' > namespace: https://opendev.org/x/cursive > > The current cursive-core team entirely consists of members of the Johns > Hopkins University Applied Physics Laboratory, which ended its > involvement with OpenStack in July 2018 [0]. > > This leaves us in a position where three of the major openstack > projects > depend on a library to which no one currently around can approve code > changes. > > I'd like to propose that the cursive library be moved back to the > 'openstack' namespace and be put under Oslo governance with the > consuming teams sharing the maintenance of the library.  I don't think > this will make much new work for the Oslo team--the library has been > very stable and hasn't changed in over 2 years--but it will ensure that > should any bugfixes be required, there will be oslo team members who > can > approve the patches. 
> > Thanks for thinking this over, > brian > > > [0] > http://lists.openstack.org/pipermail/openstack-dev/2018-July/131978.html > > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From lpetrut at cloudbasesolutions.com Tue Dec 15 13:54:03 2020 From: lpetrut at cloudbasesolutions.com (Lucian Petrut) Date: Tue, 15 Dec 2020 13:54:03 +0000 Subject: Is Microsoft Windows 2019, 2016 HyperV supported for nova ? In-Reply-To: References: Message-ID: Hi, Windows Server 2016 and 2019 are supported as well. Here are the Nova driver docs: https://compute-hyperv.readthedocs.io/en/latest/ MSI installers: Nova, neutron and ceilometer agents: https://cloudbase.it/openstack-hyperv-driver/#download cinder-volume, cinder-backup: https://cloudbase.it/openstack-windows-storage/#download Regards, Lucian Petrut From: Nagarjun Rajaraman Sent: Monday, December 14, 2020 11:29 PM To: openstack-discuss at lists.openstack.org Subject: Is Microsoft Windows 2019, 2016 HyperV supported for nova ? Hi all , Does anyone know if Microsoft Windows 2019, 2016 HyperV supported for openstack nova ? Could only see supported documentation for 2012 & 2012 R2 , what are the steps are need to follow to integrate Hyper V host that runs windows server 2019 . Best Regards , Arjun -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Tue Dec 15 14:40:01 2020 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Tue, 15 Dec 2020 14:40:01 +0000 Subject: New Openstack Deployment questions - how will we respond to this change? Message-ID: <1b2962ff5a8e4d5da7568ab216576b7e@NCEMEXGP009.CORP.CHARTERCOM.com> The purpose of my Openstack Deployment email was not only to rant about the decision to kill Centos 8. I want to make sure that we don't blindly go down the easy route of pretending that Centos Stream is an acceptable replacement for Centos. If we do this, we would do a disservice to the community and to everyone who depends on it. The reality is that, at the end of 2021, Centos will no longer exist as a viable production operating system, and everyone who was using it will have to adjust. For companies that already have Centos mirror repositories setup and people managing them, the increased load imposed by Stream will be incremental, but I suspect that the vast majority of companies do not have that infrastructure already setup, and we will need to carefully consider whether our needs are better met by going forward with Stream or switching to another OS. 
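To give a rough idea of the kind of per-site infrastructure that implies (the repo ids, paths and date below are purely illustrative, not a tested recipe), every operator would end up maintaining dated, locally validated snapshots of the Stream repos along these lines, and only pointing production clusters at a snapshot after it passes their own testing:

  # illustrative only: snapshot the CentOS Stream repos into a dated local mirror
  dnf reposync --repoid=baseos --repoid=appstream \
      --download-metadata --newest-only \
      --download-path=/srv/mirror/centos8-stream/2021-01-15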
If the OpenStack community decides to continue building on Stream, we should make it crystal clear to operators and users that Stream is not a production-ready OS and that our Stream OpenStack implementation is suitable for testing and development use only, unless they devote substantial resources to mirroring and testing Stream to insulate production clusters from the instability that it will introduce. If we do decide to continue building on Stream, how will we respond to the constant stream of bug reports that will result when Stream changes cause OpenStack components to fail? Many component teams are already understaffed. Will we have the bandwidth to constantly chase changes? Should we replace our Centos builds with RHEL, or with Rocky? Does the community have (or can we find) the resources to do the work of maintaining stable Stream mirrors and only building OpenStack on our stable versions of Stream? Or would it be better to drop Centos support and focus our efforts on operating systems that have not implemented unilateral changes that harm the community? -----Original Message----- From: Braden, Albert Sent: Friday, December 11, 2020 9:10 AM To: openstack-discuss at lists.openstack.org Subject: RE: [EXTERNAL] Re: New Openstack Deployment questions Centos Stream is fine for those who were using Centos for testing or development. It's not at all suitable for production, because rolling release doesn't provide the stability that production clusters need. Switching to Centos Stream would require significant resources to be expended to setup local mirrors and then perform exhaustive testing before each upgrade. The old Centos did this work for us; Centos was built on RHEL source that had already been tested by paying customers, and bugs fixed with the urgency that paying customers require. Adding an upstream build (Stream) to the existing downstream (Centos 8.x) was fine, but I'm disappointed by the decision to kill Centos 8. I don't want to wax eloquent about how we were betrayed; suffice it to say that even for a free operating system, suddenly changing the EOL from 2029 to 2021 is unprecedented, and places significant burdens on companies that are using Centos in production. I can understand why IBM/RH made this decision, but there's no denying that it puts production Centos users in a difficult position. I hope that Rocky Linux [1], under Gregory Kurtzer (founder of the Centos project) will turn out to be a useful alternative. {1} https://github.com/rocky-linux/rocky -----Original Message----- From: Luigi Toscano Sent: Thursday, December 10, 2020 3:51 PM To: Thomas Wakefield ; openstack-discuss at lists.openstack.org Cc: Satish Patel Subject: [EXTERNAL] Re: New Openstack Deployment questions CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On Thursday, 10 December 2020 15:27:40 CET Satish Patel wrote: > I just built a new openstack using openstack-ansible on CentOS 8.2 > last month before news broke out. I have no choice so i am going to > stick with CentOS. > > What is the future of RDO and EPEL repo if centOS going away. ? Continue as before on CentOS Stream. -- Luigi I apologize for the nonsense below. So far I have not been able to stop it from being attached to my external emails. I'm working on it. 
E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From akekane at redhat.com Tue Dec 15 14:48:23 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 15 Dec 2020 20:18:23 +0530 Subject: [oslo][nova][glance][cinder] move cursive library to oslo? In-Reply-To: References: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> Message-ID: +1 from my side. Thank you Brian for bringing this up!!! Thanks & Best Regards, Abhishek Kekane On Tue, Dec 15, 2020 at 6:59 PM Brian Rosmaita wrote: > On 12/15/20 5:41 AM, hberaud wrote: > > +1 from my side. > > > > As discussed yesterday with Luigi (tosky) it makes sense to me to host > > that under the Oslo scope, however I would appreciate to get feedback > > from other Oslo team members before doing anything. > > > > Even if this project seems stable we still need to continue to maintain > > the current code base to keep the code up-to-date and compatible with > > the next Python versions. > > > > Concerning the "release" point of view of this topic, if this project is > > stable enough I think we can adopt directly the release independent > > model [1]. It would help us to reduce the maintenance related to stable > > branches (backport fixes etc...). Do you have any opinion on this? > > I agree that the release independent model makes sense for this library. > > > > > Hervé > > > > [1] > > https://releases.openstack.org/reference/release_models.html#independent > > < > https://releases.openstack.org/reference/release_models.html#independent> > > > > Le mar. 15 déc. 2020 à 06:01, Brian Rosmaita > > a écrit : > > > > Hello Oslo Team, > > > > Nova, Glance, and Cinder all make use of the 'cursive' library for > > image-signature-validation. The library is currently in the 'x' > > namespace: https://opendev.org/x/cursive < > https://opendev.org/x/cursive> > > > > The current cursive-core team entirely consists of members of the > Johns > > Hopkins University Applied Physics Laboratory, which ended its > > involvement with OpenStack in July 2018 [0]. > > > > This leaves us in a position where three of the major openstack > > projects > > depend on a library to which no one currently around can approve code > > changes. > > > > I'd like to propose that the cursive library be moved back to the > > 'openstack' namespace and be put under Oslo governance with the > > consuming teams sharing the maintenance of the library. I don't > think > > this will make much new work for the Oslo team--the library has been > > very stable and hasn't changed in over 2 years--but it will ensure > that > > should any bugfixes be required, there will be oslo team members who > > can > > approve the patches. 
> > > > Thanks for thinking this over, > > brian > > > > > > [0] > > > http://lists.openstack.org/pipermail/openstack-dev/2018-July/131978.html > > < > http://lists.openstack.org/pipermail/openstack-dev/2018-July/131978.html> > > > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Dec 15 14:53:54 2020 From: amy at demarco.com (Amy Marrich) Date: Tue, 15 Dec 2020 08:53:54 -0600 Subject: [all][interop] Reforming the refstack maintainers team In-Reply-To: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> References: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> Message-ID: Let me know if I can help. Thanks, Amy (spotz) On Fri, Dec 11, 2020 at 1:25 PM Ghanshyam Mann wrote: > Hello Everyone, > > As Goutham mentioned in a separate ML thread[2] that there is no active > maintainer for refstack repo > which we discussed in today's interop meeting[1]. We had a few volunteers > who can help to maintain the > refstack and other interop repo which is good news. > > I would like to call for more volunteers (new or existing ones), if you > are interested to help please do reply > to this email. The role is to maintain the source code of the below repos. > I will propose the ACL changes in infra sometime > next Friday (18th dec) or so. > > For easy maintenance, we thought of merging the below repo core group into > a single group called 'refstack-core' > > - openstack/python-tempestconf > - openstack/refstack > - openstack/refstack-client > - x/ansible-role-refstack-client (moving to osf/ via > https://review.opendev.org/765787) > > Current Volunteers: > - martin (mkopec at redhat.com) > - gouthamr (gouthampravi at gmail.com) > - gmann (gmann at ghanshyammann.com) > - Vida (vhariria at redhat.com) > > - interop-core (we will add this group also which has interop WG chairs so > that it will be easy to maintain in the future changes) > > NOTE: there is no change in the 'interop' repo group which has interop > guidelines and doc etc. > > [1] https://etherpad.opendev.org/p/interop > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019263.html > > -gmann > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Tue Dec 15 15:00:46 2020 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 15 Dec 2020 08:00:46 -0700 Subject: New Openstack Deployment questions - how will we respond to this change? 
In-Reply-To: <1b2962ff5a8e4d5da7568ab216576b7e@NCEMEXGP009.CORP.CHARTERCOM.com> References: <1b2962ff5a8e4d5da7568ab216576b7e@NCEMEXGP009.CORP.CHARTERCOM.com> Message-ID: On Tue, Dec 15, 2020 at 7:46 AM Braden, Albert wrote: > > The purpose of my Openstack Deployment email was not only to rant about the decision to kill Centos 8. I want to make sure that we don't blindly go down the easy route of pretending that Centos Stream is an acceptable replacement for Centos. If we do this, we would do a disservice to the community and to everyone who depends on it. The reality is that, at the end of 2021, Centos will no longer exist as a viable production operating system, and everyone who was using it will have to adjust. For companies that already have Centos mirror repositories setup and people managing them, the increased load imposed by Stream will be incremental, but I suspect that the vast majority of companies do not have that infrastructure already setup, and we will need to carefully consider whether our needs are better met by going forward with Stream or switching to another OS. > >From an OpenStack development perspective, centos8 stream is an acceptable centos replacement because it finally allows us to get a head of "stable" versions and get the issues resolved early and get fixes committed upstream faster. It may not be an acceptable replacement for operators, but that's up to the operator to determine. An operator can delay updates from stream and implement their own rolling updates as necessary via CI/CD (via repo mirroring) which previously was not the case when CentOS dropped exposing specific minor point releases. Since the inception of 8, there hasn't been an 8.1 or 8.2 you could pin to. You got rolling updates whenever 8.2 hit, you got it without being able to go back. > If the OpenStack community decides to continue building on Stream, we should make it crystal clear to operators and users that Stream is not a production-ready OS and that our Stream OpenStack implementation is suitable for testing and development use only, unless they devote substantial resources to mirroring and testing Stream to insulate production clusters from the instability that it will introduce. > The determination of a 'production-ready' OS is likely not something that OpenStack should be expressing. IMHO, that decision is an end user decision based on their comfort for the risk involved. From an RDO/TirpleO perspective, we will be aligning on CentOS Stream going forward so we will be addressing issues as they come up. We will likely need to improve our documentation around supported module versions, etc. > If we do decide to continue building on Stream, how will we respond to the constant stream of bug reports that will result when Stream changes cause OpenStack components to fail? Many component teams are already understaffed. Will we have the bandwidth to constantly chase changes? Should we replace our Centos builds with RHEL, or with Rocky? Does the community have (or can we find) the resources to do the work of maintaining stable Stream mirrors and only building OpenStack on our stable versions of Stream? Or would it be better to drop Centos support and focus our efforts on operating systems that have not implemented unilateral changes that harm the community? > TBH I don't think things change. We've always had to respond to breaking versions even with CentOS. We just hit a fair number of issues with 8.3 being released last week. 
I think this actually reduces the impact because we're likely to get individual failures that can be identified/addressed instead of large sweeping changes that we get when there's an 8.2 -> 8.3 transition. > -----Original Message----- > From: Braden, Albert > Sent: Friday, December 11, 2020 9:10 AM > To: openstack-discuss at lists.openstack.org > Subject: RE: [EXTERNAL] Re: New Openstack Deployment questions > > Centos Stream is fine for those who were using Centos for testing or development. It's not at all suitable for production, because rolling release doesn't provide the stability that production clusters need. Switching to Centos Stream would require significant resources to be expended to setup local mirrors and then perform exhaustive testing before each upgrade. The old Centos did this work for us; Centos was built on RHEL source that had already been tested by paying customers, and bugs fixed with the urgency that paying customers require. > > Adding an upstream build (Stream) to the existing downstream (Centos 8.x) was fine, but I'm disappointed by the decision to kill Centos 8. I don't want to wax eloquent about how we were betrayed; suffice it to say that even for a free operating system, suddenly changing the EOL from 2029 to 2021 is unprecedented, and places significant burdens on companies that are using Centos in production. I can understand why IBM/RH made this decision, but there's no denying that it puts production Centos users in a difficult position. > > I hope that Rocky Linux [1], under Gregory Kurtzer (founder of the Centos project) will turn out to be a useful alternative. > > {1} https://github.com/rocky-linux/rocky > > -----Original Message----- > From: Luigi Toscano > Sent: Thursday, December 10, 2020 3:51 PM > To: Thomas Wakefield ; openstack-discuss at lists.openstack.org > Cc: Satish Patel > Subject: [EXTERNAL] Re: New Openstack Deployment questions > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > > On Thursday, 10 December 2020 15:27:40 CET Satish Patel wrote: > > I just built a new openstack using openstack-ansible on CentOS 8.2 > > last month before news broke out. I have no choice so i am going to > > stick with CentOS. > > > > What is the future of RDO and EPEL repo if centOS going away. ? > > Continue as before on CentOS Stream. > > > -- > Luigi > > I apologize for the nonsense below. So far I have not been able to stop it from being attached to my external emails. I'm working on it. > > > > > > > E-MAIL CONFIDENTIALITY NOTICE: > The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. 
> > From kklimonda at syntaxhighlighted.com Tue Dec 15 15:11:25 2020 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Tue, 15 Dec 2020 16:11:25 +0100 Subject: =?UTF-8?Q?[magnum]_[neutron]_[ovn]_No_inter-node_pod-to-pod_communicatio?= =?UTF-8?Q?n_due_to_missing_ACLs_in_OVN?= Message-ID: <0aadfbb2-8a46-4a75-bd75-4c9e2c5cd463@www.fastmail.com> Hi, This email is a follow-up to a discussion I've openened on ovs-discuss ML[1] regarding lack of TCP/UDP connectivity between pods deployed on magnum-managed k8s cluster with calico CNI and IPIP tunneling disabled (calico_ipv4pool_ipip label set to a default value of Off). As a short introduction, during magnum testing in ussuri deployment with ml2/ovn neutron driver I've noticed lack of communication between pods deployed on different nodes as part of magnum deployment with calico configured to *not* encapsulate traffic in IPIP tunnel, but route it directly between nodes. In theory, magnum configures adds defined pod network to k8s nodes ports' allowed_address_pairs[2] and then security group is created allowing for ICMP and TCP/UDP traffic between ports belonging to that security group[3]. This doesn't work with ml2/ovn as TCP/UDP traffic between IP addresses in pod network is not matching ACLs defined in OVN. I can't verify this behaviour under ml2/ovs for the next couple of weeks, as I'm taking them off for holidays, but perhaps someone knows if that specific usecase (security group rules with remote groups used with allowed address pairs) is supposed to be working, or should magnum use pod network cidr to allow traffic between nodes instead. [1] https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html [2] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml [3] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml#L1038 -- Best Regards, - Chris From skaplons at redhat.com Tue Dec 15 15:39:55 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 15 Dec 2020 16:39:55 +0100 Subject: [magnum] [neutron] [ovn] No inter-node pod-to-pod communication due to missing ACLs in OVN In-Reply-To: <0aadfbb2-8a46-4a75-bd75-4c9e2c5cd463@www.fastmail.com> References: <0aadfbb2-8a46-4a75-bd75-4c9e2c5cd463@www.fastmail.com> Message-ID: <20201215153955.mkxlg72qskxihb2d@p1.localdomain> Hi, On Tue, Dec 15, 2020 at 04:11:25PM +0100, Krzysztof Klimonda wrote: > Hi, > > This email is a follow-up to a discussion I've openened on ovs-discuss ML[1] regarding lack of TCP/UDP connectivity between pods deployed on magnum-managed k8s cluster with calico CNI and IPIP tunneling disabled (calico_ipv4pool_ipip label set to a default value of Off). > > As a short introduction, during magnum testing in ussuri deployment with ml2/ovn neutron driver I've noticed lack of communication between pods deployed on different nodes as part of magnum deployment with calico configured to *not* encapsulate traffic in IPIP tunnel, but route it directly between nodes. In theory, magnum configures adds defined pod network to k8s nodes ports' allowed_address_pairs[2] and then security group is created allowing for ICMP and TCP/UDP traffic between ports belonging to that security group[3]. This doesn't work with ml2/ovn as TCP/UDP traffic between IP addresses in pod network is not matching ACLs defined in OVN. 
> > I can't verify this behaviour under ml2/ovs for the next couple of weeks, as I'm taking them off for holidays, but perhaps someone knows if that specific usecase (security group rules with remote groups used with allowed address pairs) is supposed to be working, or should magnum use pod network cidr to allow traffic between nodes instead. Security group rules with remote groups should work with allowed address pairs for ML2/OVS. Because of that we even have a note in our docs that you shouldn't add e.g. 0.0.0.0/0 as an allowed address pair for one port, as it would effectively open all your traffic to all your ports which are using the same SG. On the other hand, we have known issues with the scalability of security groups which use remote group ids as a reference in ML2/OVS. If you have many ports which are using such a group, every time a new port is added all the other ports have to be updated to add the new IP address to the ipset (or OF rule), and that may take a long time. So using e.g. CIDRs in SG rules works better for sure. > > [1] https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html > [2] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml > [3] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml#L1038 > > -- > Best Regards, > - Chris > -- Slawek Kaplonski Principal Software Engineer Red Hat From whayutin at redhat.com Tue Dec 15 15:55:20 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 15 Dec 2020 08:55:20 -0700 Subject: [tripleo][ci] update - current status In-Reply-To: References: Message-ID: On Mon, Dec 14, 2020 at 2:05 PM Wesley Hayutin wrote: > Greetings, > > *Master*: > OVB jobs are working \0/ again [1] > Master promoted today, logged a few tempest failures and moved to skiplist > > *Victoria*: > Promoted today > OVB jobs all RED waiting [2] > > *Ussuri*: > RED, we need to land 65077 757836 757821 > 65077 757836 757821 all merged last night. A big thank you to fungi, a.k.a. Jeremy Stanley, for getting the patches prioritized into the gate queue!!! OVB RED > > *Train*: > Most of Upstream is OK [3], update and upgrade jobs need to move to nv > imho until fixed [3] non-passing jobs should just be removed [4] > > *ALL: *We've only merged 17 patches in the last 24 hours, we're usually > closer to 30-40+ per day. I'll keep an eye on it, it's only Monday. weee [5] > > *ALL: *We've merged 25 patches in the last 24 hours, an improvement, and we will continue to monitor. OVB is still red on c8 except for master. A patch is in the gate for the victoria OVB job. We promoted master and victoria yesterday (victoria w/o OVB); this was done to refresh content for victoria and speed CI up.
Train and Ussuri are blocked on container builds, possible buildah issue https://bugs.launchpad.net/tripleo/+bug/1908276 Thanks > [1] > https://review.rdoproject.org/zuul/builds?job_name=tripleo-ci-centos-8-ovb-1ctlr_1comp-featureset001&job_name=tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001&branch=master > [2] https://review.opendev.org/c/openstack/tripleo-heat-templates/+/766797 > > [3] > http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&var-influxdb_filter=branch%7C%3D%7Cstable%2Ftrain > [4] > https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-7-standalone-upgrade-train > > https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-7-containerized-undercloud-upgrades > https://review.opendev.org/c/openstack/tripleo-ci/+/766621 > [5] http://paste.openstack.org/show/801030/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalvarez at redhat.com Tue Dec 15 15:59:51 2020 From: dalvarez at redhat.com (Daniel Alvarez Sanchez) Date: Tue, 15 Dec 2020 16:59:51 +0100 Subject: [magnum] [neutron] [ovn] No inter-node pod-to-pod communication due to missing ACLs in OVN In-Reply-To: <0aadfbb2-8a46-4a75-bd75-4c9e2c5cd463@www.fastmail.com> References: <0aadfbb2-8a46-4a75-bd75-4c9e2c5cd463@www.fastmail.com> Message-ID: Hi Chris, thanks for moving this here. On Tue, Dec 15, 2020 at 4:22 PM Krzysztof Klimonda < kklimonda at syntaxhighlighted.com> wrote: > Hi, > > This email is a follow-up to a discussion I've openened on ovs-discuss > ML[1] regarding lack of TCP/UDP connectivity between pods deployed on > magnum-managed k8s cluster with calico CNI and IPIP tunneling disabled > (calico_ipv4pool_ipip label set to a default value of Off). > > As a short introduction, during magnum testing in ussuri deployment with > ml2/ovn neutron driver I've noticed lack of communication between pods > deployed on different nodes as part of magnum deployment with calico > configured to *not* encapsulate traffic in IPIP tunnel, but route it > directly between nodes. In theory, magnum configures adds defined pod > network to k8s nodes ports' allowed_address_pairs[2] and then security > group is created allowing for ICMP and TCP/UDP traffic between ports > belonging to that security group[3]. This doesn't work with ml2/ovn as > TCP/UDP traffic between IP addresses in pod network is not matching ACLs > defined in OVN. > > I can't verify this behaviour under ml2/ovs for the next couple of weeks, > as I'm taking them off for holidays, but perhaps someone knows if that > specific usecase (security group rules with remote groups used with allowed > address pairs) is supposed to be working, or should magnum use pod network > cidr to allow traffic between nodes instead. > In ML2/OVN we're adding the allowed address pairs to the 'addresses' field only when the MAC address of the pair is the same as the port MAC [0]. I think that we can change the code to accomplish what you want (if it matches ML2/OVS which I think it does) by adding all IP-MAC pairs of the allowed-address pairs to the 'addresses' column. E.g: addresses = [ MAC1 IP1, AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ] (right now it's just addresses = [ MAC1 IP1 ]) port_security column will be kept as it is today. This way, when ovn-northd generates the Address_Set in the SB database for the corresponding remote group, the allowed-address pairs IP addresses will be added to it and honored by the security groups. +Numan Siddique to confirm that this doesn't have any unwanted side effects. 
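To make that concrete, here is a rough sketch (the MACs, IPs and port name are made up purely for illustration) of what the NB record for a Neutron port with one allowed address pair would look like with the proposed change; today only the first entry shows up in 'addresses', while 'port_security' already carries both:

  $ ovn-nbctl --columns=addresses,port_security list Logical_Switch_Port lsp-example
  addresses       : ["fa:16:3e:11:22:33 10.0.0.5", "fa:16:3e:44:55:66 10.100.1.3"]
  port_security   : ["fa:16:3e:11:22:33 10.0.0.5", "fa:16:3e:44:55:66 10.100.1.3"]

How CIDR-style pairs (e.g. a whole pod network like 10.100.0.0/16) would be expressed there is part of what needs confirming.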
[0] https://opendev.org/openstack/neutron/src/commit/6a8fa65302b45f32958e7fc2b73614715780b997/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L122-L125 > > [1] > https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html > [2] > https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml > [3] > https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml#L1038 > > -- > Best Regards, > - Chris > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalvarez at redhat.com Tue Dec 15 16:03:33 2020 From: dalvarez at redhat.com (Daniel Alvarez Sanchez) Date: Tue, 15 Dec 2020 17:03:33 +0100 Subject: [magnum] [neutron] [ovn] No inter-node pod-to-pod communication due to missing ACLs in OVN In-Reply-To: References: <0aadfbb2-8a46-4a75-bd75-4c9e2c5cd463@www.fastmail.com> Message-ID: On Tue, Dec 15, 2020 at 4:59 PM Daniel Alvarez Sanchez wrote: > Hi Chris, thanks for moving this here. > > On Tue, Dec 15, 2020 at 4:22 PM Krzysztof Klimonda < > kklimonda at syntaxhighlighted.com> wrote: > >> Hi, >> >> This email is a follow-up to a discussion I've openened on ovs-discuss >> ML[1] regarding lack of TCP/UDP connectivity between pods deployed on >> magnum-managed k8s cluster with calico CNI and IPIP tunneling disabled >> (calico_ipv4pool_ipip label set to a default value of Off). >> >> As a short introduction, during magnum testing in ussuri deployment with >> ml2/ovn neutron driver I've noticed lack of communication between pods >> deployed on different nodes as part of magnum deployment with calico >> configured to *not* encapsulate traffic in IPIP tunnel, but route it >> directly between nodes. In theory, magnum configures adds defined pod >> network to k8s nodes ports' allowed_address_pairs[2] and then security >> group is created allowing for ICMP and TCP/UDP traffic between ports >> belonging to that security group[3]. This doesn't work with ml2/ovn as >> TCP/UDP traffic between IP addresses in pod network is not matching ACLs >> defined in OVN. >> >> I can't verify this behaviour under ml2/ovs for the next couple of weeks, >> as I'm taking them off for holidays, but perhaps someone knows if that >> specific usecase (security group rules with remote groups used with allowed >> address pairs) is supposed to be working, or should magnum use pod network >> cidr to allow traffic between nodes instead. >> > > In ML2/OVN we're adding the allowed address pairs to the 'addresses' field > only when the MAC address of the pair is the same as the port MAC [0]. > I think that we can change the code to accomplish what you want (if it > matches ML2/OVS which I think it does) by adding all IP-MAC pairs of the > allowed-address pairs to the 'addresses' column. E.g: > > addresses = [ MAC1 IP1, AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ] (right now > it's just addresses = [ MAC1 IP1 ]) > port_security column will be kept as it is today. > > This way, when ovn-northd generates the Address_Set in the SB database for > the corresponding remote group, the allowed-address pairs IP addresses will > be added to it and honored by the security groups. > > +Numan Siddique to confirm that this doesn't have > any unwanted side effects. 
> On top of this I'd say that if the behavior with ML2/OVN is different from ML2/OVS we'll also need to add testing coverage in Neutron for allowed address pairs and remote SGs simultaneously. > > [0] > https://opendev.org/openstack/neutron/src/commit/6a8fa65302b45f32958e7fc2b73614715780b997/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L122-L125 > > >> >> [1] >> https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html >> [2] >> https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml >> [3] >> https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml#L1038 >> >> -- >> Best Regards, >> - Chris >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Tue Dec 15 16:08:16 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 15 Dec 2020 17:08:16 +0100 Subject: [ironic] IPA images: CentOS 8, Stream and Debian Message-ID: Hi wonderful ironicers! Unless you spend the whole last week under a rock (good for you - I mean it!), you must be already aware that the classical CentOS as we know it is going away in favour of CentOS Stream, and maybe as soon as in the end of 2021! For us it means two things: 1) Bifrost will test with CentOS Stream, which may not 100% match the current stable RHEL. 2) We need to decide what to do with our published DIB images [1], currently based on CentOS 8. I'd like to concentrate on problem #2. We have conducted an experiment of switching to CentOS Stream, and it did not go so well. The image size has increased from ~ 340 MiB to nearly 450 MiB, causing serious issues in CI jobs using it. Because of this we're reverting the switch and considering other options. One is to figure out what causes the size increase and remove packages or manually delete files. We're already doing it quite intensively, so I expect this option to be time-consuming. We'll probably have to repeat this exercise regularly to keep up with the distribution changes. The other option is to switch to another distro. Debian looks promising in this context: 3 years of support and the image size is just 273 MiB. I have a patch [2] up for people who want to try the resulting image on their bare metal machines. What do you think? Are there any concerns with either option? [1] https://tarballs.openstack.org/ironic-python-agent/dib/files/ [2] https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/767158 -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From kklimonda at syntaxhighlighted.com Tue Dec 15 16:14:29 2020 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Tue, 15 Dec 2020 17:14:29 +0100 Subject: =?UTF-8?Q?Re:_[magnum]_[neutron]_[ovn]_No_inter-node_pod-to-pod_communic?= =?UTF-8?Q?ation_due_to_missing_ACLs_in_OVN?= In-Reply-To: References: <0aadfbb2-8a46-4a75-bd75-4c9e2c5cd463@www.fastmail.com> Message-ID: <119f8b0d-4beb-4507-99f6-69cb53726c91@www.fastmail.com> On Tue, Dec 15, 2020, at 16:59, Daniel Alvarez Sanchez wrote: > Hi Chris, thanks for moving this here. 
> > On Tue, Dec 15, 2020 at 4:22 PM Krzysztof Klimonda wrote: >> Hi, >> >> This email is a follow-up to a discussion I've openened on ovs-discuss ML[1] regarding lack of TCP/UDP connectivity between pods deployed on magnum-managed k8s cluster with calico CNI and IPIP tunneling disabled (calico_ipv4pool_ipip label set to a default value of Off). >> >> As a short introduction, during magnum testing in ussuri deployment with ml2/ovn neutron driver I've noticed lack of communication between pods deployed on different nodes as part of magnum deployment with calico configured to *not* encapsulate traffic in IPIP tunnel, but route it directly between nodes. In theory, magnum configures adds defined pod network to k8s nodes ports' allowed_address_pairs[2] and then security group is created allowing for ICMP and TCP/UDP traffic between ports belonging to that security group[3]. This doesn't work with ml2/ovn as TCP/UDP traffic between IP addresses in pod network is not matching ACLs defined in OVN. >> >> I can't verify this behaviour under ml2/ovs for the next couple of weeks, as I'm taking them off for holidays, but perhaps someone knows if that specific usecase (security group rules with remote groups used with allowed address pairs) is supposed to be working, or should magnum use pod network cidr to allow traffic between nodes instead. > > In ML2/OVN we're adding the allowed address pairs to the 'addresses' field only when the MAC address of the pair is the same as the port MAC [0]. > I think that we can change the code to accomplish what you want (if it matches ML2/OVS which I think it does) by adding all IP-MAC pairs of the allowed-address pairs to the 'addresses' column. E.g: > > addresses = [ MAC1 IP1, AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ] (right now it's just addresses = [ MAC1 IP1 ]) > port_security column will be kept as it is today. How does [AP_MAC1 AP_IP1 AP_MAC2 AP_IP2] scale with a number of IP addresses set in allowed_address_pairs? Given how default pod network is 10.100.0.0/16 will that generate 65k flows in ovs, or is it not a 1:1 mapping? If ml2/ovs is also having scaling issues when remote groups are used, perhaps magnum should switch to defining remote-ip in its security groups instead, even if the underlying issue on ml2/ovn is fixed? > > This way, when ovn-northd generates the Address_Set in the SB database for the corresponding remote group, the allowed-address pairs IP addresses will be added to it and honored by the security groups. > > +Numan Siddique to confirm that this doesn't have any unwanted side effects. > > [0] https://opendev.org/openstack/neutron/src/commit/6a8fa65302b45f32958e7fc2b73614715780b997/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L122-L125 >> >> [1] https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html >> [2] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml >> [3] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml#L1038 >> >> -- >> Best Regards, >> - Chris >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Dec 15 16:56:56 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 15 Dec 2020 16:56:56 +0000 Subject: [oslo][nova][glance][cinder] move cursive library to oslo? 
In-Reply-To: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> References: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> Message-ID: <20201215165655.qthdygng5oimiev6@yuggoth.org> On 2020-12-14 23:53:39 -0500 (-0500), Brian Rosmaita wrote: [...] > The current cursive-core team entirely consists of members of the Johns > Hopkins University Applied Physics Laboratory, which ended its involvement > with OpenStack in July 2018 [...] > I'd like to propose that the cursive library be moved back to the > 'openstack' namespace and be put under Oslo governance with the consuming > teams sharing the maintenance of the library. [...] Purely from a logistics perspective, it would be good to get the permission of at least one of the current core reviewers, preferably by having them include oslo-core into their core group. Right now it's an independently developed project within the OpenDev Collaboratory, and OpenStack lacks the authority to just "take over" a non-OpenStack project without first making sure that's okay with the prior authors. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Tue Dec 15 18:07:22 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 15 Dec 2020 18:07:22 +0000 Subject: New Openstack Deployment questions - how will we respond to this change? In-Reply-To: <1b2962ff5a8e4d5da7568ab216576b7e@NCEMEXGP009.CORP.CHARTERCOM.com> References: <1b2962ff5a8e4d5da7568ab216576b7e@NCEMEXGP009.CORP.CHARTERCOM.com> Message-ID: <20201215180722.s2jztyl2ujqs7dbe@yuggoth.org> On 2020-12-15 14:40:01 +0000 (+0000), Braden, Albert wrote: [...] > If the OpenStack community decides to continue building on Stream, > we should make it crystal clear to operators and users that Stream > is not a production-ready OS and that our Stream OpenStack > implementation is suitable for testing and development use only, > unless they devote substantial resources to mirroring and testing > Stream to insulate production clusters from the instability that > it will introduce. Up until https://review.openstack.org/638045 merged last year, we used to put it plainly in our testing specs that: "The following free operating systems are representative of platforms regularly used to deploy OpenStack on: [...] Latest CentOS Major [...] The CentOS distribution is derived from the sources of Red Hat Enterprise Linux (RHEL). In reality, RHEL is more popular than CentOS but we can't use this platform on upstream gates, so we rely on CentOS." In essence, we've always been targeting RHEL and using CentOS as a stand-in substitute. For that purpose, CentOS 8 Stream ought to suffice for continued testing of our future releases to make sure they remain compatible with RHEL 8. In fact, it may actually be superior for that purpose, as it allows us to test what's going to appear in impending minor and point releases of RHEL 8 rather than testing a laggy copy of what's already been added in RHEL. > Should we replace our Centos builds with RHEL, or with Rocky? RHEL is still not a possibility for our CI from a licensing perspective, from what I understand. Rocky Linux might be a possibility, sure, once it's more than just a readme file and vapor. > Does the community have (or can we find) the resources to do the > work of maintaining stable Stream mirrors and only building > OpenStack on our stable versions of Stream? 
We already do: > Or would it be better to drop Centos support and focus our efforts > on operating systems that have not implemented unilateral changes > that harm the community? [...] The TripleO project is by far the largest user of our CI infrastructure (in aggregate node-hours), and they only work on RHEL/RDO or close derivatives like CentOS, so I expect at least they'll see value in continuing to have something RHEL-like to test changes against. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From adrian at fleio.com Tue Dec 15 18:10:36 2020 From: adrian at fleio.com (Adrian Andreias) Date: Tue, 15 Dec 2020 20:10:36 +0200 Subject: [all] Dropping lower constraints testing (WAS: Re: [stable][requirements][neutron] Capping pip in stable branches or not) In-Reply-To: References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> <17652961b73.11c186083102611.2301739328973440930@ghanshyammann.com> Message-ID: Hi, I'm probably missing something, but not sure why multiple OpenStack projects that only communicate through APIs would need to coexist in the same virtual environment (which leads to exponential dependency hell). Regardless of the deployment type or packager, makes sense to always have exactly one virtual environment per OpenStack project. Projects have various needs and priorities, own upgrade paths for third party libraries, therefore totally independent requirements.txt. And all lib versions pinpointed, no low or highs. The usual best practice. So what am I missing? Regards, Adrian Andreias https://fleio.com On Fri, Dec 11, 2020, 11:10 PM Goutham Pacha Ravi wrote: > Hi, > > I hope you won't mind me shifting this discussion to [all] - many projects > have had to make changes due to the dependency resolver catching some of > our uncaught lies. > In manila, i've pushed up three changes to fix the CI on the main, > stable/victoria and stable/ussuri [1] branches. I used fungi's method of > installing things and playing whack-a-mole [2] and Brain > Rosmaita's approach [3] of taking the opportunity to raise the minimum > required packages for Wallaby. However, this all seems kludgy maintenance - > and possibly no-one is benefitting from the effort we're putting into this > as called out. > > Can more distributors and deployment tooling folks comment? > > [1] > https://review.opendev.org/q/project:openstack/manila+topic:update-requirements > > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019285.html > [3] https://review.opendev.org/c/openstack/cinder/+/766085 > > > > On Fri, Dec 11, 2020 at 12:51 PM Sorin Sbarnea > wrote: > >> Jeremy nailed it very well. >> >> Tripleo already removed lower-constraints from most places (some changes >> may be still waiting to be gated). >> >> Regarding decoupling linting from test-requirements: yes! This was >> already done by some when conflicts appeared. For old branches I personally >> do not care much even if maintainers decide to disable linting, their main >> benefit is on main branches. >> >> On Fri, 11 Dec 2020 at 18:14, Radosław Piliszek < >> radoslaw.piliszek at gmail.com> wrote: >> >>> On Fri, Dec 11, 2020 at 5:16 PM Ghanshyam Mann >>> wrote: >>> > >>> > Maintaining it up to date is not so worth compare to the effort it is >>> taking. I will also suggest to >>> > remove this. >>> > >>> >>> Kolla dropped lower-constraints from all the branches. 
>>> >>> -yoctozepto >>> >>> -- >> -- >> /sorin >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Dec 15 18:23:59 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 15 Dec 2020 18:23:59 +0000 Subject: [ironic] IPA images: CentOS 8, Stream and Debian In-Reply-To: References: Message-ID: <20201215182359.jhtqopfzwlwevjdt@yuggoth.org> On 2020-12-15 17:08:16 +0100 (+0100), Dmitry Tantsur wrote: [...] > The other option is to switch to another distro. Debian looks > promising in this context: 3 years of support and the image size > is just 273 MiB. [...] Not to be a Debian apologist, but depending on what you mean by "support" it could be usable for Ironic's purposes even longer: https://www.debian.org/lts/ The Zuul project and the OpenDev Collaboratory both use Debian as the basis of their container images too (though executed from Ubuntu virtual servers for now, in the latter case). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Tue Dec 15 18:29:39 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 15 Dec 2020 18:29:39 +0000 Subject: [all] Dropping lower constraints testing (WAS: Re: [stable][requirements][neutron] Capping pip in stable branches or not) In-Reply-To: References: <20201211143818.2w24gusndhnpzvnq@yuggoth.org> <17652961b73.11c186083102611.2301739328973440930@ghanshyammann.com> Message-ID: <20201215182939.bvck42rgpfhh5yzs@yuggoth.org> On 2020-12-15 20:10:36 +0200 (+0200), Adrian Andreias wrote: > I'm probably missing something, but not sure why multiple > OpenStack projects that only communicate through APIs would need > to coexist in the same virtual environment (which leads to > exponential dependency hell). > > Regardless of the deployment type or packager, makes sense to > always have exactly one virtual environment per OpenStack project. > Projects have various needs and priorities, own upgrade paths for > third party libraries, therefore totally independent > requirements.txt. And all lib versions pinpointed, no low or > highs. The usual best practice. [...] Got it. So you've developed some magic new containment technology which will allow you to use incompatible versions of nova and oslo.messaging, for example? Those separate OpenStack projects no longer need to be coinstallable? ;) But also, coinstallability is fundamental to inclusion in any coordinated software distribution. Red Hat or Debian are not going to want to have to maintain lots of different versions of the same dependencies (and duplicate security fix backporting work that many times over). Being able to use consistent versions of your dependency chain has lots of benefits even if you're not going to actually install all the components into one system together at the same time. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zigo at debian.org Tue Dec 15 19:53:30 2020 From: zigo at debian.org (Thomas Goirand) Date: Tue, 15 Dec 2020 20:53:30 +0100 Subject: [ironic] IPA images: CentOS 8, Stream and Debian In-Reply-To: References: Message-ID: <1e106ff3-6156-6134-bcd9-bd3d906ed74a@debian.org> On 12/15/20 5:08 PM, Dmitry Tantsur wrote: > Hi wonderful ironicers! 
> > Unless you spend the whole last week under a rock (good for you - I mean > it!), you must be already aware that the classical CentOS as we know it > is going away in favour of CentOS Stream, and maybe as soon as in the > end of 2021! > > For us it means two things: > 1) Bifrost will test with CentOS Stream, which may not 100% match the > current stable RHEL. > 2) We need to decide what to do with our published DIB images [1], > currently based on CentOS 8. > > I'd like to concentrate on problem #2. > > We have conducted an experiment of switching to CentOS Stream, and it > did not go so well. The image size has increased from ~ 340 MiB to > nearly 450 MiB, causing serious issues in CI jobs using it. Because of > this we're reverting the switch and considering other options. > > One is to figure out what causes the size increase and remove packages > or manually delete files. We're already doing it quite intensively, so I > expect this option to be time-consuming. We'll probably have to repeat > this exercise regularly to keep up with the distribution changes. > > The other option is to switch to another distro. Debian looks promising > in this context: 3 years of support and the image size is just 273 MiB. > I have a patch [2] up for people who want to try the resulting image on > their bare metal machines. > > What do you think? Are there any concerns with either option? > > [1] https://tarballs.openstack.org/ironic-python-agent/dib/files/ > > [2] > https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/767158 > Hi Dmitry! I wonder what Debian image you've been using. Have you tried one of the daily image that the Debian Cloud Image Team prepares? [1] If you choose the Debian path, you have all of my support, and I'll try to help as much as I can. Cheers, Thomas Goirand (zigo) [1] http://cdimage.debian.org/cdimage/cloud/ and look for the "generic" images, which should work for Ironic. From fungi at yuggoth.org Tue Dec 15 20:22:29 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 15 Dec 2020 20:22:29 +0000 Subject: [ironic] IPA images: CentOS 8, Stream and Debian In-Reply-To: <1e106ff3-6156-6134-bcd9-bd3d906ed74a@debian.org> References: <1e106ff3-6156-6134-bcd9-bd3d906ed74a@debian.org> Message-ID: <20201215202229.vssg2jruwazd5zpy@yuggoth.org> On 2020-12-15 20:53:30 +0100 (+0100), Thomas Goirand wrote: [...] > I wonder what Debian image you've been using. Have you tried one of the > daily image that the Debian Cloud Image Team prepares? [...] While admitting that I don't really know much about it, I suspect "cloud" images would be unsuitable since what Ironic needs is really a RAMdisk to bootstrap bare metal server inventorying and deployment. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From skaplons at redhat.com Tue Dec 15 21:18:09 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 15 Dec 2020 22:18:09 +0100 Subject: [magnum] [neutron] [ovn] No inter-node pod-to-pod communication due to missing ACLs in OVN In-Reply-To: <119f8b0d-4beb-4507-99f6-69cb53726c91@www.fastmail.com> References: <0aadfbb2-8a46-4a75-bd75-4c9e2c5cd463@www.fastmail.com> <119f8b0d-4beb-4507-99f6-69cb53726c91@www.fastmail.com> Message-ID: <20201215211809.fi25gel5n7pjuhfs@p1.localdomain> Hi, On Tue, Dec 15, 2020 at 05:14:29PM +0100, Krzysztof Klimonda wrote: > On Tue, Dec 15, 2020, at 16:59, Daniel Alvarez Sanchez wrote: > > Hi Chris, thanks for moving this here. > > > > On Tue, Dec 15, 2020 at 4:22 PM Krzysztof Klimonda wrote: > >> Hi, > >> > >> This email is a follow-up to a discussion I've openened on ovs-discuss ML[1] regarding lack of TCP/UDP connectivity between pods deployed on magnum-managed k8s cluster with calico CNI and IPIP tunneling disabled (calico_ipv4pool_ipip label set to a default value of Off). > >> > >> As a short introduction, during magnum testing in ussuri deployment with ml2/ovn neutron driver I've noticed lack of communication between pods deployed on different nodes as part of magnum deployment with calico configured to *not* encapsulate traffic in IPIP tunnel, but route it directly between nodes. In theory, magnum configures adds defined pod network to k8s nodes ports' allowed_address_pairs[2] and then security group is created allowing for ICMP and TCP/UDP traffic between ports belonging to that security group[3]. This doesn't work with ml2/ovn as TCP/UDP traffic between IP addresses in pod network is not matching ACLs defined in OVN. > >> > >> I can't verify this behaviour under ml2/ovs for the next couple of weeks, as I'm taking them off for holidays, but perhaps someone knows if that specific usecase (security group rules with remote groups used with allowed address pairs) is supposed to be working, or should magnum use pod network cidr to allow traffic between nodes instead. > > > > In ML2/OVN we're adding the allowed address pairs to the 'addresses' field only when the MAC address of the pair is the same as the port MAC [0]. > > I think that we can change the code to accomplish what you want (if it matches ML2/OVS which I think it does) by adding all IP-MAC pairs of the allowed-address pairs to the 'addresses' column. E.g: > > > > addresses = [ MAC1 IP1, AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ] (right now it's just addresses = [ MAC1 IP1 ]) > > port_security column will be kept as it is today. > > How does [AP_MAC1 AP_IP1 AP_MAC2 AP_IP2] scale with a number of IP addresses set in allowed_address_pairs? Given how default pod network is 10.100.0.0/16 will that generate 65k flows in ovs, or is it not a 1:1 mapping? > > If ml2/ovs is also having scaling issues when remote groups are used, perhaps magnum should switch to defining remote-ip in its security groups instead, even if the underlying issue on ml2/ovn is fixed? IIRC Kuryr moved already to such solution as they had problems with scaling on ML2/OVS when remote_group ids where used. > > > > > This way, when ovn-northd generates the Address_Set in the SB database for the corresponding remote group, the allowed-address pairs IP addresses will be added to it and honored by the security groups. > > > > +Numan Siddique to confirm that this doesn't have any unwanted side effects. 
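A rough sketch of what the proposal quoted above could look like on the ML2/OVN side -- this is illustrative only, not the actual neutron code referenced in [0]; the helper name and dict handling are made up for the example:

    def build_lsp_addresses(port):
        # Today the driver emits a single "MAC IP1 IP2 ..." entry and only
        # folds in allowed-address pairs whose MAC equals the port MAC. The
        # proposal is to always append one "AP_MAC AP_IP" entry per pair so
        # that ovn-northd includes those IPs in the remote group Address_Set.
        addresses = ['%s %s' % (port['mac_address'],
                                ' '.join(ip['ip_address']
                                         for ip in port['fixed_ips']))]
        for pair in port.get('allowed_address_pairs', []):
            pair_mac = pair.get('mac_address') or port['mac_address']
            addresses.append('%s %s' % (pair_mac, pair['ip_address']))
        return addresses

The port_security column would be left as it is today, as noted above.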
> > > > [0] https://opendev.org/openstack/neutron/src/commit/6a8fa65302b45f32958e7fc2b73614715780b997/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L122-L125 > >> > >> [1] https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html > >> [2] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml > >> [3] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml#L1038 > >> > >> -- > >> Best Regards, > >> - Chris > >> -- Slawek Kaplonski Principal Software Engineer Red Hat From dalvarez at redhat.com Tue Dec 15 21:57:02 2020 From: dalvarez at redhat.com (Daniel Alvarez) Date: Tue, 15 Dec 2020 22:57:02 +0100 Subject: [magnum] [neutron] [ovn] No inter-node pod-to-pod communication due to missing ACLs in OVN In-Reply-To: <20201215211809.fi25gel5n7pjuhfs@p1.localdomain> References: <20201215211809.fi25gel5n7pjuhfs@p1.localdomain> Message-ID: > On 15 Dec 2020, at 22:18, Slawek Kaplonski wrote: > > Hi, > >> On Tue, Dec 15, 2020 at 05:14:29PM +0100, Krzysztof Klimonda wrote: >>> On Tue, Dec 15, 2020, at 16:59, Daniel Alvarez Sanchez wrote: >>> Hi Chris, thanks for moving this here. >>> >>> On Tue, Dec 15, 2020 at 4:22 PM Krzysztof Klimonda wrote: >>>> Hi, >>>> >>>> This email is a follow-up to a discussion I've openened on ovs-discuss ML[1] regarding lack of TCP/UDP connectivity between pods deployed on magnum-managed k8s cluster with calico CNI and IPIP tunneling disabled (calico_ipv4pool_ipip label set to a default value of Off). >>>> >>>> As a short introduction, during magnum testing in ussuri deployment with ml2/ovn neutron driver I've noticed lack of communication between pods deployed on different nodes as part of magnum deployment with calico configured to *not* encapsulate traffic in IPIP tunnel, but route it directly between nodes. In theory, magnum configures adds defined pod network to k8s nodes ports' allowed_address_pairs[2] and then security group is created allowing for ICMP and TCP/UDP traffic between ports belonging to that security group[3]. This doesn't work with ml2/ovn as TCP/UDP traffic between IP addresses in pod network is not matching ACLs defined in OVN. >>>> >>>> I can't verify this behaviour under ml2/ovs for the next couple of weeks, as I'm taking them off for holidays, but perhaps someone knows if that specific usecase (security group rules with remote groups used with allowed address pairs) is supposed to be working, or should magnum use pod network cidr to allow traffic between nodes instead. >>> >>> In ML2/OVN we're adding the allowed address pairs to the 'addresses' field only when the MAC address of the pair is the same as the port MAC [0]. >>> I think that we can change the code to accomplish what you want (if it matches ML2/OVS which I think it does) by adding all IP-MAC pairs of the allowed-address pairs to the 'addresses' column. E.g: >>> >>> addresses = [ MAC1 IP1, AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ] (right now it's just addresses = [ MAC1 IP1 ]) >>> port_security column will be kept as it is today. >> >> How does [AP_MAC1 AP_IP1 AP_MAC2 AP_IP2] scale with a number of IP addresses set in allowed_address_pairs? Given how default pod network is 10.100.0.0/16 will that generate 65k flows in ovs, or is it not a 1:1 mapping? It will use conjunctive flows but yes it will be huge no matter what. 
If we follow the approach of adding match conditions to the ACLs for each address pair it is going to be even worse when expanded by ovn-controller. >> >> If ml2/ovs is also having scaling issues when remote groups are used, perhaps magnum should switch to defining remote-ip in its security groups instead, even if the underlying issue on ml2/ovn is fixed? > > IIRC Kuryr moved already to such solution as they had problems with scaling on > ML2/OVS when remote_group ids where used. That’s right. Remote groups are expensive in any case. Mind opening a launchpad bug for OVN though? Thanks! > >> >>> >>> This way, when ovn-northd generates the Address_Set in the SB database for the corresponding remote group, the allowed-address pairs IP addresses will be added to it and honored by the security groups. >>> >>> +Numan Siddique to confirm that this doesn't have any unwanted side effects. >>> >>> [0] https://opendev.org/openstack/neutron/src/commit/6a8fa65302b45f32958e7fc2b73614715780b997/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L122-L125 >>>> >>>> [1] https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html >>>> [2] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml >>>> [3] https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml#L1038 >>>> >>>> -- >>>> Best Regards, >>>> - Chris >>>> > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > From zigo at debian.org Tue Dec 15 22:42:01 2020 From: zigo at debian.org (Thomas Goirand) Date: Tue, 15 Dec 2020 23:42:01 +0100 Subject: [ironic] IPA images: CentOS 8, Stream and Debian In-Reply-To: <20201215202229.vssg2jruwazd5zpy@yuggoth.org> References: <1e106ff3-6156-6134-bcd9-bd3d906ed74a@debian.org> <20201215202229.vssg2jruwazd5zpy@yuggoth.org> Message-ID: On 12/15/20 9:22 PM, Jeremy Stanley wrote: > On 2020-12-15 20:53:30 +0100 (+0100), Thomas Goirand wrote: > [...] >> I wonder what Debian image you've been using. Have you tried one of the >> daily image that the Debian Cloud Image Team prepares? > [...] > > While admitting that I don't really know much about it, I suspect > "cloud" images would be unsuitable since what Ironic needs is really > a RAMdisk to bootstrap bare metal server inventorying and > deployment. You're assuming wrongly here. There's 2 types of images, one which we call "generic" that contains all the drivers, and one which we call "genericcloud" which uses the cloud kernel (ie: stripped with most hardware support, better suited for OpenStack Qemu hypervisor). The generic image is 284 MB, the cloud one is 221 MB (so 63 MB difference). So the "generic" image should be ok as an image for Ironic. As I never deployed Ironic myself, I'd be curious to know if the image is working well under that environment. Please let me know! Cheers, Thomas Goirand (zigo) From emiller at genesishosting.com Tue Dec 15 23:43:20 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Tue, 15 Dec 2020 17:43:20 -0600 Subject: [ironic] Securing physical hosts in hostile environments Message-ID: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> Hi, We have considered ironic for deploying physical hosts for our public cloud platform, but have not found any way to properly secure the hosts, or rather, how to reset a physical host back to factory defaults between uses - such as BIOS and BMC settings. 
Since users (bad actors) can access the BMC via SMBus, reset BIOS password(s), change firmware versions, etc., there appears to be no proper way to secure a platform. This is especially true when resetting BIOS/BMC configurations since this typically involves shorting a jumper and power cycling a unit (physically removing power from the power supplies - not just a power down from the BMC). Manufacturers have not made this easy/possible, and we have yet to find a commercial device that can assist with this out-of-band. We have actually thought of building our own, but thought we would ask the community first. Thanks! Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From emiller at genesishosting.com Wed Dec 16 00:59:18 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Tue, 15 Dec 2020 18:59:18 -0600 Subject: [ironic] Securing physical hosts in hostile environments In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> Message-ID: <046E9C0290DD9149B106B72FC9156BEA04814DF8@gmsxchsvr01.thecreation.com> Looks like I forgot to ask a question after my statements. :) What are others doing to secure their physical hosts in hostile environments? Eric From skaplons at redhat.com Wed Dec 16 08:56:34 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 16 Dec 2020 09:56:34 +0100 Subject: [neutron] Team meetings this year cancelled Message-ID: <33405295.0rLbERne6g@p1> Hi, As we discussed during our last meeting, we are going to cancel our team meetings in next 2 weeks. Have a great holidays and see You all on the meeting at 5.01.2021 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Wed Dec 16 08:57:14 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 16 Dec 2020 09:57:14 +0100 Subject: [neutron]CI meetings in next 2 weeks canceled Message-ID: <5533829.oSiggymxeP@p1> Hi, As we discussed during our last meeting, we are going to cancel our CI meetings in next 2 weeks. Have a great holidays and see You all on the meeting at 5.01.2021 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From dtantsur at redhat.com Wed Dec 16 10:35:21 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 16 Dec 2020 11:35:21 +0100 Subject: [ironic] IPA images: CentOS 8, Stream and Debian In-Reply-To: <1e106ff3-6156-6134-bcd9-bd3d906ed74a@debian.org> References: <1e106ff3-6156-6134-bcd9-bd3d906ed74a@debian.org> Message-ID: On Tue, Dec 15, 2020 at 8:56 PM Thomas Goirand wrote: > On 12/15/20 5:08 PM, Dmitry Tantsur wrote: > > Hi wonderful ironicers! > > > > Unless you spend the whole last week under a rock (good for you - I mean > > it!), you must be already aware that the classical CentOS as we know it > > is going away in favour of CentOS Stream, and maybe as soon as in the > > end of 2021! > > > > For us it means two things: > > 1) Bifrost will test with CentOS Stream, which may not 100% match the > > current stable RHEL. 
> > 2) We need to decide what to do with our published DIB images [1], > > currently based on CentOS 8. > > > > I'd like to concentrate on problem #2. > > > > We have conducted an experiment of switching to CentOS Stream, and it > > did not go so well. The image size has increased from ~ 340 MiB to > > nearly 450 MiB, causing serious issues in CI jobs using it. Because of > > this we're reverting the switch and considering other options. > > > > One is to figure out what causes the size increase and remove packages > > or manually delete files. We're already doing it quite intensively, so I > > expect this option to be time-consuming. We'll probably have to repeat > > this exercise regularly to keep up with the distribution changes. > > > > The other option is to switch to another distro. Debian looks promising > > in this context: 3 years of support and the image size is just 273 MiB. > > I have a patch [2] up for people who want to try the resulting image on > > their bare metal machines. > > > > What do you think? Are there any concerns with either option? > > > > [1] https://tarballs.openstack.org/ironic-python-agent/dib/files/ > > > > [2] > > > https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/767158 > > < > https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/767158 > > > > Hi Dmitry! > > I wonder what Debian image you've been using. Have you tried one of the > daily image that the Debian Cloud Image Team prepares? [1] > Hi Thomas, We're using diskimage-builder, which, I think, uses debootstrap. This matches our requirements pretty well since we need as small images as it is possible. > > If you choose the Debian path, you have all of my support, and I'll try > to help as much as I can. > Thank you! Dmitry > > Cheers, > > Thomas Goirand (zigo) > > [1] http://cdimage.debian.org/cdimage/cloud/ and look for the "generic" > images, which should work for Ironic. > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Wed Dec 16 10:44:51 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 16 Dec 2020 11:44:51 +0100 Subject: [ironic] IPA images: CentOS 8, Stream and Debian In-Reply-To: References: <1e106ff3-6156-6134-bcd9-bd3d906ed74a@debian.org> Message-ID: <81e9d5a3-5b86-dbc8-6083-1eb20754d1f6@debian.org> On 12/16/20 11:35 AM, Dmitry Tantsur wrote: > > > On Tue, Dec 15, 2020 at 8:56 PM Thomas Goirand > wrote: > > On 12/15/20 5:08 PM, Dmitry Tantsur wrote: > > Hi wonderful ironicers! > > > > Unless you spend the whole last week under a rock (good for you - > I mean > > it!), you must be already aware that the classical CentOS as we > know it > > is going away in favour of CentOS Stream, and maybe as soon as in the > > end of 2021! > > > > For us it means two things: > > 1) Bifrost will test with CentOS Stream, which may not 100% match the > > current stable RHEL. > > 2) We need to decide what to do with our published DIB images [1], > > currently based on CentOS 8. > > > > I'd like to concentrate on problem #2. > > > > We have conducted an experiment of switching to CentOS Stream, and it > > did not go so well. The image size has increased from ~ 340 MiB to > > nearly 450 MiB, causing serious issues in CI jobs using it. Because of > > this we're reverting the switch and considering other options. 
> > > > One is to figure out what causes the size increase and remove packages > > or manually delete files. We're already doing it quite > intensively, so I > > expect this option to be time-consuming. We'll probably have to repeat > > this exercise regularly to keep up with the distribution changes. > > > > The other option is to switch to another distro. Debian looks > promising > > in this context: 3 years of support and the image size is just 273 > MiB. > > I have a patch [2] up for people who want to try the resulting > image on > > their bare metal machines. > > > > What do you think? Are there any concerns with either option? > > > > [1] https://tarballs.openstack.org/ironic-python-agent/dib/files/ > > > > > > [2] > > > https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/767158 > > > > > > > Hi Dmitry! > > I wonder what Debian image you've been using. Have you tried one of the > daily image that the Debian Cloud Image Team prepares? [1] > > > Hi Thomas, > > We're using diskimage-builder, which, I think, uses debootstrap. This > matches our requirements pretty well since we need as small images as it > is possible. >   > > > If you choose the Debian path, you have all of my support, and I'll try > to help as much as I can. > > > Thank you! > > Dmitry Hi Dmitry, Could you please have a quick try with the official "Generic" image from the Debian team? Best maybe would be to try this one: http://cdimage.debian.org/cdimage/cloud/bullseye/daily/20201216-486/debian-11-generic-amd64-daily-20201216-486.qcow2 Indeed, I would very much like to be able to validate the image works for Ironic. Cheers, Thomas Goirand (zigo) From dtantsur at redhat.com Wed Dec 16 10:59:07 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 16 Dec 2020 11:59:07 +0100 Subject: [ironic] IPA images: CentOS 8, Stream and Debian In-Reply-To: <81e9d5a3-5b86-dbc8-6083-1eb20754d1f6@debian.org> References: <1e106ff3-6156-6134-bcd9-bd3d906ed74a@debian.org> <81e9d5a3-5b86-dbc8-6083-1eb20754d1f6@debian.org> Message-ID: On Wed, Dec 16, 2020 at 11:47 AM Thomas Goirand wrote: > On 12/16/20 11:35 AM, Dmitry Tantsur wrote: > > > > > > On Tue, Dec 15, 2020 at 8:56 PM Thomas Goirand > > wrote: > > > > On 12/15/20 5:08 PM, Dmitry Tantsur wrote: > > > Hi wonderful ironicers! > > > > > > Unless you spend the whole last week under a rock (good for you - > > I mean > > > it!), you must be already aware that the classical CentOS as we > > know it > > > is going away in favour of CentOS Stream, and maybe as soon as in > the > > > end of 2021! > > > > > > For us it means two things: > > > 1) Bifrost will test with CentOS Stream, which may not 100% match > the > > > current stable RHEL. > > > 2) We need to decide what to do with our published DIB images [1], > > > currently based on CentOS 8. > > > > > > I'd like to concentrate on problem #2. > > > > > > We have conducted an experiment of switching to CentOS Stream, and > it > > > did not go so well. The image size has increased from ~ 340 MiB to > > > nearly 450 MiB, causing serious issues in CI jobs using it. > Because of > > > this we're reverting the switch and considering other options. > > > > > > One is to figure out what causes the size increase and remove > packages > > > or manually delete files. We're already doing it quite > > intensively, so I > > > expect this option to be time-consuming. We'll probably have to > repeat > > > this exercise regularly to keep up with the distribution changes. > > > > > > The other option is to switch to another distro. 
Debian looks > > promising > > > in this context: 3 years of support and the image size is just 273 > > MiB. > > > I have a patch [2] up for people who want to try the resulting > > image on > > > their bare metal machines. > > > > > > What do you think? Are there any concerns with either option? > > > > > > [1] https://tarballs.openstack.org/ironic-python-agent/dib/files/ > > > > > > > > > > [2] > > > > > > https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/767158 > > < > https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/767158 > > > > > > > < > https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/767158 > > < > https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/767158 > >> > > > > Hi Dmitry! > > > > I wonder what Debian image you've been using. Have you tried one of > the > > daily image that the Debian Cloud Image Team prepares? [1] > > > > > > Hi Thomas, > > > > We're using diskimage-builder, which, I think, uses debootstrap. This > > matches our requirements pretty well since we need as small images as it > > is possible. > > > > > > > > If you choose the Debian path, you have all of my support, and I'll > try > > to help as much as I can. > > > > > > Thank you! > > > > Dmitry > > Hi Dmitry, > > Could you please have a quick try with the official "Generic" image from > the Debian team? Best maybe would be to try this one: > > > http://cdimage.debian.org/cdimage/cloud/bullseye/daily/20201216-486/debian-11-generic-amd64-daily-20201216-486.qcow2 Only if DIB provides support for it. I'm not inclined to reproduce the whole generation process manually. Dmitry > > > Indeed, I would very much like to be able to validate the image works > for Ironic. > > Cheers, > > Thomas Goirand (zigo) > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalvarez at redhat.com Wed Dec 16 11:23:02 2020 From: dalvarez at redhat.com (Daniel Alvarez Sanchez) Date: Wed, 16 Dec 2020 12:23:02 +0100 Subject: [magnum] [neutron] [ovn] No inter-node pod-to-pod communication due to missing ACLs in OVN In-Reply-To: References: <20201215211809.fi25gel5n7pjuhfs@p1.localdomain> Message-ID: On Tue, Dec 15, 2020 at 10:57 PM Daniel Alvarez wrote: > > > > > On 15 Dec 2020, at 22:18, Slawek Kaplonski wrote: > > > > Hi, > > > >> On Tue, Dec 15, 2020 at 05:14:29PM +0100, Krzysztof Klimonda wrote: > >>> On Tue, Dec 15, 2020, at 16:59, Daniel Alvarez Sanchez wrote: > >>> Hi Chris, thanks for moving this here. > >>> > >>> On Tue, Dec 15, 2020 at 4:22 PM Krzysztof Klimonda < > kklimonda at syntaxhighlighted.com> wrote: > >>>> Hi, > >>>> > >>>> This email is a follow-up to a discussion I've openened on > ovs-discuss ML[1] regarding lack of TCP/UDP connectivity between pods > deployed on magnum-managed k8s cluster with calico CNI and IPIP tunneling > disabled (calico_ipv4pool_ipip label set to a default value of Off). > >>>> > >>>> As a short introduction, during magnum testing in ussuri deployment > with ml2/ovn neutron driver I've noticed lack of communication between pods > deployed on different nodes as part of magnum deployment with calico > configured to *not* encapsulate traffic in IPIP tunnel, but route it > directly between nodes. 
In theory, magnum configures adds defined pod > network to k8s nodes ports' allowed_address_pairs[2] and then security > group is created allowing for ICMP and TCP/UDP traffic between ports > belonging to that security group[3]. This doesn't work with ml2/ovn as > TCP/UDP traffic between IP addresses in pod network is not matching ACLs > defined in OVN. > >>>> > >>>> I can't verify this behaviour under ml2/ovs for the next couple of > weeks, as I'm taking them off for holidays, but perhaps someone knows if > that specific usecase (security group rules with remote groups used with > allowed address pairs) is supposed to be working, or should magnum use pod > network cidr to allow traffic between nodes instead. > >>> > >>> In ML2/OVN we're adding the allowed address pairs to the 'addresses' > field only when the MAC address of the pair is the same as the port MAC [0]. > >>> I think that we can change the code to accomplish what you want (if it > matches ML2/OVS which I think it does) by adding all IP-MAC pairs of the > allowed-address pairs to the 'addresses' column. E.g: > >>> > >>> addresses = [ MAC1 IP1, AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ] (right now > it's just addresses = [ MAC1 IP1 ]) > >>> port_security column will be kept as it is today. > >> > >> How does [AP_MAC1 AP_IP1 AP_MAC2 AP_IP2] scale with a number of IP > addresses set in allowed_address_pairs? Given how default pod network is > 10.100.0.0/16 will that generate 65k flows in ovs, or is it not a 1:1 > mapping? > > It will use conjunctive flows but yes it will be huge no matter what. If > we follow the approach of adding match conditions to the ACLs for each > address pair it is going to be even worse when expanded by ovn-controller. > >> > >> If ml2/ovs is also having scaling issues when remote groups are used, > perhaps magnum should switch to defining remote-ip in its security groups > instead, even if the underlying issue on ml2/ovn is fixed? > > > > IIRC Kuryr moved already to such solution as they had problems with > scaling on > > ML2/OVS when remote_group ids where used. > @Slaweq, ML2/OVS accounts for allowed address pairs for remote security groups but not for FIPs right? I wonder why the distinction. Documentation is not clear but I'm certain that FIPs are not accounted for by remote groups. If we decide to go ahead and implement this in ML2/OVN, the same thing can be applied for FIPs adding the FIP to the 'addresses' field but there might be scaling issues. > That’s right. Remote groups are expensive in any case. > > Mind opening a launchpad bug for OVN though? > > Thanks! > > > >> > >>> > >>> This way, when ovn-northd generates the Address_Set in the SB database > for the corresponding remote group, the allowed-address pairs IP addresses > will be added to it and honored by the security groups. > >>> > >>> +Numan Siddique to confirm that this > doesn't have any unwanted side effects. 
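As a concrete illustration of the magnum-side workaround mentioned above (allowing the pod network CIDR via remote_ip_prefix instead of relying on a remote group), something along these lines with openstacksdk would do it -- a sketch only, the cloud name, security group name and CIDR are examples and this is not what magnum's templates do today:

    import openstack

    conn = openstack.connect(cloud='mycloud')
    sg = conn.network.find_security_group('k8s-cluster-secgroup')
    pod_cidr = '10.100.0.0/16'  # the default pod network discussed here

    # Allow traffic from the pod network by CIDR rather than by remote
    # group, avoiding the per-member Address_Set/ipset expansion entirely.
    for proto in ('icmp', 'tcp', 'udp'):
        conn.network.create_security_group_rule(
            security_group_id=sg.id,
            direction='ingress',
            protocol=proto,
            remote_ip_prefix=pod_cidr,
        )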
> >>> > >>> [0] > https://opendev.org/openstack/neutron/src/commit/6a8fa65302b45f32958e7fc2b73614715780b997/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L122-L125 > >>>> > >>>> [1] > https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html > >>>> [2] > https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml > >>>> [3] > https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml#L1038 > >>>> > >>>> -- > >>>> Best Regards, > >>>> - Chris > >>>> > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Dec 16 11:57:36 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 16 Dec 2020 12:57:36 +0100 Subject: [magnum] [neutron] [ovn] No inter-node pod-to-pod communication due to missing ACLs in OVN In-Reply-To: References: <20201215211809.fi25gel5n7pjuhfs@p1.localdomain> Message-ID: <20201216115736.wtnpszo3m4dlv6ki@p1.localdomain> Hi, On Wed, Dec 16, 2020 at 12:23:02PM +0100, Daniel Alvarez Sanchez wrote: > On Tue, Dec 15, 2020 at 10:57 PM Daniel Alvarez wrote: > > > > > > > > > > On 15 Dec 2020, at 22:18, Slawek Kaplonski wrote: > > > > > > Hi, > > > > > >> On Tue, Dec 15, 2020 at 05:14:29PM +0100, Krzysztof Klimonda wrote: > > >>> On Tue, Dec 15, 2020, at 16:59, Daniel Alvarez Sanchez wrote: > > >>> Hi Chris, thanks for moving this here. > > >>> > > >>> On Tue, Dec 15, 2020 at 4:22 PM Krzysztof Klimonda < > > kklimonda at syntaxhighlighted.com> wrote: > > >>>> Hi, > > >>>> > > >>>> This email is a follow-up to a discussion I've openened on > > ovs-discuss ML[1] regarding lack of TCP/UDP connectivity between pods > > deployed on magnum-managed k8s cluster with calico CNI and IPIP tunneling > > disabled (calico_ipv4pool_ipip label set to a default value of Off). > > >>>> > > >>>> As a short introduction, during magnum testing in ussuri deployment > > with ml2/ovn neutron driver I've noticed lack of communication between pods > > deployed on different nodes as part of magnum deployment with calico > > configured to *not* encapsulate traffic in IPIP tunnel, but route it > > directly between nodes. In theory, magnum configures adds defined pod > > network to k8s nodes ports' allowed_address_pairs[2] and then security > > group is created allowing for ICMP and TCP/UDP traffic between ports > > belonging to that security group[3]. This doesn't work with ml2/ovn as > > TCP/UDP traffic between IP addresses in pod network is not matching ACLs > > defined in OVN. > > >>>> > > >>>> I can't verify this behaviour under ml2/ovs for the next couple of > > weeks, as I'm taking them off for holidays, but perhaps someone knows if > > that specific usecase (security group rules with remote groups used with > > allowed address pairs) is supposed to be working, or should magnum use pod > > network cidr to allow traffic between nodes instead. > > >>> > > >>> In ML2/OVN we're adding the allowed address pairs to the 'addresses' > > field only when the MAC address of the pair is the same as the port MAC [0]. > > >>> I think that we can change the code to accomplish what you want (if it > > matches ML2/OVS which I think it does) by adding all IP-MAC pairs of the > > allowed-address pairs to the 'addresses' column. 
E.g: > > >>> > > >>> addresses = [ MAC1 IP1, AP_MAC1 AP_IP1, AP_MAC2 AP_IP2 ] (right now > > it's just addresses = [ MAC1 IP1 ]) > > >>> port_security column will be kept as it is today. > > >> > > >> How does [AP_MAC1 AP_IP1 AP_MAC2 AP_IP2] scale with a number of IP > > addresses set in allowed_address_pairs? Given how default pod network is > > 10.100.0.0/16 will that generate 65k flows in ovs, or is it not a 1:1 > > mapping? > > > > It will use conjunctive flows but yes it will be huge no matter what. If > > we follow the approach of adding match conditions to the ACLs for each > > address pair it is going to be even worse when expanded by ovn-controller. > > >> > > >> If ml2/ovs is also having scaling issues when remote groups are used, > > perhaps magnum should switch to defining remote-ip in its security groups > > instead, even if the underlying issue on ml2/ovn is fixed? > > > > > > IIRC Kuryr moved already to such solution as they had problems with > > scaling on > > > ML2/OVS when remote_group ids where used. > > > > @Slaweq, ML2/OVS accounts for allowed address pairs for remote security > groups but not for FIPs right? I wonder why the distinction. > Documentation is not clear but I'm certain that FIPs are not accounted for > by remote groups. Right. FIPs aren't added to the list of allowed IPs in the ipset. > > If we decide to go ahead and implement this in ML2/OVN, the same thing can > be applied for FIPs adding the FIP to the 'addresses' field but there might > be scaling issues. > > > > That’s right. Remote groups are expensive in any case. > > > > Mind opening a launchpad bug for OVN though? > > > > Thanks! > > > > > >> > > >>> > > >>> This way, when ovn-northd generates the Address_Set in the SB database > > for the corresponding remote group, the allowed-address pairs IP addresses > > will be added to it and honored by the security groups. > > >>> > > >>> +Numan Siddique to confirm that this > > doesn't have any unwanted side effects. > > >>> > > >>> [0] > > https://opendev.org/openstack/neutron/src/commit/6a8fa65302b45f32958e7fc2b73614715780b997/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L122-L125 > > >>>> > > >>>> [1] > > https://mail.openvswitch.org/pipermail/ovs-discuss/2020-December/050836.html > > >>>> [2] > > https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml > > >>>> [3] > > https://github.com/openstack/magnum/blob/c556b8964fab129f33e766b1c33908b2eb001df4/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml#L1038 > > >>>> > > >>>> -- > > >>>> Best Regards, > > >>>> - Chris > > >>>> > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > > -- Slawek Kaplonski Principal Software Engineer Red Hat From rosmaita.fossdev at gmail.com Wed Dec 16 13:30:53 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 16 Dec 2020 08:30:53 -0500 Subject: [oslo][nova][glance][cinder] move cursive library to oslo? In-Reply-To: <20201215165655.qthdygng5oimiev6@yuggoth.org> References: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> <20201215165655.qthdygng5oimiev6@yuggoth.org> Message-ID: <428542bc-3f25-ae0e-07d9-3595c7f46f1d@gmail.com> On 12/15/20 11:56 AM, Jeremy Stanley wrote: > On 2020-12-14 23:53:39 -0500 (-0500), Brian Rosmaita wrote: > [...] 
>> The current cursive-core team entirely consists of members of the Johns >> Hopkins University Applied Physics Laboratory, which ended its involvement >> with OpenStack in July 2018 > [...] >> I'd like to propose that the cursive library be moved back to the >> 'openstack' namespace and be put under Oslo governance with the consuming >> teams sharing the maintenance of the library. > [...] > > Purely from a logistics perspective, it would be good to get the > permission of at least one of the current core reviewers, preferably > by having them include oslo-core into their core group. Right now > it's an independently developed project within the OpenDev > Collaboratory, and OpenStack lacks the authority to just "take over" > a non-OpenStack project without first making sure that's okay with > the prior authors. > I'll reach out to the current cores (they are still at JHUAPL), but the library was in fact developed as an openstack project and was moved from the 'openstack' namespace into the 'x' space by this patch: https://opendev.org/x/cursive/commit/f8e9d5870fa7049df67c59204988767291f08ec0 cheers, brian From zigo at debian.org Wed Dec 16 15:23:51 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 16 Dec 2020 16:23:51 +0100 Subject: [ironic] IPA images: CentOS 8, Stream and Debian In-Reply-To: References: <1e106ff3-6156-6134-bcd9-bd3d906ed74a@debian.org> <81e9d5a3-5b86-dbc8-6083-1eb20754d1f6@debian.org> Message-ID: <9fe306f7-e50e-60bb-8e9b-99a722cdddae@debian.org> On 12/16/20 11:59 AM, Dmitry Tantsur wrote: > On Wed, Dec 16, 2020 at 11:47 AM Thomas Goirand Hi Dmitry, > > Could you please have a quick try with the official "Generic" image from > the Debian team? Best maybe would be to try this one: > > http://cdimage.debian.org/cdimage/cloud/bullseye/daily/20201216-486/debian-11-generic-amd64-daily-20201216-486.qcow2 > > > > Only if DIB provides support for it. I'm not inclined to reproduce the > whole generation process manually. > > Dmitry Hi Dmitry, I'm not asking that you switch within the CI, I'm just asking if you can just try the Debian image *once* and *anywhere* you like, just to confirm that the image works. Anyone else volunteering would to do... :) Cheers, Thomas Goirand (zigo) From jay.faulkner at verizonmedia.com Wed Dec 16 15:34:37 2020 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Wed, 16 Dec 2020 07:34:37 -0800 Subject: [E] Re: [ironic] IPA images: CentOS 8, Stream and Debian In-Reply-To: <9fe306f7-e50e-60bb-8e9b-99a722cdddae@debian.org> References: <1e106ff3-6156-6134-bcd9-bd3d906ed74a@debian.org> <81e9d5a3-5b86-dbc8-6083-1eb20754d1f6@debian.org> <9fe306f7-e50e-60bb-8e9b-99a722cdddae@debian.org> Message-ID: I think there's still a basic disconnect. Ironic builds a ramdisk image from scratch, currently using DIB, to run IPA in an ephemeral environment for provisioning and cleaning. We can't run just a simple qcow image. I think the current methods using debootstrap are great because it makes it trivial to setup a new image using DIB. If we were to use published debian images, we'd have to edit them, embed the IPA ramdisk, and ensure they're in a separate kernel:ramdisk format to be used as a ramdisk. In my experience, that's pretty difficult to do with a standard cloud image. 
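To make that flow a bit more concrete, building a Debian-based IPA kernel/ramdisk pair with DIB looks roughly like the sketch below -- the elements path is an example and the element names should be double-checked against diskimage-builder and the ironic-python-agent-builder patch under review:

    import os
    import subprocess

    env = dict(
        os.environ,
        # Point DIB at the elements shipped with ironic-python-agent-builder;
        # the path is an example checkout location.
        ELEMENTS_PATH='/opt/ironic-python-agent-builder/dib',
        DIB_RELEASE='bullseye',  # Debian release handed to debootstrap
    )

    # debian-minimal is the debootstrap-based distro element; the
    # ironic-python-agent-ramdisk element injects IPA and emits
    # ipa-debian.kernel / ipa-debian.initramfs rather than a qcow2 image.
    subprocess.run(
        ['disk-image-create', '-o', 'ipa-debian',
         'debian-minimal', 'ironic-python-agent-ramdisk'],
        env=env, check=True,
    )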
- Jay Faulkner On Wed, Dec 16, 2020 at 7:29 AM Thomas Goirand wrote: > On 12/16/20 11:59 AM, Dmitry Tantsur wrote: > > On Wed, Dec 16, 2020 at 11:47 AM Thomas Goirand > Hi Dmitry, > > > > Could you please have a quick try with the official "Generic" image > from > > the Debian team? Best maybe would be to try this one: > > > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__cdimage.debian.org_cdimage_cloud_bullseye_daily_20201216-2D486_debian-2D11-2Dgeneric-2Damd64-2Ddaily-2D20201216-2D486.qcow2&d=DwIFaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=uESkxZk94GL3tbchP2DfHN_zngJl2NupwL8fB-OnEbU&s=dlEFTBtQS8Hp5qeia6c82SqK81nv9dAvui5-z-zhASI&e= > > < > https://urldefense.proofpoint.com/v2/url?u=http-3A__cdimage.debian.org_cdimage_cloud_bullseye_daily_20201216-2D486_debian-2D11-2Dgeneric-2Damd64-2Ddaily-2D20201216-2D486.qcow2&d=DwIFaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=uESkxZk94GL3tbchP2DfHN_zngJl2NupwL8fB-OnEbU&s=dlEFTBtQS8Hp5qeia6c82SqK81nv9dAvui5-z-zhASI&e= > > > > > > > > Only if DIB provides support for it. I'm not inclined to reproduce the > > whole generation process manually. > > > > Dmitry > > Hi Dmitry, > > I'm not asking that you switch within the CI, I'm just asking if you can > just try the Debian image *once* and *anywhere* you like, just to > confirm that the image works. Anyone else volunteering would to do... :) > > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Dec 16 15:53:41 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 16 Dec 2020 15:53:41 +0000 Subject: [kolla] Cancelling next 3 IRC meetings Message-ID: Hi, Due to various holidays, we will cancel the next 3 IRC meetings and meet again on 13th January. Regards, Mark From thierry at openstack.org Wed Dec 16 16:31:00 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 16 Dec 2020 17:31:00 +0100 Subject: [largescale-sig] Next meeting: December 16, 15utc In-Reply-To: <3353850c-ba28-408d-b8ff-ec175dc6de4f@openstack.org> References: <3353850c-ba28-408d-b8ff-ec175dc6de4f@openstack.org> Message-ID: <08f3275e-87de-c1ec-4749-1878f30ebd4f@openstack.org> We held our meeting today and reviewed the wiki pages for the various stages of the scaling journey. Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-12-16-15.00.html Our next meeting will be Wednesday, January 13 at 15utc in #openstack-meeting-3 on Freenode IRC. We will be rebooting the Large Scale SIG engine for the new year. Between now and then, enjoy the holidays! -- Thierry Carrez (ttx) From fungi at yuggoth.org Wed Dec 16 16:36:56 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 Dec 2020 16:36:56 +0000 Subject: [oslo][nova][glance][cinder] move cursive library to oslo? In-Reply-To: <428542bc-3f25-ae0e-07d9-3595c7f46f1d@gmail.com> References: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> <20201215165655.qthdygng5oimiev6@yuggoth.org> <428542bc-3f25-ae0e-07d9-3595c7f46f1d@gmail.com> Message-ID: <20201216163656.p2ouze4oyz2dfvcq@yuggoth.org> On 2020-12-16 08:30:53 -0500 (-0500), Brian Rosmaita wrote: > On 12/15/20 11:56 AM, Jeremy Stanley wrote: > > On 2020-12-14 23:53:39 -0500 (-0500), Brian Rosmaita wrote: > > [...] 
> > > The current cursive-core team entirely consists of members of the Johns > > > Hopkins University Applied Physics Laboratory, which ended its involvement > > > with OpenStack in July 2018 > > [...] > > > I'd like to propose that the cursive library be moved back to the > > > 'openstack' namespace and be put under Oslo governance with the consuming > > > teams sharing the maintenance of the library. > > [...] > > > > Purely from a logistics perspective, it would be good to get the > > permission of at least one of the current core reviewers, preferably > > by having them include oslo-core into their core group. Right now > > it's an independently developed project within the OpenDev > > Collaboratory, and OpenStack lacks the authority to just "take over" > > a non-OpenStack project without first making sure that's okay with > > the prior authors. > > > > I'll reach out to the current cores (they are still at JHUAPL), but the > library was in fact developed as an openstack project and was moved from the > 'openstack' namespace into the 'x' space by this patch: > > https://opendev.org/x/cursive/commit/f8e9d5870fa7049df67c59204988767291f08ec0 It was developed similarly to official OpenStack projects and (along with hundreds of other non-OpenStack projects) was hosted within the "openstack/" Git namespace in our Gerrit because we moved all projects into that namespace around the same time we ceased keeping a separate "stackforge/" namespace, but cursive was never officially under OpenStack governance. The change you mention is the result of the OpenStack TC choosing to evict non-OpenStack projects from that namespace during the big OpenDev reorganization, and not an indication that it was actually governed by OpenStack. If it were previously a deliverable of some official team, it would be listed in the reference/legacy.yaml file in the governance repository, but I've also double-checked the entire Git history for openstack/governance and see no evidence that it was ever under governance and somehow missed having its removal recorded there. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From openstack at nemebean.com Wed Dec 16 16:53:22 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 16 Dec 2020 10:53:22 -0600 Subject: tox -e pep8 In-Reply-To: References: <20201205034400.rmnohg3z3tfkuiyn@yuggoth.org> Message-ID: On 12/5/20 1:39 AM, Sorin Sbarnea wrote: > My impression was that the newer recommended tox environment was > “linters’ and it would decouple the implementation from the process > name, making easy for each project too adapt their linters based on > their needs. > > A grep on codesearch could show how popular is each. > > I think that one of the reasons many projects were not converted is > because job is defined by a shared template and making a bulk transition > requires a lot of effort. We stopped moving to "linters" because the PTI explicitly called for a "pep8" target. Since that still appears to be the case[0] it would require a governance change to stop using pep8. At least for Python projects. 
0: https://governance.openstack.org/tc/reference/pti/python.html From fungi at yuggoth.org Wed Dec 16 17:02:19 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 Dec 2020 17:02:19 +0000 Subject: [ironic] Securing physical hosts in hostile environments In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> Message-ID: <20201216170219.kw4zhi74hzfx5h5n@yuggoth.org> On 2020-12-15 17:43:20 -0600 (-0600), Eric K. Miller wrote: [...] > Since users (bad actors) can access the BMC via SMBus, reset BIOS > password(s), change firmware versions, etc., there appears to be > no proper way to secure a platform. [...] > Manufacturers have not made this easy/possible, and we have yet to > find a commercial device that can assist with this out-of-band. > We have actually thought of building our own, but thought we would > ask the community first. My understanding is that one of the primary reasons why https://www.opencompute.org/ formed was to collaboratively design hardware which can't be compromised in-band by its users. The Elastic Secure Infrastructure effort happening in OpenInfra Labs is also attempting to template and document repeatable solutions for the first half of the problem (centrally detecting tainted BIOS/firmware via signature verification and attestation): https://www.bu.edu/rhcollab/projects/esi/ -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Wed Dec 16 17:12:41 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 16 Dec 2020 12:12:41 -0500 Subject: [oslo][nova][glance][cinder] move cursive library to oslo? In-Reply-To: <20201216163656.p2ouze4oyz2dfvcq@yuggoth.org> References: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> <20201215165655.qthdygng5oimiev6@yuggoth.org> <428542bc-3f25-ae0e-07d9-3595c7f46f1d@gmail.com> <20201216163656.p2ouze4oyz2dfvcq@yuggoth.org> Message-ID: On 12/16/20 11:36 AM, Jeremy Stanley wrote: > On 2020-12-16 08:30:53 -0500 (-0500), Brian Rosmaita wrote: >> On 12/15/20 11:56 AM, Jeremy Stanley wrote: >>> On 2020-12-14 23:53:39 -0500 (-0500), Brian Rosmaita wrote: >>> [...] >>>> The current cursive-core team entirely consists of members of the Johns >>>> Hopkins University Applied Physics Laboratory, which ended its involvement >>>> with OpenStack in July 2018 >>> [...] >>>> I'd like to propose that the cursive library be moved back to the >>>> 'openstack' namespace and be put under Oslo governance with the consuming >>>> teams sharing the maintenance of the library. >>> [...] >>> >>> Purely from a logistics perspective, it would be good to get the >>> permission of at least one of the current core reviewers, preferably >>> by having them include oslo-core into their core group. Right now >>> it's an independently developed project within the OpenDev >>> Collaboratory, and OpenStack lacks the authority to just "take over" >>> a non-OpenStack project without first making sure that's okay with >>> the prior authors. 
>>> >> >> I'll reach out to the current cores (they are still at JHUAPL), but the >> library was in fact developed as an openstack project and was moved from the >> 'openstack' namespace into the 'x' space by this patch: >> >> https://opendev.org/x/cursive/commit/f8e9d5870fa7049df67c59204988767291f08ec0 > > It was developed similarly to official OpenStack projects and (along > with hundreds of other non-OpenStack projects) was hosted within the > "openstack/" Git namespace in our Gerrit because we moved all > projects into that namespace around the same time we ceased keeping > a separate "stackforge/" namespace, but cursive was never officially > under OpenStack governance. > > The change you mention is the result of the OpenStack TC choosing to > evict non-OpenStack projects from that namespace during the big > OpenDev reorganization, and not an indication that it was actually > governed by OpenStack. If it were previously a deliverable of some > official team, it would be listed in the reference/legacy.yaml file > in the governance repository, but I've also double-checked the > entire Git history for openstack/governance and see no evidence that > it was ever under governance and somehow missed having its removal > recorded there. > OK, thanks for the explanation and for doing some archaeological research. In the meantime, I (finally) noticed that barbican-core is an included group in cursive-core, so the situation is not as dire as I thought in terms of having someone around who can approve patches. I think cursive should be pulled into openstack governance, however. I'll restart the thread focused on that issue and include [barbican] in the subject line. From openstack at nemebean.com Wed Dec 16 17:12:57 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 16 Dec 2020 11:12:57 -0600 Subject: [oslo][nova][glance][cinder] move cursive library to oslo? In-Reply-To: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> References: <67aae566-b142-d975-3146-0128b00d1ec3@gmail.com> Message-ID: Based on the readme saying "The cursive project contains code extracted from various OpenStack projects for verifying digital signatures" and the fact that it's being used in multiple other projects it sounds like a perfect fit for Oslo. On 12/14/20 10:53 PM, Brian Rosmaita wrote: > Hello Oslo Team, > > Nova, Glance, and Cinder all make use of the 'cursive' library for > image-signature-validation.  The library is currently in the 'x' > namespace: https://opendev.org/x/cursive > > The current cursive-core team entirely consists of members of the Johns > Hopkins University Applied Physics Laboratory, which ended its > involvement with OpenStack in July 2018 [0]. > > This leaves us in a position where three of the major openstack projects > depend on a library to which no one currently around can approve code > changes. > > I'd like to propose that the cursive library be moved back to the > 'openstack' namespace and be put under Oslo governance with the > consuming teams sharing the maintenance of the library.  I don't think > this will make much new work for the Oslo team--the library has been > very stable and hasn't changed in over 2 years--but it will ensure that > should any bugfixes be required, there will be oslo team members who can > approve the patches. 
> > Thanks for thinking this over, > brian > > > [0] > http://lists.openstack.org/pipermail/openstack-dev/2018-July/131978.html > > From fungi at yuggoth.org Wed Dec 16 17:16:31 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 Dec 2020 17:16:31 +0000 Subject: tox -e pep8 In-Reply-To: References: <20201205034400.rmnohg3z3tfkuiyn@yuggoth.org> Message-ID: <20201216171631.gzufffmeyfbn5jwq@yuggoth.org> On 2020-12-16 10:53:22 -0600 (-0600), Ben Nemec wrote: > On 12/5/20 1:39 AM, Sorin Sbarnea wrote: > > My impression was that the newer recommended tox environment was > > “linters’ and it would decouple the implementation from the process > > name, making easy for each project too adapt their linters based on > > their needs. > > > > A grep on codesearch could show how popular is each. > > > > I think that one of the reasons many projects were not converted is > > because job is defined by a shared template and making a bulk transition > > requires a lot of effort. > > We stopped moving to "linters" because the PTI explicitly called for a > "pep8" target. Since that still appears to be the case[0] it would require a > governance change to stop using pep8. At least for Python projects. > > 0: https://governance.openstack.org/tc/reference/pti/python.html A project could of course have both if they wanted, the PTI doesn't prohibit that. If tox provided a feature to alias testenv names then it would be fairly trivial to maintain, though a testenv:pep8 can still explicitly inherit each individual option from the testenv:linters section (yes it is sort of ugly). I personally have little concern for what we call it as long as we keep consistent between projects, but changing this across every project does seem like a bit of unwarranted additional work for everyone. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jay.faulkner at verizonmedia.com Wed Dec 16 17:16:21 2020 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Wed, 16 Dec 2020 09:16:21 -0800 Subject: [E] [ironic] Securing physical hosts in hostile environments In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> Message-ID: I've attempted to secure physical hardware at a previous job. The primary tools we used were vendor relationships and extensive testing. There's no silver bullet to getting hardware safe against a "root" user. Not trying to give an unhelpful answer; but outside of the groups that Jeremy linked, there's been very little innovation enabling you to secure your hardware, unless you work directly with a vendor (and have the buying power to make them listen). - Jay Faulkner On Tue, Dec 15, 2020 at 3:48 PM Eric K. Miller wrote: > Hi, > > > > We have considered ironic for deploying physical hosts for our public > cloud platform, but have not found any way to properly secure the hosts, or > rather, how to reset a physical host back to factory defaults between uses > - such as BIOS and BMC settings. Since users (bad actors) can access the > BMC via SMBus, reset BIOS password(s), change firmware versions, etc., > there appears to be no proper way to secure a platform. 
> > > > This is especially true when resetting BIOS/BMC configurations since this > typically involves shorting a jumper and power cycling a unit (physically > removing power from the power supplies - not just a power down from the > BMC). Manufacturers have not made this easy/possible, and we have yet to > find a commercial device that can assist with this out-of-band. We have > actually thought of building our own, but thought we would ask the > community first. > > > > Thanks! > > > Eric > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emiller at genesishosting.com Wed Dec 16 17:25:07 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Wed, 16 Dec 2020 11:25:07 -0600 Subject: [ironic] Securing physical hosts in hostile environments In-Reply-To: <20201216170219.kw4zhi74hzfx5h5n@yuggoth.org> References: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> <20201216170219.kw4zhi74hzfx5h5n@yuggoth.org> Message-ID: <046E9C0290DD9149B106B72FC9156BEA04814E04@gmsxchsvr01.thecreation.com> > My understanding is that one of the primary reasons why > https://www.opencompute.org/ formed was to collaboratively design > hardware which can't be compromised in-band by its users. > The Elastic Secure Infrastructure effort happening in OpenInfra Labs is also > attempting to template and document repeatable solutions for the first half > of the problem (centrally detecting tainted BIOS/firmware via signature > verification and attestation): > https://www.bu.edu/rhcollab/projects/esi/ > -- > Jeremy Stanley Thanks Jeremy! I have some reading to do. It seems that, instead of detecting tainted "anything", it would be better to assume zero trust in the hardware after use, and instead reset/re-flash everything upon re-provisioning. I can understand that re-flashing can be hard on the flash, but now that most (all?) firmware has digital signature checks, this can be used to avoid re-flashing when the signature matches. However, the issue still remains that typical server hardware (I need to check OpenCompute's hardware) requires jumpers to be changed for re-flashing/resetting configs, which is a real pain. So, even if you did detect something bad, this needs to be done to fix the issue. Eric From emiller at genesishosting.com Wed Dec 16 17:30:58 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Wed, 16 Dec 2020 11:30:58 -0600 Subject: [E] [ironic] Securing physical hosts in hostile environments In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> Message-ID: <046E9C0290DD9149B106B72FC9156BEA04814E05@gmsxchsvr01.thecreation.com> > I've attempted to secure physical hardware at a previous job. The primary tools we used were vendor relationships and extensive testing. There's no silver bullet to getting hardware safe against a "root" user. > > Not trying to give an unhelpful answer; but outside of the groups that Jeremy linked, there's been very little innovation enabling you to secure  your hardware,  unless you work directly with a vendor (and have the buying power to make them listen). > - > Jay Faulkner Thanks Jay! I suspected as much. It does seem that there is likely a big market for this - an out-of-band device/PCI card that can assist with initiating re-flashing, power management (outside of the switchable power supplies), and jumper changes. I was a bit shocked that it didn't exist. 
I thought SMC would have built something like this into their SuperBlade systems, but their chassis-level BMC reset functions simply use the network to connect to the blades' BMCs, which isn't too helpful when the user changes the IP address of the BMC… ugh. Eric From juliaashleykreger at gmail.com Wed Dec 16 17:33:13 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 16 Dec 2020 09:33:13 -0800 Subject: [E] [ironic] Securing physical hosts in hostile environments In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> Message-ID: Some operators have taken an approach of attestation and system measurement as a means to try and combat these sorts of vectors, however, if the TPM can't read the firmware to "measure" checksum out of the inband firmware channel, i.e. access the flash directly, not what malicious byte code could reply to, then it is a little difficult to trust that mechanism. The positive is that this mainly means things like drives are the items at risk at this point. Not exactly comforting as the first firmware POC I can think of that spoofs on checking the firmware was against a SATA disk. I know some operators have brought up trying to drive their vendors into means of having an out of band mechanism to be able to check and assert these things, where in the meantime they are performing in-band flashing on upon each cleaning in hope to scrub malicious firmware in hopes of squashing any malicious user's actions. This is an approach a number of operators have publicly stated they've taken, however it requires creating your own custom hardware manager to align with the hardware you have and the firmware versions you want/expect. I think this is a good topic for the baremetal SIG to try and discuss and push forward, because as Jay said, there is no silver bullet, and most of these patterns are basically highly customized sorts of patterns and interactions based upon your environment, your hardware, and the attack vectors you're concerned about. -Julia On Wed, Dec 16, 2020 at 9:19 AM Jay Faulkner wrote: > > I've attempted to secure physical hardware at a previous job. The primary tools we used were vendor relationships and extensive testing. There's no silver bullet to getting hardware safe against a "root" user. > > Not trying to give an unhelpful answer; but outside of the groups that Jeremy linked, there's been very little innovation enabling you to secure your hardware, unless you work directly with a vendor (and have the buying power to make them listen). > > - > Jay Faulkner > > > On Tue, Dec 15, 2020 at 3:48 PM Eric K. Miller wrote: >> >> Hi, >> >> >> >> We have considered ironic for deploying physical hosts for our public cloud platform, but have not found any way to properly secure the hosts, or rather, how to reset a physical host back to factory defaults between uses - such as BIOS and BMC settings. Since users (bad actors) can access the BMC via SMBus, reset BIOS password(s), change firmware versions, etc., there appears to be no proper way to secure a platform. >> >> >> >> This is especially true when resetting BIOS/BMC configurations since this typically involves shorting a jumper and power cycling a unit (physically removing power from the power supplies - not just a power down from the BMC). Manufacturers have not made this easy/possible, and we have yet to find a commercial device that can assist with this out-of-band. We have actually thought of building our own, but thought we would ask the community first. 
>> >> >> >> Thanks! >> >> >> Eric >> >> From smooney at redhat.com Wed Dec 16 17:53:47 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 16 Dec 2020 17:53:47 +0000 Subject: tox -e pep8 In-Reply-To: <20201216171631.gzufffmeyfbn5jwq@yuggoth.org> References: <20201205034400.rmnohg3z3tfkuiyn@yuggoth.org> <20201216171631.gzufffmeyfbn5jwq@yuggoth.org> Message-ID: <25d0d8cecbbc1f9eaf09907842fd781984267912.camel@redhat.com> On Wed, 2020-12-16 at 17:16 +0000, Jeremy Stanley wrote: > On 2020-12-16 10:53:22 -0600 (-0600), Ben Nemec wrote: > > On 12/5/20 1:39 AM, Sorin Sbarnea wrote: > > > My impression was that the newer recommended tox environment was > > > “linters’ and it would decouple the implementation from the process > > > name, making easy for each project too adapt their linters based on > > > their needs. > > > > > > A grep on codesearch could show how popular is each. > > > > > > I think that one of the reasons many projects were not converted is > > > because job is defined by a shared template and making a bulk transition > > > requires a lot of effort. > > > > We stopped moving to "linters" because the PTI explicitly called for a > > "pep8" target. Since that still appears to be the case[0] it would require a > > governance change to stop using pep8. At least for Python projects. > > > > 0: https://governance.openstack.org/tc/reference/pti/python.html > > A project could of course have both if they wanted, the PTI doesn't > prohibit that. If tox provided a feature to alias testenv names then > it would be fairly trivial to maintain, though a testenv:pep8 can > still explicitly inherit each individual option from the > testenv:linters section (yes it is sort of ugly). > > I personally have little concern for what we call it as long as we > keep consistent between projects, but changing this across every > project does seem like a bit of unwarranted additional work for > everyone. tox -e pep8 and tox -e linters wont neessisarly run the same tests on all project that have both. linters has been used in the past to run optional addtionall linteres that were not gated on. i dont recall what repo that was in or if its still the case but they are not nessisarialy aliases of each other. From rosmaita.fossdev at gmail.com Wed Dec 16 18:02:40 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 16 Dec 2020 13:02:40 -0500 Subject: [barbican][oslo][nova][glance][cinder] cursive library status Message-ID: <35dfc43f-6613-757b-ed7b-b6530df21289@gmail.com> Hello Barbican team, Apologies for not including barbican in the previous thread on this topic: http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019430.html The situation is that cursive is used by Nova, Glance, and Cinder and we'd like to move it out of the 'x' namespace into openstack governance. The question is then what team would oversee it. It seems like a good fit for Oslo, and the Oslo team seems OK with that, but since barbican-core is currently included in cursive-core, it make sense to give the Barbican team first dibs. From the consuming teams' side, I don't think we have a preference as long as it's clear who we need to bother about approvals if a bugfix is posted for review. Thus my ask is that the Barbican team indicate whether they'd like to move cursive to the 'openstack' namespace under their governance, or whether they'd prefer Oslo to oversee the library. Thank you! 
brian From juliaashleykreger at gmail.com Wed Dec 16 18:06:28 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 16 Dec 2020 10:06:28 -0800 Subject: [E] [ironic] Securing physical hosts in hostile environments In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA04814E05@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA04814E05@gmsxchsvr01.thecreation.com> Message-ID: On Wed, Dec 16, 2020 at 9:33 AM Eric K. Miller wrote: > > > I've attempted to secure physical hardware at a previous job. The primary tools we used were vendor relationships and extensive testing. There's no silver bullet to getting hardware safe against a "root" user. > > > > Not trying to give an unhelpful answer; but outside of the groups that Jeremy linked, there's been very little innovation enabling you to secure your hardware, unless you work directly with a vendor (and have the buying power to make them listen). > > - > > Jay Faulkner > > Thanks Jay! I suspected as much. It does seem that there is likely a big market for this - an out-of-band device/PCI card that can assist with initiating re-flashing, power management (outside of the switchable power supplies), and jumper changes. I was a bit shocked that it didn't exist. I thought SMC would have built something like this into their SuperBlade systems, but their chassis-level BMC reset functions simply use the network to connect to the blades' BMCs, which isn't too helpful when the user changes the IP address of the BMC… ugh. > > Eric > I think in the SMC case, it is kind of designed that way to always trust the user. I think the IPMI inband interface can be disabled on some vendors' gear, which would definitely help. However in the SMC case, if memory serves to reset the bmc to factory default you do have to move the jumper, reset power, reset the bmc password via an in-operating system tool and reset addressing via the bios. :\ From emiller at genesishosting.com Wed Dec 16 18:34:42 2020 From: emiller at genesishosting.com (Eric K. Miller) Date: Wed, 16 Dec 2020 12:34:42 -0600 Subject: [E] [ironic] Securing physical hosts in hostile environments In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> Message-ID: <046E9C0290DD9149B106B72FC9156BEA04814E06@gmsxchsvr01.thecreation.com> > Some operators have taken an approach of attestation and system > measurement as a means to try and combat these sorts of vectors, > however, if the TPM can't read the firmware to "measure" checksum out > of the inband firmware channel, i.e. access the flash directly, not > what malicious byte code could reply to, then it is a little difficult > to trust that mechanism. The positive is that this mainly means things > like drives are the items at risk at this point. Not exactly > comforting as the first firmware POC I can think of that spoofs on > checking the firmware was against a SATA disk. We thought about that too - potential firmware corruption of NVMe drives, or the configuration of drives that support NVMe namespaces, and undoing this upon reprovisioning of the server. Lots of things to think about. I'm not 100% sure how the firmware signature checks work, but it seems that this would be done within the firmware itself, and not with a separate management processor inside the device. 
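As an aside, the in-band view itself is easy to get -- a rough sketch with nvme-cli, using a placeholder device path:

# report the firmware revision the controller claims to be running
sudo nvme id-ctrl /dev/nvme0 | grep -i '^fr '
# show the firmware slot log: which slots are populated and which is active
sudo nvme fw-log /dev/nvme0

But of course the answer comes back from the very firmware being questioned, so it is only a consistency check rather than proof.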
So, then we have to deal with the potential firmware flash of an older firmware version that did not have digital signature checks, which would open a channel to install anything the attacker wanted on that device. > I know some operators have brought up trying to drive their vendors > into means of having an out of band mechanism to be able to check and > assert these things, where in the meantime they are performing in-band > flashing on upon each cleaning in hope to scrub malicious firmware in > hopes of squashing any malicious user's actions. This is an approach a > number of operators have publicly stated they've taken, however it > requires creating your own custom hardware manager to align with the > hardware you have and the firmware versions you want/expect. Exactly - so quite an effort, and labor intensive. > I think this is a good topic for the baremetal SIG to try and discuss > and push forward, because as Jay said, there is no silver bullet, and > most of these patterns are basically highly customized sorts of > patterns and interactions based upon your environment, your hardware, > and the attack vectors you're concerned about. I think the answer is to keep the hardware as simple as possible - meaning no internal drives or other cards that could be modified. It would actually be nice if machines had a "loadable BIOS firmware" from external media, where everytime the machine booted, the BIOS firmware would load from a trusted source (a locally attached drive - directly to the BIOS chip) - and maybe the same for BMC firmware. BIOS firmware already loads a shadow copy of the BIOS into memory already - why not just load it from external media instead somehow. Somewhat like UEFI firmware provides for BIOS configuration data. This strategy leaves the hardware in a "bare" state with no software, so resetting the device would always return to a clean state. I'll have to look for the baremetal SIG and participate. Thanks for pointing it out! Eric From fungi at yuggoth.org Wed Dec 16 18:46:20 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 Dec 2020 18:46:20 +0000 Subject: [ironic] Securing physical hosts in hostile environments In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA04814E04@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> <20201216170219.kw4zhi74hzfx5h5n@yuggoth.org> <046E9C0290DD9149B106B72FC9156BEA04814E04@gmsxchsvr01.thecreation.com> Message-ID: <20201216184619.65lrjlek47kxm3tt@yuggoth.org> On 2020-12-16 11:25:07 -0600 (-0600), Eric K. Miller wrote: [...] > It seems that, instead of detecting tainted "anything", it would > be better to assume zero trust in the hardware after use, and > instead reset/re-flash everything upon re-provisioning. I can > understand that re-flashing can be hard on the flash, but now that > most (all?) firmware has digital signature checks, this can be > used to avoid re-flashing when the signature matches. I too raised this in one discussion. The organizations involved see it as an incremental approach, one which allows them to forego any automated recovery process for now on the assumption that incidence of this problem will be extremely infrequent. Instead they can bill the customer for the cost of manually recovering the machine to a clean state, or even simply charge them for the hardware itself and not bother with recovery at all. "You break it, you buy it." 
> However, the issue still remains that typical server hardware (I > need to check OpenCompute's hardware) requires jumpers to be > changed for re-flashing/resetting configs, which is a real pain. > So, even if you did detect something bad, this needs to be done to > fix the issue. This article suggests OCP wants to tackle it via firmware authentication both when it's called and also when it's being rewritten: https://www.datacenterknowledge.com/security/open-compute-project-releases-hardware-root-trust-spec-data-centers But that aside, if you wire those "jumpers" back to a central header for some group of machines, you can in theory just do something like this to inexpensively remote control banks of them over your isolated management network: https://elinux.org/RPi_GPIO_Interface_Circuits#Using_an_NPN_transistor -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From openstack at nemebean.com Wed Dec 16 18:47:15 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 16 Dec 2020 12:47:15 -0600 Subject: tox -e pep8 In-Reply-To: <25d0d8cecbbc1f9eaf09907842fd781984267912.camel@redhat.com> References: <20201205034400.rmnohg3z3tfkuiyn@yuggoth.org> <20201216171631.gzufffmeyfbn5jwq@yuggoth.org> <25d0d8cecbbc1f9eaf09907842fd781984267912.camel@redhat.com> Message-ID: <3c32d6e1-4b41-dd35-9113-5bbd85206e9a@nemebean.com> On 12/16/20 11:53 AM, Sean Mooney wrote: > On Wed, 2020-12-16 at 17:16 +0000, Jeremy Stanley wrote: >> On 2020-12-16 10:53:22 -0600 (-0600), Ben Nemec wrote: >>> On 12/5/20 1:39 AM, Sorin Sbarnea wrote: >>>> My impression was that the newer recommended tox environment was >>>> “linters’ and it would decouple the implementation from the process >>>> name, making easy for each project too adapt their linters based on >>>> their needs. >>>> >>>> A grep on codesearch could show how popular is each. >>>> >>>> I think that one of the reasons many projects were not converted is >>>> because job is defined by a shared template and making a bulk transition >>>> requires a lot of effort. >>> >>> We stopped moving to "linters" because the PTI explicitly called for a >>> "pep8" target. Since that still appears to be the case[0] it would require a >>> governance change to stop using pep8. At least for Python projects. >>> >>> 0: https://governance.openstack.org/tc/reference/pti/python.html >> >> A project could of course have both if they wanted, the PTI doesn't >> prohibit that. If tox provided a feature to alias testenv names then >> it would be fairly trivial to maintain, though a testenv:pep8 can >> still explicitly inherit each individual option from the >> testenv:linters section (yes it is sort of ugly). >> >> I personally have little concern for what we call it as long as we >> keep consistent between projects, but changing this across every >> project does seem like a bit of unwarranted additional work for >> everyone. > tox -e pep8 and tox -e linters wont neessisarly run the same tests on all project that have both. > linters has been used in the past to run optional addtionall linteres that were not gated on. > i dont recall what repo that was in or if its still the case but they are not nessisarialy aliases of > each other. Sure, I was only intending to point out that linters is not a replacement for pep8, and possibly to head off a rash of s/pep8/linters/ tox.ini changes. 
;-) From openstack at nemebean.com Wed Dec 16 18:50:15 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 16 Dec 2020 12:50:15 -0600 Subject: [barbican][oslo][nova][glance][cinder] cursive library status In-Reply-To: <35dfc43f-6613-757b-ed7b-b6530df21289@gmail.com> References: <35dfc43f-6613-757b-ed7b-b6530df21289@gmail.com> Message-ID: On 12/16/20 12:02 PM, Brian Rosmaita wrote: > Hello Barbican team, > > Apologies for not including barbican in the previous thread on this topic: > > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019430.html > > > The situation is that cursive is used by Nova, Glance, and Cinder and > we'd like to move it out of the 'x' namespace into openstack governance. >  The question is then what team would oversee it.  It seems like a good > fit for Oslo, and the Oslo team seems OK with that, but since > barbican-core is currently included in cursive-core, it make sense to > give the Barbican team first dibs. > > From the consuming teams' side, I don't think we have a preference as > long as it's clear who we need to bother about approvals if a bugfix is > posted for review. > > Thus my ask is that the Barbican team indicate whether they'd like to > move cursive to the 'openstack' namespace under their governance, or > whether they'd prefer Oslo to oversee the library. Note that this is not necessarily an either/or thing. Castellan is under Oslo governance but is co-owned by the Oslo and Barbican teams. We could do a similar thing with Cursive. From fungi at yuggoth.org Wed Dec 16 18:53:14 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 Dec 2020 18:53:14 +0000 Subject: [E] [ironic] Securing physical hosts in hostile environments In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA04814DF6@gmsxchsvr01.thecreation.com> Message-ID: <20201216185314.sfzsvqbi6hvoxdkp@yuggoth.org> On 2020-12-16 09:33:13 -0800 (-0800), Julia Kreger wrote: [...] > in the meantime they are performing in-band flashing on upon each > cleaning in hope to scrub malicious firmware in hopes of squashing > any malicious user's actions. This is an approach a number of > operators have publicly stated they've taken, however it requires > creating your own custom hardware manager to align with the > hardware you have and the firmware versions you want/expect. [...] It's also worth reminding everyone this is an incomplete solution. How do you know the in-band reflashing worked? Because the (possibly backdoored) firmware says it did, of course! It's certainly not going to just claim to have reflashed with exactly the bits you supplied while actually reinjecting its persistent backdoor, right? Of course, that's ultimately the reason we keep having this conversation over and over. ;) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Wed Dec 16 19:29:36 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 16 Dec 2020 14:29:36 -0500 Subject: [ops][cinder] notice of incorrect default policy value Message-ID: <7f3529e7-002b-7515-e2a6-6503fcfcc038@gmail.com> Hello operators, While reviewing Cinder policies recently, Bug #1908315 [0] was discovered: "Policy group:reset_group_snapshot_status has incorrect checkstring". This policy governs the "Reset a snapshot's status" action [1]. The action is supposed to be admin-only, but the default policy setting is admin-or-owner. 
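To see what your own deployment is currently enforcing for this rule, one option -- a rough sketch, assuming the oslo.policy command-line tools are available in the same environment as cinder -- is to render the effective policy and look for the rule:

# merged view of the code defaults plus any overrides from your policy file
oslopolicy-policy-generator --namespace cinder | grep reset_group_snapshot_status

If that shows anything other than "rule:admin_api", the override given below applies.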
This is not a security issue, but it does allow an end user to put a group snapshot that they own into an invalid status, with indeterminate consequences. A fix has been posted for review [2], but if you wish to correct this immediately, you can put the following line into your cinder policy file: "group:reset_group_snapshot_status": "rule:admin_api" More information about the cinder policy file can be found at [3]. [0] https://bugs.launchpad.net/cinder/+bug/1908315 [1] https://docs.openstack.org/api-ref/block-storage/v3/#reset-a-snapshot-s-status [2] https://review.opendev.org/c/openstack/cinder/+/767226 [3] https://docs.openstack.org/cinder/latest/configuration/block-storage/samples/policy.yaml.html From mnaser at vexxhost.com Wed Dec 16 20:59:29 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 16 Dec 2020 15:59:29 -0500 Subject: [tc] weekly meeting agenda Message-ID: Hi everyone, Here’s the agenda for our weekly TC meeting. It will happen tomorrow (Thursday the 17th) at 1500 UTC in #openstack-tc and I will be your chair. If you can’t attend, please put your name in the “Apologies for Absence” section. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting * ACTIVE INITIATIVES - Follow up on past action items - Skipping next 2 meetings - Audit SIG list and chairs (diablo_rojo) - Annual report suggestions (diablo_rojo) - X cycle goal selection start - Audit and clean-up tags (gmann) - X cycle release name vote recording (gmann) - CentOS 8 releases are discontinued / switch to CentOS 8 Stream (gmann/yoctozepto) - Open Reviews We'll clean up the agenda tomorrow should it need changes. Thanks, Mohammed -- Mohammed Naser VEXXHOST, Inc. From Arkady.Kanevsky at dell.com Wed Dec 16 22:49:19 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Wed, 16 Dec 2020 22:49:19 +0000 Subject: [Cinder] CHAP security Message-ID: Team, As more storage products start supporting CHAP, cinder drivers adding that also that becomes part of cinder.conf. Should we include CHAP support as an optional feature in https://docs.openstack.org/cinder/latest/reference/support-matrix.html? May as security feature? Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell EMC office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Dec 16 22:57:06 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 Dec 2020 22:57:06 +0000 Subject: [Cinder] CHAP security In-Reply-To: References: Message-ID: <20201216225706.7nness3r2khssoda@yuggoth.org> On 2020-12-16 22:49:19 +0000 (+0000), Kanevsky, Arkady wrote: > As more storage products start supporting CHAP, cinder drivers > adding that also that becomes part of cinder.conf. Should we > include CHAP support as an optional feature in > https://docs.openstack.org/cinder/latest/reference/support-matrix.html? > May as security feature? Neat, CHAP as in IETF RFC 1334, the successor to PAP, used to authenticate PPP encapsulation for serial dial-up connections? That sure brings back some fond memories. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gouthampravi at gmail.com Thu Dec 17 01:38:13 2020 From: gouthampravi at gmail.com (gouthampravi at gmail.com) Date: Wed, 16 Dec 2020 20:38:13 -0500 Subject: [OSSN-0087] Ceph user credential leakage to consumers of OpenStack Manila Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Ceph user credential leakage to consumers of OpenStack Manila - ------------------------------------------------------------- ### Summary ### OpenStack Manila users can request access on a share to any arbitrary cephx user, including privileged pre-existing users of a Ceph cluster. They can then retrieve access secret keys for these pre-existing ceph users via Manila APIs. A cephx client user name and access secret key are required to mount a Native CephFS manila share. With a secret key, a manila user can impersonate a pre-existing ceph user and gain capabilities to manipulate resources that the manila user was never intended to have access to. It is possible to even obtain the default ceph "admin" user's key in this manner, and execute any commands as the ceph administrator. ### Affected Services / Software ### - - OpenStack Shared File Systems Service (Manila) versions Mitaka (2.0.0) through Victoria (11.0.0) - - Ceph Luminous (<=v12.2.13), Mimic (<=v13.2.10), Nautilus (<=v14.2.15), Octopus (<=v15.2.7) ### Discussion ### OpenStack Manila can provide users with Native CephFS shared file systems. When a user creates a "share" (short for "shared file system") via Manila, a CephFS "subvolume" is created on the Ceph cluster and exported. After creating their share, a user can specify who can have access to the share with the help of "cephx" client user names. A cephx client corresponds to Ceph Client Users [2]. When access is provided, a client user "access key" is returned via manila. A ceph client user account is required to access any ceph resource. This includes interacting with Ceph cluster infrastructure daemons (ceph-mgr, ceph-mds, ceph-mon, ceph-osd) or consuming Ceph storage via RBD, RGW or CephFS. Deployment and orchestration services like ceph-ansible, nfs-ganesha, kolla, tripleo need ceph client users to work, as do OpenStack services such as cinder, manila, glance and nova for their own interactions with Ceph. For the purpose of illustrating this vulnerability, we'll call them "pre-existing" users of the Ceph cluster. Another example of a pre-existing user includes the "admin" user that is created by default on the ceph cluster. In theory, manila's cephx users are no different from a ceph client user. When a manila user requests access to a share, a corresponding ceph user account is created if one already does not exist. If a ceph user account already exists, the existing capabilities of that user are adjusted to provide them permissions to access the manila share in question. There is no reasonable way for this mechanism to know what pre-existing ceph client users must be protected against unauthorized abuse. Therefore there is a risk that a manila user can claim to be a pre-existing ceph user to steal their access secret key. To resolve this issue, the ceph interface that manila uses was patched to no longer allow manila to claim a pre-existing user account that didn't create. By consequence this means that manila users cannot use cephx usernames that correspond to ceph client users that exist outside of manila. ### Recommended Actions ### #. 
Upgrade your ceph software to the latest patched releases of ceph to take advantage of the fix to this vulnerability. #. Audit cephx access keys provisioned via manila. You may use "ceph auth ls" and ensure that no clients have been compromised. If they have been, you may need to delete and recreate the client credentials to prevent unauthorized access. #. The audit can also be performed on manila by enumerating all CephFS shares and their access rules as a system administrator. If a reserved ceph client username has been used, you may deny access and recreate the client credential on ceph to refresh the access secret. No code changes were necessary in the OpenStack Shared File System service (manila). With an upgraded ceph, when manila users attempt to provide share access to a cephx username that they cannot use, the access rule's "state" attribute is set to "error" because this operation is no longer permitted. ### Patches ### The Ceph community has provided the following patches: Ceph Octopus: https://github.com/ceph/ceph/commit/1b8a634fdcd94dfb3ba650793fb1b6d09af65e05 Ceph Nautilus: https://github.com/ceph/ceph/commit/7e3e4e73783a98bb07ab399438eb3aab41a6fc8b Ceph Luminous: https://github.com/ceph/ceph/commit/956ceb853a58f6b6847b31fac34f2f0228a70579 The fixes are in the latest releases of Ceph Nautilus (14.2.16) and Ceph Octopus (15.2.8). The patch for Luminous was provided as a courtesy to possible users of OpenStack Manila, however the Ceph community no longer produces releases for Luminous or Mimic as they are end of life. See `here for information about ceph releases. `_ ### Contacts / References ### Author: - - Pacha Ravi, Goutham gouthamr at redhat.com (Red Hat) Credits: - - Garbutt, John john at johngarbutt.com (StackHPC) - - Babel, Jahson jahson.babel at cc.in2p3.fr (Centre de Calcul de l'IN2P3) This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0087 Original LaunchPad Bug : https://launchpad.net/bugs/1904015 Mailing List : [Security] tag on openstack-discuss at lists.openstack.org OpenStack Security Project : https://launchpad.net/~openstack-ossg CVE: CVE-2020-27781 -----BEGIN PGP SIGNATURE----- wsFcBAEBCAAGBQJf2raCAAoJENJP32eYWZR3mREP/ij+So0KHK7dD3WAdcVK 0JdGzwOjX2Bc4/7g5RPzn4RaxZKicsBOWqESCTTBl94oG4XvTax3fW0E6VlL L6XoV+At1cEvptONDoZ0faCSHfTng1J73rHMo9v+cmxmOuEwXReghArS86tS KIeRWviW9hyNmZfhJxuAC9ICR0HglhT5VHqNtAjL5WzoFGMtC4VeJ7e8rf8r PLjvYGOPzNDj8wAn5UvTnJgkT1tbbIZQai4o+QlDJK5eEuEQnwGTUQ/umx/a z2DeuCnDDxJeOFcWEgkDzTQsE6e7dO4FvoIIsZ3u5pA0Rhw31QfpupUsLAYH WhAjt7cImKRTfza/zVuS7PAko2fMmuNHyHEQQh2Y80S4nkdo/WAfUaBft8eO vdNlvunBQA2E7mlK6oxNF22k/pX49N47vVqGttPFA1kNPFg2qLcc8mJIxpjf V4sJVfMO/DuQaU1zTH/P9KsYm/hlyVzLELEvDR43jmWg4p4btZT8Z2OVD80r 9/dRMcbRQl3yq1C2L7yyKVOc6Pw9HJ90ixXpgv7ZT6TbwAPzFe/euITmA6H0 EGzinRg7JmfFFnf/9FBEZJRL46/idMLVNRo2XioA6+o2kngkwSsD+uJTlMFG XQJINZwrbL1xpuh3VGaOI44yCimsQwNGURH2NmZ7uFBKy6CHYXX42ydnQk51 w9qp =o0Lx -----END PGP SIGNATURE----- From rosmaita.fossdev at gmail.com Thu Dec 17 04:24:46 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 16 Dec 2020 23:24:46 -0500 Subject: [cinder] meeting time change poll Message-ID: <4f13a3a6-6acc-48a5-99c6-86fa7a944759@gmail.com> At today's meeting, Lucio proposed moving the cinder weekly meeting time. Please respond to this poll to assess some options before Tuesday 22 December at 1200 UTC. https://rosmaita.wufoo.com/forms/wallaby-cinder-meeting-time-poll/ There's a free-form field on the form so you can propose other alternatives. 
We aren't considering changing the day of the meeting (Wednesday) at this time, but that's a possibility if it would encourage more attendance. NOTE: next week's meeting, Wednesday 23 December, will be held at the usual time of 1400 UTC. cheers, brian From coolsvap at gmail.com Thu Dec 17 05:58:47 2020 From: coolsvap at gmail.com (=?UTF-8?B?yoLKjcmSz4HGnsSvxYIg0p7GsMi0xLfJksqByonJqA==?=) Date: Thu, 17 Dec 2020 11:28:47 +0530 Subject: [cinder] NVMe-oF with TCP transport protocol Message-ID: hello cinder team, I wanted to know if we have support for NVMEoF with TCP transport between servers and the target? Any documentation/reference would be appreciated. Best Regards, Swapnil Kulkarni irc : coolsvap coolsvap at gmail dot com From yumeng_bao at yahoo.com Thu Dec 17 06:24:47 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Thu, 17 Dec 2020 14:24:47 +0800 Subject: [cyborg][IRC]Vote for moving cyborg weekly meeting one hour ahead References: <040A6570-4D8E-4FEC-87F5-DB1ED842C2FC.ref@yahoo.com> Message-ID: <040A6570-4D8E-4FEC-87F5-DB1ED842C2FC@yahoo.com> Hi all, Cyborg weekly meeting is held at #openstack-cyborg IRC channel every Thursday at UTC 0300 (China @11 am Thu; US West Coast @8 pm Wed). For now, most of the attendees are located at China. Attending meeting during 11:00 - 12:00 am for them means giving up partial lunch break time, otherwise they have to give up part of the meeting. This makes it harder to have all the core contributors together in one meeting. As such, we decide to move the meeting time one hour ahead. Please vote for +1/-1, if most cores and attendees agree/disagree, we will make it effective/ineffective. Regards, Yumeng From xin-ran.wang at intel.com Thu Dec 17 06:41:36 2020 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Thu, 17 Dec 2020 06:41:36 +0000 Subject: [cyborg][IRC]Vote for moving cyborg weekly meeting one hour ahead In-Reply-To: <040A6570-4D8E-4FEC-87F5-DB1ED842C2FC@yahoo.com> References: <040A6570-4D8E-4FEC-87F5-DB1ED842C2FC.ref@yahoo.com> <040A6570-4D8E-4FEC-87F5-DB1ED842C2FC@yahoo.com> Message-ID: +1 for meeting earlier. Thanks, Xin-Ran -----Original Message----- From: yumeng bao Sent: Thursday, December 17, 2020 2:25 PM To: openstack maillist Subject: [cyborg][IRC]Vote for moving cyborg weekly meeting one hour ahead Hi all, Cyborg weekly meeting is held at #openstack-cyborg IRC channel every Thursday at UTC 0300 (China @11 am Thu; US West Coast @8 pm Wed). For now, most of the attendees are located at China. Attending meeting during 11:00 - 12:00 am for them means giving up partial lunch break time, otherwise they have to give up part of the meeting. This makes it harder to have all the core contributors together in one meeting. As such, we decide to move the meeting time one hour ahead. Please vote for +1/-1, if most cores and attendees agree/disagree, we will make it effective/ineffective. Regards, Yumeng From zhangbailin at inspur.com Thu Dec 17 06:50:13 2020 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Thu, 17 Dec 2020 06:50:13 +0000 Subject: =?gb2312?B?tPC4tDogW2N5Ym9yZ11bSVJDXVZvdGUgZm9yIG1vdmluZyBjeWJvcmcgd2Vl?= =?gb2312?Q?kly_meeting_one_hour_ahead?= In-Reply-To: References: <040A6570-4D8E-4FEC-87F5-DB1ED842C2FC.ref@yahoo.com> <040A6570-4D8E-4FEC-87F5-DB1ED842C2FC@yahoo.com> Message-ID: <421f0f40227d49848f4d72b06c0b433b@inspur.com> +1 for meeting time. If anyone has other suggestion, that we can consider too. Thanks. brinzhang Inspur Electronic Information Industry Co.,Ltd. 
-----邮件原件----- 发件人: Wang, Xin-ran [mailto:xin-ran.wang at intel.com] 发送时间: 2020年12月17日 14:42 收件人: yumeng bao ; openstack maillist 主题: RE: [cyborg][IRC]Vote for moving cyborg weekly meeting one hour ahead +1 for meeting earlier. Thanks, Xin-Ran -----Original Message----- From: yumeng bao Sent: Thursday, December 17, 2020 2:25 PM To: openstack maillist Subject: [cyborg][IRC]Vote for moving cyborg weekly meeting one hour ahead Hi all, Cyborg weekly meeting is held at #openstack-cyborg IRC channel every Thursday at UTC 0300 (China @11 am Thu; US West Coast @8 pm Wed). For now, most of the attendees are located at China. Attending meeting during 11:00 - 12:00 am for them means giving up partial lunch break time, otherwise they have to give up part of the meeting. This makes it harder to have all the core contributors together in one meeting. As such, we decide to move the meeting time one hour ahead. Please vote for +1/-1, if most cores and attendees agree/disagree, we will make it effective/ineffective. Regards, Yumeng From songwenping at inspur.com Thu Dec 17 06:54:43 2020 From: songwenping at inspur.com (=?gb2312?B?QWxleCBTb25nICjLzs7Exr0p?=) Date: Thu, 17 Dec 2020 06:54:43 +0000 Subject: =?gb2312?B?tPC4tDogW2xpc3RzLm9wZW5zdGFjay5vcme0+reiXVtjeWJvcmddW0lSQ11W?= =?gb2312?B?b3RlIGZvciBtb3ZpbmcgY3lib3JnIHdlZWtseSBtZWV0aW5nIG9uZSBob3Vy?= =?gb2312?Q?_ahead?= In-Reply-To: <040A6570-4D8E-4FEC-87F5-DB1ED842C2FC@yahoo.com> References: <493fe118501af2799011dfcaa6d091ef@sslemail.net> <040A6570-4D8E-4FEC-87F5-DB1ED842C2FC@yahoo.com> Message-ID: <9c05dd5da7c24043bab695365cb7987c@inspur.com> +1 -----邮件原件----- 发件人: yumeng bao [mailto:yumeng_bao at yahoo.com] 发送时间: 2020年12月17日 14:25 收件人: openstack maillist 主题: [lists.openstack.org代发][cyborg][IRC]Vote for moving cyborg weekly meeting one hour ahead Hi all, Cyborg weekly meeting is held at #openstack-cyborg IRC channel every Thursday at UTC 0300 (China @11 am Thu; US West Coast @8 pm Wed). For now, most of the attendees are located at China. Attending meeting during 11:00 - 12:00 am for them means giving up partial lunch break time, otherwise they have to give up part of the meeting. This makes it harder to have all the core contributors together in one meeting. As such, we decide to move the meeting time one hour ahead. Please vote for +1/-1, if most cores and attendees agree/disagree, we will make it effective/ineffective. Regards, Yumeng -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3774 bytes Desc: not available URL: From geguileo at redhat.com Thu Dec 17 12:49:46 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 17 Dec 2020 13:49:46 +0100 Subject: [cinder] NVMe-oF with TCP transport protocol In-Reply-To: References: Message-ID: <20201217124946.tmglo2v5txnmgfu3@localhost> On 17/12, ʂʍɒρƞįł Ҟưȴķɒʁʉɨ wrote: > hello cinder team, > > I wanted to know if we have support for NVMEoF with TCP transport > between servers and the target? Any documentation/reference would be > appreciated. > > Best Regards, > Swapnil Kulkarni > irc : coolsvap > coolsvap at gmail dot com > Hi, As far as I know both Cinder's target [1] and OS-Brick's connector [2] support TCP/IP transport, but I have never tested it. 
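If you want to rule the OpenStack layers out first, a rough and untested sketch of exercising a plain NVMe/TCP attach with nvme-cli -- which, as far as I know, is the same tool the os-brick connector ends up driving -- would be along these lines; the address, port and subsystem NQN are placeholders for your target's values:

# load NVMe/TCP initiator support and ask the target what it exports
sudo modprobe nvme-tcp
sudo nvme discover -t tcp -a 192.0.2.10 -s 4420
# attach one subsystem; a new /dev/nvmeXnY device should show up in the listing
sudo nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2020-12.org.example:volume-test
sudo nvme list
# detach when finished
sudo nvme disconnect -n nqn.2020-12.org.example:volume-test

If that works end to end, whatever remains is more likely in the Cinder/os-brick plumbing than in the transport itself.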
What I'm not too sure about is how thorough the testing of the NVMe-oF code in OpenStack really is, since it's not currently flushing the buffers on disconnect [3], and recently found out that it's unlikely to work with encrypted volumes (because it's returning a real device and not a symlink). These 2 bugs should be easy to fix, but both time and the appropriate hardware is required to validate it. Cheers, Gorka. [1]: https://github.com/openstack/cinder/blob/5c620c6232f8444c1a55424363149920a8f67699/cinder/volume/targets/nvmeof.py#L72 [2]: https://github.com/openstack/os-brick/blob/4d4c5e82c97fe69c7f8fa4ba3a36a69bf0a27e22/os_brick/initiator/connectors/nvmeof.py#L233 [3]: https://bugs.launchpad.net/os-brick/+bug/1903032 From rosmaita.fossdev at gmail.com Thu Dec 17 14:00:37 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 17 Dec 2020 09:00:37 -0500 Subject: [ops][cinder] notice of incorrect default policy value In-Reply-To: <7f3529e7-002b-7515-e2a6-6503fcfcc038@gmail.com> References: <7f3529e7-002b-7515-e2a6-6503fcfcc038@gmail.com> Message-ID: <38aee9c3-d871-79b0-1aa9-43cefcdcef8b@gmail.com> Please note the correction below. Apologies for any confusion. On 12/16/20 2:29 PM, Brian Rosmaita wrote: > Hello operators, > > While reviewing Cinder policies recently, Bug #1908315 [0] was > discovered: "Policy group:reset_group_snapshot_status has incorrect > checkstring". > > This policy governs the "Reset a snapshot's status" action [1].  The > action is supposed to be admin-only, but the default policy setting is > admin-or-owner. Correction: the API action governed is (of course, given the policy name) "Reset group snapshot status": https://docs.openstack.org/api-ref/block-storage/v3/#reset-group-snapshot-status > > This is not a security issue, but it does allow an end user to put a > group snapshot that they own into an invalid status, with indeterminate > consequences. > > A fix has been posted for review [2], but if you wish to correct this > immediately, you can put the following line into your cinder policy file: > >   "group:reset_group_snapshot_status": "rule:admin_api" > > More information about the cinder policy file can be found at [3]. > > > [0] https://bugs.launchpad.net/cinder/+bug/1908315 > [1] > https://docs.openstack.org/api-ref/block-storage/v3/#reset-a-snapshot-s-status > > [2] https://review.opendev.org/c/openstack/cinder/+/767226 > [3] > https://docs.openstack.org/cinder/latest/configuration/block-storage/samples/policy.yaml.html > From akekane at redhat.com Thu Dec 17 14:41:21 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 17 Dec 2020 20:11:21 +0530 Subject: [Glance] Cancelling next 2 weekly meetings Message-ID: Hi All, Due to Christmas and New year, we are cancelling our weekly meetings on 24 and 31 December. The next meeting will be held on 7th January. Ping us on #openstack-glance if you have any queries/doubts. Happy holidays and happy new year in advance. Regards, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Thu Dec 17 14:42:53 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 17 Dec 2020 08:42:53 -0600 Subject: [tc][all] Encouraging projects to apply for tag 'assert:supports-api-interoperability' Message-ID: <17671275da9.121a655b1251298.6157149575252344776@ghanshyammann.com> Hello Everyone, TC defined a tag for API interoperability (cover both stable and compatible APIs) called 'assert:supports-api-interoperability' which assert on API won’t break any users when they upgrade a cloud or start using their code on a new OpenStack cloud. Basically, Projects will not change (or remove) an API in a way that will break existing users of an API. We have updated the tag documentation to clarify its definition and requirements. If your projects follow the API interoperability guidelines[1] and some API versioning mechanism that does not need to be microversion then you should start thinking to apply for this tag. The complete requirement can be found here[2]. Currently, only nova has this tag but I am sure many projects are eligible for this, and TC encourage them to apply for this. [1] https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html [2] https://governance.openstack.org/tc/reference/tags/assert_supports-api-interoperability.html -gmann From marios at redhat.com Thu Dec 17 14:43:08 2020 From: marios at redhat.com (Marios Andreou) Date: Thu, 17 Dec 2020 16:43:08 +0200 Subject: [infra][tripleo] is it possible to control (tripleo) gate queue priority? Message-ID: Hello illustrious members of openstack infra tripleo-ci squad is wondering if it is possible for (some subset of) us to be able to set the priority of a particular patch/es in the tripleo queue. We've done this "manually" in the past, by abandoning all patches in the gate & then restoring in order and putting the priority patch at the top of the dependency queue. However abandoning all the things is completely disruptive for everyone else (sometimes that might be necessary if your queue is way too long but still...). So the question is, is there a better way to put a particular patch at the top of our queue when we need to do that? thanks for your thoughts, sorry if this has come up before I couldn't quickly find something in the list archives. regards, marios -------------- next part -------------- An HTML attachment was scrubbed... URL: From coolsvap at gmail.com Thu Dec 17 14:44:57 2020 From: coolsvap at gmail.com (=?UTF-8?B?yoLKjcmSz4HGnsSvxYIg0p7GsMi0xLfJksqByonJqA==?=) Date: Thu, 17 Dec 2020 20:14:57 +0530 Subject: [cinder] NVMe-oF with TCP transport protocol In-Reply-To: <20201217124946.tmglo2v5txnmgfu3@localhost> References: <20201217124946.tmglo2v5txnmgfu3@localhost> Message-ID: On Thu, Dec 17, 2020 at 6:19 PM Gorka Eguileor wrote: > > On 17/12, ʂʍɒρƞįł Ҟưȴķɒʁʉɨ wrote: > > hello cinder team, > > > > I wanted to know if we have support for NVMEoF with TCP transport > > between servers and the target? Any documentation/reference would be > > appreciated. > > > > Best Regards, > > Swapnil Kulkarni > > irc : coolsvap > > coolsvap at gmail dot com > > > > Hi, > > As far as I know both Cinder's target [1] and OS-Brick's connector [2] > support TCP/IP transport, but I have never tested it. 
> > What I'm not too sure about is how thorough the testing of the NVMe-oF > code in OpenStack really is, since it's not currently flushing the > buffers on disconnect [3], and recently found out that it's unlikely to > work with encrypted volumes (because it's returning a real device and > not a symlink). > > These 2 bugs should be easy to fix, but both time and the appropriate > hardware is required to validate it. > > Cheers, > Gorka. > > [1]: https://github.com/openstack/cinder/blob/5c620c6232f8444c1a55424363149920a8f67699/cinder/volume/targets/nvmeof.py#L72 > [2]: https://github.com/openstack/os-brick/blob/4d4c5e82c97fe69c7f8fa4ba3a36a69bf0a27e22/os_brick/initiator/connectors/nvmeof.py#L233 > [3]: https://bugs.launchpad.net/os-brick/+bug/1903032 > Thanks a lot Gorka! Appreciate your help. Best Regards, Swapnil Kulkarni irc : coolsvap coolsvap at gmail dot com From raubvogel at gmail.com Thu Dec 17 14:56:25 2020 From: raubvogel at gmail.com (Mauricio Tavares) Date: Thu, 17 Dec 2020 09:56:25 -0500 Subject: [nova] PCI hotplugging Message-ID: As some of you know, libvirt supports PCI hotplugging[1]. How would that work using nova (or, if there is a better way, I am all ears)? [1]https://www.libvirt.org/pci-hotplug.html From strigazi at gmail.com Thu Dec 17 14:57:43 2020 From: strigazi at gmail.com (Spyros Trigazis) Date: Thu, 17 Dec 2020 15:57:43 +0100 Subject: [infra][magnum][ci] Issues installing bashate and coverage In-Reply-To: <20201208171248.6dffedoymqj7dgkr@yuggoth.org> References: <20201208171248.6dffedoymqj7dgkr@yuggoth.org> Message-ID: Hello Jeremy, Thanks for the reply. Others are attempting to fix here https://review.opendev.org/c/openstack/magnum/+/767228 Not sure why only magnum is affected by this. I'll point them here. Thanks, Spyros On Tue, Dec 8, 2020 at 6:13 PM Jeremy Stanley wrote: > On 2020-12-08 13:36:09 +0100 (+0100), Spyros Trigazis wrote: > > openstack-tox-lower-constraints fails for bashate and coverage. > > (Maybe more, I bumped bashate and it failed for coverage. I don;t > > want to waste more resources on our CI) > > eg https://review.opendev.org/c/openstack/magnum/+/765881 > > https://review.opendev.org/c/openstack/magnum/+/765979 > > > > Do we miss something? > > Pip 20.3.0, released 8 days ago, turned on a new and much more > thorough dependency resolver. Earlier versions of pip did not try > particularly hard to make sure the dependencies claimed by packages > were all satisfied. Virtualenv 20.2.2 released yesterday and > increased the version of pip it's vendoring to a version which uses > the new solver as well. These changes mean that latent version > conflicts are now being correctly identified as bugs, and these jobs > will do a far better job of actually confirming the declared > versions of dependencies are able to be tested. > > One thing which looks really weird and completely contradictory to > me is that your lower-constraints job on change 765881 is applying > both upper and lower constraints lists to the pip install command. > Maybe the lower constraints list is expected to override the earlier > upper constraints, but is that really going to represent a > compatible set? 
That aside, trying to reproduce locally I run into > yet a third error: > > Could not find a version that satisfies the requirement > warlock!=1.3.0,<2,>=1.0.1 (from python-glanceclient) > > And indeed, python-glanceclient insists warlock 1.3.0 should be > skipped, while magnum's lower-constraints.txt says you must install > warlock==1.3.0 so that's a clear contradiction as well. > > My recommendation is to work on reproducing this locally first and > play a bit of whack-a-mole with the entries in your > lower-constraints.txt to find versions of things which will actually > be coinstallable with current versions of pip. You don't need to run > the full tox testenv, just try installing your constrainted deps > into a venv with upgraded pip like so: > > python3.8 -m venv foo > foo/bin/pip install -U pip > foo/bin/pip install -c lower-constraints.txt \ > -r test-requirements.txt -r requirements.txt > > You'll also likely want to delete and recreate the venv each time > you try, since pip will now also try to take the requirements of > already installed packages into account, and that might further > change the behavior you see. Hope that helps! > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Thu Dec 17 15:16:38 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Thu, 17 Dec 2020 15:16:38 +0000 Subject: [infra][magnum][ci] Issues installing bashate and coverage In-Reply-To: References: <20201208171248.6dffedoymqj7dgkr@yuggoth.org>, Message-ID: <9b59c7e3d46949b09e00caa866b92682@binero.com> Hello, There is also an issue with hacking. See this example for magnum-ui https://review.opendev.org/c/openstack/magnum-ui/+/767367 Just pushed that directly to stable/train to test with backport of https://review.opendev.org/c/openstack/magnum-ui/+/767309 but we can probably propose that to master if needed and backport. Best regards ________________________________ From: Spyros Trigazis Sent: Thursday, December 17, 2020 3:57:43 PM To: Jeremy Stanley Cc: openstack-discuss Subject: Re: [infra][magnum][ci] Issues installing bashate and coverage Hello Jeremy, Thanks for the reply. Others are attempting to fix here https://review.opendev.org/c/openstack/magnum/+/767228 Not sure why only magnum is affected by this. I'll point them here. Thanks, Spyros On Tue, Dec 8, 2020 at 6:13 PM Jeremy Stanley > wrote: On 2020-12-08 13:36:09 +0100 (+0100), Spyros Trigazis wrote: > openstack-tox-lower-constraints fails for bashate and coverage. > (Maybe more, I bumped bashate and it failed for coverage. I don;t > want to waste more resources on our CI) > eg https://review.opendev.org/c/openstack/magnum/+/765881 > https://review.opendev.org/c/openstack/magnum/+/765979 > > Do we miss something? Pip 20.3.0, released 8 days ago, turned on a new and much more thorough dependency resolver. Earlier versions of pip did not try particularly hard to make sure the dependencies claimed by packages were all satisfied. Virtualenv 20.2.2 released yesterday and increased the version of pip it's vendoring to a version which uses the new solver as well. These changes mean that latent version conflicts are now being correctly identified as bugs, and these jobs will do a far better job of actually confirming the declared versions of dependencies are able to be tested. 
One thing which looks really weird and completely contradictory to me is that your lower-constraints job on change 765881 is applying both upper and lower constraints lists to the pip install command. Maybe the lower constraints list is expected to override the earlier upper constraints, but is that really going to represent a compatible set? That aside, trying to reproduce locally I run into yet a third error: Could not find a version that satisfies the requirement warlock!=1.3.0,<2,>=1.0.1 (from python-glanceclient) And indeed, python-glanceclient insists warlock 1.3.0 should be skipped, while magnum's lower-constraints.txt says you must install warlock==1.3.0 so that's a clear contradiction as well. My recommendation is to work on reproducing this locally first and play a bit of whack-a-mole with the entries in your lower-constraints.txt to find versions of things which will actually be coinstallable with current versions of pip. You don't need to run the full tox testenv, just try installing your constrainted deps into a venv with upgraded pip like so: python3.8 -m venv foo foo/bin/pip install -U pip foo/bin/pip install -c lower-constraints.txt \ -r test-requirements.txt -r requirements.txt You'll also likely want to delete and recreate the venv each time you try, since pip will now also try to take the requirements of already installed packages into account, and that might further change the behavior you see. Hope that helps! -- Jeremy Stanley -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Thu Dec 17 16:09:50 2020 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Thu, 17 Dec 2020 17:09:50 +0100 Subject: [nova] PCI hotplugging In-Reply-To: References: Message-ID: <71614ddf-4b43-e1f4-6aee-3d45db6a4971@linaro.org> W dniu 17.12.2020 o 15:56, Mauricio Tavares pisze: > As some of you know, libvirt supports PCI hotplugging[1]. How would > that work using nova (or, if there is a better way, I am all ears)? > > [1]https://www.libvirt.org/pci-hotplug.html It just works. Each network interface you add to your VM instance is extra PCI(e) card. Each USB controller and/or other PCI(e) device. If you use Q35 on x86(-64) or you use AArch64 then you use PCI Express instead of plain PCI. Then it gets a bit more complicated but still manageable. I have a two blog posts [2] [3] about it from time I worked on getting it working on AArch64 architecture. 2. https://marcin.juszkiewicz.com.pl/2018/02/01/everyone-loves-90s-pc-hardware/ 3. https://marcin.juszkiewicz.com.pl/2018/02/19/hotplug-in-vm-easy-to-say/ From fungi at yuggoth.org Thu Dec 17 16:45:04 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 17 Dec 2020 16:45:04 +0000 Subject: [infra][tripleo] is it possible to control (tripleo) gate queue priority? In-Reply-To: References: Message-ID: <20201217164503.qvx4kr5rf4w5g7qu@yuggoth.org> On 2020-12-17 16:43:08 +0200 (+0200), Marios Andreou wrote: > tripleo-ci squad is wondering if it is possible for (some subset > of) us to be able to set the priority of a particular patch/es in > the tripleo queue. Not directly, no, it's an administrative function of the Zuul scheduler which can't be delegated by queue. > We've done this "manually" in the past, by abandoning all patches > in the gate & then restoring in order and putting the priority > patch at the top of the dependency queue. 
However abandoning all > the things is completely disruptive for everyone else (sometimes > that might be necessary if your queue is way too long but > still...). It's actually not as terrible a solution as it sounds, you're basically signalling to your contributors that your jobs are unhealthy and your immediate priority is to focus on merging identified fixes for that problem rather than other patches. It also frees up our CI resources which you would otherwise be monopolizing due to churn from repeated gate resets of massively long change queues, ultimately helping those fixes merge more quickly. Of course it also depends on your core review teams getting on the same page and not continuing to approve unrelated changes which are unlikely to merge at that point, but this is more of a social issue and not a technical one. > So the question is, is there a better way to put a particular > patch at the top of our queue when we need to do that? [...] OpenDev's Zuul administrators have access to reorder queues in dependent pipelines. Reach out to us through the OpenStack TaCT SIG's #openstack-infra IRC channel on Freenode or here on openstack-discuss with the [infra] subject tag, explaining which approved changes you need moved to the front and why. Ideally coordinate this with the rest of your team, since we don't want to wind up in the middle of a team squabble where different contributors are asking to have their changes prioritized at odds with one another. To avoid confusion, we typically want to at least see some acknowledgement of the request from your PTL or designated Infra Liaison[*]. [*] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Infra -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From skaplons at redhat.com Thu Dec 17 22:26:51 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 17 Dec 2020 23:26:51 +0100 Subject: [neutron] Drivers meeting agenda Message-ID: <20201217222651.vsctw22ol7uzg3r5@p1.localdomain> Hi, On this week's drivers meeting we have 2 RFEs to discuss: * https://bugs.launchpad.net/neutron/+bug/1907089 - new RFE proposed by Lajos, spec is also proposed at https://review.opendev.org/c/openstack/neutron-specs/+/767337 * https://bugs.launchpad.net/neutron/+bug/1905295 - this was discussed last week, Bence provided new informations related to that one so lets get back to it too. See You tomorrow on the meeting :) -- Slawek Kaplonski Principal Software Engineer Red Hat From yumeng_bao at yahoo.com Fri Dec 18 01:52:37 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Fri, 18 Dec 2020 09:52:37 +0800 Subject: [cyborg][IRC]Vote for moving cyborg weekly meeting one hour ahead References: Message-ID: ok. We've got all +1 from active cores. Also a big +1 from my side. So from next weekly meeting, we will follow the new meeting time[0]: Where: #openstack-cyborg IRC channel When:Every Thursday at UTC 0200 (China @10 am Thu; US West Coast @7 pm Wed) [0]https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Weekly_IRC_Cyborg_team_meeting Regards, Yumeng > On Dec 17, 2020, at 2:41 PM, Wang, Xin-ran wrote: > From amotoki at gmail.com Fri Dec 18 06:52:02 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Fri, 18 Dec 2020 15:52:02 +0900 Subject: [all][stable] rocky tempest based jobs are broken now In-Reply-To: References: Message-ID: Hi, devstack-tempest based jobs in stable/rocky are now fixed. 
https://review.opendev.org/c/openstack/stackviz/+/767063 fixed the issue. Thanks, Akihiro On Tue, Dec 15, 2020 at 2:58 PM Akihiro Motoki wrote: > > Hi, > > All jobs based on devstack-tempest in stable/rocky are now broken. > > The cause is that stackviz requirements are not compatible with python > 3.5 (example [1]) > gmann is testing a fix in stackviz [2]. > It looks like the change itself works well but we need to handle > another failure in the nodejs job. > gmann and I will follow-up the fix and keep you updated. > > Thanks, > Akihiro Motoki (amotoki) > > [1] https://zuul.opendev.org/t/openstack/build/acf6ccdf1b304cb29ab41baa0d80ec55 > [2] https://review.opendev.org/c/openstack/stackviz/+/767063 From marios at redhat.com Fri Dec 18 07:06:19 2020 From: marios at redhat.com (Marios Andreou) Date: Fri, 18 Dec 2020 09:06:19 +0200 Subject: [infra][tripleo] is it possible to control (tripleo) gate queue priority? In-Reply-To: <20201217164503.qvx4kr5rf4w5g7qu@yuggoth.org> References: <20201217164503.qvx4kr5rf4w5g7qu@yuggoth.org> Message-ID: On Thu, Dec 17, 2020 at 6:46 PM Jeremy Stanley wrote: > On 2020-12-17 16:43:08 +0200 (+0200), Marios Andreou wrote: > > tripleo-ci squad is wondering if it is possible for (some subset > > of) us to be able to set the priority of a particular patch/es in > > the tripleo queue. > > Not directly, no, it's an administrative function of the Zuul > scheduler which can't be delegated by queue. > > ack, we suspected that permissions might be an issue (i.e. that we cannot be given administrative access for 'just the tripleo queue' is at least one of the obstacles here ;) ). > > We've done this "manually" in the past, by abandoning all patches > > in the gate & then restoring in order and putting the priority > > patch at the top of the dependency queue. However abandoning all > > the things is completely disruptive for everyone else (sometimes > > that might be necessary if your queue is way too long but > > still...). > > It's actually not as terrible a solution as it sounds, you're > basically signalling to your contributors that your jobs are > unhealthy and your immediate priority is to focus on merging > identified fixes for that problem rather than other patches. It also > frees up our CI resources which you would otherwise be monopolizing > due to churn from repeated gate resets of massively long change > queues, ultimately helping those fixes merge more quickly. Of course > it also depends on your core review teams getting on the same page > and not continuing to approve unrelated changes which are unlikely > to merge at that point, but this is more of a social issue and not a > technical one. > Indeed this has been done in the past and obviously signalled on the mailing list so folks can stop approving patches (and it typically works out fine). > > > So the question is, is there a better way to put a particular > > patch at the top of our queue when we need to do that? > [...] > > OpenDev's Zuul administrators have access to reorder queues in > dependent pipelines. Reach out to us through the OpenStack TaCT > SIG's #openstack-infra IRC channel on Freenode or here on > openstack-discuss with the [infra] subject tag, explaining which > approved changes you need moved to the front and why. Ideally > coordinate this with the rest of your team, since we don't want to > wind up in the middle of a team squabble where different > contributors are asking to have their changes prioritized at odds > with one another. 
To avoid confusion, we typically want to at least > see some acknowledgement of the request from your PTL or designated > Infra Liaison[*]. > > [*] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Infra > > ACK thanks this is good to know. This topic came up in our team discussions recently and we felt it was at least worth asking if there was another way to manipulate the queue ourselves that didn't involve abandoning all the things. Thank you very much for taking the time to reply marios > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Dec 18 07:45:25 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 18 Dec 2020 08:45:25 +0100 Subject: [all][stable] rocky tempest based jobs are broken now In-Reply-To: References: Message-ID: <20201218074525.tymgb7ahpppcivna@p1.localdomain> Hi, On Fri, Dec 18, 2020 at 03:52:02PM +0900, Akihiro Motoki wrote: > Hi, > > devstack-tempest based jobs in stable/rocky are now fixed. > https://review.opendev.org/c/openstack/stackviz/+/767063 fixed the issue. Thx amotoki :) > > Thanks, > Akihiro > > On Tue, Dec 15, 2020 at 2:58 PM Akihiro Motoki wrote: > > > > Hi, > > > > All jobs based on devstack-tempest in stable/rocky are now broken. > > > > The cause is that stackviz requirements are not compatible with python > > 3.5 (example [1]) > > gmann is testing a fix in stackviz [2]. > > It looks like the change itself works well but we need to handle > > another failure in the nodejs job. > > gmann and I will follow-up the fix and keep you updated. > > > > Thanks, > > Akihiro Motoki (amotoki) > > > > [1] https://zuul.opendev.org/t/openstack/build/acf6ccdf1b304cb29ab41baa0d80ec55 > > [2] https://review.opendev.org/c/openstack/stackviz/+/767063 > -- Slawek Kaplonski Principal Software Engineer Red Hat From luca.tagliaferri at gmail.com Fri Dec 18 11:14:14 2020 From: luca.tagliaferri at gmail.com (Luca Tagliaferri) Date: Fri, 18 Dec 2020 12:14:14 +0100 Subject: Get openstack public url of an object Message-ID: I am using openstack to upload an object to a container like explained here: https://docs.openstack.org/openstacksdk/latest//user/guides/object_store.html#uploading-objects I would like to know if it is possible to programmatically know the public url of the file that I just uploaded. -- -- *Luca Tagliaferri* / Ing luca.tagliaferri at gmail.com / +393382346436 *Ooros* http://www.desidoo.com corso rosai 26/6 Get your own signature -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionut at fleio.com Fri Dec 18 13:22:09 2020 From: ionut at fleio.com (Ionut Biru) Date: Fri, 18 Dec 2020 15:22:09 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Message-ID: Hi guys, I have an issue with magnum api returning an error after a while: Server-side error: "[('system library', 'fopen', 'Too many open files'), ('BIO routines', 'BIO_new_file', 'system lib'), ('x509 certificate routines', 'X509_load_cert_crl_file', 'system lib')]" Log file: https://paste.xinu.at/6djE/ This started to appear after I enabled the template auto_healing_controller = magnum-auto-healer, magnum_auto_healer_tag = v1.19.0. Currently, I only have 4 clusters. After that the API is in error state and doesn't work unless I restart it. -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... 
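A rough way to confirm whether this really is a descriptor leak in the API process (rather than just a low limit) is to watch the fd count while repeating a cheap authenticated request, which is the same kind of traffic the auto-healer generates. This is only a sketch: the process/unit name, the port (9511 is the magnum-api default) and $TOKEN are assumptions to adapt to your deployment, and raising LimitNOFILE is a stopgap, not a fix for the leak itself:

# watch open descriptors of the magnum-api worker
pid=$(pgrep -f magnum-api | head -n1)
sudo watch -n5 "ls /proc/$pid/fd | wc -l"
# in another shell, repeat a cheap API call; a steadily climbing count confirms a leak
for i in $(seq 1 100); do
  curl -s -H "X-Auth-Token: $TOKEN" http://127.0.0.1:9511/v1/clusters >/dev/null
done
# lsof usually shows which file keeps being reopened (often a CA bundle, given the X509 error)
sudo lsof -p "$pid" | awk '{print $NF}' | sort | uniq -c | sort -rn | head
# stopgap only: raise the limit for whatever unit runs magnum-api, then restart it
# (systemd override: [Service] LimitNOFILE=65536)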
URL: From ionut at fleio.com Fri Dec 18 13:27:13 2020 From: ionut at fleio.com (Ionut Biru) Date: Fri, 18 Dec 2020 15:27:13 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: Message-ID: Hi again, I failed to mention that is stable/victoria with couples of patches from review. Ignore the fact that in logs it shows the 19.1.4 version in venv path. On Fri, Dec 18, 2020 at 3:22 PM Ionut Biru wrote: > Hi guys, > > I have an issue with magnum api returning an error after a while: > Server-side error: "[('system library', 'fopen', 'Too many open files'), > ('BIO routines', 'BIO_new_file', 'system lib'), ('x509 certificate > routines', 'X509_load_cert_crl_file', 'system lib')]" > > Log file: https://paste.xinu.at/6djE/ > > This started to appear after I enabled the > template auto_healing_controller = magnum-auto-healer, > magnum_auto_healer_tag = v1.19.0. > > Currently, I only have 4 clusters. > > After that the API is in error state and doesn't work unless I restart it. > > > -- > Ionut Biru - https://fleio.com > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Dec 18 14:54:26 2020 From: hberaud at redhat.com (hberaud) Date: Fri, 18 Dec 2020 15:54:26 +0100 Subject: [oslo][TC] Dropping lower-constraints testing Message-ID: Hello, As you already surely know, we (the openstack project) currently face some issues with our lower-constraints jobs due to pip's latest resolver feature. By discussing this topic with Thierry Carrez (ttx) from an oslo point of view, we reached the same conclusion that it is more appropriate to drop this kind of tests because the complexity and recurring pain needed to maintain them now exceeds the benefits provided by this mechanismes. Also we should notice that the number of active maintainers is declining, so we think that this is the shortest path to solve this problem on oslo for now and for the future too. In a first time I tried to fix our gates by fixing our lower-constraints project by project but with around ~36 projects to maintain this is a painful task, especially due to nested oslo layers inside oslo himself... I saw the face of the hell of dependencies. So, in a second time I submitted a series of patches to drop these tests [1]. But before moving further with that we would appreciate discussing this with the TC. For now the patches are ready and we just have to push the good button accordingly to our choices (+W or abandon). Normally all the oslo projects that need to be fixed are covered by [1]. Thoughts? Thanks for reading. 
[1] https://review.opendev.org/q/topic:%22oslo_lc_drop%22+(status:open%20OR%20status:merged) -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Dec 18 15:18:53 2020 From: marios at redhat.com (Marios Andreou) Date: Fri, 18 Dec 2020 17:18:53 +0200 Subject: [tripleo] next meeting Tuesday Dec 22 @ 1400 UTC in #tripleo Message-ID: For anyone that is around next week (and not on well earned break) the last scheduled TripleO irc meeting for 2020 is ** Tuesday 22nd December at 1400 UTC in #tripleo. ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to hilight at https://etherpad.opendev.org/p/tripleo-meeting-items This could be anything including review requests, blocking issues or to socialise ongoing or planned work. Our last meeting was held on Dec 08th - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-12-08-14.00.log.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Dec 18 15:46:17 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 18 Dec 2020 09:46:17 -0600 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: References: Message-ID: <1767687c4ad.10b9514f7310538.2670005890023858557@ghanshyammann.com> ---- On Fri, 18 Dec 2020 08:54:26 -0600 hberaud wrote ---- > Hello, > As you already surely know, we (the openstack project) currently face some issues with our lower-constraints jobs due to pip's latest resolver feature. > By discussing this topic with Thierry Carrez (ttx) from an oslo point of view, we reached the same conclusion that it is more appropriate to drop this kind of tests because the complexity and recurring pain neededto maintain them now exceeds the benefits provided by this mechanismes. > Also we should notice that the number of active maintainers is declining, so we think that this is the shortest path to solve this problem on oslo for now and for the future too. > > In a first time I tried to fix our gates by fixing our lower-constraints project by project but with around ~36 projects to maintain this is a painful task, especially due to nested oslo layers inside oslo himself... I saw the face of the hell of dependencies. > > So, in a second time I submitted a series of patches to drop these tests [1]. > But before moving further with that we would appreciate discussing this with the TC. For now the patches are ready and we just have to push the good button accordingly to our choices (+W or abandon). 
> > Normally all the oslo projects that need to be fixed are covered by [1]. > > Thoughts? +1, I think it's not worth to keep maintaining them which is taking too much effort. -gmann > > Thanks for reading. > > [1] https://review.opendev.org/q/topic:%22oslo_lc_drop%22+(status:open%20OR%20status:merged) > -- > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > From skaplons at redhat.com Fri Dec 18 16:12:31 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 18 Dec 2020 17:12:31 +0100 Subject: [neutron] Drivers meeting Message-ID: <4592467.XN6ifN3tOX@p1> Hi, Due to upcoming holiday season lets cancel next 2 drivers meetings. See You all on the meeting at 8.01.2021. Have a great holidays :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From moguimar at redhat.com Fri Dec 18 19:30:01 2020 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Fri, 18 Dec 2020 20:30:01 +0100 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: <1767687c4ad.10b9514f7310538.2670005890023858557@ghanshyammann.com> References: <1767687c4ad.10b9514f7310538.2670005890023858557@ghanshyammann.com> Message-ID: +1 On Fri, Dec 18, 2020 at 4:46 PM Ghanshyam Mann wrote: > ---- On Fri, 18 Dec 2020 08:54:26 -0600 hberaud > wrote ---- > > Hello, > > As you already surely know, we (the openstack project) currently face > some issues with our lower-constraints jobs due to pip's latest resolver > feature. > > By discussing this topic with Thierry Carrez (ttx) from an oslo point > of view, we reached the same conclusion that it is more appropriate to drop > this kind of tests because the complexity and recurring pain neededto > maintain them now exceeds the benefits provided by this mechanismes. > > Also we should notice that the number of active maintainers is > declining, so we think that this is the shortest path to solve this problem > on oslo for now and for the future too. > > > > In a first time I tried to fix our gates by fixing our > lower-constraints project by project but with around ~36 projects to > maintain this is a painful task, especially due to nested oslo layers > inside oslo himself... I saw the face of the hell of dependencies. > > > > So, in a second time I submitted a series of patches to drop these > tests [1]. > > But before moving further with that we would appreciate discussing this > with the TC. 
For now the patches are ready and we just have to push the > good button accordingly to our choices (+W or abandon). > > > > Normally all the oslo projects that need to be fixed are covered by [1]. > > > > Thoughts? > > +1, I think it's not worth to keep maintaining them which is taking too > much effort. > > -gmann > > > > > Thanks for reading. > > > > [1] > https://review.opendev.org/q/topic:%22oslo_lc_drop%22+(status:open%20OR%20status:merged) > > -- > > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// > github.com/4383/https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmendiza at redhat.com Fri Dec 18 21:05:54 2020 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Fri, 18 Dec 2020 15:05:54 -0600 Subject: [barbican][oslo][nova][glance][cinder] cursive library status In-Reply-To: References: <35dfc43f-6613-757b-ed7b-b6530df21289@gmail.com> Message-ID: On 12/16/20 12:50 PM, Ben Nemec wrote: > > > On 12/16/20 12:02 PM, Brian Rosmaita wrote: >> Hello Barbican team, >> >> Apologies for not including barbican in the previous thread on this >> topic: >> >> http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019430.html >> >> >> The situation is that cursive is used by Nova, Glance, and Cinder and >> we'd like to move it out of the 'x' namespace into openstack >> governance.   The question is then what team would oversee it.  It >> seems like a good fit for Oslo, and the Oslo team seems OK with that, >> but since barbican-core is currently included in cursive-core, it make >> sense to give the Barbican team first dibs. >> >>  From the consuming teams' side, I don't think we have a preference as >> long as it's clear who we need to bother about approvals if a bugfix >> is posted for review. >> >> Thus my ask is that the Barbican team indicate whether they'd like to >> move cursive to the 'openstack' namespace under their governance, or >> whether they'd prefer Oslo to oversee the library. > > Note that this is not necessarily an either/or thing. Castellan is under > Oslo governance but is co-owned by the Oslo and Barbican teams. We could > do a similar thing with Cursive. > Hi Brian and Ben, Sorry I missed the original thread. Given that the end of the year is around the corner, most of the Barbican team is out on PTO and we haven't had a chance to discuss this in our weekly meeting. That said, I doubt anyone would object to moving cursive into the openstack namespace. 
I personally do not mind the Oslo team taking over maintenace, and I am also willing to help review patches if the Oslo team would like to co-own this library just like we currently do for Castellan. - Douglas Mendizábal (redrobot) From rosmaita.fossdev at gmail.com Fri Dec 18 23:09:46 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 18 Dec 2020 18:09:46 -0500 Subject: [cinder] spec freeze exceptions Message-ID: <908bae2c-db2c-5a14-d844-e18d0416c00d@gmail.com> The Cinder spec freeze is now in effect. There are a few proposed specs that have been reviewed and not yet accepted, but whose final revision looks pretty straightforward, so the following are granted a spec freeze exception for Wallaby: NVMe connector support MD replication spec. - https://review.opendev.org/c/openstack/cinder-specs/+/766730 NVMe monitoring and healing agent for NVMe connector. - https://review.opendev.org/c/openstack/cinder-specs/+/766732 Support storing volume format info - https://review.opendev.org/c/openstack/cinder-specs/+/760999 Specs must be approved before 1700 UTC on Wednesday 23 December. The following specs require more than a little revision and/or their authors have not been responsive to comments, but in the interest of fairness, an exception is granted to these as well: Migration support for a volume with replication status enabled - https://review.opendev.org/c/openstack/cinder-specs/+/766130 Support revert any snapshot to the volume - https://review.opendev.org/c/openstack/cinder-specs/+/736111 Remove quota usage cache - https://review.opendev.org/c/openstack/cinder-specs/+/730701 These must also be approved before 1700 UTC on Wednesday 23 December. From kira034 at 163.com Sun Dec 20 10:24:21 2020 From: kira034 at 163.com (Hongbin Lu) Date: Sun, 20 Dec 2020 18:24:21 +0800 (CST) Subject: [neutron] Bug Deputy Report (Dec14 - 20) Message-ID: <7f061ab7.15aa.1767fadbe1b.Coremail.kira034@163.com> Hi, I was bug deputy this last week. Please find below for the report. Critical: * https://bugs.launchpad.net/neutron/+bug/1908711 [neutron-lib] Bump PyYAML to 5.3.1 High: * https://bugs.launchpad.net/neutron/+bug/1908382 [OVN] Missing OVN ACLs for security groups that utilize remote groups attached to ports with allowed_address_pairs Medium: * https://bugs.launchpad.net/neutron/+bug/1908057 Ensure "keepalived" is correctly disabled (the process does not exist) Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Mon Dec 21 15:50:55 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 21 Dec 2020 21:20:55 +0530 Subject: OOO Second half on 22 December In-Reply-To: References: Message-ID: Correction, I will be out in second half of the day. Will work during regular IST time. Abhishek On Mon, 21 Dec 2020 at 8:08 PM, Abhishek Kekane wrote: > Hi Team, > > I will be out in the first half tomorrow for personal reasons, and will > resume work around 1400 UTC. > > Thanks & Best Regards, > > Abhishek Kekane > -- Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Mon Dec 21 11:57:33 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Dec 2020 12:57:33 +0100 Subject: [tipleo][ussuri][CentOSLinux8] failing on partitioning when building images Message-ID: Hi all, My image build fails on partitioning [1] and below error logs you can find yaml file I use to build images. 
[1] http://paste.openstack.org/show/801211/ -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Mon Dec 21 10:53:08 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Dec 2020 11:53:08 +0100 Subject: [tripleo][centosLinux8][podman-1.6] during undercloud deployment missing podman 1.6 cause we have podman 2.0 Message-ID: Hi all. Somewhere there is a mismatch. When deploying fresh undercloud I get error ./builddir/install-undercloud.log:2020-12-21 11:07:02,763 p=30959 u=root n=ansible | 2020-12-21 11:07:02.762141 | 52540000-0011-172f-d7e6-000000000826 | FATAL | ensure podman and deps are installed | remote-u | error={"changed": false, "failures": ["podman-1.6.4 All matches were filtered out by modular filtering for argument: podman-1.6.4"], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []} ./builddir/install-undercloud.log: "podman-1.6.4" ./builddir/install-undercloud.log: "podman-1.6.4 All matches were filtered out by modular filtering for argument: podman-1.6.4" ./builddir/install-undercloud.log: "podman-1.6.4" so I found a file and updated with version found in repos: [stack at remote-u ~]$ cat /usr/share/ansible/roles/tripleo_podman/vars/redhat.yml --- _tripleo_podman_packages: - podman-2.0.5 _tripleo_buildah_packages: - buildah-1.15.1 _tripleo_podman_purge_packages: - docker - docker-ce [stack at remote-u ~]$ -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon Dec 21 16:44:39 2020 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 21 Dec 2020 09:44:39 -0700 Subject: [tripleo][centosLinux8][podman-1.6] during undercloud deployment missing podman 1.6 cause we have podman 2.0 In-Reply-To: References: Message-ID: You need to disable the container-tools rhel8 version and enable container-tools 2.0 prior to installing tripleoclient sudo dnf modules disable container-tools:rhel8 sudo dnf modules enable container-tools:2.0 On Mon, Dec 21, 2020, 9:39 AM Ruslanas Gžibovskis wrote: > Hi all. > > Somewhere there is a mismatch. When deploying fresh undercloud I get error > ./builddir/install-undercloud.log:2020-12-21 11:07:02,763 p=30959 u=root > n=ansible | 2020-12-21 11:07:02.762141 | > 52540000-0011-172f-d7e6-000000000826 | FATAL | ensure podman and deps > are installed | remote-u | error={"changed": false, "failures": > ["podman-1.6.4 All matches were filtered out by modular filtering for > argument: podman-1.6.4"], "msg": "Failed to install some of the specified > packages", "rc": 1, "results": []} > ./builddir/install-undercloud.log: "podman-1.6.4" > ./builddir/install-undercloud.log: "podman-1.6.4 All matches were > filtered out by modular filtering for argument: podman-1.6.4" > ./builddir/install-undercloud.log: "podman-1.6.4" > > so I found a file and updated with version found in repos: > > [stack at remote-u ~]$ cat > /usr/share/ansible/roles/tripleo_podman/vars/redhat.yml > --- > _tripleo_podman_packages: > - podman-2.0.5 > > _tripleo_buildah_packages: > - buildah-1.15.1 > > _tripleo_podman_purge_packages: > - docker > - docker-ce > [stack at remote-u ~]$ > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ruslanas at lpic.lt Mon Dec 21 12:39:15 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Dec 2020 13:39:15 +0100 Subject: [tripleo][centosLinux8][podman-1.6] during undercloud deployment missing podman 1.6 cause we have podman 2.0 In-Reply-To: References: Message-ID: If I remember from Last time, I hope I managed to choose the right options... https://bugs.launchpad.net/tripleo/+bug/1908899 On Mon, 21 Dec 2020 at 13:21, Ruslanas Gžibovskis wrote: > yes, this helped. > > On Mon, 21 Dec 2020 at 11:53, Ruslanas Gžibovskis > wrote: > >> Hi all. >> >> Somewhere there is a mismatch. When deploying fresh undercloud I get error >> ./builddir/install-undercloud.log:2020-12-21 11:07:02,763 p=30959 u=root >> n=ansible | 2020-12-21 11:07:02.762141 | >> 52540000-0011-172f-d7e6-000000000826 | FATAL | ensure podman and deps >> are installed | remote-u | error={"changed": false, "failures": >> ["podman-1.6.4 All matches were filtered out by modular filtering for >> argument: podman-1.6.4"], "msg": "Failed to install some of the specified >> packages", "rc": 1, "results": []} >> ./builddir/install-undercloud.log: "podman-1.6.4" >> ./builddir/install-undercloud.log: "podman-1.6.4 All matches were >> filtered out by modular filtering for argument: podman-1.6.4" >> ./builddir/install-undercloud.log: "podman-1.6.4" >> >> so I found a file and updated with version found in repos: >> >> [stack at remote-u ~]$ cat >> /usr/share/ansible/roles/tripleo_podman/vars/redhat.yml >> --- >> _tripleo_podman_packages: >> - podman-2.0.5 >> >> _tripleo_buildah_packages: >> - buildah-1.15.1 >> >> _tripleo_podman_purge_packages: >> - docker >> - docker-ce >> [stack at remote-u ~]$ >> >> -- >> Ruslanas Gžibovskis >> +370 6030 7030 >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Mon Dec 21 12:21:05 2020 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 21 Dec 2020 13:21:05 +0100 Subject: [tripleo][centosLinux8][podman-1.6] during undercloud deployment missing podman 1.6 cause we have podman 2.0 In-Reply-To: References: Message-ID: yes, this helped. On Mon, 21 Dec 2020 at 11:53, Ruslanas Gžibovskis wrote: > Hi all. > > Somewhere there is a mismatch. 
When deploying fresh undercloud I get error > ./builddir/install-undercloud.log:2020-12-21 11:07:02,763 p=30959 u=root > n=ansible | 2020-12-21 11:07:02.762141 | > 52540000-0011-172f-d7e6-000000000826 | FATAL | ensure podman and deps > are installed | remote-u | error={"changed": false, "failures": > ["podman-1.6.4 All matches were filtered out by modular filtering for > argument: podman-1.6.4"], "msg": "Failed to install some of the specified > packages", "rc": 1, "results": []} > ./builddir/install-undercloud.log: "podman-1.6.4" > ./builddir/install-undercloud.log: "podman-1.6.4 All matches were > filtered out by modular filtering for argument: podman-1.6.4" > ./builddir/install-undercloud.log: "podman-1.6.4" > > so I found a file and updated with version found in repos: > > [stack at remote-u ~]$ cat > /usr/share/ansible/roles/tripleo_podman/vars/redhat.yml > --- > _tripleo_podman_packages: > - podman-2.0.5 > > _tripleo_buildah_packages: > - buildah-1.15.1 > > _tripleo_podman_purge_packages: > - docker > - docker-ce > [stack at remote-u ~]$ > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Mon Dec 21 14:38:45 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 21 Dec 2020 20:08:45 +0530 Subject: OOO First half on 22 December Message-ID: Hi Team, I will be out in the first half tomorrow for personal reasons, and will resume work around 1400 UTC. Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Dec 21 17:23:57 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 21 Dec 2020 17:23:57 +0000 Subject: New Openstack Deployment questions In-Reply-To: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> References: <19B90555-B97D-4B83-A83A-84DB95D4FDCF@gmu.edu> Message-ID: On Thu, 10 Dec 2020 at 13:47, Thomas Wakefield wrote: > > OpenStack deployment questions: > > > > If you were starting a new deployment of OpenStack today what OS would you use, and what tools would you use for deployment? We were thinking CentOS with Kayobe, but then CentOS changed their support plans, and I am hesitant to start a new project with CentOS. We do have access to RHEL licensing so that might be an option. We have also looked at OpenStack-Ansible for deployment. Thoughts? > Hi Tom, While it is *very* early days, you might be interested in this patch [1], which starts to add Ubuntu support to Kayobe. We're just starting this effort, and there are potential obstacles before it may be considered production ready, but we are not entirely starting from zero since many of the components that Kayobe is built from already support Ubuntu. If you are keen to use Kayobe with Ubuntu, and have resources to help, please let me know. Cheers, Mark [1] https://review.opendev.org/c/openstack/kayobe/+/767705 > > > Thanks in advance. -Tom From sean.mcginnis at gmx.com Mon Dec 21 17:27:35 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 21 Dec 2020 11:27:35 -0600 Subject: [all] Our next release name - OpenStack Xena! Message-ID: <20201221172735.GA1932658@sm-workstation> Hello everyone! The naming poll is done [0], the legal vetting is complete, and we now have our X release name. All hail OpenStack Xena! Observant readers will note that Xena was not the 1st place name from the poll. 
After review for legal and copyright concerns, it was determined there would be conflicts with the top place Xanadu. Apologies to the Olivia Newton John fans out there. Xena was determined to be safe for our use, and was a close second. More details about the release naming process can be found on the governance page [1]. Please consider getting involved for the Y release naming. Thanks! Sean [0] https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7e6e96070af39fe7 [1] https://governance.openstack.org/tc/reference/release-naming.html From mnaser at vexxhost.com Mon Dec 21 20:03:07 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 21 Dec 2020 15:03:07 -0500 Subject: [tc] weekly update Message-ID: Hi everyone, Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # Patches ## Open Reviews - Add glance-tempest-plugin to Glance https://review.opendev.org/c/openstack/governance/+/767666 - Remove Karbor project team https://review.opendev.org/c/openstack/governance/+/767056 - Add Resolution of TC stance on the OpenStackClient https://review.opendev.org/c/openstack/governance/+/759904 ## Project Updates - Deprecate openstack-ansible-galera_client role https://review.opendev.org/c/openstack/governance/+/765784 ## General Changes - Improve check-review-status https://review.opendev.org/c/openstack/governance/+/766249 - Clarify impact on releases for SIGs https://review.opendev.org/c/openstack/governance/+/752699 # Other Reminders - Due to the upcoming holidays we are skipping the next TC meetings on Dec 24th and Dec 31st. We will resume with our regular schedule on Thursday January 7th. Thanks for reading! Happy Holidays! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. From romain.chanu at univ-lyon1.fr Mon Dec 21 20:16:41 2020 From: romain.chanu at univ-lyon1.fr (CHANU ROMAIN) Date: Mon, 21 Dec 2020 20:16:41 +0000 Subject: [placement] Train upgrade warning In-Reply-To: <90557ebe-caec-bfa7-79f1-f909474235ff@gmail.com> References: <54217759-eed4-5330-8b55-735ab622074c@gmail.com> <1603620657403.92138@univ-lyon1.fr> <272037d7-48f7-2f10-d7fb-cd1cc7b71e87@gmail.com> <79e6ffd0-83d1-f1ab-b0fa-6a4e8fc9a93c@gmail.com> <2d6ee04a-cff7-c3ad-4a1f-221c03dc0ef3@gurukuli.co.uk> <92da9dd6-d84f-efa5-ab8e-fc8124548b89@gmail.com> <90557ebe-caec-bfa7-79f1-f909474235ff@gmail.com> Message-ID: <21f6684d24e8c07fb93147d8a790b7dca20bddbd.camel@univ-lyon1.fr> Hello, Sorry I could not work on this for a while. To fix this issue I just added one request to my previous message. I will write down my entire procedure: use nova; update instance_id_mappings set deleted = id where uuid in (select uuid from instances where deleted != 0); exit nova-manage db archive_deleted_rows --all-cells --until-complete delete from nova.instance_id_mappings where uuid not in (select uuid from nova.instances); delete from nova_api.consumers where nova_api.consumers.uuid not in (select nova_api.instance_mappings.instance_uuid from nova_api.instance_mappings); Thus new request: delete from placement.consumers where placement.consumers.uuid not in (select nova_api.instance_mappings.instance_uuid from nova_api.instance_mappings); I already executed the db migrate script so I have to clear placement tables. If you still have negative values I think there are many cases. I faced these: - All instances in a deleted project are still present in placement/nova_api consumers. 
I removed them from nova_api.instance_mappings before nova-manage db archive_deleted_rows - Last one is weird: A very old shelved instance which appeared after running placement-manage db online_data_migrations Best regards, Romain On Wed, 2020-11-04 at 09:08 -0800, melanie witt wrote: > On 11/4/20 08:54, Seth Tunstall wrote: > > Hello, > > > > In case it helps anyone else searching for this in future: > > Melanie's > > suggestion to clean out the orphaned consumers worked perfectly in > > my > > situation. > > > > The last two I had were apparently left over from the original > > build of > > this environment. I brute-force cleaned them out of the DB > > manually: > > > > DELETE FROM nova_cell0.block_device_mapping WHERE > > nova_cell0.block_device_mapping.instance_uuid IN (SELECT uuid FROM > > nova_api.consumers WHERE nova_api.consumers.uuid NOT IN (SELECT > > nova_api.allocations.consumer_id FROM nova_api.allocations)); > > > > > Caveat: I am not intimately familiar with how the ORM handles these > > DB > > tables, I may have done something stupid here. > > Hm, sorry, this isn't what I was suggesting you do ... I was making a > guess that you might have instances with 'deleted' != 0 in your > nova_cell0 database and that if so, they needed to be archived using > 'nova-manage db archive_deleted_rows' and then that might take care > of > removing their corresponding nova_api.instance_mappings which would > make > the manual cleanup find more rows (the rows that were being > complained > about). > > What you did is "OK" (not harmful) if the nova_cell0.instances > records > associated with those records were 'deleted' column != 0. But there's > likely more cruft rows left behind that will never be removed. > nova-manage db archive_deleted_rows should be used whenever possible > because it knows how to remove all the things. > > > I tried to run: > > > > nova-manage db archive_deleted_rows --verbose --until-complete -- > > all-cells > > > > but nova-db-manage complained that it didn't recognise --no-cells > > This is with the train code? --all-cells was added in train [1]. If > you > are running with code prior to train, you have to pass a nova config > file to the nova-manage command that has its [api_database]connection > set to the nova_api database connection url and the > [database]connection > set to the nova_cell0 database. Example: > > nova-manage --config-file db > archive_deleted_rows ... > > Cheers, > -melanie > > [1] > https://docs.openstack.org/nova/train/cli/nova-manage.html#nova-database > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3217 bytes Desc: not available URL: From knikolla at bu.edu Mon Dec 21 22:14:43 2020 From: knikolla at bu.edu (Nikolla, Kristi) Date: Mon, 21 Dec 2020 22:14:43 +0000 Subject: [keystone] No weekly meeting Dec 22 Message-ID: Hi all, I won't be able to host the weekly IRC team meeting for keystone tomorrow. Best, Kristi Nikolla From feilong at catalyst.net.nz Tue Dec 22 02:12:12 2020 From: feilong at catalyst.net.nz (feilong) Date: Tue, 22 Dec 2020 15:12:12 +1300 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: Message-ID: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Hi Ionut, I didn't see this before on our production. Magnum auto healer just simply sends a POST request to Magnum api to update the health status. 
So I would suggest write a small script or even use curl to see if you can reproduce this firstly. On 19/12/20 2:27 am, Ionut Biru wrote: > Hi again, > > I failed to mention that is stable/victoria with couples of patches > from review. Ignore the fact that in logs it  shows the 19.1.4 version > in venv path. > > On Fri, Dec 18, 2020 at 3:22 PM Ionut Biru > wrote: > > Hi guys, > > I have an issue with magnum api returning an error after a while: > |Server-side error: "[('system library', 'fopen', 'Too many open > files'), ('BIO routines', 'BIO_new_file', 'system lib'), ('x509 > certificate routines', 'X509_load_cert_crl_file', 'system lib')]"| > > Log file: https://paste.xinu.at/6djE/ > > This started to appear after I enabled the > template auto_healing_controller = magnum-auto-healer,  > magnum_auto_healer_tag = v1.19.0. > > Currently, I only have 4 clusters. > > After that the API is in error state and doesn't work unless I > restart it. > > > -- > Ionut Biru - https://fleio.com > > > > -- > Ionut Biru - https://fleio.com -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Dec 22 07:34:10 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 22 Dec 2020 08:34:10 +0100 Subject: [blazar] No IRC meetings during holiday season Message-ID: Hello, I am on leave for the end of the year, so there will be no IRC meetings for the Blazar project until the week of January 4. I wish everyone a very happy holiday! Pierre Riteau (priteau) From ccamacho at redhat.com Tue Dec 22 14:23:16 2020 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Tue, 22 Dec 2020 15:23:16 +0100 Subject: [off-topic][tripleo] Merry Christmas! In-Reply-To: References: Message-ID: Hi everyone! Its been a complex year for everyone, however, we managed to do amazing things, I hope we can get a boost of energy these holidays to do even better things next year, I wish you all the best and to stay healthy. https://imgur.com/a/5rSXMTe As usual, all the sources are in GH [1]. Merry Christmas! Carlos. [1]: https://github.com/ccamacho/tripleo-graphics/tree/master/x-mas/2020 From ssbarnea at redhat.com Tue Dec 22 14:33:28 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Tue, 22 Dec 2020 11:33:28 -0300 Subject: [infra][tripleo] is it possible to control (tripleo) gate queue priority? In-Reply-To: References: <20201217164503.qvx4kr5rf4w5g7qu@yuggoth.org> Message-ID: Could we find a way to extend the ability to alter queue to a select group of non-zuul admins? I personally find the need to ping zuul admins about queue alteration problematic for at least two reasons: - increases load of zuul-admins, which likely have other more pressing (or interesting) issues to deal with - it depends directly on the availability of zuul admins, which is not really a 24x7 service, not even a 24x5. If we would have a token that allow some of us to use the new zuul client to put some patches on top of the queue, we could likely avoid having to depend on other humans for unblocking some pipelines. As these operations would be very easy to track, I doubt this would be abused. 
Currently that is achievable only by admins with something like: zuul promote --tenant openstack --pipeline gate --changes 123,1 What if we can also have some power users? aka queue owners/stewards? How hard it would be? Thanks Sorin Sbarnea On 18 Dec 2020 at 07:06:19, Marios Andreou wrote: > > > On Thu, Dec 17, 2020 at 6:46 PM Jeremy Stanley wrote: > >> On 2020-12-17 16:43:08 +0200 (+0200), Marios Andreou wrote: >> > tripleo-ci squad is wondering if it is possible for (some subset >> > of) us to be able to set the priority of a particular patch/es in >> > the tripleo queue. >> >> Not directly, no, it's an administrative function of the Zuul >> scheduler which can't be delegated by queue. >> >> > > ack, we suspected that permissions might be an issue (i.e. that we cannot > be given administrative access for 'just the tripleo queue' is at least one > of the obstacles here ;) ). > > > >> > We've done this "manually" in the past, by abandoning all patches >> > in the gate & then restoring in order and putting the priority >> > patch at the top of the dependency queue. However abandoning all >> > the things is completely disruptive for everyone else (sometimes >> > that might be necessary if your queue is way too long but >> > still...). >> >> It's actually not as terrible a solution as it sounds, you're >> basically signalling to your contributors that your jobs are >> unhealthy and your immediate priority is to focus on merging >> identified fixes for that problem rather than other patches. It also >> frees up our CI resources which you would otherwise be monopolizing >> due to churn from repeated gate resets of massively long change >> queues, ultimately helping those fixes merge more quickly. Of course >> it also depends on your core review teams getting on the same page >> and not continuing to approve unrelated changes which are unlikely >> to merge at that point, but this is more of a social issue and not a >> technical one. >> > > > Indeed this has been done in the past and obviously signalled on the > mailing list so folks can stop approving patches (and it typically works > out fine). > > > >> >> > So the question is, is there a better way to put a particular >> > patch at the top of our queue when we need to do that? >> [...] >> >> OpenDev's Zuul administrators have access to reorder queues in >> dependent pipelines. Reach out to us through the OpenStack TaCT >> SIG's #openstack-infra IRC channel on Freenode or here on >> openstack-discuss with the [infra] subject tag, explaining which >> approved changes you need moved to the front and why. Ideally >> coordinate this with the rest of your team, since we don't want to >> wind up in the middle of a team squabble where different >> contributors are asking to have their changes prioritized at odds >> with one another. To avoid confusion, we typically want to at least >> see some acknowledgement of the request from your PTL or designated >> Infra Liaison[*]. >> >> [*] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Infra >> >> > ACK thanks this is good to know. > > This topic came up in our team discussions recently and we felt it was at > least worth asking if there was another way to manipulate the queue > ourselves that didn't involve abandoning all the things. > > Thank you very much for taking the time to reply > > marios > > > >> -- >> Jeremy Stanley >> > -------------- next part -------------- An HTML attachment was scrubbed... 
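Worth adding that even without admin rights anyone can at least see where a change currently sits, since the Zuul status JSON behind zuul.opendev.org is public. A small sketch, with 123,1 standing in for a real change,patchset pair:

curl -s https://zuul.opendev.org/api/tenant/openstack/status | python3 -m json.tool | grep -n -B2 -A2 '"123,1"'

This is the same data the status web page renders, so it only helps with scripting or monitoring a promotion request; it does not replace the admin-side zuul promote.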
URL: From satish.txt at gmail.com Tue Dec 22 16:33:41 2020 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 22 Dec 2020 11:33:41 -0500 Subject: oslo message direct_mandatory_flag question Message-ID: Folks, I am getting a very strange error from time to time on senlin logs and when it hits this error i can't do anything in senlin until i restart service. After googling i found this bug https://bugs.launchpad.net/oslo.messaging/+bug/1905965 It's pretty much related to my issue because I am running the latest victoria. Dec 22 13:56:44 os-lab-infra-1-senlin-container-16f24bbe senlin-wsgi-api[8188]: 2020-12-22 13:56:44.212 8188 ERROR oslo.messaging._drivers.impl_rabbit [-] [df314561-6415-4103-a1fd-14ab95182cfb] AMQP server on 10.65.6.176:5671 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: Dec 22 13:56:44 os-lab-infra-1-senlin-container-16f24bbe senlin-wsgi-api[8188]: 2020-12-22 13:56:44.220 8188 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Server unexpectedly closed connection Dec 22 13:56:44 os-lab-infra-1-senlin-container-16f24bbe senlin-conductor[8250]: 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server [req-3c89475b-89fc-404b-8537-df7a587261d9 462618bed32745d2a9166bcc33fc117e f1502c79c70f4651be8ffc7b844b584f - - -] MessageUndeliverable error, source exception: Basic.return: (312) NO_ROUTE, routing_key: reply_54d93c43fe894ed18ce8092f4497306b, exchange: : : oslo_messaging.exceptions.MessageUndeliverable 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/oslo_messaging/rpc/server.py", line 184, in _process_incoming 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server message.reply(res) 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 149, in reply 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server self._send_reply(conn, reply, failure) 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 125, in _send_reply 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server conn.direct_send(self.reply_q, rpc_common.serialize_msg(msg)) 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 1320, in direct_send 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server self._ensure_publishing(self._publish_and_raises_on_missing_exchange, 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 1202, in _ensure_publishing 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server self.ensure(method, retry=retry, error_callback=_error_callback) 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in ensure 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server ret, channel = autoretry_method() 2020-12-22 13:56:44.461 8250 ERROR 
oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/kombu/connection.py", line 525, in _ensured 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server return fun(*args, **kwargs) 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/kombu/connection.py", line 601, in __call__ 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server return fun(*args, channel=channels[0], **kwargs), channels[0] 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 815, in execute_method 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server method() 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 1294, in _publish_and_raises_on_missing_exchange 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server self._publish(exchange, msg, routing_key=routing_key, 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 1238, in _publish 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server self._producer.publish( 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/kombu/messaging.py", line 175, in publish 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server return _publish( 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/kombu/messaging.py", line 197, in _publish 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server return channel.basic_publish( 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/amqp/channel.py", line 1782, in basic_publish_confirm 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server self.wait([spec.Basic.Ack, spec.Basic.Nack], callback=confirm_handler) 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/amqp/abstract_channel.py", line 86, in wait 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server self.connection.drain_events(timeout=timeout) 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/amqp/connection.py", line 514, in drain_events 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server while not self.blocking_read(timeout): 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/amqp/connection.py", line 520, in blocking_read 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server return self.on_inbound_frame(frame) 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/amqp/method_framing.py", line 77, in on_frame 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server callback(channel, msg.frame_method, msg.frame_args, msg) 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File 
"/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/amqp/connection.py", line 526, in on_inbound_method 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server return self.channels[channel_id].dispatch_method( 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/amqp/abstract_channel.py", line 143, in dispatch_method 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server listener(*args) 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/amqp/channel.py", line 2006, in _on_basic_return 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server callback(exc, exchange, routing_key, message) 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server File "/openstack/venvs/senlin-22.0.0.0b2.dev56/lib/python3.8/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 852, in on_return 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server raise exceptions.MessageUndeliverable(exception, exchange, routing_key, 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server oslo_messaging.exceptions.MessageUndeliverable 2020-12-22 13:56:44.461 8250 ERROR oslo_messaging.rpc.server Dec 22 13:56:45 os-lab-infra-1-senlin-container-16f24bbe senlin-wsgi-api[8188]: 2020-12-22 13:56:45.248 8188 INFO oslo.messaging._drivers.impl_rabbit [-] [df314561-6415-4103-a1fd-14ab95182cfb] Reconnected to AMQP server on 10.65.6.176:5671 via [amqp] client with port 44246. Dec 22 13:56:53 os-lab-infra-1-senlin-container-16f24bbe senlin-health-manager[8218]: 2020-12-22 13:56:53.954 8218 INFO senlin.engine.health_manager [req-c0118d24-9f32-47c8-926e-eb4b60f922cd 462618bed32745d2a9166bcc33fc117e f1502c79c70f4651be8ffc7b844b584f - - -] Health check passed for all nodes in cluster f51b3154-fb8e-43f2-b68b-03b091c1e0bf. Dec 22 13:57:23 os-lab-infra-1-senlin-container-16f24bbe senlin-health-manager[8218]: 2020-12-22 13:57:23.962 8218 INFO senlin.engine.health_manager [req-472f4403-fd66-422a-8a84-a0b0f51d94cc 462618bed32745d2a9166bcc33fc117e f1502c79c70f4651be8ffc7b844b584f - - -] Health check passed for all nodes in cluster f51b3154-fb8e-43f2-b68b-03b091c1e0bf. Dec 22 13:57:23 os-lab-infra-1-senlin-container-16f24bbe senlin-health-manager[8218]: 2020-12-22 13:57:23.963 8218 INFO senlin.engine.health_manager [req-a5ff1fe3-35e8-443b-b91a-aa1d78656569 462618bed32745d2a9166bcc33fc117e f1502c79c70f4651be8ffc7b844b584f - - -] Health check passed for all nodes in cluster f51b3154-fb8e-43f2-b68b-03b091c1e0bf. 
Dec 22 13:57:44 os-lab-infra-1-senlin-container-16f24bbe uwsgi[8188]: Tue Dec 22 13:57:44 2020 - uwsgi_response_writev_headers_and_body_do(): Connection reset by peer [core/writer.c line 306] during GET /v1/clusters?global_project=False (10.65.6.17) Dec 22 13:57:44 os-lab-infra-1-senlin-container-16f24bbe senlin-wsgi-api[8188]: 2020-12-22 13:57:44.448 8188 CRITICAL senlin-api [req-3c89475b-89fc-404b-8537-df7a587261d9 462618bed32745d2a9166bcc33fc117e f1502c79c70f4651be8ffc7b844b584f - - -] Unhandled error: OSError: write error 2020-12-22 13:57:44.448 8188 ERROR senlin-api OSError: write error 2020-12-22 13:57:44.448 8188 ERROR senlin-api Dec 22 13:57:53 os-lab-infra-1-senlin-container-16f24bbe senlin-health-manager[8218]: 2020-12-22 13:57:53.955 8218 INFO senlin.engine.health_manager [req-8a578e8e-438d-45c1-b81d-c6f3aed5fc81 462618bed32745d2a9166bcc33fc117e f1502c79c70f4651be8ffc7b844b584f - - -] Health check passed for all nodes in cluster f51b3154-fb8e-43f2-b68b-03b091c1e0bf. As per advice i have tired to add following snippet to fix this issue [DEFAULT] transport_url = rabbit://senlin:94d7aecb853145779db8f1dcb at 10.65.6.176:5671//senlin?ssl=1 [oslo_messaging_rabbit] ssl = True direct_mandatory_flag = False But i got error when trying to set direct_mandatory_flag option ERROR oslo_service.service [-] Error starting t hread.: oslo_config.cfg.ConfigFileValueError: Value for option direct_mandatory_flag from LocationInfo(location=, detail='/etc/senlin/senlin.conf') is not valid: invalid literal for int() with base 10: 'False' Any idea what is going on here? From lyarwood at redhat.com Tue Dec 22 17:06:02 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 22 Dec 2020 17:06:02 +0000 Subject: [stable][grenade][qa][nova][swift] s-proxy unable to start due to missing runtime deps Message-ID: <20201222170602.bqlwjlikjwdhoc7c@lyarwood-laptop.usersys.redhat.com> Hello all, I wanted to raise awareness of the following issue and to seek some feedback on my approach to workaround it: ImportError: No module named keystonemiddleware.auth_token https://bugs.launchpad.net/swift/+bug/1909018 This was introduced after I landed the following devstack backport stopping projects from installing their test-requirements.txt deps: Stop installing test-requirements with projects https://review.opendev.org/q/I8f24b839bf42e2fb9803dc7df3a30ae20cf264eb For the time being to workaround this in various other gates I've suggested that we disable Swift in Grenade on stable/train: zuul: Disable swift services until bug #1909018 is resolved https://review.opendev.org/c/openstack/grenade/+/768224 This finally allowed openstack/nova to pass on stable/train with the following changes to lower-constraints.txt and test-requirements.txt: [stable-only] Cap bandit to 1.6.2 and raise hacking, flake8 and stestr https://review.opendev.org/c/openstack/nova/+/766171/ Are there any objections to disabling Swift in Grenade for the time being on stable/train? Would anyone have any objections to also disabling it on stable/stein via devstack-gate? Many thanks in advance, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From fungi at yuggoth.org Tue Dec 22 17:14:41 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 22 Dec 2020 17:14:41 +0000 Subject: [infra][tripleo] is it possible to control (tripleo) gate queue priority? 
In-Reply-To: References: <20201217164503.qvx4kr5rf4w5g7qu@yuggoth.org> Message-ID: <20201222171441.qhwtgncih2acnekm@yuggoth.org> On 2020-12-22 11:33:28 -0300 (-0300), Sorin Sbarnea wrote: > Could we find a way to extend the ability to alter queue to a > select group of non-zuul admins? > > I personally find the need to ping zuul admins about queue > alteration problematic for at least two reasons: > > - increases load of zuul-admins, which likely have other more > pressing (or interesting) issues to deal with > - it depends directly on the availability of zuul admins, which is > not really a 24x7 service, not even a 24x5. > > If we would have a token that allow some of us to use the new zuul > client to put some patches on top of the queue, we could likely > avoid having to depend on other humans for unblocking some > pipelines. > > As these operations would be very easy to track, I doubt this > would be abused. > > Currently that is achievable only by admins with something like: > > zuul promote --tenant openstack --pipeline gate --changes 123,1 We quite often also precede it with `zuul enqueue ...` to put the change into the gate pipeline, either because the changes in question have preexisting Verified -1/-2 due to unrelated failures, or no Verified vote because it hasn't completed check pipeline jobs. > What if we can also have some power users? aka queue > owners/stewards? How hard it would be? Currently we access the scheduler's RPC socket via sudo locally on the server (the CLI utility writes to a named pipe owned by the zuuld user). An alternative would be to set up authentication for the REST API, which the client supports but we haven't used in OpenDev yet. However, I question whether it's worthwhile spending time engineering a two-tiered administrative solution for our scheduler. This comes up once or maybe twice a month, takes only a minute, and never really has immediate urgency (except for security fixes which are generally scheduled and coordinated with someone well in advance), so it's not a particular burden on the current sysadmin team and there's generally someone around within 24 hours or less to process a request of that nature. As previously discussed, if it were particularly urgent, there are other remedies available to core review teams already. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From cboylan at sapwetik.org Tue Dec 22 17:36:40 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 22 Dec 2020 09:36:40 -0800 Subject: =?UTF-8?Q?Re:_[infra][tripleo]_is_it_possible_to_control_(tripleo)_gate_?= =?UTF-8?Q?queue_priority=3F?= In-Reply-To: <20201222171441.qhwtgncih2acnekm@yuggoth.org> References: <20201217164503.qvx4kr5rf4w5g7qu@yuggoth.org> <20201222171441.qhwtgncih2acnekm@yuggoth.org> Message-ID: <6cfd753b-20dc-4712-bc7d-ab6896c39b82@www.fastmail.com> On Tue, Dec 22, 2020, at 9:14 AM, Jeremy Stanley wrote: > On 2020-12-22 11:33:28 -0300 (-0300), Sorin Sbarnea wrote: > > Could we find a way to extend the ability to alter queue to a > > select group of non-zuul admins? 
> > > > I personally find the need to ping zuul admins about queue > > alteration problematic for at least two reasons: > > > > - increases load of zuul-admins, which likely have other more > > pressing (or interesting) issues to deal with > > - it depends directly on the availability of zuul admins, which is > > not really a 24x7 service, not even a 24x5. > > > > If we would have a token that allow some of us to use the new zuul > > client to put some patches on top of the queue, we could likely > > avoid having to depend on other humans for unblocking some > > pipelines. > > > > As these operations would be very easy to track, I doubt this > > would be abused. > > > > Currently that is achievable only by admins with something like: > > > > zuul promote --tenant openstack --pipeline gate --changes 123,1 > > We quite often also precede it with `zuul enqueue ...` to put the > change into the gate pipeline, either because the changes in > question have preexisting Verified -1/-2 due to unrelated failures, > or no Verified vote because it hasn't completed check pipeline jobs. > > > What if we can also have some power users? aka queue > > owners/stewards? How hard it would be? > > Currently we access the scheduler's RPC socket via sudo locally on > the server (the CLI utility writes to a named pipe owned by the > zuuld user). An alternative would be to set up authentication for > the REST API, which the client supports but we haven't used in > OpenDev yet. Worth noting that the only scoping available to us in the current authenticate setup for Zuul is tenant scoping. This means any tokens issued to allow promote for tripleo would allow it for all openstack tenant projects. The token would also be able to perform autohold, enqueue/enqueue-ref, and dequeue/dequeue-ref in addition to promote. I don't think these permissions are currently fine grained enough to work in our current tenant setup. https://zuul-ci.org/docs/zuul/discussion/tenant-scoped-rest-api.html > > However, I question whether it's worthwhile spending time > engineering a two-tiered administrative solution for our scheduler. > This comes up once or maybe twice a month, takes only a minute, and > never really has immediate urgency (except for security fixes which > are generally scheduled and coordinated with someone well in > advance), so it's not a particular burden on the current sysadmin > team and there's generally someone around within 24 hours or less to > process a request of that nature. As previously discussed, if it > were particularly urgent, there are other remedies available to core > review teams already. It is worth remembering that Zuul's original (and arguably primary) method of receiving instruction is via the code review systems it listens to. Zuul supports the reorganization of queues via code review system state changes as a result. They are clunky and expensive, but that reflects the cost on both sides of a promotion. Dumping all existing job states and test nodes on the zuul and nodepool side in order to create a new queue state is an expensive operation for Zuul too. I don't think it is necessarily a bug to bubble that pain up to the users. Promotions should be done infrequently when necessary to avoid this resource thrashing. Ideally, projects would instead prioritize work on the review side and ensure that things are only approved when they are expected to pass gating and merge. 
> -- > Jeremy Stanley From gmann at ghanshyammann.com Tue Dec 22 18:40:07 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 22 Dec 2020 12:40:07 -0600 Subject: [stable][grenade][qa][nova][swift] s-proxy unable to start due to missing runtime deps In-Reply-To: <20201222170602.bqlwjlikjwdhoc7c@lyarwood-laptop.usersys.redhat.com> References: <20201222170602.bqlwjlikjwdhoc7c@lyarwood-laptop.usersys.redhat.com> Message-ID: <1768bc057cf.f1a8241c453709.6689913672331717101@ghanshyammann.com> ---- On Tue, 22 Dec 2020 11:06:02 -0600 Lee Yarwood wrote ---- > Hello all, > > I wanted to raise awareness of the following issue and to seek some > feedback on my approach to workaround it: > > ImportError: No module named keystonemiddleware.auth_token > https://bugs.launchpad.net/swift/+bug/1909018 > > This was introduced after I landed the following devstack backport > stopping projects from installing their test-requirements.txt deps: > > Stop installing test-requirements with projects > https://review.opendev.org/q/I8f24b839bf42e2fb9803dc7df3a30ae20cf264eb > > For the time being to workaround this in various other gates I've > suggested that we disable Swift in Grenade on stable/train: > > zuul: Disable swift services until bug #1909018 is resolved > https://review.opendev.org/c/openstack/grenade/+/768224 > > This finally allowed openstack/nova to pass on stable/train with the > following changes to lower-constraints.txt and test-requirements.txt: > > [stable-only] Cap bandit to 1.6.2 and raise hacking, flake8 and stestr > https://review.opendev.org/c/openstack/nova/+/766171/ > > Are there any objections to disabling Swift in Grenade for the time > being on stable/train? > > Would anyone have any objections to also disabling it on stable/stein > via devstack-gate? Thanks, Lee for reporting this. keystonemiddleware is listed as an extras requirement in swift - https://github.com/openstack/swift/blob/e0d46d77fa740768f1dd5b989a63be85ff1fec20/setup.cfg#L79 But devstack does not install any extras requirement for swift. I am trying to install the swift's keystone extras and see if it work fine. - https://review.opendev.org/q/I02c692e95d70017eea03d82d75ae6c5e87bde8b1 NOTE, this is an issue for swift running on py2 env which is what <=stable/train is. That is why we can see swift-dsvm-functional job failing and swift-dsvm-functional-py3 passing on ussuri gate. >From Victoria onwards, swift-dsvm-functional-py3 and swift-dsvm-functional both running on py3 env so not causing any issue on victoria onwards gate. One more thing I observed, swift-dsvm-functional-py3 is removed from swift side which leads that swift-dsvm-functional-py3 is being skipped in devstack gate even though it is present in the check pipeline. 
I am removing this confusing job from devstak gate - https://review.opendev.org/c/openstack/devstack/+/768244 -gmann > > Many thanks in advance, > > -- > Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 > From openstack at nemebean.com Tue Dec 22 19:39:10 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 22 Dec 2020 13:39:10 -0600 Subject: oslo message direct_mandatory_flag question In-Reply-To: References: Message-ID: On 12/22/20 10:33 AM, Satish Patel wrote: > As per advice i have tired to add following snippet to fix this issue > > [DEFAULT] > transport_url = > rabbit://senlin:94d7aecb853145779db8f1dcb at 10.65.6.176:5671//senlin?ssl=1 > > [oslo_messaging_rabbit] > ssl = True > direct_mandatory_flag = False > > But i got error when trying to set direct_mandatory_flag option > > ERROR oslo_service.service [-] Error starting t > hread.: oslo_config.cfg.ConfigFileValueError: Value for option > direct_mandatory_flag from LocationInfo(location= True)>, detail='/etc/senlin/senlin.conf') is not > valid: invalid literal for int() with base 10: 'False' > > Any idea what is going on here? > It's a bug in the opt definition[0]. We appear to have created an IntOpt that expects a boolean value. It's possible you could work around the problem by setting it to 0 instead of False, but that's assuming there isn't any other code that requires an actual boolean. I opened a bug[1] for the opt definition problem since it's separate from the existing bug about the mandatory flag. 0: https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_rabbit.py#L172 1: https://bugs.launchpad.net/oslo.messaging/+bug/1909036 From satish.txt at gmail.com Tue Dec 22 19:48:24 2020 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 22 Dec 2020 14:48:24 -0500 Subject: oslo message direct_mandatory_flag question In-Reply-To: References: Message-ID: Cool, thanks! I will try and see if that fixes it. Thank you again. I didn't noticed that input value just read doc and follow instructions. On Tue, Dec 22, 2020 at 2:39 PM Ben Nemec wrote: > > > > On 12/22/20 10:33 AM, Satish Patel wrote: > > As per advice i have tired to add following snippet to fix this issue > > > > [DEFAULT] > > transport_url = > > rabbit://senlin:94d7aecb853145779db8f1dcb at 10.65.6.176:5671//senlin?ssl=1 > > > > [oslo_messaging_rabbit] > > ssl = True > > direct_mandatory_flag = False > > > > But i got error when trying to set direct_mandatory_flag option > > > > ERROR oslo_service.service [-] Error starting t > > hread.: oslo_config.cfg.ConfigFileValueError: Value for option > > direct_mandatory_flag from LocationInfo(location= > True)>, detail='/etc/senlin/senlin.conf') is not > > valid: invalid literal for int() with base 10: 'False' > > > > Any idea what is going on here? > > > > It's a bug in the opt definition[0]. We appear to have created an IntOpt > that expects a boolean value. It's possible you could work around the > problem by setting it to 0 instead of False, but that's assuming there > isn't any other code that requires an actual boolean. > > I opened a bug[1] for the opt definition problem since it's separate > from the existing bug about the mandatory flag. 
> > 0: > https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_rabbit.py#L172 > 1: https://bugs.launchpad.net/oslo.messaging/+bug/1909036 From gmann at ghanshyammann.com Tue Dec 22 22:09:56 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 22 Dec 2020 16:09:56 -0600 Subject: [stable][grenade][qa][nova][swift] s-proxy unable to start due to missing runtime deps In-Reply-To: <1768bc057cf.f1a8241c453709.6689913672331717101@ghanshyammann.com> References: <20201222170602.bqlwjlikjwdhoc7c@lyarwood-laptop.usersys.redhat.com> <1768bc057cf.f1a8241c453709.6689913672331717101@ghanshyammann.com> Message-ID: <1768c8071d4.1064de03a458204.3580547345653933493@ghanshyammann.com> ---- On Tue, 22 Dec 2020 12:40:07 -0600 Ghanshyam Mann wrote ---- > ---- On Tue, 22 Dec 2020 11:06:02 -0600 Lee Yarwood wrote ---- > > Hello all, > > > > I wanted to raise awareness of the following issue and to seek some > > feedback on my approach to workaround it: > > > > ImportError: No module named keystonemiddleware.auth_token > > https://bugs.launchpad.net/swift/+bug/1909018 > > > > This was introduced after I landed the following devstack backport > > stopping projects from installing their test-requirements.txt deps: > > > > Stop installing test-requirements with projects > > https://review.opendev.org/q/I8f24b839bf42e2fb9803dc7df3a30ae20cf264eb > > > > For the time being to workaround this in various other gates I've > > suggested that we disable Swift in Grenade on stable/train: > > > > zuul: Disable swift services until bug #1909018 is resolved > > https://review.opendev.org/c/openstack/grenade/+/768224 > > > > This finally allowed openstack/nova to pass on stable/train with the > > following changes to lower-constraints.txt and test-requirements.txt: > > > > [stable-only] Cap bandit to 1.6.2 and raise hacking, flake8 and stestr > > https://review.opendev.org/c/openstack/nova/+/766171/ > > > > Are there any objections to disabling Swift in Grenade for the time > > being on stable/train? > > > > Would anyone have any objections to also disabling it on stable/stein > > via devstack-gate? > > Thanks, Lee for reporting this. > > keystonemiddleware is listed as an extras requirement in swift > - https://github.com/openstack/swift/blob/e0d46d77fa740768f1dd5b989a63be85ff1fec20/setup.cfg#L79 > > But devstack does not install any extras requirement for swift. I am trying to install > the swift's keystone extras and see if it work fine. > > - https://review.opendev.org/q/I02c692e95d70017eea03d82d75ae6c5e87bde8b1 This fix working fine tested in https://review.opendev.org/c/openstack/swift/+/766214 grenade job will be working once we merge the devstack fixes in stable branches -gmann > > NOTE, this is an issue for swift running on py2 env which is what <=stable/train is. That is why we > can see swift-dsvm-functional job failing and swift-dsvm-functional-py3 passing on ussuri gate. > > From Victoria onwards, swift-dsvm-functional-py3 and swift-dsvm-functional both running on py3 env > so not causing any issue on victoria onwards gate. > > One more thing I observed, swift-dsvm-functional-py3 is removed from swift side > which leads that swift-dsvm-functional-py3 is being skipped in devstack gate even though it is present > in the check pipeline. 
I am removing this confusing job from devstak gate > - https://review.opendev.org/c/openstack/devstack/+/768244 > > -gmann > > > > > Many thanks in advance, > > > > -- > > Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 > > > > From lyarwood at redhat.com Wed Dec 23 09:54:07 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 23 Dec 2020 09:54:07 +0000 Subject: [stable][grenade][qa][nova][swift] s-proxy unable to start due to missing runtime deps In-Reply-To: <1768c8071d4.1064de03a458204.3580547345653933493@ghanshyammann.com> References: <20201222170602.bqlwjlikjwdhoc7c@lyarwood-laptop.usersys.redhat.com> <1768bc057cf.f1a8241c453709.6689913672331717101@ghanshyammann.com> <1768c8071d4.1064de03a458204.3580547345653933493@ghanshyammann.com> Message-ID: <20201223095407.jdlkppm66t7gizye@lyarwood-laptop.usersys.redhat.com> On 22-12-20 16:09:56, Ghanshyam Mann wrote: > ---- On Tue, 22 Dec 2020 12:40:07 -0600 Ghanshyam Mann wrote ---- > > ---- On Tue, 22 Dec 2020 11:06:02 -0600 Lee Yarwood wrote ---- > > > Hello all, > > > > > > I wanted to raise awareness of the following issue and to seek some > > > feedback on my approach to workaround it: > > > > > > ImportError: No module named keystonemiddleware.auth_token > > > https://bugs.launchpad.net/swift/+bug/1909018 > > > > > > This was introduced after I landed the following devstack backport > > > stopping projects from installing their test-requirements.txt deps: > > > > > > Stop installing test-requirements with projects > > > https://review.opendev.org/q/I8f24b839bf42e2fb9803dc7df3a30ae20cf264eb > > > > > > For the time being to workaround this in various other gates I've > > > suggested that we disable Swift in Grenade on stable/train: > > > > > > zuul: Disable swift services until bug #1909018 is resolved > > > https://review.opendev.org/c/openstack/grenade/+/768224 > > > > > > This finally allowed openstack/nova to pass on stable/train with the > > > following changes to lower-constraints.txt and test-requirements.txt: > > > > > > [stable-only] Cap bandit to 1.6.2 and raise hacking, flake8 and stestr > > > https://review.opendev.org/c/openstack/nova/+/766171/ > > > > > > Are there any objections to disabling Swift in Grenade for the time > > > being on stable/train? > > > > > > Would anyone have any objections to also disabling it on stable/stein > > > via devstack-gate? > > > > Thanks, Lee for reporting this. > > > > keystonemiddleware is listed as an extras requirement in swift > > - https://github.com/openstack/swift/blob/e0d46d77fa740768f1dd5b989a63be85ff1fec20/setup.cfg#L79 > > > > But devstack does not install any extras requirement for swift. I am trying to install > > the swift's keystone extras and see if it work fine. > > > > - https://review.opendev.org/q/I02c692e95d70017eea03d82d75ae6c5e87bde8b1 > > This fix working fine tested in https://review.opendev.org/c/openstack/swift/+/766214 > > grenade job will be working once we merge the devstack fixes in stable branches ACK thanks, I hope you don't mind but I've addressed some nits raised in the review this morning. I'll repropose backports once it's in the gate. -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mnaser at vexxhost.com Wed Dec 23 14:04:54 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 23 Dec 2020 09:04:54 -0500 Subject: [tc] weekly meeting skip Message-ID: Hi there, Due to the upcoming holidays, we are skipping the next TC meetings on Dec 24th and Dec 31st. We will resume with our regular schedule on Thursday, January 7th. Thank you & happy holidays! Regards Mohammed -- Mohammed Naser VEXXHOST, Inc. From mnaser at vexxhost.com Wed Dec 23 14:28:51 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 23 Dec 2020 09:28:51 -0500 Subject: [tc] weekly meeting summary Message-ID: Hi everyone, Here's a summary of what happened in our TC weekly meeting last Thursday, Dec 17th. # ATTENDEES (LINES SAID) 1. mnaser (82) 2. gmann (46) 3. diablo_rojo (25) 4. jungleboyj (18) 5. fungi (10) 6. belmoreira (6) 7. ricolin (4) # MEETING SUMMARY 1. Rollcall 2. Skipping the next meetings. Due to the upcoming holidays, we are skipping the next two meetings on Dec 24 and Dec 31st. Mnaser to send an email with details. 3. Follow up on past action items - mnaser send email to ML to find volunteers to help drive goal selection: email sent, and no one strongly opposed. mnaser will write a proposed goal for X about stabilization/cooldown. - gmann complete retirement of searchlight & qinling: DONE - diablo_rojo complete retirement of karbor: In progress, getting there. 4. Audit SIG list and chairs (diablo_rojo) - This list is out of date and a lot of chairs are not active anymore. If any SIG has completed their purpose then we move them to an advisory state, otherwise close. diablo_rojo reach out to SIGs/ML and start auditing states of SIGs 5. Annual report suggestions (diablo_rojo) - mnaser and diablo_rojo are working on the annual report for OpenStack, ideas are welcome. Once it's mostly written a draft will be sent out before submitting it. 6. X cycle goal selection start This item has to go with the action item above regarding the stabilization goal proposal. We'll remove this item from the agenda. 7. Audit and clean-up tags (gmann) - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019505.html gmann started ML on API tag and will see how many projects start and will continue on other tag audits in parallel. We'll remove sub-items in there in the meantime. gmann continue to audit tags + outreach to the community to apply for them 8. X cycle release name vote recording (gmann) - Votes are recorded in ML, mnaser will remove X cycle release name vote recording from the agenda. 9. CentOS 8 releases are discontinued / switch to CentOS 8 Stream (gmann/yoctozepto) - We have an action item to get the community to get together and the QA team is doing the right thing. We'll remove centos 8 topic from the upcoming agenda. 10. 
Open reviews - https://review.opendev.org/q/projects:openstack/governance+is:open - https://review.opendev.org/c/openstack/governance/+/759904 - https://review.opendev.org/c/openstack/governance/+/759904/7/resolutions/20201028-openstackclient-tc-policy.rst#11 # ACTION ITEMS - mnaser send out email about skipping both upcoming meetings - mnaser write a proposed goal for X about stabilization/cooldown - diablo_rojo complete retirement of karbor - diablo_rojo reach out to SIGs/ML and start auditing states of SIGs - mnaser drop X cycle goal selection start from agenda - gmann continue to audit tags + outreach to the community to apply for them - mnaser drop X cycle release name vote recording - mnaser remove centos 8 topic from upcoming agenda To read the full logs of the meeting, please refer to http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-12-17-15.00.log.html -- Mohammed Naser VEXXHOST, Inc. From rosmaita.fossdev at gmail.com Wed Dec 23 17:07:20 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 23 Dec 2020 12:07:20 -0500 Subject: [cinder] meeting time change poll In-Reply-To: <4f13a3a6-6acc-48a5-99c6-86fa7a944759@gmail.com> References: <4f13a3a6-6acc-48a5-99c6-86fa7a944759@gmail.com> Message-ID: <99c30b8a-8ec5-1db0-1d21-5e2ca65e4d32@gmail.com> The result of the poll is that the time of the cinder weekly meeting will NOT change. The next meeting is 6 January 2021 at 1400 UTC in #openstack-meeting-alt. On 12/16/20 11:24 PM, Brian Rosmaita wrote: > At today's meeting, Lucio proposed moving the cinder weekly meeting > time.  Please respond to this poll to assess some options before Tuesday > 22 December at 1200 UTC. > > https://rosmaita.wufoo.com/forms/wallaby-cinder-meeting-time-poll/ > > There's a free-form field on the form so you can propose other > alternatives.  We aren't considering changing the day of the meeting > (Wednesday) at this time, but that's a possibility if it would encourage > more attendance. > > NOTE: next week's meeting, Wednesday 23 December, will be held at the > usual time of 1400 UTC. > > > cheers, > brian From rosmaita.fossdev at gmail.com Wed Dec 23 17:26:23 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 23 Dec 2020 12:26:23 -0500 Subject: [cinder] limited review bandwidth until 4 January 2021 Message-ID: <3aac0faa-5404-810d-578b-211484791416@gmail.com> Just a reminder that most cinder cores will not be active from tomorrow (24 December) through 4 January 2021. Happy holidays! brian From rosmaita.fossdev at gmail.com Wed Dec 23 17:27:09 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 23 Dec 2020 12:27:09 -0500 Subject: [cinder] patches for lower-constraints Message-ID: As discussed at today's cinder meeting, there are patches available that address the lower constraints problems arising from the upgraded pip dependency resolver for most cinder deliverables. Except for master, I've tried to keep the changes minimal. One possibly controversial change has been to add indirect dependencies to test-requirements.txt in a few cases where the version range was so wide that the resolver was taking a very long time to figure out a satisfactory resolution. An example of this is the cinder stable/train patch: https://review.opendev.org/c/openstack/cinder/+/767954 I'm not sure when these patches will be merged, but you can always add the appropriate one as a dependency in gerrit to get the check jobs to pass on your patch. 
You can find them by using the gerrit hashtag 'fix-l-c': https://review.opendev.org/q/hashtag:fix-l-c Apparently only the owner of a review can add a hashtag, so os-brick/master (merged, owned by lpetrut) and the patches for the non-releaseable stable branches that e0ne is working on aren't in the list yet. From grant at civo.com Thu Dec 24 16:33:11 2020 From: grant at civo.com (Grant Morley) Date: Thu, 24 Dec 2020 16:33:11 +0000 Subject: New nova compute nodes not correctly adding into service Message-ID: <26a37330-80e7-093e-d7d3-60f168cda4fa@civo.com> Hi All, I was wondering if anyone could point me in the right direction with an issue we are having adding in new compute nodes to our platform. We have been trying to add a few new compute nodes to increase our capacity and upon configuring them with OSA and them appearing to be in service we have noticed that no new instances are provisioning onto them. We did a test by disabling all compute nodes except for the new ones and an instance launch failed with "No valid compute hosts were found" We have enabled debug mode on Nova and also looked at the nova-api containers ( scheduler, compute etc ) and it appears there is not even an attempt to try and launch an instance on the new hosts from what we can see. The only thing we have noticed on the newly added hosts is that the "privsep-helper" doesn't appear to be started whereas on a working node it is running. We have not changed the software versions of OpenStack or OSA, we are running Queens. The newly added compute node is reporting itself back fine from what we can see:  Total usable vcpus: 72, total allocated vcpus: 0 _report_final_resource_view /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/nova/compute/resource_tracker.py:827 2020-12-24 16:26:32.895 6491 INFO nova.compute.resource_tracker [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Final resource view: name=compute-27.openstack.local phys_ram=257826MB used_ram=2048MB phys_disk=30166GB used_disk=0GB total_vcpus=72 used_vcpus=0 pci_stats=[] 2020-12-24 16:26:33.022 6491 DEBUG nova.compute.resource_tracker [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Compute_service record updated for compute-27:compute-27.openstack.local _update_available_resource /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/nova/compute/resource_tracker.py:767 2020-12-24 16:26:33.023 6491 DEBUG oslo_concurrency.lockutils [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.334s inner /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 2020-12-24 16:26:35.001 6491 DEBUG oslo_service.periodic_task [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 Does anyone know of any other logs we can check / enable that might lead us in the right direction to figure out why the new hosts are not adding in? Any help would be greatly appreciated as we are a bit stumped at the moment. Many thanks, -- Grant Morley Site Reliability Engineer, Civo Ltd Unit H-K, Gateway 1000, Whittle Way Stevenage, Herts, SG1 2FP, UK Visit us at www.civo.com Signup for an account now > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ionut at fleio.com Fri Dec 25 09:30:20 2020 From: ionut at fleio.com (Ionut Biru) Date: Fri, 25 Dec 2020 11:30:20 +0200 Subject: New nova compute nodes not correctly adding into service In-Reply-To: <26a37330-80e7-093e-d7d3-60f168cda4fa@civo.com> References: <26a37330-80e7-093e-d7d3-60f168cda4fa@civo.com> Message-ID: Hi, Check the nova scheduler log. You will see what filter is failing. That's a good starting point. Maybe you have an aggregate group and you need to add the new compute node into the group. On Thu, 24 Dec 2020 at 18:38 Grant Morley wrote: > Hi All, > > I was wondering if anyone could point me in the right direction with an > issue we are having adding in new compute nodes to our platform. We have > been trying to add a few new compute nodes to increase our capacity and > upon configuring them with OSA and them appearing to be in service we have > noticed that no new instances are provisioning onto them. We did a test by > disabling all compute nodes except for the new ones and an instance launch > failed with "No valid compute hosts were found" > > We have enabled debug mode on Nova and also looked at the nova-api > containers ( scheduler, compute etc ) and it appears there is not even an > attempt to try and launch an instance on the new hosts from what we can see. > > The only thing we have noticed on the newly added hosts is that the > "privsep-helper" doesn't appear to be started whereas on a working node it > is running. > > We have not changed the software versions of OpenStack or OSA, we are > running Queens. > > The newly added compute node is reporting itself back fine from what we > can see: > > Total usable vcpus: 72, total allocated vcpus: 0 > _report_final_resource_view > /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/nova/compute/resource_tracker.py:827 > 2020-12-24 16:26:32.895 6491 INFO nova.compute.resource_tracker > [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Final resource view: > name=compute-27.openstack.local phys_ram=257826MB used_ram=2048MB > phys_disk=30166GB used_disk=0GB total_vcpus=72 used_vcpus=0 pci_stats=[] > 2020-12-24 16:26:33.022 6491 DEBUG nova.compute.resource_tracker > [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Compute_service record > updated for compute-27:compute-27.openstack.local > _update_available_resource > /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/nova/compute/resource_tracker.py:767 > 2020-12-24 16:26:33.023 6491 DEBUG oslo_concurrency.lockutils > [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Lock > "compute_resources" released by > "nova.compute.resource_tracker._update_available_resource" :: held 0.334s > inner > /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 > 2020-12-24 16:26:35.001 6491 DEBUG oslo_service.periodic_task > [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Running periodic task > ComputeManager._poll_rebooting_instances run_periodic_tasks > /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 > > Does anyone know of any other logs we can check / enable that might lead > us in the right direction to figure out why the new hosts are not adding in? > > Any help would be greatly appreciated as we are a bit stumped at the > moment. 
> > Many thanks, > > > -- > Grant Morley > Site Reliability Engineer, Civo Ltd > Unit H-K, Gateway 1000, Whittle Way > Stevenage, Herts, SG1 2FP, UK > Visit us at www.civo.com > Signup for an account now > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Dec 25 21:41:10 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 25 Dec 2020 16:41:10 -0500 Subject: New nova compute nodes not correctly adding into service In-Reply-To: <26a37330-80e7-093e-d7d3-60f168cda4fa@civo.com> References: <26a37330-80e7-093e-d7d3-60f168cda4fa@civo.com> Message-ID: The nova.filter log entry for the nova-scheduler should tell you why computes are being filtered out. You can also check the flavors you are using to see if it's targeting a host-aggregate. On Thu, Dec 24, 2020 at 11:40 AM Grant Morley wrote: > Hi All, > > I was wondering if anyone could point me in the right direction with an > issue we are having adding in new compute nodes to our platform. We have > been trying to add a few new compute nodes to increase our capacity and > upon configuring them with OSA and them appearing to be in service we have > noticed that no new instances are provisioning onto them. We did a test by > disabling all compute nodes except for the new ones and an instance launch > failed with "No valid compute hosts were found" > > We have enabled debug mode on Nova and also looked at the nova-api > containers ( scheduler, compute etc ) and it appears there is not even an > attempt to try and launch an instance on the new hosts from what we can see. > > The only thing we have noticed on the newly added hosts is that the > "privsep-helper" doesn't appear to be started whereas on a working node it > is running. > > We have not changed the software versions of OpenStack or OSA, we are > running Queens. 
> > The newly added compute node is reporting itself back fine from what we > can see: > > Total usable vcpus: 72, total allocated vcpus: 0 > _report_final_resource_view > /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/nova/compute/resource_tracker.py:827 > 2020-12-24 16:26:32.895 6491 INFO nova.compute.resource_tracker > [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Final resource view: > name=compute-27.openstack.local phys_ram=257826MB used_ram=2048MB > phys_disk=30166GB used_disk=0GB total_vcpus=72 used_vcpus=0 pci_stats=[] > 2020-12-24 16:26:33.022 6491 DEBUG nova.compute.resource_tracker > [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Compute_service record > updated for compute-27:compute-27.openstack.local > _update_available_resource > /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/nova/compute/resource_tracker.py:767 > 2020-12-24 16:26:33.023 6491 DEBUG oslo_concurrency.lockutils > [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Lock > "compute_resources" released by > "nova.compute.resource_tracker._update_available_resource" :: held 0.334s > inner > /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 > 2020-12-24 16:26:35.001 6491 DEBUG oslo_service.periodic_task > [req-6c58a97e-b7cb-4987-a219-da5b1ce156f8 - - - - -] Running periodic task > ComputeManager._poll_rebooting_instances run_periodic_tasks > /openstack/venvs/nova-17.1.2/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 > > Does anyone know of any other logs we can check / enable that might lead > us in the right direction to figure out why the new hosts are not adding in? > > Any help would be greatly appreciated as we are a bit stumped at the > moment. > > Many thanks, > -- > Grant Morley > Site Reliability Engineer, Civo Ltd > Unit H-K, Gateway 1000, Whittle Way > Stevenage, Herts, SG1 2FP, UK > Visit us at www.civo.com > Signup for an account now > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Dec 25 22:35:24 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 25 Dec 2020 16:35:24 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update Message-ID: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com> Hello Everyone, Please find the week's R-16 updates on 'Migrate RBAC Policy Format from JSON to YAML' community-wide goals. Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) Progress: ======= * Projects completed: 5 * Projects left to merge the patches: 25 * Projects left to push the patches: 2 (horizon and Openstackansible) * Projects do not need any work: 17 Updates: ======= * I have pushed the patches for all the required service projects. ** Because of many services gate is already broken for lower constraints job, these patches might not be green in the test results. I request projects to fix the gate so that we can merge this goal work before m-2. ** There are many project tests where CONF object was not fully initialized before the policy is init. This was working till now as policy init did not use the CONF object but oslo_policy 3.6.0 onwards it needs fully initialized CONF object during init only. 
** Aodh work for this goal is blocked because it needs oslo_policy 3.6.0 but gnocchi is capped for oslo_policy 3.4.0 [1] - https://review.opendev.org/c/openstack/aodh/+/768499 * Horizon and Openstackansible work is pending to use/deploy the YAML formatted policy file. I will start exploring this next week or so. [1] https://github.com/gnocchixyz/gnocchi/blob/e19fda590c7f7f07f1df0ba93177df07d9802300/setup.cfg#L33 Merry Christmas and Happy Holidays! -gmann From doka.ua at gmx.com Sat Dec 26 12:30:10 2020 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Sat, 26 Dec 2020 14:30:10 +0200 Subject: grouping resources on specific set of nodes Message-ID: <37abf895-b569-971c-7801-05f424fdc84c@gmx.com> Hi colleagues, I can't realize how to do the simple-looking thing: there are few locations with small amount of compute/storage/network resources (like small branches with 3 physical servers for both compute, storage and network roles) and I want to see all these locations as single cloud and manage them centrally using single "controller cluster", while localizing resource usage to the specific location. _Kind of_ assign specific project to location "A" and, thus, all resources (VMs, networks, storage resources, ...) in this project will be allocated from nodes, physically located in location "A". Are there any ways to implement this kind of behavior in Openstack? Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison From ankelezhang at gmail.com Sat Dec 26 01:22:37 2020 From: ankelezhang at gmail.com (Ankele zhang) Date: Sat, 26 Dec 2020 09:22:37 +0800 Subject: something abount ironic-python-agent Message-ID: Hi ~ I have deployed OpenStack platform in Rocky version and integreted Ironic(Rocky) into it according to the official documents. I download the coreOS vmlinuz and ramdisk images from https://tarballs.opendev.org/openstack/ironic-python-agent/coreos/files/. and if I don't clean my devices I can deploy baremetal node server currently. When I cleaning my disk using "openstack baremetal node clean --clean-steps '[{"interface": "deploy", "step": "erase_devices"}]' my_bm", I got an error message "Clean step failed: Error performing clean_step erase_devices: No HardwareManager found to handle method: Could not find method: erase_block_device" from ironic-conductor.log and I got the same error in coreOS by "sudo journalctl -u ironic-python-agent -f". I download the Rocky ironic-python-agent source code from github and I search 'erase_block_devices' ,it returned from hardware.py +781. So, the first problem , I don't know why I got this error message. I downloaded the source code of the ironic-python-agent and tried to build the coreOS image by 'Makefile', but I always got error as following, this is my second problem. [image: image.png] Finally, I want to code my own 'custom HardwareManager' to support 'raid config' for the ironic-python-agent, but I don't know how to get started. Looking forward to your help. Thank you. Ankele -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 166568 bytes Desc: not available URL: From gmann at ghanshyammann.com Sat Dec 26 19:13:06 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 26 Dec 2020 13:13:06 -0600 Subject: [qa][tempest-plugins][release][stable] Releasing stein-em tag for Tempest plugins Message-ID: <176a077fa2f.c3481d03540762.4748799635706391055@ghanshyammann.com> Hello Everyone, As the stable/stein branch is in Extended Maintenance now[1], Tempest has dropped the support of stable/stein[2]. Tempest and its plugins are branchless which means the master version of Tempest and its plugins are used to test the supported stable branches. Once the stable branch is moved to EM state then, Tempest and its plugins compatible tag needs to be released so that we can keep testing the EM stable branches with this tag once the master Tempest and its plugins are not compatible[3]. This tag can be used as the latest compatible version of Tempest and its plugins for testing the stable/stein in upstream as well as production cloud testing. I have proposed all release patches: - https://review.opendev.org/q/topic:%22tempest-plugin-stein-em%22+(status:open%20OR%20status:merged) [1] https://releases.openstack.org/stein/index.html [2] https://review.opendev.org/c/openstack/tempest/+/766770 [3] https://docs.openstack.org/tempest/latest/stable_branch_support_policy.html -gmann From zaitcev at redhat.com Sat Dec 26 20:48:27 2020 From: zaitcev at redhat.com (Pete Zaitcev) Date: Sat, 26 Dec 2020 14:48:27 -0600 Subject: ssh authentication error with Gerrit Message-ID: <20201226144827.04d3b028@suzdal.zaitcev.lan> Hello: Does anyone here happen to know how to deal with something like this: ........... debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ssh-rsa SHA256:RXNl/GKyDaKiIQ93BoDvrNSKUPFvA1PNeAO9QiirYZU debug1: Host '[review.opendev.org]:29418' is known and matches the RSA host key. debug1: Found key in /q/zaitcev/.ssh/known_hosts:133 debug1: rekey out after 4294967296 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: rekey in after 4294967296 blocks debug1: Will attempt key: /q/zaitcev/.ssh/id_rsa_ostk2014 RSA SHA256:nz5*** explicit agent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering public key: /q/zaitcev/.ssh/id_rsa_ostk2014 RSA SHA256:nz5*** explicit agent debug1: send_pubkey_test: no mutual signature algorithm debug1: No more authentication methods to try. zaitcev at review.opendev.org: Permission denied (publickey). [zaitcev at suzdal swift-dark]$ I guess that ssh client in Fedora 33 has a cipher suite that has no intersection with the ssh server at review.opendev.org. But I do not understand what the server is offering, so I do not know what I need to enable. Thanks in advance, -- Pete From fungi at yuggoth.org Sat Dec 26 21:14:10 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 26 Dec 2020 21:14:10 +0000 Subject: [infra] ssh authentication error with Gerrit In-Reply-To: <20201226144827.04d3b028@suzdal.zaitcev.lan> References: <20201226144827.04d3b028@suzdal.zaitcev.lan> Message-ID: <20201226211409.k3ftgstjmmsfx5q6@yuggoth.org> On 2020-12-26 14:48:27 -0600 (-0600), Pete Zaitcev wrote: > Does anyone here happen to know how to deal with something like > this: [...] 
> debug1: Offering public key: /q/zaitcev/.ssh/id_rsa_ostk2014 RSA SHA256:nz5*** explicit agent > debug1: send_pubkey_test: no mutual signature algorithm > debug1: No more authentication methods to try. [...] > I guess that ssh client in Fedora 33 has a cipher suite that has > no intersection with the ssh server at review.opendev.org. But I > do not understand what the server is offering, so I do not know > what I need to enable. You're basically on track with your assumptions. OpenSSH 8.4 (client included in Fedora 33) has deprecated[*] ssh-rsa authentication because it relies on SHA-1 hashes but Fedora decided[**] to go a step further and update their own crypto policy to just go ahead and break it completely. You might try and see whether the UpdateHostKeys option works around this (our current Gerrit version does have SHA-2 support for RSA keys). Supposedly, switching to using elliptic curve keys (ed25519 or ecdsa) is another way to solve it. If that doesn't do the trick, you can add a host entry for review.opendev.org in your ~/.ssh/config file to set PubkeyAcceptedKeyTypes +rsa-sha2-256,rsa-sha2-512 so that it will look for them. There are also ways to downgrade the security of your connections, but I won't enumerate them here since you presumably chose Fedora 33 for a reason and I would rather not argue against their system security choices. [*] https://www.openssh.com/releasenotes.html [**] https://fedoraproject.org/wiki/Changes/StrongCryptoSettings2 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zaitcev at redhat.com Sat Dec 26 21:35:29 2020 From: zaitcev at redhat.com (Pete Zaitcev) Date: Sat, 26 Dec 2020 15:35:29 -0600 Subject: [infra] ssh authentication error with Gerrit In-Reply-To: <20201226211409.k3ftgstjmmsfx5q6@yuggoth.org> References: <20201226144827.04d3b028@suzdal.zaitcev.lan> <20201226211409.k3ftgstjmmsfx5q6@yuggoth.org> Message-ID: <20201226153529.04fc5f1a@suzdal.zaitcev.lan> On Sat, 26 Dec 2020 21:14:10 +0000 Jeremy Stanley wrote: > > debug1: send_pubkey_test: no mutual signature algorithm > > debug1: No more authentication methods to try. > You're basically on track with your assumptions. OpenSSH 8.4 (client > included in Fedora 33) has deprecated[*] ssh-rsa authentication > because it relies on SHA-1 hashes but Fedora decided[**] to go a > step further and update their own crypto policy to just go ahead and > break it completely. > [*] https://www.openssh.com/releasenotes.html Jeremy, thanks a lot. That's a piece of documentation that I didn't think to check. I was able to submit my review to Gerrit by allowing ssh-rsa with PubkeyAcceptedKeyTypes. -- Pete From kaifeng.w at gmail.com Sun Dec 27 06:04:38 2020 From: kaifeng.w at gmail.com (Kaifeng Wang) Date: Sun, 27 Dec 2020 14:04:38 +0800 Subject: something abount ironic-python-agent In-Reply-To: References: Message-ID: On Sun, Dec 27, 2020 at 12:12 AM Ankele zhang wrote: > Hi ~ > I have deployed OpenStack platform in Rocky version and integreted > Ironic(Rocky) into it according to the official documents. I download the > coreOS vmlinuz and ramdisk images from > https://tarballs.opendev.org/openstack/ironic-python-agent/coreos/files/. > and if I don't clean my devices I can deploy baremetal node server > currently. 
When I cleaning my disk using "openstack baremetal node clean > --clean-steps '[{"interface": "deploy", "step": "erase_devices"}]' my_bm", > I got an error message "Clean step failed: Error performing clean_step > erase_devices: No HardwareManager found to handle method: Could not find > method: erase_block_device" from ironic-conductor.log and I got the same > error in coreOS by "sudo journalctl -u ironic-python-agent -f". I download > the Rocky ironic-python-agent source code from github and I search > 'erase_block_devices' ,it returned from hardware.py +781. So, the first > problem , I don't know why I got this error message. > CoreOS based ramdisk was not maintained for a while, you could try a rocky release at https://tarballs.opendev.org/openstack/ironic-python-agent/dib/ > I downloaded the source code of the ironic-python-agent and tried to build > the coreOS image by 'Makefile', but I always got error as following, this > is my second problem. > > Building an IPA ramdisk is moved from diskimage-builder to ironic-python-agent-builder, you can find the document here: https://docs.openstack.org/ironic-python-agent-builder/latest/ > > Finally, I want to code my own 'custom HardwareManager' to support 'raid > config' for the ironic-python-agent, but I don't know how to get started. > > For an in-band RAID support, you'll need to create a new hardware manager and implement some deploy steps to support RAID configuration, specifically, the create_configuration and delete_configuration steps. The following links may provide some starting points for you. https://docs.openstack.org/ironic/latest/contributor/deploy-steps.html https://docs.openstack.org/ironic/latest/admin/cleaning.html#raid-interface Hope this helps. > Looking forward to your help. Thank you. > > Ankele > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaifeng.w at gmail.com Mon Dec 28 12:43:22 2020 From: kaifeng.w at gmail.com (Kaifeng Wang) Date: Mon, 28 Dec 2020 20:43:22 +0800 Subject: something abount ironic-python-agent In-Reply-To: References: Message-ID: On Sun, Dec 27, 2020 at 8:39 PM Ankele zhang wrote: > Hi Kaifeng: > Before receiving your reply, I have downloaded > the ronic-python-agent-builder-2.3.0, but I don't know how to build it and > how to use it, I can't find the direction for use. > And I try 'python setup.py install' to install it. and the following is my > execution. > > I don't know the optional values of 'distribution', 'release', 'element', > I hope you can help me. > > distribution refers to the base image that the ramdisk will be built on, release is used to specify the distribution variant. element is a concept of diskimage-builder, each element is handling part of the building process, you can take a look at the diskimage-builder documentation for a complete list [1]. In case further customization on the ramdisk is required, elements can be included by the argument. We have some examples on how to use the IPA-B at [2]. [1] https://docs.openstack.org/diskimage-builder/latest/elements.html [2] https://docs.openstack.org/ironic-python-agent-builder/latest/admin/dib.html Ankele > > Kaifeng Wang 于2020年12月27日周日 下午2:04写道: > >> >> >> On Sun, Dec 27, 2020 at 12:12 AM Ankele zhang >> wrote: >> >>> Hi ~ >>> I have deployed OpenStack platform in Rocky version and integreted >>> Ironic(Rocky) into it according to the official documents. I download the >>> coreOS vmlinuz and ramdisk images from >>> https://tarballs.opendev.org/openstack/ironic-python-agent/coreos/files/. 
>>> and if I don't clean my devices I can deploy baremetal node server >>> currently. When I cleaning my disk using "openstack baremetal node clean >>> --clean-steps '[{"interface": "deploy", "step": "erase_devices"}]' my_bm", >>> I got an error message "Clean step failed: Error performing clean_step >>> erase_devices: No HardwareManager found to handle method: Could not find >>> method: erase_block_device" from ironic-conductor.log and I got the same >>> error in coreOS by "sudo journalctl -u ironic-python-agent -f". I download >>> the Rocky ironic-python-agent source code from github and I search >>> 'erase_block_devices' ,it returned from hardware.py +781. So, the first >>> problem , I don't know why I got this error message. >>> >> >> CoreOS based ramdisk was not maintained for a while, you could try a >> rocky release at >> https://tarballs.opendev.org/openstack/ironic-python-agent/dib/ >> >> >>> I downloaded the source code of the ironic-python-agent and tried to >>> build the coreOS image by 'Makefile', but I always got error as following, >>> this is my second problem. >>> >>> >> Building an IPA ramdisk is moved from diskimage-builder to >> ironic-python-agent-builder, you can find the document here: >> https://docs.openstack.org/ironic-python-agent-builder/latest/ >> >> >>> >>> Finally, I want to code my own 'custom HardwareManager' to support 'raid >>> config' for the ironic-python-agent, but I don't know how to get started. >>> >>> >> For an in-band RAID support, you'll need to create a new hardware manager >> and implement some deploy steps to support RAID configuration, >> specifically, the create_configuration and delete_configuration steps. >> The following links may provide some starting points for you. >> https://docs.openstack.org/ironic/latest/contributor/deploy-steps.html >> >> https://docs.openstack.org/ironic/latest/admin/cleaning.html#raid-interface >> >> Hope this helps. >> >> >>> Looking forward to your help. Thank you. >>> >>> Ankele >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ankelezhang at gmail.com Mon Dec 28 05:42:54 2020 From: ankelezhang at gmail.com (Ankele zhang) Date: Mon, 28 Dec 2020 13:42:54 +0800 Subject: some error abount ironic-python-agent-builder Message-ID: Hi~ I have an OpenStack platform in Rocky version. I use ironic-python-agent-builder to build a tinyipa image to customing HardwareManager for 'RAID configuration' in Ironic cleaning steps. While I follow the steps ' https://docs.openstack.org/ironic-python-agent-builder/latest/admin/tinyipa.html#building-ramdisk' to build tinyipa image, it occurs error as following: [image: image.png] [image: image.png] so, what the IPA_SOURCE_DIR means? Do I need to download the source code of the ironic-python-agent and copy it to /opt/stack/ before this? Looking forward to your reply. Ankele -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 11770 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 63866 bytes Desc: not available URL: From miguel at mlavalle.com Tue Dec 29 00:07:57 2020 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 28 Dec 2020 18:07:57 -0600 Subject: [neutron] bug deputy report December 21st to 27th Message-ID: Hi, This week Slawek and I cooperated to perform the bug deputy duties. 
It was a relatively quiet week. These are the bugs triaged during the week:

High
====
https://bugs.launchpad.net/neutron/+bug/1908957 iptable rules collision deployed with k8s iptables kube-proxy enabled. Owner: Norman Shen
https://bugs.launchpad.net/neutron/+bug/1909038 [ovn] TypeError: lrp_set_options() takes 2 positional arguments but 3 were given. Owner: Flavio Fernandes
https://bugs.launchpad.net/neutron/+bug/1909234 [fullstack] "test_min_bw_qos_port_removed" failing randomly. Owner: Rodolfo Alonso

RFE
===
https://bugs.launchpad.net/neutron/+bug/1909100 [RFE] add new vnic type "cyborg"

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From emiller at genesishosting.com  Tue Dec 29 04:36:40 2020
From: emiller at genesishosting.com (Eric K. Miller)
Date: Mon, 28 Dec 2020 22:36:40 -0600
Subject: [osc][neutron] security group list slow in Stein
Message-ID: <046E9C0290DD9149B106B72FC9156BEA04814F2C@gmsxchsvr01.thecreation.com>

Hi,

We are working on upgrade planning for our Stein deployments, but since we have some time before this will be finalized, I thought I'd ask if there has been any performance improvement/fix in the "security group list" command in the OpenStack Client and/or Neutron API.

"security group rule list" is nearly instant (when authenticated to a project), and returns more info than "security group list", but "security group list" can take up to 29 seconds in the worst case when authenticated to a project. Oddly enough, when logged in as cloud admin, "security group list" only takes about 14 seconds, even though it is displaying every security group in the cloud (in our case, quite a few!).

Also, just wondering if anyone else has this issue.

Thanks!

Eric

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From noonedeadpunk at ya.ru  Tue Dec 29 07:56:22 2020
From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov)
Date: Tue, 29 Dec 2020 09:56:22 +0200
Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update
In-Reply-To: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com>
References: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com>
Message-ID: <884141609228423@mail.yandex.ru>

An HTML attachment was scrubbed...
URL:

From tkajinam at redhat.com  Tue Dec 29 15:15:43 2020
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Wed, 30 Dec 2020 00:15:43 +0900
Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update
In-Reply-To: <884141609228423@mail.yandex.ru>
References: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com> <884141609228423@mail.yandex.ru>
Message-ID:

Hello,

For Puppet OpenStack projects I have submitted a series of changes to use policy.yaml instead of policy.json[1].
[1] https://review.opendev.org/q/topic:%22policy-yaml%22+(status:open%20OR%20status:merged)

One problem I noticed while making these patches is that Gnocchi still uses policy.json. IIUC, policy-in-code is not implemented in Gnocchi, so the default contents should be migrated to policy.yaml appropriately to keep its functionality. I have submitted a pull request to replace policy.json with policy.yaml.
[2] https://github.com/gnocchixyz/gnocchi/pull/1108

I know that Gnocchi is not a part of OpenStack projects, but we should be careful about it before we make any changes in oslo.policy because Gnocchi is currently consuming the library.
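[A side note for readers following this goal: for services that already have policy-in-code, the mechanical part of the migration is converting any existing JSON overrides, and recent oslo.policy releases ship a converter for exactly that. A typical invocation looks roughly like the sketch below; the namespace and paths are examples only, and the flags should be checked against the tool's --help.]

    oslopolicy-convert-json-to-yaml --namespace glance \
        --policy-file /etc/glance/policy.json \
        --output-file /etc/glance/policy.yaml

[The resulting YAML keeps the operator's overridden rules and can then be dropped in place of the old policy.json.]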
Thank you, Takashi On Tue, Dec 29, 2020 at 4:58 PM Dmitriy Rabotyagov wrote: > Hi! > > Regarding OpenStack-Ansible I was planning to land patches early January. > We eventually need to patch every role to change "dest" and "config_type" > for placing template, ie. [1] > > Also we will need to think through removal of old json file for ppl that > will perform upgrade, to avoid any possible conflicts and confusions > because of the prescence of both files. > > [1] > https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/tasks/glance_post_install.yml#L78-L82 > > > 26.12.2020, 00:41, "Ghanshyam Mann" : > > Hello Everyone, > > Please find the week's R-16 updates on 'Migrate RBAC Policy Format from > JSON to YAML' community-wide goals. > > Tracking: > https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml > > Gerrit Topic: > https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) > > Progress: > ======= > * Projects completed: 5 > * Projects left to merge the patches: 25 > * Projects left to push the patches: 2 (horizon and Openstackansible) > * Projects do not need any work: 17 > > Updates: > ======= > * I have pushed the patches for all the required service projects. > > ** Because of many services gate is already broken for lower constraints > job, these patches might not be green in the > test results. I request projects to fix the gate so that we can merge this > goal work before m-2. > > ** There are many project tests where CONF object was not fully > initialized before the policy is init. This was working till now > as policy init did not use the CONF object but oslo_policy 3.6.0 onwards > it needs fully initialized CONF object during init only. > > ** Aodh work for this goal is blocked because it needs oslo_policy 3.6.0 > but gnocchi is capped for oslo_policy 3.4.0 [1] > - https://review.opendev.org/c/openstack/aodh/+/768499 > > * Horizon and Openstackansible work is pending to use/deploy the YAML > formatted policy file. I will start exploring this > next week or so. > > [1] > https://github.com/gnocchixyz/gnocchi/blob/e19fda590c7f7f07f1df0ba93177df07d9802300/setup.cfg#L33 > > Merry Christmas and Happy Holidays! > > -gmann > > > > > -- > Kind Regards, > Dmitriy Rabotyagov > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mighani6406 at gmail.com Tue Dec 29 09:25:46 2020 From: mighani6406 at gmail.com (mohammad mighani) Date: Tue, 29 Dec 2020 12:55:46 +0330 Subject: Instance on provider network not working Message-ID: Hello I use openstack Ussuri in Ubuntu 18 and one controller node and one compute node with two interface, management and Provider that connected to internet i install keystone, glance, placement, nova, neutron on self service option *what i can * i can create provider network and self service network and router between them i can ping router i can launch instance on self service network and it has access to internet and everything looks fine *what i can NOT* I can't launch instances on the provider network. When I launch an instance on the provider network, the provider interface of the compute node will be corrupted and not respond and the instance will not be created. error in instance : *Message *Build of instance 2a86a84b-50fc-415c-a15c-cb87398f37ea aborted: Failed to allocate the network(s), not rescheduling. 
*Code *500 *Details *Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6529, in _create_domain_and_network post_xml_callback=post_xml_callback) File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ next(self.gen) File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 513, in wait_for_instance_event actual_event = event.wait() File "/usr/lib/python3/dist-packages/eventlet/event.py", line 125, in wait result = hub.switch() File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 298, in switch return self.greenlet.switch() eventlet.timeout.Timeout: 300 seconds During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2378, in _build_and_run_instance accel_info=accel_info) File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 3663, in spawn cleanup_instance_disks=created_disks) File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6552, in _create_domain_and_network raise exception.VirtualInterfaceCreateException() nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2200, in _do_build_and_run_instance filter_properties, request_spec, accel_uuids) File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2444, in _build_and_run_instance reason=msg) nova.exception.BuildAbortException: Build of instance 2a86a84b-50fc-415c-a15c-cb87398f37ea aborted: Failed to allocate the network(s), not rescheduling. *Created *Dec. 29, 2020, 9:18 a.m. *After this I can't ping the computer from the provider network interface.* I reinstalled openstack so many times and also I tried the train and victoria version and have the same problem. I attached log files of nova compute and neutron linux bridge from compute node and neutron server from controller node. thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: neutron-linuxbridge-agent.log Type: text/x-log Size: 10984 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nova-compute.log Type: text/x-log Size: 1608 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: neutron-server.log Type: text/x-log Size: 124422 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue Dec 29 17:41:21 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 29 Dec 2020 11:41:21 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update In-Reply-To: References: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com> <884141609228423@mail.yandex.ru> Message-ID: <176af970df7.e860a15d627132.7325390507772546912@ghanshyammann.com> ---- On Tue, 29 Dec 2020 09:15:43 -0600 Takashi Kajinami wrote ---- > Hello, > For Puppet OpenStack projects I have submitted a series of changes to use policy.yaml instead of policy.json[1]. [1] https://review.opendev.org/q/topic:%22policy-yaml%22+(status:open%20OR%20status:merged) Thanks, Takashi for taking care of it. I have modified the review topic to 'policy-json-to-yaml' so that we track all work together. 
- https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) > One problem I noticed during making these patches is that Gnocchi still uses policy.json.IIUC that policy-in-code is not implemented in gnocchi and the default contents shouldbe migrated to policy.yaml appropriately to keep its functionality. > I have submitted a pull request to introduce policy.yaml to replace policy.json to replacepolicy.json by policy.yaml. > [2] https://github.com/gnocchixyz/gnocchi/pull/1108 > I know that Gnocchi is not a part of OpenStack projects but we should be careful about itbefore we make any changes in oslo.policy because Gnocchi is currently consuming the library. I agree, also gnoochi capped the oslo.policy with 3.4.0 and we need 3.6.0 for this migration. Matthias already pushed the PR for that https://github.com/gnocchixyz/gnocchi/pull/1097 I hope both of these PRs will be merged soon. -gmann > > Thank you,Takashi > > On Tue, Dec 29, 2020 at 4:58 PM Dmitriy Rabotyagov wrote: > Hi! Regarding OpenStack-Ansible I was planning to land patches early January. We eventually need to patch every role to change "dest" and "config_type" for placing template, ie. [1] Also we will need to think through removal of old json file for ppl that will perform upgrade, to avoid any possible conflicts and confusions because of the prescence of both files. [1] https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/tasks/glance_post_install.yml#L78-L82 > 26.12.2020, 00:41, "Ghanshyam Mann" :Hello Everyone, > > Please find the week's R-16 updates on 'Migrate RBAC Policy Format from JSON to YAML' community-wide goals. > > Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml > > Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) > > Progress: > ======= > * Projects completed: 5 > * Projects left to merge the patches: 25 > * Projects left to push the patches: 2 (horizon and Openstackansible) > * Projects do not need any work: 17 > > Updates: > ======= > * I have pushed the patches for all the required service projects. > > ** Because of many services gate is already broken for lower constraints job, these patches might not be green in the > test results. I request projects to fix the gate so that we can merge this goal work before m-2. > > ** There are many project tests where CONF object was not fully initialized before the policy is init. This was working till now > as policy init did not use the CONF object but oslo_policy 3.6.0 onwards it needs fully initialized CONF object during init only. > > ** Aodh work for this goal is blocked because it needs oslo_policy 3.6.0 but gnocchi is capped for oslo_policy 3.4.0 [1] > - https://review.opendev.org/c/openstack/aodh/+/768499 > > * Horizon and Openstackansible work is pending to use/deploy the YAML formatted policy file. I will start exploring this > next week or so. > > [1] https://github.com/gnocchixyz/gnocchi/blob/e19fda590c7f7f07f1df0ba93177df07d9802300/setup.cfg#L33 > > Merry Christmas and Happy Holidays! > > -gmann > > -- > Kind Regards,Dmitriy Rabotyagov From ignaziocassano at gmail.com Tue Dec 29 18:12:13 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 29 Dec 2020 19:12:13 +0100 Subject: [Nova][queens] disk.config not found issue Message-ID: Hello All, after a compute node failure if I try to reset state and hard reboot an instance it fails because disk.config file does not exist. 
I am using cinder with netapp nfs driver and I am using disk config drive. It happens also on stein. Any help, please ? It can be' solved creating the disk.config with touch command but it does not seem a good solution . Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Dec 29 18:34:07 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 29 Dec 2020 19:34:07 +0100 Subject: [Nova][queens] disk.config not found issue In-Reply-To: References: Message-ID: Hello, I think this is because I inserted the force config drive In nova configuration when I decided to use config drive. Instances created before this configuration do not have disk.config file. Probably I must create a script with a for cycle and touch disk.config file. Is it correct? Ignazio Il Mar 29 Dic 2020, 19:12 Ignazio Cassano ha scritto: > Hello All, after a compute node failure if I try to reset state and hard > reboot an instance it fails because disk.config file does not exist. > I am using cinder with netapp nfs driver and I am using disk config drive. > It happens also on stein. > Any help, please ? > It can be' solved creating the disk.config with touch command but it does > not seem a good solution . > Ignazio > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Dec 29 18:37:35 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 29 Dec 2020 12:37:35 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update In-Reply-To: <884141609228423@mail.yandex.ru> References: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com> <884141609228423@mail.yandex.ru> Message-ID: <176afca89ab.1012b6043628566.4875338583213848682@ghanshyammann.com> ---- On Tue, 29 Dec 2020 01:56:22 -0600 Dmitriy Rabotyagov wrote ---- > Hi! Regarding OpenStack-Ansible I was planning to land patches early January. We eventually need to patch every role to change "dest" and "config_type" for placing template, ie. [1] Also we will need to think through removal of old json file for ppl that will perform upgrade, to avoid any possible conflicts and confusions because of the prescence of both files. [1] https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/tasks/glance_post_install.yml#L78-L82 Thanks, Dmitriy, do let me know if you need help this is a large number of changes. I will be able to push changes for this. On point of the presence of both files, yes this is a good point. From the service side default value change, I am taking care of this on oslo.policy side[1]. If both files exist and deployment rely on the default value (config option is not overridden ) then oslo policy will pick up the 'policy.json'. With this, we make sure we do not break any upgrade for deployment relying on this default value. In the future, when we decide to remove the support of policy.json then we can remove this fallback logic. -gmann [1] https://github.com/openstack/oslo.policy/blob/0a228dea2ee96ec3eabed3361ca22502d0bbd4a1/oslo_policy/policy.py#L363 > 26.12.2020, 00:41, "Ghanshyam Mann" :Hello Everyone, > > Please find the week's R-16 updates on 'Migrate RBAC Policy Format from JSON to YAML' community-wide goals. 
> > Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml > > Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) > > Progress: > ======= > * Projects completed: 5 > * Projects left to merge the patches: 25 > * Projects left to push the patches: 2 (horizon and Openstackansible) > * Projects do not need any work: 17 > > Updates: > ======= > * I have pushed the patches for all the required service projects. > > ** Because of many services gate is already broken for lower constraints job, these patches might not be green in the > test results. I request projects to fix the gate so that we can merge this goal work before m-2. > > ** There are many project tests where CONF object was not fully initialized before the policy is init. This was working till now > as policy init did not use the CONF object but oslo_policy 3.6.0 onwards it needs fully initialized CONF object during init only. > > ** Aodh work for this goal is blocked because it needs oslo_policy 3.6.0 but gnocchi is capped for oslo_policy 3.4.0 [1] > - https://review.opendev.org/c/openstack/aodh/+/768499 > > * Horizon and Openstackansible work is pending to use/deploy the YAML formatted policy file. I will start exploring this > next week or so. > > [1] https://github.com/gnocchixyz/gnocchi/blob/e19fda590c7f7f07f1df0ba93177df07d9802300/setup.cfg#L33 > > Merry Christmas and Happy Holidays! > > -gmann > > -- > Kind Regards,Dmitriy Rabotyagov From ionut at fleio.com Tue Dec 29 21:52:26 2020 From: ionut at fleio.com (Ionut Biru) Date: Tue, 29 Dec 2020 23:52:26 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Hi Feilong, I found out that each time the update_health_status periodic task is run, a new connection(for each uwsgi) is made to rabbitmq. 
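[The netstat samples below show that connection count climbing between successive samples. If the cause really is a notification transport being rebuilt on every periodic run, which is only a guess at this point, the usual oslo.messaging pattern is to build the transport and Notifier once and reuse them, along these lines; this is an illustrative sketch only, not Magnum's actual code.]

    import oslo_messaging
    from oslo_config import cfg

    _NOTIFIER = None

    def get_notifier():
        # Reuse a single transport/Notifier instead of opening a new
        # AMQP connection for every health-status update.
        global _NOTIFIER
        if _NOTIFIER is None:
            transport = oslo_messaging.get_notification_transport(cfg.CONF)
            _NOTIFIER = oslo_messaging.Notifier(
                transport, driver='messaging',
                publisher_id='magnum.conductor',
                topics=['notifications'])
        return _NOTIFIER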
root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 229 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 234 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 238 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 241 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 244 Not sure Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG magnum.service.periodic [req-3b495326-cf80-481e-b3c6-c741f05b7f0e - - - - -] Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG oslo_service.periodic_task [-] Running periodic task MagnumPeriodicTasks.sync Dec 29 21:51:16 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262804]: 2020-12-29 21:51:16.462 262804 DEBUG magnum.conductor.handlers.cluster_conductor [req-284ac12b-d76a-4e50-8e74-5bfb Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.573 262800 DEBUG magnum.service.periodic [-] Status for cluster 118 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262805]: 2020-12-29 21:51:15.572 262805 DEBUG magnum.conductor.handlers.cluster_conductor [req-3fc29ee9-4051-42e7-ae19-3a49 Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG magnum.service.periodic [-] Status for cluster 121 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG magnum.service.periodic [-] Status for cluster 122 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.553 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 122 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.544 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 121 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.535 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 118 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.494 262800 DEBUG magnum.service.periodic [req-405b1fed-0b8a-4a60-b6ae-834f548b21d1 - - - 2020-12-29 21:51:14.082 [info] <0.953.1293> accepting AMQP connection <0.953.1293> (172.29.93.14:48474 -> 172.29.95.38:5672) 2020-12-29 21:51:14.083 [info] <0.953.1293> Connection <0.953.1293> ( 172.29.93.14:48474 -> 172.29.95.38:5672) has a client-provided name: uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71 2020-12-29 21:51:14.084 [info] <0.953.1293> connection <0.953.1293> ( 172.29.93.14:48474 -> 172.29.95.38:5672 - uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71): user 'magnum' authenticated and granted access to vhost '/magnum' 2020-12-29 21:51:15.560 [info] <0.1656.1283> accepting AMQP connection <0.1656.1283> (172.29.93.14:48548 -> 172.29.95.38:5672) 2020-12-29 21:51:15.561 [info] <0.1656.1283> Connection <0.1656.1283> ( 172.29.93.14:48548 -> 172.29.95.38:5672) has a client-provided name: uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3 2020-12-29 21:51:15.561 [info] <0.1656.1283> connection <0.1656.1283> ( 172.29.93.14:48548 -> 172.29.95.38:5672 - uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3): user 'magnum' 
authenticated and granted access to vhost '/magnum' On Tue, Dec 22, 2020 at 4:12 AM feilong wrote: > Hi Ionut, > > I didn't see this before on our production. Magnum auto healer just simply > sends a POST request to Magnum api to update the health status. So I would > suggest write a small script or even use curl to see if you can reproduce > this firstly. > > > On 19/12/20 2:27 am, Ionut Biru wrote: > > Hi again, > > I failed to mention that is stable/victoria with couples of patches from > review. Ignore the fact that in logs it shows the 19.1.4 version in venv > path. > > On Fri, Dec 18, 2020 at 3:22 PM Ionut Biru wrote: > >> Hi guys, >> >> I have an issue with magnum api returning an error after a while: >> Server-side error: "[('system library', 'fopen', 'Too many open files'), >> ('BIO routines', 'BIO_new_file', 'system lib'), ('x509 certificate >> routines', 'X509_load_cert_crl_file', 'system lib')]" >> >> Log file: https://paste.xinu.at/6djE/ >> >> This started to appear after I enabled the >> template auto_healing_controller = magnum-auto-healer, >> magnum_auto_healer_tag = v1.19.0. >> >> Currently, I only have 4 clusters. >> >> After that the API is in error state and doesn't work unless I restart it. >> >> >> -- >> Ionut Biru - https://fleio.com >> > > > -- > Ionut Biru - https://fleio.com > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionut at fleio.com Tue Dec 29 22:20:25 2020 From: ionut at fleio.com (Ionut Biru) Date: Wed, 30 Dec 2020 00:20:25 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Hi, Not sure if my suspicion is true but I think for each update a new notifier is prepared and used without closing the connection but my understanding of oslo is nonexistent. https://opendev.org/openstack/magnum/src/branch/master/magnum/conductor/utils.py#L147 https://opendev.org/openstack/magnum/src/branch/master/magnum/common/rpc.py#L173 On Tue, Dec 29, 2020 at 11:52 PM Ionut Biru wrote: > Hi Feilong, > > I found out that each time the update_health_status periodic task is run, > a new connection(for each uwsgi) is made to rabbitmq. 
> > root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l > 229 > root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l > 234 > root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l > 238 > root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l > 241 > root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l > 244 > > Not sure > > Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG > magnum.service.periodic [req-3b495326-cf80-481e-b3c6-c741f05b7f0e - - - - > -] > Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG > oslo_service.periodic_task [-] Running periodic task > MagnumPeriodicTasks.sync > Dec 29 21:51:16 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262804]: 2020-12-29 21:51:16.462 262804 DEBUG > magnum.conductor.handlers.cluster_conductor > [req-284ac12b-d76a-4e50-8e74-5bfb > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.573 262800 DEBUG > magnum.service.periodic [-] Status for cluster 118 updated to HEALTHY > ({'api' > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262805]: 2020-12-29 21:51:15.572 262805 DEBUG > magnum.conductor.handlers.cluster_conductor > [req-3fc29ee9-4051-42e7-ae19-3a49 > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG > magnum.service.periodic [-] Status for cluster 121 updated to HEALTHY > ({'api' > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG > magnum.service.periodic [-] Status for cluster 122 updated to HEALTHY > ({'api' > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.553 262800 DEBUG > magnum.service.periodic [-] Updating health status for cluster 122 > update_hea > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.544 262800 DEBUG > magnum.service.periodic [-] Updating health status for cluster 121 > update_hea > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.535 262800 DEBUG > magnum.service.periodic [-] Updating health status for cluster 118 > update_hea > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.494 262800 DEBUG > magnum.service.periodic [req-405b1fed-0b8a-4a60-b6ae-834f548b21d1 - - - > > > 2020-12-29 21:51:14.082 [info] <0.953.1293> accepting AMQP connection > <0.953.1293> (172.29.93.14:48474 -> 172.29.95.38:5672) > 2020-12-29 21:51:14.083 [info] <0.953.1293> Connection <0.953.1293> ( > 172.29.93.14:48474 -> 172.29.95.38:5672) has a client-provided name: > uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71 > 2020-12-29 21:51:14.084 [info] <0.953.1293> connection <0.953.1293> ( > 172.29.93.14:48474 -> 172.29.95.38:5672 - > uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71): user 'magnum' > authenticated and granted access to vhost '/magnum' > 2020-12-29 21:51:15.560 [info] <0.1656.1283> accepting AMQP connection > <0.1656.1283> (172.29.93.14:48548 -> 172.29.95.38:5672) > 2020-12-29 21:51:15.561 [info] <0.1656.1283> Connection <0.1656.1283> ( > 172.29.93.14:48548 -> 172.29.95.38:5672) has a client-provided name: > uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3 > 2020-12-29 21:51:15.561 [info] 
<0.1656.1283> connection <0.1656.1283> ( > 172.29.93.14:48548 -> 172.29.95.38:5672 - > uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3): user 'magnum' > authenticated and granted access to vhost '/magnum' > > On Tue, Dec 22, 2020 at 4:12 AM feilong wrote: > >> Hi Ionut, >> >> I didn't see this before on our production. Magnum auto healer just >> simply sends a POST request to Magnum api to update the health status. So I >> would suggest write a small script or even use curl to see if you can >> reproduce this firstly. >> >> >> On 19/12/20 2:27 am, Ionut Biru wrote: >> >> Hi again, >> >> I failed to mention that is stable/victoria with couples of patches from >> review. Ignore the fact that in logs it shows the 19.1.4 version in venv >> path. >> >> On Fri, Dec 18, 2020 at 3:22 PM Ionut Biru wrote: >> >>> Hi guys, >>> >>> I have an issue with magnum api returning an error after a while: >>> Server-side error: "[('system library', 'fopen', 'Too many open files'), >>> ('BIO routines', 'BIO_new_file', 'system lib'), ('x509 certificate >>> routines', 'X509_load_cert_crl_file', 'system lib')]" >>> >>> Log file: https://paste.xinu.at/6djE/ >>> >>> This started to appear after I enabled the >>> template auto_healing_controller = magnum-auto-healer, >>> magnum_auto_healer_tag = v1.19.0. >>> >>> Currently, I only have 4 clusters. >>> >>> After that the API is in error state and doesn't work unless I restart >>> it. >>> >>> >>> -- >>> Ionut Biru - https://fleio.com >>> >> >> >> -- >> Ionut Biru - https://fleio.com >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> ------------------------------------------------------ >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> ------------------------------------------------------ >> >> > > -- > Ionut Biru - https://fleio.com > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabriel.gamero at pucp.edu.pe Wed Dec 30 00:07:42 2020 From: gabriel.gamero at pucp.edu.pe (Gabriel Omar Gamero Montenegro) Date: Tue, 29 Dec 2020 19:07:42 -0500 Subject: [neutron] SR-IOV mechanism driver configuration, plugin.ini Message-ID: Dear all, I'm following the OpenStack guide for the implementation of SR-IOV mechanism driver. I'm planning to incorporate this driver to my current OpenStack deployment (Queens). Config SR-IOV Guide: https://docs.openstack.org/neutron/queens/admin/config-sriov.html At point 2, section "Configure neutron-server (Controller)" they said that I have to add the 'plugin.ini' file as a parameter to the neutron-server service. To do this they require to <>: --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini I'd like to know a few things: (1) Which plugin.ini file are talking about? (2) How to set up the neutron-server initialization script to add the plugin.ini file? I understand that this varies between OS distro (I'm currently using Ubuntu 16.04 LTS server) Here are some things I tried... 
I got the following results executing this command: systemctl status neutron-server.service ● neutron-server.service - OpenStack Neutron Server Loaded: loaded (/lib/systemd/system/neutron-server.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2020-12-29 18:13:50 -05 Main PID: 38590 (neutron-server) Tasks: 44 Memory: 738.8M CPU: 29.322s CGroup: /system.slice/neutron-server.service ├─38590 /usr/bin/python2 /usr/bin/neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-server.log ... I see 2 things: (i) When neutron-server is exectured, the following parameters are passed: --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/neutron-server.log (ii) The file '/lib/systemd/system/neutron-server.service' is loaded and it has the following content: ... ExecStart=/etc/init.d/neutron-server systemd-start ... This indicates me that it's executing '/etc/init.d/neutron-server' script. So I suppose this is the file indicated to add the parameters of the SR-IOV OpenStack documentation, but I have no idea where to put them. For Red-Hat distros I found this documentation with the following configuration: https://access.redhat.com/documentation/en-us/ red_hat_enterprise_linux_openstack_platform/7/html/networking_guide /sr-iov-support-for-virtual-networking vi /usr/lib/systemd/system/neutron-server.service ... ExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini --log-file /var/log/neutron/server.log Thanks in advance, Gabriel Gamero From i at liuyulong.me Wed Dec 30 07:27:40 2020 From: i at liuyulong.me (=?utf-8?B?TElVIFl1bG9uZw==?=) Date: Wed, 30 Dec 2020 15:27:40 +0800 Subject: [neutron] Neutron L3 meeting cancelled today In-Reply-To: <33405295.0rLbERne6g@p1> References: <33405295.0rLbERne6g@p1> Message-ID: Hi, Let's following the team agenda because of the long holidays in some areas, our next neutron L3 meeting will be scheduled at 13.01.2021. So, Happy New Year! See you guys online next year. Regards, LIU Yulong   ------------------ Original ------------------ From:  "Slawek Kaplonski" From skaplons at redhat.com Wed Dec 30 07:29:04 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 30 Dec 2020 08:29:04 +0100 Subject: [neutron] SR-IOV mechanism driver configuration, plugin.ini In-Reply-To: References: Message-ID: <3641168.yvBDRoByMW@p1> Hi, Dnia środa, 30 grudnia 2020 01:07:42 CET Gabriel Omar Gamero Montenegro pisze: > Dear all, > > I'm following the OpenStack guide for > the implementation of SR-IOV mechanism driver. > I'm planning to incorporate this driver to > my current OpenStack deployment (Queens). > > Config SR-IOV Guide: > https://docs.openstack.org/neutron/queens/admin/config-sriov.html > > At point 2, section "Configure neutron-server (Controller)" > they said that I have to add the 'plugin.ini' file > as a parameter to the neutron-server service. > To do this they require to > < neutron-server service to load the plugin configuration file>>: > --config-file /etc/neutron/neutron.conf > --config-file /etc/neutron/plugin.ini > > I'd like to know a few things: > > (1) Which plugin.ini file are talking about? That is IMO good question. 
I see this file for the first time now :) Looking at the commit [1] and commits which this patch reference to I think that this may be some old leftover which should be cleaned. But maybe Rodolfo will know more as he s our SR-IOV expert in the team. > (2) How to set up the neutron-server initialization script > to add the plugin.ini file? > I understand that this varies between OS distro > (I'm currently using Ubuntu 16.04 LTS server) > > Here are some things I tried... > > I got the following results executing this command: > > systemctl status neutron-server.service > ● neutron-server.service - OpenStack Neutron Server > Loaded: loaded (/lib/systemd/system/neutron-server.service; > enabled; vendor preset: enabled) > Active: active (running) since Tue 2020-12-29 18:13:50 -05 > Main PID: 38590 (neutron-server) > Tasks: 44 > Memory: 738.8M > CPU: 29.322s > CGroup: /system.slice/neutron-server.service > ├─38590 /usr/bin/python2 /usr/bin/neutron-server > --config-file=/etc/neutron/neutron.conf > --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini > --log-file=/var/log/neutron/neutron-server.log > ... > > I see 2 things: > > (i) When neutron-server is exectured, > the following parameters are passed: > --config-file=/etc/neutron/neutron.conf > --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini > --log-file=/var/log/neutron/neutron-server.log > > (ii) The file '/lib/systemd/system/neutron-server.service' > is loaded and it has the following content: > ... > ExecStart=/etc/init.d/neutron-server systemd-start > ... > > This indicates me that it's executing > '/etc/init.d/neutron-server' script. > So I suppose this is the file indicated to add the parameters > of the SR-IOV OpenStack documentation, > but I have no idea where to put them. > > For Red-Hat distros I found this documentation > with the following configuration: > https://access.redhat.com/documentation/en-us/ > red_hat_enterprise_linux_openstack_platform/7/html/networking_guide > /sr-iov-support-for-virtual-networking > > vi /usr/lib/systemd/system/neutron-server.service > ... > ExecStart=/usr/bin/neutron-server > --config-file /usr/share/neutron/neutron-dist.conf > --config-file /etc/neutron/neutron.conf > --config-file /etc/neutron/plugin.ini > --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini > --log-file /var/log/neutron/server.log > > Thanks in advance, > Gabriel Gamero [1] https://github.com/openstack/neutron/commit/ c4e76908ae0d8c1e5bcb7f839df5e22094805299 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
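[On question (2) from the original mail: on a systemd-based Ubuntu install, one low-risk way to pass extra --config-file options is a drop-in override rather than editing the packaged unit or init script. The sketch below is only an example; the drop-in file name and the exact set of config files are assumptions for illustration, and redefining ExecStart bypasses the distribution's sysv wrapper.]

    # /etc/systemd/system/neutron-server.service.d/override.conf (example path)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/neutron-server \
        --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
        --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini \
        --log-file /var/log/neutron/neutron-server.log

    # then reload and restart:
    systemctl daemon-reload
    systemctl restart neutron-server

[The empty "ExecStart=" line clears the inherited command before redefining it. On question (1), on RDO-based installs /etc/neutron/plugin.ini is typically just a symlink to the ML2 configuration file, so pointing --config-file at ml2_conf.ini (plus ml2_conf_sriov.ini) achieves the same effect; treat that as general background rather than a statement about the Ubuntu packaging.]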
URL: From skaplons at redhat.com Wed Dec 30 07:36:00 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 30 Dec 2020 08:36:00 +0100 Subject: Instance on provider network not working In-Reply-To: References: Message-ID: <179970300.cGSCgqodx2@p1> Hi, Dnia wtorek, 29 grudnia 2020 10:25:46 CET mohammad mighani pisze: > Hello > I use openstack Ussuri in Ubuntu 18 and one controller node and one compute > node with two interface, management and Provider that connected to internet > i install keystone, glance, placement, nova, neutron on self service option > *what i can * > i can create provider network and self service network and router > between them > i can ping router > i can launch instance on self service network and it has access to internet > and everything looks fine > *what i can NOT* > I can't launch instances on the provider network. When I launch an instance > on the provider network, the provider interface of the compute node will be > corrupted and not respond and the instance will not be created. > > error in instance : > > > *Message *Build of instance 2a86a84b-50fc-415c-a15c-cb87398f37ea aborted: > Failed to allocate the network(s), not rescheduling. > *Code *500 > *Details *Traceback (most recent call last): File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6529, in > _create_domain_and_network post_xml_callback=post_xml_callback) File > "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ next(self.gen) > File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 513, in > wait_for_instance_event actual_event = event.wait() File > "/usr/lib/python3/dist-packages/eventlet/event.py", line 125, in wait > result = hub.switch() File > "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 298, in switch > return self.greenlet.switch() eventlet.timeout.Timeout: 300 seconds During > handling of the above exception, another exception occurred: Traceback > (most recent call last): File > "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2378, in > _build_and_run_instance accel_info=accel_info) File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 3663, in > spawn cleanup_instance_disks=created_disks) File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6552, in > _create_domain_and_network raise > exception.VirtualInterfaceCreateException() > nova.exception.VirtualInterfaceCreateException: Virtual Interface creation > failed During handling of the above exception, another exception occurred: > Traceback (most recent call last): File > "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2200, in > _do_build_and_run_instance filter_properties, request_spec, accel_uuids) > File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2444, > in _build_and_run_instance reason=msg) nova.exception.BuildAbortException: > Build of instance 2a86a84b-50fc-415c-a15c-cb87398f37ea aborted: Failed to > allocate the network(s), not rescheduling. > *Created *Dec. 29, 2020, 9:18 a.m. > > *After this I can't ping the computer from the provider network interface.* > > I reinstalled openstack so many times and also I tried the train and > victoria version and have the same problem. > > I attached log files of nova compute and neutron linux bridge from > compute node and neutron server from controller node. > thanks. This looks like a bug in the neutron-linuxbridge-agent for me. What version of Neutron are you using? Did You try on master branch if the same issue occurs there? 
If yes, please open a bug in Launchpad for that [1].

[1] https://bugs.launchpad.net/neutron/+filebug

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL:

From e0ne at e0ne.info  Wed Dec 30 13:27:42 2020
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 30 Dec 2020 15:27:42 +0200
Subject: [horizon] No meetings until January 13th
Message-ID:

Hi team,

As agreed during the last IRC meeting [1], we'll skip the next two meetings due to the Christmas and New Year holidays.

[1] http://eavesdrop.openstack.org/meetings/horizon/2020/horizon.2020-12-23-15.01.log.html#l-13

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ranjeet.ivaan at gmail.com  Wed Dec 30 11:18:31 2020
From: ranjeet.ivaan at gmail.com (Ranjeet Kumar)
Date: Wed, 30 Dec 2020 16:48:31 +0530
Subject: OpenStack instance management price
Message-ID:

Hi

What could be the average unit price for the life cycle management of an instance in an OpenStack environment?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aadewojo at gmail.com  Wed Dec 30 23:20:08 2020
From: aadewojo at gmail.com (Adekunbi Adewojo)
Date: Wed, 30 Dec 2020 23:20:08 +0000
Subject: [Ceilometer][all]Visualising Ceilometer
Message-ID:

Hi there,

I want to get the results in ceilometer into a Grafana dashboard so that I can visualise them. Is there a manual on this, or can someone point me in the right direction please?

Currently, I interact with ceilometer using its API from my code to get some metrics results, but I would love to visualise them.

Also, I came across Sensu and think this could help. Can it be used to visualise metrics gotten from ceilometer? I am interested in getting VM metrics in my code and at the same time be able to visualise them when applications on the VM are stress tested.

Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From noonedeadpunk at ya.ru  Thu Dec 31 07:28:39 2020
From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov)
Date: Thu, 31 Dec 2020 09:28:39 +0200
Subject: [Ceilometer][all]Visualising Ceilometer
In-Reply-To:
References:
Message-ID: <434301609399428@mail.yandex.ru>

An HTML attachment was scrubbed...
URL:

From yasufum.o at gmail.com  Thu Dec 31 09:11:36 2020
From: yasufum.o at gmail.com (Yasufumi Ogawa)
Date: Thu, 31 Dec 2020 18:11:36 +0900
Subject: [tacker] Next meeting Tuesday 12 Jan
Message-ID:

Hi team,

As we agreed in the last meeting, we will skip the IRC meeting next week for the new year holidays.

Thank you & happy holidays!
Yasufumi

From ikatzir at infinidat.com  Thu Dec 31 13:18:26 2020
From: ikatzir at infinidat.com (Igal Katzir)
Date: Thu, 31 Dec 2020 15:18:26 +0200
Subject: [tripleO] Customised Cinder-Volume fails at 'Paunch 5' during overcloud deployment
Message-ID: <2D1F2693-49C0-4CA2-8F8E-F9E837D6A232@infinidat.com>

Hello all,

I am trying to deploy RHOSP16.1 (based on the ‘train’ distribution) for certification purposes. I have built a container for our cinder driver and am trying to deploy it.
Deployment runs almost till the end and fails at stage when it tries to configure Pacemaker; Here is the last message: "Info: Applying configuration version '1609231063'", "Notice: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_bind_addr]/ensure: created", "Info: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_bind_addr]: Scheduling refresh of Service[pcsd]", "Info: /Stage[main]/Pacemaker::Service/Service[pcsd]: Unscheduling all events on Service[pcsd]", "Info: Class[Pacemaker::Corosync]: Unscheduling all events on Class[Pacemaker::Corosync]", "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Resource::Bundle[openstack-cinder-volume]/Pcmk_bundle[openstack-cinder-volume]: Dependency Pcmk_property[property-overcloud-controller-0-cinder-volume-role] has failures: true", "Info: Creating state file /var/lib/puppet/state/state.yaml", "Notice: Applied catalog in 382.92 seconds", "Changes:", " Total: 1", "Events:", " Success: 1", " Failure: 2", " Total: 3", I have verified that all packages on my container-image (Pacemaker,Corosync, libqb,and pcs) are installed with same versions as the overcloud-controller. But seems that something is still missing, because deployment with the default openstack-cinder-volume image completes successfully. Can anyone help with debugging this? Let me know if more info needed. Thanks in advance, Igal -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Thu Dec 31 13:55:28 2020 From: mrunge at matthias-runge.de (Matthias Runge) Date: Thu, 31 Dec 2020 14:55:28 +0100 Subject: [Ceilometer][all]Visualising Ceilometer In-Reply-To: References: Message-ID: On 31/12/2020 00:20, Adekunbi Adewojo wrote: > Hi there, > > I want to get the results in ceilometer into Grafana dashboard so that I > can visualise them. Is there a manual on this or can someone point me in > the right direction please? > > Currently, I interact with ceilometer using it's API from my code to get > some metrics result but would love to visualise them. > > > Also,  I came across Sensu and thinks this could help. Can it be used to > visualise metrics gotten from ceilometer? I am interested in getting VM > metrics in my code and at the same time be able to visualise them when > applications on the VM are stress tested. Hi there, ceilometer turned to be a collecting agent a long time ago. Metrics and events are stored in other databases, such as gnocchi, prometheus or others. The ceilometer API was split into a separate project named gnocchi, chances are good that you can use your code on gnocchi API. There is also a plugin[1] for grafana to pull data from gnocchi. Matthias [1] https://grafana.com/grafana/plugins/gnocchixyz-gnocchi-datasource From aadewojo at gmail.com Thu Dec 31 15:31:02 2020 From: aadewojo at gmail.com (Adekunbi Adewojo) Date: Thu, 31 Dec 2020 15:31:02 +0000 Subject: [Ceilometer][all]Visualising Ceilometer In-Reply-To: References: Message-ID: Thank you all very much for this information. I will explore them. On Thu, Dec 31, 2020 at 2:02 PM Matthias Runge wrote: > On 31/12/2020 00:20, Adekunbi Adewojo wrote: > > Hi there, > > > > I want to get the results in ceilometer into Grafana dashboard so that I > > can visualise them. Is there a manual on this or can someone point me in > > the right direction please? > > > > Currently, I interact with ceilometer using it's API from my code to get > > some metrics result but would love to visualise them. 
From abishop at redhat.com  Thu Dec 31 19:15:31 2020
From: abishop at redhat.com (Alan Bishop)
Date: Thu, 31 Dec 2020 11:15:31 -0800
Subject: [tripleO] Customised Cinder-Volume fails at 'Paunch 5' during overcloud deployment
In-Reply-To: <2D1F2693-49C0-4CA2-8F8E-F9E837D6A232@infinidat.com>
References: <2D1F2693-49C0-4CA2-8F8E-F9E837D6A232@infinidat.com>
Message-ID:

On Thu, Dec 31, 2020 at 5:26 AM Igal Katzir wrote:

> Hello all,
>
> I am trying to deploy RHOSP16.1 (based on the ‘train’ distribution) for
> certification purposes.
> I have built a container for our cinder driver and am trying to deploy it.
> Deployment runs almost till the end and fails at the stage where it tries
> to configure Pacemaker. Here is the last message:
>
> "Info: Applying configuration version '1609231063'",
> "Notice: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_bind_addr]/ensure: created",
> "Info: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_bind_addr]: Scheduling refresh of Service[pcsd]",
> "Info: /Stage[main]/Pacemaker::Service/Service[pcsd]: Unscheduling all events on Service[pcsd]",
> "Info: Class[Pacemaker::Corosync]: Unscheduling all events on Class[Pacemaker::Corosync]",
> "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Resource::Bundle[openstack-cinder-volume]/Pcmk_bundle[openstack-cinder-volume]: Dependency Pcmk_property[property-overcloud-controller-0-cinder-volume-role] has failures: true",
> "Info: Creating state file /var/lib/puppet/state/state.yaml",
> "Notice: Applied catalog in 382.92 seconds",
> "Changes:",
> "  Total: 1",
> "Events:",
> "  Success: 1",
> "  Failure: 2",
> "  Total: 3",
>
> I have verified that all packages on my container image (Pacemaker,
> Corosync, libqb, and pcs) are installed with the same versions as on the
> overcloud controller.

Hi Igal,

Thank you for checking these package versions and stating they match the
ones installed on the overcloud node. This rules out one of the common
reasons for failures when trying to run a customized cinder-volume
container image.

> But it seems that something is still missing, because deployment with the
> default openstack-cinder-volume image completes successfully.

This is also good to know.

> Can anyone help with debugging this? Let me know if more info is needed.

More info is needed, but it's hard to predict exactly where to look for the
root cause of the failure. I'd start by looking in the cinder log file to
determine whether the cinder-volume service is even trying to start. Look
for /var/log/containers/cinder/cinder-volume.log on the node where
pacemaker is trying to run the service. Are there logs indicating the
service is trying to start? Or maybe the service is launched, but fails
early during startup?
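For reference, those checks boil down to a few generic pacemaker and podman
commands on the controller where the service was placed. This is only a
rough sketch, nothing RHOSP-specific, and the container name pattern varies
between deployments, so adjust the grep to whatever your node reports.

# Did the bundle resource ever start, or is it stuck in a failed state?
sudo pcs status --full | grep -i -A2 cinder

# Did podman ever create a cinder-volume container?
sudo podman ps -a | grep -i cinder

# If a container exists, look at its stdout/stderr for early startup errors.
sudo podman logs "$(sudo podman ps -a --format '{{.Names}}' | grep -i cinder-volume | head -1)"

# And check the cinder log mentioned above for driver initialization errors.
sudo tail -n 100 /var/log/containers/cinder/cinder-volume.log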
Another possibility is that podman fails to launch the container itself. If
that's happening, then check for errors in /var/log/messages. One source of
this type of failure is specifying a container bind mount whose source
directory doesn't exist (docker would auto-create the source directory, but
podman does not).

You specifically mentioned RHOSP, so if you need additional support, then I
recommend opening a support case with Red Hat. That will provide a forum
for posting private data, such as details of your overcloud deployment and
full sosreports.

Alan

> Thanks in advance,
> Igal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
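To illustrate the bind-mount failure mode mentioned in that last reply, a
rough check could look like the following. It assumes the container was at
least created, and the container name is again deployment-specific.

# Pick up whatever cinder-volume container podman knows about.
CONTAINER="$(sudo podman ps -a --format '{{.Names}}' | grep -i cinder-volume | head -1)"

# List the host-side source of every bind mount recorded for the container
# and flag any path that does not exist on the host (podman, unlike docker,
# will not create it automatically).
sudo podman inspect --format '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' "$CONTAINER" |
while read -r src; do
  [ -n "$src" ] && [ ! -e "$src" ] && echo "missing on host: $src"
done

# If podman never created the container at all, the reason is usually in
# /var/log/messages (or the journal) around the time of the failed deploy.
sudo grep -iE 'podman|cinder' /var/log/messages | tail -n 50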