openstack-discuss Digest, Vol 60, Issue 10 ~ FWaaS in OpenStack 2023.1 Antelope

Lajos Katona katonalala at gmail.com
Mon Oct 9 09:07:09 UTC 2023


Hi Asma,
The enable_plugin option is for devstack-based deployments, I suppose.
To tell the truth I am not familiar with kolla, but I found this page, which
describes enabling neutron extensions like sfc or vpnaas:
https://docs.openstack.org/kolla-ansible/4.0.2/networking-guide.html

and these switches in group_vars/all.yml:
https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/group_vars/all.yml#L819
So setting enable_neutron_vpnaas: "yes" should, I suppose, do all the
magic to install and configure neutron-vpnaas.
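
For example (just a sketch on my side, so please check the exact variable name
against group_vars/all.yml for your release), the override would go into
/etc/kolla/globals.yml and then be applied with a reconfigure run:

    # /etc/kolla/globals.yml -- enable the VPNaaS extension
    enable_neutron_vpnaas: "yes"

    # re-apply the configuration afterwards
    # (I think --tags neutron limits the run to the neutron role)
    kolla-ansible -i <inventory> reconfigure --tags neutron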

I can't find neutron-fwaas there, but perhaps that is just my lack of experience
with kolla.

To see what is necessary to configure fwaas, I would check the devstack
plugin:
https://opendev.org/openstack/neutron-fwaas/src/branch/master/devstack
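
Based on the FWaaS v2 documentation, the pieces that usually have to be in
place are roughly the following (treat this as a sketch and verify the exact
values against the devstack plugin and the admin guide); in a kolla
deployment I believe such overrides would go under /etc/kolla/config/neutron/
so that kolla-ansible merges them into the service containers:

    # neutron.conf (neutron-server): load the FWaaS v2 service plugin,
    # appended to whatever service_plugins you already have
    [DEFAULT]
    service_plugins = router,firewall_v2

    # l3_agent.ini (neutron-l3-agent): enable the L3 agent extension
    [AGENT]
    extensions = fwaas_v2

    # and run the neutron-fwaas database migrations, e.g.:
    neutron-db-manage --subproject neutron-fwaas upgrade head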

Best wishes.
Lajos (lajoskatona)

Asma Naz Shariq <asma.naz at techavenue.biz> wrote (on Fri, Oct 6, 2023 at
14:37):

> Hi Openstack Community!
>
> I have set up OpenStack release 2023.1 Antelope with Kolla-Ansible.
> However, I noticed that there is no enable_plugin option in the
> /etc/kolla/global.yml file. Now, I am trying to install FWaaS
> (Firewall-as-a-Service) following the instructions in OpenStack's
> Firewall-as-a-Service (FWaaS) v2 scenario documentation.
>
> The documentation states, "On Ubuntu and CentOS, modify the [fwaas] section
> in the /etc/neutron/fwaas_driver.ini file instead of
> /etc/neutron/neutron.conf." Unfortunately, I cannot find the fwaas_driver.ini
> file in the neutron-server, neutron-l3-agent, or neutron-openvswitch-agent
> containers.
>
> Can someone guide me on how to properly install FWaaS in a Kolla
> environment
> using the information from the provided link?
>
> Best,
>
> -----Original Message-----
> From: openstack-discuss-request at lists.openstack.org
> <openstack-discuss-request at lists.openstack.org>
> Sent: Friday, October 6, 2023 1:27 PM
> To: openstack-discuss at lists.openstack.org
> Subject: openstack-discuss Digest, Vol 60, Issue 10
>
>
>    1. Re: [ops] [nova] "invalid argument: shares xxx must be in
>       range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update
>       (Massimo Sgaravatto)
>    2. [TC][Monasca] Proposal to mark Monasca as an inactive project
>       (Sławek Kapłoński)
>    3. [neutron] Neutron drivers meeting cancelled
>       (Rodolfo Alonso Hernandez)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 6 Oct 2023 09:10:47 +0200
> From: Massimo Sgaravatto <massimo.sgaravatto at gmail.com>
> To: Franciszek Przewo?ny <fprzewozny at opera.com>
> Cc: OpenStack Discuss <openstack-discuss at lists.openstack.org>,
>         smooney at redhat.com
> Subject: Re: [ops] [nova] "invalid argument: shares xxx must be in
>         range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update
> Message-ID:
>         <
> CALaZjRGh6xnzX12cMgDTYx2yJYddUD9X3oh60JWnrB33ZdEf_Q at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Thanks a lot, Franciszek!
> I was indeed seeing the problem with a big VM (56 vCPUs), while I didn't see
> the issue with a tiny instance.
>
> Thanks again!
>
> Cheers, Massimo
>
> On Fri, Oct 6, 2023 at 8:58 AM Franciszek Przewoźny <fprzewozny at opera.com>
> wrote:
>
> > Hi Massimo,
> >
> > We are using Ubuntu for our environments and we experienced the same
> > issue during the upgrade from Yoga/Focal to Yoga/Jammy. On Yoga/Focal
> > cgroups_v1 was used, and the cpu_shares parameter value was cpu count *
> > 1024. Since Jammy,
> > cgroups_v2 is used, and the cpu_shares value defaults
> > to 100. It has a hard limit of 10000, so flavors with more than
> > 9 vCPUs won't fit. If you need to fix this issue without stopping VMs,
> > you can set cpu_shares with the libvirt command: virsh schedinfo $domain
> > --live
> > cpu_shares=100
> > for more details about virsh schedinfo visit:
> > https://libvirt.org/manpages/virsh.html#schedinfo
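> >
> > (if you want to check which cgroup version a compute node is actually on,
> > something like this should tell you - "cgroup2fs" means cgroups_v2:
> > stat -fc %T /sys/fs/cgroup/ )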
> >
> > BR,
> > Franciszek
> >
> > On 5 Oct 2023, at 21:17, smooney at redhat.com wrote:
> >
> > On Thu, 2023-10-05 at 16:53 +0200, Massimo Sgaravatto wrote:
> >
> > Dear all
> >
> > We have recently updated openstack nova on some AlmaLinux9 compute
> > nodes running Yoga from 1:25.2.0 to 1.25.2.1. After this operation
> > some VMs don't start anymore. The log reports:
> >
> > libvirt.libvirtError: invalid argument: shares '57344' must be in
> > range [1, 10000]
> >
> > libvirt version is 9.0.0-10.3
> >
> >
> > A quick Google search suggests that it is something related to cgroups
> > and that it is fixed in libvirt >= 9.1 (which is not yet in the almalinux9
> > repos).
> > Did I get it right?
> >
> > not quite
> >
> > it is related to cgroups, but the cause is that the maximum value of
> > shares (i.e. cpu_shares) changed from a much larger maximum in cgroups_v1
> > to 10000 in cgroups_v2. so the issue is that the vm requested a cpu_shares
> > value of 57344, which is not valid on an OS that is using cgroups_v2;
> > libvirt will not clamp the value, nor will nova.
> > you have to change the value in your flavor and resize the vm.
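> >
> > for example, something along these lines (names are placeholders;
> > <new-flavor> would be a flavor, possibly a copy of the current one, that
> > carries the lower cpu_shares value, since the quota:cpu_shares extra spec
> > is what ends up as the libvirt shares value):
> > openstack flavor set --property quota:cpu_shares=100 <new-flavor>
> > openstack server resize --flavor <new-flavor> <server>
> > openstack server resize confirm <server>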
> >
> >
> >
> >
> > Thanks, Massimo
> >
> >
> >
> >
> >
>
> ------------------------------
>
> Message: 2
> Date: Fri, 06 Oct 2023 09:49:45 +0200
> From: Sławek Kapłoński <skaplons at redhat.com>
> To: "openstack-discuss at lists.openstack.org"
>         <openstack-discuss at lists.openstack.org>
> Subject: [TC][Monasca] Proposal to mark Monasca as an inactive project
> Message-ID: <8709827.GXAFRqVoOG at p1gen4>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi,
>
> I just proposed patch [1] to mark Monasca as an inactive project. This was
> discussed during the last TC meeting on 03.10.2023. The reasons are
> described in the commit message of [1], but I will also put them all
> here:
>
> In the previous cycle there were almost no contributions to that project, and
> the most active contributors in the last 180 days were Elod Illes and Dr. Jens
> Harbott, who were only fixing some Zuul configuration issues.
> Here are detailed statistics about Monasca projects:
>
> Validating Gerrit...
>  * There are 7 ready for review patches generated within 180 days
>  * There are 2 not reviewed patches generated within 180 days
>  * There are 5 merged patches generated within 180 days
>  * Unreviewed patch rate for patches generated within 180 days is 28.0 %
>  * Merged patch rate for patches generated within 180 days is 71.0 %
>  *  Here's top 10 owner for patches generated within 180 days
> (Name/Account_ID: Percentage):
>     -  Dr. Jens Harbott :  42.86%
>     -  Joel Capitao :  28.57%
>     -  Elod Illes :  28.57%
>  Validate Zuul...
>  Set buildsets fetch size to 500
>  * Repo: openstack/monasca-log-api gate job builds success rate: 82%
>  * Repo: openstack/monasca-statsd gate job builds success rate: 95%
>  * Repo: openstack/monasca-tempest-plugin gate job builds success rate: 81%
>  * Repo: openstack/monasca-common gate job builds success rate: 84%
>  * Repo: openstack/monasca-kibana-plugin gate job builds success rate: 83%
>  * Repo: openstack/monasca-ceilometer gate job builds success rate: 100%
>  * Repo: openstack/monasca-events-api gate job builds success rate: 83%
>  * Repo: openstack/monasca-ui gate job builds success rate: 88%
>  * Repo: openstack/monasca-specs gate job builds success rate: 100%
>  * Repo: openstack/monasca-grafana-datasource gate job builds success rate:
> 100%
>  * Repo: openstack/monasca-persister gate job builds success rate: 98%
>  * Repo: openstack/monasca-notification gate job builds success rate: 93%
>  * Repo: openstack/monasca-thresh gate job builds success rate: 100%
>  * Repo: openstack/monasca-api gate job builds success rate: 76%
>  * Repo: openstack/monasca-agent gate job builds success rate: 98%
>  * Repo: openstack/python-monascaclient gate job builds success rate: 100%
>  * Repo: openstack/monasca-transform gate job builds success rate: 100%
>
> What's next?
> According to the "Emerging and inactive projects" document [2], if no
> volunteers step up and want to maintain this project before
> Milestone-2 of the 2024.1 cycle (week of Jan 08 2024), there will be no new
> release of Monasca in the 2024.1 cycle and the TC will discuss whether the
> project should be retired.
>
> So if you are interested in keeping Monasca alive and active, please reach
> out to the TC to discuss it ASAP. Thanks in advance.
>
> [1] https://review.opendev.org/c/openstack/governance/+/897520
> [2]
>
> https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html
>
> --
> Slawek Kaplonski
> Principal Software Engineer
> Red Hat
>
> ------------------------------
>
> Message: 3
> Date: Fri, 6 Oct 2023 10:26:49 +0200
> From: Rodolfo Alonso Hernandez <ralonsoh at redhat.com>
> To: openstack-discuss <openstack-discuss at lists.openstack.org>
> Subject: [neutron] Neutron drivers meeting cancelled
> Message-ID:
>         <
> CAECr9X7LBMRJQyi1sejTCKQopW+PkVA0JUfMh4i4QRD0Z7-eRw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello Neutrinos:
>
> Due to the lack of agenda [1], today's meeting is cancelled.
>
> Have a nice weekend.
>
> [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
>
> ------------------------------
>
> End of openstack-discuss Digest, Vol 60, Issue 10
> *************************************************
>
>
>

