openstack-discuss search results for query "canceled meeting"
openstack-discuss@lists.openstack.org - 895 messages
RE: [kolla-ansible][xena] Cell database HA/redundancy?
by Norrie, Andrew
Hi Mark,
Thanks for the reply.
I'll look into the HAProxy "custom config" as suggested and using different ports for the cell databases.
Maybe that can be worked in.
We are expanding our OpenStack/kolla-ansible personnel/knowledge so will look into the possibility for
us to contribute as reviewers/testers as a start.
Best regards .... Andrew
From: Mark Goddard <mark(a)stackhpc.com>
Sent: Monday, June 6, 2022 2:22 AM
To: Norrie, Andrew <Andrew.Norrie(a)cgg.com>
Cc: openstack-discuss(a)lists.openstack.org
Subject: Re: [kolla-ansible][xena] Cell database HA/redundancy?
On Fri, 3 Jun 2022 at 14:10, Norrie, Andrew <Andrew.Norrie(a)cgg.com> wrote:
Hi,
We are currently planning some large OpenStack deployments utilizing kolla-ansible and
I'm curious what folks are doing with the cell database HA/redundancy.
With kolla-ansible (xena) it appears that a loadbalancer setup is only allowed for the
default database shard (shard 0)... reference: https://docs.openstack.org/kolla-ansible/latest/reference/databases/mariadb…
If we are setting up separate cell database shards with Galera, I'm wondering if there is a convenient work-around or configuration for implementation of HA for these cell databases.
In the inventory group_vars directory you can specify the group variables (for each cell database) as:
nova_cell_database_address:
nova_cell_database_group:
but these aren't virtual IP hosts/addresses (non default db shard). This works perfectly fine with
a single server cell database but not Galera. If that db server goes down the cell instance information is lost.
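For example, a cell's group_vars file might contain something like the following (the address and group name here are only placeholders, not our real values):
nova_cell_database_address: 192.0.2.21
nova_cell_database_group: cell1-database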
Hi Andrew,
You are correct - only the first shard is load balanced. The sharding feature was actually just the first patch in a series intended to support proxysql. I believe this would add the functionality you are looking for. In fact, the proposed test case for proxysql in our CI is a multi-cell deployment.
Here is the patch series:
https://review.opendev.org/q/topic:proxysql
Unfortunately it has been stuck for a while, largely due to core reviewer bandwidth. The author is Michal Arbet, who is often on IRC as kevko. I'd suggest registering your interest in these patches via gerrit, and at one of the weekly kolla IRC meetings (this week is cancelled due to the summit). If you have time available, then testing and providing reviews could help to get the patches moving.
In the meantime, you can configure HAProxy to load balance additional services, by placing an HAProxy config snippet in /etc/kolla/config/haproxy/services.d/*.cfg.
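For example, a minimal snippet for one cell might look like the following (the service name, port and backend addresses are placeholders, not tested configuration):
listen nova_cell1_mariadb
  mode tcp
  bind 192.0.2.10:3307
  timeout client 3600s
  timeout server 3600s
  server cell1-db1 192.0.2.21:3306 check
  server cell1-db2 192.0.2.22:3306 check backup
  server cell1-db3 192.0.2.23:3306 check backup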
Regards,
Mark
Many thanks ... Andrew
3 years, 5 months
RE: openstack-discuss Digest, Vol 60, Issue 10 ~ Fwaas in Openstack 2023.1 antelope
by Asma Naz Shariq
Hi Openstack Community!
I have set up OpenStack release 2023.1 Antelope with Kolla-Ansible.
However, I noticed that there is no enable_plugin option in the
/etc/kolla/global.yml file. Now, I am trying to install FWaaS
(Firewall-as-a-Service) following the instructions provided in this
OpenStack's Firewall-as-a-Service (FWaaS) v2 scenario documentation.
The documentation states: "On Ubuntu and CentOS, modify the [fwaas] section
in the /etc/neutron/fwaas_driver.ini file instead of
/etc/neutron/neutron.conf." Unfortunately, I cannot find the fwaas_driver.ini
file in the neutron-server, neutron-l3-agent, or neutron-openvswitch-agent
containers.
Can someone guide me on how to properly install FWaaS in a Kolla environment
using the information from the provided link?
Best,
-----Original Message-----
From: openstack-discuss-request(a)lists.openstack.org
<openstack-discuss-request(a)lists.openstack.org>
Sent: Friday, October 6, 2023 1:27 PM
To: openstack-discuss(a)lists.openstack.org
Subject: openstack-discuss Digest, Vol 60, Issue 10
Send openstack-discuss mailing list submissions to
openstack-discuss(a)lists.openstack.org
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
or, via email, send a message with subject or body 'help' to
openstack-discuss-request(a)lists.openstack.org
You can reach the person managing the list at
openstack-discuss-owner(a)lists.openstack.org
When replying, please edit your Subject line so it is more specific than
"Re: Contents of openstack-discuss digest..."
Today's Topics:
1. Re: [ops] [nova] "invalid argument: shares xxx must be in
range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update
(Massimo Sgaravatto)
2. [TC][Monasca] Proposal to mark Monasca as an inactive project
(Sławek Kapłoński)
3. [neutron] Neutron drivers meeting cancelled
(Rodolfo Alonso Hernandez)
----------------------------------------------------------------------
Message: 1
Date: Fri, 6 Oct 2023 09:10:47 +0200
From: Massimo Sgaravatto <massimo.sgaravatto(a)gmail.com>
To: Franciszek Przewoźny <fprzewozny(a)opera.com>
Cc: OpenStack Discuss <openstack-discuss(a)lists.openstack.org>,
smooney(a)redhat.com
Subject: Re: [ops] [nova] "invalid argument: shares xxx must be in
range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update
Message-ID:
<CALaZjRGh6xnzX12cMgDTYx2yJYddUD9X3oh60JWnrB33ZdEf_Q(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Thanks a lot Franciszek !
I was indeed seeing the problem with a big VM with 56 vCPUs, while I didn't see
the issue with a tiny instance
Thanks again !
Cheers, Massimo
On Fri, Oct 6, 2023 at 8:58 AM Franciszek Przewoźny <fprzewozny(a)opera.com>
wrote:
> Hi Massimo,
>
> We are using Ubuntu for our environments and we experienced the same
> issue during upgrade from Yoga/Focal to Yoga/Jammy. On Yoga/Focal
> cgroups_v1 were used, and cpu_shares parameter value was cpu count *
> 1024. From Jammy
> cgroups_v2 have been implemented, and cpu_shares value has been set by
> default to 100. It has hard limit of 10000, so flavors with more than
> 9vCPUs won't fit. If you need to fix this issue without stopping VMs,
> you can set cpu_shares with libvirt command: virsh schedinfo $domain
> --live
> cpu_shares=100
> for more details about virsh schedinfo visit:
> https://libvirt.org/manpages/virsh.html#schedinfo
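> For example, with a hypothetical domain name of instance-00000042:
>
> # show the current scheduler parameters
> virsh schedinfo instance-00000042
> # set cpu_shares on the running domain (add --config to persist it)
> virsh schedinfo instance-00000042 --live cpu_shares=100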
>
> BR,
> Franciszek
>
> On 5 Oct 2023, at 21:17, smooney(a)redhat.com wrote:
>
> On Thu, 2023-10-05 at 16:53 +0200, Massimo Sgaravatto wrote:
>
> Dear all
>
> We have recently updated openstack nova on some AlmaLinux9 compute
> nodes running Yoga from 1:25.2.0 to 1.25.2.1. After this operation
> some VMs don't start anymore. In the log it is reported:
>
> libvirt.libvirtError: invalid argument: shares \'57344\' must be in
> range [1, 10000]\n'}
>
> libvirt version is 9.0.0-10.3
>
>
> A quick google search suggests that it is something related to cgroups
> and it is fixed in libvirt >= 9.1 (which is not yet in the almalinux9
repos).
> Did I get it right ?
>
> not quite
>
> it is related to cgroups, but the cause is that the maximum value of
> shares, i.e. cpu_shares, changed from max int in cgroups_v1 to 10000 in
> cgroups_v2. so the issue is the vm requested a cpu share value of 57344
> which is not valid on an OS that is using cgroups_v2. libvirt will not
> clamp the value nor will nova.
> you have to change the value in your flavor and resize the vm.
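> One hedged way to do that (assuming the quota:cpu_shares extra spec is the
> value to change; the flavor/server names and sizes below are placeholders):
>
> openstack flavor create m1.big-v2 --vcpus 56 --ram 65536 --disk 100 --property quota:cpu_shares=2048
> openstack server resize --flavor m1.big-v2 my-server
> # then confirm the resize once the server is active again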
>
>
>
>
> Thanks, Massimo
>
>
>
>
>
2 years, 1 month
Re: [kolla-ansible][xena] Cell database HA/redundancy?
by Michal Arbet
Hi,
I think my patchsets are in good shape and ready to merge; they just need some
reviewers finally :(.
I am glad that there is this type of question; who knows, maybe it will
speed up the merge process :)
You can still backport it to your downstream branches; we have been using it
this way for several kolla-ansible versions and it works.
Thanks,
Michal
Michal Arbet
Openstack Engineer
Ultimum Technologies a.s.
Na Poříčí 1047/26, 11000 Praha 1
Czech Republic
+420 604 228 897
michal.arbet(a)ultimum.io
https://ultimum.io
LinkedIn <https://www.linkedin.com/company/ultimum-technologies> | Twitter
<https://twitter.com/ultimumtech> | Facebook
<https://www.facebook.com/ultimumtechnologies/timeline>
On Wed, 8 Jun 2022 at 16:37, Norrie, Andrew <Andrew.Norrie(a)cgg.com> wrote:
> Hi Mark,
>
>
>
> Thanks for the reply.
>
>
>
> I’ll look into the HAProxy “custom config” as suggested and using
> different ports for the cell databases.
>
> Maybe that can be worked in.
>
>
>
> We are expanding our OpenStack/kolla-ansible personnel/knowledge so will
> look into the possibility for
>
> us to contribute as reviewers/testers as a start.
>
>
>
> Best regards .... Andrew
>
>
>
> *From:* Mark Goddard <mark(a)stackhpc.com>
> *Sent:* Monday, June 6, 2022 2:22 AM
> *To:* Norrie, Andrew <Andrew.Norrie(a)cgg.com>
> *Cc:* openstack-discuss(a)lists.openstack.org
> *Subject:* Re: [kolla-ansible][xena] Cell database HA/redundancy?
>
>
>
>
>
>
>
> On Fri, 3 Jun 2022 at 14:10, Norrie, Andrew <Andrew.Norrie(a)cgg.com> wrote:
>
>
>
> Hi,
>
>
>
> We are currently planning some large OpenStack deployments utilizing
> kolla-ansible and
>
> I’m curious what folks are doing with the cell database HA/redundancy.
>
>
>
> With kolla-ansible (xena) it appears that a loadbalancer setup is only
> allowed for the
>
> default database shard (shard 0)... reference:
> https://docs.openstack.org/kolla-ansible/latest/reference/databases/mariadb…
>
>
>
> If we are setting up separate cell database shards with Galera, I’m
> wondering if there is a convenient work-around or configuration for
> implementation of HA for these cell databases.
>
>
>
> In the inventory group_vars directory you can specify the group variables
> (for each cell database) as:
>
>
>
> nova_cell_database_address:
>
> nova_cell_database_group:
>
>
>
> but these aren’t virtual IP hosts/addresses (non default db shard). This
> works perfectly fine with
>
> a single server cell database but not Galera. If that db server goes down
> the cell instance information is lost.
>
>
>
>
>
> Hi Andrew,
>
>
>
> You are correct - only the first shard is load balanced. The sharding
> feature was actually just the first patch in a series intended to support
> proxysql. I believe this would add the functionality you are looking for.
> In fact, the proposed test case for proxysql in our CI is a multi-cell
> deployment.
>
>
>
> Here is the patch series:
>
>
>
> https://review.opendev.org/q/topic:proxysql
>
>
>
> Unfortunately it has been stuck for a while, largely due to core reviewer
> bandwidth. The author is Michal Arbet, who is often on IRC as kevko. I'd
> suggest registering your interest in these patches via gerrit, and at one
> of the weekly kolla IRC meetings (this week is cancelled due to the
> summit). If you have time available, then testing and providing reviews
> could help to get the patches moving.
>
>
>
> In the meantime, you can configure HAProxy to load balance additional
> services, by placing an HAProxy config snippet in
> /etc/kolla/config/haproxy/services.d/*.cfg.
>
>
>
> Regards,
>
> Mark
>
>
>
> Many thanks ... Andrew
3 years, 5 months
Re: [all][ptg] Pre-PTG discussion: New Keystone Middleware Feature Supporting OAuth2.0 with External Authorization Service
by Julia Kreger
On Fri, Mar 24, 2023 at 9:55 AM Dave Wilde <dwilde(a)redhat.com> wrote:
> I’m happy to book an additional time slot(s) specifically for this
> discussion if something other than what we currently have works better for
> everyone. Please let me know.
>
> /Dave
> On Mar 24, 2023 at 10:49 AM -0500, Hiromu Asahina <
> hiromu.asahina.az(a)hco.ntt.co.jp>, wrote:
>
> As Keystone canceled Monday 14 UTC timeslot [1], I'd like to hold this
> discussion on Monday 15 UTC timeslot. If it doesn't work for Ironic
> members, please kindly reply convenient timeslots.
>
>
Unfortunately, I took the last few days off and I'm only seeing this now.
My morning is booked up aside from the original time slot which was
discussed.
Maybe there is a time later in the week which could work?
>
> [1] https://ptg.opendev.org/ptg.html
>
> Thanks,
>
> Hiromu Asahina
>
> On 2023/03/22 20:01, Hiromu Asahina wrote:
>
> Thanks!
>
> I look forward to your reply.
>
> On 2023/03/22 1:29, Julia Kreger wrote:
>
> No worries!
>
> I think that time works for me. I'm not sure it will work for
> everyone, but
> I can proxy information back to the whole of the ironic project as we
> also
> have the question of this functionality listed for our Operator Hour in
> order to help ironic gauge interest.
>
> -Julia
>
> On Tue, Mar 21, 2023 at 9:00 AM Hiromu Asahina <
> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>
> I apologize that I couldn't reply before the Ironic meeting on Monday.
>
> I need one slot to discuss this topic.
>
> I asked Keystone today and Monday's first Keystone slot (14 UTC Mon,
> 27)[1,2] works for them. Does this work for Ironic? I understand not all
> Ironic members will join this discussion, so I hope we can arrange a
> convenient date for you two at least and, hopefully, for those
> interested in this topic.
>
> [1]
>
>
> https://www.timeanddate.com/worldclock/fixedtime.html?iso=2023-03-27T14:00:…
> [2] https://ptg.opendev.org/ptg.html
>
> Thanks,
> Hiromu Asahina
>
> On 2023/03/17 23:29, Julia Kreger wrote:
>
> I'm not sure how many Ironic contributors would be the ones to attend a
> discussion, in part because this is disjointed from the items they need
>
> to
>
> focus on. It is much more of a "big picture" item for those of us
> who are
> leaders in the project.
>
> I think it would help to understand how much time you expect the
>
> discussion
>
> to take to determine a path forward and how we can collaborate. Ironic
>
> has
>
> a huge number of topics we want to discuss during the PTG, and I
> suspect
> our team meeting on Monday next week should yield more
> interest/awareness
> as well as an amount of time for each topic which will aid us in
>
> scheduling.
>
>
> If you can let us know how long, then I think we can figure out when
> the
> best day/time will be.
>
> Thanks!
>
> -Julia
>
>
>
>
>
> On Fri, Mar 17, 2023 at 2:57 AM Hiromu Asahina <
> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>
> Thank you for your reply.
>
> I'd like to decide the time slot for this topic.
> I just checked PTG schedule [1].
>
> We have the following time slots. Which one is convenient to get together?
> (I didn't get reply but I listed Barbican, as its cores are almost the
> same as Keystone)
>
> Mon, 27:
>
> - 14 (keystone)
> - 15 (keystone)
>
> Tue, 28
>
> - 13 (barbican)
> - 14 (keystone, ironic)
> - 15 (keystone, ironic)
> - 16 (ironic)
>
> Wed, 29
>
> - 13 (ironic)
> - 14 (keystone, ironic)
> - 15 (keystone, ironic)
> - 21 (ironic)
>
> Thanks,
>
> [1] https://ptg.opendev.org/ptg.html
>
> Hiromu Asahina
>
>
> On 2023/02/11 1:41, Jay Faulkner wrote:
>
> I think it's safe to say the Ironic community would be very
> invested in
> such an effort. Let's make sure the time chosen for vPTG with this is
>
> such
>
> that Ironic contributors can attend as well.
>
> Thanks,
> Jay Faulkner
> Ironic PTL
>
> On Fri, Feb 10, 2023 at 7:40 AM Hiromu Asahina <
> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>
> Hello Everyone,
>
> Recently, Tacker and Keystone have been working together on a new
>
> Keystone
>
> Middleware that can work with external authentication
> services, such as Keycloak. The code has already been submitted [1],
>
> but
>
> we want to make this middleware a generic plugin that works
> with as many OpenStack services as possible. To that end, we would
>
> like
>
> to
>
> hear from other projects with similar use cases
> (especially Ironic and Barbican, which run as standalone
> services). We
> will make a time slot to discuss this topic at the next vPTG.
> Please contact me if you are interested and available to
> participate.
>
> [1]
>
> https://review.opendev.org/c/openstack/keystonemiddleware/+/868734
>
>
> --
> Hiromu Asahina
>
>
>
>
>
>
> --
> ◇-------------------------------------◇
> NTT Network Innovation Center
> Hiromu Asahina
> -------------------------------------
> 3-9-11, Midori-cho, Musashino-shi
> Tokyo 180-8585, Japan
> Phone: +81-422-59-7008
> Email: hiromu.asahina.az(a)hco.ntt.co.jp
> ◇-------------------------------------◇
>
>
>
>
> --
> ◇-------------------------------------◇
> NTT Network Innovation Center
> Hiromu Asahina
> -------------------------------------
> 3-9-11, Midori-cho, Musashino-shi
> Tokyo 180-8585, Japan
> Phone: +81-422-59-7008
> Email: hiromu.asahina.az(a)hco.ntt.co.jp
> ◇-------------------------------------◇
>
>
>
>
>
> --
> ◇-------------------------------------◇
> NTT Network Innovation Center
> Hiromu Asahina
> -------------------------------------
> 3-9-11, Midori-cho, Musashino-shi
> Tokyo 180-8585, Japan
> Phone: +81-422-59-7008
> Email: hiromu.asahina.az(a)hco.ntt.co.jp
> ◇-------------------------------------◇
>
>
>
2 years, 7 months
RE: openstack-discuss Digest, Vol 60, Issue 51 Multinode Cluster setup
by Asma Naz Shariq
Dear all,
I want to deploy OpenStack through kolla-ansible release Zed. The cluster architecture is as follows:
1- Controller Node is the same as the Network Node.
2- Compute Node is the same as the Storage Node.
On the Controller Node, I have followed all the steps mentioned at https://docs.openstack.org/project-deploy-guide/kolla-ansible/zed/quickstar… and set the host names with IPs in /etc/hosts on the controller node so it can resolve the compute node.
When I tried to check the connectivity of the controller node with the compute node, it failed with the following error:
Controller Node IP | FAILED! => {
"msg": "Missing sudo password"
}
localhost | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
Compute Node IP | FAILED! => {
"msg": "Missing sudo password
Can anyone guide me on how to resolve this error and set up a multi-node environment? Which steps should I follow to build a baseline cluster of one controller node and one compute node?
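One common remedy (a sketch only; the user name, group names and the password placeholder are assumptions) is to give Ansible the become credentials for each host in the inventory, or to configure passwordless sudo for the deploy user:
[control]
controller01 ansible_user=ubuntu ansible_become=true ansible_become_password=<sudo-password>
[compute]
compute01 ansible_user=ubuntu ansible_become=true ansible_become_password=<sudo-password>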
Thanks,
-----Original Message-----
From: openstack-discuss-request(a)lists.openstack.org <openstack-discuss-request(a)lists.openstack.org>
Sent: Tuesday, October 24, 2023 9:42 AM
To: openstack-discuss(a)lists.openstack.org
Subject: openstack-discuss Digest, Vol 60, Issue 51
Send openstack-discuss mailing list submissions to
openstack-discuss(a)lists.openstack.org
To subscribe or unsubscribe via email, send a message with subject or body 'help' to
openstack-discuss-request(a)lists.openstack.org
You can reach the person managing the list at
openstack-discuss-owner(a)lists.openstack.org
When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..."
Today's Topics:
1. [ptls] [tc] 2023 OpenStack Deployment Data (Allison Price)
2. Re: [neutron][openvswitch][antelope] Bridge eth1 for physical network provider does not exist
(Sławek Kapłoński)
3. Environmental Sustainability WG goes to the PTG! (Kendall Nelson)
4. [tacker] Cancelling IRC meetings (Yasufumi Ogawa)
5. Re: openstack Vm shutoff by itself (AJ_ sunny)
----------------------------------------------------------------------
Message: 1
Date: Mon, 23 Oct 2023 15:16:05 -0500
From: Allison Price <allison(a)openinfra.dev>
Subject: [ptls] [tc] 2023 OpenStack Deployment Data
To: openstack-discuss <openstack-discuss(a)lists.openstack.org>
Message-ID: <AB576C9A-95CB-4BCD-9215-0AC1D3E64A25(a)openinfra.dev>
Content-Type: text/plain; charset=us-ascii
Hi everyone,
Apologies for sending this during the PTG as I intended to send this prior, but below is a link to the anonymized data including responses to many project or TC contributed questions. I have removed a significant number of fields to preserve the anonymity of the organizations, but if there is a particular data point that would help your team, please let me know.
Cheers,
Allison
https://docs.google.com/spreadsheets/d/1ehlabuqOnNK4M7xtN1GXvhLoeHn4JBaISih…
------------------------------
Message: 2
Date: Mon, 23 Oct 2023 22:46:23 +0200
From: Sławek Kapłoński <skaplons(a)redhat.com>
Subject: Re: [neutron][openvswitch][antelope] Bridge eth1 for
physical network provider does not exist
To: "openstack-discuss(a)lists.openstack.org"
<openstack-discuss(a)lists.openstack.org>
Cc: "ddorra(a)t-online.de" <ddorra(a)t-online.de>
Message-ID: <5895770.MhkbZ0Pkbq@p1gen4>
Content-Type: multipart/signed; boundary="nextPart1790705.VLH7GnMWUR";
micalg="pgp-sha256"; protocol="application/pgp-signature"
Hi,
You need to create a bridge (e.g. br-ex), add your eth1 to that bridge, and put the name of the bridge in bridge_mappings.
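For example (using the interface name from your mail; the bridge name br-ex is just a convention):
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
and then in /etc/neutron/plugins/ml2/openvswitch_agent.ini:
[ovs]
bridge_mappings = provider:br-ex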
On Monday, 23 October 2023 at 21:44:43 CEST, ddorra(a)t-online.de wrote:
>
> Hello,
>
> I'm installing OpenStack Antelope with network option 2 (following
> https://docs.openstack.org/neutron/2023.2/install/compute-install-option2-ubuntu.html).
> The interface name of the provider network is eth1, so I put this in
> bridge_mappings.
> The local IP is from the management network.
>
> #------------------------------------------------------
> # /etc/neutron/plugins/ml2/openvswitch_agent.ini
> #
> [DEFAULT]
> [agent]
> [dhcp]
> [network_log]
> [ovs]
> bridge_mappings = provider:eth1
> [securitygroup]
> enable_security_group = true
> firewall_driver = openvswitch
> [vxlan]
> local_ip = 192.168.2.71
> l2_population = true
> #--------------------------------------------------------
>
> However, the neutron log complains that bridge eth1 does not exist.
> Launching of instances fails
> neutron-openvswitch-agent.log:
> 2023-10-23 19:26:00.062 17604 INFO os_ken.base.app_manager [-]
> instantiating app os_ken.app.ofctl.service of OfctlService
> 2023-10-23 19:26:00.062 17604 INFO
> neutron.agent.agent_extensions_manager
> [-] Loaded agent extensions: []
> 2023-10-23 19:26:00.108 17604 INFO
> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_brid
> ge [-] Bridge br-int has datapath-ID 00005e65b1943a49
> 2023-10-23 19:26:02.438 17604 INFO
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-]
> Mapping physical network provider to bridge eth1 vvvvvvv
> 2023-10-23 19:26:02.438 17604 ERROR
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-]
> Bridge
> eth1 for physical network provider does not exist. Agent terminated!
> ^^^^^^^
> 2023-10-23 19:26:03.914 17619 INFO neutron.common.config [-] Logging
> enabled!
> 2023-10-23 19:26:03.914 17619 INFO neutron.common.config [-]
> /usr/bin/neutron-openvswitch-agent version 20.4.0
> 2023-10-23 19:26:03.914 17619 INFO os_ken.base.app_manager [-] loading
> app
> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_oske
> napp
>
> -----------------------------------------
> Additional information
> root@control:/var/log/neutron# openstack network list
> +--------------------------------------+----------+--------------------------------------+
> | ID | Name | Subnets |
> +--------------------------------------+----------+--------------------------------------+
> | 32f53edc-a394-419f-a438-a664183ee618 | doznet |
> e38e25c2-4683-48fb-a7a0-7cbd7d276ee1 |
> | 74e3ee6a-1116-4ff6-9e99-530c3cbaef28 | provider |
> 2d3c3de4-9a0d-4a21-9af1-8ecb9f6f16d5 |
> +--------------------------------------+----------+--------------------------------------+
> root@control:/var/log/neutron#
>
> root@compute1:/etc/neutron/plugins/ml2# ip a
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel
> state UP group default qlen 1000 link/ether 08:00:27:9f:c7:89 brd
> ff:ff:ff:ff:ff:ff inet 192.168.2.71/24 brd 192.168.2.255 scope global
> eth0 valid_lft forever preferred_lft forever
> inet6 fe80::a00:27ff:fe9f:c789/64 scope link valid_lft forever
> preferred_lft forever
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel
> state UP group default qlen 1000 link/ether 08:00:27:91:60:58 brd
> ff:ff:ff:ff:ff:ff inet 10.0.0.71/24 brd 10.0.0.255 scope global eth1
> valid_lft forever preferred_lft forever
> inet6 fe80::a00:27ff:fe91:6058/64 scope link valid_lft forever
> preferred_lft forever
> 4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> group default qlen 1000 link/ether da:3b:3a:a0:59:97 brd
> ff:ff:ff:ff:ff:ff
> 5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
> default qlen 1000 link/ether 5e:65:b1:94:3a:49 brd ff:ff:ff:ff:ff:ff
> 6: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
> state DOWN group default qlen 1000 link/ether 52:54:00:53:87:59 brd
> ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope
> global virbr0 valid_lft forever preferred_lft forever
>
>
> What are the proper settings?
> Any help appreciated
> Dieter
>
>
>
--
Slawek Kaplonski
Principal Software Engineer
Red Hat
------------------------------
Message: 3
Date: Mon, 23 Oct 2023 16:36:20 -0500
From: Kendall Nelson <kennelson11(a)gmail.com>
Subject: Environmental Sustainability WG goes to the PTG!
To: OpenStack Discuss <openstack-discuss(a)lists.openstack.org>
Message-ID:
<CAJ6yrQgbdvFW7NcUGJ5zLyXagx=hMt6R7ib0tk=1Y+sm06MY=w(a)mail.gmail.com>
Content-Type: multipart/alternative;
boundary="000000000000e3ee26060869044d"
Hello Everyone!
Last minute, but I wanted to make sure you saw that we are planning to meet at the PTG, tomorrow (Tuesday Oct 24th) from 13:00 UTC. I have the room booked for two hours, but I don't think we will need all of that time.
If there are topics you would like to discuss, please add them to the agenda[1].
Hope to see you there!
-Kendall Nelson
[1] https://etherpad.opendev.org/p/oct2023-ptg-env-sus
2 years
Re: openstack-discuss Digest, Vol 60, Issue 10 ~ Fwaas in Openstack 2023.1 antelope
by Lajos Katona
Hi Asma,
The enable_plugin option is for devstack-based deployments, I suppose.
To tell the truth I am not familiar with kolla, but I found this page which
speaks about enabling neutron extensions like sfc or vpnaas:
https://docs.openstack.org/kolla-ansible/4.0.2/networking-guide.html
and these in the group_vars/all.yml:
https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/group…
So to enable vpnaas, I suppose setting enable_neutron_vpnaas: "yes" does all
the magic to install and configure neutron-vpnaas.
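For example, such flags go in the Kolla globals file, roughly like this (just a sketch; the vpnaas flag is the one from the link above):
# /etc/kolla/globals.yml
enable_neutron_vpnaas: "yes"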
I can't find neutron-fwaas, but perhaps this is just my lack of experience
with kolla.
To see what is necessary to configure fwaas, I would check the devstack
plugin:
https://opendev.org/openstack/neutron-fwaas/src/branch/master/devstack
Best wishes.
Lajos (lajoskatona)
Asma Naz Shariq <asma.naz(a)techavenue.biz> wrote (on Fri, 6 Oct 2023 at 14:37):
> Hi Openstack Community!
>
> I have set up OpenStack release 2023.1 antelope with Kolla-Ansible .
> However, I noticed that there is no enable_plugin option in the
> /etc/kolla/global.yml file. Now, I am trying to install FWaaS
> (Firewall-as-a-Service) following the instructions provided in this
> OpenStack's Firewall-as-a-Service (FWaaS) v2 scenario documentation.
>
> The documentation states, On Ubuntu and CentOS, modify the [fwaas] section
> in the /etc/neutron/fwaas_driver.ini file instead of
> /etc/neutron/neutron.conf. Unfortunately, I cannot find the
> fwaas_driver.ini
> file in the neutron-server, neutron-l3-agent, or neutron-openvswitch-agent
> containers
>
> Can someone guide me on how to properly install FWaaS in a Kolla
> environment
> using the information from the provided link?
>
> Best,
>
> -----Original Message-----
> From: openstack-discuss-request(a)lists.openstack.org
> <openstack-discuss-request(a)lists.openstack.org>
> Sent: Friday, October 6, 2023 1:27 PM
> To: openstack-discuss(a)lists.openstack.org
> Subject: openstack-discuss Digest, Vol 60, Issue 10
>
> Send openstack-discuss mailing list submissions to
> openstack-discuss(a)lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>
> https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
>
> or, via email, send a message with subject or body 'help' to
> openstack-discuss-request(a)lists.openstack.org
>
> You can reach the person managing the list at
> openstack-discuss-owner(a)lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific than
> "Re: Contents of openstack-discuss digest..."
>
>
> Today's Topics:
>
> 1. Re: [ops] [nova] "invalid argument: shares xxx must be in
> range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update
> (Massimo Sgaravatto)
> 2. [TC][Monasca] Proposal to mark Monasca as an inactive project
> (Sławek Kapłoński)
> 3. [neutron] Neutron drivers meeting cancelled
> (Rodolfo Alonso Hernandez)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 6 Oct 2023 09:10:47 +0200
> From: Massimo Sgaravatto <massimo.sgaravatto(a)gmail.com>
> To: Franciszek Przewoźny <fprzewozny(a)opera.com>
> Cc: OpenStack Discuss <openstack-discuss(a)lists.openstack.org>,
> smooney(a)redhat.com
> Subject: Re: [ops] [nova] "invalid argument: shares xxx must be in
> range [1, 10000]" after 1:25.2.0 to 1.25.2.1. update
> Message-ID:
> <
> CALaZjRGh6xnzX12cMgDTYx2yJYddUD9X3oh60JWnrB33ZdEf_Q(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Thanks a lot Franciszek !
> I was indeed seeing the problem with a big VM with 56 vCPUs, while I didn't see
> the issue with a tiny instance
>
> Thanks again !
>
> Cheers, Massimo
>
> On Fri, Oct 6, 2023 at 8:58 AM Franciszek Przewoźny <fprzewozny(a)opera.com>
> wrote:
>
> > Hi Massimo,
> >
> > We are using Ubuntu for our environments and we experienced the same
> > issue during upgrade from Yoga/Focal to Yoga/Jammy. On Yoga/Focal
> > cgroups_v1 were used, and cpu_shares parameter value was cpu count *
> > 1024. From Jammy
> > cgroups_v2 have been implemented, and cpu_shares value has been set by
> > default to 100. It has hard limit of 10000, so flavors with more than
> > 9vCPUs won't fit. If you need to fix this issue without stopping VMs,
> > you can set cpu_shares with libvirt command: virsh schedinfo $domain
> > --live
> > cpu_shares=100
> > for more details about virsh schedinfo visit:
> > https://libvirt.org/manpages/virsh.html#schedinfo
> >
> > BR,
> > Franciszek
> >
> > On 5 Oct 2023, at 21:17, smooney(a)redhat.com wrote:
> >
> > On Thu, 2023-10-05 at 16:53 +0200, Massimo Sgaravatto wrote:
> >
> > Dear all
> >
> > We have recently updated openstack nova on some AlmaLinux9 compute
> > nodes running Yoga from 1:25.2.0 to 1.25.2.1. After this operation
> > some VMs don't start anymore. In the log it is reported:
> >
> > libvirt.libvirtError: invalid argument: shares \'57344\' must be in
> > range [1, 10000]\n'}
> >
> > libvirt version is 9.0.0-10.3
> >
> >
> > A quick google search suggests that it is something related to cgroups
> > and it is fixed in libvirt >= 9.1 (which is not yet in the almalinux9
> repos).
> > Did I get it right ?
> >
> > not quite
> >
> > it is related to cgroups, but the cause is that the maximum value of
> > shares, i.e. cpu_shares, changed from max int in cgroups_v1 to 10000 in
> > cgroups_v2. so the issue is the vm requested a cpu share value of 57344
> > which is not valid on an OS that is using cgroups_v2. libvirt will not
> > clamp the value nor will nova.
> > you have to change the value in your flavor and resize the vm.
> >
> >
> >
> >
> > Thanks, Massimo
> >
> >
> >
> >
> >
>
2 years, 1 month
Re: [all][ptg] Pre-PTG discussion: New Keystone Middleware Feature Supporting OAuth2.0 with External Authorization Service
by Hiromu Asahina
As Keystone canceled Monday 14 UTC timeslot [1], I'd like to hold this
discussion on Monday 15 UTC timeslot. If it doesn't work for Ironic
members, please kindly reply convenient timeslots.
[1] https://ptg.opendev.org/ptg.html
Thanks,
Hiromu Asahina
On 2023/03/22 20:01, Hiromu Asahina wrote:
> Thanks!
>
> I look forward to your reply.
>
> On 2023/03/22 1:29, Julia Kreger wrote:
>> No worries!
>>
>> I think that time works for me. I'm not sure it will work for
>> everyone, but
>> I can proxy information back to the whole of the ironic project as we
>> also
>> have the question of this functionality listed for our Operator Hour in
>> order to help ironic gauge interest.
>>
>> -Julia
>>
>> On Tue, Mar 21, 2023 at 9:00 AM Hiromu Asahina <
>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>
>>> I apologize that I couldn't reply before the Ironic meeting on Monday.
>>>
>>> I need one slot to discuss this topic.
>>>
>>> I asked Keystone today and Monday's first Keystone slot (14 UTC Mon,
>>> 27)[1,2] works for them. Does this work for Ironic? I understand not all
>>> Ironic members will join this discussion, so I hope we can arrange a
>>> convenient date for you two at least and, hopefully, for those
>>> interested in this topic.
>>>
>>> [1]
>>>
>>> https://www.timeanddate.com/worldclock/fixedtime.html?iso=2023-03-27T14:00:…
>>> [2] https://ptg.opendev.org/ptg.html
>>>
>>> Thanks,
>>> Hiromu Asahina
>>>
>>> On 2023/03/17 23:29, Julia Kreger wrote:
>>>> I'm not sure how many Ironic contributors would be the ones to attend a
>>>> discussion, in part because this is disjointed from the items they need
>>> to
>>>> focus on. It is much more of a "big picture" item for those of us
>>>> who are
>>>> leaders in the project.
>>>>
>>>> I think it would help to understand how much time you expect the
>>> discussion
>>>> to take to determine a path forward and how we can collaborate. Ironic
>>> has
>>>> a huge number of topics we want to discuss during the PTG, and I
>>>> suspect
>>>> our team meeting on Monday next week should yield more
>>>> interest/awareness
>>>> as well as an amount of time for each topic which will aid us in
>>> scheduling.
>>>>
>>>> If you can let us know how long, then I think we can figure out when
>>>> the
>>>> best day/time will be.
>>>>
>>>> Thanks!
>>>>
>>>> -Julia
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Mar 17, 2023 at 2:57 AM Hiromu Asahina <
>>>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>>>
>>>>> Thank you for your reply.
>>>>>
>>>>> I'd like to decide the time slot for this topic.
>>>>> I just checked PTG schedule [1].
>>>>>
>>>>> We have the following time slots. Which one is convenient to get together?
>>>>> (I didn't get reply but I listed Barbican, as its cores are almost the
>>>>> same as Keystone)
>>>>>
>>>>> Mon, 27:
>>>>>
>>>>> - 14 (keystone)
>>>>> - 15 (keystone)
>>>>>
>>>>> Tue, 28
>>>>>
>>>>> - 13 (barbican)
>>>>> - 14 (keystone, ironic)
>>>>> - 15 (keystone, ironic)
>>>>> - 16 (ironic)
>>>>>
>>>>> Wed, 29
>>>>>
>>>>> - 13 (ironic)
>>>>> - 14 (keystone, ironic)
>>>>> - 15 (keystone, ironic)
>>>>> - 21 (ironic)
>>>>>
>>>>> Thanks,
>>>>>
>>>>> [1] https://ptg.opendev.org/ptg.html
>>>>>
>>>>> Hiromu Asahina
>>>>>
>>>>>
>>>>> On 2023/02/11 1:41, Jay Faulkner wrote:
>>>>>> I think it's safe to say the Ironic community would be very
>>>>>> invested in
>>>>>> such an effort. Let's make sure the time chosen for vPTG with this is
>>>>> such
>>>>>> that Ironic contributors can attend as well.
>>>>>>
>>>>>> Thanks,
>>>>>> Jay Faulkner
>>>>>> Ironic PTL
>>>>>>
>>>>>> On Fri, Feb 10, 2023 at 7:40 AM Hiromu Asahina <
>>>>>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>>>>>
>>>>>>> Hello Everyone,
>>>>>>>
>>>>>>> Recently, Tacker and Keystone have been working together on a new
>>>>> Keystone
>>>>>>> Middleware that can work with external authentication
>>>>>>> services, such as Keycloak. The code has already been submitted [1],
>>> but
>>>>>>> we want to make this middleware a generic plugin that works
>>>>>>> with as many OpenStack services as possible. To that end, we would
>>> like
>>>>> to
>>>>>>> hear from other projects with similar use cases
>>>>>>> (especially Ironic and Barbican, which run as standalone
>>>>>>> services). We
>>>>>>> will make a time slot to discuss this topic at the next vPTG.
>>>>>>> Please contact me if you are interested and available to
>>>>>>> participate.
>>>>>>>
>>>>>>> [1]
>>> https://review.opendev.org/c/openstack/keystonemiddleware/+/868734
>>>>>>>
>>>>>>> --
>>>>>>> Hiromu Asahina
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>> ◇-------------------------------------◇
>>>>> NTT Network Innovation Center
>>>>> Hiromu Asahina
>>>>> -------------------------------------
>>>>> 3-9-11, Midori-cho, Musashino-shi
>>>>> Tokyo 180-8585, Japan
>>>>> Phone: +81-422-59-7008
>>>>> Email: hiromu.asahina.az(a)hco.ntt.co.jp
>>>>> ◇-------------------------------------◇
>>>>>
>>>>>
>>>>
>>>
>>> --
>>> ◇-------------------------------------◇
>>> NTT Network Innovation Center
>>> Hiromu Asahina
>>> -------------------------------------
>>> 3-9-11, Midori-cho, Musashino-shi
>>> Tokyo 180-8585, Japan
>>> Phone: +81-422-59-7008
>>> Email: hiromu.asahina.az(a)hco.ntt.co.jp
>>> ◇-------------------------------------◇
>>>
>>>
>>
>
--
◇-------------------------------------◇
NTT Network Innovation Center
Hiromu Asahina
-------------------------------------
3-9-11, Midori-cho, Musashino-shi
Tokyo 180-8585, Japan
Phone: +81-422-59-7008
Email: hiromu.asahina.az(a)hco.ntt.co.jp
◇-------------------------------------◇
2 years, 7 months
[placement] update 19-12
by Chris Dent
HTML: https://anticdent.org/placement-update-19-12.html
Placement update 19-12. Nearing 1/4 of the way through the year.
I won't be around on Monday, if someone else can chair the
[meeting](http://eavesdrop.openstack.org/#Placement_Team_Meeting)
that would be great. Or feel free to cancel it.
# Most Important
An RC2 was cut earlier this week, expecting it to be the last, but
there are a [couple of
patches](https://review.openstack.org/#/q/project:openstack/placement+branc…
which could be put in an RC3 if we were inclined that way.
Discuss.
We merged a first suite of [contribution
guidelines](https://docs.openstack.org/placement/latest/contributor/contrib….
These are worth reading as they explain how to manage bugs, start
new features, and be a good reviewer. Because of the introduction of
StoryBoard, processes are different from what you may have been used
to in Nova.
Because of limited time and space and conflicting responsibilities
the Placement team will be doing a [Virtual
Pre-PTG](http://lists.openstack.org/pipermail/openstack-discuss/2019-March/….
# What's Changed
* The contribution guidelines linked above describe how to manage
specs, which will now be in-tree. If you have a spec to propose or
re-propose (from stein in nova), it now goes in
``doc/source/specs/train/approved/``.
* Some [image type traits](https://review.openstack.org/648147) have
merged (to be used in a nova-side request pre-filter), but the
change has exposed an issue we'll need to resolve: os-traits and
os-resource-classes are under the cycle-with-intermediary style
release which means that at this time in the cycle it is difficult
to make a release which can delay work. We could switch to
independent. This would make sense for libraries that are
basically lists of strings. It's hard to break that. We could also
investigate making os-traits and os-resource-classes
required-projects in job templates in zuul. This would allow them
to be "tox siblings". Or we could wait. Please express an
opinion if you have one.
* In discussion in `#openstack-nova` about the patch to [delete
placement from nova](https://review.openstack.org/#/c/618215/), it
was decided that rather than merge that after the final RC, we
would wait until the PTG. There is discussion on the patch which
attempts to explain the reasons why.
# Specs/Blueprint/Features
* The spec for [forbidden aggregate
membership](https://docs.openstack.org/placement/latest/specs/train/approve…
has merged.
* The two traits-related specs from stein will need to be
re-proposed to placement for train:
* [any traits](http://specs.openstack.org/openstack/nova-specs/specs/stein/approve…
* [mixing required traits](http://specs.openstack.org/openstack/nova-specs/specs/stein/approve…
* The spec for [request group
mapping](https://review.openstack.org/597601) will need to be
revisited.
# Bugs
* StoryBoard stories in [the placement
group](https://storyboard.openstack.org/#!/project_group/placement): 4
* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 14.
* [In progress placement bugs](https://goo.gl/vzGGDQ) 7. +1.
We should do a bug squash day at some point. Should we wait until
after the PTG or no?
Note that the [contribution
guidelines](https://docs.openstack.org/placement/latest/contributor/contrib…
have some information on how to evaluate new stories and what tags to
add.
# osc-placement
osc-placement is currently behind by 13 microversions.
Pending changes:
* [support for 1.19](https://review.openstack.org/#/c/641094/)
* [support for 1.21](https://review.openstack.org/#/c/641123/)
* [aggregate allocation ratio
tool](https://review.openstack.org/#/c/640898/)
# Main Themes
Be thinking about what you'd like the main themes to be. Put them on
the [PTG
etherpad](https://etherpad.openstack.org/p/placement-ptg-train).
# Other Placement
* <https://review.openstack.org/#/q/topic:2005297-negative-aggregate-membership>
Negative member of aggregate filtering resource providers and
allocation candidates. This is nearly ready.
* <https://review.openstack.org/#/c/645255/>
This is a start at unit tests for the PlacementFixture. It is
proving a bit "fun" to get right, as there are many layers
involved. Making sure seemingly unrelated changes in placement
don't break the nova gate is important. Besides these unit tests,
there's discussion on the PTG etherpad of running the nova
functional tests, or a subset thereof, in placement's check run.
On the one hand this is a pain and messy, but on the other
consider what we're enabling: Functional tests that use the real
functionality of an external service (real data, real web
requests), not stubs or fakes.
* <https://review.openstack.org/641404>
Use ``code`` role in api-ref titles
# Other Service Users
There's a lot here, but it is certain this is not all of it. If I
missed something you care about, followup mentioning it.
* <https://review.openstack.org/552924>
Nova: Spec: Proposes NUMA topology with RPs
* <https://review.openstack.org/622893>
Nova: Spec: Virtual persistent memory libvirt driver
implementation
* <https://review.openstack.org/641899>
Nova: Check compute_node existence in when nova-compute reports
info to placement
* <https://review.openstack.org/601596>
Nova: spec: support virtual persistent memory
* <https://review.openstack.org/#/q/topic:bug/1790204>
Workaround doubling allocations on resize
* <https://review.openstack.org/555081>
Nova: Spec: Standardize CPU resource tracking
* <https://review.openstack.org/646029>
Nova: Spec: Use in_tree getting allocation candidates
* <https://review.openstack.org/645316>
Nova: Pre-filter hosts based on multiattach volume support
* <https://review.openstack.org/606199>
Ironic: A fresh way of looking at step retrieval
* <https://review.openstack.org/647396>
Nova: Add flavor to requested_resources in RequestSpec
* <https://review.openstack.org/633204>
Blazar: Retry on inventory update conflict
* <https://review.openstack.org/640080>
Nova: Use aggregate_add_host in nova-manage
* <https://review.openstack.org/#/q/topic:bp/count-quota-usage-from-placement>
Nova: count quota usage from placement
* <https://review.openstack.org/#/q/topic:bug/1819923>
Nova: nova-manage: heal port allocations
* <https://review.openstack.org/648500>
Tempest: Init placement client in tempest Manager object
* <https://review.openstack.org/624335>
puppet-tripleo: Initial extraction of the Placement service from Nova
* <https://review.openstack.org/#/q/topic:bug/1821824>
Nova: bug fix prevent forbidden traits from working as expected
* <https://review.openstack.org/648665>
Nova: Spec for a new nova virt driver to manage an RSD
* <https://review.openstack.org/#/q/topic:bug/1819460>
Nova: Handle placement error during re-schedule
* <https://review.openstack.org/#/c/642067/>
Helm: Allow more generic overrides for nova placement-api
* <https://review.openstack.org/647578>
Nova: add spec for image metadata prefiltering
--
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent tw: @anticdent
6 years, 7 months
[placement] update 19-20
by Chris Dent
HTML: https://anticdent.org/placement-update-19-20.html
Placement update 19-20. Lots of cleanups in progress, laying the
groundwork for the nested magic work (see themes below).
The poll to determine [what to do with the
weekly meeting](https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_9599a2647c319fd4&…
will close at the end of today. Thus far the leader is
office hours. Whatever the outcome, the meeting that would happen
this coming Monday is cancelled because many people will be having a
holiday.
# Most Important
The [spec for nested magic](https://review.opendev.org/658510) is
ready for more robust review. Since most of the work happening in
placement this cycle is described by that spec, getting it reviewed
well and quickly is important.
Generally speaking: review things. This is, and always will be, the
most important thing to do.
# What's Changed
* os-resource-classes 0.4.0 was released, promptly breaking the
placement gate (tests are broken not os-resource-classes).
[Fixes underway](https://review.opendev.org/661131).
* [Null root provider
protections](https://review.opendev.org/657716) have been removed
and a blocker migration and status check added. This removes a few
now redundant joins in the SQL queries which should help with our
ongoing efforts to speed up and simplify getting allocation
candidates.
* I had suggested an additional core group for os-traits and
os-resource-classes but after discussion with various people it
was decided it's easier/better to be aware of the right subject
matter experts and call them in to the reviews when required.
# Specs/Features
* <https://review.opendev.org/654799>
Support Consumer Types. This is very close with a few details to
work out on what we're willing and able to query on. It's a week
later and it still only has reviews from me so far.
* <https://review.opendev.org/658510>
Spec for Nested Magic. Un-wipped.
* <https://review.opendev.org/657582>
Resource provider - request group mapping in allocation candidate.
This spec was copied over from nova. It is a requirement of the
overall nested magic theme. While it has a well-defined and
refined design, there's currently no one on the hook implement
it.
These and other features being considered can be found on the
[feature
worklist](https://storyboard.openstack.org/#!/worklist/594).
Some non-placement specs are listed in the Other section below.
# Stories/Bugs
(Numbers in () are the change since the last pupdate.)
There are 20 (-3) stories in [the placement
group](https://storyboard.openstack.org/#!/project_group/placement).
0 are [untagged](https://storyboard.openstack.org/#!/worklist/580).
2 (-2) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 5 are
[cleanups](https://storyboard.openstack.org/#!/worklist/575). 11
(-1) are [rfes](https://storyboard.openstack.org/#!/worklist/594).
2 are [docs](https://storyboard.openstack.org/#!/worklist/637).
If you're interested in helping out with placement, those stories
are good places to look.
On launchpad:
* Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb)
on launchpad: 16 (0).
* Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on
launchpad: 7 (+1).
# osc-placement
osc-placement is currently behind by 11 microversions. No change
since the last report.
Pending changes:
* <https://review.openstack.org/#/c/640898/>
Add 'resource provider inventory update' command (that helps with
aggregate allocation ratios).
* <https://review.openstack.org/#/c/651783/>
Add support for 1.22 microversion
* <https://review.openstack.org/586056>
Provide a useful message in the case of 500-error
# Main Themes
## Nested Magic
At the PTG we decided that it was worth the effort, in both Nova and
Placement, to make the push to make better use of nested providers —
things like NUMA layouts, multiple devices, networks — while keeping
the "simple" case working well. The general ideas for this are
described in a [story](https://storyboard.openstack.org/#!/story/2005575)
and an evolving [spec](https://review.opendev.org/658510).
Some code has started, mostly to reveal issues:
* <https://review.opendev.org/657419>
Changing request group suffix to string
* <https://review.opendev.org/657510>
WIP: Allow RequestGroups without resources
* <https://review.opendev.org/657463>
Add NUMANetworkFixture for gabbits
* <https://review.opendev.org/658192>
Gabbi test cases for can_split
## Consumer Types
Adding a type to consumers will allow them to be grouped for various
purposes, including quota accounting. A
[spec](https://review.opendev.org/654799) has started. There are
some questions about request and response details that need to be
resolved, but the overall concept is sound.
## Cleanup
As we explore and extend nested functionality we'll need to do some
work to make sure that the code is maintainable and has suitable
performance. There's some work in progress for this that's important
enough to call out as a theme:
* <https://storyboard.openstack.org/#!/story/2005712>
Some work from Tetsuro exploring ways to remove redundancies in
the code. There's a [stack of good improvements](https://review.opendev.org/658778).
* <https://review.opendev.org/643269>
WIP: Optionally run a wsgi profiler when asked.
This was used to find some of the above issues. Should we make it
generally available or is it better as a thing to base off when
exploring?
* <https://review.opendev.org/660691>
Avoid traversing summaries in _check_traits_for_alloc_request
Ed Leafe has also been doing some intriguing work on using graph
databases with placement. It's not yet clear if or how it could be
integrated with mainline placement, but there are likely many things
to be learned from the experiment.
# Other Placement
Miscellaneous changes can be found in [the usual
place](https://review.opendev.org/#/q/project:openstack/placement+status:op….
There are several [os-traits
changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:…
being discussed.
# Other Service Users
New discoveries are added to the end. Merged stuff is removed.
Starting with the next pupdate I'll also be removing anything that
has had no reviews and no activity from the author in 4 weeks.
Otherwise these lists get too long and uselessly noisy.
* <https://review.openstack.org/552924>
Nova: Spec: Proposes NUMA topology with RPs
* <https://review.openstack.org/622893>
Nova: Spec: Virtual persistent memory libvirt driver
implementation
* <https://review.openstack.org/641899>
Nova: Check compute_node existence in when nova-compute reports
info to placement
* <https://review.openstack.org/601596>
Nova: spec: support virtual persistent memory
* <https://review.openstack.org/#/q/topic:bug/1790204>
Workaround doubling allocations on resize
* <https://review.openstack.org/645316>
Nova: Pre-filter hosts based on multiattach volume support
* <https://review.openstack.org/647396>
Nova: Add flavor to requested_resources in RequestSpec
* <https://review.openstack.org/633204>
Blazar: Retry on inventory update conflict
* <https://review.openstack.org/#/q/topic:bp/count-quota-usage-from-placement>
Nova: count quota usage from placement
* <https://review.openstack.org/#/q/topic:bug/1819923>
Nova: nova-manage: heal port allocations
* <https://review.openstack.org/648665>
Nova: Spec for a new nova virt driver to manage an RSD
* <https://review.openstack.org/625284>
Cyborg: Initial readme for nova pilot
* <https://review.openstack.org/629142>
Tempest: Add QoS policies and minimum bandwidth rule client
* <https://review.openstack.org/648687>
Nova-spec: Add PENDING vm state
* <https://review.openstack.org/650188>
nova-spec: Allow compute nodes to use DISK_GB from shared storage RP
* <https://review.openstack.org/651024>
nova-spec: RMD Plugin: Energy Efficiency using CPU Core P-State control
* <https://review.openstack.org/650963>
nova-spec: Proposes NUMA affinity for vGPUs. This describes a
legacy way of doing things because affinity in placement may be a
ways off. But it also [may not
be](https://review.openstack.org/650476).
* <https://review.openstack.org/#/q/topic:heal_allocations_dry_run>
Nova: heal allocations, --dry-run
* <https://review.opendev.org/656448>
Watcher spec: Add Placement helper
* <https://review.opendev.org/659233>
Cyborg: Placement report
* <https://review.opendev.org/657884>
Nova: Spec to pre-filter disabled computes with placement
* <https://review.opendev.org/657801>
rpm-packaging: placement service
* <https://review.opendev.org/657016>
Delete resource providers for all nodes when deleting compute service
* <https://review.opendev.org/654066>
nova fix for: Drop source node allocations if finish_resize fails
* <https://review.opendev.org/660924>
neutron: Add devstack plugin for placement service plugin
* <https://review.opendev.org/661179>
ansible: Add playbook to test placement
* <https://review.opendev.org/656885>
nova: WIP: Hey let's support routed networks y'all!
# End
As indicated above, I'm going to tune these pupdates to make sure
they are reporting only active links. This doesn't mean stalled out
stuff will be ignored, just that it won't come back on the lists
until someone does some work related to it.
--
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent
6 years, 5 months
Re: [all][ptg] Pre-PTG discussion: New Keystone Middleware Feature Supporting OAuth2.0 with External Authorization Service
by Jay Faulkner
So, looking over the Ironic PTG schedule, I appear to have booked Firmware
Upgrade interface in two places -- tomorrow and Wednesday 2200 UTC. This is
fortuitous: I can move the firmware upgrade conversation entirely into 2200
UTC, and give the time we had set aside to this topic.
Dave, Julia and I consulted on IRC, and decided to take this action. We'll
be adding an item to Ironic's PTG for tomorrow, Tuesday March 28 at 1500
UTC - 1525 UTC to discuss KeystoneMiddleware OAUTH support.
I will perform the following changes to the Ironic schedule to accommodate:
- Remove firmware upgrades from Ironic Tues 1630-1700 UTC, move all
discussion of it to Weds 2200 UTC - 2300 UTC (should be plenty of time).
- Move everything from Service Steps and later (after the first break)
forward 30 minutes
- Add new item for KeystoneMiddleware/OAUTH discussion into Ironic's
schedule at Wednesday, 1500 UTC - 1525 UTC (30 minutes with room for a
break)
Ironic will host the discussion in the Folsom room, and Dave will ensure
interested keystone contributors are redirected to our room for this period.
-
Jay Faulkner
Ironic PTL
On Mon, Mar 27, 2023 at 7:07 AM Dave Wilde <dwilde(a)redhat.com> wrote:
> Hi Julia,
>
> No worries!
>
> I see that several of our sessions are overlapping, perhaps we could
> combine the 15:00 UTC session tomorrow to discuss this topic?
>
> /Dave
> On Mar 27, 2023 at 8:44 AM -0500, Julia Kreger <
> juliaashleykreger(a)gmail.com>, wrote:
>
>
>
> On Fri, Mar 24, 2023 at 9:55 AM Dave Wilde <dwilde(a)redhat.com> wrote:
>
>> I’m happy to book an additional time slot(s) specifically for this
>> discussion if something other than what we currently have works better for
>> everyone. Please let me know.
>>
>> /Dave
>> On Mar 24, 2023 at 10:49 AM -0500, Hiromu Asahina <
>> hiromu.asahina.az(a)hco.ntt.co.jp>, wrote:
>>
>> As Keystone canceled Monday 14 UTC timeslot [1], I'd like to hold this
>> discussion on Monday 15 UTC timeslot. If it doesn't work for Ironic
>> members, please kindly reply convenient timeslots.
>>
>>
> Unfortunately, I took the last few days off and I'm only seeing this now.
> My morning is booked up aside from the original time slot which was
> discussed.
>
> Maybe there is a time later in the week which could work?
>
>
>
>>
>> [1] https://ptg.opendev.org/ptg.html
>>
>> Thanks,
>>
>> Hiromu Asahina
>>
>> On 2023/03/22 20:01, Hiromu Asahina wrote:
>>
>> Thanks!
>>
>> I look forward to your reply.
>>
>> On 2023/03/22 1:29, Julia Kreger wrote:
>>
>> No worries!
>>
>> I think that time works for me. I'm not sure it will work for
>> everyone, but
>> I can proxy information back to the whole of the ironic project as we
>> also
>> have the question of this functionality listed for our Operator Hour in
>> order to help ironic gauge interest.
>>
>> -Julia
>>
>> On Tue, Mar 21, 2023 at 9:00 AM Hiromu Asahina <
>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>
>> I apologize that I couldn't reply before the Ironic meeting on Monday.
>>
>> I need one slot to discuss this topic.
>>
>> I asked Keystone today and Monday's first Keystone slot (14 UTC Mon,
>> 27)[1,2] works for them. Does this work for Ironic? I understand not all
>> Ironic members will join this discussion, so I hope we can arrange a
>> convenient date for you two at least and, hopefully, for those
>> interested in this topic.
>>
>> [1]
>>
>>
>> https://www.timeanddate.com/worldclock/fixedtime.html?iso=2023-03-27T14:00:…
>> [2] https://ptg.opendev.org/ptg.html
>>
>> Thanks,
>> Hiromu Asahina
>>
>> On 2023/03/17 23:29, Julia Kreger wrote:
>>
>> I'm not sure how many Ironic contributors would be the ones to attend a
>> discussion, in part because this is disjointed from the items they need
>>
>> to
>>
>> focus on. It is much more of a "big picture" item for those of us
>> who are
>> leaders in the project.
>>
>> I think it would help to understand how much time you expect the
>>
>> discussion
>>
>> to take to determine a path forward and how we can collaborate. Ironic
>>
>> has
>>
>> a huge number of topics we want to discuss during the PTG, and I
>> suspect
>> our team meeting on Monday next week should yield more
>> interest/awareness
>> as well as an amount of time for each topic which will aid us in
>>
>> scheduling.
>>
>>
>> If you can let us know how long, then I think we can figure out when
>> the
>> best day/time will be.
>>
>> Thanks!
>>
>> -Julia
>>
>>
>>
>>
>>
>> On Fri, Mar 17, 2023 at 2:57 AM Hiromu Asahina <
>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>
>> Thank you for your reply.
>>
>> I'd like to decide the time slot for this topic.
>> I just checked PTG schedule [1].
>>
>> We have the following time slots. Which one is convenient to get together?
>> (I didn't get reply but I listed Barbican, as its cores are almost the
>> same as Keystone)
>>
>> Mon, 27:
>>
>> - 14 (keystone)
>> - 15 (keystone)
>>
>> Tue, 28
>>
>> - 13 (barbican)
>> - 14 (keystone, ironic)
>> - 15 (keystone, ironic)
>> - 16 (ironic)
>>
>> Wed, 29
>>
>> - 13 (ironic)
>> - 14 (keystone, ironic)
>> - 15 (keystone, ironic)
>> - 21 (ironic)
>>
>> Thanks,
>>
>> [1] https://ptg.opendev.org/ptg.html
>>
>> Hiromu Asahina
>>
>>
>> On 2023/02/11 1:41, Jay Faulkner wrote:
>>
>> I think it's safe to say the Ironic community would be very
>> invested in
>> such an effort. Let's make sure the time chosen for vPTG with this is
>>
>> such
>>
>> that Ironic contributors can attend as well.
>>
>> Thanks,
>> Jay Faulkner
>> Ironic PTL
>>
>> On Fri, Feb 10, 2023 at 7:40 AM Hiromu Asahina <
>> hiromu.asahina.az(a)hco.ntt.co.jp> wrote:
>>
>> Hello Everyone,
>>
>> Recently, Tacker and Keystone have been working together on a new
>>
>> Keystone
>>
>> Middleware that can work with external authentication
>> services, such as Keycloak. The code has already been submitted [1],
>>
>> but
>>
>> we want to make this middleware a generic plugin that works
>> with as many OpenStack services as possible. To that end, we would
>>
>> like
>>
>> to
>>
>> hear from other projects with similar use cases
>> (especially Ironic and Barbican, which run as standalone
>> services). We
>> will make a time slot to discuss this topic at the next vPTG.
>> Please contact me if you are interested and available to
>> participate.
>>
>> [1]
>>
>> https://review.opendev.org/c/openstack/keystonemiddleware/+/868734
>>
>>
>> --
>> Hiromu Asahina
>>
>>
>>
>>
>>
>>
>> --
>> ◇-------------------------------------◇
>> NTT Network Innovation Center
>> Hiromu Asahina
>> -------------------------------------
>> 3-9-11, Midori-cho, Musashino-shi
>> Tokyo 180-8585, Japan
>> Phone: +81-422-59-7008
>> Email: hiromu.asahina.az(a)hco.ntt.co.jp
>> ◇-------------------------------------◇
>>
>>
>>
>>
>> --
>> ◇-------------------------------------◇
>> NTT Network Innovation Center
>> Hiromu Asahina
>> -------------------------------------
>> 3-9-11, Midori-cho, Musashino-shi
>> Tokyo 180-8585, Japan
>> Phone: +81-422-59-7008
>> Email: hiromu.asahina.az(a)hco.ntt.co.jp
>> ◇-------------------------------------◇
>>
>>
>>
>>
>>
>> --
>> ◇-------------------------------------◇
>> NTT Network Innovation Center
>> Hiromu Asahina
>> -------------------------------------
>> 3-9-11, Midori-cho, Musashino-shi
>> Tokyo 180-8585, Japan
>> Phone: +81-422-59-7008
>> Email: hiromu.asahina.az(a)hco.ntt.co.jp
>> ◇-------------------------------------◇
>>
>>
>>
2 years, 7 months