From abishop at redhat.com Thu Jun 1 04:21:37 2023
From: abishop at redhat.com (Alan Bishop)
Date: Wed, 31 May 2023 21:21:37 -0700
Subject: Add netapp storage in edge site | Wallaby | DCN
In-Reply-To: References: Message-ID:

On Wed, May 31, 2023 at 10:44 AM Swogat Pradhan wrote:

> Hi Alan,
> Can you please check if the environment file attached is in the correct format and how it should be?
> Also, should I remove the old netapp environment file and use the new custom environment file created (attached)?

You could retain the original environment file, and merge in just a portion of the sample file that you attached.

The original file contains all the settings, including the resource_registry entry, to deploy the first netapp in the controlplane. You want to enhance this with just a couple of items in order to deploy the second netapp at the edge site.

The important thing to know is that the CinderNetappMultiConfig entries are necessary only for the second netapp. The 'tripleo_netapp' entry could be removed, because it's basically taken care of by the original env file. You only need to include the 'tripleo_netapp_dcn02' settings. Furthermore, you don't need to list every setting; you only need to list the ones that differ from the settings in the original env file. Basically, the settings in the original env file (including the resource_registry) define the default values for all netapp backends, and the entries in CinderNetappMultiConfig define the overrides for specific backends. For example, if the original env file sets "CinderNetappStorageProtocol: iscsi" then that will be the default value for all netapp backends, unless you override it in the CinderNetappMultiConfig section. (A sketch of what such a merged file could look like appears below.)

Lastly, your CinderNetappBackendName setting looks correct (it specifies both the 'tripleo_netapp' and 'tripleo_netapp_dcn02' backends).

Alan

> With regards,
> Swogat Pradhan
>
> On Wed, May 31, 2023 at 10:46 PM Swogat Pradhan wrote:
>
>> Hi Alan,
>> Thanks for your clarification; the way you suggested will solve my issue.
>> But I already have a netapp backend in my central site, and to add another backend should I follow this documentation:
>> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.0/html/custom_block_storage_back_end_deployment_guide/ref_configuration-sample-environment-file_custom-cinder-back-end
>>
>> And should I remove the old netapp environment file and use the new custom environment file created using the above-mentioned guide??
>>
>> I already have a prod workload in the currently deployed netapp and I do not want to cause any issues in that netapp storage.
>>
>> With regards,
>> Swogat Pradhan
>>
>> On Tue, May 30, 2023 at 8:43 PM Alan Bishop wrote:
>>>
>>> On Thu, May 25, 2023 at 9:39 PM Swogat Pradhan <swogatpradhan22 at gmail.com> wrote:
>>>
>>>> Hi Alan,
>>>> My netapp storage is located in the edge site itself.
>>>> As the networks are routable, my central site is able to reach the netapp storage IP address (ping response is 30ms-40ms).
>>>> Let's say I included the netapp storage yaml in the central site deployment script (which is not recommended) and I am able to create the volumes, as it is reachable from the controller nodes.
>>>> Will I be able to mount those volumes in edge site VMs?? And if I am able to do so, then how will the data flow?? When storing something in the netapp volume, will the data flow through the central site controller and get stored in the storage space?
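To make the override semantics described above concrete, a minimal merged environment file could look roughly like the following. This is a sketch only: the hostname and availability zone values are illustrative placeholders rather than values from the deployment being discussed, and the exact CinderNetapp* parameter names should be checked against your tripleo-heat-templates version.

```yaml
# The original environment file keeps its resource_registry entry and its
# default CinderNetapp* settings (e.g. CinderNetappStorageProtocol: iscsi),
# which act as defaults for every netapp backend.
parameter_defaults:
  CinderNetappBackendName: 'tripleo_netapp,tripleo_netapp_dcn02'
  CinderNetappMultiConfig:
    # Only the second backend needs an entry here, and only for the
    # settings that differ from the defaults in the original env file.
    tripleo_netapp_dcn02:
      CinderNetappServerHostname: netapp-dcn02.example.com  # placeholder
      CinderNetappAvailabilityZone: dcn02                   # placeholder edge AZ
```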
>>>
>>> A cinder-volume service running in the central site's controlplane will be able to work with a netapp backend that's physically located at an edge site. The good news is the c-vol service will be HA because it will be controlled by pacemaker running on the controllers.
>>>
>>> In order for VMs at the edge site to access volumes on the netapp, you'll need to set the CinderNetappAvailabilityZone [1] to the edge site's AZ.
>>>
>>> [1] https://opendev.org/openstack/tripleo-heat-templates/src/branch/stable/wallaby/deployment/cinder/cinder-backend-netapp-puppet.yaml#L43
>>>
>>> To attach a netapp volume, nova-compute at the edge will interact with cinder-volume in the controlplane, and cinder-volume will in turn interact with the netapp. This will happen over central <=> edge network connections. Eventually, nova will directly connect to the netapp, so all traffic from the VM to the netapp will occur within the edge site. Data will not flow through the cinder-volume service, but there are restrictions and limitations:
>>> - Only that one edge site can access the netapp backend
>>> - If the central <=> edge network connection goes down, then you won't be able to attach or detach a netapp volume (but active connections will continue to work)
>>>
>>> Of course, there are operations where cinder services are in the data path (e.g. creating a volume from an image), but not when a VM is accessing a volume.
>>>
>>> Alan
>>>
>>>> With regards,
>>>> Swogat Pradhan
>>>>
>>>> On Fri, 26 May 2023, 10:03 am, Alan Bishop wrote:
>>>>>
>>>>> On Thu, May 25, 2023 at 12:09 AM Swogat Pradhan <swogatpradhan22 at gmail.com> wrote:
>>>>>
>>>>>> Hi Alan,
>>>>>> So, can I include the cinder-netapp-storage.yaml file in the central site and then use the new backend to add storage to edge VMs?
>>>>>
>>>>> Where is the NetApp physically located? Tripleo's DCN architecture assumes the storage is physically located at the same site where the cinder-volume service will be deployed. If you include the cinder-netapp-storage.yaml environment file in the central site's controlplane, then VMs at the edge site will encounter the problems I outlined earlier (network latency, no ability to do cross-AZ attachments).
>>>>>
>>>>>> I believe it is not possible, right?? As the cinder volume in the edge won't have the config for the netapp.
>>>>>
>>>>> The cinder-volume services at an edge site are meant to manage storage devices at that site. If the NetApp is at the edge site, ideally you'd include some variation of the cinder-netapp-storage.yaml environment file in the edge site's deployment. However, then you're faced with the fact that the NetApp driver doesn't support A/A, which is required for c-vol services running at edge sites. (In case you're not familiar with these details, tripleo runs all cinder-volume services in active/passive mode under pacemaker on controllers in the controlplane. Thus, only a single instance runs at any time, and pacemaker provides HA by moving the service to another controller if the first one goes down. However, pacemaker is not available at edge sites, and so to get HA, multiple instances of the cinder-volume service run simultaneously on 3 nodes (A/A), using etcd as a Distributed Lock Manager (DLM) to coordinate things.
But drivers must specifically support running A/A, and the NetApp driver does NOT.)
>>>>>
>>>>> Alan
>>>>>
>>>>>> With regards,
>>>>>> Swogat Pradhan
>>>>>>
>>>>>> On Thu, May 25, 2023 at 2:17 AM Alan Bishop wrote:
>>>>>>>
>>>>>>> On Wed, May 24, 2023 at 3:15 AM Swogat Pradhan <swogatpradhan22 at gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>> I have a DCN setup and there is a requirement to use a netapp storage device in one of the edge sites.
>>>>>>>> Can someone please confirm if it is possible?
>>>>>>>
>>>>>>> I see from prior email to this list that you're using tripleo, so I'll respond with that in mind.
>>>>>>>
>>>>>>> There are many factors that come into play, but I suspect the short answer to your question is no.
>>>>>>>
>>>>>>> Tripleo's DCN architecture requires the cinder-volume service running at edge sites to run in active-active mode, where there are separate instances running on three nodes in order for the service to be highly available (HA). The problem is that only a small number of cinder drivers support running A/A, and NetApp's drivers do not support A/A.
>>>>>>>
>>>>>>> It's conceivable you could create a custom tripleo role that deploys just a single node running cinder-volume with a NetApp backend, but it wouldn't be HA.
>>>>>>>
>>>>>>> It's also conceivable you could locate the NetApp system in the central site's controlplane, but there are extremely difficult constraints you'd need to overcome:
>>>>>>> - Network latency between the central and edge sites would mean the disk performance would be bad.
>>>>>>> - You'd be limited to using iSCSI (FC wouldn't work)
>>>>>>> - Tripleo disables cross-AZ attachments, so the only way for an edge site to access a NetApp volume would be to configure the cinder-volume service running in the controlplane with a backend availability zone set to the edge site's AZ. You mentioned the NetApp is needed "in one of the edge sites," but in reality the NetApp would be available in one, AND ONLY ONE edge site, and it would also not be available to any instances running in the central site.
>>>>>>>
>>>>>>> Alan
>>>>>>>
>>>>>>>> And if so, then should I add the parameters in the edge deployment script or the central deployment script?
>>>>>>>> Any suggestions?
>>>>>>>>
>>>>>>>> With regards,
>>>>>>>> Swogat Pradhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From manchandavishal143 at gmail.com Thu Jun 1 09:08:51 2023
From: manchandavishal143 at gmail.com (vishal manchanda)
Date: Thu, 1 Jun 2023 14:38:51 +0530
Subject: [horizon][stable] [release] Proposal to make stable/stein as EOL
Message-ID:

Hi all,

As discussed in yesterday's Horizon weekly meeting[1] on 2023-05-31, I want to announce that the Horizon team decided to move the stable/stein branch to EOL. Consider this mail as an official announcement of that. The gate for horizon stable/stein has been failing for the last few months [2]. I would like to know if anyone still wants to keep this branch open; otherwise, I will propose an EOL patch for this branch.
Thanks & Regards,
Vishal Manchanda

[1] https://meetings.opendev.org/irclogs/%23openstack-horizon/%23openstack-horizon.2023-05-31.log.html#t2023-05-31T15:35:22
[2] https://review.opendev.org/q/project:openstack%252Fhorizon+branch:stable%252Fstein+status:open
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zigo at debian.org Thu Jun 1 10:57:03 2023
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 1 Jun 2023 12:57:03 +0200
Subject: CRITICAL! RabbitMQ PackageCloud repos will be not more available from today - affected Openstack-ansible
In-Reply-To: References: <83dd077c3248b87e7bee15ebd4b88477@sunray.sk>
Message-ID:

On 5/28/23 02:27, Dmitriy Rabotyagov wrote:
> That is just ridiculous... We have just switched from cloudsmith because
> it's rotating packages too aggressively...

IMO, what's ridiculous is to insist on using upstream broken-by-design repositories for each and every component (this event illustrates it well...), just because you want the latest upstream release of everything for no valid reason (as if what was released 2 weeks ago is not relevant anymore...).

On the specific case of RabbitMQ, this means using the upstream repo version of erlang, meaning that everything else that is packaged in the distro that uses erlang is broken.

If there was any valid reason to do a backport of a component in stable-backports (for Debian), or even if it was a personal preference of the OSA team, I would have happily done the work. Though never ever did the OSA / Kolla team get in touch with me for this kind of thing.

Another issue is that, if you want to do an off-line installation (i.e. without internet connectivity on your OpenStack servers), it becomes really horrible to set up all the mirrors.

This broken policy is one major blocker for me to even use OSA, and one good reason that makes me recommend against using it.

Is there a chance that we see the team changing this policy / way of installing things? Or am I mistaken that this is a mandatory thing in OSA maybe? If so, then I probably shouldn't have written the above, so please let me know.

Cheers,

Thomas Goirand (zigo)

From noonedeadpunk at gmail.com Thu Jun 1 11:19:19 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Thu, 1 Jun 2023 13:19:19 +0200
Subject: CRITICAL! RabbitMQ PackageCloud repos will be not more available from today - affected Openstack-ansible
In-Reply-To: References: <83dd077c3248b87e7bee15ebd4b88477@sunray.sk>
Message-ID:

You can set `rabbitmq_install_method: distro` in user_variables and rabbitmq will get installed from distro-provided repositories rather than external ones:
https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/master/releasenotes/notes/rabbitmq-using-external-repo-instead-of-pkg-file-8cdd00f58d3496ba.yaml
But yes, the default behaviour is to use external repos.

You're slightly wrong about the reasons behind why this behaviour is the default, though. It's not about having "latest" versions, it's about having consistent/same versions across all distributions. First of all, the related bugs and security vulnerabilities are then the same for all distros, so it's kinda easier to keep track of that. But then the most important part is cross-distro installations.
So let's assume an individual is running Ubuntu (or CentOS, doesn't matter), and they want to migrate to Debian. Having different rabbitmq versions installed by these distributions will totally be a blocker for such a resilient migration.
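For reference, the override mentioned at the top of this message is a single entry in user_variables. A minimal sketch, assuming a standard openstack-ansible layout where overrides live in /etc/openstack_deploy/user_variables.yml:

```yaml
# /etc/openstack_deploy/user_variables.yml
# Install RabbitMQ from the distribution's own repositories instead of the
# external upstream repos (the default), trading cross-distro version
# consistency for distro-native packaging.
rabbitmq_install_method: distro
```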
At the same time, when exactly the same versions of rabbit/erlang/galera are installed, they can just re-setup control planes one by one on another distro without any pain and the cluster will remain functional.

On Thu, 1 Jun 2023 at 13:00, Thomas Goirand wrote:
>
> On 5/28/23 02:27, Dmitriy Rabotyagov wrote:
> > That is just ridiculous... We have just switched from cloudsmith because
> > it's rotating packages too aggressively...
>
> IMO, what's ridiculous is to insist on using upstream broken-by-design repositories for each and every component (this event illustrates it well...), just because you want the latest upstream release of everything for no valid reason (as if what was released 2 weeks ago is not relevant anymore...).
>
> On the specific case of RabbitMQ, this means using the upstream repo version of erlang, meaning that everything else that is packaged in the distro that uses erlang is broken.
>
> If there was any valid reason to do a backport of a component in stable-backports (for Debian), or even if it was a personal preference of the OSA team, I would have happily done the work. Though never ever did the OSA / Kolla team get in touch with me for this kind of thing.
>
> Another issue is that, if you want to do an off-line installation (i.e. without internet connectivity on your OpenStack servers), it becomes really horrible to set up all the mirrors.
>
> This broken policy is one major blocker for me to even use OSA, and one good reason that makes me recommend against using it.
>
> Is there a chance that we see the team changing this policy / way of installing things? Or am I mistaken that this is a mandatory thing in OSA maybe? If so, then I probably shouldn't have written the above, so please let me know.
>
> Cheers,
>
> Thomas Goirand (zigo)

From ralonsoh at redhat.com Thu Jun 1 11:21:10 2023
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Thu, 1 Jun 2023 13:21:10 +0200
Subject: [neutron] New Yoga, Zed and 2023.1 releases
Message-ID:

Hello Neutrinos:

Please check the new release proposals for Yoga, Zed and 2023.1:
* https://review.opendev.org/c/openstack/releases/+/885030
* https://review.opendev.org/c/openstack/releases/+/885031
* https://review.opendev.org/c/openstack/releases/+/885033

If you find something wrong, please comment on the patch.

Regards.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zigo at debian.org Thu Jun 1 11:42:46 2023
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 1 Jun 2023 13:42:46 +0200
Subject: CRITICAL! RabbitMQ PackageCloud repos will be not more available from today - affected Openstack-ansible
In-Reply-To: References: <83dd077c3248b87e7bee15ebd4b88477@sunray.sk>
Message-ID:

On 6/1/23 13:19, Dmitriy Rabotyagov wrote:
> You can set `rabbitmq_install_method: distro` in user_variables and rabbitmq will get installed from distro-provided repositories rather than external ones:
> https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/master/releasenotes/notes/rabbitmq-using-external-repo-instead-of-pkg-file-8cdd00f58d3496ba.yaml
> But yes, the default behaviour is to use external repos.
>
> You're slightly wrong about the reasons behind why this behaviour is the default, though. It's not about having "latest" versions, it's about having consistent/same versions across all distributions. First of all, the related bugs and security vulnerabilities are then the same for all distros, so it's kinda easier to keep track of that.
> But then the most important part is cross-distro installations.
> So let's assume an individual is running Ubuntu (or CentOS, doesn't matter), and they want to migrate to Debian. Having different rabbitmq versions installed by these distributions will totally be a blocker for such a resilient migration. At the same time, when exactly the same versions of rabbit/erlang/galera are installed, they can just re-setup control planes one by one on another distro without any pain and the cluster will remain functional.

Hi Dmitriy,

Thanks for the explanation. It makes more sense now (even though I don't think there are many people willing to switch distro in an existing deployment).

Cheers,

Thomas Goirand (zigo)

From mnasiadka at gmail.com Thu Jun 1 11:49:49 2023
From: mnasiadka at gmail.com (Michał Nasiadka)
Date: Thu, 1 Jun 2023 13:49:49 +0200
Subject: CRITICAL! RabbitMQ PackageCloud repos will be not more available from today - affected Openstack-ansible
In-Reply-To: References: <83dd077c3248b87e7bee15ebd4b88477@sunray.sk>
Message-ID:

That's the same reason for Kolla - we support multiple distros - so we try to have the same version across distributions to minimise differences - especially in the RabbitMQ and MariaDB department.

As a user you can still override the Dockerfile templates and use the distro-provided packages - but for the images published through OpenDev CI jobs, we're publishing with packages from the upstream RMQ/MariaDB repos.

Best regards,

Michal

> On 1 Jun 2023, at 13:19, Dmitriy Rabotyagov wrote:
>
> You can set `rabbitmq_install_method: distro` in user_variables and rabbitmq will get installed from distro-provided repositories rather than external ones:
> https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/master/releasenotes/notes/rabbitmq-using-external-repo-instead-of-pkg-file-8cdd00f58d3496ba.yaml
> But yes, the default behaviour is to use external repos.
>
> You're slightly wrong about the reasons behind why this behaviour is the default, though. It's not about having "latest" versions, it's about having consistent/same versions across all distributions. First of all, the related bugs and security vulnerabilities are then the same for all distros, so it's kinda easier to keep track of that. But then the most important part is cross-distro installations.
> So let's assume an individual is running Ubuntu (or CentOS, doesn't matter), and they want to migrate to Debian. Having different rabbitmq versions installed by these distributions will totally be a blocker for such a resilient migration. At the same time, when exactly the same versions of rabbit/erlang/galera are installed, they can just re-setup control planes one by one on another distro without any pain and the cluster will remain functional.
>
> On Thu, 1 Jun 2023 at 13:00, Thomas Goirand wrote:
>>
>> On 5/28/23 02:27, Dmitriy Rabotyagov wrote:
>>> That is just ridiculous... We have just switched from cloudsmith because it's rotating packages too aggressively...
>>
>> IMO, what's ridiculous is to insist on using upstream broken-by-design repositories for each and every component (this event illustrates it well...), just because you want the latest upstream release of everything for no valid reason (as if what was released 2 weeks ago is not relevant anymore...).
>>
>> On the specific case of RabbitMQ, this means using the upstream repo version of erlang, meaning that everything else that is packaged in the distro that uses erlang is broken.
>>
>> If there was any valid reason to do a backport of a component in stable-backports (for Debian), or even if it was a personal preference of the OSA team, I would have happily done the work. Though never ever did the OSA / Kolla team get in touch with me for this kind of thing.
>>
>> Another issue is that, if you want to do an off-line installation (i.e. without internet connectivity on your OpenStack servers), it becomes really horrible to set up all the mirrors.
>>
>> This broken policy is one major blocker for me to even use OSA, and one good reason that makes me recommend against using it.
>>
>> Is there a chance that we see the team changing this policy / way of installing things? Or am I mistaken that this is a mandatory thing in OSA maybe? If so, then I probably shouldn't have written the above, so please let me know.
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)

From roberto.acosta at luizalabs.com Thu Jun 1 11:58:04 2023
From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta)
Date: Thu, 1 Jun 2023 08:58:04 -0300
Subject: Community attendance - OpenInfra Summit Vancouver
Message-ID:

Hello,

Will anyone from the Brazilian community attend the OpenInfra Summit in Vancouver?

I would like to meet other members from Brazil and discuss the challenges and possibilities of using OpenStack in Brazilian infrastructures. You can ping me on IRC too (racosta).

Kind regards,
Roberto

-- 
"This message is directed only to the addresses listed in the initial header. If you are not listed among those addresses, please disregard the content of this message entirely; copying, forwarding, and/or carrying out the actions it mentions are immediately voided and prohibited."
"Although Magazine Luiza takes all reasonable precautions to ensure that no virus is present in this e-mail, the company cannot accept responsibility for any loss or damage caused by this e-mail or its attachments."
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From noonedeadpunk at gmail.com Thu Jun 1 12:01:44 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Thu, 1 Jun 2023 14:01:44 +0200
Subject: CRITICAL! RabbitMQ PackageCloud repos will be not more available from today - affected Openstack-ansible
In-Reply-To: References: <83dd077c3248b87e7bee15ebd4b88477@sunray.sk>
Message-ID:

> even though I don't think there are many people willing to switch distro in an existing deployment

Oh, there were quite some migrations from CentOS once they announced their Stream approach. And who knows what the future holds for us...

This is also applicable, btw, to OS upgrades. Again, if we're talking about CentOS or Rocky, where you must re-install the OS to majorly upgrade it, having versions synced makes it less painful to upgrade through re-install.

On Thu, 1 Jun 2023 at 13:50, Michał Nasiadka wrote:
>
> That's the same reason for Kolla - we support multiple distros - so we try to have the same version across distributions to minimise differences - especially in the RabbitMQ and MariaDB department.
>
> As a user you can still override the Dockerfile templates and use the distro-provided packages - but for the images published through OpenDev CI jobs, we're publishing with packages from the upstream RMQ/MariaDB repos.
> Best regards,
>
> Michal
>
>> On 1 Jun 2023, at 13:19, Dmitriy Rabotyagov wrote:
>>
>> You can set `rabbitmq_install_method: distro` in user_variables and rabbitmq will get installed from distro-provided repositories rather than external ones:
>> https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/master/releasenotes/notes/rabbitmq-using-external-repo-instead-of-pkg-file-8cdd00f58d3496ba.yaml
>> But yes, the default behaviour is to use external repos.
>>
>> You're slightly wrong about the reasons behind why this behaviour is the default, though. It's not about having "latest" versions, it's about having consistent/same versions across all distributions. First of all, the related bugs and security vulnerabilities are then the same for all distros, so it's kinda easier to keep track of that. But then the most important part is cross-distro installations.
>> So let's assume an individual is running Ubuntu (or CentOS, doesn't matter), and they want to migrate to Debian. Having different rabbitmq versions installed by these distributions will totally be a blocker for such a resilient migration. At the same time, when exactly the same versions of rabbit/erlang/galera are installed, they can just re-setup control planes one by one on another distro without any pain and the cluster will remain functional.
>>
>> On Thu, 1 Jun 2023 at 13:00, Thomas Goirand wrote:
>>>
>>> On 5/28/23 02:27, Dmitriy Rabotyagov wrote:
>>>> That is just ridiculous... We have just switched from cloudsmith because it's rotating packages too aggressively...
>>>
>>> IMO, what's ridiculous is to insist on using upstream broken-by-design repositories for each and every component (this event illustrates it well...), just because you want the latest upstream release of everything for no valid reason (as if what was released 2 weeks ago is not relevant anymore...).
>>>
>>> On the specific case of RabbitMQ, this means using the upstream repo version of erlang, meaning that everything else that is packaged in the distro that uses erlang is broken.
>>>
>>> If there was any valid reason to do a backport of a component in stable-backports (for Debian), or even if it was a personal preference of the OSA team, I would have happily done the work. Though never ever did the OSA / Kolla team get in touch with me for this kind of thing.
>>>
>>> Another issue is that, if you want to do an off-line installation (i.e. without internet connectivity on your OpenStack servers), it becomes really horrible to set up all the mirrors.
>>>
>>> This broken policy is one major blocker for me to even use OSA, and one good reason that makes me recommend against using it.
>>>
>>> Is there a chance that we see the team changing this policy / way of installing things? Or am I mistaken that this is a mandatory thing in OSA maybe? If so, then I probably shouldn't have written the above, so please let me know.
>>>
>>> Cheers,
>>>
>>> Thomas Goirand (zigo)

From alex at raksmart.com Thu Jun 1 06:39:58 2023
From: alex at raksmart.com (Alex)
Date: Thu, 1 Jun 2023 06:39:58 +0000
Subject: Question About BGP Dynamic Routing, Floating IP, and SNAT
Message-ID:

Hi Everyone,

Hope you are all doing well. I'm a beginner with Openstack and Neutron and have now run into an issue with SNAT and a shared floating IP.
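For concreteness, the setup described below maps to OpenStack client commands roughly like the following. This is a sketch: the network name and port UUID are placeholders, and the exact option spellings should be verified against your client version.

```console
# Floating IP reserved for port forwarding (not attached to any one VM);
# 'provider-net' is a placeholder network name.
$ openstack floating ip create --floating-ip-address 123.0.0.20 provider-net

# Forward external 123.0.0.20:64000 to internal 10.10.10.10:5555
$ openstack floating ip port forwarding create \
    --internal-ip-address 10.10.10.10 \
    --internal-protocol-port 5555 \
    --external-protocol-port 64000 \
    --protocol tcp \
    --port <internal-port-uuid> \
    123.0.0.20
```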
I've already deployed a neutron network which uses BGP to announce floating IPs to the PE (Provider Edge router), and everything works as expected when I assign a public floating IP (e.g., 123.0.0.10/24) to VMs. But when I tried to use the floating IP port-forwarding function with floating IP 123.0.0.20/24 and the rule (internal_ip 10.10.10.10, internal_port 5555, external_port 64000), and assigned a private IP (10.10.10.10/24) to a VM, the floating IP 123.0.0.20 was not advertised through BGP.

May I have some suggestions about how I could get this fixed, or does neutron just not work this way?

FYI,
1. Per my understanding, the port_forwardings rule will make the port act in a SNAT role and forward any packets that reach it with destination 123.0.0.20:64000 to the private IP 10.10.10.10/24.
2. The IP address can be reached within the neutron network.
3. The PE IP address, CE IP address, and floating IP gateway use the same subnet A and subnet pool (192.168.123.0/24), while the floating IP belongs to subnet B and subnet pool (123.0.0.0/24); both subnets belong to the provider network.
4. Only a floating IP that is assigned to a specific VM will be advertised to the PE through BGP.
5. A floating IP that is assigned to a port of a router in the neutron network won't be advertised, even though the IP is activated and is reachable internally.

Sincerely,
Alex
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marc.gariepy at calculquebec.ca Thu Jun 1 12:16:19 2023
From: marc.gariepy at calculquebec.ca (Marc Gariepy)
Date: Thu, 1 Jun 2023 08:16:19 -0400
Subject: CRITICAL! RabbitMQ PackageCloud repos will be not more available from today - affected Openstack-ansible
In-Reply-To: References: <83dd077c3248b87e7bee15ebd4b88477@sunray.sk>
Message-ID:

On 6/1/23 07:42, Thomas Goirand wrote:
> On 6/1/23 13:19, Dmitriy Rabotyagov wrote:
>> You can set `rabbitmq_install_method: distro` in user_variables and rabbitmq will get installed from distro-provided repositories rather than external ones:
>> https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/master/releasenotes/notes/rabbitmq-using-external-repo-instead-of-pkg-file-8cdd00f58d3496ba.yaml
>> But yes, the default behaviour is to use external repos.
>>
>> You're slightly wrong about the reasons behind why this behaviour is the default, though. It's not about having "latest" versions, it's about having consistent/same versions across all distributions. First of all, the related bugs and security vulnerabilities are then the same for all distros, so it's kinda easier to keep track of that. But then the most important part is cross-distro installations.
>> So let's assume an individual is running Ubuntu (or CentOS, doesn't matter), and they want to migrate to Debian. Having different rabbitmq versions installed by these distributions will totally be a blocker for such a resilient migration. At the same time, when exactly the same versions of rabbit/erlang/galera are installed, they can just re-setup control planes one by one on another distro without any pain and the cluster will remain functional.
>
> Hi Dmitriy,
>
> Thanks for the explanation. It makes more sense now (even though I don't think there are many people willing to switch distro in an existing deployment).
>
> Cheers,
>
> Thomas Goirand (zigo)

Hello Thomas,

Yes, you are bound to do it every few years when you need to upgrade the OS on your existing installation.
Thanks,
Marc

From iurygregory at gmail.com Thu Jun 1 12:43:45 2023
From: iurygregory at gmail.com (Iury Gregory)
Date: Thu, 1 Jun 2023 09:43:45 -0300
Subject: Community attendance - OpenInfra Summit Vancouver
In-Reply-To: References: Message-ID:

Hi Roberto,

I know some Brazilians who will be attending the OIS Vancouver, including me.

On Thu, Jun 1, 2023 at 9:00 AM, Roberto Bartzen Acosta <roberto.acosta at luizalabs.com> wrote:

> Hello,
>
> Will anyone from the Brazilian community attend the OpenInfra Summit in Vancouver?
>
> I would like to meet other members from Brazil and discuss the challenges and possibilities of using OpenStack in Brazilian infrastructures. You can ping me on IRC too (racosta).
>
> Kind regards,
> Roberto
>
> "This message is directed only to the addresses listed in the initial header. If you are not listed among those addresses, please disregard the content of this message entirely; copying, forwarding, and/or carrying out the actions it mentions are immediately voided and prohibited."
>
> "Although Magazine Luiza takes all reasonable precautions to ensure that no virus is present in this e-mail, the company cannot accept responsibility for any loss or damage caused by this e-mail or its attachments."

-- 
Att[]'s
Iury Gregory Melo Ferreira
MSc in Computer Science at UFCG
Ironic PTL
Senior Software Engineer at Red Hat Brazil
Social: https://www.linkedin.com/in/iurygregory
E-mail: iurygregory at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mister.mackarow at yandex.ru Thu Jun 1 13:44:01 2023
From: mister.mackarow at yandex.ru (МАКАРОВ МАКС)
Date: Thu, 01 Jun 2023 16:44:01 +0300
Subject: problem with zun not available console in horizon
Message-ID: <3561111685625661@mail.yandex.ru>

An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: CCWzudwuzC.png
Type: image/png
Size: 69327 bytes
Desc: not available
URL: 

From samaryazdani at yahoo.com Thu Jun 1 13:49:03 2023
From: samaryazdani at yahoo.com (syed)
Date: Thu, 1 Jun 2023 14:49:03 +0100
Subject: openstack/devstack issue
References: Message-ID:

Hi,

I was building a system, i.e. an orchestrator, that includes resources like an Ubuntu virtual machine, a Firecracker micro virtual machine, a Unikraft unikernel, and storage. It seems that openstack/devstack is capable of orchestrating such a heterogeneous system and could be customized according to specific needs. I, however, am experiencing a lot of issues when including the resources, e.g. when I try to create an Ubuntu image and launch it, it results in errors that are quite complex to resolve, and so I intend to reach out to the experts to resolve these issues.

Regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: screenshot.jpg
Type: image/jpeg
Size: 132330 bytes
Desc: not available
URL: 

From pierre at stackhpc.com Thu Jun 1 14:23:11 2023
From: pierre at stackhpc.com (Pierre Riteau)
Date: Thu, 1 Jun 2023 16:23:11 +0200
Subject: [blazar] Meeting cancelled today
Message-ID:

Hello,

I am unable to run the IRC meeting today, let's cancel.
Best wishes,
Pierre Riteau (priteau)

From amy at demarco.com Thu Jun 1 17:32:59 2023
From: amy at demarco.com (Amy Marrich)
Date: Thu, 1 Jun 2023 12:32:59 -0500
Subject: RDO Ice Cream Social at OpenInfra Summit
Message-ID:

Join members of the community for ice cream after the Marketplace Mixer on Wednesday at Soft Peaks in Gastown! Soft Peaks is known for using organic milk and their unique toppings! For those with dairy-related dietary needs who register, we will have an alternative for you, so don't feel left out!

https://eventyay.com/events/231a963f

From swogatpradhan22 at gmail.com Thu Jun 1 18:18:40 2023
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Thu, 1 Jun 2023 23:48:40 +0530
Subject: Add netapp storage in edge site | Wallaby | DCN
In-Reply-To: References: Message-ID:

Thank you so much Alan for your help.
I am using the following template to deploy the netapp. I have my original template, and this is the custom template I am adding to it.
I will check and confirm if this works.

With regards,
Swogat Pradhan

On Thu, Jun 1, 2023 at 9:51 AM Alan Bishop wrote:
>
> On Wed, May 31, 2023 at 10:44 AM Swogat Pradhan wrote:
>
>> Hi Alan,
>> Can you please check if the environment file attached is in the correct format and how it should be?
>> Also, should I remove the old netapp environment file and use the new custom environment file created (attached)?
>
> You could retain the original environment file, and merge in just a portion of the sample file that you attached.
>
> The original file contains all the settings, including the resource_registry entry, to deploy the first netapp in the controlplane. You want to enhance this with just a couple of items in order to deploy the second netapp at the edge site.
>
> The important thing to know is that the CinderNetappMultiConfig entries are necessary only for the second netapp. The 'tripleo_netapp' entry could be removed, because it's basically taken care of by the original env file. You only need to include the 'tripleo_netapp_dcn02' settings. Furthermore, you don't need to list every setting; you only need to list the ones that differ from the settings in the original env file. Basically, the settings in the original env file (including the resource_registry) define the default values for all netapp backends, and the entries in CinderNetappMultiConfig define the overrides for specific backends. For example, if the original env file sets "CinderNetappStorageProtocol: iscsi" then that will be the default value for all netapp backends, unless you override it in the CinderNetappMultiConfig section.
>
> Lastly, your CinderNetappBackendName setting looks correct (it specifies both the 'tripleo_netapp' and 'tripleo_netapp_dcn02' backends).
>
> Alan
>
>> With regards,
>> Swogat Pradhan
>>
>> On Wed, May 31, 2023 at 10:46 PM Swogat Pradhan <swogatpradhan22 at gmail.com> wrote:
>>
>>> Hi Alan,
>>> Thanks for your clarification; the way you suggested will solve my issue.
>>> But I already have a netapp backend in my central site, and to add another backend should I follow this documentation:
>>> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.0/html/custom_block_storage_back_end_deployment_guide/ref_configuration-sample-environment-file_custom-cinder-back-end
>>>
>>> And should I remove the old netapp environment file and use the new custom environment file created using the above-mentioned guide??
>>>
>>> I already have a prod workload in the currently deployed netapp and I do not want to cause any issues in that netapp storage.
>>>
>>> With regards,
>>> Swogat Pradhan
>>>
>>> On Tue, May 30, 2023 at 8:43 PM Alan Bishop wrote:
>>>>
>>>> On Thu, May 25, 2023 at 9:39 PM Swogat Pradhan <swogatpradhan22 at gmail.com> wrote:
>>>>
>>>>> Hi Alan,
>>>>> My netapp storage is located in the edge site itself.
>>>>> As the networks are routable, my central site is able to reach the netapp storage IP address (ping response is 30ms-40ms).
>>>>> Let's say I included the netapp storage yaml in the central site deployment script (which is not recommended) and I am able to create the volumes, as it is reachable from the controller nodes.
>>>>> Will I be able to mount those volumes in edge site VMs?? And if I am able to do so, then how will the data flow?? When storing something in the netapp volume, will the data flow through the central site controller and get stored in the storage space?
>>>>
>>>> A cinder-volume service running in the central site's controlplane will be able to work with a netapp backend that's physically located at an edge site. The good news is the c-vol service will be HA because it will be controlled by pacemaker running on the controllers.
>>>>
>>>> In order for VMs at the edge site to access volumes on the netapp, you'll need to set the CinderNetappAvailabilityZone [1] to the edge site's AZ.
>>>>
>>>> [1] https://opendev.org/openstack/tripleo-heat-templates/src/branch/stable/wallaby/deployment/cinder/cinder-backend-netapp-puppet.yaml#L43
>>>>
>>>> To attach a netapp volume, nova-compute at the edge will interact with cinder-volume in the controlplane, and cinder-volume will in turn interact with the netapp. This will happen over central <=> edge network connections. Eventually, nova will directly connect to the netapp, so all traffic from the VM to the netapp will occur within the edge site. Data will not flow through the cinder-volume service, but there are restrictions and limitations:
>>>> - Only that one edge site can access the netapp backend
>>>> - If the central <=> edge network connection goes down, then you won't be able to attach or detach a netapp volume (but active connections will continue to work)
>>>>
>>>> Of course, there are operations where cinder services are in the data path (e.g. creating a volume from an image), but not when a VM is accessing a volume.
>>>>
>>>> Alan
>>>>
>>>>> With regards,
>>>>> Swogat Pradhan
>>>>>
>>>>> On Fri, 26 May 2023, 10:03 am, Alan Bishop wrote:
>>>>>>
>>>>>> On Thu, May 25, 2023 at 12:09 AM Swogat Pradhan <swogatpradhan22 at gmail.com> wrote:
>>>>>>
>>>>>>> Hi Alan,
>>>>>>> So, can I include the cinder-netapp-storage.yaml file in the central site and then use the new backend to add storage to edge VMs?
>>>>>>
>>>>>> Where is the NetApp physically located? Tripleo's DCN architecture assumes the storage is physically located at the same site where the cinder-volume service will be deployed. If you include the cinder-netapp-storage.yaml environment file in the central site's controlplane, then VMs at the edge site will encounter the problems I outlined earlier (network latency, no ability to do cross-AZ attachments).
>>>>>>
>>>>>>> I believe it is not possible, right??
>>>>>>> As the cinder volume in the edge won't have the config for the netapp.
>>>>>>
>>>>>> The cinder-volume services at an edge site are meant to manage storage devices at that site. If the NetApp is at the edge site, ideally you'd include some variation of the cinder-netapp-storage.yaml environment file in the edge site's deployment. However, then you're faced with the fact that the NetApp driver doesn't support A/A, which is required for c-vol services running at edge sites. (In case you're not familiar with these details, tripleo runs all cinder-volume services in active/passive mode under pacemaker on controllers in the controlplane. Thus, only a single instance runs at any time, and pacemaker provides HA by moving the service to another controller if the first one goes down. However, pacemaker is not available at edge sites, and so to get HA, multiple instances of the cinder-volume service run simultaneously on 3 nodes (A/A), using etcd as a Distributed Lock Manager (DLM) to coordinate things. But drivers must specifically support running A/A, and the NetApp driver does NOT.)
>>>>>>
>>>>>> Alan
>>>>>>
>>>>>>> With regards,
>>>>>>> Swogat Pradhan
>>>>>>>
>>>>>>> On Thu, May 25, 2023 at 2:17 AM Alan Bishop wrote:
>>>>>>>>
>>>>>>>> On Wed, May 24, 2023 at 3:15 AM Swogat Pradhan <swogatpradhan22 at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>> I have a DCN setup and there is a requirement to use a netapp storage device in one of the edge sites.
>>>>>>>>> Can someone please confirm if it is possible?
>>>>>>>>
>>>>>>>> I see from prior email to this list that you're using tripleo, so I'll respond with that in mind.
>>>>>>>>
>>>>>>>> There are many factors that come into play, but I suspect the short answer to your question is no.
>>>>>>>>
>>>>>>>> Tripleo's DCN architecture requires the cinder-volume service running at edge sites to run in active-active mode, where there are separate instances running on three nodes in order for the service to be highly available (HA). The problem is that only a small number of cinder drivers support running A/A, and NetApp's drivers do not support A/A.
>>>>>>>>
>>>>>>>> It's conceivable you could create a custom tripleo role that deploys just a single node running cinder-volume with a NetApp backend, but it wouldn't be HA.
>>>>>>>>
>>>>>>>> It's also conceivable you could locate the NetApp system in the central site's controlplane, but there are extremely difficult constraints you'd need to overcome:
>>>>>>>> - Network latency between the central and edge sites would mean the disk performance would be bad.
>>>>>>>> - You'd be limited to using iSCSI (FC wouldn't work)
>>>>>>>> - Tripleo disables cross-AZ attachments, so the only way for an edge site to access a NetApp volume would be to configure the cinder-volume service running in the controlplane with a backend availability zone set to the edge site's AZ. You mentioned the NetApp is needed "in one of the edge sites," but in reality the NetApp would be available in one, AND ONLY ONE edge site, and it would also not be available to any instances running in the central site.
>>>>>>>>
>>>>>>>> Alan
>>>>>>>>
>>>>>>>>> And if so, then should I add the parameters in the edge deployment script or the central deployment script?
>>>>>>>>> Any suggestions?
>>>>>>>>>
>>>>>>>>> With regards,
>>>>>>>>> Swogat Pradhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cinder-netapp-custom-env - Copy.yml
Type: application/octet-stream
Size: 1126 bytes
Desc: not available
URL: 

From christian.rohmann at inovex.de Thu Jun 1 20:03:57 2023
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Thu, 1 Jun 2023 22:03:57 +0200
Subject: [Nova] How to (live-) migrate instances in a server-group with affinity to another host?
Message-ID: <253ceb21-2755-3a80-6952-97b60aba1aa4@inovex.de>

Hello OpenStack-Discuss,

I was wondering how I as an admin can (live-)migrate instances for which the user applied a server-group with affinity.
Looking at the API ([1]), there is no (obvious) way to migrate multiple instances or a server-group "en bloc".

But migrating a single instance out of a group does result in:

> [...]
> 2023-06-01 12:20:00.489 6910 INFO nova.scheduler.host_manager [req-01d9 83eb5 9c8353142098 - default default] Host filter ignoring hosts:
> 2023-06-01 12:20:00.490 6910 INFO nova.filters [req-01d9 83eb5 9c8353142098 - default default] Filter ServerGroupAffinityFilter returned 0 hosts
> 2023-06-01 12:20:00.491 6910 INFO nova.filters [req-01d9 83eb5 9c8353142098 - default default] Filtering removed all hosts for the request with instance ID '2476cfdd-9315-4bff-ba3e-d3be5874593f'. Filter results: ['AvailabilityZoneFilter: (start: 10, end: 10)', 'ComputeFilter: (start: 10, end: 10)', 'ComputeCapabilitiesFilter: (start: 10, end: 9)', 'ImagePropertiesFilter: (start: 9, end: 9)', 'ServerGroupAntiAffinityFilter: (start: 9, end: 9)', 'ServerGroupAffinityFilter: (start: 9, end: 0)']
> [...]

Am I missing something here? Or am I just holding this wrong?

Regards

Christian

[1] https://docs.openstack.org/api-ref/compute/?expanded=force-migration-complete-action-force-complete-action-detail#live-migrate-server-os-migratelive-action

From smooney at redhat.com Thu Jun 1 20:26:13 2023
From: smooney at redhat.com (smooney at redhat.com)
Date: Thu, 01 Jun 2023 21:26:13 +0100
Subject: [Nova] How to (live-) migrate instances in a server-group with affinity to another host?
In-Reply-To: <253ceb21-2755-3a80-6952-97b60aba1aa4@inovex.de>
References: <253ceb21-2755-3a80-6952-97b60aba1aa4@inovex.de>
Message-ID: <88b7b4790ee518ea012a20584ace8f5ebdb6a95c.camel@redhat.com>

On Thu, 2023-06-01 at 22:03 +0200, Christian Rohmann wrote:
> Hello OpenStack-Discuss,
>
> I was wondering how I as an admin can (live-)migrate instances for which the user applied a server-group with affinity.
> Looking at the API ([1]), there is no (obvious) way to migrate multiple instances or a server-group "en bloc".

Correct, there is no way to migrate a single instance or the block of instances when using the affinity policy.

>
> But migrating a single instance out of a group does result in:
>
> > [...]
> > 2023-06-01 12:20:00.489 6910 INFO nova.scheduler.host_manager [req-01d9 83eb5 9c8353142098 - default default] Host filter ignoring hosts:
> > 2023-06-01 12:20:00.490 6910 INFO nova.filters [req-01d9 83eb5 9c8353142098 - default default] Filter ServerGroupAffinityFilter returned 0 hosts
> > 2023-06-01 12:20:00.491 6910 INFO nova.filters [req-01d9 83eb5 9c8353142098 - default default] Filtering removed all hosts for the request with instance ID '2476cfdd-9315-4bff-ba3e-d3be5874593f'. Filter results: ['AvailabilityZoneFilter: (start: 10, end: 10)', 'ComputeFilter: (start: 10, end: 10)', 'ComputeCapabilitiesFilter: (start: 10, end: 9)', 'ImagePropertiesFilter: (start: 9, end: 9)', 'ServerGroupAntiAffinityFilter: (start: 9, end: 9)', 'ServerGroupAffinityFilter: (start: 9, end: 0)']
> > [...]
>
> Am I missing something here? Or am I just holding this wrong?

No, it's not supported.
It's one of the limitations of using server groups and allowing the affinity policy. The only way to migrate the instance is to disable the late affinity check and then force the migration using an older microversion to bypass the filter.

>
> Regards
>
> Christian
>
> [1] https://docs.openstack.org/api-ref/compute/?expanded=force-migration-complete-action-force-complete-action-detail#live-migrate-server-os-migratelive-action

From jesper at schmitz.computer Thu Jun 1 20:31:36 2023
From: jesper at schmitz.computer (Jesper Schmitz Mouridsen)
Date: Thu, 01 Jun 2023 22:31:36 +0200
Subject: Re: [Nova] How to (live-) migrate instances in a server-group with affinity to another host?
In-Reply-To: <253ceb21-2755-3a80-6952-97b60aba1aa4@inovex.de>
References: <253ceb21-2755-3a80-6952-97b60aba1aa4@inovex.de>
Message-ID:

--force iirc

On 1 June 2023 at 22:03:57 CEST, Christian Rohmann wrote:
> Hello OpenStack-Discuss,
>
> I was wondering how I as an admin can (live-)migrate instances for which the user applied a server-group with affinity.
> Looking at the API ([1]), there is no (obvious) way to migrate multiple instances or a server-group "en bloc".
>
> But migrating a single instance out of a group does result in:
>
>> [...]
>> 2023-06-01 12:20:00.489 6910 INFO nova.scheduler.host_manager [req-01d9 83eb5 9c8353142098 - default default] Host filter ignoring hosts:
>> 2023-06-01 12:20:00.490 6910 INFO nova.filters [req-01d9 83eb5 9c8353142098 - default default] Filter ServerGroupAffinityFilter returned 0 hosts
>> 2023-06-01 12:20:00.491 6910 INFO nova.filters [req-01d9 83eb5 9c8353142098 - default default] Filtering removed all hosts for the request with instance ID '2476cfdd-9315-4bff-ba3e-d3be5874593f'. Filter results: ['AvailabilityZoneFilter: (start: 10, end: 10)', 'ComputeFilter: (start: 10, end: 10)', 'ComputeCapabilitiesFilter: (start: 10, end: 9)', 'ImagePropertiesFilter: (start: 9, end: 9)', 'ServerGroupAntiAffinityFilter: (start: 9, end: 9)', 'ServerGroupAffinityFilter: (start: 9, end: 0)']
>> [...]
>
> Am I missing something here? Or am I just holding this wrong?
>
> Regards
>
> Christian
>
> [1] https://docs.openstack.org/api-ref/compute/?expanded=force-migration-complete-action-force-complete-action-detail#live-migrate-server-os-migratelive-action
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
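A sketch of the workaround Sean describes above, with heavy caveats: the nova.conf option shown is presumably what he means by the "late affinity check", the --force flag only exists up to compute API microversion 2.67 (it was removed in 2.68), and the host and UUID values are placeholders. Verify all of this against your deployment before relying on it.

```console
# On the compute nodes, relax the late (compute-side) affinity check,
# presumably via the [workarounds] section of nova.conf:
#   [workarounds]
#   disable_group_policy_check_upcall = True

# Then force the live migration past the scheduler filters using an
# older microversion:
$ nova --os-compute-api-version 2.67 live-migration --force <server-uuid> <destination-host>
```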
From sbaker at redhat.com Thu Jun 1 22:21:19 2023
From: sbaker at redhat.com (Steve Baker)
Date: Fri, 2 Jun 2023 10:21:19 +1200
Subject: [ironic][stable] Proposing EOL of ironic project branches older than Wallaby
In-Reply-To: References: Message-ID:

On 31/05/23 08:30, Jay Faulkner wrote:
> Hey,
>
> I'm trying to clean up zuul-config-errors for Ironic, and Train has reared its head again: https://review.opendev.org/c/openstack/ironic-lib/+/884722.
>
> Is there still value in continuing to keep Train (and perhaps, Ussuri and Victoria) in EM at this point? Should we migrate them to EOL?
>
> What do you all think?

We'd like to request that train remain until mid-August; then it can go EOL. The cinder backports may well take a decent proportion of this time.

>
> -
> Jay Faulkner
> Ironic PTL
>
> On Tue, Nov 1, 2022 at 3:12 PM Steve Baker wrote:
>
> On 12/10/22 05:53, Jay Faulkner wrote:
>> We discussed stable branches in the most recent ironic meeting (https://meetings.opendev.org/meetings/ironic/2022/ironic.2022-10-10-15.01.log.txt). The decision was made to do the following:
>>
>> EOL these branches:
>> - stable/queens
>> - stable/rocky
>> - stable/stein
>>
>> Reduce testing considerably on these branches, and only backport critical bugfixes or security bugfixes:
>> - stable/train
>> - stable/ussuri
>> - stable/victoria
>>
> Just coming back to this, keeping stable/train jobs green has become untenable, so I think it's time we consider EOLing it.
>
> It is the extended-maintenance branch of interest to me, so I'd be fine with stable/ussuri and stable/victoria being EOLed also.
>
>> Our remaining branches will continue to get most eligible patches backported to them.
>>
>> This email, plus earlier communications including a tweet, will serve as notice that these branches are being EOL'd.
>>
>> Thanks,
>> Jay Faulkner
>>
>> On Tue, Oct 4, 2022 at 11:18 AM Jay Faulkner wrote:
>>
>> Hi all,
>>
>> Ironic has a large amount of stable branches still in EM. We need to take action to ensure those branches are either retired or have CI repaired to the point of being usable.
>>
>> Specifically, I'm looking at these branches across all Ironic projects:
>> - stable/queens
>> - stable/rocky
>> - stable/stein
>> - stable/train
>> - stable/ussuri
>> - stable/victoria
>>
>> In lieu of any volunteers to maintain the CI, my recommendation for all the branches listed above is that they be marked EOL. If someone wants to volunteer to maintain CI for those branches, they can propose one of the below paths be taken instead:
>>
>> 1 - Someone volunteers to maintain these branches, and also reports the status of CI of these older branches periodically on the Ironic whiteboard and in Ironic meetings. If you feel strongly that one of these branches needs to continue to be in service, volunteering in this way is how to save them.
>>
>> 2 - We seriously reduce CI, basically removing all tempest tests to ensure that CI remains reliable and able to merge emergency or security fixes when needed. In some cases, this still requires CI fixes, as some older inspector branches are failing *installing packages* in unit tests. I would still like, in this case, for someone to volunteer to ensure the minimalist CI remains happy.
>>
>> My intention is to let this message serve as notice and a waiting period; and if I've not heard any response here or in Monday's Ironic meeting (in 6 days), I will begin taking action on retiring these branches.
>>
>> This is simply a start; other branches (including bugfix branches) are also in bad shape in CI, but getting these retired will significantly reduce the surface area of projects and branches to evaluate.
>>
>> I know it's painful to drop support for these branches; but we've provided good EM support for these branches for a long time, and by pruning them away, we'll be able to save time to dedicate to other items.
>> >> This is simply a start; other branches (including bugfix >> branches) are also in bad shape in CI, but getting these >> retired will significantly reduce the surface area of >> projects and branches to evaluate. >> >> I know it's painful to drop support for these branches; but >> we've provided good EM support for these branches for a long >> time and by pruning them away, we'll be able to save time to >> dedicate to other items. >> >> Thanks, >> Jay Faulkner >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Fri Jun 2 01:09:31 2023 From: berndbausch at gmail.com (berndbausch at gmail.com) Date: Fri, 2 Jun 2023 10:09:31 +0900 Subject: openstack/devstack issue In-Reply-To: References: Message-ID: <0e1201d994ee$e4012d70$ac038850$@gmail.com> To get a better answer, add information regarding your many issues, describe your cloud (e.g. how many nodes, their characteristics, the network setup), and characterize the current load on your cloud (e.g. number of instances, what resources are in use ? see Horizon?s front page). >From the information you provide, I can say that the launch fails because no compute host was found that could run the instance. Possible reasons (the list is not exhaustive): * No compute host has enough free CPU, RAM or disk storage resources * The instance?s CPU architecture is not compatible with the compute hosts? hypervisors and/or CPUs * The image from which the instance is launched requests conditions that are not met by compute hosts * You set instance launch options whose conditions are not met by compute hosts (screenshot removed from my answer to decrease message size) From: syed Sent: Thursday, June 1, 2023 10:49 PM To: openstack-discuss at lists.openstack.org Subject: openstack/devstack issue Hi, I was building a system i.e. an orchestrator that includes resources like an ubuntu virtual machine, firecracker micro virtual machine, unikraft unikernel and a storage. It seems that openstack/devstack is capable of orchestration of such a heterogeneous system and could be customized according to the specific needs. I, however, am experiencing a lot of issues when including the resources e.g. when i try to create an ubuntu image and launch it, it results in errors that are quite complex to resolve and so i intend to reach out to the experts so as to resolve these issues at hand. Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Fri Jun 2 05:49:34 2023 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Fri, 2 Jun 2023 11:19:34 +0530 Subject: instance console something went wrong, connection is closed | Wallaby DCN Message-ID: Hi, I am creating instances in my DCN site and i am unable to get the console sometimes, error: something went wrong, connection is closed I have 3 instances now running on my hci02 node and there is console access on 1 of the vm's and the rest two i am not getting the console, i have used the same flavor, same image same security group for the VM's. Please suggest what can be done. With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From swogatpradhan22 at gmail.com Fri Jun 2 05:57:34 2023
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Fri, 2 Jun 2023 11:27:34 +0530
Subject: instance console something went wrong, connection is closed | Wallaby DCN
In-Reply-To: References: Message-ID: 

Update: if I perform any activity, such as a migration or resize, on an instance whose console is accessible, the console becomes inaccessible with the same error: something went wrong, connection is closed.

There was one other instance whose console was not accessible; I did a shelve and unshelve, and suddenly the instance console became accessible.

This is peculiar behavior and I don't understand where the issue is.

With regards,
Swogat Pradhan

On Fri, Jun 2, 2023 at 11:19 AM Swogat Pradhan wrote:
> Hi,
> I am creating instances in my DCN site and sometimes I am unable to get
> the console; the error is: something went wrong, connection is closed.
>
> I now have 3 instances running on my hci02 node; there is console access
> on one of the VMs, and on the other two I am not getting the console. I
> have used the same flavor, the same image and the same security group
> for all the VMs.
>
> Please suggest what can be done.
>
> With regards,
> Swogat Pradhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From christian.rohmann at inovex.de Fri Jun 2 06:19:46 2023
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Fri, 2 Jun 2023 08:19:46 +0200
Subject: [Nova] How to (live-) migrate instances in a server-group with affinity to another host?
In-Reply-To: <88b7b4790ee518ea012a20584ace8f5ebdb6a95c.camel@redhat.com>
References: <253ceb21-2755-3a80-6952-97b60aba1aa4@inovex.de> <88b7b4790ee518ea012a20584ace8f5ebdb6a95c.camel@redhat.com>
Message-ID: <5aa06a8b-6fd2-77e8-c454-59c0fe7accc3@inovex.de>

On 01/06/2023 22:26, smooney at redhat.com wrote:
>> Am I missing something here? Or am I just holding this wrong?
> No, it's not supported.
> It's one of the limitations of using server groups and allowing the
> affinity policy. The only way to migrate the instance is to disable the
> late affinity check and then force the migration using an older
> microversion to bypass the filter.

Thanks for your clear and rapid response, even though the answer is a sad one.

I wonder if people are simply not using live migration as much as we do to allow for rolling maintenance (packages, kernel updates, HW replacements, ...) on their compute servers without interrupting user workloads (much).

Since the functionality to "bypass" the filters is there, even if only via an old microversion: why not allow skipping the filters when a distinct host is chosen? That is something only admins can use anyway, and it would create a way to migrate those instances away and completely free a host.

Regards

Christian
-------------- next part --------------
An HTML attachment was scrubbed...
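As a concrete illustration of the workaround described above, a sketch only: the older microversion is what lets a named destination bypass the scheduler checks, the host and server names are made up, and the exact client syntax varies between client versions.

```
openstack --os-compute-api-version 2.29 \
    server migrate --live compute-12 my-affinity-vm
```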
From noonedeadpunk at gmail.com Fri Jun 2 06:31:04 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Fri, 2 Jun 2023 08:31:04 +0200
Subject: Re: [Nova] How to (live-) migrate instances in a server-group with affinity to another host?
In-Reply-To: <5aa06a8b-6fd2-77e8-c454-59c0fe7accc3@inovex.de>
References: <253ceb21-2755-3a80-6952-97b60aba1aa4@inovex.de> <88b7b4790ee518ea012a20584ace8f5ebdb6a95c.camel@redhat.com> <5aa06a8b-6fd2-77e8-c454-59c0fe7accc3@inovex.de>
Message-ID: 

We use live migrations a lot, though affinity groups see less use; anti-affinity is far more typical, at least with our workloads.

The only thing I can suggest is to use soft-affinity. I assume that should allow the policy to be broken when the scheduler is explicitly told to do so.

On Fri, Jun 2, 2023 at 08:22 Christian Rohmann wrote:
> On 01/06/2023 22:26, smooney at redhat.com wrote:
> > Am I missing something here? Or am I just holding this wrong?
> No, it's not supported. It's one of the limitations of using server
> groups and allowing the affinity policy. The only way to migrate the
> instance is to disable the late affinity check and then force the
> migration using an older microversion to bypass the filter.
>
> Thanks for your clear and rapid response, even though the answer is a
> sad one.
>
> I wonder if people are simply not using live migration as much as we do
> to allow for rolling maintenance (packages, kernel updates, HW
> replacements, ...) on their compute servers without interrupting user
> workloads (much).
>
> Since the functionality to "bypass" the filters is there, even if only
> via an old microversion: why not allow skipping the filters when a
> distinct host is chosen? That is something only admins can use anyway,
> and it would create a way to migrate those instances away and
> completely free a host.
>
> Regards
>
> Christian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralonsoh at redhat.com Fri Jun 2 11:48:19 2023
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Fri, 2 Jun 2023 13:48:19 +0200
Subject: [neutron] Neutron drivers meeting
Message-ID: 

Hello Neutrinos:

Please remember we have the Neutron drivers meeting today at 14:00 UTC. The agenda [1] has one topic:

* https://bugs.launchpad.net/neutron/+bug/2020823: [RFE] Add flavor/service provider support to routers in the L3 OVN plugin

See you later.

[1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amy at demarco.com Fri Jun 2 16:29:41 2023
From: amy at demarco.com (Amy Marrich)
Date: Fri, 2 Jun 2023 11:29:41 -0500
Subject: RDO Ice Cream Social at OpenInfra Summit - Correct link
In-Reply-To: References: Message-ID: 

Sorry for the confusion; here is the correct link to register:

https://eventyay.com/e/231a963f

Looking forward to seeing everyone there!

On Thu, Jun 1, 2023 at 12:32 PM Amy Marrich wrote:
>
> Join members of the community for ice cream after the Marketplace
> Mixer on Wednesday at Soft Peaks in Gastown! Soft Peaks is known for
> using organic milk and their unique toppings! For those with
> dairy-related dietary needs who register, we will have an alternative
> for you, so don't feel left out!
>
> https://eventyay.com/events/231a963f

From haiwu.us at gmail.com Fri Jun 2 17:11:27 2023
From: haiwu.us at gmail.com (hai wu)
Date: Fri, 2 Jun 2023 12:11:27 -0500
Subject: [nova] hw:numa_nodes question
In-Reply-To: References: <713262656198f4e0330a086b906daa8a1cb3e40c.camel@redhat.com> <1a05b92654acb6309bc52fac14c9ae79242ab40e.camel@redhat.com> <1705b563fb0e936a2aa8356f6adccddd948b69bf.camel@redhat.com> <56ffe1e6cabcc54920b6f8a3a255d13bd7407628.camel@redhat.com> <8acd0ffb7bb09de4b48c5c69f849659d805134c5.camel@redhat.com> <63f8c407de908e1bbe8589ad9400e7751c4b4d44.camel@redhat.com>
Message-ID: 

Sean,

I just tried setting 'openstack flavor set --property hw:mem_page_size=small' on one existing flavor (which already had "hw:numa_nodes='1'" set as a property), and created a new VM from this flavor.
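Spelled out, that sequence is roughly the following; the flavor and server names are placeholders, and note from the quoted discussion below that an existing instance only picks the property up through a resize to such a flavor.

```
openstack flavor set numa-small --property hw:numa_nodes=1
openstack flavor set numa-small --property hw:mem_page_size=small
openstack server resize --flavor numa-small my-vm   # for existing VMs
```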
Then I tried to live migrate this VM from its source hypervisor (which has many numa nodes, and this VM running on numa node 5) to another hypervisor (which has only 2 numa nodes), and the live migration failed. Log messages are complaining that there's no numa node 5 found on the target hypervisor host. What else is needed in order for this numa live migration to work, so that this VM could end up in a different numa node on the target host? Is there any patch that's needed to be backported in order for this to work? On Mon, May 15, 2023 at 3:46?PM hai wu wrote: > > Hmm, regarding this: `if the vm only has hw:numa_node=1 > https://review.opendev.org/c/openstack/nova/+/805649 wont help`. > > Per my recent numerous tests, if the vm only has hw:numa_node=1 > https://review.opendev.org/c/openstack/nova/+/805649 will actually > help, but only for newly built VMs, it works pretty well only for > newly built VMs. > > On Mon, May 15, 2023 at 3:21?PM Sean Mooney wrote: > > > > On Mon, 2023-05-15 at 14:46 -0500, hai wu wrote: > > > This patch was backported: > > > https://review.opendev.org/c/openstack/nova/+/805649. Once this is in > > > place, new VMs always get assigned correctly to the numa node with > > > more free memory. But when existing VMs (created with vm flavor with > > > hw:numa_node=1 set) already running on numa node #0 got live migrated, > > > it would always be stuck on numa node #0 after live migration. > > if the vm only has hw:numa_node=1 https://review.opendev.org/c/openstack/nova/+/805649 wont help > > > > because we never claim any mempages or cpus in the host numa toplogy blob > > as such the sorting based on usage to balance the nodes wont work since there is never any usage recored > > for vms with just hw:numa_node=1 and nothign else set. > > > > > > So it seems we would also need to set hw:mem_page_size=small on the vm > > > flavor, so that new VMs created from that flavor would be able to land > > > on different numa node other than node#0 after its live migration? > > yes again becasue mem_page_size there is no usage in the host numa toplogy blob so as far as the schduler/resouces > > tracker is concerned all numa nodes are equally used. > > > > so it will always select nuam 0 by default since the scheduling algortim is deterministic. > > > > > > On Mon, May 15, 2023 at 2:33?PM Sean Mooney wrote: > > > > > > > > On Mon, 2023-05-15 at 13:03 -0500, hai wu wrote: > > > > > > > Another question: Let's say a VM runs on one host's numa node #0. If > > > > > > > we live-migrate this VM to another host, and that host's numa node #1 > > > > > > > has more free memory, is it possible for this VM to land on the other > > > > > > > host's numa node #1? > > > > > > yes it is > > > > > > on newer relsese we will prefer to balance the load across numa nodes > > > > > > on older release nova woudl fill the first numa node then move to the second. > > > > > > > > > > About the above point, it seems even with the numa patch back ported > > > > > and in place, the VM would be stuck in its existing numa node. Per my > > > > > tests, after its live migration, the VM will end up on the other > > > > > host's numa node #0, even if numa node#1 has more free memory. This is > > > > > not the case for newly built VMs. > > > > > > > > > > Is this a design issue? 
> > > > if you are using a release that supprot numa live migration (train +) > > > > https://specs.openstack.org/openstack/nova-specs/specs/train/implemented/numa-aware-live-migration.html > > > > then the numa affintiy is recalulated on live migration however numa node 0 is prefered. > > > > > > > > as of xena [compute]/packing_host_numa_cells_allocation_strategy has been added to contol how vms are balanced acros numa nodes > > > > in zed the default was changed form packing vms per host numa node to balancing vms between host numa nodes > > > > https://docs.openstack.org/releasenotes/nova/zed.html#relnotes-26-0-0-stable-zed-upgrade-notes > > > > > > > > even without the enhanchemt in xena and zed it was possible for the scheduler to select a numa node > > > > > > > > if you dont enable memory or cpu aware numa placment with > > > > hw:mem_page_size or hw:cpu_policy=dedicated then it will always select numa 0 > > > > > > > > if you do not request cpu pinnign or a specifc page size the sechudler cant properly select the host nuam node > > > > and will alwasy use numa node 0. That is one of the reason i said that if hw:numa_nodes is set then hw:mem_page_size shoudl be set. > > > > > > > > from a nova point of view using numa_nodes without mem_page_size is logically incorrect as you asked for > > > > a vm to be affinites to n host numa nodes but did not enable numa aware memory scheduling. > > > > > > > > we unfortnally cant provent this in the nova api without breaking upgrades for everyone who has made this mistake. > > > > we woudl need to force them to resize all affected instances which means guest downtime. > > > > the other issue si multiple numa nodes are supproted by Hyper-V but they do not supprot mem_page_size > > > > > > > > we have tried to document this in the past but never agreed on how becasuse it subtel and requries alot of context. > > > > the tl;dr is if the instace has a numa toplogy it should have mem_page_size set in the image or flavor but > > > > we never foudn a good place to capture that. > > > > > > > > > > > > > > On Thu, May 11, 2023 at 2:42?PM Sean Mooney wrote: > > > > > > > > > > > > On Thu, 2023-05-11 at 08:40 -0500, hai wu wrote: > > > > > > > Ok. Then I don't understand why 'hw:mem_page_size' is not made the > > > > > > > default in case if hw:numa_node is set. There is a huge disadvantage > > > > > > > if not having this one set (all existing VMs with hw:numa_node set > > > > > > > will have to be taken down for resizing in order to get this one > > > > > > > right). > > > > > > there is an upgrade impact to changign the default. > > > > > > its not impossibel to do but its complicated if we dont want to break exisitng deployments > > > > > > we woudl need to recored a value for eveny current instance that was spawned before > > > > > > this default was changed that had hw:numa_node without hw:mem_page_size so they kept the old behavior > > > > > > and make sure that is cleared when the vm is next moved so it can have the new default > > > > > > after a live migratoin. > > > > > > > > > > > > > > I could not find this point mentioned in any existing Openstack > > > > > > > documentation: that we would have to set hw:mem_page_size explicitly > > > > > > > if hw:numa_node is set. Also this slide at > > > > > > > https://www.linux-kvm.org/images/0/0b/03x03-Openstackpdf.pdf kind of > > > > > > > indicates that hw:mem_page_size `Default to small pages`. > > > > > > it defaults to unset. 
> > > > > > that results in small pages by default but its not the same as hw:mem_page_size=small > > > > > > or hw:mem_page_size=any. > > > > > > > > > > > > > > > > > > > > > > > > > > Another question: Let's say a VM runs on one host's numa node #0. If > > > > > > > we live-migrate this VM to another host, and that host's numa node #1 > > > > > > > has more free memory, is it possible for this VM to land on the other > > > > > > > host's numa node #1? > > > > > > yes it is > > > > > > on newer relsese we will prefer to balance the load across numa nodes > > > > > > on older release nova woudl fill the first numa node then move to the second. > > > > > > > > > > > > > > On Thu, May 11, 2023 at 4:25?AM Sean Mooney wrote: > > > > > > > > > > > > > > > > On Wed, 2023-05-10 at 15:06 -0500, hai wu wrote: > > > > > > > > > Is it possible to update something in the Openstack database for the > > > > > > > > > relevant VMs in order to do the same, and then hard reboot the VM so > > > > > > > > > that the VM would have this attribute? > > > > > > > > not really adding the missing hw:mem_page_size requirement to the flavor chagnes the > > > > > > > > requirements for node placement and numa affinity > > > > > > > > so you really can only change this via resizing the vm to a new flavor > > > > > > > > > > > > > > > > > > On Wed, May 10, 2023 at 2:47?PM Sean Mooney wrote: > > > > > > > > > > > > > > > > > > > > On Wed, 2023-05-10 at 14:22 -0500, hai wu wrote: > > > > > > > > > > > So there's no default value assumed/set for hw:mem_page_size for each > > > > > > > > > > > flavor? > > > > > > > > > > > > > > > > > > > > > correct this is a known edgecase in the currnt design > > > > > > > > > > hw:mem_page_size=any would be a resonable default but > > > > > > > > > > techinially if just set hw:numa_nodes=1 nova allow memory over subscription > > > > > > > > > > > > > > > > > > > > in pratch if you try to do that you will almost always end up with vms > > > > > > > > > > being killed due to OOM events. > > > > > > > > > > > > > > > > > > > > so from a api point of view it woudl be a change of behvior for use to default > > > > > > > > > > to hw:mem_page_size=any but i think it would be the correct thign to do for operators > > > > > > > > > > in the long run. > > > > > > > > > > > > > > > > > > > > i could bring this up with the core team again but in the past we > > > > > > > > > > decided to be conservitive and just warn peopel to alwasy set > > > > > > > > > > hw:mem_page_size if using numa affinity. > > > > > > > > > > > > > > > > > > > > > Yes https://bugs.launchpad.net/nova/+bug/1893121 is critical > > > > > > > > > > > when using hw:numa_nodes=1. > > > > > > > > > > > > > > > > > > > > > > I did not hit an issue with 'hw:mem_page_size' not set, maybe I am > > > > > > > > > > > missing some known test cases? It would be very helpful to have a test > > > > > > > > > > > case where I could reproduce this issue with 'hw:numa_nodes=1' being > > > > > > > > > > > set, but without 'hw:mem_page_size' being set. > > > > > > > > > > > > > > > > > > > > > > How to ensure this one for existing vms already running with > > > > > > > > > > > 'hw:numa_nodes=1', but without 'hw:mem_page_size' being set? > > > > > > > > > > you unfortuletly need to resize the instance. > > > > > > > > > > tehre are some image porpeties you can set on an instance via nova-manage > > > > > > > > > > but you cannot use nova-mange to update the enbedd flavor and set this. 
> > > > > > > > > > > > > > > > > > > > so you need to define a new flavour and resize. > > > > > > > > > > > > > > > > > > > > this is the main reason we have not changed the default as it may requrie you to > > > > > > > > > > move instnace around if there placement is now invalid now that per numa node memory > > > > > > > > > > allocatons are correctly being accounted for. > > > > > > > > > > > > > > > > > > > > if it was simple to change the default without any enduser or operator impact we would. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Wed, May 10, 2023 at 1:47?PM Sean Mooney wrote: > > > > > > > > > > > > > > > > > > > > > > > > if you set hw:numa_nodes there are two things you should keep in mind > > > > > > > > > > > > > > > > > > > > > > > > first if hw:numa_nodes si set to any value incluing hw:numa_nodes=1 > > > > > > > > > > > > then hw:mem_page_size shoudl also be defiend on the falvor. > > > > > > > > > > > > > > > > > > > > > > > > if you dont set hw:mem_page_size then the vam will be pinned to a host numa node > > > > > > > > > > > > but the avaible memory on the host numa node will not be taken into account > > > > > > > > > > > > > > > > > > > > > > > > only the total free memory on the host so this almost always results in VMs being killed by the OOM reaper > > > > > > > > > > > > in the kernel. > > > > > > > > > > > > > > > > > > > > > > > > i recomend setting hw:mem_page_size=small hw:mem_page_size=large or hw:mem_page_size=any > > > > > > > > > > > > small will use your kernels default page size for guest memory, typically this is 4k pages > > > > > > > > > > > > large will use any pages size other then the smallest that is avaiable (i.e. this will use hugepages) > > > > > > > > > > > > and any will use small pages but allow the guest to request hugepages via the hw_page_size image property. > > > > > > > > > > > > > > > > > > > > > > > > hw:mem_page_size=any is the most flexable as a result but generally i recommend using hw:mem_page_size=small > > > > > > > > > > > > and having a seperate flavor for hugepages. its really up to you. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > the second thing to keep in mind is using expict numa toplolig8ies including hw:numa_nodes=1 > > > > > > > > > > > > disables memory oversubsctipion. > > > > > > > > > > > > > > > > > > > > > > > > so you will not be able ot oversubscibe the memory on the host. > > > > > > > > > > > > > > > > > > > > > > > > in general its better to avoid memory oversubscribtion anyway but jsut keep that in mind. > > > > > > > > > > > > you cant jsut allocate a buch of swap space and run vms at a 2:1 or higher memory over subscription ratio > > > > > > > > > > > > if you are using numa affinity. > > > > > > > > > > > > > > > > > > > > > > > > https://that.guru/blog/the-numa-scheduling-story-in-nova/ > > > > > > > > > > > > and > > > > > > > > > > > > https://that.guru/blog/cpu-resources-redux/ > > > > > > > > > > > > > > > > > > > > > > > > are also good to read > > > > > > > > > > > > > > > > > > > > > > > > i do not think stephen has a dedicated block on the memory aspect > > > > > > > > > > > > but https://bugs.launchpad.net/nova/+bug/1893121 covers some of the probelem that only setting > > > > > > > > > > > > hw:numa_nodes=1 will casue. 
> > > > > > > > > > > > > > > > > > > > > > > > if you have vms with hw:numa_nodes=1 set and you do not have hw:mem_page_size set in the falvor or > > > > > > > > > > > > hw_mem_page_size set in the image then that vm is not configure properly. > > > > > > > > > > > > > > > > > > > > > > > > On Wed, 2023-05-10 at 11:52 -0600, Alvaro Soto wrote: > > > > > > > > > > > > > Another good resource =) > > > > > > > > > > > > > > > > > > > > > > > > > > https://that.guru/blog/cpu-resources/ > > > > > > > > > > > > > > > > > > > > > > > > > > On Wed, May 10, 2023 at 11:50?AM Alvaro Soto wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > I don't think so. > > > > > > > > > > > > > > > > > > > > > > > > > > > > ~~~ > > > > > > > > > > > > > > The most common case will be that the admin only sets hw:numa_nodes and > > > > > > > > > > > > > > then the flavor vCPUs and memory will be divided equally across the NUMA > > > > > > > > > > > > > > nodes. When a NUMA policy is in effect, it is mandatory for the instance's > > > > > > > > > > > > > > memory allocations to come from the NUMA nodes to which it is bound except > > > > > > > > > > > > > > where overriden by hw:numa_mem.NN. > > > > > > > > > > > > > > ~~~ > > > > > > > > > > > > > > > > > > > > > > > > > > > > Here are the implementation documents since Juno release: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://opendev.org/openstack/nova-specs/src/branch/master/specs/juno/implemented/virt-driver-numa-placement.rst > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://opendev.org/openstack/nova-specs/commit/45252df4c54674d2ac71cd88154af476c4d510e1 > > > > > > > > > > > > > > ? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Wed, May 10, 2023 at 11:31?AM hai wu wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Is there any concern to enable 'hw:numa_nodes=1' on all flavors, as > > > > > > > > > > > > > > > long as that flavor can fit into one numa node? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > > > > > > > > > > > > > > > > > > > Alvaro Soto > > > > > > > > > > > > > > > > > > > > > > > > > > > > *Note: My work hours may not be your work hours. Please do not feel the > > > > > > > > > > > > > > need to respond during a time that is not convenient for you.* > > > > > > > > > > > > > > ---------------------------------------------------------- > > > > > > > > > > > > > > Great people talk about ideas, > > > > > > > > > > > > > > ordinary people talk about things, > > > > > > > > > > > > > > small people talk... about other people. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From peter.matulis at canonical.com Fri Jun 2 20:15:14 2023 From: peter.matulis at canonical.com (Peter Matulis) Date: Fri, 2 Jun 2023 16:15:14 -0400 Subject: [charms] OpenStack Charms 2023.1 (Antelope) release is now available Message-ID: The 2023.1 (Antelope) release of the OpenStack Charms is now available. Please see the release notes for full details: https://docs.openstack.org/charm-guide/latest/release-notes/2023.1-antelope.html == Highlights == * OpenStack Antelope OpenStack Antelope is now supported on Ubuntu 22.04 LTS (via UCA) and Ubuntu 23.04 natively. 
* New charm: ironic-dashboard

This new charm integrates the Ironic Dashboard into the OpenStack Dashboard.

* Service user password rotation

The keystone, mysql-innodb-cluster, and rabbitmq-server charms have gained actions to assist with rotating the passwords of OpenStack service users. Two actions are provided for each of these charms (to list usernames and rotate their passwords).

* New features for the ironic-conductor charm

The ironic-conductor charm has acquired a few new abilities. It can now enable hardware-specific options in the Ironic Conductor service. It also now supports custom timeouts for requests made to download install images.

* Documentation updates

Regular improvements and bug fixes. A new page on Network spaces was landed.

== OpenStack Charms team ==

The OpenStack Charms team can be contacted via real time chat:

https://chat.charmhub.io/charmhub/channels/openstack-charms

The Juju user forum is also popular for diagnosing technical issues:

https://discourse.charmhub.io/c/juju/

== Thank you ==

Lots of thanks to the 24 contributors below who squashed bugs, enabled new features, and improved the documentation!

Alex Kavanagh
Arif Ali
Bas de Bruijne
Billy Olsen
Chris MacNaughton
Corey Bryant
Dmitrii Shcherbakov
Edward Hope-Morley
Felipe Reyes
Frode Nordahl
Gabriel Cocenza
Guillaume Boutry
Hemanth Nakkina
James Page
Jorge Merlino
Liam Young
Marcus Boden
Martin Kalcok
Neil Campbell
Olivier Dufour-Cuvillier
Peter Matulis
Simon Dodsley
Tiago Pasqualini
Trent Lloyd

--
OpenStack Charms Team
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hanguangyu2 at gmail.com Sat Jun 3 14:09:19 2023
From: hanguangyu2 at gmail.com (Han Guangyu)
Date: Sat, 3 Jun 2023 14:09:19 +0000
Subject: [neutron][ovn] The implementation status of DVR for port forwarding in OpenStack.
Message-ID: 

Hello,

I previously learned that distributed SNAT has not been implemented in ML2/OVS yet, and traffic for floating IP port forwarding still goes through the network node. Therefore, I tried ML2/OVN. However, I found that floating IP port forwarding traffic still goes through the network node. Does anyone know whether this is normal, i.e. whether OVN has implemented DVR for port forwarding, or whether there is an issue with my configuration?

I deployed a master Ubuntu environment with Kolla and enabled DVR for OVN:

```
# in /etc/kolla/globals.yml
neutron_plugin_agent: "ovn"
neutron_ovn_distributed_fip: "yes"
```

I would like to know if OVN currently implements DVR for port forwarding. If not, what can I do to distribute the traffic load for port forwarding? In my use case, I need to create a large number of port forwarding rules, which puts a lot of pressure on the network node.

I would appreciate any help or advice.

Best regards,
Han Guangyu
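For reference, the kind of rule in question; a sketch only, with made-up addresses and the internal port ID left as a variable:

```
openstack floating ip port forwarding create \
    --internal-ip-address 10.0.0.12 \
    --port $INTERNAL_PORT_ID \
    --internal-protocol-port 22 \
    --external-protocol-port 2222 \
    --protocol tcp \
    203.0.113.15
```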
From manchandavishal143 at gmail.com Sun Jun 4 18:09:45 2023
From: manchandavishal143 at gmail.com (vishal manchanda)
Date: Sun, 4 Jun 2023 23:39:45 +0530
Subject: [horizon] Cancelling next weekly meeting on 7th June
Message-ID: 

Hello Team,

As agreed during the last weekly meeting, we are cancelling our next weekly meeting on 7th June. I am on vacation for the whole of next week with limited or no access to the internet. Please reach out to the horizon core team if you need any help.

Thanks & regards,
Vishal Manchanda
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tkajinam at redhat.com Mon Jun 5 01:33:43 2023
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Mon, 5 Jun 2023 10:33:43 +0900
Subject: [puppet] puppet module improvements
In-Reply-To: <86d026c5-ea18-62da-ed3b-f83c533fd772@cirrax.com>
References: <86d026c5-ea18-62da-ed3b-f83c533fd772@cirrax.com>
Message-ID: 

Hi Benedikt,

Thanks for bringing up these items. I generally agree with the points you listed and will leave a few more comments inline below.

Our puppet modules were initially developed some time ago (long before I joined the team) and we have many legacy implementations, so it would be nice if we could adopt modern design patterns. At the same time, however, we have limited resources in the project now, so addressing all of these topics in the short term might be difficult. As we maintain a number of modules in this project, I'd like to make sure we have a consistent design pattern across all of them, so we should probably determine the priorities of these work items and their milestones (especially in which cycle we implement each change).

Thank you,
Takashi

On Tue, May 30, 2023 at 9:37 PM Benedikt Trefzer <benedikt.trefzer at cirrax.com> wrote:

> Hi all
>
> I use the openstack puppet modules to deploy openstack. I'd like to
> suggest some improvements to the modules and would like to know what the
> community thinks about them:
>
> 1.) Use proper types for parameters
> Parameter validation is done with 'validate_legacy...' all over the
> code, instead of defining the class/resource parameter with a proper
> type. I cannot imagine any advantage to not using proper type
> definitions. Typed parameters would be more efficient, and code
> readability would increase.

I think we have to prioritize this, since puppetlabs-stdlib 9.0.0 deprecated the validate_legacy function and now emits large warnings.

The reason we (at least, I) hesitated to implement strict type validation is that it is likely to break existing manifests (in fact, even our own modules and manifests have experienced breakage caused by new validations added in dependent modules), and it can also heavily affect TripleO (though that project has been deprecated now).

We can start by replacing all validate_legacy calls with typed parameters and target this work during this cycle, then discuss how to improve the coverage of type validations further.
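As a rough before/after sketch of that conversion (the class and parameter names here are made up, not taken from any actual module):

```puppet
# Before: untyped parameter checked with the now-deprecated stdlib function.
class example::api_legacy (
  $port = 8080,
) {
  validate_legacy(Integer, 'validate_integer', $port)
}

# After: the type is declared on the parameter itself, and Puppet
# validates it at catalog compilation with no extra function call.
class example::api (
  Integer $port = 8080,
) {
}
```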
> 2.) params.pp
> This is the legacy way to define parameter defaults for different OSes.
> In the modern puppet world a module-specific hiera structure is used,
> which eliminates the need for a params.pp class (with inheritance and
> include).
> The usage of hiera improves readability and flexibility (every parameter
> can be overwritten on request, e.g. a change of package names etc.).
> This also eliminates the restriction that the modules can only be used
> with certain OSes (osfamily 'RedHat' or 'Debian').

+1. Though I'd like to understand which OSes users would like to use with our modules, so that we can ideally implement CI coverage.

> 3.) Eliminate "if OS=='bla' {" statements in code
> These statements make the code very inflexible. They cannot be overruled
> if necessary (e.g. if I use custom packages to install and do not need
> the code provided in the if statement).
> Instead, a parameter should be used, with a default provided in hiera.

+1. We can partially cover this as part of 2, but we might leave it as the last work item of the three, I guess.

> Since there is a lot of code to change, I do not expect this to be done
> in a single commit (per module) but in steps, probably over more than
> one release cycle. But defining this as best practice for openstack
> puppet modules and starting to use the above in new commits would bring
> the code forward.

I tend to disagree with starting these new patterns in new commits only. A partial migration causes difficulty in maintenance. I really want to see these patterns implemented consistently within a single module as well as across all modules, so that we are not confused when we are forced to implement global changes in all of our modules.

We can probably start with a few "independent" modules such as puppet-vswitch (or p-o-i), and once we agree on the pattern we can schedule when to implement these changes in all modules within a single release cycle.

> Finally: these are suggestions open for discussion. In no way do I want
> to criticize the current state of the puppet modules (which is quite
> good, if a bit legacy) or the people working on them. This is just
> feedback and suggestions from an operator using the modules on a daily
> basis.
>
> Regards
>
> Benedikt Trefzer

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tkajinam at redhat.com Mon Jun 5 03:04:31 2023
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Mon, 5 Jun 2023 12:04:31 +0900
Subject: [heat] Plan for upcoming June 2023 PTG
Message-ID: 

Hello,

I'm attending the upcoming OpenInfra Summit in Vancouver, so I would like to moderate some Heat sessions at the PTG. Unfortunately, we have a limited number of core reviewers attending this time.

We discussed most of the important items during the previous vPTG, so I'm planning to hold open discussion slots, mainly to hear feedback from users, operators, etc. If you are interested, please add your name and topics to the etherpad:

https://etherpad.opendev.org/p/vancouver-june2023-heat

I've reserved slots from 11:00 to 12:50 on Wednesday, but we can be flexible. Please let me know if anyone is interested but that slot does not work.

Thank you,
Takashi Kajinami
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ygk.kmr at gmail.com Mon Jun 5 07:52:28 2023
From: ygk.kmr at gmail.com (Gk Gk)
Date: Mon, 5 Jun 2023 13:22:28 +0530
Subject: Openstack Ansible
Message-ID: 

Hi All,

We have an OSA setup. When I check the filesystem usage of any container using the "df" command, it shows the LVM partitions of the underlying host as well, as shown below:

--
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/lxc1-lv_root    86G   60G   22G  74% /
none                       492K  4.0K  488K   1% /dev
/dev/mapper/lxc-openstack  493G  136G  334G  29% /var/log
tmpfs                      252G     0  252G   0% /dev/shm
tmpfs                       51G   88K   51G   1% /run
tmpfs                      5.0M     0  5.0M   0% /run/lock
tmpfs                      252G     0  252G   0% /sys/fs/cgroup
example:gfs-repo            86G   67G   16G  82% /var/www/repo
---

How can I exclude the host partitions so that df reports only the container's own usage?

Thanks
Kumar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From masahito.muroi at linecorp.com Mon Jun 5 08:02:51 2023
From: masahito.muroi at linecorp.com (Masahito Muroi)
Date: Mon, 05 Jun 2023 17:02:51 +0900
Subject: [oslo] HTTP base direct RPC oslo.messaging driver contribution
Message-ID: 

Hi oslo team,

We'd like to contribute an HTTP-based direct RPC driver to the oslo.messaging community. We developed the HTTP-based driver internally and have been using it in production with over 10K hypervisors now.
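For context on how such a driver plugs in: oslo.messaging maps the transport URL scheme to a driver class through the "oslo.messaging.drivers" entry point, so a new driver would register its own scheme the same way. A sketch follows; the scheme, host, and the class path for the new driver are made up.

```
# Existing drivers are registered in oslo.messaging's setup.cfg, e.g.:
#   oslo.messaging.drivers =
#       rabbit = oslo_messaging._drivers.impl_rabbit:RabbitDriver
#       http = some_package.some_module:HTTPDriver   # hypothetical

# A service would then opt in through its transport URL:
[DEFAULT]
transport_url = http://rpc-proxy.example.net:8080/
```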
I checked the IRC meeting log of the oslo team [1], but there has been no regular meeting in 2023. Is it okay to submit an oslo-spec [2] to propose the driver directly, or is there a better place to discuss the feature before submitting a spec?

1. https://meetings.opendev.org/#Oslo_Team_Meeting
2. https://opendev.org/openstack/oslo-specs

best regards,
Masahito
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hberaud at redhat.com Mon Jun 5 08:21:43 2023
From: hberaud at redhat.com (Herve Beraud)
Date: Mon, 5 Jun 2023 10:21:43 +0200
Subject: [oslo] HTTP base direct RPC oslo.messaging driver contribution
In-Reply-To: References: Message-ID: 

Hello Masahito,

Submitting an oslo-spec is a good starting point.

Best regards

On Mon, Jun 5, 2023 at 10:04, Masahito Muroi wrote:

> Hi oslo team,
>
> We'd like to contribute an HTTP-based direct RPC driver to the
> oslo.messaging community. We developed the HTTP-based driver internally
> and have been using it in production with over 10K hypervisors now.
>
> I checked the IRC meeting log of the oslo team [1], but there has been
> no regular meeting in 2023. Is it okay to submit an oslo-spec [2] to
> propose the driver directly, or is there a better place to discuss the
> feature before submitting a spec?
>
> 1. https://meetings.opendev.org/#Oslo_Team_Meeting
> 2. https://opendev.org/openstack/oslo-specs
>
> best regards,
> Masahito

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tobias.urdin at binero.com Mon Jun 5 08:28:41 2023
From: tobias.urdin at binero.com (Tobias Urdin)
Date: Mon, 5 Jun 2023 08:28:41 +0000
Subject: [puppet] puppet module improvements
In-Reply-To: References: <86d026c5-ea18-62da-ed3b-f83c533fd772@cirrax.com>
Message-ID: <7367570A-0497-4311-A2B9-63F4F5C2CE50@binero.com>

Hello,

Thanks for bringing this up! I agree with all your points.

It would be great if you wanted to help get these goals completed; we could work together on all of it.

As Takashi said, we have very few contributors these days and we appreciate any help we can get; he has been doing an amazing job on cleaning up and doing
Thank you, Takashi On Tue, May 30, 2023 at 9:37?PM Benedikt Trefzer > wrote: Hi all I use the openstack puppet modules to deploy openstack. I'd like to suggest some improvements to the modules and like to know what the community thinks about: 1.) use proper types for parameters Parameter validation is done with 'validate_legacy...' instead of defining the class/resource parameter with a proper type all over the code. I cannot imaging of any advantage not using proper type definitions. Instead using typed parameters would be more efficient an code readability would increase. I think we have to prioritize this since puppetlabs-stdlib 9.0.0 deprecated the validate_legacy function and emits large warnings. The reason we(at least, I) hesitated implementing the strict type validation is that this is likely to break existing manifests(in fact, even our modules or manifests have experienced breakage caused by new validations added in dependent modules) and also can heavily affect TripleO (though the project has been deprecated now). We can start with just replacing all validate_legacy functions by typed parameters first and target this work during this cycle then discuss how we improve coverage of type validations further. 2.) params.pp This is the legacy way to define parameter defaults for different OS's. In the modern puppet world a module specific hiera structure is used, which eliminates the need of params.pp class (with inheritance and include). The usage of hiera improves readability and flexibility (every parameter can be overwritten on request, eg. change of packages names etc.) This also eliminate the restriction that the modules can only be used by certain OS'es (osfamily 'RedHat' or 'Debian'). +1. Though I'd like to understand which OS users would like to use with our modules so that we can ideally implement CI coverage. 3.) Eliminate "if OS=='bla' {" statements in code These statements make the code very inflexible. It cannot be overruled if necessary (eg. if I use custom packages to install and do not need the code provided in the if statement). Instead a parameter should be used with a default provided in hiera. +1 . We can partially cover this as part of 2 but might leave this for the last work among 3, I guess. Since there is lot of code to change I do not expect this to be done in a single commit (per module) but in steps probably in more than one release cycle. But defining this as best practice for openstack puppet modules and start using above in new commits would bring the code forward. I tend to disagree that we start these new patterns in new commits. Having partial migration causes difficulty in maintenance. I really want to see these are implemented consistently in a single moules as well as among all modules, so that we are not confused when we are forced to implement global changes in all of our modules. We can probably start with a few "independent" modules such as puppet-vswitch(or p-o-i) and once we agree with the pattern then we can schedule when we start implementing these changes in all modules in a single release cycle. Finally: These are suggestions open for discussion. In no way I like to critic the current state of the puppet modules (which is quite good, but a bit legacy) or the people working on the modules. This is just a feedback/suggestion from an operator using the modules on a daily basis. Regards Benedikt Trefzer -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From arnaud.morin at gmail.com Mon Jun 5 08:41:31 2023
From: arnaud.morin at gmail.com (Arnaud Morin)
Date: Mon, 5 Jun 2023 08:41:31 +0000
Subject: [oslo][largescale-sig] HTTP base direct RPC oslo.messaging driver contribution
In-Reply-To: References: Message-ID: 

Hello,

That seems very interesting for Large Scale SIG topics as well! Do you already have the code available somewhere?

On 05.06.23 - 10:21, Herve Beraud wrote:
> Hello Masahito,
>
> Submitting an oslo-spec is a good starting point.
>
> Best regards
>
> On Mon, Jun 5, 2023 at 10:04, Masahito Muroi wrote:
> > Hi oslo team,
> >
> > We'd like to contribute an HTTP-based direct RPC driver to the
> > oslo.messaging community. We developed the HTTP-based driver
> > internally and have been using it in production with over 10K
> > hypervisors now.
> >
> > I checked the IRC meeting log of the oslo team [1], but there has
> > been no regular meeting in 2023. Is it okay to submit an oslo-spec
> > [2] to propose the driver directly, or is there a better place to
> > discuss the feature before submitting a spec?
> >
> > 1. https://meetings.opendev.org/#Oslo_Team_Meeting
> > 2. https://opendev.org/openstack/oslo-specs
> >
> > best regards,
> > Masahito

From masahito.muroi at linecorp.com Mon Jun 5 08:42:46 2023
From: masahito.muroi at linecorp.com (Masahito Muroi)
Date: Mon, 05 Jun 2023 17:42:46 +0900
Subject: Re: [oslo] HTTP base direct RPC oslo.messaging driver contribution
In-Reply-To: References: Message-ID: 

Hello Herve,

Thank you for the quick reply. We will prepare the spec and submit it.

By the way, does the oslo team have a PTG at the upcoming summit? We'd like to get quick feedback on the spec there if time allows, but it looks like the oslo team won't have a PTG this time.

best regards,
Masahito

-----Original Message-----
From: "Herve Beraud" <hberaud at redhat.com>
To: "Masahito Muroi";
Cc: <openstack-discuss at lists.openstack.org>;
Sent: 2023/06/05 (Mon) 17:21 (GMT+09:00)
Subject: Re: [oslo] HTTP base direct RPC oslo.messaging driver contribution

Hello Masahito,

Submitting an oslo-spec is a good starting point.

Best regards

On Mon, Jun 5, 2023 at 10:04, Masahito Muroi wrote:

Hi oslo team,

We'd like to contribute an HTTP-based direct RPC driver to the oslo.messaging community. We developed the HTTP-based driver internally and have been using it in production with over 10K hypervisors now.

I checked the IRC meeting log of the oslo team [1], but there has been no regular meeting in 2023. Is it okay to submit an oslo-spec [2] to propose the driver directly, or is there a better place to discuss the feature before submitting a spec?

1. https://meetings.opendev.org/#Oslo_Team_Meeting
2. https://opendev.org/openstack/oslo-specs

best regards,
Masahito

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zigo at debian.org Mon Jun 5 08:48:04 2023
From: zigo at debian.org (Thomas Goirand)
Date: Mon, 5 Jun 2023 10:48:04 +0200
Subject: [puppet] puppet module improvements
In-Reply-To: <7367570A-0497-4311-A2B9-63F4F5C2CE50@binero.com>
References: <86d026c5-ea18-62da-ed3b-f83c533fd772@cirrax.com> <7367570A-0497-4311-A2B9-63F4F5C2CE50@binero.com>
Message-ID: <868e287e-7622-d850-b58a-347cfcf5586a@debian.org>

Hi team!
If we were to decide to move forward with what Benedikt suggests, we (at Infomaniak) can hopefully invest a bit of time in it. I would very much like to have typed parameters too, for example.

Cheers,

Thomas Goirand (zigo)

On 6/5/23 10:28, Tobias Urdin wrote:
> Hello,
>
> Thanks for bringing this up! I agree with all your points.
>
> It would be great if you wanted to help get these goals completed; we
> could work together on all of it.
>
> As Takashi said, we have very few contributors these days and we
> appreciate any help we can get; he has been doing an amazing job on
> cleaning up and doing maintenance.
>
> I agree with Takashi's feedback below; starting with removing
> validate_legacy seems like a good option to get us started.
>
> We will be at the OpenInfra Summit next week and are going to meet up;
> if you are there, we would love to have you join us in a discussion and
> planning session on moving forward.
>
> Best regards
> Tobias
>
>> On 5 Jun 2023, at 03:33, Takashi Kajinami wrote:
>>
>> Hi Benedikt,
>>
>> Thanks for bringing up these items. I generally agree with the points
>> you listed and will leave a few more comments inline below.
>>
>> Our puppet modules were initially developed some time ago (long before
>> I joined the team) and we have many legacy implementations, so it
>> would be nice if we could adopt modern design patterns. At the same
>> time, however, we have limited resources in the project now, so
>> addressing all of these topics in the short term might be difficult.
>> As we maintain a number of modules in this project, I'd like to make
>> sure we have a consistent design pattern across all of them, so we
>> should probably determine the priorities of these work items and their
>> milestones (especially in which cycle we implement each change).
>>
>> Thank you,
>> Takashi
>>
>> On Tue, May 30, 2023 at 9:37 PM Benedikt Trefzer
>> <benedikt.trefzer at cirrax.com> wrote:
>>
>> Hi all
>>
>> I use the openstack puppet modules to deploy openstack. I'd like to
>> suggest some improvements to the modules and would like to know what
>> the community thinks about them:
>>
>> 1.) Use proper types for parameters
>> Parameter validation is done with 'validate_legacy...' all over the
>> code, instead of defining the class/resource parameter with a proper
>> type. I cannot imagine any advantage to not using proper type
>> definitions. Typed parameters would be more efficient, and code
>> readability would increase.
>>
>> I think we have to prioritize this, since puppetlabs-stdlib 9.0.0
>> deprecated the validate_legacy function and now emits large warnings.
>>
>> The reason we (at least, I) hesitated to implement strict type
>> validation is that it is likely to break existing manifests (in fact,
>> even our own modules and manifests have experienced breakage caused by
>> new validations added in dependent modules), and it can also heavily
>> affect TripleO (though that project has been deprecated now).
>>
>> We can start by replacing all validate_legacy calls with typed
>> parameters and target this work during this cycle, then discuss how to
>> improve the coverage of type validations further.
>>
>> 2.) params.pp
>> This is the legacy way to define parameter defaults for different
>> OSes. In the modern puppet world a module-specific hiera structure is
>> used, which eliminates the need for a params.pp class (with
>> inheritance and include).
>> The usage of hiera improves readability and flexibility (every
>> parameter can be overwritten on request, e.g. a
change of package names etc.).
>> This also eliminates the restriction that the modules can only be used
>> with certain OSes (osfamily 'RedHat' or 'Debian').
>>
>> +1. Though I'd like to understand which OSes users would like to use
>> with our modules, so that we can ideally implement CI coverage.
>>
>> 3.) Eliminate "if OS=='bla' {" statements in code
>> These statements make the code very inflexible. They cannot be
>> overruled if necessary (e.g. if I use custom packages to install and
>> do not need the code provided in the if statement).
>> Instead, a parameter should be used, with a default provided in hiera.
>>
>> +1. We can partially cover this as part of 2, but we might leave it as
>> the last work item of the three, I guess.
>>
>> Since there is a lot of code to change, I do not expect this to be
>> done in a single commit (per module) but in steps, probably over more
>> than one release cycle. But defining this as best practice for
>> openstack puppet modules and starting to use the above in new commits
>> would bring the code forward.
>>
>> I tend to disagree with starting these new patterns in new commits
>> only. A partial migration causes difficulty in maintenance. I really
>> want to see these patterns implemented consistently within a single
>> module as well as across all modules, so that we are not confused when
>> we are forced to implement global changes in all of our modules.
>>
>> We can probably start with a few "independent" modules such as
>> puppet-vswitch (or p-o-i), and once we agree on the pattern we can
>> schedule when to implement these changes in all modules within a
>> single release cycle.
>>
>> Finally: these are suggestions open for discussion. In no way do I
>> want to criticize the current state of the puppet modules (which is
>> quite good, if a bit legacy) or the people working on them. This is
>> just feedback and suggestions from an operator using the modules on a
>> daily basis.
>>
>> Regards
>>
>> Benedikt Trefzer

From noonedeadpunk at gmail.com Mon Jun 5 11:45:39 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Mon, 5 Jun 2023 13:45:39 +0200
Subject: Openstack Ansible
In-Reply-To: References: Message-ID: 

Hi,

I don't think this is possible. Disk usage of containers should be checked from the host rather than from inside the container. Unfortunately, with the current default backing store (which is `dir`), there's no neat way of doing that, so you'd need to use something like `du -sh /var/lib/lxc/<container_name>/rootfs/` to check how much disk space a container is consuming. The backing store is customizable, though: you can set the `lxc_container_backing_store` variable to "lvm" or "zfs" and check disk space consumption per LVM/ZFS volume.

On Mon, Jun 5, 2023 at 09:56, Gk Gk wrote:
>
> Hi All,
>
> We have an OSA setup. When I check the filesystem usage of any container
> using the "df" command, it shows the LVM partitions of the underlying
> host as well. How can I exclude the host partitions so that df reports
> only the container's own usage?
>
> Thanks
> Kumar
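To make the backing-store suggestion concrete, a sketch; the volume group name is illustrative, and the point is that with "lvm" or "zfs" each container gets its own logical volume or dataset, so per-container usage becomes visible from the host:

```
lvs lxc                                      # one LV per container in the 'lxc' VG
zfs list                                     # or one dataset per container with ZFS
df -h /var/lib/lxc/<container_name>/rootfs   # the mounted per-container volume
```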
> > Thanks > Kumar From nell at tigera.io Mon Jun 5 12:07:21 2023 From: nell at tigera.io (Nell Jerram) Date: Mon, 5 Jun 2023 13:07:21 +0100 Subject: [devstack] Recent change breaks cloning with GIT_DEPTH setting Message-ID: FYI, devstack commit b0bd5b92 doesn't work with a GIT_DEPTH=1 setting. That commit basically changed from git clone ... --branch to git clone ... git checkout So it's easy for the desired to be unavailable if GIT_DEPTH is limited. Best wishes - Nell -------------- next part -------------- An HTML attachment was scrubbed... URL: From ygk.kmr at gmail.com Mon Jun 5 12:12:44 2023 From: ygk.kmr at gmail.com (Gk Gk) Date: Mon, 5 Jun 2023 17:42:44 +0530 Subject: Openstack Ansible In-Reply-To: References: Message-ID: But when I check it from the host, it is still showing incorrect usage: --- du -sh /var/lib/lxc/example_repo_container-237946ab/rootfs/root/ 36K /var/lib/lxc/example_repo_container-237946ab/rootfs/root/ -- How to get its correct usage then ? On Mon, Jun 5, 2023 at 5:15?PM Dmitriy Rabotyagov wrote: > Hi, > > I don't think this is possible. > Disk usage of containers should be performed from the host rather then > container. Unfortunately, with current default backing store (which is > `dir`), there's no neat way of doing that. So you'd need to use smth > like `du -sh /var/lib/lxc//rootfs/` to check how much > diskspace it's consuming. Though, backing store is customizable. So > you can set `lxc_container_backing_store` variable to "lvm" or "zfs" > and check diskspace consumption per lvm/zfs volume. > > ??, 5 ???. 2023??. ? 09:56, Gk Gk : > > > > Hi All, > > > > We have an OSA setup. When I check the usage of filesystem of any > container using "df" command, it shows the lvm partitions of the underlying > host as well as shown below: > > -- > > Filesystem Size Used Avail Use% > Mounted on > > /dev/mapper/lxc1-lv_root 86G 60G 22G 74% > / > > none 492K 4.0K 488K 1% > /dev > > /dev/mapper/lxc-openstack 493G 136G 334G 29% > /var/log > > tmpfs 252G 0 252G 0% > /dev/shm > > tmpfs 51G 88K 51G 1% > /run > > tmpfs 5.0M 0 5.0M 0% > /run/lock > > tmpfs 252G 0 252G 0% > /sys/fs/cgroup > > example:gfs-repo 86G 67G 16G 82% /var/www/repo > > --- > > > > How to exclude the host partitions such that df only reports the > container usage only ? > > > > Thanks > > Kumar > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Mon Jun 5 12:19:33 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Mon, 5 Jun 2023 14:19:33 +0200 Subject: Openstack Ansible In-Reply-To: References: Message-ID: I think it's the correct usage. You're just checking the size of `/root` folder inside the container, rather than the container overall. As it should be `du -sh /var/lib/lxc/example_repo_container-237946ab/rootfs/` instead. ??, 5 ???. 2023??. ? 14:12, Gk Gk : > > But when I check it from the host, it is still showing incorrect usage: > > --- > du -sh /var/lib/lxc/example_repo_container-237946ab/rootfs/root/ > 36K /var/lib/lxc/example_repo_container-237946ab/rootfs/root/ > -- > > How to get its correct usage then ? > > On Mon, Jun 5, 2023 at 5:15?PM Dmitriy Rabotyagov wrote: >> >> Hi, >> >> I don't think this is possible. >> Disk usage of containers should be performed from the host rather then >> container. Unfortunately, with current default backing store (which is >> `dir`), there's no neat way of doing that. So you'd need to use smth >> like `du -sh /var/lib/lxc//rootfs/` to check how much >> diskspace it's consuming. 
>> disk space a container is consuming. The backing store is
>> customizable, though: you can set the `lxc_container_backing_store`
>> variable to "lvm" or "zfs" and check disk space consumption per
>> LVM/ZFS volume.

From mkopec at redhat.com Mon Jun 5 13:04:15 2023
From: mkopec at redhat.com (Martin Kopec)
Date: Mon, 5 Jun 2023 15:04:15 +0200
Subject: [devstack] Recent change breaks cloning with GIT_DEPTH setting
In-Reply-To: References: Message-ID: 

Right, thanks Nell for pointing that out!

The patch in question: https://review.opendev.org/c/openstack/devstack/+/882299

We changed that because --branch no longer accepted hashes (not exactly sure why), but we didn't realize that we might be passing the --depth argument there. We'll need to rework that code so that it accepts both branches and hashes, as well as the depth argument.

Thanks,

On Mon, 5 Jun 2023 at 14:15, Nell Jerram wrote:
> FYI, devstack commit b0bd5b92 doesn't work with a GIT_DEPTH=1 setting.
>
> That commit basically changed from
>
>     git clone ... --branch <branch>
>
> to
>
>     git clone ...
>     git checkout <ref>
>
> so it's easy for the desired <ref> to be unavailable in the clone if
> GIT_DEPTH is limited.
>
> Best wishes - Nell

--
Martin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdulko at redhat.com Mon Jun 5 14:27:15 2023
From: mdulko at redhat.com (Michał Dulko)
Date: Mon, 05 Jun 2023 16:27:15 +0200
Subject: [forum] [k8s] Kubernetes on OpenStack
Message-ID: 

Hi folks,

Matt and I are holding a Forum session in Vancouver to discuss Kubernetes on OpenStack. We'd like the users, admins and developers of the projects to connect and talk about challenges, pain points and best practices.

The agenda is being gathered in the etherpad [1]; please do not hesitate to add any topics that you would like to discuss during the meeting.

See you in Vancouver!

Thanks,
Michał

[1] https://etherpad.opendev.org/p/openinfra-2023-kubernetes-on-openstack

From antony at edgebricks.com Mon Jun 5 14:45:10 2023
From: antony at edgebricks.com (Antony P)
Date: Mon, 5 Jun 2023 20:15:10 +0530
Subject: Nested template
Message-ID: 

Dear team,

I have two templates, test1.yaml and test2.yaml. How can I run these two templates as a single template, and what steps do I have to follow?

--Warm Regards
Antony P
Jr. DevOps Engineer
antony at edgebricks.com
Mob No: +919498079898
www.edgebricks.com
-------------- next part --------------
An HTML attachment was scrubbed...
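One common way to do this is a parent template that runs each file as a nested stack; a minimal sketch, assuming test1.yaml and test2.yaml sit next to the parent template (the client uploads the referenced files along with it):

```yaml
heat_template_version: 2018-08-31

resources:
  stack_one:
    type: test1.yaml
  stack_two:
    type: test2.yaml
```

A single create, e.g. `openstack stack create -t parent.yaml combined`, then launches both as one stack.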
After installation of Octavia, several services such as Nova, Neutron
and Keystone stopped working, and I can not access the Horizon page
either.

So, I have checked the logs; for the moment, the Keystone service does
not work after the Octavia installation.

Unable to establish connection to
http://keystone-api.openstack.svc.cluster.local:5000/v3/auth/tokens :
('Connection aborted.', RemoteDisconnected('Remote end closed connection
without response'))

So, I tried to install the Octavia service several times, and if I
disable the health manager, it seems everything works properly, but as
you know, Octavia can not work properly if the health manager is not
installed.

So, can you please help me regarding this problem?

Thanks.

From jay at gr-oss.io Mon Jun 5 15:12:26 2023
From: jay at gr-oss.io (Jay Faulkner)
Date: Mon, 5 Jun 2023 08:12:26 -0700
Subject: [ironic] No Meeting June 12th
Message-ID:

Hi all,

Due to proximity to OpenInfra Summit/Forum/PTG events, the Ironic meeting
for June 12th will be cancelled. The next Ironic meeting will be Monday,
June 19th.

See many of you in Vancouver!

Thanks,
Jay Faulkner
Ironic PTL

From mtomaska at redhat.com Mon Jun 5 16:19:59 2023
From: mtomaska at redhat.com (Miro Tomaska)
Date: Mon, 5 Jun 2023 11:19:59 -0500
Subject: [neutron] Bug Deputy May 29 - June 4
Message-ID:

Hello All!

Here is the bug report from May 29th to June 4th. Undecided bugs need
further triage:

Critical: -

High:
- https://bugs.launchpad.net/neutron/+bug/2021457 The firewall group
  without any port in active status
  Assigned to ZhouHeng
- https://bugs.launchpad.net/neutron/+bug/2022059 Trunk can be deleted
  when the parent port is bound to a VM
  Assigned to Rodolfo Alonso

Medium:
- https://bugs.launchpad.net/neutron/+bug/2021968 [OVN-BGP-AGENT] Expose
  subnet CIDR information on NB DB
  Assigned to Lucas Alvares Gomez
- https://bugs.launchpad.net/neutron/+bug/2022070 "neutron-ovn-rally-task"
  timing out randomly
  Assigned to Rodolfo Alonso

Low:
- https://bugs.launchpad.net/neutron/+bug/2022043 APIs for resources
  which don't have project_id still require it in the API definition
  Assigned to Slawek Kaplonski

Wishlist: -

Undecided:
- https://bugs.launchpad.net/neutron/+bug/2022058 l3ha and distributed
  router extra attributes do not reflect OVN state
- https://bugs.launchpad.net/neutron/+bug/2022360 SecurityGroup deletion
  causes bulk_pull of SG rules by all the agents - requested more info

Thank you
Miro Tomaska

From kennelson11 at gmail.com Mon Jun 5 17:40:44 2023
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Mon, 5 Jun 2023 12:40:44 -0500
Subject: Kubernetes Conformance 1.24 + 1.25
In-Reply-To:
References:
Message-ID:

Hello!

Wanted to revive this thread since our conformance for Magnum is past
expiration now. Looks like the patch has landed, but I think there is
more work required in the driver to be able to pass the tests.

I am sure a lot of folks are busy with summit prep, I just wanted to make
sure this was still on everyone's radar.

-Kendall

On Fri, May 5, 2023 at 1:03 PM Guilherme Steinmüller <
gsteinmuller at vexxhost.com> wrote:

> Hey there!
>
> I am trying to run conformance against 1.25 and 1.26 now, but it looks
> like we are still with this ongoing?
> https://review.opendev.org/c/openstack/magnum/+/874092
>
> I'm still facing issues to create the cluster due to "PodSecurityPolicy"
> is unknown.
>
> Thank you,
> Guilherme Steinmuller
> ------------------------------
> From: Kendall Nelson
> Sent: 21 February 2023 17:38
> To: Guilherme Steinmüller
> Cc: Jake Yip ; OpenStack Discuss <
> openstack-discuss at lists.openstack.org>; dale at catalystcloud.nz <
> dale at catalystcloud.nz>
> Subject: Re: Kubernetes Conformance 1.24 + 1.25
>
> Circling back to this thread-
>
> Thanks Jake for getting this rolling!
> https://review.opendev.org/c/openstack/magnum/+/874092
>
> -Kendall
>
> On Wed, Feb 15, 2023 at 6:34 AM Guilherme Steinmüller <
> gsteinmuller at vexxhost.com> wrote:
>
> Hi Jake,
>
> Yeah, that could be it.
>
> On devstack magnum master, the kube-apiserver pod fails to start with
> the rancher 1.25 hyperkube image with:
>
> Feb 14 20:24:06 k8s-cluster-dgpwfkugdna5-master-0 conmon[119164]: E0214
> 20:24:06.615919 1 run.go:74] "command failed" err="admission-control
> plugin \"PodSecurityPolicy\" is unknown"
>
> Regards,
> Guilherme Steinmuller
>
> On Tue, Feb 14, 2023 at 10:03 AM Jake Yip wrote:
>
> Hi Guilherme Steinmuller,
>
> Is the issue with 1.25 the removal of PodSecurityPolicy? And that there
> are pieces of PSP in Magnum code. I've been trying to remove it.
>
> Regards,
> Jake
>
>
> On 14/2/2023 11:35 pm, Guilherme Steinmüller wrote:
> > Hi everyone!
> >
> > Dale, thanks for your comments here. I no longer have my devstack which
> > I tested v1.25. However, you pointed out something I haven't noticed:
> > for v1.25 I tried using the fedora coreos that is shipped with devstack,
> > which is f36.
> >
> > I will try to reproduce it again, but now using a newer fedora coreos.
> > If it fails, I will be happy to share my results here for us to figure
> > out and get certified for 1.25!
> >
> > Keep in tune!
> >
> > Thank you,
> > Guilherme Steinmuller
> >
> > On Tue, Feb 14, 2023 at 9:26 AM Jake Yip
> > wrote:
> >
> >     On 14/2/2023 6:53 am, Kendall Nelson wrote:
> >     > Hello All!
> >     >
> >     > First of all, I want to say a huge thanks to Guilherme
> >     Steinmuller for
> >     > all his help ensuring that OpenStack Magnum remains Kubernetes
> >     Certified
> >     > [1]! We are certified for v1.24!
> >     >
> >     Wow great work Guilherme Steinmuller!
> >
> >     - Jake

From zaitcev at redhat.com Mon Jun 5 19:02:11 2023
From: zaitcev at redhat.com (Pete Zaitcev)
Date: Mon, 5 Jun 2023 14:02:11 -0500
Subject: swift container retention
In-Reply-To:
References:
Message-ID: <20230605140211.6a23d7c8@lebethron.zaitcev.lan>

On Sun, 28 May 2023 14:13:32 +0200
Geert Geurts wrote:

> What I want is for all objects in a container to get deleted
> after X days without having to add a header once the object is uploaded.
> Does anyone have suggestions on how I could implement this requirement?

I'd write a proxy middleware that added the X-Delete-After automatically
to objects in certain containers, according to configuration.
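
A minimal, untested sketch of that idea (the filter name, option names
and the 7-day default here are all illustrative, not an existing Swift
filter):

    # autoexpire.py -- inject X-Delete-After on object PUTs to
    # configured containers, unless the client set expiry itself.
    class AutoExpireMiddleware(object):
        def __init__(self, app, conf):
            self.app = app
            # e.g. containers = logs,tmp-uploads
            self.containers = {
                c.strip() for c in conf.get('containers', '').split(',')
                if c.strip()}
            self.delete_after = conf.get('delete_after', '604800')

        def __call__(self, environ, start_response):
            if environ.get('REQUEST_METHOD') == 'PUT':
                # object paths look like /v1/AUTH_acct/container/object
                parts = environ.get('PATH_INFO', '').split('/', 4)
                if (len(parts) == 5 and parts[3] in self.containers
                        and 'HTTP_X_DELETE_AFTER' not in environ
                        and 'HTTP_X_DELETE_AT' not in environ):
                    environ['HTTP_X_DELETE_AFTER'] = self.delete_after
            return self.app(environ, start_response)

    def filter_factory(global_conf, **local_conf):
        conf = global_conf.copy()
        conf.update(local_conf)

        def auto_expire_filter(app):
            return AutoExpireMiddleware(app, conf)
        return auto_expire_filter

It would then get wired into the proxy pipeline in proxy-server.conf
(again, names illustrative), and note the swift object-expirer has to be
running for the expiry headers to actually take effect:

    [filter:autoexpire]
    use = egg:my_middleware#autoexpire
    containers = logs,tmp-uploads
    delete_after = 604800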
--
Pete

From fkr at osb-alliance.com Mon Jun 5 19:10:35 2023
From: fkr at osb-alliance.com (Felix Kronlage-Dammers)
Date: Mon, 05 Jun 2023 21:10:35 +0200
Subject: [publiccloud-sig] Reminder - next meeting June 7th - 0700 UTC
Message-ID: <4EF93811-BCA1-4725-B5F3-99085846016D@osb-alliance.com>

Hi everyone,

on Wednesday the next meeting of the public cloud sig is going to happen.
We meet on IRC in #openstack-operators at 0700 UTC.

A preliminary agenda can be found in the pad:
https://etherpad.opendev.org/p/publiccloud-sig-meeting

See also here for all other details:
https://wiki.openstack.org/wiki/PublicCloudSIG

read you on Wednesday!
felix
--
Felix Kronlage-Dammers
Product Owner IaaS & Operations Sovereign Cloud Stack
Sovereign Cloud Stack - standardized, built and operated by many
A project of the Open Source Business Alliance - Bundesverband für
digitale Souveränität e.V.
Tel.: +49-30-206539-205 | Matrix: @fkronlage:matrix.org | fkr at osb-alliance.com

From fsbiz at yahoo.com Mon Jun 5 21:18:19 2023
From: fsbiz at yahoo.com (fsbiz at yahoo.com)
Date: Mon, 5 Jun 2023 21:18:19 +0000 (UTC)
Subject: [ceph-users] Quincy release - radosGW integration with Keystone
References: <1307423519.61679.1685999899533.ref@mail.yahoo.com>
Message-ID: <1307423519.61679.1685999899533@mail.yahoo.com>

Hi folks,

My ceph cluster with Quincy and Rocky9 is up and running.
But I'm having issues with radosGW authenticating with keystone.
Was wondering if I've missed anything in the configuration.
From the debug logs below, it appears that radosgw is still trying to
authenticate with Swift instead of Keystone.
Any pointers will be appreciated.

thanks,
Fred

Here is my configuration.

# ceph config dump | grep rgw
client  advanced  debug_rgw                      20/20
client  advanced  rgw_keystone_accepted_roles    admin,user   *
client  advanced  rgw_keystone_admin_domain      Default      *
client  advanced  rgw_keystone_admin_password                 *
client  advanced  rgw_keystone_admin_project     service      *
client  advanced  rgw_keystone_admin_user        ceph-ks-svc  *
client  advanced  rgw_keystone_api_version       3
client  advanced  rgw_keystone_implicit_tenants  false        *
client  advanced  rgw_keystone_token_cache_size  0
client  basic     rgw_keystone_url                            *
client  advanced  rgw_s3_auth_use_keystone       true
client  advanced  rgw_swift_account_in_url       true
client  basic     rgw_thread_pool_size           512
client.rgw.s_rgw.dev-ipp1-u1-control01.ojmddc  basic  rgw_frontends  beast port=7480  *
client.rgw.s_rgw.dev-ipp1-u1-control02.adnjrx  basic  rgw_frontends  beast port=7480

Here's the debug log. If I interpret it correctly, it is trying to do a
swift authentication and failing. Am I missing any configuration for
Keystone based authentication ?
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: beast: 0x7fddeb8e7710: 10.117.53.10 - - [03/Jun/2023:18:47:03.060 +0000] "GET /swift/v1/AUTH_c668ed224e434c88a9e0fce125056112?format=json HTTP/1.1" 401 119 - "openstacksdk/0.52.0 keystoneauth1/4.0.0 python-requests/2.22.0 CPython/3.8.10" - latency=0.000000000s
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: HTTP_ACCEPT=*/*
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: HTTP_ACCEPT_ENCODING=gzip, deflate
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: HTTP_CONNECTION=close
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: HTTP_HOST=dev-ipp1-u1-object-store
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: HTTP_USER_AGENT=openstacksdk/0.52.0 keystoneauth1/4.0.0 python-requests/2.22.0 CPython/3.8.10
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: HTTP_VERSION=1.1
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: HTTP_X_AUTH_TOKEN=gAAAAABke4qn779UQ_XMz0EDL3P3TgjBQsGG6p-MNhviJxLZTuMTnTDmpT5Yfi9UpgO_T3LOOsPjQAw6zoMUIaC22wPeryp5x-UumB3XwXOWp-qSXLbuN3b9oj_Qg5kCZWA0waWNRHzQ1mwtlEmmpTgvTXbU5V1ym6hEBOn6Q3RWhn34Hj3cF9o
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: HTTP_X_FORWARDED_FOR=10.117.148.3
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: QUERY_STRING=format=json
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: REMOTE_ADDR=10.117.53.10
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: REQUEST_METHOD=GET
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: REQUEST_URI=/swift/v1/AUTH_c668ed224e434c88a9e0fce125056112?format=json
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: SCRIPT_URI=/swift/v1/AUTH_c668ed224e434c88a9e0fce125056112
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: SERVER_PORT=7480
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: ====== starting new request req=0x7fddeb8e7710 =====
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s initializing for trans_id = tx000003991cfc5c1791f95-00647b8aa7-30c56-default
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s rgw api priority: s3=8 s3website=7
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s host=dev-ipp1-u1-object-store
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s subdomain= domain= in_hosted_domain=0 in_hosted_domain_s3website=0
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s final domain/bucket subdomain= domain= in_hosted_domain=0 in_hosted_domain_s3website=0 s->info.domain= s->info.request_uri=/swift/v1/AUTH_c668ed224e434c88a9e0fce125056112
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s name: format val: json
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s ver=v1 first= req=
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s handler=29RGWHandler_REST_Service_SWIFT
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s getting op 0
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s get_system_obj_state: rctx=0x7fddeb8e6790 obj=default.rgw.log:script.prerequest. state=0x55f743b97720 s->prefetch_data=0
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s cache get: name=default.rgw.log++script.prerequest. : hit (negative entry)
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets scheduling with throttler client=3 cost=1
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets op=29RGWListBuckets_ObjStore_SWIFT
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets verifying requester
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets rgw::auth::swift::DefaultStrategy: trying rgw::auth::swift::TempURLEngine
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets rgw::auth::swift::TempURLEngine denied with reason=-13
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets rgw::auth::swift::DefaultStrategy: trying rgw::auth::swift::SignedTokenEngine
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets rgw::auth::swift::SignedTokenEngine denied with reason=-1
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets rgw::auth::swift::DefaultStrategy: trying rgw::auth::swift::SwiftAnonymousEngine
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets rgw::auth::swift::SwiftAnonymousEngine denied with reason=-1
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets Failed the auth strategy, reason=-1
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: failed to authorize request
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s op->ERRORHANDLER: err_no=-1 new_err_no=-1
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s get_system_obj_state: rctx=0x7fddeb8e6790 obj=default.rgw.log:script.postrequest. state=0x55f743b97960 s->prefetch_data=0
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s cache get: name=default.rgw.log++script.postrequest. : hit (negative entry)
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets op status=0
Jun 03 11:47:03 dev-ipp1-u1-control02 radosgw[2802861]: req 4148325180046385045 0.000000000s swift:list_buckets http status=401

From knikolla at bu.edu Mon Jun 5 23:14:31 2023
From: knikolla at bu.edu (Nikolla, Kristi)
Date: Mon, 5 Jun 2023 23:14:31 +0000
Subject: [tc] Technical Committee next weekly meeting on June 6, 2023
Message-ID:

Hi all,

This is a reminder that the next weekly Technical Committee meeting is
to be held on Tuesday, Jun 6, 2023 at 1800 UTC on Zoom.

Meeting link and agenda can be found at
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

Please find below the current agenda for the meeting.
* Roll call
* Follow up on past action items
** knikolla Book timeslots for PTG in Vancouver
** knikolla Prepare PTG agenda for Vancouver
** knikolla To fix link redirect to release from docs
** knikolla Finish TC Tracker 2023.2
* Open Infra Summit and PTG in Vancouver, next week
* Gate health check
* Broken docs due to inconsistent release naming
* Open Discussion and Reviews
** https://review.opendev.org/q/projects:openstack/governance+is:open

Thank you,
Kristi Nikolla

From benedikt.trefzer at cirrax.com Tue Jun 6 06:41:59 2023
From: benedikt.trefzer at cirrax.com (Benedikt Trefzer)
Date: Tue, 6 Jun 2023 08:41:59 +0200
Subject: [puppet] puppet module improvements
In-Reply-To: <7367570A-0497-4311-A2B9-63F4F5C2CE50@binero.com>
References: <86d026c5-ea18-62da-ed3b-f83c533fd772@cirrax.com>
 <7367570A-0497-4311-A2B9-63F4F5C2CE50@binero.com>
Message-ID:

Hi

> As Takashi said we're having very few contributors these days and we
> appreciate any help we can get, he has been doing an amazing job on
> cleaning up and doing maintenance.

Ack and fully agree.

> We will be on the OpenInfra Summit next week and are going to meet up,
> if you are there we would love to have you join us in a
> discussion/planning on moving forward.

I will not be there, but I am willing to contribute. It would be nice if
you could define a hiera.yaml file to use for all your modules at the
summit. I think this is work that can better be done in a meetup than on
reviews.

>> 1.) use proper types for parameters
>> Parameter validation is done with 'validate_legacy...' instead of
>> defining the class/resource parameter with a proper type all over
>> the code.
>> I cannot imagine any advantage of not using proper type definitions.
>> Instead, using typed parameters would be more efficient and code
>> readability would increase.
>>
>> I think we have to prioritize this since puppetlabs-stdlib 9.0.0
>> deprecated the validate_legacy function
>> and emits large warnings.
>>
>> The reason we (at least, I) hesitated implementing the strict type
>> validation is that this is likely to break
>> existing manifests (in fact, even our modules or manifests have
>> experienced breakage caused by new
>> validations added in dependent modules) and also can heavily affect
>> TripleO (though the project has
>> been deprecated now).
>>
>> We can start with just replacing all validate_legacy functions by
>> typed parameters first and target this
>> work during this cycle, then discuss how we improve coverage of type
>> validations further.

Agree, and I'd like to add: "And ensure for all modules that new
parameters introduced are properly typed."

>> 2.) params.pp
>> This is the legacy way to define parameter defaults for different
>> OS's.
>> In the modern puppet world a module-specific hiera structure is used,
>> which eliminates the need of the params.pp class (with inheritance and
>> include).
>> The usage of hiera improves readability and flexibility (every
>> parameter
>> can be overwritten on request, eg. change of package names etc.)
>> This also eliminates the restriction that the modules can only be
>> used by
>> certain OS'es (osfamily 'RedHat' or 'Debian').
>>
>> +1. Though I'd like to understand which OS users would like to use
>> with our modules so that we can
>> ideally implement CI coverage.

Ack. We use Debian, but with self-brewed openstack packages. Which means
we maintain a patchset for eliminating if "OS=='Debian'" statements (in
code and in params.pp) for many of the puppet modules.
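(To illustrate what 1.) and 2.) look like together in practice; the
module name, parameter names and data values here are made up, not taken
from any existing module:

    # manifests/init.pp -- typed parameter, no params.pp inheritance;
    # the default is resolved via automatic lookup from module hiera data
    class mymodule (
      String[1] $api_package,
      Boolean   $manage_service = true,
    ) {
      package { $api_package:
        ensure => installed,
      }
    }

    # hiera.yaml (at the module root)
    ---
    version: 5
    defaults:
      datadir: data
      data_hash: yaml_data
    hierarchy:
      - name: 'OS family'
        path: 'os/%{facts.os.family}.yaml'
      - name: 'common'
        path: 'common.yaml'

    # data/os/Debian.yaml
    ---
    mymodule::api_package: 'openstack-api'

With a layout like this, overriding the package name for a self-brewed
build is a one-line hiera override instead of a params.pp patch.)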
Eliminate "if OS=='bla' {" statements in code >> These statements make the code very inflexible. It cannot be >> overruled >> if necessary (eg. if I use custom packages to install and do not need >> the code provided in the if statement). >> Instead a parameter should be used with a default provided in hiera. >> >> >> +1 . We can partially cover this as part of 2 but might leave this for >> the last work among 3, I guess. >> Although this is the most hurting part for us, I agree that for proper solutions point 2 is needed. >> >> >> Since there is lot of code to change I do not expect this to be >> done in >> a single commit (per module) but in steps probably in more than one >> release cycle. But defining this as best practice for openstack >> puppet >> modules and start using above in new commits would bring the code >> forward. >> >> >> I tend to disagree that we start these new patterns in new commits. >> Having partial migration causes >> difficulty in maintenance. I really want to see these are implemented >> consistently in a single moules >> as well as among all modules, so that we are not confused when we are >> forced to implement global >> changes in all of our modules. >> >> We can probably start with a few "independent" modules such as >> puppet-vswitch(or p-o-i) and >> once we agree with the pattern then we can schedule when we start >> implementing these changes >> in all modules in a single release cycle. >> Probably although a good thing to discuss at Vancouver. Would be nice if somebody could communicate after Vancouver what the outcome was for the further development. Thanks Benedikt From hberaud at redhat.com Tue Jun 6 07:43:03 2023 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 6 Jun 2023 09:43:03 +0200 Subject: [oslo] HTTP base direct RPC oslo.messaging driver contribution In-Reply-To: References: Message-ID: Hello, Indeed, Oslo doesn't have PTG sessions. Best regards Le lun. 5 juin 2023 ? 10:42, Masahito Muroi a ?crit : > Hello Herve, > > Thank you for the quick replying. Let us prepare the spec and submit it. > > btw, does olso team have PTG in the up-comming summit? We'd like to get a > quick feedback of the spec if time is allowed in the PTG. But it looks like > oslo team won't have PTG there. > > best regards, > Masahito > > -----Original Message----- > *From:* "Herve Beraud" > *To:* "????"; > *Cc:* ; > *Sent:* 2023/06/05(?) 17:21 (GMT+09:00) > *Subject:* Re: [oslo] HTTP base direct RPC oslo.messaging driver > contribution > > Hello Masahito, > > Submission to oslo-spec is a good starting point. > > Best regards > > Le lun. 5 juin 2023 ? 10:04, ???? a ?crit : > > Hi oslo team, > > We'd like to contribute HTTP base direct RPC driver to the oslo.messaging > community. We have developed the HTTP base driver internally. We have been > using the driver in the production with over 10K hypervisors now. > > I checked the IRC meeting log of the oslo team[1], but there is no regluar > meeting in 2023. Is it okay to submit oslo-spec[2] to propose the driver > directly, or is there another good place to discuss the feature before > submitting a spec? > > 1. https://meetings.opendev.org/#Oslo_Team_Meeting > 2. https://opendev.org/openstack/oslo-specs > > best regards, > Masahito > > > > > -- > Herv? Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > > -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ -------------- next part -------------- An HTML attachment was scrubbed... 
From gthiemonge at redhat.com Tue Jun 6 07:52:20 2023
From: gthiemonge at redhat.com (Gregory Thiemonge)
Date: Tue, 6 Jun 2023 09:52:20 +0200
Subject: Octavia does not work properly.
In-Reply-To:
References:
Message-ID:

Hi Gopal,

What deployment method are you using?

My first guess is that the HTTP server/proxy that handles the API
endpoints is broken, can you check the logs there?

You mention that disabling the octavia health manager fixes the issue.
Do you mean that the other projects (like Nova) work fine when the
health manager is disabled but don't work when it is enabled?

Greg

On Mon, Jun 5, 2023 at 4:57 PM GOPAL RAJORA wrote:

> Hi everyone.
> Hope you are well.
> This is Gopal.
>
> I am contacting you as I need your help.
>
> So, recently I installed the Octavia service on the Openstack server.
> After installation of Octavia, several services such as Nova, Neutron
> and Keystone stopped working, and I can not access the Horizon page
> either.
>
> So, I have checked the logs; for the moment, the Keystone service does
> not work after the Octavia installation.
>
> Unable to establish connection to
> http://keystone-api.openstack.svc.cluster.local:5000/v3/auth/tokens :
> ('Connection aborted.', RemoteDisconnected('Remote end closed connection
> without response'))
>
> So, I tried to install the Octavia service several times, and if I
> disable the health manager, it seems everything works properly, but as
> you know, Octavia can not work properly if the health manager is not
> installed.
>
> So, can you please help me regarding this problem?
>
> Thanks.

From ralonsoh at redhat.com Tue Jun 6 09:20:40 2023
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Tue, 6 Jun 2023 11:20:40 +0200
Subject: [neutron][ovn] The implementation status of DVR for port forwarding in OpenStack.
In-Reply-To:
References:
Message-ID:

Hello Han:

That is a very good question. I would suggest you ask it in the OVN
mailing list [1]. They will provide you with a better answer than in
this list.

Regards.

[1] https://docs.ovn.org/en/latest/internals/ovs-discuss at openvswitch.org

On Sat, Jun 3, 2023 at 4:10 PM Han Guangyu wrote:

> Hello
>
> I previously learned that distributed SNAT has not been implemented in
> ML2/OVS yet, and traffic for floating IP port forwarding still goes
> through the network node.
>
> Therefore, I tried using ML2/OVN. However, I found that the traffic
> for floating IP port forwarding still goes through the network node.
> Does anyone know if this is normal or if OVN has implemented DVR for
> port forwarding? Or is it because there is an issue with my
> configuration?
>
> I deployed a master Ubuntu env with kolla, and I set DVR for OVN:
> ```
> # in /etc/kolla/globals.yml
> neutron_plugin_agent: "ovn"
> neutron_ovn_distributed_fip: "yes"
> ```
>
> I would like to know if OVN currently implements DVR for port
> forwarding. If not, what can I do to distribute the traffic load for
> port forwarding of OpenStack? In my use case, I need to create a large
> number of port forwarding rules, which puts a lot of pressure on the
> network node.
>
> I will appreciate any help or advice.
>
> Best regards,
> Han Guangyu
From alsotoes at gmail.com Tue Jun 6 09:24:55 2023
From: alsotoes at gmail.com (Alvaro Soto)
Date: Tue, 6 Jun 2023 03:24:55 -0600
Subject: Nested template
In-Reply-To:
References:
Message-ID:

Hey Antony,

You mean, while using Ansible? If so, take a look at this.

https://docs.ansible.com/ansible/latest/collections/ansible/builtin/include_tasks_module.html

Cheers!
---
Alvaro Soto.

Note: My work hours may not be your work hours. Please do not feel the
need to respond during a time that is not convenient for you.
----------------------------------------------------------
Great people talk about ideas,
ordinary people talk about things,
small people talk... about other people.

On Mon, Jun 5, 2023, 8:49 AM Antony P wrote:

> Dear team,
> I have 2 templates:
> test1.yaml
> test2.yaml
>
> How do I run these 2 templates as a single template? What steps do I
> have to follow?
>
> --Warm Regards
> Antony P
> Jr. DevOps Engineer
> antony at edgebricks.com
> Mob No: +919498079898
> www.edgebricks.com

From antony at edgebricks.com Tue Jun 6 09:33:17 2023
From: antony at edgebricks.com (antony)
Date: Tue, 06 Jun 2023 15:03:17 +0530
Subject: Nested template
In-Reply-To:
Message-ID: <647efd60.a70a0220.58637.b605@mx.google.com>

Hi Alvaro,

I am talking about the OpenStack HOT template.

Sent from my Galaxy

-------- Original message --------
From: Alvaro Soto
Date: 06/06/2023 14:55 (GMT+05:30)
To: Antony P
Cc: openstack-discuss
Subject: Re: Nested template

Hey Antony,
You mean, while using Ansible? If so, take a look at this.
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/include_tasks_module.html
Cheers!
---
Alvaro Soto.
Note: My work hours may not be your work hours. Please do not feel the
need to respond during a time that is not convenient for you.
----------------------------------------------------------
Great people talk about ideas,
ordinary people talk about things,
small people talk... about other people.

On Mon, Jun 5, 2023, 8:49 AM Antony P wrote:

Dear team,
I have 2 templates:
test1.yaml
test2.yaml
How do I run these 2 templates as a single template? What steps do I
have to follow?
--Warm Regards
Antony P
Jr. DevOps Engineer
antony at edgebricks.com
Mob No: +919498079898
www.edgebricks.com

From alsotoes at gmail.com Tue Jun 6 09:35:33 2023
From: alsotoes at gmail.com (Alvaro Soto)
Date: Tue, 6 Jun 2023 03:35:33 -0600
Subject: Community attendance - OpenInfra Summit Vancouver
In-Reply-To:
References:
Message-ID:

Hello Roberto,

I'm not from Brazil (I'm based in Mexico), but as part of the LATAM
community I'd love to be part of local projects :)

I'll be at OIS; it'll be nice to talk about community challenges for our
local community.

Cheers.
---
Alvaro Soto.

Note: My work hours may not be your work hours. Please do not feel the
need to respond during a time that is not convenient for you.
----------------------------------------------------------
Great people talk about ideas,
ordinary people talk about things,
small people talk... about other people.

On Thu, Jun 1, 2023, 6:47 AM Iury Gregory wrote:

> Hi Roberto,
>
> I know some Brazilians that will be attending the OIS Vancouver,
> including me.
>
> On Thu, 1 Jun 2023 at 09:00, Roberto Bartzen Acosta <
> roberto.acosta at luizalabs.com> wrote:
>
>> Hello,
>>
>> Will anyone from the Brazilian community attend the OpenInfra in
>> Vancouver?
>>
>> I would like to meet other members from Brazil and discuss the
>> challenges and possibilities of using OpenStack in Brazilian
>> infrastructures. You can ping me on IRC too (racosta).
>>
>> Kind regards,
>> Roberto
>>
>> "This message is directed only to the addresses listed in the initial
>> header. If you are not listed among those addresses, please disregard
>> the contents of this message entirely; copying, forwarding and/or
>> carrying out the actions mentioned in it is immediately void and
>> prohibited."
>>
>> "Although Magazine Luiza takes all reasonable precautions to ensure
>> that no virus is present in this e-mail, the company cannot accept
>> responsibility for any loss or damage caused by this e-mail or its
>> attachments."
>
> --
> Att[]'s
>
> Iury Gregory Melo Ferreira
> MSc in Computer Science at UFCG
> Ironic PTL
> Senior Software Engineer at Red Hat Brazil
> Social: https://www.linkedin.com/in/iurygregory
> E-mail: iurygregory at gmail.com

From tkajinam at redhat.com Tue Jun 6 09:50:18 2023
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Tue, 6 Jun 2023 18:50:18 +0900
Subject: [oslo] HTTP base direct RPC oslo.messaging driver contribution
In-Reply-To:
References:
Message-ID:

Hello,

This is very interesting and I agree having the spec would be a good way
to move this forward.

We have not requested oslo sessions in the upcoming PTG, but Stephen and
I are attending it so we will be available for the discussion. Because
some other cores such as Herve won't be there, we'd need to continue
further discussions after the PTG in spec review, but if that early
in-person discussion sounds helpful for you then I'll reserve a table.

Thank you,
Takashi

On Tue, Jun 6, 2023 at 4:48 PM Herve Beraud wrote:

> Hello,
>
> Indeed, Oslo doesn't have PTG sessions.
>
> Best regards
>
> On Mon, 5 Jun 2023 at 10:42, Masahito Muroi wrote:
>
>> Hello Herve,
>>
>> Thank you for the quick reply. Let us prepare the spec and submit it.
>>
>> btw, does the oslo team have a PTG in the upcoming summit? We'd like to
>> get quick feedback on the spec if time is allowed in the PTG. But it
>> looks like the oslo team won't have a PTG there.
>>
>> best regards,
>> Masahito
>>
>> -----Original Message-----
>> From: "Herve Beraud"
>> To: "Masahito Muroi";
>> Cc: ;
>> Sent: 2023/06/05 (Mon) 17:21 (GMT+09:00)
>> Subject: Re: [oslo] HTTP base direct RPC oslo.messaging driver
>> contribution
>>
>> Hello Masahito,
>>
>> Submission to oslo-spec is a good starting point.
>>
>> Best regards
>>
>> On Mon, 5 Jun 2023 at 10:04, Masahito Muroi wrote:
>>
>> Hi oslo team,
>>
>> We'd like to contribute an HTTP based direct RPC driver to the
>> oslo.messaging community. We have developed the HTTP based driver
>> internally. We have been using the driver in production with over 10K
>> hypervisors now.
>>
>> I checked the IRC meeting log of the oslo team[1], but there is no
>> regular meeting in 2023. Is it okay to submit an oslo-spec[2] to
>> propose the driver directly, or is there another good place to discuss
>> the feature before submitting a spec?
>>
>> 1. https://meetings.opendev.org/#Oslo_Team_Meeting
>> 2. https://opendev.org/openstack/oslo-specs
>>
>> best regards,
>> Masahito
>>
>> --
>> Hervé Beraud
>> Senior Software Engineer at Red Hat
>> irc: hberaud
>> https://github.com/4383/
>
> --
> Hervé Beraud
> Senior Software Engineer at Red Hat
> irc: hberaud
> https://github.com/4383/

From mkopec at redhat.com Tue Jun 6 10:07:25 2023
From: mkopec at redhat.com (Martin Kopec)
Date: Tue, 6 Jun 2023 12:07:25 +0200
Subject: [devstack] Recent change breaks cloning with GIT_DEPTH setting
In-Reply-To:
References:
Message-ID:

I reported that to Launchpad to have it tracked:
https://bugs.launchpad.net/devstack/+bug/2023020

On Mon, 5 Jun 2023 at 15:04, Martin Kopec wrote:

> Right, thanks Nell for pointing that out, you're right!
>
> The patch in question:
> https://review.opendev.org/c/openstack/devstack/+/882299
>
> We changed that because --branch didn't accept hashes anymore (not
> exactly sure why) but didn't realize that we might be passing the
> --depth argument there.
> We'll need to rework that code so that it accepts both branches and
> hashes, as well as the depth argument.
>
> Thanks,
>
> On Mon, 5 Jun 2023 at 14:15, Nell Jerram wrote:
>
>> FYI, devstack commit b0bd5b92 doesn't work with a GIT_DEPTH=1 setting.
>>
>> That commit basically changed from
>>
>> git clone ... --branch <branch>
>>
>> to
>>
>> git clone ...
>> git checkout <commit>
>>
>> So it's easy for the desired <commit> to be unavailable if GIT_DEPTH is
>> limited.
>>
>> Best wishes - Nell
>
> --
> Martin

--
Martin

From noonedeadpunk at gmail.com Tue Jun 6 10:40:40 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Tue, 6 Jun 2023 12:40:40 +0200
Subject: Nested template
In-Reply-To: <647efd60.a70a0220.58637.b605@mx.google.com>
References: <647efd60.a70a0220.58637.b605@mx.google.com>
Message-ID:

You can include other templates in heat using the resource_def property.
A good example of that would be OS::Heat::ResourceGroup:
https://github.com/syseleven/heat-examples/blob/master/server-groups/group.yaml#L11-L20

On Tue, 6 Jun 2023 at 11:35, antony wrote:

> Hi Alvaro,
>
> I am talking about the OpenStack HOT template.
>
> Sent from my Galaxy
>
> -------- Original message --------
> From: Alvaro Soto
> Date: 06/06/2023 14:55 (GMT+05:30)
> To: Antony P
> Cc: openstack-discuss
> Subject: Re: Nested template
>
> Hey Antony,
> You mean, while using Ansible? If so, take a look at this.
>
> https://docs.ansible.com/ansible/latest/collections/ansible/builtin/include_tasks_module.html
>
> Cheers!
> ---
> Alvaro Soto.
>
> On Mon, Jun 5, 2023, 8:49 AM Antony P wrote:
>
>> Dear team,
>> I have 2 templates:
>> test1.yaml
>> test2.yaml
>>
>> How do I run these 2 templates as a single template? What steps do I
>> have to follow?
>>
>> --Warm Regards
>> Antony P
>> Jr. DevOps Engineer
>> antony at edgebricks.com
>> Mob No: +919498079898
>> www.edgebricks.com

From smooney at redhat.com Tue Jun 6 10:41:37 2023
From: smooney at redhat.com (smooney at redhat.com)
Date: Tue, 06 Jun 2023 11:41:37 +0100
Subject: [heat] Nested template
In-Reply-To: <647efd60.a70a0220.58637.b605@mx.google.com>
References: <647efd60.a70a0220.58637.b605@mx.google.com>
Message-ID:

just adding the [heat] tag to the subject line.
I'm not sure if HOT supports arbitrary nesting of templates, but perhaps
it does. Is this what you are looking for:
https://github.com/syseleven/heat-examples/blob/master/substacks/masterstack.yaml
embedding multiple substacks into one larger stack?

On Tue, 2023-06-06 at 15:03 +0530, antony wrote:
> Hi Alvaro,
>
> I am talking about the OpenStack HOT template.
>
> Sent from my Galaxy
>
> -------- Original message --------
> From: Alvaro Soto
> Date: 06/06/2023 14:55 (GMT+05:30)
> To: Antony P
> Cc: openstack-discuss
> Subject: Re: Nested template
>
> Hey Antony,
> You mean, while using Ansible? If so, take a look at this.
> https://docs.ansible.com/ansible/latest/collections/ansible/builtin/include_tasks_module.html
> Cheers!
> ---
> Alvaro Soto.
> Note: My work hours may not be your work hours. Please do not feel the
> need to respond during a time that is not convenient for you.
> ----------------------------------------------------------
> Great people talk about ideas,
> ordinary people talk about things,
> small people talk... about other people.
>
> On Mon, Jun 5, 2023, 8:49 AM Antony P wrote:
>
> Dear team,
> I have 2 templates:
> test1.yaml
> test2.yaml
> How do I run these 2 templates as a single template? What steps do I
> have to follow?
> --Warm Regards
> Antony P
> Jr. DevOps Engineer
> antony at edgebricks.com
> Mob No: +919498079898
> www.edgebricks.com

From sylvain.bauza at gmail.com Tue Jun 6 12:53:54 2023
From: sylvain.bauza at gmail.com (Sylvain Bauza)
Date: Tue, 6 Jun 2023 14:53:54 +0200
Subject: [nova] Nova Spec review day next Tuesday 6 June
In-Reply-To:
References:
Message-ID:

On Tue, 30 May 2023 at 21:02, Sylvain Bauza wrote:

> Hey Nova community,
> As a reminder, you can find the Nova agenda for the 2023.2 Bobcat
> release here https://releases.openstack.org/bobcat/schedule.html
>
> As you can see, we plan to do a round of reviews for all the open
> specifications in the openstack/nova-specs repository by next Tuesday
> June 6th.
>
> If you're a developer wanting to propose a new feature in Nova and you
> have been told to provide a spec, or if you know that you need to
> create a spec, or if you already have an open spec, please make sure to
> upload/update your file before next Tuesday so we'll look at it. Also,
> if you can look again at your spec during the day, it would be nice as
> we could try to discuss your spec during this day.

As a reminder, this is today :-)

Thanks,
-Sylvain

From rdhasman at redhat.com Tue Jun 6 14:31:45 2023
From: rdhasman at redhat.com (Rajat Dhasmana)
Date: Tue, 6 Jun 2023 20:01:45 +0530
Subject: [cinder][all] EOL EM branches
Message-ID:

Hi,

We had a discussion in the last cinder meeting regarding making EM
branches EOL for the cinder project[1].
The discussion started because of the CVE fixes where we backported to
active stable branches, i.e. Yoga, Zed and 2023.1, but there were no
backports to further EM stable branches like Xena, Wallaby ... all the
way to Train.
Cinder team doesn't see much merit in keeping these EM stable branches
alive since there is rarely any activity that requires collaboration,
and if there is, it is usually by the core cinder team.
Following are some of our reasons to EOL these branches:

1) We have less review bandwidth even for active stable branches (Yoga,
   Zed and 2023.1)
2) No one, apart from the project team, does backports of critical
   fixes, implying that those branches aren't used much for
   collaboration
3) Will save gate resources for some periodic jobs and the patches
   proposed
4) Save the project team's time to fix gate issues

We, as the cinder team, have decided to EOL all the existing EM branches
that go from Train to Xena. It was agreed upon by the cinder team and no
objections were raised during the upstream cinder meeting.

We would like to gather more feedback outside of the cinder team
regarding whether this affects other projects, deployers, operators,
vendors etc. Please reply to this email with your concerns, if you have
any, so we can discuss again and reconsider our decision. Else the week
after the summit we will be moving forward with our current decision.

If you will be at the Vancouver Summit, you can also give us feedback at
the cinder Forum session at 11:40 am on Wednesday June 14.

[1] https://meetings.opendev.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2023-05-31.log.html#t2023-05-31T14:22:23

Thanks
Rajat Dhasmana

From johnsomor at gmail.com Tue Jun 6 14:37:40 2023
From: johnsomor at gmail.com (Michael Johnson)
Date: Tue, 6 Jun 2023 07:37:40 -0700
Subject: Octavia does not work properly.
In-Reply-To:
References:
Message-ID:

I think you really meant to say your deployment tooling does not work
properly. Octavia works just fine, as many of us use it every day.

What deployment tool are you using? (there are something like 12 that
support deploying Octavia)

I say this because Octavia does not change the configuration of
keystone nor nova, and rarely is a configuration change in neutron
needed. So if keystone and nova are having problems after deploying
Octavia, it is your deployment tooling that is having a problem.

Michael

On Tue, Jun 6, 2023 at 12:53 AM Gregory Thiemonge wrote:
>
> Hi Gopal,
>
> What deployment method are you using?
>
> My first guess is that the HTTP server/proxy that handles the API
> endpoints is broken, can you check the logs there?
>
> You mention that disabling the octavia health manager fixes the issue.
> Do you mean that the other projects (like Nova) work fine when the
> health manager is disabled but don't work when it is enabled?
>
> Greg
>
>
> On Mon, Jun 5, 2023 at 4:57 PM GOPAL RAJORA wrote:
>>
>> Hi everyone.
>> Hope you are well.
>> This is Gopal.
>>
>> I am contacting you as I need your help.
>>
>> So, recently I installed the Octavia service on the Openstack server.
>> After installation of Octavia, several services such as Nova, Neutron
>> and Keystone stopped working, and I can not access the Horizon page
>> either.
>>
>> So, I have checked the logs; for the moment, the Keystone service does
>> not work after the Octavia installation.
>>
>> Unable to establish connection to
>> http://keystone-api.openstack.svc.cluster.local:5000/v3/auth/tokens:
>> ('Connection aborted.', RemoteDisconnected('Remote end closed
>> connection without response'))
>>
>> So, I tried to install the Octavia service several times, and if I
>> disable the health manager, it seems everything works properly, but as
>> you know, Octavia can not work properly if the health manager is not
>> installed.
>>
>> So, can you please help me regarding this problem?
>>
>> Thanks.
From hessas.imene at gmail.com Tue Jun 6 06:26:59 2023
From: hessas.imene at gmail.com (Imene Hessas)
Date: Tue, 6 Jun 2023 07:26:59 +0100
Subject: OpenStack informations
Message-ID:

Greetings,

As a curious person and a cloud enthusiast, I recently started
documenting myself about OpenStack, and learned about its components,
methods of deployment, etc.
Now I am facing an essential question: is there a lightweight version of
OpenStack?
I ask since I noticed that it requires significant hardware resources,
including compute, storage, and networking infrastructure, to run
efficiently.

I'll be waiting patiently for your response!
Thank you.

From gopalrajora1995 at gmail.com Tue Jun 6 08:41:38 2023
From: gopalrajora1995 at gmail.com (GOPAL RAJORA)
Date: Tue, 6 Jun 2023 10:41:38 +0200
Subject: Octavia does not work properly.
In-Reply-To:
References:
Message-ID:

Hi Greg

Thank you so much for your kind reply.

Regarding the deployment method, I used the following GitHub helm charts
to deploy Openstack on the K8S env.

https://github.com/openstack/openstack-helm
https://github.com/openstack/openstack-helm-infra

So far, all the services are working properly as long as I don't install
the Octavia service.

As the reply to your last question, yes, I think so. Generally I don't
need to check the logs as everything works fine, but I could not access
the Openstack Horizon page after Octavia was installed, so I checked the
logs then and I found that several pods such as Nova, Neutron, Keystone
and Horizon have been restarting or in CrashLoopBackOff. This problem
disappears once I disable the health manager and install Octavia again,
so I think the health manager is the problem.

Hope this will be helpful.

Regards.
Gopal.

On Tue, Jun 6, 2023 at 9:52 AM Gregory Thiemonge wrote:

> Hi Gopal,
>
> What deployment method are you using?
>
> My first guess is that the HTTP server/proxy that handles the API
> endpoints is broken, can you check the logs there?
>
> You mention that disabling the octavia health manager fixes the issue.
> Do you mean that the other projects (like Nova) work fine when the
> health manager is disabled but don't work when it is enabled?
>
> Greg
>
>
> On Mon, Jun 5, 2023 at 4:57 PM GOPAL RAJORA wrote:
>
>> Hi everyone.
>> Hope you are well.
>> This is Gopal.
>>
>> I am contacting you as I need your help.
>>
>> So, recently I installed the Octavia service on the Openstack server.
>> After installation of Octavia, several services such as Nova, Neutron
>> and Keystone stopped working, and I can not access the Horizon page
>> either.
>>
>> So, I have checked the logs; for the moment, the Keystone service does
>> not work after the Octavia installation.
>>
>> Unable to establish connection to
>> http://keystone-api.openstack.svc.cluster.local:5000/v3/auth/tokens :
>> ('Connection aborted.', RemoteDisconnected('Remote end closed
>> connection without response'))
>>
>> So, I tried to install the Octavia service several times, and if I
>> disable the health manager, it seems everything works properly, but as
>> you know, Octavia can not work properly if the health manager is not
>> installed.
>>
>> So, can you please help me regarding this problem?
>>
>> Thanks.
From noonedeadpunk at gmail.com Tue Jun 6 15:39:34 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Tue, 6 Jun 2023 17:39:34 +0200
Subject: OpenStack informations
In-Reply-To:
References:
Message-ID:

Hey there,

Depends on what you mean by lightweight :)

For testing purposes we spawn All-in-One OpenStack environments (using
openstack-ansible) on a single VM with 4 CPU cores, 12Gb of RAM and
100Gb of disk. That is totally not smth you should run in production,
but it's a fully operational OpenStack deployment for playing/messing
around with. Same can be done with devstack or kolla-ansible.

Also, you're not obliged to install all services that OpenStack has. A
bare minimal setup can be limited to Keystone, Placement, Neutron,
Glance and Nova (I'd recommend having Cinder as well, but you
technically can survive even without it).

So it really depends on your user-story and what lightweight means in
the context.

On Tue, 6 Jun 2023 at 17:30, Imene Hessas wrote:
>
> Greetings,
> As a curious person and a cloud enthusiast, I recently started
> documenting myself about OpenStack, and learned about its components,
> methods of deployment, etc.
> Now I am facing an essential question: is there a lightweight version
> of OpenStack?
> I ask since I noticed that it requires significant hardware resources,
> including compute, storage, and networking infrastructure, to run
> efficiently.
> I'll be waiting patiently for your response!
> Thank you.

From mkopec at redhat.com Tue Jun 6 16:20:57 2023
From: mkopec at redhat.com (Martin Kopec)
Date: Tue, 6 Jun 2023 18:20:57 +0200
Subject: [ptg][qa] Planning for the upcoming June 2023 PTG
Message-ID:

Hello everyone,

me and Ghanshyam will be present at the OpenInfra summit next week. If
you have any topic you would like to discuss, feel free to add it to the
etherpad [1]. Based on the number of topics we'll reserve a table and
time accordingly.

[1] https://etherpad.opendev.org/p/vancouver-june2023-qa

Thanks,
--
Martin Kopec
Principal Software Quality Engineer
Red Hat EMEA
IM: kopecmartin

From mkopec at redhat.com Tue Jun 6 16:23:43 2023
From: mkopec at redhat.com (Martin Kopec)
Date: Tue, 6 Jun 2023 18:23:43 +0200
Subject: [qa] No office hour June 13th
Message-ID:

Hello everyone,

we're cancelling our office hour on June 13th due to the OpenInfra
Summit happening that week. If there is anything you would like to
discuss in person in Vancouver, feel free to add it to the agenda [1].

The next office hour will be held on June 20th, 15:00 UTC as usual [2].

[1] https://etherpad.opendev.org/p/vancouver-june2023-qa
[2] https://wiki.openstack.org/wiki/Meetings/QATeamMeeting

Thanks,
--
Martin Kopec
Principal Software Quality Engineer
Red Hat EMEA
IM: kopecmartin

From ralonsoh at redhat.com Tue Jun 6 16:35:02 2023
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Tue, 6 Jun 2023 18:35:02 +0200
Subject: [neutron] Neutron meetings next week, Vancouver PTG
Message-ID:

Hello Neutrinos:

As commented today during the Neutron meeting, next week we are
cancelling all Neutron meetings (team, CI and drivers) due to the
Vancouver PTG.

See you there!
From sbauza at redhat.com Tue Jun 6 17:03:41 2023
From: sbauza at redhat.com (Sylvain Bauza)
Date: Tue, 6 Jun 2023 19:03:41 +0200
Subject: [nova] Tuesday June 13 meeting is CANCELLED
Message-ID:

Some folks (including me) will be traveling for the OpenInfra Summit, so
we deferred the nova meeting to the week after.

-Sylvain

From kennelson11 at gmail.com Tue Jun 6 17:04:09 2023
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 6 Jun 2023 12:04:09 -0500
Subject: Forum Etherpad Collection
Message-ID:

Hello All!

I started a wiki to collect etherpads into as we get them ready for next
week. Feel free to add them as you create them! Can't wait to see
everyone :)

https://wiki.openstack.org/wiki/Forum/Vanvouver2023

-Kendall Nelson

From gopalrajora1995 at gmail.com Tue Jun 6 16:31:18 2023
From: gopalrajora1995 at gmail.com (GOPAL RAJORA)
Date: Tue, 6 Jun 2023 18:31:18 +0200
Subject: Octavia does not work properly.
In-Reply-To:
References:
Message-ID:

Hi Michael

Thank you for your update.

So, I deployed the Octavia service by using the following 3 scripts.

https://github.com/openstack/openstack-helm/blob/master/tools/deployment/developer/common/180-create-resource-for-octavia.sh
https://github.com/openstack/openstack-helm/blob/master/tools/deployment/developer/common/190-create-octavia-certs.sh
https://github.com/openstack/openstack-helm/blob/master/tools/deployment/developer/common/200-octavia.sh

So, can't I use these scripts for the deployment in reality?

Looking forward to hearing from you.

Thanks,
Gopal.

On Tue, Jun 6, 2023 at 4:37 PM Michael Johnson wrote:

> I think you really meant to say your deployment tooling does not work
> properly. Octavia works just fine, as many of us use it every day.
>
> What deployment tool are you using? (there are something like 12 that
> support deploying Octavia)
>
> I say this because Octavia does not change the configuration of
> keystone nor nova, and rarely is a configuration change in neutron
> needed. So if keystone and nova are having problems after deploying
> Octavia, it is your deployment tooling that is having a problem.
>
> Michael
>
> On Tue, Jun 6, 2023 at 12:53 AM Gregory Thiemonge wrote:
> >
> > Hi Gopal,
> >
> > What deployment method are you using?
> >
> > My first guess is that the HTTP server/proxy that handles the API
> > endpoints is broken, can you check the logs there?
> >
> > You mention that disabling the octavia health manager fixes the
> > issue. Do you mean that the other projects (like Nova) work fine when
> > the health manager is disabled but don't work when it is enabled?
> >
> > Greg
> >
> >
> > On Mon, Jun 5, 2023 at 4:57 PM GOPAL RAJORA wrote:
> >>
> >> Hi everyone.
> >> Hope you are well.
> >> This is Gopal.
> >>
> >> I am contacting you as I need your help.
> >>
> >> So, recently I installed the Octavia service on the Openstack server.
> >> After installation of Octavia, several services such as Nova, Neutron
> >> and Keystone stopped working, and I can not access the Horizon page
> >> either.
> >>
> >> So, I have checked the logs; for the moment, the Keystone service
> >> does not work after the Octavia installation.
> >>
> >> Unable to establish connection to
> >> http://keystone-api.openstack.svc.cluster.local:5000/v3/auth/tokens:
> >> ('Connection aborted.', RemoteDisconnected('Remote end closed
> >> connection without response'))
> >>
> >> So, I tried to install the Octavia service several times, and if I
> >> disable the health manager, it seems everything works properly, but
> >> as you know, Octavia can not work properly if the health manager is
> >> not installed.
> >>
> >> So, can you please help me regarding this problem?
> >>
> >> Thanks.

From colby at sdsc.edu Tue Jun 6 17:21:50 2023
From: colby at sdsc.edu (Colby Walsworth)
Date: Tue, 6 Jun 2023 10:21:50 -0700
Subject: Strange Cinder rpc timeouts after Xena upgrade
Message-ID: <956aff3e-920b-3807-5990-95e7d796df75@sdsc.edu>

Hey Everyone,

We just upgraded from Wallaby to Xena. We use Ceph as our volume/image
backend. All instances from before the upgrade seem to be unable to be
shelved/unshelved or live migrated (we have not tried offline migration
yet). Any new instances that are created work fine and all those tasks
work fine on them. The problem seems to be with cinder:

cinder.api.v3.attachments [req-2af34fab-a4b5-43f3-bd9a-94f360201533
e28435e0a66740968c523e6376c57f68 179f701c810c425ab004548cc1f76bc9 -
default default] Unable to create attachment for volume.:
oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a
reply to message ID 8eea6a00568a477c89ced8db17f1179d

The migrations and shelve/unshelve processes start on the hypervisor but
eventually error out with the above message. Any new instances I create
do not have this issue at all. I have done all the db upgrades so I
don't believe this is a db record update issue.

Any ideas of what might be causing this?

Thanks,

Colby

From noonedeadpunk at gmail.com Tue Jun 6 17:31:11 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Tue, 6 Jun 2023 19:31:11 +0200
Subject: [ptg][openstack-ansible] Upcoming hybrid PTG and operator hours
Message-ID:

Hi everyone,

I'm glad to announce that the OpenStack-Ansible team is going to have a
hybrid PTG during the OpenInfra Summit in Vancouver.

PTG will happen on Wednesday, June 14, 09:40 - 11:30 local time or
16:45 - 18:30 UTC.
For those who will be on-site: we have table number 23 booked in the
PTG room (Ballroom A?)
For those who can not make it to the summit, we have a Zoom room
prepared [1]

Moreover, we're planning to have an in-person Operator Hours session on
Thursday, June 15, at 11:40 - 12:50, an hour after the Project
on-boarding session. So if you have more questions or want to continue
the discussion, or just can't make it to the Project on-boarding due to
some conflict - you're warmly welcome at the operator hours slot.

The etherpad with dates and topics that are to be discussed is here [2]

[1] https://us06web.zoom.us/j/89099630568?pwd=WnBHYkEzdXJxQ2tLclQ1QXRvNjg5UT09
[2] https://etherpad.opendev.org/p/vancouver-june2023-os-ansible

From fungi at yuggoth.org Tue Jun 6 17:32:32 2023
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 6 Jun 2023 17:32:32 +0000
Subject: [cinder][all][tc][ops][stable] EOL EM branches
In-Reply-To:
References:
Message-ID: <20230606173231.jrbuwdx2lupt7t7r@yuggoth.org>

On 2023-06-06 20:01:45 +0530 (+0530), Rajat Dhasmana wrote:
[...]
> The discussion started because of the CVE fixes where we
> backported to active stable branches i.e. Yoga, Zed and 2023.1 but
> there were no backports to further EM stable branches like Xena,
> Wallaby ...
all the way to Train. [...] The idea behind EM branches was that downstream distributions who need to backport these changes for their own customers would push the patches to the upstream EM branches so that they didn't all have to redo the same work and could benefit from each other's knowledge and experience around backports. If that's not happening for critical security patches, then I agree that the goal of the EM model has failed. Taking OSSA-2023-003 (CVE-2023-2088) as the most recent example, the advisory and patches for maintained branches were published four weeks ago. Fixes for stable/xena were developed along with the other backports because the branch had only just transitioned to EM a week or two prior. Of the four deliverables which were patched as part of that advisory, only Nova provided a patch for anything older, and that was just to stable/wallaby (and possibly added as a mistake or due to miscommunication between the various groups involved). In the four weeks since, I haven't seen anyone outside the core review teams for Cinder, Glance or Nova supply changes for additional backports, even though I expect the downstream distributions patched versions contemporary with some branches still in EM. It could be that this vulnerability is a poor example, because a lot of deployments use RBD by default and it wasn't affected, but the situation with the two other advisories earlier in the year wasn't much different where backports were concerned. > 1) We have less review bandwidth even for active stable branches > (Yoga, Zed and 2023.1) The intent for EM was that reviewing branches no longer under normal maintenance could be delegated to other members of the community. Of course, that was the idea with stable maintenance as well. Perhaps it would be more accurate to restate this as there are no volunteers to review the changes (it shouldn't be the core review team's obligation either way)? > 2) No one, apart from the project team, does backport of critical > fixes implying that those branches aren't used much for > collaboration This is the best rationale for scrapping the whole EM idea, in my opinion. There's a possibility that if the core reviewers weren't pushing backports someone else would have done so eventually, but I think we have plenty of evidence now to indicate that doesn't really happen in practice. > 3) Will save gate resources for some periodic jobs and the patches > proposed EM branches don't have to run the same jobs as maintained stable branches (or really any jobs at all), but that does still at a minimum need the attention of someone interested in removing the unwanted jobs. > 4) Save project team's time to fix gate issues [...] Similar to the earlier points, the project team shouldn't feel obligated to fix testing issues on EM branches. If testing breaks, it should be up to the volunteers proposing and reviewing changes to do that. If they don't, nothing will merge. If they don't care enough to keep it possible to merge things, then sure that's basically back to item #1 again. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From noonedeadpunk at gmail.com Tue Jun 6 17:33:43 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Tue, 6 Jun 2023 19:33:43 +0200
Subject: [openstack-ansible] Tuesday June 13 meeting is CANCELLED
Message-ID:

Hi folks,

Since some members will be attending the OpenInfra Summit and we are having a PTG session on Wednesday, the weekly meeting on Tuesday, June 13 is cancelled.

From wodel.youchi at gmail.com Tue Jun 6 17:35:42 2023
From: wodel.youchi at gmail.com (wodel youchi)
Date: Tue, 6 Jun 2023 18:35:42 +0100
Subject: [kolla-ansible][Yoga] Manila share creation stuck in creating status cephfs backend
Message-ID:

Hi,

We are facing a strange problem when creating manila shares with cephfs as the backend: the creation is stuck in the 'creating' state. If we try to delete the share, we get an error "you are not authorized to delete share ....". We have to use force-delete in the CLI to be able to delete it.

We don't have any indication in manila's log files; there is no error or exception. It seems like an access rights problem, but we don't know where to look.

Regards.

From tony at bakeyournoodle.com Tue Jun 6 18:39:01 2023
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Tue, 6 Jun 2023 13:39:01 -0500
Subject: [cinder][all] EOL EM branches
In-Reply-To: References: Message-ID:

On Tue, 6 Jun 2023 at 09:36, Rajat Dhasmana wrote:
>
> Hi,
>
> We had a discussion in the last cinder meeting regarding making EM branches EOL for the cinder project[1].
> The discussion started because of the CVE fixes where we backported to active stable branches i.e. Yoga, Zed and 2023.1
> but there were no backports to further EM stable branches like Xena, Wallaby ... all the way to Train.

That is expected/totally fine with EM branches.

> Cinder team doesn't see much merit in keeping these EM stable branches alive since there is rarely any activity that requires
> collaboration and if there is, it is usually by the core cinder team. Following are some of our reasons to EOL these branches:

This is less related to EM and more a result of the make-up of many project teams.

> 1) We have less review bandwidth even for active stable branches (Yoga, Zed and 2023.1)

These are the only branches the cinder team is expected to be reviewing.

> 2) No one, apart from the project team, does backport of critical fixes implying that those branches aren't used much for collaboration

I understand your point, but by removing them there is zero scope for collaboration. When the existing stable policy was discussed/created in Sydney it was observed that there is basically 0 overlap with branches that vendors pick to support for a longer time. It seems that in the current climate this looks even worse. :(

> 3) Will save gate resources for some periodic jobs and the patches proposed
> 4) Save project team's time to fix gate issues

This is a valid point, do you have a feel for how much time the Cinder team spent fixing issues on EM branches?

> We, as the cinder team, have decided to EOL all the existing EM branches that go from Train to Xena. It was agreed upon by the cinder
> team and no objections were raised during the upstream cinder meeting.
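For anyone still consuming those branches, it's worth spelling out what EOL means mechanically: the stable/<series> branch is deleted and only a final tag remains, so any tooling pinned to the branch has to switch to something like the sketch below (the tag name follows the usual <series>-eol convention and is an assumption here):

# After stable/xena is deleted, only the final tag remains
git clone https://opendev.org/openstack/cinder
cd cinder
git checkout xena-eol   # assumed tag name per the <series>-eol convention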
This will be somewhat impactful on projects that wish to keep CI running for EM branches, devstack will need to know where/how to get cinder projects code, and would make it nearly impossible to projects that overlap with the cinder team (nova via os-brick) to maintain any critical updates on their non EM branches. > We would like to gather more feedback outside of the cinder team regarding if this affects other projects, deployers, operators, vendors etc. > Please reply to this email with your concerns, if you have any, so we can discuss again and reconsider our decision. Else the week after the > summit we will be moving forward with our current decision. > If you will be at the Vancouver Summit, you can also give us feedback at the cinder Forum session at 11:40 am on Wednesday June 14. For sure. > > [1] https://meetings.opendev.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2023-05-31.log.html#t2023-05-31T14:22:23 > > Thanks > Rajat Dhasmana -- Yours Tony. From vrook at wikimedia.org Tue Jun 6 19:11:34 2023 From: vrook at wikimedia.org (Vivian Rook) Date: Tue, 6 Jun 2023 15:11:34 -0400 Subject: [magnum] kubectl loses access after 31 days Message-ID: After 31 days I lose kubectl access to magnum clusters. This has happened consistently for any cluster that I have deployed. The clusters run just fine, though around 31 days of operation kubectl cannot connect, and the web service shows the service as down (Though the web service on the cluster is responding enough to say that nothing is working, so the cluster has not completely crashed) All kubectl commands have a long pause (about 10 minutes) then gives errors like: Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps) Unable to connect to the server: stream error: stream ID 11; INTERNAL_ERROR; received from peer I have a little more information in https://phabricator.wikimedia.org/T336586 It feels like a cert is expiring as it always seems to happen right about 31 days after deployment. Does magnum have some kind of certificate like that? I checked the kubectl certs, they were set to be fine for years, so I don't think it is them unless I didn't check them correctly (Let's not discount that possibility, I totally could have read the wrong bit of the cert). I can still generate a new kubectl config file with openstack coe cluster config Though the resulting configuration will have the same issue as the original config (long pause, then timeout errors). I have also tried to run: openstack coe ca rotate Which is accepted and seems to run fine, but after that point if I regenerate a kubeconfig file as above I get new errors when running kubectl: Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "") If the key rotation would work, and I'm not doing it correctly, I would be delighted to hear how to run it correctly. Though ideally I would like to find where the original key is failing, and if it is an expiration, how to set it to a longer time. Thank you! -- *Vivian Rook (They/Them)* Site Reliability Engineer Wikimedia Foundation -------------- next part -------------- An HTML attachment was scrubbed... 
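(A quick way to test the 31-day theory before rotating anything is to inspect the certificate the cluster's API endpoint is actually serving; a minimal sketch, where host and port are placeholders taken from the cluster's kubeconfig:)

# Print the expiry of the certificate presented by the cluster's API server
echo | openssl s_client -connect CLUSTER_API_HOST:6443 2>/dev/null \
  | openssl x509 -noout -enddate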
From dms at danplanet.com Tue Jun 6 19:12:06 2023
From: dms at danplanet.com (Dan Smith)
Date: Tue, 6 Jun 2023 12:12:06 -0700
Subject: [cinder][all] EOL EM branches
In-Reply-To: References: Message-ID: <6C4E7D1E-E30E-4AB3-BF9F-DD1E35BB7BAC at danplanet.com>

> 1) We have less review bandwidth even for active stable branches (Yoga, Zed and 2023.1)
> 2) No one, apart from the project team, does backport of critical fixes implying that those branches aren't used much for collaboration
> 3) Will save gate resources for some periodic jobs and the patches proposed
> 4) Save project team's time to fix gate issues

These are all good reasons, and I think that they highlight how the original plan for EM has turned out to have failed in practice. I think in most cases, it's the project teams that continue maintaining these, backporting patches here, and fixing zuul config issues or other gate fails when they happen. I myself tried to make the point recently that we should be dropping (not fixing) the ceph job on the wallaby gate when it broke (per the plan), but the well-meaning people involved ended up fixing it anyway. I think we're probably due to revisit the current EM strategy soon.

The recent CVE is the most important thing to me though. Anyone that looks at the recent activity in say, wallaby will see a lot of familiar faces and recent backports from the project teams. It would not be a stretch at all to assume that since we've backported minor fixes that we've also already backported the most substantial CVE in the last decade as well -- but we haven't (and won't). Nova is in a similar boat, with the last *two* CVEs unfixed in the earlier branches because of the complexity of the multiple projects, libraries, releases, and tests that need to be coordinated in order to have the desired effect. IMHO, it is the prudent and responsible thing to do to drop these branches which look maintained but in reality have known severe vulnerabilities in them.

> We, as the cinder team, have decided to EOL all the existing EM branches that go from Train to Xena. It was agreed upon by the cinder team and no objections were raised during the upstream cinder meeting.

Given the severity of the impact to older unpatched Cinder branches, I think this makes sense. Nova isn't in quite the same boat, but I made the same arguments in this patch proposing to EOL train, where the second-to-last VMDK-related vulnerability remains unfixed:

https://review.opendev.org/c/openstack/releases/+/885365

--Dan

From gmann at ghanshyammann.com Tue Jun 6 19:17:23 2023
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 06 Jun 2023 12:17:23 -0700
Subject: [cinder][all][tc][ops][stable] EOL EM branches
In-Reply-To: <20230606173231.jrbuwdx2lupt7t7r at yuggoth.org>
References: <20230606173231.jrbuwdx2lupt7t7r at yuggoth.org>
Message-ID: <18892247997.12b624f8c128041.5749054378410411347 at ghanshyammann.com>

 ---- On Tue, 06 Jun 2023 10:32:32 -0700 Jeremy Stanley wrote ---
> On 2023-06-06 20:01:45 +0530 (+0530), Rajat Dhasmana wrote:
> [...]
> > The discussion started because of the CVE fixes where we
> > backported to active stable branches i.e. Yoga, Zed and 2023.1 but
> > there were no backports to further EM stable branches like Xena,
> > Wallaby ... all the way to Train.
> [...]
>
> The idea behind EM branches was that downstream distributions who
> need to backport these changes for their own customers would push
> the patches to the upstream EM branches so that they didn't all have
> to redo the same work and could benefit from each other's knowledge
> and experience around backports. If that's not happening for
> critical security patches, then I agree that the goal of the EM
> model has failed.
> [...]
> Similar to the earlier points, the project team shouldn't feel
> obligated to fix testing issues on EM branches. If testing breaks,
> it should be up to the volunteers proposing and reviewing changes to
> do that. If they don't, nothing will merge. If they don't care
> enough to keep it possible to merge things, then sure that's
> basically back to item #1 again.

This is true, but I think this is exactly where things become difficult. Even when we do not need to, we as community developers keep fixing the EM gates; at least I can say that from my QA experience. We should stop at some line, but in reality we end up doing it.
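To make that concrete: trimming an EM branch down to the cheap jobs is a small change to that branch's .zuul.yaml. A minimal sketch (template name assumed; the real job lists vary per project):

# hypothetical .zuul.yaml on a stable/<EM-series> branch after trimming
- project:
    templates:
      - openstack-python3-jobs   # keeps pep8 and unit test jobs
    # devstack/tempest integration templates deliberately dropped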
IMO, we should make some policy and testing changes that help everyone understand EM clearly and ensure we spend only the required time on it. A few ideas:

1. Reduce the EM branch count: currently we have 7 EM branches. We should cap it lower, say 4 or 5 at most at any time, and once the limit (say 4) is reached we make the oldest one EOL. Having 4 EM branches still means 2 years of extended support, which is good enough.

2. Completely remove all the integration test jobs, even if they are passing, at the time a branch moves to the EM state. That way the project team will be less frustrated by gate failures on backports: they simply backport the fix and make it available for downstream consumers, who will test it properly before use. Keeping the integration testing around just because it happens to pass today only creates more work for future maintenance.

-gmann

> --
> Jeremy Stanley

From fungi at yuggoth.org Tue Jun 6 19:48:43 2023
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 6 Jun 2023 19:48:43 +0000
Subject: [cinder][all][tc][ops][stable] EOL EM branches
In-Reply-To: <18892247997.12b624f8c128041.5749054378410411347 at ghanshyammann.com>
References: <20230606173231.jrbuwdx2lupt7t7r at yuggoth.org> <18892247997.12b624f8c128041.5749054378410411347 at ghanshyammann.com>
Message-ID: <20230606194842.hegozazpxgi4ibye at yuggoth.org>

On 2023-06-06 12:17:23 -0700 (-0700), Ghanshyam Mann wrote:
[...]
> This is true, but I think this is exactly where things become
> difficult. Even when we do not need to, we as community developers
> keep fixing the EM gates; at least I can say that from my QA
> experience. We should stop at some line, but in reality we end up
> doing it.
[...]

Maybe my verbosity made it unclear, so just in case, what I was trying to say is that I consider Extended Maintenance to be a failed experiment and agree we should be talking about either reverting to the prior process from before EM was a thing or finding an alternative process that doesn't have so many of the obvious shortcomings of EM.

People said if we just stopped EOL'ing branches so soon they would show up and help make use of those branches. They didn't, and so the expected benefits never materialized.

--
Jeremy Stanley

From tony at bakeyournoodle.com Tue Jun 6 20:02:17 2023
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Tue, 6 Jun 2023 15:02:17 -0500
Subject: [cinder][all][tc][ops][stable] EOL EM branches
In-Reply-To: <20230606194842.hegozazpxgi4ibye at yuggoth.org>
References: <20230606173231.jrbuwdx2lupt7t7r at yuggoth.org> <18892247997.12b624f8c128041.5749054378410411347 at ghanshyammann.com> <20230606194842.hegozazpxgi4ibye at yuggoth.org>
Message-ID:

On Tue, 6 Jun 2023 at 14:52, Jeremy Stanley wrote:
> [...]
> People said if we just stopped EOL'ing branches so soon they would
> show up and help make use of those branches. They didn't, and so the
> expected benefits never materialized.
I agree, my main concern is that we do this well across the whole set of projects, not have a variety of projects doing different things.

Yours Tony.

From masahito.muroi at linecorp.com Tue Jun 6 21:04:56 2023
From: masahito.muroi at linecorp.com (Masahito Muroi)
Date: Wed, 07 Jun 2023 06:04:56 +0900
Subject: Re: [oslo][largescale-sig] HTTP base direct RPC oslo.messaging driver contribution
In-Reply-To: References: Message-ID:

Hi,

Thank you everyone for the kind replies.

I got the PTG situation. Submitting the spec seems to be a nice first step.

We don't have a public repository for the driver because of how our internal repositories are structured; the driver is tightly tied to that internal layout right now. Cleaning up the repository would take time, so we haven't done that extra work yet.

best regards.
Masahito

-----Original Message-----
From: "Takashi Kajinami"
To: "Masahito Muroi";
Cc: ; "Herve Beraud";
Sent: 2023/06/06(?) 18:50 (GMT+09:00)
Subject: Re: [oslo] HTTP base direct RPC oslo.messaging driver contribution

Hello,

This is very interesting and I agree having the spec would be a good way to move this forward.

We have not requested oslo sessions in the upcoming PTG but Stephen and I are attending it so will be available for the discussion.

Because some other cores such as Herve won't be there, we'd need to continue further discussions after PTG in spec review, but if that early in-person discussion sounds helpful for you then I'll reserve a table.

Thank you,
Takashi

On Tue, Jun 6, 2023 at 4:48 PM Herve Beraud wrote:

Hello,

Indeed, Oslo doesn't have PTG sessions.

Best regards

Le lun. 5 juin 2023 ? 10:42, Masahito Muroi a ?crit :

Hello Herve,

Thank you for the quick reply. Let us prepare the spec and submit it.

btw, does the oslo team have a PTG in the upcoming summit? We'd like to get quick feedback on the spec if time allows in the PTG. But it looks like the oslo team won't have a PTG there.

best regards,
Masahito

-----Original Message-----
From: "Herve Beraud"
To: "????";
Cc: ;
Sent: 2023/06/05(?) 17:21 (GMT+09:00)
Subject: Re: [oslo] HTTP base direct RPC oslo.messaging driver contribution

Hello Masahito,

Submission to oslo-spec is a good starting point.

Best regards

Le lun. 5 juin 2023 ? 10:04, ???? a ?crit :

Hi oslo team,

We'd like to contribute an HTTP base direct RPC driver to the oslo.messaging community. We have developed the HTTP base driver internally, and we have been using the driver in production with over 10K hypervisors now.

I checked the IRC meeting log of the oslo team[1], but there is no regular meeting in 2023. Is it okay to submit an oslo-spec[2] to propose the driver directly, or is there another good place to discuss the feature before submitting a spec?

1. https://meetings.opendev.org/#Oslo_Team_Meeting
2. https://opendev.org/openstack/oslo-specs

best regards,
Masahito

--
Herv? Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/

--
Herv? Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
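(For context on what contributing such a driver involves mechanically: oslo.messaging selects its driver from the transport_url scheme via the oslo.messaging.drivers entry-point namespace, so a new driver ultimately registers itself roughly like the sketch below; the module path and class name here are hypothetical.)

# setup.cfg of oslo.messaging (hypothetical entry for an HTTP driver)
[entry_points]
oslo.messaging.drivers =
    http = oslo_messaging._drivers.impl_http:HTTPDriver

# services would then opt in via their config, e.g. nova.conf:
[DEFAULT]
transport_url = http://rpc-gateway.example.com:8080/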
From juliaashleykreger at gmail.com Tue Jun 6 22:04:51 2023
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Tue, 6 Jun 2023 15:04:51 -0700
Subject: [oslo][largescale-sig] HTTP base direct RPC oslo.messaging driver contribution
In-Reply-To: References: Message-ID:

Jumping in because the thread has been rather reminiscent of the json-rpc messaging feature ironic carries so our users don't have to run with rabbit. I suspect Ironic might be happy to propose it to oslo.messaging if this http driver is acceptable.

Please feel free to add me as a reviewer on the spec.

-Julia

On Tue, Jun 6, 2023 at 2:10?PM Masahito Muroi wrote:
> Hi,
>
> Thank you everyone for the kind replies.
>
> I got the PTG situation. Submitting the spec seems to be a nice first step.
>
> [...]
> --
> Herv?
Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Tue Jun 6 22:31:35 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Tue, 6 Jun 2023 15:31:35 -0700 Subject: [oslo][largescale-sig] HTTP base direct RPC oslo.messaging driver contribution In-Reply-To: References: Message-ID: I'm interested in this as well, please add me to the spec if you need additional brains :). I'll also be at the summit if you'd like to discuss any of it in person. -- Jay Faulkner Ironic PTL On Tue, Jun 6, 2023 at 3:14?PM Julia Kreger wrote: > Jumping in because the thread has been rather reminiscent of the json-rpc > messaging feature ironic carries so our users don't have to run with > rabbit. I suspect Ironic might be happy to propose it to oslo.messaging if > this http driver is acceptable. > > Please feel free to add me as a reviewer on the spec. > > -Julia > > On Tue, Jun 6, 2023 at 2:10?PM Masahito Muroi > wrote: > >> Hi, >> >> Thank you everyone for the kindly reply. >> >> I got the PTG situation. Submitting the spec seems to be a nice first >> step. >> >> We don't have public repository of the driver because of internal >> repository structure reason. The repository is really stick to the current >> internal repository structure now. Cleaning up repository would take time >> so that we didn't do the extra tasks. >> >> best regards. >> Masahito >> >> -----Original Message----- >> *From:* "Takashi Kajinami" >> *To:* "Masahito Muroi"; >> *Cc:* ; "Herve Beraud"< >> hberaud at redhat.com>; >> *Sent:* 2023/06/06(?) 18:50 (GMT+09:00) >> *Subject:* Re: [oslo] HTTP base direct RPC oslo.messaging driver >> contribution >> >> Hello, >> >> >> This is very interesting and I agree having the spec would be the good >> way to move this forward. >> >> We have not requested oslo sessions in the upcoming PTG but Stephen and I >> are attending it so will be >> available for the discussion. >> >> Because some other cores such as Herve won't be there, we'd need to >> continue further discussions after PTG >> in spec review, but if that early in-person discussion sounds helpful for >> you then I'll reserve a table. >> >> Thank you, >> Takashi >> >> >> On Tue, Jun 6, 2023 at 4:48 PM Herve Beraud wrote: >> >> Hello, >> >> Indeed, Oslo doesn't have PTG sessions. >> >> Best regards >> >> Le lun. 5 juin 2023 ? 10:42, Masahito Muroi >> a ?crit : >> >> Hello Herve, >> >> Thank you for the quick replying. Let us prepare the spec and submit it. >> >> btw, does olso team have PTG in the up-comming summit? We'd like to get a >> quick feedback of the spec if time is allowed in the PTG. But it looks like >> oslo team won't have PTG there. >> >> best regards, >> Masahito >> >> -----Original Message----- >> *From:* "Herve Beraud" >> *To:* "????"; >> *Cc:* ; >> *Sent:* 2023/06/05(?) 17:21 (GMT+09:00) >> *Subject:* Re: [oslo] HTTP base direct RPC oslo.messaging driver >> contribution >> >> Hello Masahito, >> >> Submission to oslo-spec is a good starting point. >> >> Best regards >> >> Le lun. 5 juin 2023 ? 10:04, ???? a >> ?crit : >> >> Hi oslo team, >> >> We'd like to contribute HTTP base direct RPC driver to the oslo.messaging >> community. We have developed the HTTP base driver internally. We have been >> using the driver in the production with over 10K hypervisors now. >> >> I checked the IRC meeting log of the oslo team[1], but there is no >> regluar meeting in 2023. 
Is it okay to submit oslo-spec[2] to propose the >> driver directly, or is there another good place to discuss the feature >> before submitting a spec? >> >> 1. https://meetings.opendev.org/#Oslo_Team_Meeting >> 2. https://opendev.org/openstack/oslo-specs >> >> best regards, >> Masahito >> >> >> >> >> -- >> Herv? Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> >> >> >> >> -- >> Herv? Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From atidor12 at gmail.com Tue Jun 6 22:36:13 2023 From: atidor12 at gmail.com (altidor JB) Date: Tue, 6 Jun 2023 18:36:13 -0400 Subject: Openstack /port mirroir issue [Neutron] Message-ID: Hello, I've set up Openstack Zed on Ubuntu Jammy using Juju and MAAS. I'm experimenting with some IDS on the architecture and need to implement some kind of port mirroring capacity. I've been trying with Tap as a Service but can't find any resources for the installation/configuration on my architecture. The git refers to Devstack implementation. I'm using the large-scale deployment on 6 servers. Can anyone point me in the right direction? Thanks! JB -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Wed Jun 7 06:55:09 2023 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 7 Jun 2023 08:55:09 +0200 Subject: Openstack /port mirroir issue [Neutron] In-Reply-To: References: Message-ID: Hi, I don't know about any deployment tooling efforts for tap-as-a-service. Do you have perhaps any kind of issues or you are just at the first step: the deployment? Best regards Lajos Katona (lajoskatona) altidor JB ezt ?rta (id?pont: 2023. j?n. 7., Sze, 0:42): > Hello, > I've set up Openstack Zed on Ubuntu Jammy using Juju and MAAS. I'm > experimenting with some IDS on the architecture and need to implement some > kind of port mirroring capacity. I've been trying with Tap as a Service but > can't find any resources for the installation/configuration on my > architecture. The git refers to Devstack implementation. > I'm using the large-scale deployment on 6 servers. Can anyone point me in > the right direction? > Thanks! > JB > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bartosz at stackhpc.com Wed Jun 7 08:30:28 2023 From: bartosz at stackhpc.com (Bartosz Bezak) Date: Wed, 7 Jun 2023 10:30:28 +0200 Subject: [kolla] IRC meeting cancelled today Message-ID: <072BE516-E264-4155-A709-AAA8FB94DD6D@stackhpc.com> Hello, Unfortunately I have conflicting meetings and I am unable to run the IRC meeting today, let?s cancel it today. Best regards, Bartosz Bezak -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Jun 7 08:35:21 2023 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 7 Jun 2023 10:35:21 +0200 Subject: [oslo][largescale-sig] HTTP base direct RPC oslo.messaging driver contribution In-Reply-To: References: Message-ID: Le mer. 7 juin 2023 ? 00:31, Jay Faulkner a ?crit : > I'm interested in this as well, please add me to the spec if you need > additional brains :). I'll also be at the summit if you'd like to discuss > any of it in person. 
> > -- > Jay Faulkner > Ironic PTL > > On Tue, Jun 6, 2023 at 3:14?PM Julia Kreger > wrote: > >> Jumping in because the thread has been rather reminiscent of the json-rpc >> messaging feature ironic carries so our users don't have to run with >> rabbit. I suspect Ironic might be happy to propose it to oslo.messaging if >> this http driver is acceptable. >> > Indeed, it could be interesting, thanks Julia. >> Please feel free to add me as a reviewer on the spec. >> >> -Julia >> >> On Tue, Jun 6, 2023 at 2:10?PM Masahito Muroi < >> masahito.muroi at linecorp.com> wrote: >> >>> Hi, >>> >>> Thank you everyone for the kindly reply. >>> >>> I got the PTG situation. Submitting the spec seems to be a nice first >>> step. >>> >>> We don't have public repository of the driver because of internal >>> repository structure reason. The repository is really stick to the current >>> internal repository structure now. Cleaning up repository would take time >>> so that we didn't do the extra tasks. >>> >>> best regards. >>> Masahito >>> >>> -----Original Message----- >>> *From:* "Takashi Kajinami" >>> *To:* "Masahito Muroi"; >>> *Cc:* ; "Herve Beraud"< >>> hberaud at redhat.com>; >>> *Sent:* 2023/06/06(?) 18:50 (GMT+09:00) >>> *Subject:* Re: [oslo] HTTP base direct RPC oslo.messaging driver >>> contribution >>> >>> Hello, >>> >>> >>> This is very interesting and I agree having the spec would be the good >>> way to move this forward. >>> >>> We have not requested oslo sessions in the upcoming PTG but Stephen and >>> I are attending it so will be >>> available for the discussion. >>> >>> Because some other cores such as Herve won't be there, we'd need to >>> continue further discussions after PTG >>> in spec review, but if that early in-person discussion sounds helpful >>> for you then I'll reserve a table. >>> >>> Thank you, >>> Takashi >>> >>> >>> On Tue, Jun 6, 2023 at 4:48 PM Herve Beraud wrote: >>> >>> Hello, >>> >>> Indeed, Oslo doesn't have PTG sessions. >>> >>> Best regards >>> >>> Le lun. 5 juin 2023 ? 10:42, Masahito Muroi >>> a ?crit : >>> >>> Hello Herve, >>> >>> Thank you for the quick replying. Let us prepare the spec and submit it. >>> >>> btw, does olso team have PTG in the up-comming summit? We'd like to get >>> a quick feedback of the spec if time is allowed in the PTG. But it looks >>> like oslo team won't have PTG there. >>> >>> best regards, >>> Masahito >>> >>> -----Original Message----- >>> *From:* "Herve Beraud" >>> *To:* "????"; >>> *Cc:* ; >>> *Sent:* 2023/06/05(?) 17:21 (GMT+09:00) >>> *Subject:* Re: [oslo] HTTP base direct RPC oslo.messaging driver >>> contribution >>> >>> Hello Masahito, >>> >>> Submission to oslo-spec is a good starting point. >>> >>> Best regards >>> >>> Le lun. 5 juin 2023 ? 10:04, ???? a >>> ?crit : >>> >>> Hi oslo team, >>> >>> We'd like to contribute HTTP base direct RPC driver to the >>> oslo.messaging community. We have developed the HTTP base driver >>> internally. We have been using the driver in the production with over 10K >>> hypervisors now. >>> >>> I checked the IRC meeting log of the oslo team[1], but there is no >>> regluar meeting in 2023. Is it okay to submit oslo-spec[2] to propose the >>> driver directly, or is there another good place to discuss the feature >>> before submitting a spec? >>> >>> 1. https://meetings.opendev.org/#Oslo_Team_Meeting >>> 2. https://opendev.org/openstack/oslo-specs >>> >>> best regards, >>> Masahito >>> >>> >>> >>> >>> -- >>> Herv? 
Beraud >>> Senior Software Engineer at Red Hat >>> irc: hberaud >>> https://github.com/4383/ >>> >>> >>> >>> >>> -- >>> Herv? Beraud >>> Senior Software Engineer at Red Hat >>> irc: hberaud >>> https://github.com/4383/ >>> >>> >>> -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Wed Jun 7 09:06:38 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 7 Jun 2023 11:06:38 +0200 Subject: [nova][ptg][ops] Please collect your thoughs and wishlists for the Forum and the PTG ! Message-ID: Hey folks and especially operators, As a reminder, we will have multiple opportunities during the next week for discussing about some topics : - we will have on Tuesday a general 30-min contributors Meet&Greet Forum session where operators and developers will be. If you want to attend this session and you have some questions or if you want to explain about your problems, you can provide those in the meet&greet etherpad : https://etherpad.opendev.org/p/nova-vancouver2023-meet-and-greet - we will also have a 2-day PTG on Wednesday and Thursday where operators and contributors on a table (#24) can discuss about everything that's in the physical PTG agenda that we will have in the below etherpad : https://etherpad.opendev.org/p/vancouver-june2023-nova The more we are able to collect insights, the better the Summit will be. Thanks folks and hope to see the most of you in Vancouver face-to-face ! -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From anbanerj at redhat.com Wed Jun 7 10:34:02 2023 From: anbanerj at redhat.com (Ananya Banerjee) Date: Wed, 7 Jun 2023 12:34:02 +0200 Subject: [gate][tripleo] gate blocker Message-ID: Hello, Jobs on Centos 9 deploying standalone or multinode are failing standalone deploy or overcloud deploy with https://bugs.launchpad.net/tripleo/+bug/2023019 . Fixes are in progress, please hold rechecks for the moment if you hit this. Thanks, Ananya -- Ananya Banerjee, RHCSA, RHCE-OSP Software Engineer Red Hat EMEA anbanerj at redhat.com M: +491784949931 IM: frenzy_friday @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.acosta at luizalabs.com Wed Jun 7 11:46:25 2023 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Wed, 7 Jun 2023 08:46:25 -0300 Subject: Community attendance - OpenInfra Summit Vancouver In-Reply-To: References: Message-ID: Hi Alvaro, nice! of course we can meet there! all are welcome :) We are trying to schedule something on Tuesday, I already sent messages to Iury and Carlos. Cheers, Roberto Em ter., 6 de jun. de 2023 ?s 06:35, Alvaro Soto escreveu: > Hello Roberto, > I'm not from Brazil (I'm based on Mexico) but as part of LATAM community, > I'll love to be part in local projects :) > > I'll be at OIS, I'll be nice to talk about community challenges for our > local community. > > Cheers. > --- > Alvaro Soto. > > Note: My work hours may not be your work hours. Please do not feel the > need to respond during a time that is not convenient for you. > ---------------------------------------------------------- > Great people talk about ideas, > ordinary people talk about things, > small people talk... about other people. > > On Thu, Jun 1, 2023, 6:47 AM Iury Gregory wrote: > >> Hi Roberto, >> >> I know some Brazilians that will be attending the OIS Vancouver, >> including me. >> >> Em qui., 1 de jun. 
de 2023 ?s 09:00, Roberto Bartzen Acosta < >> roberto.acosta at luizalabs.com> escreveu: >> >>> Hello, >>> >>> Will anyone from the Brazilian community attend the OpenInfra in >>> Vancouver? >>> >>> I would like to meet other members from Brazil and discuss the >>> challenges and possibilities of using OpenStack in Brazilian >>> infrastructures. You can ping me on IRC too (racosta). >>> >>> Kind regards, >>> Roberto >>> >>> >>> *?Esta mensagem ? direcionada apenas para os endere?os constantes no >>> cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no >>> cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa >>> mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o >>> imediatamente anuladas e proibidas?.* >>> >>> *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para >>> assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o >>> poder? aceitar a responsabilidade por quaisquer perdas ou danos causados >>> por esse e-mail ou por seus anexos?.* >>> >> >> >> -- >> *Att[]'s* >> >> *Iury Gregory Melo Ferreira * >> *MSc in Computer Science at UFCG* >> *Ironic PTL * >> *Senior Software Engineer at Red Hat Brazil* >> *Social*: https://www.linkedin.com/in/iurygregory >> *E-mail: iurygregory at gmail.com * >> > -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Jun 7 12:21:23 2023 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 7 Jun 2023 13:21:23 +0100 Subject: Cinder Bug Report 2023-06-07 Message-ID: Hello Argonauts, Cinder Bug Meeting Etherpad *Medium* - Volume creation fails if an old signature is found when using the LVM driver. - *Status*: Unassigned. - LIO target doesn't support discard on thin volumes. - *Status*: Fix proposed to master. *Low* - Cinder PowerMax driver volume attach/detach hang issues. - *Status*: Fix proposed to master . - Huawei Oceanstor volume driver is not up to date with official Huawei's driver . - *Status*: Please propose the topic to the cinder meeting . *Incomplete* - [huawei] wrong hostname passed to os_brick. - *Status*: Waiting on keep the conversation on the Launchpad bug or propose a WIP patch to discuss the changes on the patch. -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Jun 7 12:49:01 2023 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 7 Jun 2023 08:49:01 -0400 Subject: [i18n] upstream investment: translations infrastructure engineer Message-ID: <556e0291-4327-ceb7-c056-b5e4063e3175@gmail.com> I've proposed an OpenStack "upstream investment opportunity" for an engineer to handle the Zanata -> Weblate re-plumbing for gerrit/zuul: https://review.opendev.org/c/openstack/governance/+/885379 Please leave comments if you have any. 
cheers, brian From dsneddon at redhat.com Wed Jun 7 14:29:08 2023 From: dsneddon at redhat.com (Dan Sneddon) Date: Wed, 7 Jun 2023 07:29:08 -0700 Subject: New repo for os-net-config? Message-ID: The TripleO project is being retired, and the Master/Zed branches are no longer being maintained. However the os-net-config project is still required for bare metal network configuration, and I believe the codebase may be used by other parties, however I don't have a good list of who is still using the codebase. We need to find another repo that can host os-net-config, hopefully on opendev.org, since the GitHub repo doesn't have the same level of CI that we have now on OpenStack Jenkins. Can anyone recommend or volunteer a repository that would be able to host os-net-config going forward? I expect it will be needed for another year or so at least, just for the use cases that I'm aware of. If anyone has still been using os-net-config outside of TripleO, can you please speak up, either on-list or to me directly? Thank you very much for any suggestions, -Dan Sneddon -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Wed Jun 7 15:36:58 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 7 Jun 2023 21:06:58 +0530 Subject: [cinder] Spec Review Day 09 June Message-ID: Hello Argonauts, Inspired from the nova team, we will be having our day of spec review. As decided in today's cinder meeting, we will be having spec review day on 09th June i.e. this Friday. Following are the details: Date: 09 June, 2023 Time: 1400-1500 UTC Connection Information: Meeting link will be shared in the cinder channel before the event Thanks Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Jun 7 15:51:50 2023 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 07 Jun 2023 08:51:50 -0700 Subject: New repo for os-net-config? In-Reply-To: References: Message-ID: On Wed, Jun 7, 2023, at 7:29 AM, Dan Sneddon wrote: > The TripleO project is being retired, and the Master/Zed branches are > no longer being maintained. However the os-net-config project is still > required for bare metal network configuration, and I believe the > codebase may be used by other parties, however I don't have a good list > of who is still using the codebase. We need to find another repo that > can host os-net-config, hopefully on opendev.org, since the GitHub repo > doesn't have the same level of CI that we have now on OpenStack Jenkins. Note we haven't run Jenkins since ~2017. > > > > Can anyone recommend or volunteer a repository that would be able to > host os-net-config going forward? I expect it will be needed for > another year or so at least, just for the use cases that I'm aware of. Is there some reason the existing repository won't work? > > > > If anyone has still been using os-net-config outside of TripleO, can > you please speak up, either on-list or to me directly? > > > > Thank you very much for any suggestions, > > -Dan Sneddon > > > > -- > Dan Sneddon | Senior Principal Software Engineer > dsneddon at redhat.com | redhat.com/cloud > dsneddon:irc | @dxs:twitter From tony at bakeyournoodle.com Wed Jun 7 16:27:39 2023 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 7 Jun 2023 11:27:39 -0500 Subject: New repo for os-net-config? 
In-Reply-To: References: Message-ID: On Wed, 7 Jun 2023 at 09:36, Dan Sneddon wrote: > > The TripleO project is being retired, and the Master/Zed branches are no longer being maintained. However the os-net-config project is still required for bare metal network configuration, and I believe the codebase may be used by other parties, however I don't have a good list of who is still using the codebase. We need to find another repo that can host os-net-config, hopefully on opendev.org, since the GitHub repo doesn't have the same level of CI that we have now on OpenStack Jenkins. s/Jenkins/Zuul/ :p > Can anyone recommend or volunteer a repository that would be able to host os-net-config going forward? I expect it will be needed for another year or so at least, just for the use cases that I'm aware of. What exactly is the problem with the repo as is? I guess if needed it could be adopted by a team that is actively using it? > If anyone has still been using os-net-config outside of TripleO, can you please speak up, either on-list or to me directly? I don't see anyone: $ beagle --server-url https://codesearch.openstack.org search --format=grep --file '.*requirements.*' os-net-config openstack/requirements:global-requirements.txt:171:os-net-config # Apache-2.0 openstack/tripleo-ansible:molecule-requirements.txt:22:os-net-config # Apache-2.0 openstack/tripleo-validations:requirements.txt:13:os-net-config>=7.1.0 # Apache-2.0 starlingx/root:build-tools/build-wheels/debian/openstack-requirements/ussuri/global-requirements.txt:210:os-net-config # Apache-2.0 starlingx/root:build-tools/build-wheels/debian/openstack-requirements/ussuri/upper-constraints.txt:351:os-net-config===12.3.5 starlingx/root:build-tools/build-wheels/debian/openstack-requirements/ussuri/upstream/36a2c2677e56afe0ece61cc0ade0f89285440ea0/global-requirements.txt:210:os-net-config # Apache-2.0 starlingx/root:build-tools/build-wheels/debian/openstack-requirements/ussuri/upstream/36a2c2677e56afe0ece61cc0ade0f89285440ea0/upper-constraints.txt:309:os-net-config===12.3.5 > > > Thank you very much for any suggestions, > > -Dan Sneddon > > > -- > Dan Sneddon | Senior Principal Software Engineer > dsneddon at redhat.com | redhat.com/cloud > dsneddon:irc | @dxs:twitter -- Yours Tony. From tony at bakeyournoodle.com Wed Jun 7 16:30:05 2023 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 7 Jun 2023 11:30:05 -0500 Subject: [cinder][all][tc][ops][stable] EOL EM branches In-Reply-To: References: <20230606173231.jrbuwdx2lupt7t7r@yuggoth.org> <18892247997.12b624f8c128041.5749054378410411347@ghanshyammann.com> <20230606194842.hegozazpxgi4ibye@yuggoth.org> Message-ID: On Tue, 6 Jun 2023 at 15:02, Tony Breeds wrote: > > On Tue, 6 Jun 2023 at 14:52, Jeremy Stanley wrote: > > > Maybe my verbosity made it unclear, so just in case, what I was > > trying to say is that I consider Extended Maintenance to be a failed > > experiment and agree we should be talking about either reverting to > > the prior process from before EM was a thing or finding an > > alternative process that doesn't have so many of the obvious > > shortcomings of EM. > > > > People said if we just stopped EOL'ing branches so soon they would > > show up and help make use of those branches. They didn't, and so the > > expected benefits never materialized. > > I agree, my main concern is that we do this well across the whole set > of projects, > not have a variety of projects doing different things. There is space on the summit schedule to discuss this as a community. 
https://vancouver2023.openinfra.dev/a/schedule#title=How%20do%20we%20end%20the%20Extended%20Maintenance&view=calendar I'll create an etherpad ASAP. Yours Tony. From jay at gr-oss.io Wed Jun 7 16:48:44 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 7 Jun 2023 09:48:44 -0700 Subject: New repo for os-net-config? In-Reply-To: References: Message-ID: Hey Dan, There are a couple other projects which are maintained, and might be able to take the role of os-net-config in your environment. Both of these are based on reading out information from a config drive and configuring a host: - https://docs.openstack.org/infra/glean/ - https://cloudinit.readthedocs.io/en/latest/ Neither of these may fit your use case exactly, but they continue to exist and are supported, whereas right now (other than you) there's been no interest in keeping some of these tripleo deliverables alive. Good luck, Jay Faulkner Ironic PTL On Wed, Jun 7, 2023 at 7:46?AM Dan Sneddon wrote: > The TripleO project is being retired, and the Master/Zed branches are no > longer being maintained. However the os-net-config project is still > required for bare metal network configuration, and I believe the codebase > may be used by other parties, however I don't have a good list of who is > still using the codebase. We need to find another repo that can host > os-net-config, hopefully on opendev.org, since the GitHub repo doesn't > have the same level of CI that we have now on OpenStack Jenkins. > > > Can anyone recommend or volunteer a repository that would be able to host > os-net-config going forward? I expect it will be needed for another year or > so at least, just for the use cases that I'm aware of. > > > If anyone has still been using os-net-config outside of TripleO, can you > please speak up, either on-list or to me directly? > > > Thank you very much for any suggestions, > > -Dan Sneddon > > > -- > Dan Sneddon | Senior Principal Software Engineer > dsneddon at redhat.com | redhat.com/cloud > dsneddon:irc | @dxs:twitter > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Wed Jun 7 17:07:59 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 7 Jun 2023 19:07:59 +0200 Subject: New repo for os-net-config? In-Reply-To: References: Message-ID: I want to add to this that we also do maintain an ansible role [1], that performs network configuration of hosts using systemd-networkd. Role is tested against Ubuntu 20.04 and 22.04, CentOS Stream 9, Rocky 9 and Debian 11. [1] https://opendev.org/openstack/ansible-role-systemd_networkd ??, 7 ???. 2023??. ? 18:51, Jay Faulkner : > > Hey Dan, > > There are a couple other projects which are maintained, and might be able to take the role of os-net-config in your environment. Both of these are based on reading out information from a config drive and configuring a host: > - https://docs.openstack.org/infra/glean/ > - https://cloudinit.readthedocs.io/en/latest/ > > Neither of these may fit your use case exactly, but they continue to exist and are supported, whereas right now (other than you) there's been no interest in keeping some of these tripleo deliverables alive. > > Good luck, > Jay Faulkner > Ironic PTL > > On Wed, Jun 7, 2023 at 7:46?AM Dan Sneddon wrote: >> >> The TripleO project is being retired, and the Master/Zed branches are no longer being maintained. 
However the os-net-config project is still required for bare metal network configuration, and I believe the codebase may be used by other parties, however I don't have a good list of who is still using the codebase. We need to find another repo that can host os-net-config, hopefully on opendev.org, since the GitHub repo doesn't have the same level of CI that we have now on OpenStack Jenkins. >> >> >> Can anyone recommend or volunteer a repository that would be able to host os-net-config going forward? I expect it will be needed for another year or so at least, just for the use cases that I'm aware of. >> >> >> If anyone has still been using os-net-config outside of TripleO, can you please speak up, either on-list or to me directly? >> >> >> Thank you very much for any suggestions, >> >> -Dan Sneddon >> >> >> -- >> Dan Sneddon | Senior Principal Software Engineer >> dsneddon at redhat.com | redhat.com/cloud >> dsneddon:irc | @dxs:twitter From gmann at ghanshyammann.com Wed Jun 7 17:16:05 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 07 Jun 2023 10:16:05 -0700 Subject: [cinder][all][tc][ops][stable] EOL EM branches In-Reply-To: <20230606194842.hegozazpxgi4ibye@yuggoth.org> References: <20230606173231.jrbuwdx2lupt7t7r@yuggoth.org> <18892247997.12b624f8c128041.5749054378410411347@ghanshyammann.com> <20230606194842.hegozazpxgi4ibye@yuggoth.org> Message-ID: <18896dbc4d7.deb1706c39430.148593531405490350@ghanshyammann.com> ---- On Tue, 06 Jun 2023 12:48:43 -0700 Jeremy Stanley wrote --- > On 2023-06-06 12:17:23 -0700 (-0700), Ghanshyam Mann wrote: > [...] > > This is true but I think this is the point where things are > > becoming difficult. Even we do not need to but we as community > > developers keep fixing the EM gate, at least I can tell from my QA > > experience for this. We should stop at some line but in reality, > > we end up doing it. > [...] > > Maybe my verbosity made it unclear, so just in case, what I was > trying to say is that I consider Extended Maintenance to be a failed > experiment and agree we should be talking about either reverting to > the prior process from before EM was a thing or finding an > alternative process that doesn't have so many of the obvious > shortcomings of EM. > > People said if we just stopped EOL'ing branches so soon they would > show up and help make use of those branches. They didn't, and so the > expected benefits never materialized. I agree. If I see the main overhead in EM maintenance is keeping testing green. it is not easy to keep 11 branches (including Em, supported stable and master) testing up to date. My point is if we remove all the integration testing (can keep pep8 and unit tests) at the time the branch move to EM will solve the problem that the upstream community faces to maintain EM branches. -gmann > -- > Jeremy Stanley > From fungi at yuggoth.org Wed Jun 7 17:35:19 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Jun 2023 17:35:19 +0000 Subject: [cinder][all][tc][ops][stable] EOL EM branches In-Reply-To: <18896dbc4d7.deb1706c39430.148593531405490350@ghanshyammann.com> References: <20230606173231.jrbuwdx2lupt7t7r@yuggoth.org> <18892247997.12b624f8c128041.5749054378410411347@ghanshyammann.com> <20230606194842.hegozazpxgi4ibye@yuggoth.org> <18896dbc4d7.deb1706c39430.148593531405490350@ghanshyammann.com> Message-ID: <20230607173518.fkp54hyab5fueoqj@yuggoth.org> On 2023-06-07 10:16:05 -0700 (-0700), Ghanshyam Mann wrote: [...] > I agree. If I see the main overhead in EM maintenance is keeping > testing green. 
it is not easy to keep 11 branches (including EM, > supported stable, and master) testing up to date. My point is that > removing all the integration testing (keeping pep8 and unit tests) > at the time a branch moves to EM will solve the problem that the > upstream community faces in maintaining EM branches.

The main counterargument I've heard repeated against this approach is that the project teams feel responsible for the quality of changes merging to those branches, and don't believe that lightly tested or nearly untested backports (from an integration perspective) adequately represent their quality standards. They'd rather close out the branches completely than have to explain that they contain basically untested backports (communication that they further fear will fall on deaf ears, leaving users angry or hurt when they discover it for themselves the hard way). -- Jeremy Stanley

From jay at gr-oss.io Wed Jun 7 17:38:03 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 7 Jun 2023 10:38:03 -0700 Subject: [cinder][all][tc][ops][stable] EOL EM branches In-Reply-To: <18896dbc4d7.deb1706c39430.148593531405490350@ghanshyammann.com> References: <20230606173231.jrbuwdx2lupt7t7r@yuggoth.org> <18892247997.12b624f8c128041.5749054378410411347@ghanshyammann.com> <20230606194842.hegozazpxgi4ibye@yuggoth.org> <18896dbc4d7.deb1706c39430.148593531405490350@ghanshyammann.com> Message-ID:

On Wed, Jun 7, 2023 at 10:23 AM Ghanshyam Mann wrote: > > ---- On Tue, 06 Jun 2023 12:48:43 -0700 Jeremy Stanley wrote --- > > On 2023-06-06 12:17:23 -0700 (-0700), Ghanshyam Mann wrote: > > [...] > > > This is true, but I think this is the point where things are > > > becoming difficult. Even when we do not need to, we as community > > > developers keep fixing the EM gate; at least I can tell this from my QA > > > experience. We should stop at some line, but in reality > > > we end up doing it. > > [...] > > > > Maybe my verbosity made it unclear, so just in case, what I was > > trying to say is that I consider Extended Maintenance to be a failed > > experiment and agree we should be talking about either reverting to > > the prior process from before EM was a thing or finding an > > alternative process that doesn't have so many of the obvious > > shortcomings of EM. > > > > People said if we just stopped EOL'ing branches so soon they would > > show up and help make use of those branches. They didn't, and so the > > expected benefits never materialized. > > I agree. As I see it, the main overhead in EM maintenance is keeping testing > green; > it is not easy to keep 11 branches (including EM, supported stable, and > master) > testing up to date. My point is that removing all the integration testing > (keeping pep8 > and unit tests) at the time a branch moves to EM will solve the problem > that the upstream > community faces in maintaining EM branches. >

This, IMO, is akin to retiring the branches. How could I, as a developer, patch an older version of a branch against a vulnerability of the style of the recent Cinder one, where the impact is felt cross-project, and you clearly need a working dev environment (such as devstack)?
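(For anyone who has not stood one up recently, even the minimal environment for exercising such a fix is a full integrated deployment. A rough sketch, assuming the EM branch still stacks cleanly, is:

    git clone https://opendev.org/openstack/devstack -b stable/xena
    cd devstack
    printf '[[local|localrc]]\nADMIN_PASSWORD=secret\nDATABASE_PASSWORD=secret\nRABBIT_PASSWORD=secret\nSERVICE_PASSWORD=secret\n' > local.conf
    ./stack.sh  # pulls the stable/xena branches of the services

Once devstack and its dependencies rot on a branch, even this manual verification path is gone.)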
If, as you propose, we stopped doing any integration testing on branches older than 18 months, we would be de facto retiring the integration testing infrastructure, which shares a huge amount of DNA with our dev tooling infrastructure. I don't know what the answer is; but this as a middle ground seems like the worst of all worlds: the branches still exist, and we will not have the tools to (manually, not just CI) test meaningful changes on them. Just a thought! - Jay Faulkner, Ironic PTL

From gmann at ghanshyammann.com Wed Jun 7 17:46:35 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 07 Jun 2023 10:46:35 -0700 Subject: [cinder][all][tc][ops][stable] EOL EM branches In-Reply-To: References: <20230606173231.jrbuwdx2lupt7t7r@yuggoth.org> <18892247997.12b624f8c128041.5749054378410411347@ghanshyammann.com> <20230606194842.hegozazpxgi4ibye@yuggoth.org> <18896dbc4d7.deb1706c39430.148593531405490350@ghanshyammann.com> Message-ID: <18896f7b35c.d76eef8a40628.8327667355238766257@ghanshyammann.com>

---- On Wed, 07 Jun 2023 10:38:03 -0700 Jay Faulkner wrote --- > > > On Wed, Jun 7, 2023 at 10:23 AM Ghanshyam Mann gmann at ghanshyammann.com> wrote: > > ---- On Tue, 06 Jun 2023 12:48:43 -0700 Jeremy Stanley wrote --- > > On 2023-06-06 12:17:23 -0700 (-0700), Ghanshyam Mann wrote: > > [...] > > > This is true, but I think this is the point where things are > > > becoming difficult. Even when we do not need to, we as community > > > developers keep fixing the EM gate; at least I can tell this from my QA > > > experience. We should stop at some line, but in reality > > > we end up doing it. > > [...] > > > > Maybe my verbosity made it unclear, so just in case, what I was > > trying to say is that I consider Extended Maintenance to be a failed > > experiment and agree we should be talking about either reverting to > > the prior process from before EM was a thing or finding an > > alternative process that doesn't have so many of the obvious > > shortcomings of EM. > > > > People said if we just stopped EOL'ing branches so soon they would > > show up and help make use of those branches. They didn't, and so the > > expected benefits never materialized. > > I agree. As I see it, the main overhead in EM maintenance is keeping testing green. > it is not easy to keep 11 branches (including EM, supported stable, and master) > testing up to date. My point is that removing all the integration testing (keeping pep8 > and unit tests) at the time a branch moves to EM will solve the problem that the upstream > community faces in maintaining EM branches. > > > This, IMO, is akin to retiring the branches. How could I, as a developer, patch an older version of a branch against a vulnerability of the style of the recent Cinder one, where the impact is felt cross-project, and you clearly need a working dev environment (such as devstack)? > If, as you propose, we stopped doing any integration testing on branches older than 18 months, we would be de facto retiring the integration testing infrastructure, which shares a huge amount of DNA with our dev tooling infrastructure.

It is not the same as retiring: we can still run unit/functional tests, and the changes will have been tested all the way through the supported stable branches, so those fixes were tested at some level. And there cannot be a case where I apply a fix directly to the EM branch.
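To make the proposal concrete, the reduced Zuul config for an EM branch could look roughly like this (a sketch; the template name is made up, the tox jobs are the usual shared ones):

    - project-template:
        name: openstack-em-branch-jobs   # hypothetical template name
        check:
          jobs:
            - openstack-tox-pep8
            - openstack-tox-py38
        gate:
          jobs:
            - openstack-tox-pep8
            - openstack-tox-py38

Everything integration-level (devstack/tempest jobs) would be dropped from the branch at the time of the EM transition.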
In our current doc also, we have a minimum testing expectation, and I am just saying to reduce the testing at the time a branch moves to EM, instead of waiting for the gate to break and getting frustrated while backporting.

EM, as we have meant since the start, is not upstream maintained/guaranteed, so leaving the testing expectation to downstream is no big change from what the current policy is.

-gmann

> > I don't know what the answer is; but this as a middle ground seems like the worst of all worlds: the branches still exist, and we will not have the tools to (manually, not just CI) test meaningful changes on them. > Just a thought! > - Jay Faulkner, Ironic PTL

From dsneddon at redhat.com Wed Jun 7 19:07:59 2023 From: dsneddon at redhat.com (Dan Sneddon) Date: Wed, 7 Jun 2023 12:07:59 -0700 Subject: New repo for os-net-config? In-Reply-To: References: Message-ID:

On Wed, Jun 7, 2023 at 9:00 AM Clark Boylan wrote: > On Wed, Jun 7, 2023, at 7:29 AM, Dan Sneddon wrote: > > The TripleO project is being retired, and the Master/Zed branches are > > no longer being maintained. However the os-net-config project is still > > required for bare metal network configuration, and I believe the > > codebase may be used by other parties; however, I don't have a good list > > of who is still using the codebase. We need to find another repo that > > can host os-net-config, hopefully on opendev.org, since the GitHub repo > > doesn't have the same level of CI that we have now on OpenStack Jenkins. > > Note we haven't run Jenkins since ~2017.

I misspoke, I meant to say Gerrit. GitHub is workable, but the review/merge interface is not as good, IMHO.

> > Can anyone recommend or volunteer a repository that would be able to > > host os-net-config going forward? I expect it will be needed for > > another year or so at least, just for the use cases that I'm aware of. > > Is there some reason the existing repository won't work?

Currently os-net-config is a part of the TripleO project. Since TripleO is retiring and being replaced by something hosted on GitHub, we are no longer maintaining the Master/Zed branches.

> > > > If anyone has still been using os-net-config outside of TripleO, can > > you please speak up, either on-list or to me directly? > > > > > > > > Thank you very much for any suggestions, > > > > -Dan Sneddon > > > > > > > > -- > > Dan Sneddon | Senior Principal Software Engineer > > dsneddon at redhat.com | redhat.com/cloud > > dsneddon:irc | @dxs:twitter > > -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter

From madalex666233 at gmail.com Wed Jun 7 19:11:18 2023 From: madalex666233 at gmail.com (Alex Z) Date: Wed, 7 Jun 2023 12:11:18 -0700 Subject: Question About BGP Dynamic Routing, Floating IP, and SNAT Message-ID:

Hi Everyone, Hope you are all doing well. I'm a beginner to OpenStack and Neutron and have now run into an issue with SNAT and a shared floating IP. I've already deployed a neutron network which uses BGP to announce floating IPs to the PE (Provider Edge router), and everything works as expected when I assign a public floating IP (e.g., 123.0.0.10/24) to VMs. But when I tried to use the floating IP port-forwarding function with floating IP 123.0.0.20/24 and the rule (internal_ip 10.10.10.10, internal_port 5555, external_port 64000), and assigned a private IP (10.10.10.10/24) to a VM, the floating IP 123.0.0.20 won't be advertised through BGP.
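For reference, I created the floating IP and the forwarding rule with commands like these (the network name and port ID below are placeholders for my actual values):

    openstack floating ip create --floating-ip-address 123.0.0.20 provider-net  # 'provider-net' is a placeholder
    openstack floating ip port forwarding create \
        --internal-ip-address 10.10.10.10 \
        --port <internal-port-id> \
        --internal-protocol-port 5555 \
        --external-protocol-port 64000 \
        --protocol tcp \
        123.0.0.20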
May I have some suggestions about how I could get this fixed, or will neutron just not work this way? FYI, 1. Per my understanding, the port_forwardings rule will make the port act in a SNAT role and forward any packets that reach it with destination 123.0.0.20:64000 to the private IP 10.10.10.10/24. 2. The IP address can be reached within the neutron network. 3. The PE IP address, CE IP address, and floating IP gateway are using the same subnet A and subnet pool (192.168.123.0/24), while the floating IP belongs to subnet B and subnet pool (123.0.0.0/24); both subnets belong to the provider network. 4. Only a floating IP that is assigned to a specific VM will be advertised to the PE through BGP. 5. A floating IP that is assigned to the port of a router in the neutron network won't be advertised, even if the IP is activated and is reachable internally. Sincerely, Alex

From juliaashleykreger at gmail.com Wed Jun 7 19:35:20 2023 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 7 Jun 2023 12:35:20 -0700 Subject: New repo for os-net-config? In-Reply-To: References: Message-ID:

On Wed, Jun 7, 2023 at 12:13 PM Dan Sneddon wrote: > > > On Wed, Jun 7, 2023 at 9:00 AM Clark Boylan wrote: > >> On Wed, Jun 7, 2023, at 7:29 AM, Dan Sneddon wrote: >> > The TripleO project is being retired, and the Master/Zed branches are >> > no longer being maintained. However the os-net-config project is still >> > required for bare metal network configuration, and I believe the >> > codebase may be used by other parties; however, I don't have a good list >> > of who is still using the codebase. We need to find another repo that >> > can host os-net-config, hopefully on opendev.org, since the GitHub >> repo >> > doesn't have the same level of CI that we have now on OpenStack Jenkins. >> >> Note we haven't run Jenkins since ~2017. > > > I misspoke, I meant to say Gerrit. GitHub is workable, but the > review/merge interface is not as good, IMHO. > > >> > Can anyone recommend or volunteer a repository that would be able to >> > host os-net-config going forward? I expect it will be needed for >> > another year or so at least, just for the use cases that I'm aware of. >> >> Is there some reason the existing repository won't work? > > > Currently os-net-config is a part of the TripleO project. Since TripleO is > retiring and being replaced by something hosted on GitHub, we are no longer > maintaining the Master/Zed branches. > >

Its governance could be moved and the repo stay as-is, that is if a project wishes to adopt it to keep it in OpenStack governance.

> >> > >> > If anyone has still been using os-net-config outside of TripleO, can >> > you please speak up, either on-list or to me directly? >> > >> > >> > >> > Thank you very much for any suggestions, >> > >> > -Dan Sneddon >> > >> > >> > -- >> > Dan Sneddon | Senior Principal Software Engineer >> > dsneddon at redhat.com | redhat.com/cloud >> > dsneddon:irc | @dxs:twitter >> >> -- > Dan Sneddon | Senior Principal Software Engineer > dsneddon at redhat.com | redhat.com/cloud > dsneddon:irc | @dxs:twitter >

From fungi at yuggoth.org Wed Jun 7 19:42:44 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Jun 2023 19:42:44 +0000 Subject: New repo for os-net-config?
In-Reply-To: References: Message-ID: <20230607194243.nan7j3loophhp6cy@yuggoth.org>

On 2023-06-07 12:07:59 -0700 (-0700), Dan Sneddon wrote: > On Wed, Jun 7, 2023 at 9:00 AM Clark Boylan wrote: [...] > > Is there some reason the existing repository won't work? > > Currently os-net-config is a part of the TripleO project. Since > TripleO is retiring and being replaced by something hosted on > GitHub, we are no longer maintaining the Master/Zed branches. [...]

Maybe you misunderstood. To restate: Is there any reason the people who want to use and maintain the openstack/os-net-config master and stable/zed (or other) branches can't just adopt the project? It's well within the TC's power to grant control of that repository to another project team who isn't TripleO.

Which use cases specifically (outside of Red Hat's lingering interest in the stable/wallaby branches of TripleO repositories) are you referring to? -- Jeremy Stanley

From rosmaita.fossdev at gmail.com Wed Jun 7 19:42:44 2023 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 7 Jun 2023 15:42:44 -0400 Subject: [cinder] Vancouver PTG scheduling Message-ID: <6e057760-1866-323d-db17-b18c41d00434@gmail.com>

Hello Vancouver-bound Argonauts, In trying to put together a doodle poll, I found it was way easier to just use an etherpad. So please navigate on over to it and block out your unavailability before the Cinder Spec Review meeting at 1400 UTC on Friday 9 June. https://etherpad.opendev.org/p/vancouver2023-cinder-ptg-scheduling thanks, brian

From gmann at ghanshyammann.com Wed Jun 7 21:29:27 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 07 Jun 2023 14:29:27 -0700 Subject: [all][ops][rbac] RBAC feedback sessions in Vancouver Summit Message-ID: <18897c3bca5.1077a294f47286.2646874156604086347@ghanshyammann.com>

Hello Everyone, If you are planning to attend the Vancouver Summit, we will have a forum discussion on Tuesday to provide the RBAC updates and collect operators' feedback. "OpenStack RBAC: updates & operators feedback" - Tue, June 13, 1:45pm - 2:15pm | Vancouver Convention Centre - Room 9 I have created an etherpad to collect the feedback; feel free to write your feedback there if you cannot attend this session. - https://etherpad.opendev.org/p/rbac-operator-feedback-vancouver2023 -gmann

From ces.eduardo98 at gmail.com Wed Jun 7 21:41:51 2023 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Wed, 7 Jun 2023 18:41:51 -0300 Subject: [manila][ptg] Vancouver PTG Message-ID:

Hello, zorillas and interested stackers. In the previous weekly meeting I mentioned the tentative time slot for next week's PTG, and I got confirmation. We will host a hybrid session for our PTG on Thursday, 9:40 AM to 12:10 PM PDT. More details about the location, topics, and connecting to the meeting bridge will be available in the PTG etherpad [1]. You can still add your topics to the etherpad until Friday (June 9th). [1] https://etherpad.opendev.org/p/manila-openinfra-2023 Regards, carloss
From ces.eduardo98 at gmail.com Wed Jun 7 21:46:14 2023 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Wed, 7 Jun 2023 18:46:14 -0300 Subject: [manila] Cancelling June 8th and June 15th weekly meetings Message-ID:

Hello, Since June 8th is a holiday in some countries and that will impact the weekly meeting's attendance numbers, we are cancelling this meeting. The weekly meeting of June 15th is also being cancelled, as some members will be attending the OpenInfra Summit. The next upstream weekly meeting will be on June 22nd. Regards, carloss

From fungi at yuggoth.org Wed Jun 7 22:12:55 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Jun 2023 22:12:55 +0000 Subject: [security-sig] PTG schedule Message-ID: <20230607221254.qydoylw5hwdblkxo@yuggoth.org>

I've booked a PTG table for the two "slots" immediately prior to Thursday's lunch break (11:00-12:50 Vancouver time). See https://ptg.opendev.org/etherpads.html for a link to the notepad for it; I've added some ideas for things we could do or discuss, but add anything you'd like to cover and I'm happy to pivot based on attendee interest. Given the timing, we can also turn it into an early lunch outing or whatever. -- Jeremy Stanley

From dsneddon at redhat.com Wed Jun 7 22:44:16 2023 From: dsneddon at redhat.com (Dan Sneddon) Date: Wed, 7 Jun 2023 15:44:16 -0700 Subject: New repo for os-net-config? In-Reply-To: <20230607194243.nan7j3loophhp6cy@yuggoth.org> References: <20230607194243.nan7j3loophhp6cy@yuggoth.org> Message-ID:

On Wed, Jun 7, 2023 at 12:49 PM Jeremy Stanley wrote: > On 2023-06-07 12:07:59 -0700 (-0700), Dan Sneddon wrote: > > On Wed, Jun 7, 2023 at 9:00 AM Clark Boylan > wrote: > [...] > > > Is there some reason the existing repository won't work? > > > > Currently os-net-config is a part of the TripleO project. Since > > TripleO is retiring and being replaced by something hosted on > > GitHub, we are no longer maintaining the Master/Zed branches. > [...] > > Maybe you misunderstood. To restate: Is there any reason the people > who want to use and maintain the openstack/os-net-config master and > stable/zed (or other) branches can't just adopt the project? It's > well within the TC's power to grant control of that repository to > another project team who isn't TripleO. > > Which use cases specifically (outside of Red Hat's lingering > interest in the stable/wallaby branches of TripleO repositories) are > you referring to? > -- > Jeremy Stanley >

The people who want to continue to use os-net-config are developing the replacement for TripleO, but they have moved to GitHub. That's an option for os-net-config, but not my first preference. I have over the years heard of companies using os-net-config for various use cases. It's possible that posting to openstack-discuss won't reach any of those users, but it doesn't hurt to ask here.

-Dan

-- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter
From tkajinam at redhat.com Thu Jun 8 03:38:19 2023 From: tkajinam at redhat.com (Takashi Kajinami) Date: Thu, 8 Jun 2023 12:38:19 +0900 Subject: [puppet] Sessions in June 2023 PTG Message-ID:

Hello, I'm attending the upcoming OpenInfra Summit and PTG at Vancouver, so I would like to moderate some puppet sessions in the PTG. I've reserved slots from 14:30 to 16:20 on Wednesday. However, we can be flexible, so please let me know if you are interested but have any conflicts. https://etherpad.opendev.org/p/vancouver-june2023-puppet-openstack I've added a few topics, including the recent discussion about module modernization. However, it'd be nice if we can have a few more topics, especially any feedback from users or (potential) new contributors. Please add your name and topics if you are planning to join us. Thank you, Takashi

From wodel.youchi at gmail.com Thu Jun 8 10:26:11 2023 From: wodel.youchi at gmail.com (wodel youchi) Date: Thu, 8 Jun 2023 11:26:11 +0100 Subject: [kolla-ansible][Yoga] Manila share creation stuck in creating status cephfs backend In-Reply-To: References: Message-ID:

Hi, Anyone? I am using the Admin account on OpenStack and I cannot create any share; they get stuck in the creating state. And when trying to delete them, I get the "not authorized" error message. Regards.

On Tue, Jun 6, 2023 at 18:35, wodel youchi wrote: > Hi, > > We are facing a strange problem when creating manila shares. We use > cephfs as the backend, and share creation is stuck in the creating state. > > If we try to delete the share, we get an error "you are not authorized to > delete share ...." > We have to use force-delete in the CLI to be able to delete it. > > We don't have any indication in manila's log files; there is no error or > exception. > > It seems like an access rights problem, but we don't know where to look. > > Regards.

From smooney at redhat.com Thu Jun 8 10:53:14 2023 From: smooney at redhat.com (smooney at redhat.com) Date: Thu, 08 Jun 2023 11:53:14 +0100 Subject: New repo for os-net-config? In-Reply-To: References: <20230607194243.nan7j3loophhp6cy@yuggoth.org> Message-ID: <0fcc7dbc078566c3f692f6824e3248f4cfefeb3b.camel@redhat.com>

On Wed, 2023-06-07 at 15:44 -0700, Dan Sneddon wrote: > On Wed, Jun 7, 2023 at 12:49 PM Jeremy Stanley wrote: > > > On 2023-06-07 12:07:59 -0700 (-0700), Dan Sneddon wrote: > > > On Wed, Jun 7, 2023 at 9:00 AM Clark Boylan > > wrote: > > [...] > > > > Is there some reason the existing repository won't work? > > > > > > Currently os-net-config is a part of the TripleO project. Since > > > TripleO is retiring and being replaced by something hosted on > > > GitHub, we are no longer maintaining the Master/Zed branches. > > [...] > > > > Maybe you misunderstood. To restate: Is there any reason the people > > who want to use and maintain the openstack/os-net-config master and > > stable/zed (or other) branches can't just adopt the project? It's > > well within the TC's power to grant control of that repository to > > another project team who isn't TripleO. > > > > Which use cases specifically (outside of Red Hat's lingering > > interest in the stable/wallaby branches of TripleO repositories) are > > you referring to?
> > -- > > Jeremy Stanley > > > > > The people who want to continue to use os-net-config are developing the > replacement for TripleO, but they have moved to GitHub. That's an option > for os-net-config, but not my first preference. > I have over the years heard of companies using os-net-config for various > use cases. It's possible that posting to openstack-discuss won't reach any > of those users, but it doesn't hurt to ask here.

I will need to reread the thread, but I thought that there was a reference to bare metal use cases; I had assumed that meant Ironic? Just because we are using GitHub for the replacement for TripleO is not a reason to move things to GitHub by default. Moving to GitHub for code review, as I'm sure you are aware, means a much worse code review interface. We would be losing the release tooling and the ability to publish to PyPI; the CI would need to be ported, among other things. There is also an open question of if/how much of os-net-config will continue to be used for the operator-based installer. Alternatives are being considered, although we have known gaps; no decision has been made on whether we will continue with os-net-config, replace it with nmstate, or use a hybrid of the two. A quick search https://codesearch.opendev.org/?q=os-net-config&i=nope&literal=nope&files=&excludeFiles=&repos= seems to indicate that os-net-config is used by: - openstack-virtual-baremetal - networking-bigswitch - possibly starlingx So I would suggest moving it to Ironic governance and keeping the repo as-is to support the virtual baremetal use case. > > -Dan > >

From murilo at evocorp.com.br Thu Jun 8 12:19:11 2023 From: murilo at evocorp.com.br (Murilo Morais) Date: Thu, 8 Jun 2023 09:19:11 -0300 Subject: Monitoring Message-ID:

Good morning everybody! Guys, which monitoring stack do you recommend to monitor host and VM resources? Or what monitoring strategies do you recommend?

From rafaelweingartner at gmail.com Thu Jun 8 12:30:33 2023 From: rafaelweingartner at gmail.com (Rafael Weingärtner) Date: Thu, 8 Jun 2023 09:30:33 -0300 Subject: Monitoring In-Reply-To: References: Message-ID:

Ceilometer, especially with the dynamic pollsters subsystem. Then you can leverage Gnocchi as a storage backend for the monitored metrics.

On Thu, Jun 8, 2023 at 9:25 AM Murilo Morais wrote: > Good morning everybody! > > Guys, which monitoring stack do you recommend to monitor host and VM > resources? Or what monitoring strategies do you recommend? > -- Rafael Weingärtner

From rosmaita.fossdev at gmail.com Thu Jun 8 13:39:28 2023 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 8 Jun 2023 09:39:28 -0400 Subject: [cinder][all][tc][ops][stable] EOL EM branches In-Reply-To: <18896f7b35c.d76eef8a40628.8327667355238766257@ghanshyammann.com> References: <20230606173231.jrbuwdx2lupt7t7r@yuggoth.org> <18892247997.12b624f8c128041.5749054378410411347@ghanshyammann.com> <20230606194842.hegozazpxgi4ibye@yuggoth.org> <18896dbc4d7.deb1706c39430.148593531405490350@ghanshyammann.com> <18896f7b35c.d76eef8a40628.8327667355238766257@ghanshyammann.com> Message-ID:

On 6/7/23 1:46 PM, Ghanshyam Mann wrote: > ---- On Wed, 07 Jun 2023 10:38:03 -0700 Jay Faulkner wrote --- > > > > On Wed, Jun 7, 2023 at 10:23 AM Ghanshyam Mann gmann at ghanshyammann.com> wrote: > > > > ---- On Tue, 06 Jun 2023 12:48:43 -0700 Jeremy Stanley
wrote --- > > > On 2023-06-06 12:17:23 -0700 (-0700), Ghanshyam Mann wrote: [snip] > > I agree. As I see it, the main overhead in EM maintenance is keeping testing green. > > it is not easy to keep 11 branches (including EM, supported stable, and master) > > testing up to date. My point is that removing all the integration testing (keeping pep8 > > and unit tests) at the time a branch moves to EM will solve the problem that the upstream > > community faces in maintaining EM branches. > > > > > > This, IMO, is akin to retiring the branches. How could I, as a developer, patch an older version of a branch against a vulnerability of the style of the recent Cinder one, where the impact is felt cross-project, and you clearly need a working dev environment (such as devstack)? > > If, as you propose, we stopped doing any integration testing on branches older than 18 months, we would be de facto retiring the integration testing infrastructure, which shares a huge amount of DNA with our dev tooling infrastructure. > > > > It is not the same as retiring: we can still run unit/functional tests, and the changes will have been > > tested all the way through the supported stable branches, so those fixes were tested at some level. And there cannot be a > > case where I apply a fix directly to the EM branch.

I agree with Jay on this. IMO, what keeps devstack functional in the EM branches is that it's needed to run tempest tests. If we rely on unit/functional tests only, that motivation goes away.

Further, as Jay points out, a working OpenStack deployment requires a harmonization of multiple components beyond the individual projects' unit/functional tests. For example, this (invalid) bug: https://bugs.launchpad.net/cinder/+bug/2020382 was reported after backporting a patch that had gone through the normal backport process upstream from master through stable/xena without skipping any branches; the xena patch applied cleanly and I'm pretty sure it passed unit and functional tests (I didn't run them myself). The issue did not occur until the code was actually used by cinder interacting with a real nova.

So relying on unit and functional tests only is not adequate. When I approve a backport, I'm supposed to be fairly confident that the change is low-risk and will not cause regressions. Clean tempest jobs give me some useful evidence when making that assessment. A patch that passes CI is not guaranteed to be OK, but if it causes a CI failure, we know it's not OK.

> In our current doc also, we have a minimum testing expectation, and I am just saying to reduce the testing > at the time a branch moves to EM, instead of waiting for the gate to break and getting frustrated while backporting. > > EM, as we have meant since the start, is not upstream maintained/guaranteed, so leaving the testing expectation to > downstream is no big change from what the current policy is.

That's correct, but as I think has been mentioned elsewhere in this thread, this has not proved to be workable. The stable cores on the project teams take their work seriously, and even though the docs say that EM branches should be treated caveat emptor, we still feel that our approval should mean something. So even though the docs say there's no guarantee on EM branches, nobody wants to have their name show up as approving a patch that caused a regression, even in an EM branch.

Further (and I don't think I'm speaking only for myself here), I don't like the idea of other people merging unvetted stuff into our codebase.
But that hasn't become an issue, because as Jeremy pointed out earlier, no one outside the project teams has showed up to take ownership of EM branches. Though (to get back to my real point here) if such people did show up, I would expect them to also maintain the full CI, including tempest integration tests, for the reasons I mentioned earlier. So I'm against the idea that unit/functional testing is adequate for EM branches.

> > -gmann

(By the way, I am not implying that gmann is in favor of poor QA. He's articulated clearly what the current docs say about EM branches. But he's also been heroically responsible for keeping a lot of the EM integration gates functional!)

> > > > I don't know what the answer is; but this as a middle ground seems like the worst of all worlds: the branches still exist, and we will not have the tools to (manually, not just CI) test meaningful changes on them. > > Just a thought! > > - Jay Faulkner, Ironic PTL >

From gmann at ghanshyammann.com Thu Jun 8 15:37:02 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 08 Jun 2023 08:37:02 -0700 Subject: [cinder][all][tc][ops][stable] EOL EM branches In-Reply-To: References: <20230606173231.jrbuwdx2lupt7t7r@yuggoth.org> <18892247997.12b624f8c128041.5749054378410411347@ghanshyammann.com> <20230606194842.hegozazpxgi4ibye@yuggoth.org> <18896dbc4d7.deb1706c39430.148593531405490350@ghanshyammann.com> <18896f7b35c.d76eef8a40628.8327667355238766257@ghanshyammann.com> Message-ID: <1889ba771fa.dc5178a4122684.3942458206927791638@ghanshyammann.com>

---- On Thu, 08 Jun 2023 06:39:28 -0700 Brian Rosmaita wrote --- > On 6/7/23 1:46 PM, Ghanshyam Mann wrote: > > ---- On Wed, 07 Jun 2023 10:38:03 -0700 Jay Faulkner wrote --- > > > > > > On Wed, Jun 7, 2023 at 10:23 AM Ghanshyam Mann gmann at ghanshyammann.com> wrote: > > > > > > ---- On Tue, 06 Jun 2023 12:48:43 -0700 Jeremy Stanley wrote --- > > > > On 2023-06-06 12:17:23 -0700 (-0700), Ghanshyam Mann wrote: > [snip] > > > I agree. As I see it, the main overhead in EM maintenance is keeping testing green. > > > it is not easy to keep 11 branches (including EM, supported stable, and master) > > > testing up to date. My point is that removing all the integration testing (keeping pep8 > > > and unit tests) at the time a branch moves to EM will solve the problem that the upstream > > > community faces in maintaining EM branches. > > > > > > > > > This, IMO, is akin to retiring the branches. How could I, as a developer, patch an older version of a branch against a vulnerability of the style of the recent Cinder one, where the impact is felt cross-project, and you clearly need a working dev environment (such as devstack)?
> > Further, as Jay points out, a working OpenStack deployment requires a > harmonization of multiple components beyond the individual projects' > unit/functional tests. For example, this (invalid) bug: > https://bugs.launchpad.net/cinder/+bug/2020382 > was reported after backporting a patch that had gone through the normal > backport process upstream from master through stable/xena without > skipping any branches; the xena patch applied cleanly and I'm pretty > sure it passed unit and functional tests (I didn't run them myself). > The issue did not occur until the code was actually used by cinder > interacting with a real nova. > > So relying on unit and functional tests only is not adequate. When I > approve a backport, I'm supposed to be fairly confident that the change > is low-risk and will not cause regressions. Clean tempest jobs give me > some useful evidence when making that assessment. A patch that passes > CI is not guaranteed to be OK, but if it causes a CI failure, we know > it's not OK. > > > In our current doc also, we have the minimum testing expectation and I am just saying to reduce the testing > > at the time branch moved to EM instead of waiting for the gate to break and getting frustrated while backporting. > > > > EM as we meant since starting it not upstream maintained/guaranteed things so leaving testing expectation at > > downstream is no bug change than what current policy is. > > That's correct, but as I think has been mentioned elsewhere in this > thread, this has not proved to be workable. The stable cores on the > project teams take their work seriously, and even though the docs say > that EM branches should be treated caveat emptor, we still feel that our > approval should mean something. So even though the docs say there's no > guarantee on EM branches, nobody wants to have their name show up as > approving a patch that caused a regression, even in an EM branch. > > Further (and I don't think I'm speaking only for myself here), I don't > like the idea of other people merging unvetted stuff into our codebase. > But that hasn't become an issue, because as Jeremy pointed out earlier, > no one outside the project teams has showed up to take ownership of EM > branches. Though (to get back to my real point here) if such people did > show up, I would expect them to also maintain the full CI, including > tempest integration tests, for the reasons I mentioned earlier. So I'm > against the idea unit/functional testing is adequate for EM branches. I do not disagree with Jay and you on more and more testing, but I am saying reducing testing (which is what the original idea was) is one of the tradeoffs between keeping extended branches available for fixes for a long time and upstream maintenance costs. We are clearly at the stage where the upstream community cannot maintain them with proper testing. Either we have to remove the idea of EM or try any new idea that can add more cost in upstream maintenance. I still do not find it very odd that we do not guarantee the EM backport fixes testing but at the same time make sure they are tested all the way from master to supported stable branches backporting. Leave the complete testing to the downstream consumers to test properly before applying the fixes. > > > > > -gmann > > (By the way, I am not implying that gmann is in favor of poor QA. He's > articulated clearly what the current docs say about EM branches. But > he's also been heroically responsible for keeping a lot of the EM > integration gates functional!) 
Apart from gate maintenance, pinning the tempest/plugins versions also takes a lot of time. I am now starting the tempest/plugins pinning for the recent EM stable/xena, and it requires a large amount of time to test and pin compatible versions of tempest and its plugins on stable/xena.

-gmann

> > > > > > > I don't know what the answer is; but this as a middle ground seems like the worst of all worlds: the branches still exist, and we will not have the tools to (manually, not just CI) test meaningful changes on them. > > > Just a thought! > > > - Jay Faulkner, Ironic PTL > > >

From ilya_p at hotmail.com Thu Jun 8 15:50:30 2023 From: ilya_p at hotmail.com (Ilya Popov) Date: Thu, 8 Jun 2023 15:50:30 +0000 Subject: [cinder] Active-Active cluster support for allocated_capacity_gb Message-ID:

Hello, all! There is an issue with Active/Active cinder clusters. Each cinder-volume instance in the cluster holds its own local allocated_capacity_gb. When the cinder-volume service starts, it counts all volumes created on a particular pool and calculates allocated_capacity_gb accordingly. If one instance of cinder-volume creates a volume, it increases its local allocated_capacity_gb; if another instance deletes a volume, it decreases its local allocated_capacity_gb. And each instance of cinder-volume in the cluster reports its own local allocated_capacity_gb. This causes an incorrect allocated_capacity_gb to be presented by cinder get-pools --detail, and it is even possible to see a negative allocated_capacity_gb value for a pool. There is a reported issue about it: https://bugs.launchpad.net/cinder/+bug/1927186 There is a document about the different approaches to Active-Active cinder support, but there is nothing in it about the incorrect allocated_capacity_gb report: https://docs.openstack.org/cinder/latest/contributor/high_availability.html It looks like, without fixing allocated_capacity_gb, we have to list all volumes and calculate this value per pool ourselves (which is not very efficient and is resource consuming, by the way). Are there any plans to complete Active/Active support for cinder? For example, since the coordinator is a requirement for Active/Active support ("My recommendation is to do this right: configure the cluster option, remove the backend_host, and configure the coordinator.", https://lists.openstack.org/pipermail/openstack-discuss/2020-November/018853.html), it would be possible to move allocated_capacity_gb for each pool to redis, or to listen to the notification queue and change the local allocated_capacity_gb for each instance of cinder-volume accordingly.

From johnsomor at gmail.com Thu Jun 8 16:19:16 2023 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 8 Jun 2023 09:19:16 -0700 Subject: Octavia does not work properly. In-Reply-To: References: Message-ID:

Hi Gopal, You will have to inquire with the openstack-helm deployment tooling team. This is an openstack-helm issue and not an Octavia one.
Maybe start a new thread with "openstack-helm" in the subject line. Michael

On Tue, Jun 6, 2023 at 9:31 AM GOPAL RAJORA wrote: > > Hi Michael > Thank you for your update. > > So, I deployed the Octavia service by using the following 3 scripts. > > https://github.com/openstack/openstack-helm/blob/master/tools/deployment/developer/common/180-create-resource-for-octavia.sh > https://github.com/openstack/openstack-helm/blob/master/tools/deployment/developer/common/190-create-octavia-certs.sh > https://github.com/openstack/openstack-helm/blob/master/tools/deployment/developer/common/200-octavia.sh > > So, can't I use these scripts for the deployment in reality? > Looking forward to hearing from you. > > Thanks, > Gopal. > > > > On Tue, Jun 6, 2023 at 4:37 PM Michael Johnson wrote: >> >> I think you really meant to say your deployment tooling does not work >> properly. Octavia works just fine as many of us use it every day.
de 2023 ?s 06:35, Alvaro Soto > escreveu: > >> Hello Roberto, >> I'm not from Brazil (I'm based on Mexico) but as part of LATAM community, >> I'll love to be part in local projects :) >> >> I'll be at OIS, I'll be nice to talk about community challenges for our >> local community. >> >> Cheers. >> --- >> Alvaro Soto. >> >> Note: My work hours may not be your work hours. Please do not feel the >> need to respond during a time that is not convenient for you. >> ---------------------------------------------------------- >> Great people talk about ideas, >> ordinary people talk about things, >> small people talk... about other people. >> >> On Thu, Jun 1, 2023, 6:47 AM Iury Gregory wrote: >> >>> Hi Roberto, >>> >>> I know some Brazilians that will be attending the OIS Vancouver, >>> including me. >>> >>> Em qui., 1 de jun. de 2023 ?s 09:00, Roberto Bartzen Acosta < >>> roberto.acosta at luizalabs.com> escreveu: >>> >>>> Hello, >>>> >>>> Will anyone from the Brazilian community attend the OpenInfra in >>>> Vancouver? >>>> >>>> I would like to meet other members from Brazil and discuss the >>>> challenges and possibilities of using OpenStack in Brazilian >>>> infrastructures. You can ping me on IRC too (racosta). >>>> >>>> Kind regards, >>>> Roberto >>>> >>>> >>>> *?Esta mensagem ? direcionada apenas para os endere?os constantes no >>>> cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no >>>> cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa >>>> mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o >>>> imediatamente anuladas e proibidas?.* >>>> >>>> *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para >>>> assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o >>>> poder? aceitar a responsabilidade por quaisquer perdas ou danos causados >>>> por esse e-mail ou por seus anexos?.* >>>> >>> >>> >>> -- >>> *Att[]'s* >>> >>> *Iury Gregory Melo Ferreira * >>> *MSc in Computer Science at UFCG* >>> *Ironic PTL * >>> *Senior Software Engineer at Red Hat Brazil* >>> *Social*: https://www.linkedin.com/in/iurygregory >>> *E-mail: iurygregory at gmail.com * >>> >> > > *?Esta mensagem ? direcionada apenas para os endere?os constantes no > cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no > cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa > mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o > imediatamente anuladas e proibidas?.* > > *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para > assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o > poder? aceitar a responsabilidade por quaisquer perdas ou danos causados > por esse e-mail ou por seus anexos?.* > -- Alvaro Soto *Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you.* ---------------------------------------------------------- Great people talk about ideas, ordinary people talk about things, small people talk... about other people. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tony at bakeyournoodle.com Thu Jun 8 21:25:12 2023 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 8 Jun 2023 16:25:12 -0500 Subject: [ironic][barbican][qa][tripleo][nova][opendev] Next steps for Fedora jobs in OpenStack Message-ID: Hello all, Following on from "Future of Fedora and CentOS Stream Test Images"[1] discussion on service-discuss the opendev infra team have removed the unused Fedora content from the AFS mirrors. Now it's time to look at removing the Fedora-36 content from the mirrors. I'd like to, again, point out the Fedora 36 isn't the latest release and is infact EOL[4]. Removal of the Fedora 36 content will start with something like "[configure-mirrors] Allow per distribution disabling of mirrors"[2]. Once merged, and activated, this means that any/all Fedora based jobs will pull OS content from the Fedora project mirrors directly. There isn't an ETA but "soon" is a fair estimate. Any Fedora jobs will potentially run slower and will be more subject to network failures, this is also the situation with existing rocky linux jobs and there hasn't been any major problems[3]. Projects using Fedora are encouraged to switch to CentOS Stream or Rocky Linux. As at some stage the Fedora nodepool images may also need to go away. OpenStack users of Fedora (master) appear to be: * name: bifrost-integration-tinyipa-fedora-latest where: https://opendev.org/openstack/bifrost/src/branch/master/zuul.d/bifrost-jobs.yaml#L170 recent builds: https://zuul.openstack.org/builds?job_name=bifrost-integration-tinyipa-fedora-latest&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 * name: bifrost-integration-redfish-uefi-fedora-latest where: https://opendev.org/openstack/bifrost/src/branch/master/zuul.d/bifrost-jobs.yaml#L175 recent builds: https://zuul.openstack.org/builds?job_name=bifrost-integration-redfish-uefi-fedora-latest&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 * name: barbican-dogtag-tox-functional where: https://opendev.org/openstack/barbican/src/branch/master/.zuul.yaml#L16 recent builds: https://zuul.openstack.org/builds?job_name=barbican-dogtag-tox-functional&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 * name: devstack-plugin-ceph-tempest-fedora-latest where: https://opendev.org/openstack/devstack-plugin-ceph/src/branch/master/.zuul.yaml#L114 recent builds: https://zuul.openstack.org/builds?job_name=devstack-plugin-ceph-tempest-fedora-latest&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 * name: devstack-platform-fedora-latest where: https://opendev.org/openstack/devstack/src/branch/master/.zuul.yaml#L861 recent builds: https://zuul.openstack.org/builds?job_name=devstack-platform-fedora-latest&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 * name: devstack-platform-fedora-latest-virt-preview where: https://opendev.org/openstack/devstack/src/branch/master/.zuul.yaml#L868 recent builds: https://zuul.openstack.org/builds?job_name=devstack-platform-fedora-latest-virt-preview&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 Notes: This COPR has CentOS Stream packages * name: validations-libs-podified-podman where: https://opendev.org/openstack/validations-libs/src/branch/master/.zuul.yaml#L3 recent builds: https://zuul.openstack.org/builds?job_name=validations-libs-podified-podman&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 * name: novajoin-functional where: https://opendev.org/x/novajoin/src/branch/master/.zuul.yaml#L13 recent builds: 
https://zuul.openstack.org/builds?job_name=novajoin-functional&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 * name: bindep-fedora-latest where: https://opendev.org/opendev/bindep/src/branch/master/.zuul.yaml#L24 recent builds: https://zuul.openstack.org/builds?job_name=bindep-fedora-latest&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10

Looking at these jobs, they're all non-voting or disabled apart from: * validations-libs-podified-podman, which is part of tripleo and as such the future is hazy * bindep-fedora-latest; this job would seem to be a loss, however from a functional test POV Fedora and CentOS Stream are compatible

[1] https://lists.opendev.org/archives/list/service-discuss at lists.opendev.org/thread/IOYIYWGTZW3TM4TR2N47XY6X7EB2W2A6/ [2] https://review.opendev.org/c/zuul/zuul-jobs/+/884935?tab=change-view-tab-header-zuul-results-summary [3] This may be in part due to the scale/number of those jobs [4] https://docs.fedoraproject.org/en-US/releases/eol/ Yours Tony.

From gmann at ghanshyammann.com Thu Jun 8 21:55:26 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 08 Jun 2023 14:55:26 -0700 Subject: [ironic][barbican][qa][tripleo][nova][opendev] Next steps for Fedora jobs in OpenStack In-Reply-To: References: Message-ID: <1889d01e48d.104630a1d137040.6606090726740752467@ghanshyammann.com>

---- On Thu, 08 Jun 2023 14:25:12 -0700 Tony Breeds wrote --- > Hello all, > Following on from the "Future of Fedora and CentOS Stream Test > Images"[1] discussion on service-discuss, the opendev infra team have > removed the unused Fedora content from the AFS mirrors. Now it's time > to look at removing the Fedora-36 content from the mirrors. I'd like > to, again, point out that Fedora 36 isn't the latest release and is > in fact EOL[4]. Removal of the Fedora 36 content will start with > something like "[configure-mirrors] Allow per distribution disabling > of mirrors"[2]. Once merged, and activated, this means that any/all > Fedora based jobs will pull OS content from the Fedora project mirrors > directly. There isn't an ETA, but "soon" is a fair estimate. Any > Fedora jobs will potentially run slower and will be more subject to > network failures; this is also the situation with the existing Rocky Linux > jobs, and there haven't been any major problems[3]. Projects using > Fedora are encouraged to switch to CentOS Stream or Rocky Linux, as > at some stage the Fedora nodepool images may also need to go away.

Thanks, Tony, for sending this to the ML.

Update on this: the QA team has started removing support for the Fedora distro from Devstack[1], and we encourage projects to remove the jobs and nodeset usage.
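For most jobs this is a one-line nodeset change; roughly like the following sketch (the nodeset and job names here are made up, while centos-9-stream is the nodepool label opendev provides):

    - nodeset:
        name: my-project-centos-9-stream   # hypothetical name
        nodes:
          - name: controller
            label: centos-9-stream

    - job:
        name: my-project-functional        # hypothetical job
        nodeset: my-project-centos-9-stream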
Soon we will also remove the Fedora support code. [1] https://review.opendev.org/q/topic:drop-old-distros

-gmann

> > OpenStack users of Fedora (master) appear to be: > > * name: bifrost-integration-tinyipa-fedora-latest > where: https://opendev.org/openstack/bifrost/src/branch/master/zuul.d/bifrost-jobs.yaml#L170 > recent builds: > https://zuul.openstack.org/builds?job_name=bifrost-integration-tinyipa-fedora-latest&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 > * name: bifrost-integration-redfish-uefi-fedora-latest > where: https://opendev.org/openstack/bifrost/src/branch/master/zuul.d/bifrost-jobs.yaml#L175 > recent builds: > https://zuul.openstack.org/builds?job_name=bifrost-integration-redfish-uefi-fedora-latest&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 > * name: barbican-dogtag-tox-functional > where: https://opendev.org/openstack/barbican/src/branch/master/.zuul.yaml#L16 > recent builds: > https://zuul.openstack.org/builds?job_name=barbican-dogtag-tox-functional&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 > * name: devstack-plugin-ceph-tempest-fedora-latest > where: https://opendev.org/openstack/devstack-plugin-ceph/src/branch/master/.zuul.yaml#L114 > recent builds: > https://zuul.openstack.org/builds?job_name=devstack-plugin-ceph-tempest-fedora-latest&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 > * name: devstack-platform-fedora-latest > where: https://opendev.org/openstack/devstack/src/branch/master/.zuul.yaml#L861 > recent builds: > https://zuul.openstack.org/builds?job_name=devstack-platform-fedora-latest&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 > * name: devstack-platform-fedora-latest-virt-preview > where: https://opendev.org/openstack/devstack/src/branch/master/.zuul.yaml#L868 > recent builds: > https://zuul.openstack.org/builds?job_name=devstack-platform-fedora-latest-virt-preview&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 > Notes: This COPR has CentOS Stream packages > * name: validations-libs-podified-podman > where: https://opendev.org/openstack/validations-libs/src/branch/master/.zuul.yaml#L3 > recent builds: > https://zuul.openstack.org/builds?job_name=validations-libs-podified-podman&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 > * name: novajoin-functional > where: https://opendev.org/x/novajoin/src/branch/master/.zuul.yaml#L13 > recent builds: > https://zuul.openstack.org/builds?job_name=novajoin-functional&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 > * name: bindep-fedora-latest > where: https://opendev.org/opendev/bindep/src/branch/master/.zuul.yaml#L24 > recent builds: > https://zuul.openstack.org/builds?job_name=bindep-fedora-latest&branch=master&result=SUCCESS&result=FAILURE&skip=0&limit=10 > > Looking at these jobs, they're all non-voting or disabled apart from: > * validations-libs-podified-podman, which is part of > tripleo and as such the future is hazy > * bindep-fedora-latest; this job would seem to be a loss, however from > a functional test POV Fedora and CentOS Stream are compatible > > [1] https://lists.opendev.org/archives/list/service-discuss at lists.opendev.org/thread/IOYIYWGTZW3TM4TR2N47XY6X7EB2W2A6/ > [2] https://review.opendev.org/c/zuul/zuul-jobs/+/884935?tab=change-view-tab-header-zuul-results-summary > [3] This may be in part due to the scale/number of those jobs > [4] https://docs.fedoraproject.org/en-US/releases/eol/ > Yours Tony.
From mrunge at matthias-runge.de Fri Jun 9 05:59:02 2023
From: mrunge at matthias-runge.de (Matthias Runge)
Date: Fri, 9 Jun 2023 07:59:02 +0200
Subject: Monitoring
In-Reply-To:
References:
Message-ID:

On Thu, Jun 08, 2023 at 09:19:11AM -0300, Murilo Morais wrote:
> Good morning everybody!

Good morning,

> Guys, which monitoring stack do you recommend to monitor Host and VM
> resources? Or what monitoring strategies do they recommend?

This highly depends on your understanding of the term "monitoring".

To track resources used by OpenStack users/projects, I'd recommend ceilometer.

In addition to ceilometer, you'll probably need something to track resource usage on infrastructure nodes, e.g. memory usage, disk usage, I/O, etc. My recommendation would be to use something like collectd or node_exporter. Then you'd need a data store for the metrics; things like gnocchi or prometheus come to mind.

The choice of tools will probably be influenced by the available knowledge, and by how and where OpenStack is installed.

What is not covered above: tracking whether services are up and running, any kind of centralized logging, etc.

Matthias
--
Matthias Runge

From geguileo at redhat.com Fri Jun 9 08:25:21 2023
From: geguileo at redhat.com (Gorka Eguileor)
Date: Fri, 9 Jun 2023 10:25:21 +0200
Subject: Strange Cinder rpc timeouts after Xena upgrade
In-Reply-To: <956aff3e-920b-3807-5990-95e7d796df75@sdsc.edu>
References: <956aff3e-920b-3807-5990-95e7d796df75@sdsc.edu>
Message-ID: <20230609082521.yxgviq3j7xuto64g@localhost>

On 06/06, Colby Walsworth wrote:
> Hey Everyone,
>
> We just upgraded from Wallaby to Xena. We use Ceph as our volume/image
> backend. All instances from before the upgrade seem to be unable to be
> shelved/unshelved or live migrated (have not tried offline migration yet).
> Any new instances that are created work fine and all those tasks work fine
> on those. The problem seems to be with cinder:
>
> cinder.api.v3.attachments [req-2af34fab-a4b5-43f3-bd9a-94f360201533
> e28435e0a66740968c523e6376c57f68 179f701c810c425ab004548cc1f76bc9 - default
> default] Unable to create attachment for volume.:
> oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to
> message ID 8eea6a00568a477c89ced8db17f1179d
>
> the migrations and shelve/unshelve processes start on the hypervisor but
> eventually error out with the above message. Any new instances I create do
> not have this issue at all. I have done all the db upgrades so I don't
> believe this is a db record update issue.
>
> Any ideas of what might be causing this?
>
> Thanks,
>
> Colby

Hi,

Do you have DEBUG log levels for the request? It would help if we could have the logs from cinder-api and cinder-volume for the request.

I assume that the cinder-volume service doesn't appear as down in `cinder service-list`.

Cheers,
Gorka.

From ralonsoh at redhat.com Fri Jun 9 09:09:17 2023
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Fri, 9 Jun 2023 11:09:17 +0200
Subject: [neutron] Neutron drivers meeting cancelled
Message-ID:

Hello Neutrinos:

Due to the lack of agenda [1], today's meeting is cancelled. Please remember that all meetings will be cancelled next week too because of the Vancouver PTG.

Have a nice weekend!

[1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
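To make the node_exporter + prometheus combination from the monitoring thread above concrete, here is a minimal sketch; the hostnames are placeholders, and 9100 is node_exporter's default port:

    # prometheus.yml -- scrape node_exporter on each infrastructure node
    scrape_configs:
      - job_name: openstack-infra-nodes
        scrape_interval: 30s
        static_configs:
          - targets: ['controller01:9100', 'compute01:9100', 'compute02:9100']

(For the Cinder timeout thread, the DEBUG log level Gorka asks about is enabled with debug = True in the [DEFAULT] section of cinder.conf on the api and volume nodes.)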
URL: From arxcruz at redhat.com Fri Jun 9 10:36:15 2023 From: arxcruz at redhat.com (Arx Cruz) Date: Fri, 9 Jun 2023 12:36:15 +0200 Subject: [gate][tripleo] gate blocker In-Reply-To: References: Message-ID: Hello, This is clear now. Thanks, Arx Cruz On Wed, Jun 7, 2023 at 12:38?PM Ananya Banerjee wrote: > Hello, > > Jobs on Centos 9 deploying standalone or multinode are failing standalone > deploy or overcloud deploy with > > https://bugs.launchpad.net/tripleo/+bug/2023019 . > > Fixes are in progress, please hold rechecks for the moment if you hit this. > > Thanks, > Ananya > > -- > > Ananya Banerjee, RHCSA, RHCE-OSP > > Software Engineer > > Red Hat EMEA > > anbanerj at redhat.com > M: +491784949931 IM: frenzy_friday > @RedHat Red Hat > Red Hat > > > -- Arx Cruz Software Engineer Red Hat EMEA arxcruz at redhat.com @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Fri Jun 9 13:08:16 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Fri, 9 Jun 2023 18:38:16 +0530 Subject: [cinder] Spec Review Day 09 June In-Reply-To: References: Message-ID: Hello Argonauts, Festival of spec reviews is in less than one hour. I've created an etherpad and a meeting link for the same. Etherpad: https://etherpad.opendev.org/p/cinder-2023-2-bobcat-specs Meeting link: https://meet.google.com/juc-wgcw-kvo See you there! Thanks Rajat Dhasmana On Wed, Jun 7, 2023 at 9:06?PM Rajat Dhasmana wrote: > Hello Argonauts, > > Inspired from the nova team, we will be having our day of spec review. > As decided in today's cinder meeting, we will be having spec review day on > 09th June i.e. this Friday. > Following are the details: > > Date: 09 June, 2023 > Time: 1400-1500 UTC > Connection Information: Meeting link will be shared in the cinder channel > before the event > > Thanks > Rajat Dhasmana > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gopalrajora1995 at gmail.com Fri Jun 9 09:51:40 2023 From: gopalrajora1995 at gmail.com (GOPAL RAJORA) Date: Fri, 9 Jun 2023 11:51:40 +0200 Subject: Octavia does not work properly. In-Reply-To: References: Message-ID: Hi Michael Thank you for your kind update. So, do you mean there is a problem in the opensack-helm chart of GitHub in terms of Octavia deployment? Thanks. Gopal. On Thu, Jun 8, 2023 at 6:19?PM Michael Johnson wrote: > Hi Gopal, > > You will have to inquire with the openstack-helm deployment tooling > team. This is an openstack-helm issue and not an Octavia one. > > Maybe start a new thread with "openstack-helm" in the subject line. > > Michael > > On Tue, Jun 6, 2023 at 9:31?AM GOPAL RAJORA > wrote: > > > > Hi Michael > > Thank you for your update. > > > > So, I deployed the Octavia service by using following 3 scripts. > > > > > https://github.com/openstack/openstack-helm/blob/master/tools/deployment/developer/common/180-create-resource-for-octavia.sh > > > https://github.com/openstack/openstack-helm/blob/master/tools/deployment/developer/common/190-create-octavia-certs.sh > > > https://github.com/openstack/openstack-helm/blob/master/tools/deployment/developer/common/200-octavia.sh > > > > So, can't I use these scripts for the deployment in reality? > > Looking forward to hear from you. > > > > Thanks, > > Gopal. > > > > > > > > On Tue, Jun 6, 2023 at 4:37?PM Michael Johnson > wrote: > >> > >> I think you really meant to say your deployment tooling does not work > >> properly. Octavia works just fine as many of us use it every day. 
> >> > >> What deployment tool are you using? (there are something like 12 that > >> support deploying Octavia) > >> > >> I say this because Octavia does not change the configuration of > >> keystone nor nova, and rarely is a configuration change in neutron > >> needed. So if keystone and nova are having problems after deploying > >> Octavia, it is your deployment tooling that is having a problem. > >> > >> Michael > >> > >> On Tue, Jun 6, 2023 at 12:53?AM Gregory Thiemonge < > gthiemonge at redhat.com> wrote: > >> > > >> > Hi Gopal, > >> > > >> > What deployment method are you using? > >> > > >> > My first guess is that the HTTP server/proxy that handles the API > endpoints is broken, can you check the logs there? > >> > > >> > You mention that disabling the octavia health manager fixes the > issue. Do you mean that the other projects (like Nova) work fine when the > health manager is disabled but don't work when it is enabled? > >> > > >> > Greg > >> > > >> > > >> > On Mon, Jun 5, 2023 at 4:57?PM GOPAL RAJORA < > gopalrajora1995 at gmail.com> wrote: > >> >> > >> >> Hi everyone. > >> >> Hope you are well. > >> >> This is Gopal. > >> >> > >> >> I am contacting you as I need your help. > >> >> > >> >> So, recently I installed the Octavia service on the Openstack server. > >> >> After installation of the Octavia, several services such as Nova, > Neutron and Keystone started not to work and I can not access the horizon > page either. > >> >> > >> >> So, I have checked the logs, for the moment, keystone service does > not work after the Octavia installation. > >> >> > >> >> Unable to establish connection to > http://keystone-api.openstack.svc.cluster.local:5000/v3/auth/tokens: > ('Connection aborted.', RemoteDisconnected('Remote end closed connection > without response')) > >> >> > >> >> So, I tried to install the Octavia service several times, so if I > disable the health manager, it seems everything works properly, but as you > know, Octavia can not work properly if the heath manger is not installed. > >> >> > >> >> So, can you please help me regarding this problem? > >> >> > >> >> Thanks. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Fri Jun 9 14:12:06 2023 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 9 Jun 2023 10:12:06 -0400 Subject: [kolla] Image building process getting stuck Message-ID: Folks, I am trying to build images using 2023.1 kolla tag but every time it's getting stuck here and there and I have to ctrl+c and re-run the build. It's very frustrating. How do CI jobs build images without getting stuck? Because of that I have to build images role-by-role instead of all images in a single shot. Example, This is stuck here since 24 hours and not moving at all : https://paste.opendev.org/show/bf1rRhg9V3BCy4KmHPRg/ -------------- next part -------------- An HTML attachment was scrubbed... 
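A way to narrow down where a build like this is wedged -- a sketch, assuming the Docker engine backend that kolla-build reports as "Using engine: docker":

    # is an image's RUN step still executing, or is the build itself hung?
    docker ps --filter status=running
    # tail the output of the intermediate build container
    docker logs --tail 50 <container_id>

If the hang is in the push phase rather than in a RUN step, the registry side is the place to look.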
From elod.illes at est.tech Fri Jun 9 17:37:54 2023
From: elod.illes at est.tech (=?utf-8?B?RWzDtWQgSWxsw6lz?=)
Date: Fri, 9 Jun 2023 17:37:54 +0000
Subject: [cinder][all][tc][ops][stable] EOL EM branches
In-Reply-To: <1889ba771fa.dc5178a4122684.3942458206927791638@ghanshyammann.com>
References: <20230606173231.jrbuwdx2lupt7t7r@yuggoth.org> <18892247997.12b624f8c128041.5749054378410411347@ghanshyammann.com> <20230606194842.hegozazpxgi4ibye@yuggoth.org> <18896dbc4d7.deb1706c39430.148593531405490350@ghanshyammann.com> <18896f7b35c.d76eef8a40628.8327667355238766257@ghanshyammann.com> <1889ba771fa.dc5178a4122684.3942458206927791638@ghanshyammann.com>
Message-ID:

Hi,

Thanks for starting this thread. As a stable maintainer, let me also share my thoughts:

- It is really sad to see that important CVE fixes haven't arrived in old stable branches; that is clearly a sign that stable maintenance is not in good shape on EM branches.
- I also see that stable maintenance is not in good shape on 'maintained' branches either (2023.1, zed and yoga) for most of the projects, so that is also a visible problem.
- So I understand that teams want to focus more on their maintained branches.

Still, I don't feel that eliminating the 'failed experiment' with the Extended Maintenance process would solve our issue. Though I agree that if we now EOL all our EM branches, that would really call some attention from vendors, operators, etc. (Would it help? Would companies step up to spend more resources on upstream maintenance? Good question.)

Now let me also share some general thoughts about the topics people brought up in this thread (please skip those which do not interest you o:)):

1) who should maintain EM branches?

Well, it was stressed several times that maintainers can be different from the project core team. How I understood this is not that there is a 'stable maintainer' group for 'maintained' branches and a completely different 'extended maintainer' group for branches in 'extended maintenance', but rather that it's not necessarily the *same* core team that develops the master branch. A trivial way of working is that maintainer A from company X is interested back to, let's say, the Wallaby branch, so those are the branches they maintain, where they primarily backport bug fixes, review backports, and fix CI; maintainer B from company Y then does the same but down to the Xena branch; etc. (Of course, the best is if they can help each other out from time to time.) I know this is quite idealistic... I believe that maintainers have an *employer* with an interest in keeping branch XY as maintained as possible, and they act accordingly as much as possible (idealistic, too).

2) what tests to keep?

I understand that keeping old CI jobs functional is cumbersome and resource-hungry. On the other hand, *every* vendor and company benefits from this, as downstream CI is usually not as good as the upstream one OR very expensive (resources, maintenance, fixing the uncaught bugs, etc). I also tend to agree with the opinion that only keeping unit tests and functional tests doesn't give enough confidence in quality. Devstack-based tempest tests need to be kept, though I agree that teams can somewhat reduce their job count, maybe by rationalising and dropping somewhat redundant, expensive, time- and resource-consuming test jobs (like the different kinds of grenade jobs, non-voting jobs, special-case jobs, etc), at least when they start to fail and there's no one to fix them.

3) are EM branches 'fully maintained'?
The original idea was that it isn't. EM just means that maintainers can propose backports and can review them. It doesn't mean that every bugfix will be backported and reviewed (unfortunately the same can be seen in 'maintained' branches as well). Though I also share the views of those, who say that the fact that important CVE fixes don't arrive to even "younger" EM branches is a signal that those branches are doomed and probably should be EOL'd. (Though some of you noted that the recent CVEs have cross project dependencies, thus not that trivial to backport them.) 4) real count of EM branches According to releases.openstack.org, it's 7, yes. In reality it's rather 5 or less. For some projects it's even just 2 or 3. Because it can be different for every project. Yes, we could have EOL'd waaay earlier rocky (for every project) and stein branches (rocky is like 98% EOL'd, waiting for only some of the PTLs to approve their project's EOL transition patch; Stein is really rotten, but will be next as soon as rocky is done; but for example stable/train gate for Nova, Neutron, etc, is not blocked, tests can pass, even though there are not many patches that get merged). 5) is EM a 'failed experiment'? Somewhat yes, somewhat no. There have been many tested bug fixes that landed on those branches over the years, so companies could have benefit from them. But of course, it's still not equal to consider those branches as 'maintained', so yes that could rise some misconception. And as you also said, not getting CVE fixes landed shows that those branches are far from being 'maintained'. Anyway, personally, I would not end this 'experiment' (as I'm probably too optimistic :)), but I see that stable maintenance is a problem. I'm curious about where this thread will lead and happy to see that this got a forum topic on the Vancouver PTG schedule (thanks Tony! :)). Unfortunately, I cannot participate as I could not travel there, but I'm eagerly waiting for the thoughts (and maybe resolution) of the in-person discussion! Thanks for your time reading this long monologue o:) Cheers, El?d Ill?s irc: elodilles @ #openstack-stable / #openstack-release From: Ghanshyam Mann Sent: Thursday, June 8, 2023 5:37 PM To: Brian Rosmaita Cc: openstack-discuss Subject: Re: [cinder][all][tc][ops][stable] EOL EM branches ? ---- On Thu, 08 Jun 2023 06:39:28 -0700? Brian Rosmaita? wrote --- ?> On 6/7/23 1:46 PM, Ghanshyam Mann wrote: ?> >?? ---- On Wed, 07 Jun 2023 10:38:03 -0700? Jay Faulkner? wrote --- ?> >?? > ?> >?? > On Wed, Jun 7, 2023 at 10:23?AM Ghanshyam Mann gmann at ghanshyammann.com> wrote: ?> >?? > ?> >?? > ?---- On Tue, 06 Jun 2023 12:48:43 -0700? Jeremy Stanley? wrote --- ?> >?? > ?> On 2023-06-06 12:17:23 -0700 (-0700), Ghanshyam Mann wrote: ?> [snip] ?> >?? > I agree. If I see the main overhead in EM maintenance is keeping testing green. ?> >?? > it is not easy to keep 11 branches (including Em, supported stable and master) ?> >?? > testing up to date. My point is if we remove all the integration testing (can keep pep8 ?> >?? > and unit tests) at the time the branch move to EM will solve the problem that the upstream ?> >?? > community faces to maintain EM branches. ?> >?? > ?> >?? > ?> >?? > This, IMO, is akin to retiring the branches. How could I, as a developer, patch an older version of a branch against a vulnerability of the style of the recent Cinder one, where the impact is felt cross-project, and you clearly need a working dev environment (such as devstack). ?> >?? 
> If, as you propose, we stopped doing any integration testing on branches older than 18 months, we would be de-facto retiring the integration testing infrastructure, which shares a huge amount of DNA with our dev tooling infrastructure. ?> > ?> > It is not the same as retiring but if we see it can still run unit/functional tests and changes have been ?> > tested till supported and stable so we did testing of those fixes at some level. And there cannot be the ?> > case where I apply the fix directly to the EM branch. ?> ?> I agree with Jay on this.? IMO, what keeps devstack functional in the EM ?> branches is that it's needed to run tempest tests.? If we rely on ?> unit/functional tests only, that motivation goes away. ?> ?> Further, as Jay points out, a working OpenStack deployment requires a ?> harmonization of multiple components beyond the individual projects' ?> unit/functional tests.? For example, this (invalid) bug: ?>??? https://bugs.launchpad.net/cinder/+bug/2020382 ?> was reported after backporting a patch that had gone through the normal ?> backport process upstream from master through stable/xena without ?> skipping any branches; the xena patch applied cleanly and I'm pretty ?> sure it passed unit and functional tests (I didn't run them myself). ?> The issue did not occur until the code was actually used by cinder ?> interacting with a real nova. ?> ?> So relying on unit and functional tests only is not adequate.? When I ?> approve a backport, I'm supposed to be fairly confident that the change ?> is low-risk and will not cause regressions.? Clean tempest jobs give me ?> some useful evidence when making that assessment.? A patch that passes ?> CI is not guaranteed to be OK, but if it causes a CI failure, we know ?> it's not OK. ?> ?> > In our current doc also, we have the minimum testing expectation and I am just saying to reduce the testing ?> > at the time branch moved to EM instead of waiting for the gate to break and getting frustrated while backporting. ?> > ?> > EM as we meant since starting it not upstream maintained/guaranteed things so leaving testing expectation at ?> > downstream is no bug change than what current policy is. ?> ?> That's correct, but as I think has been mentioned elsewhere in this ?> thread, this has not proved to be workable.? The stable cores on the ?> project teams take their work seriously, and even though the docs say ?> that EM branches should be treated caveat emptor, we still feel that our ?> approval should mean something.? So even though the docs say there's no ?> guarantee on EM branches, nobody wants to have their name show up as ?> approving a patch that caused a regression, even in an EM branch. ?> ?> Further (and I don't think I'm speaking only for myself here), I don't ?> like the idea of other people merging unvetted stuff into our codebase. ?> But that hasn't become an issue, because as Jeremy pointed out earlier, ?> no one outside the project teams has showed up to take ownership of EM ?> branches.? Though (to get back to my real point here) if such people did ?> show up, I would expect them to also maintain the full CI, including ?> tempest integration tests, for the reasons I mentioned earlier.? So I'm ?> against the idea unit/functional testing is adequate for EM branches. I do not disagree with Jay and you on more and more testing, but I am saying reducing testing (which is what the original idea was) is one of the tradeoffs between keeping extended branches available for fixes for a long time and upstream maintenance costs.? 
We are clearly at the stage where the upstream community cannot maintain them with proper testing. Either we have to remove the idea of EM or try any new idea that can add more cost in upstream maintenance. I still do not find it very odd that we do not guarantee the EM backport fixes testing but at the same time make sure they are tested all the way from master to supported stable branches backporting. Leave the complete testing to the downstream consumers to test properly before applying the fixes. ?> ?> > ?> > -gmann ?> ?> (By the way, I am not implying that gmann is in favor of poor QA.? He's ?> articulated clearly what the current docs say about EM branches.? But ?> he's also been heroically responsible for keeping a lot of the EM ?> integration gates functional!) Apart from maintaining, pinning tempest/plugins version also takes a lot of time. Now I am starting the pinning tempest/plugins for recent EM stable/xena and it requires a large amount of time to test/pin the compatible version of tempest and plugins on stable/xena. -gmann ?> ?> >?? > ?> >?? > I don't know what the answer is; but this as a middle ground seems like the worst of all worlds: the branches still exist, and we will not have the tools to (manually, not just CI) test meaningful changes on them. ?> >?? > Just a thought! ?> >?? > -Jay FaulknerIronic PTL ?> > ?> ?> ?> From alsotoes at gmail.com Fri Jun 9 19:02:44 2023 From: alsotoes at gmail.com (Alvaro Soto) Date: Fri, 9 Jun 2023 13:02:44 -0600 Subject: Community attendance - OpenInfra Summit Vancouver In-Reply-To: References: Message-ID: I've posted this for LATAM community in case you want to use it as a communication channel =) Feel free to invite others. https://twitter.com/OpenInfraCDMX/status/1667005673958146050 ~~ Hola Comunidad!!!! si alguien va a @OpenInfraSummit en Vancouver al siguiente semana. No duden en conectarse al slack de la comunidad y entrar al canal #openinfra -summit para que nos juntemos =) Nos vemos en Vancouver!!!!! @openinfradev https://slack.openinfra.mx ~~ Kind regards, Alvaro Soto. On Thu, Jun 8, 2023 at 11:13?AM Alvaro Soto wrote: > Awesome! Thank you so much! > > On Wed, Jun 7, 2023 at 5:46?AM Roberto Bartzen Acosta < > roberto.acosta at luizalabs.com> wrote: > >> Hi Alvaro, >> >> nice! of course we can meet there! all are welcome :) >> We are trying to schedule something on Tuesday, I already sent messages >> to Iury and Carlos. >> >> Cheers, >> Roberto >> >> Em ter., 6 de jun. de 2023 ?s 06:35, Alvaro Soto >> escreveu: >> >>> Hello Roberto, >>> I'm not from Brazil (I'm based on Mexico) but as part of LATAM >>> community, I'll love to be part in local projects :) >>> >>> I'll be at OIS, I'll be nice to talk about community challenges for our >>> local community. >>> >>> Cheers. >>> --- >>> Alvaro Soto. >>> >>> Note: My work hours may not be your work hours. Please do not feel the >>> need to respond during a time that is not convenient for you. >>> ---------------------------------------------------------- >>> Great people talk about ideas, >>> ordinary people talk about things, >>> small people talk... about other people. >>> >>> On Thu, Jun 1, 2023, 6:47 AM Iury Gregory wrote: >>> >>>> Hi Roberto, >>>> >>>> I know some Brazilians that will be attending the OIS Vancouver, >>>> including me. >>>> >>>> Em qui., 1 de jun. de 2023 ?s 09:00, Roberto Bartzen Acosta < >>>> roberto.acosta at luizalabs.com> escreveu: >>>> >>>>> Hello, >>>>> >>>>> Will anyone from the Brazilian community attend the OpenInfra in >>>>> Vancouver? 
>>>>> >>>>> I would like to meet other members from Brazil and discuss the >>>>> challenges and possibilities of using OpenStack in Brazilian >>>>> infrastructures. You can ping me on IRC too (racosta). >>>>> >>>>> Kind regards, >>>>> Roberto >>>>> >>>>> >>>>> *?Esta mensagem ? direcionada apenas para os endere?os constantes no >>>>> cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no >>>>> cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa >>>>> mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o >>>>> imediatamente anuladas e proibidas?.* >>>>> >>>>> *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para >>>>> assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o >>>>> poder? aceitar a responsabilidade por quaisquer perdas ou danos causados >>>>> por esse e-mail ou por seus anexos?.* >>>>> >>>> >>>> >>>> -- >>>> *Att[]'s* >>>> >>>> *Iury Gregory Melo Ferreira * >>>> *MSc in Computer Science at UFCG* >>>> *Ironic PTL * >>>> *Senior Software Engineer at Red Hat Brazil* >>>> *Social*: https://www.linkedin.com/in/iurygregory >>>> *E-mail: iurygregory at gmail.com * >>>> >>> >> >> *?Esta mensagem ? direcionada apenas para os endere?os constantes no >> cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no >> cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa >> mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o >> imediatamente anuladas e proibidas?.* >> >> *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para >> assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o >> poder? aceitar a responsabilidade por quaisquer perdas ou danos causados >> por esse e-mail ou por seus anexos?.* >> > > > -- > > Alvaro Soto > > *Note: My work hours may not be your work hours. Please do not feel the > need to respond during a time that is not convenient for you.* > ---------------------------------------------------------- > Great people talk about ideas, > ordinary people talk about things, > small people talk... about other people. > -- Alvaro Soto *Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you.* ---------------------------------------------------------- Great people talk about ideas, ordinary people talk about things, small people talk... about other people. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Fri Jun 9 19:23:13 2023 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 9 Jun 2023 15:23:13 -0400 Subject: [cinder][ptg] cinder PTG schedule Message-ID: <4fbc4c03-7c22-c33f-14bc-d328ff0a1a2f@gmail.com> Hello Argonauts, We're going to have a small presence at the PTG next week, so we've only scheduled 3 sessions: Wednesday 14 June 10:20-10:50 Support for NVMe-OF in os-brick 15:50-16:20 Cinder Operator Half-Hour Thursday 15 June 16:40-17:10 Open Discussion with the Cinder project team All times are Vancouver time (UTC-7) and all will take place at Table #13 in the PTG area. 
Here's the Cinder PTG etherpad: https://etherpad.opendev.org/p/vancouver-ptg-june-2023

cheers,
brian

From rosmaita.fossdev at gmail.com Fri Jun 9 19:23:20 2023
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Fri, 9 Jun 2023 15:23:20 -0400
Subject: [cinder][ops] cinder operator events at the summit/PTG
Message-ID: <26f6a916-395b-873a-e25e-685e0f05c1c2@gmail.com>

Hello operators,

Just want to alert you about some opportunities to give feedback to the Cinder project team while you're at the summit or PTG:

Wednesday 14 June, 11:40-12:10
Forum session: Cinder, the OpenStack Block Storage service ... how are we doing?
(Looking for feedback from operators, vendors, and end-users)
Vancouver Convention Centre - Room 9

Wednesday 14 June, 15:50-16:20
PTG session: Operator Half-Hour
(Focus on operator feedback)
Table #13 in the PTG area

Thursday 15 June, 16:40-17:10
PTG session: Open Discussion with the Cinder project team
(Focus on developers, but willing to discuss anything, really)
Table #13 in the PTG area

From rosmaita.fossdev at gmail.com Fri Jun 9 19:23:33 2023
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Fri, 9 Jun 2023 15:23:33 -0400
Subject: [glance][ops][ptg] glance operator event at the summit/PTG
Message-ID:

Hello operators,

Come by and meet members of the Glance project team and give us feedback, unusual use cases, feature requests, etc.

Wednesday 14 June 15:10-15:40 (UTC-7)
Table #13 in the PTG area
etherpad: https://etherpad.opendev.org/p/vancouver-june2023-glance

From iurygregory at gmail.com Fri Jun 9 19:33:22 2023
From: iurygregory at gmail.com (Iury Gregory)
Date: Fri, 9 Jun 2023 16:33:22 -0300
Subject: [ironic][ops] Forum Session at OIS Vancouver 2023
Message-ID:

Hello everyone,

I would like to invite everyone interested in ironic to join our Forum Session next week during the Open Infrastructure Summit in Vancouver!

The forum session *Ironic: Our Future starts Now!* will be held on *Wed, June 14, 10:20am - 10:50am | Vancouver Convention Centre - Room 9*

This is a chance to provide feedback on our work, see you all next week in Vancouver!

--
*Att[]'s*
*Iury Gregory Melo Ferreira*
*MSc in Computer Science at UFCG*
*Ironic PTL*
*Senior Software Engineer at Red Hat Brazil*
*Social*: https://www.linkedin.com/in/iurygregory
*E-mail: iurygregory at gmail.com*

From tobias.urdin at binero.com Fri Jun 9 20:47:02 2023
From: tobias.urdin at binero.com (Tobias Urdin)
Date: Fri, 9 Jun 2023 20:47:02 +0000
Subject: [puppet] Sessions in June 2023 PTG
In-Reply-To:
References:
Message-ID: <934D3198-BEAC-42BC-964C-AFC5E13606CB@binero.com>

Hello Takashi,

Thanks for arranging all this, I will be there! Looking forward to meeting you in person.

Best regards
Tobias

On 8 Jun 2023, at 05:45, Takashi Kajinami wrote:

Hello,

I'm attending the upcoming OpenInfra Summit and PTG at Vancouver so would like to moderate some puppet sessions in PTG. I've reserved slots from 14:30 to 16:20 on Wednesday. However we can be flexible so please let me know if you are interested but have any conflicts.

https://etherpad.opendev.org/p/vancouver-june2023-puppet-openstack

I've added a few topics including the recent discussion about module modernization. However it'd be nice if we can have a few more topics, especially any feedback from users or (potential) new contributors. Please add your name and topics if you are planning to join us.
Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Fri Jun 9 21:08:41 2023 From: tobias.urdin at binero.com (Tobias Urdin) Date: Fri, 9 Jun 2023 21:08:41 +0000 Subject: [oslo][largescale-sig] HTTP base direct RPC oslo.messaging driver contribution In-Reply-To: References: , Message-ID: Hello, Interesting! Looking forward to reading the spec and potential code for this. Best regards Tobias On 7 Jun 2023, at 10:41, Herve Beraud wrote: ? Le mer. 7 juin 2023 ? 00:31, Jay Faulkner > a ?crit : I'm interested in this as well, please add me to the spec if you need additional brains :). I'll also be at the summit if you'd like to discuss any of it in person. -- Jay Faulkner Ironic PTL On Tue, Jun 6, 2023 at 3:14?PM Julia Kreger > wrote: Jumping in because the thread has been rather reminiscent of the json-rpc messaging feature ironic carries so our users don't have to run with rabbit. I suspect Ironic might be happy to propose it to oslo.messaging if this http driver is acceptable. Indeed, it could be interesting, thanks Julia. Please feel free to add me as a reviewer on the spec. -Julia On Tue, Jun 6, 2023 at 2:10?PM Masahito Muroi > wrote: Hi, Thank you everyone for the kindly reply. I got the PTG situation. Submitting the spec seems to be a nice first step. We don't have public repository of the driver because of internal repository structure reason. The repository is really stick to the current internal repository structure now. Cleaning up repository would take time so that we didn't do the extra tasks. best regards. Masahito -----Original Message----- From: "Takashi Kajinami"> To: "Masahito Muroi">; Cc: >; "Herve Beraud">; Sent: 2023/06/06(?) 18:50 (GMT+09:00) Subject: Re: [oslo] HTTP base direct RPC oslo.messaging driver contribution Hello, This is very interesting and I agree having the spec would be the good way to move this forward. We have not requested oslo sessions in the upcoming PTG but Stephen and I are attending it so will be available for the discussion. Because some other cores such as Herve won't be there, we'd need to continue further discussions after PTG in spec review, but if that early in-person discussion sounds helpful for you then I'll reserve a table. Thank you, Takashi On Tue, Jun 6, 2023 at 4:48 PM Herve Beraud > wrote: Hello, Indeed, Oslo doesn't have PTG sessions. Best regards Le lun. 5 juin 2023 ? 10:42, Masahito Muroi > a ?crit : Hello Herve, Thank you for the quick replying. Let us prepare the spec and submit it. btw, does olso team have PTG in the up-comming summit? We'd like to get a quick feedback of the spec if time is allowed in the PTG. But it looks like oslo team won't have PTG there. best regards, Masahito -----Original Message----- From: "Herve Beraud"> To: "????">; Cc: >; Sent: 2023/06/05(?) 17:21 (GMT+09:00) Subject: Re: [oslo] HTTP base direct RPC oslo.messaging driver contribution Hello Masahito, Submission to oslo-spec is a good starting point. Best regards Le lun. 5 juin 2023 ? 10:04, ???? > a ?crit : Hi oslo team, We'd like to contribute HTTP base direct RPC driver to the oslo.messaging community. We have developed the HTTP base driver internally. We have been using the driver in the production with over 10K hypervisors now. I checked the IRC meeting log of the oslo team[1], but there is no regluar meeting in 2023. Is it okay to submit oslo-spec[2] to propose the driver directly, or is there another good place to discuss the feature before submitting a spec? 1. 
https://meetings.opendev.org/#Oslo_Team_Meeting 2. https://opendev.org/openstack/oslo-specs best regards, Masahito -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ [https://ack.mail.navercorp.com/readReceipt/notify/?img=srYmFoKqaAblMrMYaqumK6EqaxE%2FMqkCaxUqMotdFzKwFxCoF4ivK6F0M4igMX%2B0Mogw74lRpzM5W4C5bX0q%2BzkR74FTWx%2FsWXI0WNFdM6FO763GbrF9bXFgWz0q%2BHK5WXI0WNFdM6FO74eZpm%3D%3D.gif] -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Jun 9 21:36:09 2023 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 9 Jun 2023 23:36:09 +0200 Subject: [release] Release countdown for week R-16, June 12-16 Message-ID: <8f81ce84-71b8-1234-bae6-100fbe7b9bb5@openstack.org> Development Focus ----------------- The bobcat-2 milestone will happen in next month, on July 6, 2023. 2023.2 Bobcat-related specs should now be finalized so that teams can move to implementation ASAP. Some teams observe specific deadlines on the second milestone (mostly spec freezes): please refer to https://releases.openstack.org/bobcat/schedule.html for details. General Information ------------------- Please remember that libraries need to be released at least once per milestone period. At milestone 2, the release team will propose releases for any library that has not been otherwise released since milestone 1. Other non-library deliverables that follow the cycle-with-intermediary release model should have an intermediary release before milestone-2. Those who haven't will be proposed to switch to the cycle-with-rc model, which is more suited to deliverables that are released only once per cycle. At milestone-2 we also freeze the contents of the final release. If you have a new deliverable that should be included in the final release, you should make sure it has a deliverable file in: https://opendev.org/openstack/releases/src/branch/master/deliverables/bobcat You should request a beta release (or intermediary release) for those new deliverables by milestone-2. We understand some may not be quite ready for a full release yet, but if you have something minimally viable to get released it would be good to do a 0.x release to exercise the release tooling for your deliverables. See the MembershipFreeze description for more details: https://releases.openstack.org/bobcat/schedule.html#b-mf Finally, now may be a good time for teams to check on any stable releases that need to be done for your deliverables. If you have bugfixes that have been backported, but no stable release getting those. If you are unsure what is out there committed but not released, in the openstack/releases repo, running the command "tools/list_stable_unreleased_changes.sh " gives a nice report. Upcoming Deadlines & Dates -------------------------- OpenInfra Summit in Vancouver: From June 13 to June 15, 2023 Bobcat-2 Milestone: July 6, 2023 -- Thierry Carrez (ttx) From vrook at wikimedia.org Fri Jun 9 23:58:32 2023 From: vrook at wikimedia.org (Vivian Rook) Date: Fri, 9 Jun 2023 19:58:32 -0400 Subject: [magnum] kubectl loses access after 31 days In-Reply-To: References: Message-ID: It would appear that this is due to podman logs getting large. I've got one now that is about 7G and growing. 
I see https://opendev.org/openstack/magnum/commit/9d543960d2827ede5be4f851b1cb62c986981f32 was included a few years ago that should limit to 50M, perhaps this is not working as expected in more recent times? Or are there any settings that this needs to limit logs that I might not have set? Thank you! On Tue, Jun 6, 2023 at 3:11?PM Vivian Rook wrote: > After 31 days I lose kubectl access to magnum clusters. This has happened > consistently for any cluster that I have deployed. The clusters run just > fine, though around 31 days of operation kubectl cannot connect, and the > web service shows the service as down (Though the web service on the > cluster is responding enough to say that nothing is working, so the cluster > has not completely crashed) > > All kubectl commands have a long pause (about 10 minutes) then gives > errors like: > > Error from server (Timeout): the server was unable to return a response in > the time allotted, but may still be processing the request (get > deployments.apps) > Unable to connect to the server: stream error: stream ID 11; > INTERNAL_ERROR; received from peer > > I have a little more information in > https://phabricator.wikimedia.org/T336586 > It feels like a cert is expiring as it always seems to happen right about > 31 days after deployment. Does magnum have some kind of certificate like > that? I checked the kubectl certs, they were set to be fine for years, so I > don't think it is them unless I didn't check them correctly (Let's not > discount that possibility, I totally could have read the wrong bit of the > cert). > > I can still generate a new kubectl config file with > openstack coe cluster config > > Though the resulting configuration will have the same issue as the > original config (long pause, then timeout errors). I have also tried to run: > openstack coe ca rotate > > Which is accepted and seems to run fine, but after that point if I > regenerate a kubeconfig file as above I get new errors when running kubectl: > Unable to connect to the server: x509: certificate signed by unknown > authority (possibly because of "crypto/rsa: verification error" while > trying to verify candidate authority certificate "") > > If the key rotation would work, and I'm not doing it correctly, I would be > delighted to hear how to run it correctly. Though ideally I would like to > find where the original key is failing, and if it is an expiration, how to > set it to a longer time. > > Thank you! > -- > > *Vivian Rook (They/Them)* > Site Reliability Engineer > Wikimedia Foundation > -- *Vivian Rook (They/Them)* Site Reliability Engineer Wikimedia Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From masahito.muroi at linecorp.com Sat Jun 10 09:14:16 2023 From: masahito.muroi at linecorp.com (=?utf-8?B?TWFzYWhpdG8gTXVyb2k=?=) Date: Sat, 10 Jun 2023 18:14:16 +0900 Subject: =?utf-8?B?UmU6IFtvc2xvXVtsYXJnZXNjYWxlLXNpZ10gSFRUUCBiYXNlIGRpcmVjdCBSUEMgb3Nsbw==?= =?utf-8?B?Lm1lc3NhZ2luZyBkcml2ZXIgY29udHJpYnV0aW9u?= In-Reply-To: References: Message-ID: <926bd380c381258c1885782ee5735d@cweb02.nmdf.nhnsystem.com> Hi all, We have pushed the spec. Please feel free to review it. https://review.opendev.org/c/openstack/oslo-specs/+/885809 best regards, Masahito -----Original Message----- From: "Jay Faulkner" To: "Julia Kreger"; Cc: "Masahito Muroi"; "Takashi Kajinami"; ; "Herve Beraud"; "Arnaud Morin"; Sent: 2023/06/07(?) 
07:31 (GMT+09:00) Subject: Re: [oslo][largescale-sig] HTTP base direct RPC oslo.messaging driver contribution I'm interested in this as well, please add me to the spec if you need additional brains :). I'll also be at the summit if you'd like to discuss any of it in person. -- Jay Faulkner Ironic PTL On Tue, Jun 6, 2023 at 3:14 PM Julia Kreger wrote: Jumping in because the thread has been rather reminiscent of the json-rpc messaging feature ironic carries so our users don't have to run with rabbit. I suspect Ironic might be happy to propose it to oslo.messaging if this http driver is acceptable. Please feel free to add me as a reviewer on the spec. -Julia On Tue, Jun 6, 2023 at 2:10 PM Masahito Muroi wrote: Hi, Thank you everyone for the kindly reply. I got the PTG situation. Submitting the spec seems to be a nice first step. We don't have public repository of the driver because of internal repository structure reason. The repository is really stick to the current internal repository structure now. Cleaning up repository would take time so that we didn't do the extra tasks. best regards. Masahito -----Original Message----- From: "Takashi Kajinami" To: "Masahito Muroi"; Cc: ; "Herve Beraud"; Sent: 2023/06/06(?) 18:50 (GMT+09:00) Subject: Re: [oslo] HTTP base direct RPC oslo.messaging driver contribution Hello, This is very interesting and I agree having the spec would be the good way to move this forward. We have not requested oslo sessions in the upcoming PTG but Stephen and I are attending it so will be available for the discussion. Because some other cores such as Herve won't be there, we'd need to continue further discussions after PTG in spec review, but if that early in-person discussion sounds helpful for you then I'll reserve a table. Thank you, Takashi On Tue, Jun 6, 2023 at 4:48 PM Herve Beraud wrote: Hello, Indeed, Oslo doesn't have PTG sessions. Best regards Le lun. 5 juin 2023 ? 10:42, Masahito Muroi a ?crit : Hello Herve, Thank you for the quick replying. Let us prepare the spec and submit it. btw, does olso team have PTG in the up-comming summit? We'd like to get a quick feedback of the spec if time is allowed in the PTG. But it looks like oslo team won't have PTG there. best regards, Masahito -----Original Message----- From: "Herve Beraud" To: "????"; Cc: ; Sent: 2023/06/05(?) 17:21 (GMT+09:00) Subject: Re: [oslo] HTTP base direct RPC oslo.messaging driver contribution Hello Masahito, Submission to oslo-spec is a good starting point. Best regards Le lun. 5 juin 2023 ? 10:04, ???? a ?crit : Hi oslo team, We'd like to contribute HTTP base direct RPC driver to the oslo.messaging community. We have developed the HTTP base driver internally. We have been using the driver in the production with over 10K hypervisors now. I checked the IRC meeting log of the oslo team[1], but there is no regluar meeting in 2023. Is it okay to submit oslo-spec[2] to propose the driver directly, or is there another good place to discuss the feature before submitting a spec? 1. https://meetings.opendev.org/#Oslo_Team_Meeting 2. https://opendev.org/openstack/oslo-specs best regards, Masahito -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ -------------- next part -------------- An HTML attachment was scrubbed... 
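For context on how such a driver would plug in: oslo.messaging picks its driver from the scheme of transport_url, so an HTTP driver would presumably be enabled the same way. A sketch -- the http:// scheme below is purely hypothetical pending the spec, not an existing option:

    # any service using oslo.messaging, e.g. /etc/nova/nova.conf
    [DEFAULT]
    transport_url = rabbit://user:pass@rabbit-host:5672/   # today's rabbit driver
    #transport_url = http://rpc-endpoint:8080/             # hypothetical HTTP driver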
URL: From satish.txt at gmail.com Sun Jun 11 02:16:35 2023 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 10 Jun 2023 22:16:35 -0400 Subject: [kolla-ansible][ovn][ovs] connection dropped and inactivity errors Message-ID: Folks, I am getting some strange errors on my kolla based OVN deployment. I have only 5 nodes so it's not a large deployment. Are there any ovn related tuning options which I missed ? I have the following timers configured at present. ovn-openflow-probe-interval="60" ovn-remote-probe-interval="60000" root at ctrl1:~# tail -f /var/log/kolla/openvswitch/ov*.log ==> /var/log/kolla/openvswitch/ovn-controller.log <== 2023-06-09T06:45:47.615Z|00091|lflow_cache|INFO|Detected cache inactivity (last active 30001 ms ago): trimming cache 2023-06-09T07:11:12.908Z|00092|lflow_cache|INFO|Detected cache inactivity (last active 30002 ms ago): trimming cache 2023-06-09T07:12:21.225Z|00093|lflow_cache|INFO|Detected cache inactivity (last active 30001 ms ago): trimming cache 2023-06-10T19:13:10.382Z|00094|lflow_cache|INFO|Detected cache inactivity (last active 30001 ms ago): trimming cache 2023-06-10T19:17:10.734Z|00095|lflow_cache|INFO|Detected cache inactivity (last active 30002 ms ago): trimming cache 2023-06-10T19:18:33.270Z|00096|lflow_cache|INFO|Detected cache inactivity (last active 30004 ms ago): trimming cache 2023-06-10T19:25:23.987Z|00097|lflow_cache|INFO|Detected cache inactivity (last active 30002 ms ago): trimming cache 2023-06-10T19:32:03.981Z|00098|lflow_cache|INFO|Detected cache inactivity (last active 30003 ms ago): trimming cache 2023-06-10T19:36:59.153Z|00099|lflow_cache|INFO|Detected cache inactivity (last active 30006 ms ago): trimming cache 2023-06-10T20:18:34.798Z|00100|lflow_cache|INFO|Detected cache inactivity (last active 30002 ms ago): trimming cache ==> /var/log/kolla/openvswitch/ovn-nb-db.log <== 2023-06-10T20:52:34.461Z|00518|reconnect|WARN|tcp:192.168.1.11:56798: connection dropped (Connection reset by peer) 2023-06-10T20:52:34.463Z|00519|reconnect|WARN|tcp:192.168.1.13:57048: connection dropped (Connection reset by peer) 2023-06-10T20:52:34.464Z|00520|reconnect|WARN|tcp:192.168.1.11:56792: connection dropped (Connection reset by peer) 2023-06-10T20:52:34.465Z|00521|reconnect|WARN|tcp:192.168.1.13:57016: connection dropped (Connection reset by peer) 2023-06-10T20:52:34.466Z|00522|reconnect|WARN|tcp:192.168.1.11:56746: connection dropped (Connection reset by peer) 2023-06-10T20:52:34.466Z|00523|reconnect|WARN|tcp:192.168.1.11:56742: connection dropped (Connection reset by peer) 2023-06-10T20:52:34.468Z|00524|reconnect|WARN|tcp:192.168.1.11:56786: connection dropped (Connection reset by peer) 2023-06-10T20:52:34.472Z|00525|reconnect|WARN|tcp:192.168.1.11:56784: connection dropped (Connection reset by peer) 2023-06-10T20:52:34.474Z|00526|reconnect|WARN|tcp:192.168.1.13:57044: connection dropped (Connection reset by peer) 2023-06-10T20:52:34.484Z|00527|reconnect|WARN|tcp:192.168.1.11:56760: connection dropped (Connection reset by peer) ==> /var/log/kolla/openvswitch/ovn-northd.log <== 2023-06-10T20:52:34.377Z|00695|reconnect|INFO|tcp:192.168.1.12:6641: connected 2023-06-10T20:52:34.379Z|00696|ovsdb_cs|INFO|tcp:192.168.1.12:6641: clustered database server is not cluster leader; trying another server 2023-06-10T20:52:34.379Z|00697|ovsdb_cs|INFO|tcp:192.168.1.12:6641: clustered database server is not cluster leader; trying another server 2023-06-10T20:52:34.379Z|00698|reconnect|INFO|tcp:192.168.1.12:6641: connection attempt timed out 
2023-06-10T20:52:34.380Z|00699|reconnect|INFO|tcp:192.168.1.11:6641: connecting... 2023-06-10T20:52:34.380Z|00700|reconnect|INFO|tcp:192.168.1.11:6641: connected 2023-06-10T20:52:34.408Z|00701|ovsdb_cs|INFO|tcp:192.168.1.11:6641: clustered database server is not cluster leader; trying another server 2023-06-10T20:52:34.408Z|00702|reconnect|INFO|tcp:192.168.1.11:6641: connection attempt timed out 2023-06-10T20:52:35.409Z|00703|reconnect|INFO|tcp:192.168.1.13:6641: connecting... 2023-06-10T20:52:35.409Z|00704|reconnect|INFO|tcp:192.168.1.13:6641: connected ==> /var/log/kolla/openvswitch/ovn-sb-db.log <== 2023-06-10T20:51:27.588Z|00496|raft|INFO|Transferring leadership to write a snapshot. 2023-06-10T20:51:27.597Z|00497|raft|INFO|rejected append_reply (not leader) 2023-06-10T20:51:27.597Z|00498|raft|INFO|rejected append_reply (not leader) 2023-06-10T20:51:27.597Z|00499|raft|INFO|server d051 is leader for term 3574 2023-06-10T20:51:27.663Z|00500|jsonrpc|WARN|tcp:192.168.1.13:45144: receive error: Connection reset by peer 2023-06-10T20:51:27.664Z|00501|reconnect|WARN|tcp:192.168.1.13:45144: connection dropped (Connection reset by peer) 2023-06-10T20:51:27.665Z|00502|jsonrpc|WARN|tcp:192.168.1.12:47858: receive error: Connection reset by peer 2023-06-10T20:51:27.665Z|00503|reconnect|WARN|tcp:192.168.1.12:47858: connection dropped (Connection reset by peer) 2023-06-10T20:51:27.667Z|00504|jsonrpc|WARN|tcp:192.168.1.11:41500: receive error: Connection reset by peer 2023-06-10T20:51:27.667Z|00505|reconnect|WARN|tcp:192.168.1.11:41500: connection dropped (Connection reset by peer) ==> /var/log/kolla/openvswitch/ovsdb-server.log <== 2023-05-29T02:05:02.891Z|00027|reconnect|WARN|unix#67466: connection dropped (Connection reset by peer) 2023-05-31T16:20:33.494Z|00028|reconnect|ERR|tcp:127.0.0.1:59928: no response to inactivity probe after 5 seconds, disconnecting 2023-06-01T20:43:23.516Z|00001|vlog|INFO|opened log file /var/log/kolla/openvswitch/ovsdb-server.log 2023-06-01T20:43:23.520Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.17.3 2023-06-01T20:43:33.522Z|00003|memory|INFO|7216 kB peak resident set size after 10.0 seconds 2023-06-01T20:43:33.522Z|00004|memory|INFO|atoms:826 cells:770 monitors:5 sessions:3 2023-06-03T07:44:05.774Z|00005|reconnect|ERR|tcp:127.0.0.1:40098: no response to inactivity probe after 5 seconds, disconnecting 2023-06-03T09:41:49.039Z|00006|reconnect|ERR|tcp:127.0.0.1:60042: no response to inactivity probe after 5 seconds, disconnecting 2023-06-11T02:05:08.802Z|00007|jsonrpc|WARN|unix#26478: receive error: Connection reset by peer 2023-06-11T02:05:08.803Z|00008|reconnect|WARN|unix#26478: connection dropped (Connection reset by peer) ==> /var/log/kolla/openvswitch/ovs-vswitchd.log <== 2023-06-10T19:16:50.733Z|00138|connmgr|INFO|br-int<->unix#3: 4 flow_mods in the 2 s starting 10 s ago (4 adds) 2023-06-10T19:18:13.267Z|00139|connmgr|INFO|br-int<->unix#3: 14 flow_mods 10 s ago (14 adds) 2023-06-10T19:19:13.267Z|00140|connmgr|INFO|br-int<->unix#3: 4 flow_mods 39 s ago (4 adds) 2023-06-10T19:22:23.652Z|00141|connmgr|INFO|br-int<->unix#3: 2 flow_mods 10 s ago (2 adds) 2023-06-10T19:24:11.917Z|00142|connmgr|INFO|br-int<->unix#3: 16 flow_mods 10 s ago (16 deletes) 2023-06-10T19:25:11.917Z|00143|connmgr|INFO|br-int<->unix#3: 42 flow_mods in the 26 s starting 44 s ago (28 adds, 14 deletes) 2023-06-10T19:31:41.912Z|00144|connmgr|INFO|br-int<->unix#3: 113 flow_mods in the 2 s starting 10 s ago (6 adds, 107 deletes) 2023-06-10T19:36:29.369Z|00145|connmgr|INFO|br-int<->unix#3: 109 
flow_mods in the 9 s starting 10 s ago (103 adds, 6 deletes) 2023-06-10T19:39:49.885Z|00146|connmgr|INFO|br-int<->unix#3: 2 flow_mods 10 s ago (2 adds) 2023-06-10T20:18:12.593Z|00147|connmgr|INFO|br-int<->unix#3: 111 flow_mods in the 2 s starting 10 s ago (6 adds, 105 deletes) -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sun Jun 11 02:28:10 2023 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 10 Jun 2023 22:28:10 -0400 Subject: [kolla] rabbitmq failed to build image using 2023.1 release Message-ID: Folks, Do you know how to solve this? I am using release 2023.1 of kolla to build images using ubuntu 22.04 root at docker-reg:/opt/kolla/etc/kolla# kolla-build --registry docker-reg:4000 --config-file kolla-build.conf --debug --threads 1 --skip-existing --push --cache --format none rabbitmq INFO:kolla.common.utils:Using engine: docker INFO:kolla.common.utils:Found the container image folder at /usr/local/share/kolla/docker INFO:kolla.common.utils:Added image rabbitmq to queue INFO:kolla.common.utils:Attempt number: 1 to run task: BuildTask(rabbitmq) DEBUG:kolla.common.utils.rabbitmq:Processing INFO:kolla.common.utils.rabbitmq:Building started at 2023-06-11 02:25:44.208880 DEBUG:kolla.common.utils.rabbitmq:Turned 0 plugins into plugins archive DEBUG:kolla.common.utils.rabbitmq:Turned 0 additions into additions archive INFO:kolla.common.utils.rabbitmq:Step 1/11 : FROM docker-reg:4000/kolla/base:2023.1 INFO:kolla.common.utils.rabbitmq: ---> 4551f4af8ddf INFO:kolla.common.utils.rabbitmq:Step 2/11 : LABEL maintainer="Kolla Project (https://launchpad.net/kolla)" name="rabbitmq" build-date="20230611" INFO:kolla.common.utils.rabbitmq: ---> Using cache INFO:kolla.common.utils.rabbitmq: ---> 6c2ef10499f7 INFO:kolla.common.utils.rabbitmq:Step 3/11 : RUN usermod --append --home /var/lib/rabbitmq --groups kolla rabbitmq && mkdir -p /var/lib/rabbitmq && chown -R 42439:42439 /var/lib/rabbitmq INFO:kolla.common.utils.rabbitmq: ---> Using cache INFO:kolla.common.utils.rabbitmq: ---> 29ef8940f40b INFO:kolla.common.utils.rabbitmq:Step 4/11 : RUN echo 'Uris: https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu' >/etc/apt/sources.list.d/erlang.sources && echo 'Components: main' >>/etc/apt/sources.list.d/erlang.sources && echo 'Types: deb' >>/etc/apt/sources.list.d/erlang.sources && echo 'Suites: jammy' >>/etc/apt/sources.list.d/erlang.sources && echo 'Signed-By: /etc/kolla/apt-keys/erlang-ppa.gpg' >>/etc/apt/sources.list.d/erlang.sources && echo 'Uris: https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu' >/etc/apt/sources.list.d/rabbitmq.sources && echo 'Components: main' >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Types: deb' >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Suites: jammy' >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Signed-By: /etc/kolla/apt-keys/rabbitmq.gpg' >>/etc/apt/sources.list.d/rabbitmq.sources INFO:kolla.common.utils.rabbitmq: ---> Using cache INFO:kolla.common.utils.rabbitmq: ---> 6d92a7342a90 INFO:kolla.common.utils.rabbitmq:Step 5/11 : RUN apt-get --error-on=any update && apt-get -y install --no-install-recommends logrotate rabbitmq-server && apt-get clean && rm -rf /var/lib/apt/lists/* INFO:kolla.common.utils.rabbitmq: ---> Running in 0deab7961445 INFO:kolla.common.utils.rabbitmq:Get:1 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope InRelease [5,463 B] INFO:kolla.common.utils.rabbitmq:Get:2 http://archive.ubuntu.com/ubuntu jammy-backports 
InRelease [108 kB] INFO:kolla.common.utils.rabbitmq:Get:3 http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [3,447 B] INFO:kolla.common.utils.rabbitmq:Get:7 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope/main amd64 Packages [126 kB] INFO:kolla.common.utils.rabbitmq:Get:8 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy InRelease [5,152 B] INFO:kolla.common.utils.rabbitmq:Get:9 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy InRelease [18.1 kB] INFO:kolla.common.utils.rabbitmq:Get:5 http://mirror.siena.edu/ubuntu jammy-updates InRelease [119 kB] INFO:kolla.common.utils.rabbitmq:Get:10 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages [9,044 B] INFO:kolla.common.utils.rabbitmq:Get:6 http://ftp.usf.edu/pub/ubuntu jammy-security InRelease [110 kB] INFO:kolla.common.utils.rabbitmq:Get:11 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [27.0 kB] INFO:kolla.common.utils.rabbitmq:Get:4 https://archive.linux.duke.edu/ubuntu jammy InRelease [270 kB] INFO:kolla.common.utils.rabbitmq:Get:12 http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages [49.4 kB] INFO:kolla.common.utils.rabbitmq:Get:14 http://ubuntu.osuosl.org/ubuntu jammy-updates/main amd64 Packages [857 kB] INFO:kolla.common.utils.rabbitmq:Get:17 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy/main amd64 Packages [8,167 B] INFO:kolla.common.utils.rabbitmq:Get:16 http://pubmirrors.dal.corespace.com/ubuntu jammy-security/universe amd64 Packages [928 kB] INFO:kolla.common.utils.rabbitmq:Get:15 https://atl.mirrors.clouvider.net/ubuntu jammy-security/main amd64 Packages [575 kB] INFO:kolla.common.utils.rabbitmq:Get:19 http://www.club.cc.cmu.edu/pub/ubuntu jammy/main amd64 Packages [1,792 kB] INFO:kolla.common.utils.rabbitmq:Get:18 http://mirror.team-cymru.com/ubuntu jammy/universe amd64 Packages [17.5 MB] INFO:kolla.common.utils.rabbitmq:Get:13 http://mirrors.syringanetworks.net/ubuntu-archive jammy-updates/universe amd64 Packages [1,176 kB] INFO:kolla.common.utils.rabbitmq:Fetched 23.7 MB in 6s (4,091 kB/s) INFO:kolla.common.utils.rabbitmq:Reading package lists... INFO:kolla.common.utils.rabbitmq:Reading package lists... INFO:kolla.common.utils.rabbitmq:Building dependency tree... INFO:kolla.common.utils.rabbitmq:Reading state information... INFO:kolla.common.utils.rabbitmq:Some packages could not be installed. This may mean that you have INFO:kolla.common.utils.rabbitmq:requested an impossible situation or if you are using the unstable INFO:kolla.common.utils.rabbitmq:distribution that some required packages have not yet been created INFO:kolla.common.utils.rabbitmq:or been moved out of Incoming. 
INFO:kolla.common.utils.rabbitmq:The following information may help to resolve the situation:
INFO:kolla.common.utils.rabbitmq:The following packages have unmet dependencies:
INFO:kolla.common.utils.rabbitmq: rabbitmq-server : Depends: erlang-base (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            erlang-base-hipe (< 1:26.0) but it is not installable or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-crypto (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-eldap (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-inets (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-mnesia (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-os-mon (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-parsetools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-public-key (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-runtime-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-ssl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-syntax-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:                   Depends: erlang-xmerl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.rabbitmq:                            esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.rabbitmq:E: Unable to correct problems, you have held broken packages.

From yasufum.o at gmail.com  Mon Jun 12 04:43:59 2023
From: yasufum.o at gmail.com (Yasufumi Ogawa)
Date: Mon, 12 Jun 2023 13:43:59 +0900
Subject: [ptg][tacker] Tacker PTG schedule
Message-ID:

Hi team,

We are going to have our PTG session this Thursday, 15:00-17:50 UTC, in room 10.
We have also set up Webex so that you can join remotely. See tacker's etherpad for the link to the remote session.

https://etherpad.opendev.org/p/vancouver-june2023-tacker

Thanks,
Yasufumi

From yasufum.o at gmail.com  Mon Jun 12 04:46:37 2023
From: yasufum.o at gmail.com (Yasufumi Ogawa)
Date: Mon, 12 Jun 2023 13:46:37 +0900
Subject: [tacker] Skip weekly IRC meeting
Message-ID:

Hi team,

Since we'll have the PTG this week, I would like to skip the next IRC meeting on Jun 13.

Thanks,
Yasufumi

From michal.arbet at ultimum.io  Mon Jun 12 09:48:03 2023
From: michal.arbet at ultimum.io (Michal Arbet)
Date: Mon, 12 Jun 2023 11:48:03 +0200
Subject: [kolla] rabbitmq failed to build image using 2023.1 release
In-Reply-To: References: Message-ID:

APT dependencies broken :(

Michal Arbet
Openstack Engineer

Ultimum Technologies a.s.
Na Poříčí 1047/26, 11000 Praha 1
Czech Republic

+420 604 228 897
michal.arbet at ultimum.io
https://ultimum.io

LinkedIn | Twitter | Facebook

On Sun, 11 Jun 2023 at 4:37, Satish Patel wrote:

> Folks,
>
> Do you know how to solve this? I am using release 2023.1 of kolla to build
> images using ubuntu 22.04
[...]
From michal.arbet at ultimum.io  Mon Jun 12 10:15:28 2023
From: michal.arbet at ultimum.io (Michal Arbet)
Date: Mon, 12 Jun 2023 12:15:28 +0200
Subject: [kolla] rabbitmq failed to build image using 2023.1 release
In-Reply-To: References: Message-ID:

Fix proposed here: https://review.opendev.org/c/openstack/kolla/+/885857

Michal Arbet
Openstack Engineer

Ultimum Technologies a.s.
Na Poříčí 1047/26, 11000 Praha 1
Czech Republic

+420 604 228 897
michal.arbet at ultimum.io
https://ultimum.io

On Mon, 12 Jun 2023 at 11:48, Michal Arbet wrote:

> APT dependencies broken :(
[...]
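Until that change is available to your build, pinning Erlang below the version rabbitmq-server refuses should unblock things. A rough, untested sketch to run (or bake into the image) before the failing apt-get step; the glob and target version are assumptions and may need adjusting against what the PPA currently ships:

# Pin all Erlang packages below 1:26.0 so rabbitmq-server's
# "erlang-* (< 1:26.0)" dependencies stay satisfiable.
cat > /etc/apt/preferences.d/erlang-pin <<'EOF'
Package: erlang*
Pin: version 1:25.*
Pin-Priority: 1000
EOF

# Re-run the failing step; apt should now select the 1:25.x packages.
apt-get --error-on=any update
apt-get -y install --no-install-recommends logrotate rabbitmq-server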
From dtantsur at protonmail.com  Mon Jun 12 10:31:15 2023
From: dtantsur at protonmail.com (Dmitry Tantsur)
Date: Mon, 12 Jun 2023 10:31:15 +0000
Subject: New repo for os-net-config?
In-Reply-To: <0fcc7dbc078566c3f692f6824e3248f4cfefeb3b.camel at redhat.com>
References: <20230607194243.nan7j3loophhp6cy at yuggoth.org> <0fcc7dbc078566c3f692f6824e3248f4cfefeb3b.camel at redhat.com>
Message-ID:

On 6/8/23 12:53, smooney at redhat.com wrote:
> On Wed, 2023-06-07 at 15:44 -0700, Dan Sneddon wrote:
>> On Wed, Jun 7, 2023 at 12:49 PM Jeremy Stanley wrote:
>>> On 2023-06-07 12:07:59 -0700 (-0700), Dan Sneddon wrote:
>>>> On Wed, Jun 7, 2023 at 9:00 AM Clark Boylan wrote:
>>> [...]
>>>>> Is there some reason the existing repository won't work?
>>>>
>>>> Currently os-net-config is a part of the TripleO project. Since
>>>> TripleO is retiring and being replaced by something hosted on
>>>> GitHub, we are no longer maintaining the Master/Zed branches.
>>> [...]
>>>
>>> Maybe you misunderstood. To restate: Is there any reason the people
>>> who want to use and maintain the openstack/os-net-config master and
>>> stable/zed (or other) branches can't just adopt the project? It's
>>> well within the TC's power to grant control of that repository to
>>> another project team who isn't TripleO.
>>>
>>> Which use cases specifically (outside of Red Hat's lingering
>>> interest in the stable/wallaby branches of TripleO repositories) are
>>> you referring to?
>>> --
>>> Jeremy Stanley
>>
>> The people who want to continue to use os-net-config are developing the
>> replacement for TripleO, but they have moved to GitHub. That's an option
>> for os-net-config, but not my first preference.
>> I have over the years heard of companies using os-net-config for various
>> use cases. It's possible that posting to openstack-discuss won't reach any
>> of those users, but it doesn't hurt to ask here.
>
> i will need to reread the thread but i thought that there was reference to baremetal
> use cases i had assumed that meant ironic?
>
> just because we are using GitHub for the replacement for tripleo is not a reason to move
> things to github by default. moving to github for the code review, as im sure you are aware,
> is a much worse code review interface. we would be losing the release tooling and ability to
> publish to pypi. the ci would need to be ported among other things.
>
> There is also an open question of if/how much of os-net-config will continue to be used for the
> operator based installer. alternatives are being considered although we have known gaps.
> no decision has been made on if we will continue with os-net-config or replace it with nmstate
> or a hybrid of the two.
>
> a quick search https://codesearch.opendev.org/?q=os-net-config&i=nope&literal=nope&files=&excludeFiles=&repos=
> seems to indicate that os-net-config is used by:
> - openstack-virtual-baremetal
> - networking-bigswitch
> - possibly starlingx
>
> so i would suggest moving it to ironic governance and keeping the repo as is to support the virtual baremetal use case.

None of these repos are part of the baremetal project. I cannot speak for the PTL or for the team, but I highly doubt we'll be in a position to adopt os-net-config.

Dmitry

From murilo at evocorp.com.br  Mon Jun 12 10:50:52 2023
From: murilo at evocorp.com.br (Murilo Morais)
Date: Mon, 12 Jun 2023 07:50:52 -0300
Subject: Monitoring
In-Reply-To: References: Message-ID:

Thanks a lot for the tips, I'll try it!

On Fri, 9 Jun 2023 at 03:00, Matthias Runge <mrunge at matthias-runge.de> wrote:

> On Thu, Jun 08, 2023 at 09:19:11AM -0300, Murilo Morais wrote:
> > Good morning everybody!
>
> Good morning,
>
> > Guys, which monitoring stack do you recommend to monitor Host and VM
> > resources? Or what monitoring strategies do they recommend?
>
> This highly depends on your understanding of the term "monitoring".
>
> To track resources used by OpenStack users/projects, I'd recommend
> ceilometer. In addition to ceilometer, you'll probably need something
> to track resource usage in infrastructure nodes, e.g. memory usage, disk
> usage, I/O etc. My recommendation would be to use something like
> collectd or node_exporter.
>
> Then you'd need a data storage for storing the metrics. Things like
> gnocchi or prometheus come to mind.
>
> The choice of the tools will probably be influenced by the available
> knowledge, and the way and where OpenStack is installed.
>
> What is not covered above is: tracking if services are up and running,
> any kind of centralized logging, etc.
>
> Matthias
>
> --
> Matthias Runge
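For the infrastructure-node half of that suggestion, node_exporter plus Prometheus needs very little to get going. A minimal sketch, assuming node_exporter already runs on its default port 9100 on each host (the host names here are made up for illustration):

# prometheus.yml written from the shell for brevity; point the targets
# at your own controller/compute nodes.
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: openstack-nodes
    static_configs:
      - targets:
          - controller01:9100
          - compute01:9100
EOF

# Start Prometheus against that configuration.
./prometheus --config.file=prometheus.yml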
From pierre at stackhpc.com  Mon Jun 12 14:55:30 2023
From: pierre at stackhpc.com (Pierre Riteau)
Date: Mon, 12 Jun 2023 16:55:30 +0200
Subject: [all][requirements] Automated management of setup.py
Message-ID:

Hello,

We recently had a patch [1] submitted to cloudkitty-dashboard that dropped support for Python 2.7 bits in setup.py (which is fine), but at the same time removed the following line:

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT

I pointed it out to the author but they never updated their patch.

This comment is managed by the update.py script in requirements [2]. Should there be an effort to update setup.py across all repositories using this script?

Thanks,
Pierre Riteau (priteau)

[1] https://review.opendev.org/c/openstack/cloudkitty-dashboard/+/884339
[2] https://opendev.org/openstack/requirements/src/branch/master/openstack_requirements/cmds/update.py

From knikolla at bu.edu  Mon Jun 12 14:59:15 2023
From: knikolla at bu.edu (Nikolla, Kristi)
Date: Mon, 12 Jun 2023 14:59:15 +0000
Subject: [tc] No weekly meeting on June 13
Message-ID: <2CDF3E13-82E1-4E13-AA84-BB519CA04F26 at bu.edu>

Hi all,

As this week is the OpenInfra Summit and PTG, there will be no weekly meeting of the TC on June 13. The next meeting will be Tuesday, June 20, 2023.

Best,
Kristi Nikolla

From ralonsoh at redhat.com  Mon Jun 12 15:55:08 2023
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Mon, 12 Jun 2023 08:55:08 -0700
Subject: [neutron][ptg] PTG schedule
Message-ID:

Hello all:

This is the PTG week and we are thrilled to have you there. Just as a heads-up, on Tuesday at 11:20 [1] we have the "Neutron meet and greet: Operators feedback session". Join us to share your feedback.

We have also booked table 15 [2] on Wednesday and Thursday, from 9:00 to 12:50.

Please remember our previously used etherpad [3] is now merged into the PTG-generated etherpad [4]. Please use this last link [4].

Regards and see you tomorrow!

[1] https://vancouver2023.openinfra.dev/a/schedule
[2] https://ptg.opendev.org/ptg.html
[3] https://etherpad.opendev.org/p/neutron-vancouver-2023
[4] https://etherpad.opendev.org/p/vancouver-june2023-neutron
From fungi at yuggoth.org  Mon Jun 12 16:17:36 2023
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 12 Jun 2023 16:17:36 +0000
Subject: [all][requirements] Automated management of setup.py
In-Reply-To: References: Message-ID: <20230612161735.dt7kruxv6bf65mq2 at yuggoth.org>

On 2023-06-12 16:55:30 +0200 (+0200), Pierre Riteau wrote:
> We recently had a patch [1] submitted to cloudkitty-dashboard that
> dropped support for Python 2.7 bits in setup.py (which is fine), but
> at the same time removed the following line:
>
> # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
[...]

It's safe to remove those comments now when you find them. We stopped syncing requirements lists during the Rocky cycle:

https://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html

--
Jeremy Stanley

From mhabdullah at ip-tribe.com  Mon Jun 12 11:46:08 2023
From: mhabdullah at ip-tribe.com (Mohd Haniff Bin Abdullah)
Date: Mon, 12 Jun 2023 11:46:08 +0000
Subject: package installation
Message-ID:

Hi,

We have installed an OpenStack CLI VM and can access the CLI. Our VM does not have an internet connection, so we need the whole package set to be downloaded elsewhere and sftp'd to the VM. Where could we get the packages? I got a packages.tar.gz from a previous colleague, but he left our company and we don't have a clue where he got it. Would you please help us?

Thank you so much!

Regards,
Haniff
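Assuming the CLI was originally installed with pip, one common way to rebuild such a bundle is to fetch the packages on a machine that does have internet access and install them offline on the VM. A sketch; the directory and archive names are placeholders:

# On a connected machine with a matching OS/Python version:
pip download python-openstackclient -d packages/
tar czf packages.tar.gz packages/

# After sftp'ing the archive to the offline VM:
tar xzf packages.tar.gz
pip install --no-index --find-links packages/ python-openstackclient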
From thomas at goirand.fr  Mon Jun 12 16:10:09 2023
From: thomas at goirand.fr (thomas at goirand.fr)
Date: Mon, 12 Jun 2023 09:10:09 -0700
Subject: [puppet] Sessions in June 2023 PTG
In-Reply-To: <934D3198-BEAC-42BC-964C-AFC5E13606CB at binero.com>
References: <934D3198-BEAC-42BC-964C-AFC5E13606CB at binero.com>
Message-ID: <119fc547-29db-4a76-9e6f-4e09d9e4bb56 at goirand.fr>

Hi,

I'm already in Vancouver, and I'll join you guys. I'm very much looking forward to meeting you all for real this time.

Cheers,
Thomas

On Jun 9, 2023 1:48 PM, Tobias Urdin wrote:

> Hello Takashi,
>
> Thanks for arranging all this, I will be there! Looking forward to meeting you in person.
>
> Best regards
> Tobias
>
> On 8 Jun 2023, at 05:45, Takashi Kajinami wrote:
>
>> Hello,
>>
>> I'm attending the upcoming OpenInfra Summit and PTG at Vancouver so would like to moderate some puppet sessions in the PTG.
>>
>> I've reserved slots from 14:30 to 16:20 on Wednesday. However we can be flexible, so please let me know if you are interested but have any conflicts.
>> https://etherpad.opendev.org/p/vancouver-june2023-puppet-openstack
>>
>> I've added a few topics including the recent discussion about module modernization. However it'd be nice if we can have a few more topics, especially any feedback from users or (potential) new contributors. Please add your name and topics if you are planning to join us.
>>
>> Thank you,
>> Takashi

From i.maximets at ovn.org  Mon Jun 12 16:22:51 2023
From: i.maximets at ovn.org (Ilya Maximets)
Date: Mon, 12 Jun 2023 18:22:51 +0200
Subject: [ovs-discuss] [kolla-ansible][ovn][ovs] connection dropped and inactivity errors
In-Reply-To: References: Message-ID:

On 6/11/23 04:16, Satish Patel via discuss wrote:
> Folks,
>
> I am getting some strange errors on my kolla based OVN deployment. I have only 5 nodes so it's not a large deployment. Are there any ovn related tuning options that I missed?
>
> I have the following timers configured at present:
>
> ovn-openflow-probe-interval="60"
> ovn-remote-probe-interval="60000"
>
> ==> /var/log/kolla/openvswitch/ovsdb-server.log <==
> 2023-05-29T02:05:02.891Z|00027|reconnect|WARN|unix#67466: connection dropped (Connection reset by peer)
> 2023-05-31T16:20:33.494Z|00028|reconnect|ERR|tcp:127.0.0.1:59928: no response to inactivity probe after 5 seconds, disconnecting
> 2023-06-01T20:43:23.516Z|00001|vlog|INFO|opened log file /var/log/kolla/openvswitch/ovsdb-server.log
> 2023-06-01T20:43:23.520Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.17.3
> 2023-06-01T20:43:33.522Z|00003|memory|INFO|7216 kB peak resident set size after 10.0 seconds
> 2023-06-01T20:43:33.522Z|00004|memory|INFO|atoms:826 cells:770 monitors:5 sessions:3
> 2023-06-03T07:44:05.774Z|00005|reconnect|ERR|tcp:127.0.0.1:40098: no response to inactivity probe after 5 seconds, disconnecting
> 2023-06-03T09:41:49.039Z|00006|reconnect|ERR|tcp:127.0.0.1:60042: no response to inactivity probe after 5 seconds, disconnecting

All the logs, except for these ones, can appear under normal circumstances and do not indicate any real issues on their own.

I'm not sure what is connecting to a local ovsdb-server from the localhost via TCP. It's not OVN. Maybe some OpenStack component? In any case, this doesn't seem related to the core OVN itself.

Best regards,
Ilya Maximets.
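To track down the local TCP client being referred to here, it can help to look at which process owns connections to the ovsdb-server listener on the affected node. A sketch (run as root; on a kolla deployment the commands may need to be run inside the Open vSwitch containers):

# List TCP connections to localhost together with the owning processes.
ss -tnp dst 127.0.0.1

# Show any manager targets configured on the local ovsdb-server,
# i.e. which clients ovsdb-server expects to talk to.
ovs-vsctl get-manager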
From abishop at redhat.com  Mon Jun 12 16:44:22 2023
From: abishop at redhat.com (Alan Bishop)
Date: Mon, 12 Jun 2023 09:44:22 -0700
Subject: [puppet] Sessions in June 2023 PTG
In-Reply-To: <119fc547-29db-4a76-9e6f-4e09d9e4bb56 at goirand.fr>
References: <934D3198-BEAC-42BC-964C-AFC5E13606CB at binero.com> <119fc547-29db-4a76-9e6f-4e09d9e4bb56 at goirand.fr>
Message-ID:

On Mon, Jun 12, 2023 at 9:39 AM wrote:

> Hi,
>
> I'm already in Vancouver, and I'll join you guys. I'm very much looking
> forward to meeting you all for real this time.

I'll be there as well, and I'm also looking forward to meeting everyone f2f.

Alan

> Cheers,
> Thomas
[...]

From jay at gr-oss.io  Mon Jun 12 20:33:01 2023
From: jay at gr-oss.io (Jay Faulkner)
Date: Mon, 12 Jun 2023 13:33:01 -0700
Subject: New repo for os-net-config?
In-Reply-To: References: <20230607194243.nan7j3loophhp6cy at yuggoth.org> <0fcc7dbc078566c3f692f6824e3248f4cfefeb3b.camel at redhat.com>
Message-ID:

On Mon, Jun 12, 2023 at 3:51 AM Dmitry Tantsur wrote:

[...]
> None of these repos are part of the baremetal project. I cannot speak
> for the PTL or for the team, but I highly doubt we'll be in a position
> to adopt os-net-config.
> Dmitry

+1. I don't think any Ironic documentation refers to using os-net-config -- it's a single option for how to configure your images post-boot, out of a large number (many of which remain supported). I agree with Dmitry that it's hard to see this fitting well in the baremetal program.

Thanks,
Jay Faulkner
Ironic PTL
TC Vice-Chair

From nguyenhuukhoinw at gmail.com  Tue Jun 13 00:18:21 2023
From: nguyenhuukhoinw at gmail.com (Nguyễn Hữu Khôi)
Date: Tue, 13 Jun 2023 07:18:21 +0700
Subject: [OPENSTACK][rabbitmq] using quorum queues
In-Reply-To: References: Message-ID:

Hello Huettner,

I have used the quorum queue since March and it has been OK until now. It looks more stable than the classic queue. Some feedback for you.

Thank you.
Nguyen Huu Khoi.

On Mon, May 8, 2023 at 1:14 PM Felix Hüttner wrote:

> Hi Nguyen,
>
> we are using quorum queues for one of our deployments. So far we did not
> have any issue with them. They also seem to survive restarts without issues
> (however reply queues are still broken afterwards in a small amount of
> cases, but they are no quorum/mirrored queues anyway).
>
> So I would recommend them for everyone that creates a new cluster.
>
> --
> Felix Huettner
>
> From: Nguyễn Hữu Khôi
> Sent: Saturday, May 6, 2023 4:29 AM
> To: OpenStack Discuss
> Subject: [OPENSTACK][rabbitmq] using quorum queues
>
> Hello guys.
>
> Is there anyone who uses the quorum queue for openstack? Could you give
> some feedback to compare with the classic queue?
>
> Thank you.
>
> Nguyen Huu Khoi
>
> This e-mail may contain confidential content and is intended only for the
> specified recipient/s. If you are not the intended recipient, please
> inform the sender immediately and delete this e-mail.

From saphi070 at gmail.com  Tue Jun 13 01:43:15 2023
From: saphi070 at gmail.com (Sa Pham)
Date: Tue, 13 Jun 2023 08:43:15 +0700
Subject: [OPENSTACK][rabbitmq] using quorum queues
In-Reply-To: References: Message-ID:

Hi Khôi,

Why do you say using the quorum queue is more stable than the classic queue?

Thanks,

On Tue, Jun 13, 2023 at 7:26 AM Nguyễn Hữu Khôi wrote:

> Hello Huettner,
> I have used the quorum queue since March and it has been OK until now. It
> looks more stable than the classic queue.
[...]
--
Sa Pham Dang
Skype: great_bn
Phone/Telegram: 0986.849.582

From nguyenhuukhoinw at gmail.com  Tue Jun 13 02:05:16 2023
From: nguyenhuukhoinw at gmail.com (Nguyễn Hữu Khôi)
Date: Tue, 13 Jun 2023 09:05:16 +0700
Subject: [OPENSTACK][rabbitmq] using quorum queues
In-Reply-To: References: Message-ID:

Hello.

Firstly, when I used the classic queue my rabbitmq cluster was sometimes broken: the compute services showed state down and I needed to restart them to bring them back up. Secondly, with quorum queues, when 1 of 3 controllers is down my system still works, although it is not as fast as with all controllers up. I have run it for about 3 months compared with classic. My openstack is Yoga and uses Kolla-Ansible as a deployment tool.

Nguyen Huu Khoi

On Tue, Jun 13, 2023 at 8:43 AM Sa Pham wrote:

> Hi Khôi,
>
> Why do you say using the quorum queue is more stable than the classic
> queue?
[...]

From saphi070 at gmail.com  Tue Jun 13 02:14:56 2023
From: saphi070 at gmail.com (Sa Pham)
Date: Tue, 13 Jun 2023 09:14:56 +0700
Subject: [OPENSTACK][rabbitmq] using quorum queues
In-Reply-To: References: Message-ID:

Dear Khôi,

Thanks for your reply.

On Tue, Jun 13, 2023 at 9:05 AM Nguyễn Hữu Khôi wrote:

> Hello.
> Firstly, when I used the classic queue my rabbitmq cluster was sometimes
> broken: the compute services showed state down and I needed to restart them.
[...]

--
Sa Pham Dang
Skype: great_bn
Phone/Telegram: 0986.849.582
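For anyone wanting to try this on a similar kolla-ansible deployment: quorum queues are driven by an oslo.messaging flag, and recent kolla-ansible releases expose a single toggle for it. A sketch only -- the variable name is taken from recent branches, so verify it exists on yours, and note the setting applies to newly created queues only, so a redeploy/reset of RabbitMQ is needed:

# kolla-ansible: enable quorum queues for all services.
cat >> /etc/kolla/globals.yml <<'EOF'
om_enable_rabbitmq_quorum_queues: "yes"
EOF

# The underlying oslo.messaging option this sets in each service's config:
#   [oslo_messaging_rabbit]
#   rabbit_quorum_queue = true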
Overall, a quiet week, and all bugs have proposed patches. Handing over the deputy role to Lajos, and remember that all meetings are cancelled this week because of the Vancouver PTG Medium * [neutron-api] remove leader_only for maintenance worker - https://bugs.launchpad.net/bugs/2022914 Patch proposed https://review.opendev.org/c/openstack/neutron/+/885240 * OVN DB sync acls Timeout - https://bugs.launchpad.net/bugs/2023130 Patch proposed to batch the transaction https://review.opendev.org/c/openstack/neutron/+/885224 * [OVN] Agent deletion only removes the "Chassis" register, not the "Chassis_Private" one - https://bugs.launchpad.net/bugs/2023171 Patch by ralonsoh https://review.opendev.org/c/openstack/neutron/+/885744 -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From gthiemonge at redhat.com Tue Jun 13 11:13:49 2023 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Tue, 13 Jun 2023 13:13:49 +0200 Subject: [Octavia] cancelling meeting June 14th Message-ID: Hi Folks, As discussed during the last meeting, we are cancelling tomorrow's meeting, it conflicts with the Octavia Forum Session at the OpenInfra summit. Thanks, Gregory -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Jun 13 15:40:11 2023 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 13 Jun 2023 17:40:11 +0200 Subject: [all][requirements] Automated management of setup.py In-Reply-To: <20230612161735.dt7kruxv6bf65mq2@yuggoth.org> References: <20230612161735.dt7kruxv6bf65mq2@yuggoth.org> Message-ID: On Mon, 12 Jun 2023 at 18:22, Jeremy Stanley wrote: > > On 2023-06-12 16:55:30 +0200 (+0200), Pierre Riteau wrote: > > We recently had a patch [1] submitted to cloudkitty-dashboard that > > dropped support for Python 2.7 bits in setup.py (which is fine), but > > at the same time removed the following line: > > > > # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT > > > > I pointed it out to the author but they never updated their patch. > > > > This comment is managed by the update.py script in requirements [2]. > > Should there be an effort to update setup.py across all repositories > > using this script? > [...] > > It's safe to remove those comments now when you find them. We > stopped syncing requirements lists during the Rocky cycle: > > https://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html I didn't realise this effort also stopped the sync of setup.py. Thanks for clarifying. From fungi at yuggoth.org Tue Jun 13 15:48:05 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 13 Jun 2023 15:48:05 +0000 Subject: [all][requirements] Automated management of setup.py In-Reply-To: References: <20230612161735.dt7kruxv6bf65mq2@yuggoth.org> Message-ID: <20230613154805.f2ihvywfi7yemrxk@yuggoth.org> On 2023-06-13 17:40:11 +0200 (+0200), Pierre Riteau wrote: [...] > I didn't realise this effort also stopped the sync of setup.py. > Thanks for clarifying. Yes, it was all done by the same script/job, it's just that some requirements (specifically setup_requires like PBR) had to be managed in setup.py instead of requirements*.txt. Skimming major project histories for setup.py, the last time that was revised by the automated job seems to have been 2017-03-02, so over 6 years ago now. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From atidor12 at gmail.com Tue Jun 13 17:24:41 2023 From: atidor12 at gmail.com (altidor JB) Date: Tue, 13 Jun 2023 13:24:41 -0400 Subject: Openstack /port mirroir issue [Neutron] In-Reply-To: References: Message-ID: Hello, The issues is that I can't get it to work on the specified architecture. I'm trying to redirect traffic to a snort instance on my openstack architecture. Thanks On Wed, Jun 7, 2023, 02:55 Lajos Katona wrote: > Hi, > I don't know about any deployment tooling efforts for tap-as-a-service. > Do you have perhaps any kind of issues or you are just at the first step: > the deployment? > > Best regards > Lajos Katona (lajoskatona) > > altidor JB ezt ?rta (id?pont: 2023. j?n. 7., Sze, > 0:42): > >> Hello, >> I've set up Openstack Zed on Ubuntu Jammy using Juju and MAAS. I'm >> experimenting with some IDS on the architecture and need to implement some >> kind of port mirroring capacity. I've been trying with Tap as a Service but >> can't find any resources for the installation/configuration on my >> architecture. The git refers to Devstack implementation. >> I'm using the large-scale deployment on 6 servers. Can anyone point me in >> the right direction? >> Thanks! >> JB >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Jun 13 18:09:57 2023 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 13 Jun 2023 11:09:57 -0700 Subject: [all][requirements] Automated management of setup.py In-Reply-To: <20230612161735.dt7kruxv6bf65mq2@yuggoth.org> References: <20230612161735.dt7kruxv6bf65mq2@yuggoth.org> Message-ID: On Mon, 2023-06-12 at 16:17 +0000, Jeremy Stanley wrote: > On 2023-06-12 16:55:30 +0200 (+0200), Pierre Riteau wrote: > > We recently had a patch [1] submitted to cloudkitty-dashboard that > > dropped support for Python 2.7 bits in setup.py (which is fine), but > > at the same time removed the following line: > > > > # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT > > > > I pointed it out to the author but they never updated their patch. > > > > This comment is managed by the update.py script in requirements [2]. > > Should there be an effort to update setup.py across all repositories > > using this script? > [...] > > It's safe to remove those comments now when you find them. We > stopped syncing requirements lists during the Rocky cycle: > > https://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html Somewhat related: you can also remove this note from your requirements.txt files too: # The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. This doesn't apply since pip 20.3, which introduced the new dependency resolver. Cheers, Stephen From tkajinam at redhat.com Tue Jun 13 19:23:43 2023 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 14 Jun 2023 04:23:43 +0900 Subject: [storlets] Proposal to make Train/Ussuri/Victoria EOL In-Reply-To: References: Message-ID: Because I've not heard any objections for long, I've pushed the release patch to EOL these branches. In case you have any last minute concerns then please let us know in that review. 
https://review.opendev.org/c/openstack/releases/+/886026 On Mon, Mar 20, 2023 at 1:01?AM Takashi Kajinami wrote: > Hello, > > > Currently we have multiple stable branches open but we haven't seen any > backport > proposed so far. To reduce number of branches we have to maintain, I'd > like to > propose retiring old stable branches(train, ussuri and victoria). > > In case you have any concerns, please let me know. > > Thank you, > Takashi Kajinami > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.acosta at luizalabs.com Tue Jun 13 22:01:36 2023 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Tue, 13 Jun 2023 15:01:36 -0700 Subject: Community attendance - OpenInfra Summit Vancouver In-Reply-To: References: Message-ID: Hey folks, everyone is invited for meet us in the registration area at 17:05. Cheers Em sex, 9 de jun de 2023 12:02, Alvaro Soto escreveu: > I've posted this for LATAM community in case you want to use it as a > communication channel =) > Feel free to invite others. > > https://twitter.com/OpenInfraCDMX/status/1667005673958146050 > ~~ > Hola Comunidad!!!! si alguien va a > @OpenInfraSummit > en Vancouver al siguiente semana. No duden en conectarse al slack de la > comunidad y entrar al canal #openinfra > -summit para que > nos juntemos =) Nos vemos en Vancouver!!!!! > @openinfradev > https://slack.openinfra.mx > ~~ > > Kind regards, > Alvaro Soto. > > On Thu, Jun 8, 2023 at 11:13?AM Alvaro Soto wrote: > >> Awesome! Thank you so much! >> >> On Wed, Jun 7, 2023 at 5:46?AM Roberto Bartzen Acosta < >> roberto.acosta at luizalabs.com> wrote: >> >>> Hi Alvaro, >>> >>> nice! of course we can meet there! all are welcome :) >>> We are trying to schedule something on Tuesday, I already sent messages >>> to Iury and Carlos. >>> >>> Cheers, >>> Roberto >>> >>> Em ter., 6 de jun. de 2023 ?s 06:35, Alvaro Soto >>> escreveu: >>> >>>> Hello Roberto, >>>> I'm not from Brazil (I'm based on Mexico) but as part of LATAM >>>> community, I'll love to be part in local projects :) >>>> >>>> I'll be at OIS, I'll be nice to talk about community challenges for our >>>> local community. >>>> >>>> Cheers. >>>> --- >>>> Alvaro Soto. >>>> >>>> Note: My work hours may not be your work hours. Please do not feel the >>>> need to respond during a time that is not convenient for you. >>>> ---------------------------------------------------------- >>>> Great people talk about ideas, >>>> ordinary people talk about things, >>>> small people talk... about other people. >>>> >>>> On Thu, Jun 1, 2023, 6:47 AM Iury Gregory >>>> wrote: >>>> >>>>> Hi Roberto, >>>>> >>>>> I know some Brazilians that will be attending the OIS Vancouver, >>>>> including me. >>>>> >>>>> Em qui., 1 de jun. de 2023 ?s 09:00, Roberto Bartzen Acosta < >>>>> roberto.acosta at luizalabs.com> escreveu: >>>>> >>>>>> Hello, >>>>>> >>>>>> Will anyone from the Brazilian community attend the OpenInfra in >>>>>> Vancouver? >>>>>> >>>>>> I would like to meet other members from Brazil and discuss the >>>>>> challenges and possibilities of using OpenStack in Brazilian >>>>>> infrastructures. You can ping me on IRC too (racosta). >>>>>> >>>>>> Kind regards, >>>>>> Roberto >>>>>> >>>>>> >>>>>> *?Esta mensagem ? direcionada apenas para os endere?os constantes no >>>>>> cabe?alho inicial. Se voc? n?o est? 
listado nos endere?os constantes no >>>>>> cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa >>>>>> mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o >>>>>> imediatamente anuladas e proibidas?.* >>>>>> >>>>>> *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para >>>>>> assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o >>>>>> poder? aceitar a responsabilidade por quaisquer perdas ou danos causados >>>>>> por esse e-mail ou por seus anexos?.* >>>>>> >>>>> >>>>> >>>>> -- >>>>> *Att[]'s* >>>>> >>>>> *Iury Gregory Melo Ferreira * >>>>> *MSc in Computer Science at UFCG* >>>>> *Ironic PTL * >>>>> *Senior Software Engineer at Red Hat Brazil* >>>>> *Social*: https://www.linkedin.com/in/iurygregory >>>>> *E-mail: iurygregory at gmail.com * >>>>> >>>> >>> >>> *?Esta mensagem ? direcionada apenas para os endere?os constantes no >>> cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no >>> cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa >>> mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o >>> imediatamente anuladas e proibidas?.* >>> >>> *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para >>> assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o >>> poder? aceitar a responsabilidade por quaisquer perdas ou danos causados >>> por esse e-mail ou por seus anexos?.* >>> >> >> >> -- >> >> Alvaro Soto >> >> *Note: My work hours may not be your work hours. Please do not feel the >> need to respond during a time that is not convenient for you.* >> ---------------------------------------------------------- >> Great people talk about ideas, >> ordinary people talk about things, >> small people talk... about other people. >> > > > -- > > Alvaro Soto > > *Note: My work hours may not be your work hours. Please do not feel the > need to respond during a time that is not convenient for you.* > ---------------------------------------------------------- > Great people talk about ideas, > ordinary people talk about things, > small people talk... about other people. > -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From alsotoes at gmail.com Tue Jun 13 22:54:30 2023 From: alsotoes at gmail.com (Alvaro Soto) Date: Tue, 13 Jun 2023 15:54:30 -0700 Subject: [ironic] ARM Support in CI: Call for vendors / contributors / interested parties In-Reply-To: References: <6faf5514-2ac8-9e8b-c543-0f8125b4001b@rd.bbc.co.uk> <728b14da-c275-e59b-0345-ad246a4ced26@rd.bbc.co.uk> Message-ID: Just to bump this email and present you guys Jose Miguel, who it's in interested in this as well. Cheers!!! --- Alvaro Soto. Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you. 
---------------------------------------------------------- Great people talk about ideas, ordinary people talk about things, small people talk... about other people. On Wed, Apr 5, 2023, 2:24 AM Riccardo Pittau wrote: > Hey Alvaro, > > We've discussed support for ubuntu arm64 image during the last weekly > meeting on Monday and agreed to provide it. > I plan to start working on that this week in the > ironic-python-agent-builder repository. > > Ciao > Riccardo > > On Tue, Apr 4, 2023 at 11:00?PM Alvaro Soto wrote: > >> I saw CentOS 8/9 and Debian images; any plans on working with Ubuntu? >> >> Cheers! >> >> On Tue, Apr 4, 2023 at 2:16?PM Jonathan Rosser < >> jonathan.rosser at rd.bbc.co.uk> wrote: >> >>> Hi Jay, >>> >>> We did not need to make any changes to Ironic. >>> >>> At the time we first got things working I don't think there was a >>> published ARM64 image, but it would have been of great benefit as it was >>> another component to bootstrap and have uncertainty about if we had done >>> it properly. >>> >>> I've uploaded the published experimental image to our environment and >>> will have an opportunity to test that soon. >>> >>> Jon. >>> >>> On 31/03/2023 17:01, Jay Faulkner wrote: >>> > Thanks for responding, Jonathan! >>> > >>> > Did you have to make any downstream changes to Ironic for this to >>> > work? Are you using our published ARM64 image or using their own? >>> > >>> > Thanks, >>> > Jay Faulkner >>> > Ironic PTL >>> > >>> > >>> > On Fri, Mar 31, 2023 at 7:56?AM Jonathan Rosser >>> > wrote: >>> > >>> > I have Ironic working with Supermicro MegaDC / Ampere CPU in a >>> > R12SPD-A >>> > system board using the ipmi driver. >>> > >>> > Jon. >>> > >>> > On 29/03/2023 19:39, Jay Faulkner wrote: >>> > > Hi stackers, >>> > > >>> > > Ironic has published an experimental Ironic Python Agent image >>> for >>> > > ARM64 >>> > > >>> > ( >>> https://tarballs.opendev.org/openstack/ironic-python-agent-builder/dib/files/ >>> ) >>> > >>> > > and discussed promoting this image to supported via CI testing. >>> > > However, we have a problem: there are no Ironic developers with >>> > easy >>> > > access to ARM hardware at the moment, and no Ironic developers >>> with >>> > > free time to commit to improving our support of ARM hardware. >>> > > >>> > > So we're putting out a call for help: >>> > > - If you're a hardware vendor and want your ARM hardware >>> supported? >>> > > Please come talk to the Ironic community about setting up >>> > third-party-CI. >>> > > - Are you an operator or contributor from a company invested in >>> ARM >>> > > bare metal? Please come join the Ironic community to help us >>> build >>> > > this support. >>> > > >>> > > Thanks, >>> > > Jay Faulkner >>> > > Ironic PTL >>> > > >>> > > >>> > >>> >>> >> >> -- >> >> Alvaro Soto >> >> *Note: My work hours may not be your work hours. Please do not feel the >> need to respond during a time that is not convenient for you.* >> ---------------------------------------------------------- >> Great people talk about ideas, >> ordinary people talk about things, >> small people talk... about other people. >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michal.arbet at ultimum.io Wed Jun 14 09:41:44 2023 From: michal.arbet at ultimum.io (Michal Arbet) Date: Wed, 14 Jun 2023 11:41:44 +0200 Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb Message-ID: Hello, We are installing rabbitmq-server from https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/debian and erlang from your ppa repository *ppa:rabbitmq/rabbitmq-erlang*. We have erlang pinned as below Package: erlang* Pin: version 1:25.* Pin-Priority: 1000 Problem is that you removed erlang-base_25* and there is only erlang-base_26.0.1-1rmq1ppa1~ubuntu22.04.1_arm64.deb Please, is there any reason why you removed erlang-base_25* for other ubuntu versions ? Because I can see 25* version for ubuntu 18.04 only Please, can u help us and upload erlang 25* also for other ubuntu versions ? Thank you very much Log below : ()[root at builder /]# apt update;apt install rabbitmq-server Get:1 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB] Get:2 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope InRelease [5,463 B] Get:3 http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [228 B] Get:7 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy InRelease [18.1 kB] Get:8 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy InRelease [5,152 B] Get:4 http://ftp.cvut.cz/ubuntu jammy InRelease [270 kB] Get:9 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope/main amd64 Packages [126 kB] Get:5 http://ucho.ignum.cz/ubuntu jammy-updates InRelease [119 kB] Get:6 https://mirror.it4i.cz/ubuntu jammy-security InRelease [110 kB] Get:10 http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages [49.4 kB] Get:11 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [27.0 kB] Get:12 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy/main amd64 Packages [8,167 B] Get:13 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages [9,044 B] Get:14 http://ftp.cvut.cz/ubuntu jammy-updates/universe amd64 Packages [1,178 kB] Get:15 https://mirror.it4i.cz/ubuntu jammy-updates/main amd64 Packages [861 kB] Get:16 https://cz.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [17.5 MB] Get:18 http://archive.ubuntu.com/ubuntu jammy-security/main amd64 Packages [579 kB] Get:19 https://mirror.it4i.cz/ubuntu jammy-security/universe amd64 Packages [928 kB] Get:17 https://cz.archive.ubuntu.com/ubuntu jammy/main amd64 Packages [1,792 kB] Fetched 23.7 MB in 4s (6,499 kB/s) Reading package lists... Done Building dependency tree... Done Reading state information... Done All packages are up to date. Reading package lists... Done Building dependency tree... Done Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. 
The following information may help to resolve the situation: The following packages have unmet dependencies: rabbitmq-server : Depends: erlang-base (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or erlang-base-hipe (< 1:26.0) but it is not installable or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-crypto (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-eldap (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-inets (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-mnesia (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-os-mon (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-parsetools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-public-key (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-runtime-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-ssl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-syntax-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable Depends: erlang-xmerl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or esl-erlang (< 1:26.0) but it is not installable E: Unable to correct problems, you have held broken packages. 
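The apt-cache policy output below confirms the mismatch: the PPA now carries only Erlang 1:26.0.1, while the pinned rabbitmq-server 3.11.16 requires erlang-base < 1:26.0 and the pin above demands 1:25.*. One possible workaround, sketched here, is to take Erlang from the team's Cloudsmith repository instead of the PPA, since (as noted later in this thread) Cloudsmith retains the last patch release of each minor series. This is only a sketch: it assumes the rabbitmq-erlang Cloudsmith repository still publishes a setup.deb.sh bootstrap script and 1:25.* packages for jammy.

# Add the Cloudsmith rabbitmq-erlang repository (assumed URL/layout):
curl -1sLf 'https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang/setup.deb.sh' | sudo -E bash

# Keep the existing pin so apt prefers the 25.x series from the new source:
cat > /etc/apt/preferences.d/erlang <<'EOF'
Package: erlang*
Pin: version 1:25.*
Pin-Priority: 1000
EOF

apt-get update && apt-get install rabbitmq-server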
()[root at builder /]# apt-cache policy rabbitmq-server rabbitmq-server: Installed: (none) Candidate: 3.11.16-1 Version table: 3.12.0-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.18-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.17-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.16-1 1000 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.15-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.14-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.13-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.12-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.11-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.10-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.9-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.8-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.7-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.6-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.5-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.4-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.3-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.2-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.1-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.11.0-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.24-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.23-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.22-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.21-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.20-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.19-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.18-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.17-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.16-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.14-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.13-1 500 500
https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.12-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.11-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.10-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.9-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.8-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.7-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.6-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.5-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.4-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.2-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.1-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.10.0-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.9.29-1 500 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages 3.9.13-1ubuntu0.22.04.1 500 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main amd64 Packages 3.9.13-1 500 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64 Packages ()[root at builder /]# apt-cache policy erlang-base erlang-base: Installed: (none) Candidate: 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 Version table: 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 500 500 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy/main amd64 Packages 1:24.2.1+dfsg-1ubuntu0.1 500 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main amd64 Packages 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-security/main amd64 Packages 1:24.2.1+dfsg-1 500 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64 Packages Michal Arbet Openstack Engineer Ultimum Technologies a.s. Na Po???? 1047/26, 11000 Praha 1 Czech Republic +420 604 228 897 michal.arbet at ultimum.io *https://ultimum.io * LinkedIn | Twitter | Facebook -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Wed Jun 14 10:14:58 2023 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Wed, 14 Jun 2023 12:14:58 +0200 Subject: [designate] Proposal to deprecate the agent framework and agent based backends In-Reply-To: <5727cbd3-833a-965a-5f77-1fb704ec0d98@inovex.de> References: <16546577-07af-c24a-5ff6-c45eeeba9517@inovex.de> <5727cbd3-833a-965a-5f77-1fb704ec0d98@inovex.de> Message-ID: <9ef8353d-c7ec-1996-8a23-c1967d1a8f1f@inovex.de> Hello, On 04/05/2023 10:07, Christian Rohmann wrote: > Just an update to our commitment:? We started working on the > implementation. There now is a first iteration for support of Catalog Zones pushed: ? * https://review.opendev.org/c/openstack/designate/+/885594 PTAL and start commenting / reviewing so we can iron out any issues to have this ready for the Bobcat release. 
Regards Christian From senrique at redhat.com Wed Jun 14 10:19:51 2023 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 14 Jun 2023 11:19:51 +0100 Subject: Cinder Bug Report 2023-06-14 Message-ID: Hello Argonauts, Cinder Bug Meeting Etherpad *Low* - HPE 3par: Minor error occurs during retype of volume. - *Status*: Fix proposed to master. *Wishlist* - Prevent co-locating charms that may be conflicting with each other. - *Status*: I've removed cinder on the projects affected because the problem only affects Charms. Cheers, -- Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso -------------- next part -------------- An HTML attachment was scrubbed...
URL: From garcetto at gmail.com Wed Jun 14 10:28:58 2023 From: garcetto at gmail.com (garcetto) Date: Wed, 14 Jun 2023 12:28:58 +0200 Subject: [cinder-backup] incremental backup on nfs and ceph Message-ID: Good morning, I was trying cinder-backup to back up cinder volumes from an NFS backend and a Ceph RBD backend. Both my tests show that I can do incremental cinder-backup, but I don't get how, technically, cinder-backup can "understand" which blocks are new since the previous backup. Maybe on Ceph it leverages ceph-diff? But on an NFS backend, how does it know the changed blocks? Thank you -------------- next part -------------- An HTML attachment was scrubbed...
URL: From eblock at nde.ag Wed Jun 14 11:04:08 2023 From: eblock at nde.ag (Eugen Block) Date: Wed, 14 Jun 2023 11:04:08 +0000 Subject: [cinder-backup] incremental backup on nfs and ceph In-Reply-To: Message-ID: <20230614110408.Horde.WQqQZ1IeXEQ0O-5A47aER6s@webmail.nde.ag> Hi, forwarding Sofia's response from a recent thread [1]. Check out this explanation [2]. [1] https://lists.openstack.org/pipermail/openstack-discuss/2023-May/033818.html [2] https://web.archive.org/web/20160404120859/http://gorka.eguileor.com/inside-cinders-incremental-backup/?replytocom=2267 Zitat von garcetto : > Good morning, > I was trying cinder-backup to back up cinder volumes from an NFS backend > and a Ceph RBD backend. > Both my tests show that I can do incremental cinder-backup, but I don't > get how, technically, cinder-backup can "understand" which blocks are new > since the previous backup. > Maybe on Ceph it leverages ceph-diff? > But on an NFS backend, how does it know the changed blocks? > > Thank you
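In short, per the explanation linked above: the chunk-based backup drivers (used for NFS, Posix, Swift and similar backends) split a volume into fixed-size chunks and store a SHA-256 hash per chunk alongside the backup. An incremental backup re-reads the whole source volume, recomputes the hashes, and uploads only the chunks whose hash differs from the parent backup's, so changed blocks are detected by hash comparison rather than by any change tracking on the backend. The Ceph backup driver is special-cased: when both the volume and the backup live on RBD, it diffs against the snapshot left by the previous backup and transfers only the changed extents. A rough illustration of both ideas follows; the paths, chunk size, and snapshot names are made-up placeholders, not Cinder's actual code:

# Chunked backends (e.g. NFS): hash fixed-size chunks of the volume file
# and compare them with the hashes recorded at the previous backup.
split -b 32M --filter='sha256sum' /mnt/nfs/volume-1234.img > hashes.new
diff hashes.prev hashes.new    # differing lines = chunks to re-upload

# Ceph RBD: ask the cluster for the extents changed since the last
# backup snapshot (snapshot and image names are placeholders).
rbd diff --from-snap backup.snap.2023-06-14 volumes/volume-1234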
> -- Regards, Maksim Malchuk -------------- next part -------------- An HTML attachment was scrubbed... URL: From murilo at evocorp.com.br Wed Jun 14 13:36:29 2023 From: murilo at evocorp.com.br (Murilo Morais) Date: Wed, 14 Jun 2023 10:36:29 -0300 Subject: [OSA] Multiple Deployment Hosts In-Reply-To: References: Message-ID: Hello Maksim! I will try, thanks! Em qua., 14 de jun. de 2023 ?s 09:56, Maksim Malchuk < maksim.malchuk at gmail.com> escreveu: > Hi Murilo, > > Sure. But you should transfer the configuration and inventory to the > backup host yourself. > > On Wed, Jun 14, 2023 at 2:59?PM Murilo Morais > wrote: > >> Good morning everybody! >> >> Is it possible to use another Host as Deployment Host? For example, if a >> problem occurs in the "primary" Deployment Host, use a second backup to >> perform some kind of intervention/change. >> >> Thanks in advance! >> > > > -- > Regards, > Maksim Malchuk > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Wed Jun 14 14:31:11 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 14 Jun 2023 07:31:11 -0700 Subject: [OSA] Multiple Deployment Hosts In-Reply-To: References: Message-ID: Eventually, the only statefull thing on deploy host is a folder with configuration (/etc/openstack_deploy by default), which technically can be stored in git. The most troublesome part of storing it there are actually SSL certificates, because as of today there's no way to store them in an encrypted way like with ansible-vault or with hashi vault. We're planning to add such support in the next release though. But if it's fine with your policies to store certificates (and their private keys) in plain text in git, I would suggest doing that. As it's matter of couple of minutes then to spawn a new deploy host whenever needed. And moreover, you can create a CI/CD jobs/pipelines to ensure the state of deploy host or making it "ephemeral" at all. On Wed, Jun 14, 2023, 06:42 Murilo Morais wrote: > Hello Maksim! > > I will try, thanks! > > Em qua., 14 de jun. de 2023 ?s 09:56, Maksim Malchuk < > maksim.malchuk at gmail.com> escreveu: > >> Hi Murilo, >> >> Sure. But you should transfer the configuration and inventory to the >> backup host yourself. >> >> On Wed, Jun 14, 2023 at 2:59?PM Murilo Morais >> wrote: >> >>> Good morning everybody! >>> >>> Is it possible to use another Host as Deployment Host? For example, if >>> a problem occurs in the "primary" Deployment Host, use a second backup to >>> perform some kind of intervention/change. >>> >>> Thanks in advance! >>> >> >> >> -- >> Regards, >> Maksim Malchuk >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Wed Jun 14 14:50:57 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 14 Jun 2023 07:50:57 -0700 Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb In-Reply-To: References: Message-ID: Hi, Thanks for the reply. But is it possible to publish deb packages as GitHub releases also for https://github.com/rabbitmq/erlang-debian-package? As right now approach for Deb and rpm differs quite a lot, which makes it really tough to find similar way/source for both rhel and Debian. On Wed, Jun 14, 2023, 07:42 Arnaud Cogolu?gnes wrote: > The RabbitMQ Cloudsmith repositories do have bandwidth quotas. > > There have never been ARM 64 Erlang packages on our Cloudsmith > repositories. 
> > We (manually) build ARM 64 Erlang RPM packages and upload them on GitHub > [1]. > > Our PPA does provide ARM 64 Erlang packages, but again, you can install > only the latest version from there. This is not something we have control > over. Note we upload source packages to PPA and it builds them. The > infrastructure we use internally to build binary packages does not support > ARM 64. > > > [1] https://github.com/rabbitmq/erlang-rpm/releases/tag/v25.3 > > On Wed, Jun 14, 2023 at 4:22?PM Michal Arbet > wrote: > >> That's what I am talking about :(, do you think you can somehow provide >> arm64 packages again ? >> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Wed Jun 14 16:52:01 2023 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 14 Jun 2023 09:52:01 -0700 Subject: [scientific-sig] PTG session today, 11:00 PT Message-ID: Hi all - We have a Scientific SIG session today at the PTG, co-located at the Open Infra Summit. The session starts 11:00 at table 6 in the ground floor auditorium. Everyone is welcome. Cheers, Stig From jay at gr-oss.io Wed Jun 14 16:58:14 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 14 Jun 2023 09:58:14 -0700 Subject: [ironic] ARM Support in CI: Call for vendors / contributors / interested parties In-Reply-To: References: <6faf5514-2ac8-9e8b-c543-0f8125b4001b@rd.bbc.co.uk> <728b14da-c275-e59b-0345-ad246a4ced26@rd.bbc.co.uk> Message-ID: This is exciting! Are you all at the OpenStack Summit? If so I'd love to see you at the PTG. If not, let's figure out a path forward :) remotely. Thanks, Jay Faulkner Ironic PTL On Tue, Jun 13, 2023 at 4:05?PM Alvaro Soto wrote: > Just to bump this email and present you guys Jose Miguel, who it's in > interested in this as well. > > Cheers!!! > --- > Alvaro Soto. > > Note: My work hours may not be your work hours. Please do not feel the > need to respond during a time that is not convenient for you. > ---------------------------------------------------------- > Great people talk about ideas, > ordinary people talk about things, > small people talk... about other people. > > On Wed, Apr 5, 2023, 2:24 AM Riccardo Pittau wrote: > >> Hey Alvaro, >> >> We've discussed support for ubuntu arm64 image during the last weekly >> meeting on Monday and agreed to provide it. >> I plan to start working on that this week in the >> ironic-python-agent-builder repository. >> >> Ciao >> Riccardo >> >> On Tue, Apr 4, 2023 at 11:00?PM Alvaro Soto wrote: >> >>> I saw CentOS 8/9 and Debian images; any plans on working with Ubuntu? >>> >>> Cheers! >>> >>> On Tue, Apr 4, 2023 at 2:16?PM Jonathan Rosser < >>> jonathan.rosser at rd.bbc.co.uk> wrote: >>> >>>> Hi Jay, >>>> >>>> We did not need to make any changes to Ironic. >>>> >>>> At the time we first got things working I don't think there was a >>>> published ARM64 image, but it would have been of great benefit as it >>>> was >>>> another component to bootstrap and have uncertainty about if we had >>>> done >>>> it properly. >>>> >>>> I've uploaded the published experimental image to our environment and >>>> will have an opportunity to test that soon. >>>> >>>> Jon. >>>> >>>> On 31/03/2023 17:01, Jay Faulkner wrote: >>>> > Thanks for responding, Jonathan! >>>> > >>>> > Did you have to make any downstream changes to Ironic for this to >>>> > work? Are you using our published ARM64 image or using their own? 
>>>> > >>>> > Thanks, >>>> > Jay Faulkner >>>> > Ironic PTL >>>> > >>>> > >>>> > On Fri, Mar 31, 2023 at 7:56?AM Jonathan Rosser >>>> > wrote: >>>> > >>>> > I have Ironic working with Supermicro MegaDC / Ampere CPU in a >>>> > R12SPD-A >>>> > system board using the ipmi driver. >>>> > >>>> > Jon. >>>> > >>>> > On 29/03/2023 19:39, Jay Faulkner wrote: >>>> > > Hi stackers, >>>> > > >>>> > > Ironic has published an experimental Ironic Python Agent image >>>> for >>>> > > ARM64 >>>> > > >>>> > ( >>>> https://tarballs.opendev.org/openstack/ironic-python-agent-builder/dib/files/ >>>> ) >>>> > >>>> > > and discussed promoting this image to supported via CI testing. >>>> > > However, we have a problem: there are no Ironic developers with >>>> > easy >>>> > > access to ARM hardware at the moment, and no Ironic developers >>>> with >>>> > > free time to commit to improving our support of ARM hardware. >>>> > > >>>> > > So we're putting out a call for help: >>>> > > - If you're a hardware vendor and want your ARM hardware >>>> supported? >>>> > > Please come talk to the Ironic community about setting up >>>> > third-party-CI. >>>> > > - Are you an operator or contributor from a company invested in >>>> ARM >>>> > > bare metal? Please come join the Ironic community to help us >>>> build >>>> > > this support. >>>> > > >>>> > > Thanks, >>>> > > Jay Faulkner >>>> > > Ironic PTL >>>> > > >>>> > > >>>> > >>>> >>>> >>> >>> -- >>> >>> Alvaro Soto >>> >>> *Note: My work hours may not be your work hours. Please do not feel the >>> need to respond during a time that is not convenient for you.* >>> ---------------------------------------------------------- >>> Great people talk about ideas, >>> ordinary people talk about things, >>> small people talk... about other people. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From acogoluegnes at gmail.com Wed Jun 14 12:37:40 2023 From: acogoluegnes at gmail.com (=?UTF-8?Q?Arnaud_Cogolu=C3=A8gnes?=) Date: Wed, 14 Jun 2023 14:37:40 +0200 Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb In-Reply-To: References: Message-ID: According to Cloudsmith web UI [1], the package is still there. Maybe a network glitch? We (the RabbitMQ team) keep the last patch release of each Erlang minor release (25.3.x, 25.2.x, 25.1.x, etc). [1] https://cloudsmith.io/~rabbitmq/repos/rabbitmq-erlang/packages/?q=distribution%3Aubuntu+AND+distribution%3Ajammy+AND+version%3A1%3A25*+AND+name%3A%27%5Eerlang-base%24%27 On Wed, Jun 14, 2023 at 11:41?AM Michal Arbet wrote: > Hello, > > We are installing rabbitmq-server from > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/debian and > erlang from your ppa repository *ppa:rabbitmq/rabbitmq-erlang*. > > We have erlang pinned as below > > Package: erlang* > Pin: version 1:25.* > Pin-Priority: 1000 > > Problem is that you removed erlang-base_25* and there is > only erlang-base_26.0.1-1rmq1ppa1~ubuntu22.04.1_arm64.deb > > Please, is there any reason why you removed erlang-base_25* for other > ubuntu versions ? > Because I can see 25* version for ubuntu 18.04 only > > Please, can u help us and upload erlang 25* also for other ubuntu versions > ? 
> > Thank you very much > > Log below : > > > > > ()[root at builder /]# apt update;apt install rabbitmq-server > Get:1 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB] > Get:2 http://ubuntu-cloud.archive.canonical.com/ubuntu > jammy-updates/antelope InRelease [5,463 B] > > Get:3 http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [228 B] > > > Get:7 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu > jammy InRelease [18.1 kB] > > Get:8 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy InRelease [5,152 B] > Get:4 http://ftp.cvut.cz/ubuntu jammy InRelease [270 kB] > > Get:9 http://ubuntu-cloud.archive.canonical.com/ubuntu > jammy-updates/antelope/main amd64 Packages [126 kB] > > Get:5 http://ucho.ignum.cz/ubuntu jammy-updates InRelease [119 kB] > > Get:6 https://mirror.it4i.cz/ubuntu jammy-security InRelease [110 kB] > > Get:10 http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 > Packages [49.4 kB] > Get:11 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 > Packages [27.0 kB] > Get:12 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu > jammy/main amd64 Packages [8,167 B] > Get:13 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages [9,044 B] > Get:14 http://ftp.cvut.cz/ubuntu jammy-updates/universe amd64 Packages > [1,178 kB] > Get:15 https://mirror.it4i.cz/ubuntu jammy-updates/main amd64 Packages > [861 kB] > Get:16 https://cz.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages > [17.5 MB] > Get:18 http://archive.ubuntu.com/ubuntu jammy-security/main amd64 > Packages [579 kB] > Get:19 https://mirror.it4i.cz/ubuntu jammy-security/universe amd64 > Packages [928 kB] > Get:17 https://cz.archive.ubuntu.com/ubuntu jammy/main amd64 Packages > [1,792 kB] > Fetched 23.7 MB in 4s (6,499 kB/s) > Reading package lists... Done > Building dependency tree... Done > Reading state information... Done > All packages are up to date. > Reading package lists... Done > Building dependency tree... Done > Reading state information... Done > Some packages could not be installed. This may mean that you have > requested an impossible situation or if you are using the unstable > distribution that some required packages have not yet been created > or been moved out of Incoming. 
> The following information may help to resolve the situation: > > The following packages have unmet dependencies: > rabbitmq-server : Depends: erlang-base (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > erlang-base-hipe (< 1:26.0) but it is not > installable or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-crypto (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-eldap (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-inets (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-mnesia (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-os-mon (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-parsetools (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-public-key (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-runtime-tools (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-ssl (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-syntax-tools (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-tools (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > Depends: erlang-xmerl (< 1:26.0) but > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or > esl-erlang (< 1:26.0) but it is not installable > E: Unable to correct problems, you have held broken packages. 
> > > ()[root at builder /]# apt-cache policy rabbitmq-server > ^[[Arabbitmq-server: > Installed: (none) > Candidate: 3.11.16-1 > Version table: > 3.12.0-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.18-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.17-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.16-1 1000 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.15-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.14-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.13-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.12-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.11-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.10-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.9-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.8-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.7-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.6-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.5-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.4-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.3-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.2-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.1-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.11.0-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.24-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.23-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.22-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.21-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.20-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.19-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.18-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.17-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.16-1 500 > 500 > 
https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.14-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.13-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.12-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.11-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.10-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.9-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.8-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.7-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.6-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.5-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.4-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.2-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.1-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.10.0-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.9.29-1 500 > 500 > https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu > jammy/main amd64 Packages > 3.9.13-1ubuntu0.22.04.1 500 > 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main > amd64 Packages > 3.9.13-1 500 > 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64 > Packages > > > ()[root at builder /]# apt-cache policy erlang-base > erlang-base: > Installed: (none) > Candidate: 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 > Version table: > 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 500 > 500 > https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu > jammy/main amd64 Packages > 1:24.2.1+dfsg-1ubuntu0.1 500 > 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main > amd64 Packages > 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-security/main > amd64 Packages > 1:24.2.1+dfsg-1 500 > 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64 > Packages > > > Michal Arbet > Openstack Engineer > > Ultimum Technologies a.s. > Na Po???? 1047/26, 11000 Praha 1 > Czech Republic > > +420 604 228 897 > michal.arbet at ultimum.io > *https://ultimum.io * > > LinkedIn | Twitter > | Facebook > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.arbet at ultimum.io Wed Jun 14 13:11:42 2023 From: michal.arbet at ultimum.io (Michal Arbet) Date: Wed, 14 Jun 2023 15:11:42 +0200 Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb In-Reply-To: References: Message-ID: We are using ppa:rabbitmq/rabbitmq-erlang because there were also arm64 packages while cloudsmith don't have arm64 Any advice ? Michal Arbet Openstack Engineer Ultimum Technologies a.s. Na Po???? 
1047/26, 11000 Praha 1 Czech Republic +420 604 228 897 michal.arbet at ultimum.io *https://ultimum.io * LinkedIn | Twitter | Facebook st 14. 6. 2023 v 14:38 odes?latel Arnaud Cogolu?gnes napsal: > According to Cloudsmith web UI [1], the package is still there. Maybe a > network glitch? > > We (the RabbitMQ team) keep the last patch release of each Erlang minor > release (25.3.x, 25.2.x, 25.1.x, etc). > > [1] > https://cloudsmith.io/~rabbitmq/repos/rabbitmq-erlang/packages/?q=distribution%3Aubuntu+AND+distribution%3Ajammy+AND+version%3A1%3A25*+AND+name%3A%27%5Eerlang-base%24%27 > > On Wed, Jun 14, 2023 at 11:41?AM Michal Arbet > wrote: > >> Hello, >> >> We are installing rabbitmq-server from >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/debian and >> erlang from your ppa repository *ppa:rabbitmq/rabbitmq-erlang*. >> >> We have erlang pinned as below >> >> Package: erlang* >> Pin: version 1:25.* >> Pin-Priority: 1000 >> >> Problem is that you removed erlang-base_25* and there is >> only erlang-base_26.0.1-1rmq1ppa1~ubuntu22.04.1_arm64.deb >> >> Please, is there any reason why you removed erlang-base_25* for other >> ubuntu versions ? >> Because I can see 25* version for ubuntu 18.04 only >> >> Please, can u help us and upload erlang 25* also for other ubuntu >> versions ? >> >> Thank you very much >> >> Log below : >> >> >> >> >> ()[root at builder /]# apt update;apt install rabbitmq-server >> Get:1 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB] >> Get:2 http://ubuntu-cloud.archive.canonical.com/ubuntu >> jammy-updates/antelope InRelease [5,463 B] >> >> Get:3 http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [228 B] >> >> >> Get:7 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu >> jammy InRelease [18.1 kB] >> >> Get:8 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy InRelease [5,152 B] >> Get:4 http://ftp.cvut.cz/ubuntu jammy InRelease [270 kB] >> >> Get:9 http://ubuntu-cloud.archive.canonical.com/ubuntu >> jammy-updates/antelope/main amd64 Packages [126 kB] >> >> Get:5 http://ucho.ignum.cz/ubuntu jammy-updates InRelease [119 kB] >> >> Get:6 https://mirror.it4i.cz/ubuntu jammy-security InRelease [110 kB] >> >> Get:10 http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 >> Packages [49.4 kB] >> Get:11 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 >> Packages [27.0 kB] >> Get:12 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu >> jammy/main amd64 Packages [8,167 B] >> Get:13 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages [9,044 B] >> Get:14 http://ftp.cvut.cz/ubuntu jammy-updates/universe amd64 Packages >> [1,178 kB] >> Get:15 https://mirror.it4i.cz/ubuntu jammy-updates/main amd64 Packages >> [861 kB] >> Get:16 https://cz.archive.ubuntu.com/ubuntu jammy/universe amd64 >> Packages [17.5 MB] >> Get:18 http://archive.ubuntu.com/ubuntu jammy-security/main amd64 >> Packages [579 kB] >> Get:19 https://mirror.it4i.cz/ubuntu jammy-security/universe amd64 >> Packages [928 kB] >> Get:17 https://cz.archive.ubuntu.com/ubuntu jammy/main amd64 Packages >> [1,792 kB] >> Fetched 23.7 MB in 4s (6,499 kB/s) >> Reading package lists... Done >> Building dependency tree... Done >> Reading state information... Done >> All packages are up to date. >> Reading package lists... Done >> Building dependency tree... Done >> Reading state information... Done >> Some packages could not be installed. 
This may mean that you have >> requested an impossible situation or if you are using the unstable >> distribution that some required packages have not yet been created >> or been moved out of Incoming. >> The following information may help to resolve the situation: >> >> The following packages have unmet dependencies: >> rabbitmq-server : Depends: erlang-base (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> erlang-base-hipe (< 1:26.0) but it is not >> installable or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-crypto (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-eldap (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-inets (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-mnesia (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-os-mon (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-parsetools (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-public-key (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-runtime-tools (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-ssl (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-syntax-tools (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-tools (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> Depends: erlang-xmerl (< 1:26.0) but >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not >> installable >> E: Unable to correct problems, you have held broken packages. 
>> >> >> ()[root at builder /]# apt-cache policy rabbitmq-server >> ^[[Arabbitmq-server: >> Installed: (none) >> Candidate: 3.11.16-1 >> Version table: >> 3.12.0-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.18-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.17-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.16-1 1000 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.15-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.14-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.13-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.12-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.11-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.10-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.9-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.8-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.7-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.6-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.5-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.4-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.3-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.2-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.1-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.11.0-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.24-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.23-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.22-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.21-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.20-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.19-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.18-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.17-1 500 >> 500 >> 
https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.16-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.14-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.13-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.12-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.11-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.10-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.9-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.8-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.7-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.6-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.5-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.4-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.2-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.1-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.10.0-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.9.29-1 500 >> 500 >> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >> jammy/main amd64 Packages >> 3.9.13-1ubuntu0.22.04.1 500 >> 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main >> amd64 Packages >> 3.9.13-1 500 >> 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64 >> Packages >> >> >> ()[root at builder /]# apt-cache policy erlang-base >> erlang-base: >> Installed: (none) >> Candidate: 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 >> Version table: >> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 500 >> 500 >> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu >> jammy/main amd64 Packages >> 1:24.2.1+dfsg-1ubuntu0.1 500 >> 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main >> amd64 Packages >> 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-security/main >> amd64 Packages >> 1:24.2.1+dfsg-1 500 >> 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64 >> Packages >> >> >> Michal Arbet >> Openstack Engineer >> >> Ultimum Technologies a.s. >> Na Po???? 1047/26, 11000 Praha 1 >> Czech Republic >> >> +420 604 228 897 >> michal.arbet at ultimum.io >> *https://ultimum.io * >> >> LinkedIn | >> Twitter | Facebook >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
From michal.arbet at ultimum.io  Wed Jun 14 13:12:42 2023
From: michal.arbet at ultimum.io (Michal Arbet)
Date: Wed, 14 Jun 2023 15:12:42 +0200
Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb
In-Reply-To:
References:
Message-ID:

We are using ppa:rabbitmq/rabbitmq-erlang because it also carried arm64
packages, while Cloudsmith doesn't have arm64.
Any advice?

Michal Arbet
Openstack Engineer

On Wed, Jun 14, 2023 at 3:11 PM Michal Arbet wrote:

> [same question sent a minute earlier; duplicate snipped]
>
> On Wed, Jun 14, 2023 at 2:38 PM Arnaud Cogoluègnes <acogoluegnes at gmail.com> wrote:
>
>> According to the Cloudsmith web UI [1], the package is still there. Maybe
>> a network glitch?
>>
>> We (the RabbitMQ team) keep the last patch release of each Erlang minor
>> release (25.3.x, 25.2.x, 25.1.x, etc.).
>>
>> [1] https://cloudsmith.io/~rabbitmq/repos/rabbitmq-erlang/packages/?q=distribution%3Aubuntu+AND+distribution%3Ajammy+AND+version%3A1%3A25*+AND+name%3A%27%5Eerlang-base%24%27
>>
>> On Wed, Jun 14, 2023 at 11:41 AM Michal Arbet wrote:
>>
>>> Hello,
>>>
>>> We are installing rabbitmq-server from
>>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/debian
>>> and Erlang from your PPA repository ppa:rabbitmq/rabbitmq-erlang.
>>>
>>> We have erlang pinned as below:
>>>
>>> Package: erlang*
>>> Pin: version 1:25.*
>>> Pin-Priority: 1000
>>>
>>> The problem is that you removed erlang-base_25* and there is now
>>> only erlang-base_26.0.1-1rmq1ppa1~ubuntu22.04.1_arm64.deb.
>>>
>>> Please, is there any reason why you removed erlang-base_25* for the
>>> other Ubuntu versions? I can see a 25* version for Ubuntu 18.04 only.
>>>
>>> Please, can you help us and upload Erlang 25* for the other Ubuntu
>>> versions as well?
>>>
>>> Thank you very much
>>>
>>> Log below:
>>>
>>> ()[root at builder /]# apt update;apt install rabbitmq-server
>>> Get:1 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB]
>>> Get:2 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope InRelease [5,463 B]
>>> Get:3 http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [228 B]
>>> Get:7 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy InRelease [18.1 kB]
>>> Get:8 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy InRelease [5,152 B]
>>> Get:4 http://ftp.cvut.cz/ubuntu jammy InRelease [270 kB]
>>> Get:9 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope/main amd64 Packages [126 kB]
>>> Get:5 http://ucho.ignum.cz/ubuntu jammy-updates InRelease [119 kB]
>>> Get:6 https://mirror.it4i.cz/ubuntu jammy-security InRelease [110 kB]
>>> Get:10 http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages [49.4 kB]
>>> Get:11 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [27.0 kB]
>>> Get:12 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy/main amd64 Packages [8,167 B]
>>> Get:13 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages [9,044 B]
>>> Get:14 http://ftp.cvut.cz/ubuntu jammy-updates/universe amd64 Packages [1,178 kB]
>>> Get:15 https://mirror.it4i.cz/ubuntu jammy-updates/main amd64 Packages [861 kB]
>>> Get:16 https://cz.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [17.5 MB]
>>> Get:18 http://archive.ubuntu.com/ubuntu jammy-security/main amd64 Packages [579 kB]
>>> Get:19 https://mirror.it4i.cz/ubuntu jammy-security/universe amd64 Packages [928 kB]
>>> Get:17 https://cz.archive.ubuntu.com/ubuntu jammy/main amd64 Packages [1,792 kB]
>>> Fetched 23.7 MB in 4s (6,499 kB/s)
>>> Reading package lists... Done
>>> Building dependency tree... Done
>>> Reading state information... Done
>>> All packages are up to date.
>>> Reading package lists... Done
>>> Building dependency tree... Done
>>> Reading state information... Done
>>> Some packages could not be installed. This may mean that you have
>>> requested an impossible situation or if you are using the unstable
>>> distribution that some required packages have not yet been created
>>> or been moved out of Incoming.
>>> The following information may help to resolve the situation:
>>>
>>> The following packages have unmet dependencies:
>>>  rabbitmq-server : Depends: erlang-base (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
>>>                             erlang-base-hipe (< 1:26.0) but it is not installable or
>>>                             esl-erlang (< 1:26.0) but it is not installable
>>>                    [identical "Depends: erlang-<crypto|eldap|inets|mnesia|os-mon|parsetools|public-key|runtime-tools|ssl|syntax-tools|tools|xmerl> (< 1:26.0) ..." lines snipped]
>>> E: Unable to correct problems, you have held broken packages.
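>>>
>>> For reference, this is roughly how we inspect what the pin can still
>>> resolve to (a sketch; erlang-base stands in for the rest of the
>>> erlang-* packages):
>>>
>>> # which candidate the 1:25.* pin currently selects, if any
>>> apt-cache policy erlang-base
>>> # every version each configured repo still advertises
>>> apt-cache madison erlang-base
>>> # the Erlang ceiling declared by each available rabbitmq-server build
>>> apt-cache show rabbitmq-server | grep -E '^(Version|Depends)'
>>>
>>> The apt-cache policy output for both packages is below: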
>>>
>>> ()[root at builder /]# apt-cache policy rabbitmq-server
>>> rabbitmq-server:
>>>   Installed: (none)
>>>   Candidate: 3.11.16-1
>>>   Version table:
>>>      3.12.0-1 500
>>>         500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages
>>>      3.11.18-1 500
>>>         500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages
>>>      3.11.17-1 500
>>>         500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages
>>>      3.11.16-1 1000
>>>         500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages
>>>      [3.11.15-1 through 3.9.29-1, all at priority 500 from the same Cloudsmith repo, snipped]
>>>      3.9.13-1ubuntu0.22.04.1 500
>>>         500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main amd64 Packages
>>>      3.9.13-1 500
>>>         500 mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64 Packages
>>>
>>> ()[root at builder /]# apt-cache policy erlang-base
>>> erlang-base:
>>>   Installed: (none)
>>>   Candidate: 1:26.0.1-1rmq1ppa1~ubuntu22.04.1
>>>   Version table:
>>>      1:26.0.1-1rmq1ppa1~ubuntu22.04.1 500
>>>         500 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy/main amd64 Packages
>>>      1:24.2.1+dfsg-1ubuntu0.1 500
>>>         500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main amd64 Packages
>>>         500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-security/main amd64 Packages
>>>      1:24.2.1+dfsg-1 500
>>>         500 mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64 Packages
>>>
>>> Michal Arbet
>>> Openstack Engineer
>>>
>>> Ultimum Technologies a.s.
>>> Na Poříčí 1047/26, 11000 Praha 1
>>> Czech Republic
>>>
>>> +420 604 228 897
>>> michal.arbet at ultimum.io
>>> https://ultimum.io
From michal.arbet at ultimum.io  Wed Jun 14 13:14:07 2023
From: michal.arbet at ultimum.io (Michal Arbet)
Date: Wed, 14 Jun 2023 15:14:07 +0200
Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb
In-Reply-To:
References:
Message-ID:

I mean this one:
https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu/pool/main/e/erlang/
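A quick way to see which erlang-base versions that pool still publishes
(a sketch; it just greps the directory-listing HTML, so adjust the
pattern if the page layout differs):

curl -s https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu/pool/main/e/erlang/ | grep -o 'erlang-base_[^"<]*\.deb' | sort -u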
On Wed, Jun 14, 2023 at 3:12 PM Michal Arbet wrote:

> [the earlier messages and the full apt log are quoted above; duplicate
> quote snipped]
From acogoluegnes at gmail.com  Wed Jun 14 13:20:59 2023
From: acogoluegnes at gmail.com (Arnaud Cogoluègnes)
Date: Wed, 14 Jun 2023 15:20:59 +0200
Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb
In-Reply-To:
References:
Message-ID:

The logs above mention both Cloudsmith and the PPA; use one or the other,
not both.

The PPA makes only the latest built version available, in this case 26.0.
This is something we don't control with a PPA.
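For example, a single-source setup could look roughly like this (a sketch,
not tested here; take the exact suite/component lines and the signing-key
setup from the Cloudsmith quick-start for your distribution):

# /etc/apt/sources.list.d/rabbitmq.list
# take both the server and Erlang from Cloudsmith, and drop the
# ppa:rabbitmq/rabbitmq-erlang entry
deb https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy main
deb https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang/deb/ubuntu jammy main

# /etc/apt/preferences.d/erlang
# pin to a patch line the repo actually retains, not to all of 1:25.*
Package: erlang*
Pin: version 1:25.3.*
Pin-Priority: 1000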
On Wed, Jun 14, 2023 at 3:14 PM Michal Arbet wrote:

> I mean this one:
> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu/pool/main/e/erlang/
>
> [rest of the quoted thread and the full apt log, quoted above, snipped]
From michal.arbet at ultimum.io  Wed Jun 14 13:28:37 2023
From: michal.arbet at ultimum.io (Michal Arbet)
Date: Wed, 14 Jun 2023 15:28:37 +0200
Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb
In-Reply-To:
References:
Message-ID:

Ah, OK. Do you also provide arm64 packages, please?
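(A sketch of how to probe that directly; a standard Debian repo layout is
assumed, so a 200 vs. 404 on the arm64 package index is the signal:)

curl -sI https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang/deb/ubuntu/dists/jammy/main/binary-arm64/Packages.gz | head -n 1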
On Wed, Jun 14, 2023 at 3:21 PM Arnaud Cogoluègnes <acogoluegnes at gmail.com> wrote:

> The logs above mention both Cloudsmith and the PPA; use one or the other,
> not both.
>
> The PPA makes only the latest built version available, in this case 26.0.
> This is something we don't control with a PPA.
>
> [rest of the quoted thread and the full apt log snipped]
From noonedeadpunk at gmail.com  Wed Jun 14 13:34:58 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Wed, 14 Jun 2023 06:34:58 -0700
Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb
In-Reply-To:
References:
Message-ID:

Well, if you're using the rabbitmq-server packages from Cloudsmith, it would
make total sense to use the Cloudsmith repo for Erlang as well.

They also have new community repositories (ppa1.novemberain.com) that keep
old versions of both RabbitMQ and Erlang without aggressively rotating them.
But using https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang is still
an option.

On Wed, Jun 14, 2023, 02:46 Michal Arbet wrote:

> Hello,
>
> We are installing rabbitmq-server from
> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/debian and
> erlang from your PPA repository *ppa:rabbitmq/rabbitmq-erlang*.
>
> We have erlang pinned as below:
>
> Package: erlang*
> Pin: version 1:25.*
> Pin-Priority: 1000
>
> The problem is that you removed erlang-base_25* and there is only
> erlang-base_26.0.1-1rmq1ppa1~ubuntu22.04.1_arm64.deb left.
>
> Is there any reason why you removed erlang-base_25* for the other Ubuntu
> versions? I can only see a 25* version for Ubuntu 18.04.
>
> Please, can you upload Erlang 25* for the other Ubuntu versions as well?
>
> Thank you very much
>
> Log below:
>
> ()[root@builder /]# apt update;apt install rabbitmq-server
> Get:1 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB]
> Get:2 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope InRelease [5,463 B]
> Get:3 http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [228 B]
> Get:7 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy InRelease [18.1 kB]
> Get:8 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy InRelease [5,152 B]
> Get:4 http://ftp.cvut.cz/ubuntu jammy InRelease [270 kB]
> Get:9 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope/main amd64 Packages [126 kB]
> Get:5 http://ucho.ignum.cz/ubuntu jammy-updates InRelease [119 kB]
> Get:6 https://mirror.it4i.cz/ubuntu jammy-security InRelease [110 kB]
> Get:10 http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages [49.4 kB]
> Get:11 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [27.0 kB]
> Get:12 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy/main amd64 Packages [8,167 B]
> Get:13 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages [9,044 B]
> Get:14 http://ftp.cvut.cz/ubuntu jammy-updates/universe amd64 Packages [1,178 kB]
> Get:15 https://mirror.it4i.cz/ubuntu jammy-updates/main amd64 Packages [861 kB]
> Get:16 https://cz.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [17.5 MB]
> Get:18 http://archive.ubuntu.com/ubuntu jammy-security/main amd64 Packages [579 kB]
> Get:19 https://mirror.it4i.cz/ubuntu jammy-security/universe amd64 Packages [928 kB]
> Get:17 https://cz.archive.ubuntu.com/ubuntu jammy/main amd64 Packages [1,792 kB]
> Fetched 23.7 MB in 4s (6,499 kB/s)
> Reading package lists... Done
> Building dependency tree... Done
> Reading state information... Done
> All packages are up to date.
> Reading package lists... Done
> Building dependency tree... Done
> Reading state information... Done
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming.
> The following information may help to resolve the situation:
>
> The following packages have unmet dependencies:
>  rabbitmq-server : Depends: erlang-base (< 1:26.0) but
>                    1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
>                    erlang-base-hipe (< 1:26.0) but it is not installable or
>                    esl-erlang (< 1:26.0) but it is not installable
>                    Depends: erlang-crypto (< 1:26.0) but
>                    1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
>                    esl-erlang (< 1:26.0) but it is not installable
>                    [the same "Depends: ... (< 1:26.0)" clause repeats for
>                    erlang-eldap, erlang-inets, erlang-mnesia, erlang-os-mon,
>                    erlang-parsetools, erlang-public-key, erlang-runtime-tools,
>                    erlang-ssl, erlang-syntax-tools, erlang-tools and
>                    erlang-xmerl]
> E: Unable to correct problems, you have held broken packages.
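The failure above is the pin doing its job: apt holds Erlang at 25.x, but the PPA now only carries 26.x, so rabbitmq-server's "erlang-base (< 1:26.0)" bound can no longer be satisfied from there. A minimal sketch of the workaround the thread converges on is to pull Erlang from the Cloudsmith Erlang repository (which still publishes 25.x) while keeping the pin; the deb path below is assumed by analogy with the rabbitmq-server repository above, and signing-key setup is omitted:

  # /etc/apt/sources.list.d/rabbitmq-erlang.list
  # (path assumed by analogy with the rabbitmq-server repo; verify before use)
  deb https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang/deb/ubuntu jammy main

  # /etc/apt/preferences.d/erlang -- the same pin as in the message above,
  # kept so apt selects a 25.x Erlang inside rabbitmq-server's "< 1:26.0" bound
  Package: erlang*
  Pin: version 1:25.*
  Pin-Priority: 1000

  apt update && apt-cache policy erlang-base   # should now offer a 1:25.* candidate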
>
> ()[root@builder /]# apt-cache policy rabbitmq-server
> rabbitmq-server:
>   Installed: (none)
>   Candidate: 3.11.16-1
>   Version table (every entry except the last two is served from
>   https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu
>   jammy/main amd64, at priority 500 unless noted):
>      3.12.0-1, 3.11.18-1, 3.11.17-1
>      3.11.16-1 (priority 1000 -- the pinned candidate)
>      3.11.15-1 down to 3.11.0-1
>      3.10.24-1 down to 3.10.16-1, 3.10.14-1 down to 3.10.4-1,
>      3.10.2-1, 3.10.1-1, 3.10.0-1, 3.9.29-1
>      3.9.13-1ubuntu0.22.04.1 500 (mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main amd64)
>      3.9.13-1 500 (mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64)
>
> ()[root@builder /]# apt-cache policy erlang-base
> erlang-base:
>   Installed: (none)
>   Candidate: 1:26.0.1-1rmq1ppa1~ubuntu22.04.1
>   Version table:
>      1:26.0.1-1rmq1ppa1~ubuntu22.04.1 500
>         500 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy/main amd64 Packages
>      1:24.2.1+dfsg-1ubuntu0.1 500
>         500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main amd64 Packages
>         500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-security/main amd64 Packages
>      1:24.2.1+dfsg-1 500
>         500 mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64 Packages
>
> Michal Arbet
> Openstack Engineer
>
> Ultimum Technologies a.s.
> Na Poříčí 1047/26, 11000 Praha 1
> Czech Republic
>
> +420 604 228 897
> michal.arbet at ultimum.io
> https://ultimum.io
>
> LinkedIn | Twitter | Facebook

From michal.arbet at ultimum.io  Wed Jun 14 13:45:49 2023
From: michal.arbet at ultimum.io (Michal Arbet)
Date: Wed, 14 Jun 2023 15:45:49 +0200
Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb
In-Reply-To:
References:
Message-ID:

Regarding the Cloudsmith repo, the official RabbitMQ docs say:

"The Cloudsmith repository has a monthly traffic quota that can be exhausted.
For this reason, examples below use a Cloudsmith repository mirror. All
packages in the mirror repository are signed using the same signing key."
https://www.rabbitmq.com/install-debian.html#apt-cloudsmith

They suggest using the ppa1.novemberain.com mirror as well... but it has no
arm64 support; see the failing arm64 test jobs for
https://review.opendev.org/c/openstack/kolla/+/885857:

Build failed (ARM64 pipeline).
https://zuul.opendev.org/t/openstack/buildset/8e91ab6f2d994e07a6b92b198f65f91c

kolla-build-centos9s-aarch64 https://zuul.opendev.org/t/openstack/build/3c75751b23a546cab078652640b92dd9 : FAILURE in 18m 05s (non-voting)
kolla-build-debian-aarch64 https://zuul.opendev.org/t/openstack/build/4a7f9340527844ebb93ac7d6d2bb3605 : FAILURE in 15m 33s
kolla-ansible-debian-aarch64 https://zuul.opendev.org/t/openstack/build/164d881ae46941a2a82ed7cc8ac6b1ad : FAILURE in 20m 12s
openstack-tox-py38-arm64 https://zuul.opendev.org/t/openstack/build/77b15f75e1ff4ab58d7628d10df14d92 : SUCCESS in 4m 17s (non-voting)
openstack-tox-py39-arm64 https://zuul.opendev.org/t/openstack/build/a8f5470d6d0c4dc1919f8c33141e5907 : SUCCESS in 2m 31s (non-voting)
openstack-tox-py310-arm64 https://zuul.opendev.org/t/openstack/build/69eeba53244b436eaacfa926ef277707 : SUCCESS in 4m 43s (non-voting)
kolla-build-rocky9-aarch64 https://zuul.opendev.org/t/openstack/build/ed02088b2fb34f689b64f9a5d30b42cf : FAILURE in 17m 25s (non-voting)
kolla-build-ubuntu-aarch64 https://zuul.opendev.org/t/openstack/build/e59c3f7e6510479d8262a55f9fabc448 : FAILURE in 14m 41s (non-voting)

Michal Arbet
Openstack Engineer

Ultimum Technologies a.s.
Na Poříčí 1047/26, 11000 Praha 1
Czech Republic

+420 604 228 897
michal.arbet at ultimum.io
https://ultimum.io

LinkedIn | Twitter | Facebook

On 14 Jun 2023 at 15:35, Dmitriy Rabotyagov <noonedeadpunk at gmail.com> wrote:

> [snip -- earlier messages quoted in full]
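For completeness, the mirror setup that doc section describes would look roughly like the following sketch; the suite/component layout is assumed from the standard RabbitMQ apt instructions and should be checked against the page linked above, and per the CI results these mirrors are amd64-only:

  # Erlang and rabbitmq-server from the ppa1.novemberain.com mirrors
  deb https://ppa1.novemberain.com/rabbitmq/rabbitmq-erlang/deb/ubuntu jammy main
  deb https://ppa1.novemberain.com/rabbitmq/rabbitmq-server/deb/ubuntu jammy main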
From noonedeadpunk at gmail.com  Wed Jun 14 14:08:28 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Wed, 14 Jun 2023 07:08:28 -0700
Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb
In-Reply-To:
References:
Message-ID:

I was pretty sure that dl.cloudsmith.io, the repo with the traffic quota,
had arm64-built Erlang at some point, but indeed, checking now, it's
obviously not there anymore :(

Yeah, Erlang on Ubuntu is indeed quite a pain... Back in the day we also
tried https://www.erlang-solutions.com/downloads/, but they're not building
for ARM anymore either, and the repo is constantly broken on top of that.

On Wed, Jun 14, 2023, 06:46 Michal Arbet wrote:

> [snip -- previous message quoted in full]
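A quick way to verify whether a Debian-style repository publishes a given architecture at all is to probe its Packages index directly. This sketch assumes the standard dists/ layout, so the exact path may need adjusting:

  # an HTTP 200 means arm64 packages are published for jammy; a 404 means they are not
  curl -fsI https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang/deb/ubuntu/dists/jammy/main/binary-arm64/Packages.gz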
From michal.arbet at ultimum.io  Wed Jun 14 14:22:27 2023
From: michal.arbet at ultimum.io (Michal Arbet)
Date: Wed, 14 Jun 2023 16:22:27 +0200
Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb
In-Reply-To:
References:
Message-ID:

That's what I am talking about :( Do you think you could somehow provide
arm64 packages again?

Michal Arbet
Openstack Engineer

Ultimum Technologies a.s.
Na Poříčí 1047/26, 11000 Praha 1
Czech Republic

+420 604 228 897
michal.arbet at ultimum.io
https://ultimum.io

LinkedIn | Twitter | Facebook

On 14 Jun 2023 at 16:09, Dmitriy Rabotyagov <noonedeadpunk at gmail.com> wrote:

> [snip -- previous messages quoted in full]
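Until the packages reappear, one way to insulate a deployment from this kind of PPA rotation is to mirror the exact .debs you depend on while they are still published. A sketch using standard apt tooling, with the version string taken from the subject line:

  # while the 25.x packages are still available, keep local copies
  apt-get download erlang-base=1:25.3.2.2-1rmq1ppa1~ubuntu22.04.1
  # ...repeat for the other erlang-* packages rabbitmq-server depends on,
  # then serve them from a local repository (or install them with dpkg -i)
  # and hold them so apt does not pull 26.x later:
  apt-mark hold erlang-base erlang-crypto erlang-ssl   # etc.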
1047/26, 11000 Praha 1
>>>> Czech Republic
>>>>
>>>> +420 604 228 897
>>>> michal.arbet at ultimum.io
>>>> https://ultimum.io
>>>>
>>>> LinkedIn | Twitter | Facebook
>>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From acogoluegnes at gmail.com Wed Jun 14 14:41:43 2023
From: acogoluegnes at gmail.com (=?UTF-8?Q?Arnaud_Cogolu=C3=A8gnes?=)
Date: Wed, 14 Jun 2023 16:41:43 +0200
Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb
In-Reply-To:
References:
Message-ID:

The RabbitMQ Cloudsmith repositories do have bandwidth quotas.

There have never been ARM 64 Erlang packages on our Cloudsmith
repositories. We (manually) build ARM 64 Erlang RPM packages and upload
them on GitHub [1].

Our PPA does provide ARM 64 Erlang packages, but again, you can install
only the latest version from there. This is not something we have control
over. Note we upload source packages to the PPA and it builds them. The
infrastructure we use internally to build binary packages does not support
ARM 64.

[1] https://github.com/rabbitmq/erlang-rpm/releases/tag/v25.3

On Wed, Jun 14, 2023 at 4:22 PM Michal Arbet wrote:

> That's what I am talking about :(, do you think you can somehow provide
> arm64 packages again ?
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From acogoluegnes at gmail.com Wed Jun 14 14:58:18 2023
From: acogoluegnes at gmail.com (=?UTF-8?Q?Arnaud_Cogolu=C3=A8gnes?=)
Date: Wed, 14 Jun 2023 16:58:18 +0200
Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb
In-Reply-To:
References:
Message-ID:

There's the Cloudsmith repository you can use as an APT and as a YUM
repository.

If you have requests, you can create an issue on the respective GitHub
repositories [1] [2].

[1] https://github.com/rabbitmq/erlang-debian-package
[2] https://github.com/rabbitmq/erlang-rpm

On Wed, Jun 14, 2023 at 4:51 PM Dmitriy Rabotyagov wrote:

> Hi,
>
> Thanks for the reply.
>
> But is it possible to publish deb packages as GitHub releases also for
> https://github.com/rabbitmq/erlang-debian-package? Right now the approach
> for deb and rpm differs quite a lot, which makes it really tough to find a
> similar way/source for both RHEL and Debian.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From neil at shrug.pw Wed Jun 14 18:30:42 2023
From: neil at shrug.pw (Neil Hanlon)
Date: Wed, 14 Jun 2023 11:30:42 -0700
Subject: [ironic] ARM Support in CI: Call for vendors / contributors / interested parties
In-Reply-To:
References: <6faf5514-2ac8-9e8b-c543-0f8125b4001b@rd.bbc.co.uk> <728b14da-c275-e59b-0345-ad246a4ced26@rd.bbc.co.uk>
Message-ID:

Jumping in as well to say I'm interested. I have an open patch for DIB to
allow cross-arch building, too, which might make this more attainable.

Additionally, RHEL - and by extension Rocky and CentOS - now has multiple
arm64 kernels available, with 4k and 64k page tables respectively. It would
be good to perform tests on both of these variants, I think.

On Wed, Jun 14, 2023, 10:10 Jay Faulkner wrote:

> This is exciting! Are you all at the OpenStack Summit? If so I'd love to
> see you at the PTG.

I'm here!

> If not, let's figure out a path forward :) remotely.
>
> Thanks,
> Jay Faulkner
> Ironic PTL
>
> On Tue, Jun 13, 2023 at 4:05 PM Alvaro Soto wrote:
>
>> Just to bump this email and present you guys Jose Miguel, who is also
>> interested in this.
>>
>> Cheers!!!
>> ---
>> Alvaro Soto.
>> >> Note: My work hours may not be your work hours. Please do not feel the >> need to respond during a time that is not convenient for you. >> ---------------------------------------------------------- >> Great people talk about ideas, >> ordinary people talk about things, >> small people talk... about other people. >> >> On Wed, Apr 5, 2023, 2:24 AM Riccardo Pittau wrote: >> >>> Hey Alvaro, >>> >>> We've discussed support for ubuntu arm64 image during the last weekly >>> meeting on Monday and agreed to provide it. >>> I plan to start working on that this week in the >>> ironic-python-agent-builder repository. >>> >>> Ciao >>> Riccardo >>> >>> On Tue, Apr 4, 2023 at 11:00?PM Alvaro Soto wrote: >>> >>>> I saw CentOS 8/9 and Debian images; any plans on working with Ubuntu? >>>> >>>> Cheers! >>>> >>>> On Tue, Apr 4, 2023 at 2:16?PM Jonathan Rosser < >>>> jonathan.rosser at rd.bbc.co.uk> wrote: >>>> >>>>> Hi Jay, >>>>> >>>>> We did not need to make any changes to Ironic. >>>>> >>>>> At the time we first got things working I don't think there was a >>>>> published ARM64 image, but it would have been of great benefit as it >>>>> was >>>>> another component to bootstrap and have uncertainty about if we had >>>>> done >>>>> it properly. >>>>> >>>>> I've uploaded the published experimental image to our environment and >>>>> will have an opportunity to test that soon. >>>>> >>>>> Jon. >>>>> >>>>> On 31/03/2023 17:01, Jay Faulkner wrote: >>>>> > Thanks for responding, Jonathan! >>>>> > >>>>> > Did you have to make any downstream changes to Ironic for this to >>>>> > work? Are you using our published ARM64 image or using their own? >>>>> > >>>>> > Thanks, >>>>> > Jay Faulkner >>>>> > Ironic PTL >>>>> > >>>>> > >>>>> > On Fri, Mar 31, 2023 at 7:56?AM Jonathan Rosser >>>>> > wrote: >>>>> > >>>>> > I have Ironic working with Supermicro MegaDC / Ampere CPU in a >>>>> > R12SPD-A >>>>> > system board using the ipmi driver. >>>>> > >>>>> > Jon. >>>>> > >>>>> > On 29/03/2023 19:39, Jay Faulkner wrote: >>>>> > > Hi stackers, >>>>> > > >>>>> > > Ironic has published an experimental Ironic Python Agent image >>>>> for >>>>> > > ARM64 >>>>> > > >>>>> > ( >>>>> https://tarballs.opendev.org/openstack/ironic-python-agent-builder/dib/files/ >>>>> ) >>>>> > >>>>> > > and discussed promoting this image to supported via CI testing. >>>>> > > However, we have a problem: there are no Ironic developers with >>>>> > easy >>>>> > > access to ARM hardware at the moment, and no Ironic developers >>>>> with >>>>> > > free time to commit to improving our support of ARM hardware. >>>>> > > >>>>> > > So we're putting out a call for help: >>>>> > > - If you're a hardware vendor and want your ARM hardware >>>>> supported? >>>>> > > Please come talk to the Ironic community about setting up >>>>> > third-party-CI. >>>>> > > - Are you an operator or contributor from a company invested >>>>> in ARM >>>>> > > bare metal? Please come join the Ironic community to help us >>>>> build >>>>> > > this support. >>>>> > > >>>>> > > Thanks, >>>>> > > Jay Faulkner >>>>> > > Ironic PTL >>>>> > > >>>>> > > >>>>> > >>>>> >>>>> >>>> >>>> -- >>>> >>>> Alvaro Soto >>>> >>>> *Note: My work hours may not be your work hours. Please do not feel the >>>> need to respond during a time that is not convenient for you.* >>>> ---------------------------------------------------------- >>>> Great people talk about ideas, >>>> ordinary people talk about things, >>>> small people talk... about other people. 
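
For anyone who wants to experiment before official CI exists, an arm64 IPA
ramdisk can be built from the ironic-python-agent-builder DIB elements. A
minimal sketch, not an official recipe - it assumes the repository is
cloned locally and a native arm64 build host (or a diskimage-builder with
cross-arch support such as Neil's patch, plus qemu-user-static/binfmt on
the builder):

# Sketch: build an arm64 IPA ramdisk with diskimage-builder.
# Assumes ironic-python-agent-builder is cloned in the current directory.
export ELEMENTS_PATH=./ironic-python-agent-builder/dib
export DIB_RELEASE=9-stream          # CentOS Stream 9, 4k-page kernel
disk-image-create -a arm64 -o ipa-centos9-arm64 \
    centos ironic-python-agent-ramdisk
# Produces ipa-centos9-arm64.kernel and ipa-centos9-arm64.initramfs

Testing the 64k-page kernel variant Neil mentions would additionally need
that kernel package selected inside the image, which is distro-specific.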
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralonsoh at redhat.com Wed Jun 14 18:54:56 2023
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Wed, 14 Jun 2023 11:54:56 -0700
Subject: [neutron][ptg] PTG sessions
Message-ID:

Hello Neutrinos:

Thank you for attending our PTG sessions in the ballroom yesterday and
today. This mail is just a heads-up to remind you that we'll be in the
ballroom tomorrow at 9:00 AM too. You're welcome to join!

If you have any questions and you think it will be helpful to first
describe them "on paper", please use the Neutron etherpad:
https://etherpad.opendev.org/p/vancouver-june2023-neutron.

See you!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rlandy at redhat.com Wed Jun 14 19:13:12 2023
From: rlandy at redhat.com (Ronelle Landy)
Date: Wed, 14 Jun 2023 15:13:12 -0400
Subject: [tripleo] Gate blocker - CentOS 9 jobs
Message-ID:

Hello All,

There is currently a failure on CentOS 9 jobs that is impacting check/gate
and periodic jobs - causing Tempest jobs to fail.

The details of the issue are in:
https://bugs.launchpad.net/tripleo/+bug/2023764.

A workaround is being tested. We will update this thread when the gate is
unblocked.

Thank you,
CI Team
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From haiwu.us at gmail.com Wed Jun 14 20:47:15 2023
From: haiwu.us at gmail.com (hai wu)
Date: Wed, 14 Jun 2023 15:47:15 -0500
Subject: [nova] [keystone] Default logrotate configuration for apache2
Message-ID:

It seems the default logrotate configuration for apache2 log files from
a vanilla OS installation is to do the following daily:

postrotate
        if invoke-rc.d apache2 status > /dev/null 2>&1; then \
            invoke-rc.d apache2 reload > /dev/null 2>&1; \
        fi;
endscript

Sometimes I noticed that not all log entries would show up for the same
day after apache2 got reloaded. It also seems Red Hat OpenStack switched
its logrotate config to use copytruncate instead of reloading apache2,
IIRC.

Are there any known issues with reloading apache2 daily from the
logrotate config? Sometimes there are keystone 503 errors, and I am
wondering if those are related to the default logrotate config reloading
apache2 daily.
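
For comparison, the copytruncate variant hai wu refers to drops the reload
entirely. A minimal sketch of /etc/logrotate.d/apache2 (paths and retention
here are assumptions for illustration, not the shipped defaults of any
distribution):

/var/log/apache2/*.log {
        daily
        missingok
        rotate 14
        compress
        delaycompress
        notifempty
        # Copy the log, then truncate it in place. Apache keeps writing
        # to the same open file descriptor, so no reload is needed; the
        # trade-off is that a few lines written between the copy and the
        # truncate can be lost.
        copytruncate
}

That trade-off (possible loss of a few log lines versus never reloading
the web server) is usually why distributions pick one approach or the
other.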
From mnasiadka at gmail.com Wed Jun 14 22:22:00 2023
From: mnasiadka at gmail.com (=?utf-8?Q?Micha=C5=82_Nasiadka?=)
Date: Wed, 14 Jun 2023 15:22:00 -0700
Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb
In-Reply-To:
References:
Message-ID:

Hi Arnaud,

So basically the thing that is missing is a place where Erlang 25.x is
available for Ubuntu/Debian arm64.
We maintain releases that we can't repackage to RabbitMQ 3.12 easily, or
most probably not at all - because it would most probably break existing
user deployments of OpenStack Kolla.

Is that something the RabbitMQ team could provide - maybe in a separate
PPA repo from the existing one [1], if it's problematic to have it in the
same PPA?

[1]: https://launchpad.net/~rabbitmq/+archive/ubuntu/rabbitmq-erlang

> On 14 Jun 2023, at 05:37, Arnaud Cogoluègnes wrote:
>
> According to Cloudsmith web UI [1], the package is still there. Maybe a
> network glitch?
>
> We (the RabbitMQ team) keep the last patch release of each Erlang minor
> release (25.3.x, 25.2.x, 25.1.x, etc).
>
> [1] https://cloudsmith.io/~rabbitmq/repos/rabbitmq-erlang/packages/?q=distribution%3Aubuntu+AND+distribution%3Ajammy+AND+version%3A1%3A25*+AND+name%3A%27%5Eerlang-base%24%27
>
> On Wed, Jun 14, 2023 at 11:41 AM Michal Arbet wrote:
>> Hello,
>>
>> We are installing rabbitmq-server from
>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/debian and
>> erlang from your PPA repository ppa:rabbitmq/rabbitmq-erlang.
>>
>> We have erlang pinned as below:
>>
>> Package: erlang*
>> Pin: version 1:25.*
>> Pin-Priority: 1000
>>
>> The problem is that you removed erlang-base_25* and there is only
>> erlang-base_26.0.1-1rmq1ppa1~ubuntu22.04.1_arm64.deb
>>
>> Please, is there any reason why you removed erlang-base_25* for other
>> Ubuntu versions? Because I can see a 25* version for Ubuntu 18.04 only.
>>
>> Please, can you help us and upload Erlang 25* also for other Ubuntu
>> versions?
>>
>> Thank you very much
>>
>> Log below:
>>
>> ()[root at builder /]# apt update;apt install rabbitmq-server
>> Get:1 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB]
>> Get:2 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope InRelease [5,463 B]
>> Get:3 http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [228 B]
>> [...]
>> Fetched 23.7 MB in 4s (6,499 kB/s)
>> Reading package lists... Done
>> Building dependency tree... Done
>> Reading state information... Done
>> All packages are up to date.
>> Reading package lists... Done
>> Building dependency tree... Done
>> Reading state information... Done
>> Some packages could not be installed. This may mean that you have
>> requested an impossible situation or if you are using the unstable
>> distribution that some required packages have not yet been created
>> or been moved out of Incoming.
>> The following information may help to resolve the situation: >> >> The following packages have unmet dependencies: >> rabbitmq-server : Depends: erlang-base (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> erlang-base-hipe (< 1:26.0) but it is not installable or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-crypto (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-eldap (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-inets (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-mnesia (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-os-mon (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-parsetools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-public-key (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-runtime-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-ssl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-syntax-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> Depends: erlang-xmerl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >> esl-erlang (< 1:26.0) but it is not installable >> E: Unable to correct problems, you have held broken packages. 
>>
>> ()[root at builder /]# apt-cache policy rabbitmq-server
>> rabbitmq-server:
>> Installed: (none)
>> Candidate: 3.11.16-1
>> Version table:
>> 3.12.0-1 500
>> 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages
>> 3.11.16-1 1000
>> 500 https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu jammy/main amd64 Packages
>> [...]
>> 3.9.13-1ubuntu0.22.04.1 500
>> 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main amd64 Packages
>> 3.9.13-1 500
>> 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64 Packages
>>
>> ()[root at builder /]# apt-cache policy erlang-base
>> erlang-base:
>> Installed: (none)
>> Candidate: 1:26.0.1-1rmq1ppa1~ubuntu22.04.1
>> Version table:
>> 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 500
>> 500 https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy/main amd64 Packages
>> 1:24.2.1+dfsg-1ubuntu0.1 500
>> 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates/main amd64 Packages
>> 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy-security/main amd64 Packages
>> 1:24.2.1+dfsg-1 500
>> 500 mirror://mirrors.ubuntu.com/mirrors.txt jammy/main amd64 Packages
>>
>> Michal Arbet
>> Openstack Engineer
>>
>> Ultimum Technologies a.s.
>> Na Poříčí 1047/26, 11000 Praha 1
>> Czech Republic
>>
>> +420 604 228 897
>> michal.arbet at ultimum.io
>> https://ultimum.io
>>
>> LinkedIn | Twitter | Facebook
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From arnaud.morin at gmail.com Thu Jun 15 09:33:32 2023
From: arnaud.morin at gmail.com (Arnaud Morin)
Date: Thu, 15 Jun 2023 09:33:32 +0000
Subject: [OPENSTACK][rabbitmq] using quorum queues
In-Reply-To:
References:
Message-ID:

Hey,

We are also using quorum in some regions and plan to enable quorum
everywhere.

Note that we also managed to enable quorum for transient queues (using a
custom patch, as it's not doable with current oslo.messaging; see my
request in [1]).
We also introduced some custom changes in py-amqp to correctly handle
RabbitMQ disconnections (see [2] and [3]).
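
For reference, quorum for regular durable queues needs no patch; it is a
stock oslo.messaging option. A minimal sketch of the service-side setting
(assuming a release that ships the option), e.g. in nova.conf:

[oslo_messaging_rabbit]
# Declare new durable queues as RabbitMQ quorum queues (x-queue-type:
# quorum). This only affects queues declared after the change; existing
# classic queues must be deleted (or the vhost reset) so that services
# re-create them with the new type.
rabbit_quorum_queue = true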
So far, the real improvement is achieved thanks to the combination of all
of these changes; enabling quorum queues alone was not enough for us to
notice any improvement.

The downside of quorum queues is that they consume more power on the
rabbit cluster: you need more IO, CPU, RAM and network bandwidth for the
same number of queues (see [4]).
It has to be taken into account.

Cheers,
Arnaud.

[1] https://lists.openstack.org/pipermail/openstack-discuss/2023-April/033343.html
[2] https://github.com/celery/py-amqp/pull/410
[3] https://github.com/celery/py-amqp/pull/405
[4] https://plik.ovh/file/nHCny7psDCTrEm76/Pq9ASO9wUd8HRk4C/s_1686817000.png

On 13.06.23 - 09:14, Sa Pham wrote:
> Dear Khối,
>
> Thanks for your reply.
>
> On Tue, Jun 13, 2023 at 9:05 AM Nguyễn Hữu Khối wrote:
>
> > Hello.
> > Firstly, when I used the classic queue, my rabbitmq cluster sometimes
> > broke, the compute services showed as down, and I needed to restart the
> > compute service to bring them back up. Secondly, when 1 of 3 controllers
> > is down, my system still works, although it is not as fast as with all
> > controllers. I have run it for about 3 months, compared with classic. My
> > OpenStack is Yoga and uses Kolla-Ansible as the deployment tool.
> > Nguyen Huu Khoi
> >
> > On Tue, Jun 13, 2023 at 8:43 AM Sa Pham wrote:
> >
> >> Hi Khối,
> >>
> >> Why do you say using the quorum queue is more stable than the classic
> >> queue ?
> >>
> >> Thanks,
> >>
> >> On Tue, Jun 13, 2023 at 7:26 AM Nguyễn Hữu Khối <
> >> nguyenhuukhoinw at gmail.com> wrote:
> >>
> >>> Hello Huettner,
> >>> I have used the quorum queue since March and it has been OK until now.
> >>> It looks more stable than the classic queue. Some feedback for you.
> >>> Thank you.
> >>> Nguyen Huu Khoi.
> >>>
> >>> On Mon, May 8, 2023 at 1:14 PM Felix Hüttner
> >>> wrote:
> >>>
> >>>> Hi Nguyen,
> >>>>
> >>>> we are using quorum queues for one of our deployments. So far we did
> >>>> not have any issue with them. They also seem to survive restarts without
> >>>> issues (however, reply queues are still broken afterwards in a small
> >>>> number of cases, but they are not quorum/mirrored queues anyway).
> >>>>
> >>>> So I would recommend them for everyone that creates a new cluster.
> >>>>
> >>>> --
> >>>>
> >>>> Felix Huettner
> >>>>
> >>>> *From:* Nguyễn Hữu Khối
> >>>> *Sent:* Saturday, May 6, 2023 4:29 AM
> >>>> *To:* OpenStack Discuss
> >>>> *Subject:* [OPENSTACK][rabbitmq] using quorum queues
> >>>>
> >>>> Hello guys.
> >>>>
> >>>> Is there anyone who uses the quorum queue for OpenStack? Could you
> >>>> give some feedback compared with the classic queue?
> >>>>
> >>>> Thank you.
> >>>>
> >>>> Nguyen Huu Khoi
> >>>>
> >>>> Diese E-Mail enthält möglicherweise vertrauliche Inhalte und ist nur
> >>>> für die Verwertung durch den vorgesehenen Empfänger bestimmt.
> >>>> Sollten Sie nicht der vorgesehene Empfänger sein, setzen Sie den
> >>>> Absender bitte unverzüglich in Kenntnis und löschen diese E-Mail.
> >>>>
> >>>> Hinweise zum Datenschutz finden Sie hier.
> >>>>
> >>>> This e-mail may contain confidential content and is intended only for
> >>>> the specified recipient/s.
> >>>> If you are not the intended recipient, please inform the sender
> >>>> immediately and delete this e-mail.
> >>>>
> >>>> Information on data protection can be found here.
> >> --
> >> Sa Pham Dang
> >> Skype: great_bn
> >> Phone/Telegram: 0986.849.582
>
> --
> Sa Pham Dang
> Skype: great_bn
> Phone/Telegram: 0986.849.582

From arnaud.morin at gmail.com Thu Jun 15 09:47:22 2023
From: arnaud.morin at gmail.com (Arnaud Morin)
Date: Thu, 15 Jun 2023 09:47:22 +0000
Subject: [nova] update_resources_interval parameter
Message-ID:

Hey team,

I'd like to understand the stakes behind the update_resources_interval
parameter ([1]).

We decided on our side to increase this value from 60sec to 600sec (see
[2]).

What I understand is that it will "delay" the update of metrics on the
nova side. I mostly think that these metrics are used by the filter
scheduler to select the best host when scheduling.

Is there anything else it can affect?

Cheers,
Arnaud.

[1] https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.update_resources_interval
[2] https://review.opendev.org/c/openstack/large-scale/+/886166
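
For readers wanting to try the same tuning, the option is set per compute
node in nova.conf; a sketch of the change described above:

[DEFAULT]
# Interval (in seconds) of the update_available_resource periodic task,
# which refreshes the resource tracker and the placement inventory of
# each nova-compute. 0 means the default periodic interval; values below
# zero disable the task. Raising it reduces database and placement API
# load at large scale, at the cost of scheduling data converging more
# slowly after out-of-band resource changes.
update_resources_interval = 600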
From christian.rohmann at inovex.de Thu Jun 15 13:15:22 2023
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Thu, 15 Jun 2023 15:15:22 +0200
Subject: [all][db] Lots of redundant DB indices ?
Message-ID: <8d2826de-d292-01d8-e457-a7404b4ade1a@inovex.de>

Hello openstack-discuss!

I recently saw lots of warnings like:

"/usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (1831,
'Duplicate index `uniq_instances0uuid`. This is deprecated and will be
disallowed in a future release')
result = self._query(query)"

This originated from MySQL error 1831 (see [1]).

I then raised a few bugs and also did some changes in regards to the
duplicate indices I noticed:

* Placement - https://storyboard.openstack.org/#!/story/2010251 / https://review.opendev.org/c/openstack/placement/+/856770
* Keystone - https://bugs.launchpad.net/keystone/+bug/1988297 / https://review.opendev.org/c/openstack/keystone/+/885463
* Neutron - https://bugs.launchpad.net/neutron/+bug/1988421 / https://review.opendev.org/c/openstack/neutron/+/885456
* Nova - https://review.opendev.org/c/openstack/nova/+/856757

But running Percona's pt-duplicate-key-checker ([2]) there are quite a
few more redundant indices reported:

> ALTER TABLE `cinder`.`driver_initiator_data` DROP INDEX `ix_driver_initiator_data_initiator`;
> ALTER TABLE `cinder`.`quota_usages` DROP INDEX `ix_quota_usages_project_id`;
> ALTER TABLE `cinder`.`quota_usages` DROP INDEX `quota_usage_project_resource_idx`;
> ALTER TABLE `designate`.`records` DROP FOREIGN KEY `records_ibfk_2`;
> ALTER TABLE `designate`.`recordsets` DROP INDEX `rrset_zoneid`;
> ALTER TABLE `designate`.`recordsets` DROP INDEX `rrset_type`;
> ALTER TABLE `glance`.`image_members` DROP INDEX `ix_image_members_image_id`;
> ALTER TABLE `glance`.`image_members` DROP INDEX `ix_image_members_image_id_member`;
> ALTER TABLE `glance`.`image_properties` DROP INDEX `ix_image_properties_image_id`;
> ALTER TABLE `glance`.`image_tags` DROP INDEX `ix_image_tags_image_id`;
> ALTER TABLE `keystone`.`access_rule` DROP INDEX `external_id`;
> ALTER TABLE `keystone`.`access_rule` DROP INDEX `user_id`;
> ALTER TABLE `keystone`.`project_tag` DROP INDEX `project_id`;
> ALTER TABLE `keystone`.`token` DROP INDEX `ix_token_expires`;
> ALTER TABLE `mysql`.`transaction_registry` DROP INDEX `commit_timestamp`, ADD INDEX `commit_timestamp` (`commit_timestamp`);
> ALTER TABLE `neutron`.`addressgrouprbacs` DROP INDEX `ix_addressgrouprbacs_target_project`;
> ALTER TABLE `neutron`.`addressscoperbacs` DROP INDEX `ix_addressscoperbacs_target_project`;
> ALTER TABLE `neutron`.`floatingipdnses` DROP INDEX `ix_floatingipdnses_floatingip_id`;
> ALTER TABLE `neutron`.`networkdnsdomains` DROP INDEX `ix_networkdnsdomains_network_id`;
> ALTER TABLE `neutron`.`networkrbacs` DROP INDEX `ix_networkrbacs_target_project`;
> ALTER TABLE `neutron`.`ovn_hash_ring` DROP INDEX `ix_ovn_hash_ring_node_uuid`;
> ALTER TABLE `neutron`.`ovn_revision_numbers` DROP INDEX `ix_ovn_revision_numbers_resource_uuid`;
> ALTER TABLE `neutron`.`portdataplanestatuses` DROP INDEX `ix_portdataplanestatuses_port_id`;
> ALTER TABLE `neutron`.`portdnses` DROP INDEX `ix_portdnses_port_id`;
> ALTER TABLE `neutron`.`ports` DROP INDEX `ix_ports_network_id_mac_address`;
> ALTER TABLE `neutron`.`portuplinkstatuspropagation` DROP INDEX `ix_portuplinkstatuspropagation_port_id`;
> ALTER TABLE `neutron`.`qos_minimum_bandwidth_rules` DROP INDEX `ix_qos_minimum_bandwidth_rules_qos_policy_id`;
> ALTER TABLE `neutron`.`qos_minimum_packet_rate_rules` DROP INDEX `ix_qos_minimum_packet_rate_rules_qos_policy_id`;
> ALTER TABLE `neutron`.`qos_packet_rate_limit_rules` DROP INDEX `ix_qos_packet_rate_limit_rules_qos_policy_id`;
> ALTER TABLE `neutron`.`qos_policies_default` DROP INDEX `ix_qos_policies_default_project_id`;
> ALTER TABLE `neutron`.`qospolicyrbacs` DROP INDEX `ix_qospolicyrbacs_target_project`;
> ALTER TABLE `neutron`.`quotas` DROP INDEX `ix_quotas_project_id`;
> ALTER TABLE `neutron`.`quotausages` DROP INDEX `ix_quotausages_project_id`;
> ALTER TABLE `neutron`.`securitygrouprbacs` DROP INDEX `ix_securitygrouprbacs_target_project`;
> ALTER TABLE `neutron`.`segmenthostmappings` DROP INDEX `ix_segmenthostmappings_segment_id`;
> ALTER TABLE `neutron`.`subnet_dns_publish_fixed_ips` DROP INDEX `ix_subnet_dns_publish_fixed_ips_subnet_id`;
> ALTER TABLE `neutron`.`subnetpoolrbacs` DROP INDEX `ix_subnetpoolrbacs_target_project`;
> ALTER TABLE `nova`.`agent_builds` DROP INDEX `agent_builds_hypervisor_os_arch_idx`;
> ALTER TABLE `nova`.`block_device_mapping` DROP INDEX `block_device_mapping_instance_uuid_idx`;
> ALTER TABLE `nova`.`console_auth_tokens` DROP INDEX `console_auth_tokens_token_hash_idx`;
> ALTER TABLE `nova`.`fixed_ips` DROP INDEX `address`;
> ALTER TABLE `nova`.`fixed_ips` DROP INDEX `network_id`;
> ALTER TABLE `nova`.`instance_actions` DROP INDEX `instance_uuid_idx`;
> ALTER TABLE `nova`.`instance_type_extra_specs` DROP INDEX `instance_type_extra_specs_instance_type_id_key_idx`;
> ALTER TABLE `nova`.`instance_type_projects` DROP INDEX `instance_type_id`;
> ALTER TABLE `nova`.`instances` DROP INDEX `uniq_instances0uuid`;
> ALTER TABLE `nova`.`instances` DROP INDEX `instances_project_id_idx`;
> ALTER TABLE `nova`.`inventories` DROP INDEX `inventories_resource_provider_id_idx`;
> ALTER TABLE `nova`.`inventories` DROP INDEX `inventories_resource_provider_resource_class_idx`;
> ALTER TABLE `nova`.`networks` DROP INDEX `networks_vlan_deleted_idx`;
> ALTER TABLE `nova`.`resource_providers` DROP INDEX `resource_providers_name_idx`;
> ALTER TABLE `nova`.`resource_providers` DROP INDEX `resource_providers_uuid_idx`;
> ALTER TABLE `nova_api`.`build_requests` DROP INDEX `build_requests_instance_uuid_idx`;
> ALTER TABLE `nova_api`.`cell_mappings` DROP INDEX `uuid_idx`;
> ALTER TABLE `nova_api`.`flavor_extra_specs` DROP INDEX `flavor_extra_specs_flavor_id_key_idx`;
> ALTER TABLE `nova_api`.`host_mappings` DROP INDEX `host_idx`;
> ALTER TABLE `nova_api`.`instance_mappings` DROP INDEX `instance_uuid_idx`;
> ALTER TABLE `nova_api`.`inventories` DROP INDEX `inventories_resource_provider_id_idx`;
> ALTER TABLE `nova_api`.`inventories` DROP INDEX `inventories_resource_provider_resource_class_idx`;
> ALTER TABLE `nova_api`.`placement_aggregates` DROP INDEX `ix_placement_aggregates_uuid`;
> ALTER TABLE `nova_api`.`project_user_quotas` DROP INDEX `project_user_quotas_user_id_idx`;
> ALTER TABLE `nova_api`.`request_specs` DROP INDEX `request_spec_instance_uuid_idx`;
> ALTER TABLE `nova_api`.`resource_providers` DROP INDEX `resource_providers_name_idx`;
> ALTER TABLE `nova_api`.`resource_providers` DROP INDEX `resource_providers_uuid_idx`;
> ALTER TABLE `nova_cell0`.`agent_builds` DROP INDEX `agent_builds_hypervisor_os_arch_idx`;
> ALTER TABLE `nova_cell0`.`block_device_mapping` DROP INDEX `block_device_mapping_instance_uuid_idx`;
> ALTER TABLE `nova_cell0`.`console_auth_tokens` DROP INDEX `console_auth_tokens_token_hash_idx`;
> ALTER TABLE `nova_cell0`.`fixed_ips` DROP INDEX `address`;
> ALTER TABLE `nova_cell0`.`fixed_ips` DROP INDEX `network_id`;
> ALTER TABLE `nova_cell0`.`instance_actions` DROP INDEX `instance_uuid_idx`;
> ALTER TABLE `nova_cell0`.`instance_type_extra_specs` DROP INDEX `instance_type_extra_specs_instance_type_id_key_idx`;
> ALTER TABLE `nova_cell0`.`instance_type_projects` DROP INDEX `instance_type_id`;
> ALTER TABLE `nova_cell0`.`instances` DROP INDEX `uniq_instances0uuid`;
> ALTER TABLE `nova_cell0`.`instances` DROP INDEX `instances_project_id_idx`;
> ALTER TABLE `nova_cell0`.`inventories` DROP INDEX `inventories_resource_provider_id_idx`;
> ALTER TABLE `nova_cell0`.`inventories` DROP INDEX `inventories_resource_provider_resource_class_idx`;
> ALTER TABLE `nova_cell0`.`networks` DROP INDEX `networks_vlan_deleted_idx`;
> ALTER TABLE `nova_cell0`.`resource_providers` DROP INDEX `resource_providers_name_idx`;
> ALTER TABLE `nova_cell0`.`resource_providers` DROP INDEX `resource_providers_uuid_idx`;
> ALTER TABLE `placement`.`inventories` DROP INDEX `inventories_resource_provider_id_idx`;
> ALTER TABLE `placement`.`inventories` DROP INDEX `inventories_resource_provider_resource_class_idx`;
> ALTER TABLE `placement`.`placement_aggregates` DROP INDEX `ix_placement_aggregates_uuid`;
> ALTER TABLE `placement`.`resource_providers` DROP INDEX `resource_providers_name_idx`;
> ALTER TABLE `placement`.`resource_providers` DROP INDEX `resource_providers_uuid_idx`;

# ########################################################################
# Summary of indexes
# ########################################################################

# Size Duplicate Indexes   35810
# Total Duplicate Indexes  84
# Total Indexes            1792

(When adding more services (Telemetry, ...) there are even more indices
reported.)

Does it make sense to address this more broadly maybe?
Especially with all the changes happening for SQLAlchemy 2.x currently?

Regards

Christian

[1] https://mariadb.com/kb/en/mariadb-error-codes/
[2] https://docs.percona.com/percona-toolkit/pt-duplicate-key-checker.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
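
As a rough stand-in for pt-duplicate-key-checker, exact duplicates (same
column list, same order) can be spotted straight from information_schema.
A minimal sketch for MySQL/MariaDB; unlike the Percona tool it does not
catch left-prefix redundancy, only exact duplicates:

-- Pairs of indexes on the same table with identical column lists.
SELECT a.TABLE_SCHEMA, a.TABLE_NAME,
       a.INDEX_NAME AS index_a, b.INDEX_NAME AS index_b, a.cols
FROM (SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME,
             GROUP_CONCAT(COLUMN_NAME ORDER BY SEQ_IN_INDEX) AS cols
      FROM information_schema.STATISTICS
      GROUP BY TABLE_SCHEMA, TABLE_NAME, INDEX_NAME) a
JOIN (SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME,
             GROUP_CONCAT(COLUMN_NAME ORDER BY SEQ_IN_INDEX) AS cols
      FROM information_schema.STATISTICS
      GROUP BY TABLE_SCHEMA, TABLE_NAME, INDEX_NAME) b
  ON a.TABLE_SCHEMA = b.TABLE_SCHEMA
 AND a.TABLE_NAME = b.TABLE_NAME
 AND a.cols = b.cols
 AND a.INDEX_NAME > b.INDEX_NAME    -- list each pair once
ORDER BY a.TABLE_SCHEMA, a.TABLE_NAME;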
From yuta.kazato at ntt.com Thu Jun 15 21:17:12 2023
From: yuta.kazato at ntt.com (Yuta Kazato)
Date: Thu, 15 Jun 2023 21:17:12 +0000
Subject: [ptg][tacker] Tacker PTG schedule changed (16:00-17:00)
In-Reply-To:
References:
Message-ID:

Hi Tacker team,

Today we decided to change the timeslot of our PTG session for reasons of
agenda changes and room limitations.
The new timeslot and room is 16:00-17:00 UTC at room 10 and remote.

See tacker's etherpad for the PTG details, thanks.
https://etherpad.opendev.org/p/vancouver-june2023-tacker

-----Original Message-----
From: Yasufumi Ogawa
Sent: Monday, June 12, 2023 1:44 PM
To: openstack-discuss
Subject: [ptg][tacker] Tacker PTG schedule

Hi team,

We are going to have our PTG session on this Thursday 15:00-17:50 UTC at
room 10. We have also set up Webex so that you can join remotely. See
tacker's etherpad for the link to the remote session.

https://etherpad.opendev.org/p/vancouver-june2023-tacker

Thanks,
Yasufumi

From jamesleong123098 at gmail.com Fri Jun 16 00:01:36 2023
From: jamesleong123098 at gmail.com (James Leong)
Date: Thu, 15 Jun 2023 19:01:36 -0500
Subject: [kolla-ansible][zun][cinder] Restricting the amount of disk when creating containers
Message-ID:

Hi all,

I am using the yoga version of OpenStack with the deployment tool of
kolla-ansible. How would I create a container on the horizon dashboard
with a specific disk size? Every time I create a container, I get the
error below:
"Your host does not support the disk quota feature."

Is it essential to have Cinder (the volume service) set up? I have not
enabled Cinder in my use case. If so, would it be possible to provide some
information regarding how I can enable Cinder on OpenStack? In the past
month, I attempted to enable Cinder, but it never worked on my site.

Thanks for your help,
James
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nguyenhuukhoinw at gmail.com Fri Jun 16 02:18:47 2023
From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=)
Date: Fri, 16 Jun 2023 09:18:47 +0700
Subject: [kolla-ansible][zun][cinder] Restricting the amount of disk when creating containers
In-Reply-To:
References:
Message-ID:

Hello.
What happened when you enabled Cinder?
Nguyen Huu Khoi

On Fri, Jun 16, 2023 at 7:14 AM James Leong wrote:

> Hi all,
>
> I am using the yoga version of OpenStack with the deployment tool of
> kolla-ansible. How would I create a container on the horizon dashboard
> with a specific disk size? Every time I create a container, I get the
> error below:
> "Your host does not support the disk quota feature."
>
> Is it essential to have Cinder (the volume service) set up? I have not
> enabled Cinder in my use case. If so, would it be possible to provide some
> information regarding how I can enable Cinder on OpenStack? In the past
> month, I attempted to enable Cinder, but it never worked on my site.
>
> Thanks for your help,
> James
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From christian.rohmann at inovex.de Fri Jun 16 06:45:01 2023
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Fri, 16 Jun 2023 08:45:01 +0200
Subject: [all][db] Lots of redundant DB indices ?
In-Reply-To: <8d2826de-d292-01d8-e457-a7404b4ade1a@inovex.de>
References: <8d2826de-d292-01d8-e457-a7404b4ade1a@inovex.de>
Message-ID:

(Sending this message again, as apparently the formatting was messed up
the first time).
Hello OpenStack-Discuss, I recently saw lots of warnings the like of: "?/usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (1831, 'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release') result = self._query(query)" This originated from MySQL error 1831 (see [1]). I then raised a few bugs and also did some changes in regards to the duplicate indices I noticed: ?* Placement https://storyboard.openstack.org/#!/story/2010251 / https://review.opendev.org/c/openstack/placement/+/856770 ?* Keystone - https://bugs.launchpad.net/keystone/+bug/1988297 / https://review.opendev.org/c/openstack/keystone/+/885463 ?* Neutron -? https://bugs.launchpad.net/neutron/+bug/1988421 / https://review.opendev.org/c/openstack/neutron/+/885456 ?* Nova - https://review.opendev.org/c/openstack/nova/+/856757 But running Percona's pt-duplicate-key-checker ([2]) there are quite a few more redundant indices reported: > ALTER TABLE `cinder`.`driver_initiator_data` DROP INDEX `ix_driver_initiator_data_initiator`; > ALTER TABLE `cinder`.`quota_usages` DROP INDEX `ix_quota_usages_project_id`; > ALTER TABLE `cinder`.`quota_usages` DROP INDEX `quota_usage_project_resource_idx`; > ALTER TABLE `designate`.`records` DROP FOREIGN KEY `records_ibfk_2`; > ALTER TABLE `designate`.`recordsets` DROP INDEX `rrset_zoneid`; > ALTER TABLE `designate`.`recordsets` DROP INDEX `rrset_type`; > ALTER TABLE `glance`.`image_members` DROP INDEX `ix_image_members_image_id`; > ALTER TABLE `glance`.`image_members` DROP INDEX `ix_image_members_image_id_member`; > ALTER TABLE `glance`.`image_properties` DROP INDEX `ix_image_properties_image_id`; > ALTER TABLE `glance`.`image_tags` DROP INDEX `ix_image_tags_image_id`; > ALTER TABLE `keystone`.`access_rule` DROP INDEX `external_id`; > ALTER TABLE `keystone`.`access_rule` DROP INDEX `user_id`; > ALTER TABLE `keystone`.`project_tag` DROP INDEX `project_id`; > ALTER TABLE `keystone`.`token` DROP INDEX `ix_token_expires`; > ALTER TABLE `mysql`.`transaction_registry` DROP INDEX `commit_timestamp`, ADD INDEX `commit_timestamp` (`commit_timestamp`); > ALTER TABLE `neutron`.`addressgrouprbacs` DROP INDEX `ix_addressgrouprbacs_target_project`; > ALTER TABLE `neutron`.`addressscoperbacs` DROP INDEX `ix_addressscoperbacs_target_project`; > ALTER TABLE `neutron`.`floatingipdnses` DROP INDEX `ix_floatingipdnses_floatingip_id`; > ALTER TABLE `neutron`.`networkdnsdomains` DROP INDEX `ix_networkdnsdomains_network_id`; > ALTER TABLE `neutron`.`networkrbacs` DROP INDEX `ix_networkrbacs_target_project`; > ALTER TABLE `neutron`.`ovn_hash_ring` DROP INDEX `ix_ovn_hash_ring_node_uuid`; > ALTER TABLE `neutron`.`ovn_revision_numbers` DROP INDEX `ix_ovn_revision_numbers_resource_uuid`; > ALTER TABLE `neutron`.`portdataplanestatuses` DROP INDEX `ix_portdataplanestatuses_port_id`; > ALTER TABLE `neutron`.`portdnses` DROP INDEX `ix_portdnses_port_id`; > ALTER TABLE `neutron`.`ports` DROP INDEX `ix_ports_network_id_mac_address`; > ALTER TABLE `neutron`.`portuplinkstatuspropagation` DROP INDEX `ix_portuplinkstatuspropagation_port_id`; > ALTER TABLE `neutron`.`qos_minimum_bandwidth_rules` DROP INDEX `ix_qos_minimum_bandwidth_rules_qos_policy_id`; > ALTER TABLE `neutron`.`qos_minimum_packet_rate_rules` DROP INDEX `ix_qos_minimum_packet_rate_rules_qos_policy_id`; > ALTER TABLE `neutron`.`qos_packet_rate_limit_rules` DROP INDEX `ix_qos_packet_rate_limit_rules_qos_policy_id`; > ALTER TABLE `neutron`.`qos_policies_default` DROP INDEX `ix_qos_policies_default_project_id`; > ALTER 
TABLE `neutron`.`qospolicyrbacs` DROP INDEX `ix_qospolicyrbacs_target_project`; > ALTER TABLE `neutron`.`quotas` DROP INDEX `ix_quotas_project_id`; > ALTER TABLE `neutron`.`quotausages` DROP INDEX `ix_quotausages_project_id`; > ALTER TABLE `neutron`.`securitygrouprbacs` DROP INDEX `ix_securitygrouprbacs_target_project`; > ALTER TABLE `neutron`.`segmenthostmappings` DROP INDEX `ix_segmenthostmappings_segment_id`; > ALTER TABLE `neutron`.`subnet_dns_publish_fixed_ips` DROP INDEX `ix_subnet_dns_publish_fixed_ips_subnet_id`; > ALTER TABLE `neutron`.`subnetpoolrbacs` DROP INDEX `ix_subnetpoolrbacs_target_project`; > ALTER TABLE `nova`.`agent_builds` DROP INDEX `agent_builds_hypervisor_os_arch_idx`; > ALTER TABLE `nova`.`block_device_mapping` DROP INDEX `block_device_mapping_instance_uuid_idx`; > ALTER TABLE `nova`.`console_auth_tokens` DROP INDEX `console_auth_tokens_token_hash_idx`; > ALTER TABLE `nova`.`fixed_ips` DROP INDEX `address`; > ALTER TABLE `nova`.`fixed_ips` DROP INDEX `network_id`; > ALTER TABLE `nova`.`instance_actions` DROP INDEX `instance_uuid_idx`; > ALTER TABLE `nova`.`instance_type_extra_specs` DROP INDEX `instance_type_extra_specs_instance_type_id_key_idx`; > ALTER TABLE `nova`.`instance_type_projects` DROP INDEX `instance_type_id`; > ALTER TABLE `nova`.`instances` DROP INDEX `uniq_instances0uuid`; > ALTER TABLE `nova`.`instances` DROP INDEX `instances_project_id_idx`; > ALTER TABLE `nova`.`inventories` DROP INDEX `inventories_resource_provider_id_idx`; > ALTER TABLE `nova`.`inventories` DROP INDEX `inventories_resource_provider_resource_class_idx`; > ALTER TABLE `nova`.`networks` DROP INDEX `networks_vlan_deleted_idx`; > ALTER TABLE `nova`.`resource_providers` DROP INDEX `resource_providers_name_idx`; > ALTER TABLE `nova`.`resource_providers` DROP INDEX `resource_providers_uuid_idx`; > ALTER TABLE `nova_api`.`build_requests` DROP INDEX `build_requests_instance_uuid_idx`; > ALTER TABLE `nova_api`.`cell_mappings` DROP INDEX `uuid_idx`; > ALTER TABLE `nova_api`.`flavor_extra_specs` DROP INDEX `flavor_extra_specs_flavor_id_key_idx`; > ALTER TABLE `nova_api`.`host_mappings` DROP INDEX `host_idx`; > ALTER TABLE `nova_api`.`instance_mappings` DROP INDEX `instance_uuid_idx`; > ALTER TABLE `nova_api`.`inventories` DROP INDEX `inventories_resource_provider_id_idx`; > ALTER TABLE `nova_api`.`inventories` DROP INDEX `inventories_resource_provider_resource_class_idx`; > ALTER TABLE `nova_api`.`placement_aggregates` DROP INDEX `ix_placement_aggregates_uuid`; > ALTER TABLE `nova_api`.`project_user_quotas` DROP INDEX `project_user_quotas_user_id_idx`; > ALTER TABLE `nova_api`.`request_specs` DROP INDEX `request_spec_instance_uuid_idx`; > ALTER TABLE `nova_api`.`resource_providers` DROP INDEX `resource_providers_name_idx`; > ALTER TABLE `nova_api`.`resource_providers` DROP INDEX `resource_providers_uuid_idx`; > ALTER TABLE `nova_cell0`.`agent_builds` DROP INDEX `agent_builds_hypervisor_os_arch_idx`; > ALTER TABLE `nova_cell0`.`block_device_mapping` DROP INDEX `block_device_mapping_instance_uuid_idx`; > ALTER TABLE `nova_cell0`.`console_auth_tokens` DROP INDEX `console_auth_tokens_token_hash_idx`; > ALTER TABLE `nova_cell0`.`fixed_ips` DROP INDEX `address`; > ALTER TABLE `nova_cell0`.`fixed_ips` DROP INDEX `network_id`; > ALTER TABLE `nova_cell0`.`instance_actions` DROP INDEX `instance_uuid_idx`; > ALTER TABLE `nova_cell0`.`instance_type_extra_specs` DROP INDEX `instance_type_extra_specs_instance_type_id_key_idx`; > ALTER TABLE `nova_cell0`.`instance_type_projects` DROP INDEX 
`instance_type_id`; > ALTER TABLE `nova_cell0`.`instances` DROP INDEX `uniq_instances0uuid`; > ALTER TABLE `nova_cell0`.`instances` DROP INDEX `instances_project_id_idx`; > ALTER TABLE `nova_cell0`.`inventories` DROP INDEX `inventories_resource_provider_id_idx`; > ALTER TABLE `nova_cell0`.`inventories` DROP INDEX `inventories_resource_provider_resource_class_idx`; > ALTER TABLE `nova_cell0`.`networks` DROP INDEX `networks_vlan_deleted_idx`; > ALTER TABLE `nova_cell0`.`resource_providers` DROP INDEX `resource_providers_name_idx`; > ALTER TABLE `nova_cell0`.`resource_providers` DROP INDEX `resource_providers_uuid_idx`; > ALTER TABLE `placement`.`inventories` DROP INDEX `inventories_resource_provider_id_idx`; > ALTER TABLE `placement`.`inventories` DROP INDEX `inventories_resource_provider_resource_class_idx`; > ALTER TABLE `placement`.`placement_aggregates` DROP INDEX `ix_placement_aggregates_uuid`; > ALTER TABLE `placement`.`resource_providers` DROP INDEX `resource_providers_name_idx`; > ALTER TABLE `placement`.`resource_providers` DROP INDEX `resource_providers_uuid_idx`; > > > > # ######################################################################## > # Summary of indexes > # ######################################################################## > > # Size Duplicate Indexes?? 35810 > # Total Duplicate Indexes? 84 > # Total Indexes??????????? 1792 > > (When adding more services (Telemetry, ...) there are even more indices reported.) Does it make sense to address this more broadly maybe? Especially with all the changes happening for SQLAlchemy 2.x currently? Regards Christian [1] https://mariadb.com/kb/en/mariadb-error-codes/ [2] https://docs.percona.com/percona-toolkit/pt-duplicate-key-checker.html From katonalala at gmail.com Fri Jun 16 07:43:34 2023 From: katonalala at gmail.com (Lajos Katona) Date: Fri, 16 Jun 2023 09:43:34 +0200 Subject: [all][db] Lots of redundant DB indices ? In-Reply-To: References: <8d2826de-d292-01d8-e457-a7404b4ade1a@inovex.de> Message-ID: Hi Christian, Rodolfo started to investigate this topic for Neutron, see the bug and patch for it: https://bugs.launchpad.net/neutron/+bug/2024044 https://review.opendev.org/c/openstack/neutron/+/886213 Please review and comment it :-) Lajos (lajoskatona) Christian Rohmann ezt ?rta (id?pont: 2023. j?n. 16., P, 8:57): > (Sending this message again, as apparently the formatting was messed up > the first time). > > > Hello OpenStack-Discuss, > > > > I recently saw lots of warnings the like of: > > "?/usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (1831, > 'Duplicate index `uniq_instances0uuid`. This is deprecated and will be > disallowed in a future release') > result = self._query(query)" > This originated from MySQL error 1831 (see [1]). 
> > > > I then raised a few bugs and also did some changes in regards to the > duplicate indices I noticed: > > * Placement https://storyboard.openstack.org/#!/story/2010251 / > https://review.opendev.org/c/openstack/placement/+/856770 > * Keystone - https://bugs.launchpad.net/keystone/+bug/1988297 / > https://review.opendev.org/c/openstack/keystone/+/885463 > * Neutron - https://bugs.launchpad.net/neutron/+bug/1988421 / > https://review.opendev.org/c/openstack/neutron/+/885456 > * Nova - https://review.opendev.org/c/openstack/nova/+/856757 > > > > > But running Percona's pt-duplicate-key-checker ([2]) there are quite a > few more redundant indices reported: > > > ALTER TABLE `cinder`.`driver_initiator_data` DROP INDEX > `ix_driver_initiator_data_initiator`; > > ALTER TABLE `cinder`.`quota_usages` DROP INDEX > `ix_quota_usages_project_id`; > > ALTER TABLE `cinder`.`quota_usages` DROP INDEX > `quota_usage_project_resource_idx`; > > ALTER TABLE `designate`.`records` DROP FOREIGN KEY `records_ibfk_2`; > > ALTER TABLE `designate`.`recordsets` DROP INDEX `rrset_zoneid`; > > ALTER TABLE `designate`.`recordsets` DROP INDEX `rrset_type`; > > ALTER TABLE `glance`.`image_members` DROP INDEX > `ix_image_members_image_id`; > > ALTER TABLE `glance`.`image_members` DROP INDEX > `ix_image_members_image_id_member`; > > ALTER TABLE `glance`.`image_properties` DROP INDEX > `ix_image_properties_image_id`; > > ALTER TABLE `glance`.`image_tags` DROP INDEX `ix_image_tags_image_id`; > > ALTER TABLE `keystone`.`access_rule` DROP INDEX `external_id`; > > ALTER TABLE `keystone`.`access_rule` DROP INDEX `user_id`; > > ALTER TABLE `keystone`.`project_tag` DROP INDEX `project_id`; > > ALTER TABLE `keystone`.`token` DROP INDEX `ix_token_expires`; > > ALTER TABLE `mysql`.`transaction_registry` DROP INDEX > `commit_timestamp`, ADD INDEX `commit_timestamp` (`commit_timestamp`); > > ALTER TABLE `neutron`.`addressgrouprbacs` DROP INDEX > `ix_addressgrouprbacs_target_project`; > > ALTER TABLE `neutron`.`addressscoperbacs` DROP INDEX > `ix_addressscoperbacs_target_project`; > > ALTER TABLE `neutron`.`floatingipdnses` DROP INDEX > `ix_floatingipdnses_floatingip_id`; > > ALTER TABLE `neutron`.`networkdnsdomains` DROP INDEX > `ix_networkdnsdomains_network_id`; > > ALTER TABLE `neutron`.`networkrbacs` DROP INDEX > `ix_networkrbacs_target_project`; > > ALTER TABLE `neutron`.`ovn_hash_ring` DROP INDEX > `ix_ovn_hash_ring_node_uuid`; > > ALTER TABLE `neutron`.`ovn_revision_numbers` DROP INDEX > `ix_ovn_revision_numbers_resource_uuid`; > > ALTER TABLE `neutron`.`portdataplanestatuses` DROP INDEX > `ix_portdataplanestatuses_port_id`; > > ALTER TABLE `neutron`.`portdnses` DROP INDEX `ix_portdnses_port_id`; > > ALTER TABLE `neutron`.`ports` DROP INDEX > `ix_ports_network_id_mac_address`; > > ALTER TABLE `neutron`.`portuplinkstatuspropagation` DROP INDEX > `ix_portuplinkstatuspropagation_port_id`; > > ALTER TABLE `neutron`.`qos_minimum_bandwidth_rules` DROP INDEX > `ix_qos_minimum_bandwidth_rules_qos_policy_id`; > > ALTER TABLE `neutron`.`qos_minimum_packet_rate_rules` DROP INDEX > `ix_qos_minimum_packet_rate_rules_qos_policy_id`; > > ALTER TABLE `neutron`.`qos_packet_rate_limit_rules` DROP INDEX > `ix_qos_packet_rate_limit_rules_qos_policy_id`; > > ALTER TABLE `neutron`.`qos_policies_default` DROP INDEX > `ix_qos_policies_default_project_id`; > > ALTER TABLE `neutron`.`qospolicyrbacs` DROP INDEX > `ix_qospolicyrbacs_target_project`; > > ALTER TABLE `neutron`.`quotas` DROP INDEX `ix_quotas_project_id`; > > ALTER TABLE 
`neutron`.`quotausages` DROP INDEX > `ix_quotausages_project_id`; > > ALTER TABLE `neutron`.`securitygrouprbacs` DROP INDEX > `ix_securitygrouprbacs_target_project`; > > ALTER TABLE `neutron`.`segmenthostmappings` DROP INDEX > `ix_segmenthostmappings_segment_id`; > > ALTER TABLE `neutron`.`subnet_dns_publish_fixed_ips` DROP INDEX > `ix_subnet_dns_publish_fixed_ips_subnet_id`; > > ALTER TABLE `neutron`.`subnetpoolrbacs` DROP INDEX > `ix_subnetpoolrbacs_target_project`; > > ALTER TABLE `nova`.`agent_builds` DROP INDEX > `agent_builds_hypervisor_os_arch_idx`; > > ALTER TABLE `nova`.`block_device_mapping` DROP INDEX > `block_device_mapping_instance_uuid_idx`; > > ALTER TABLE `nova`.`console_auth_tokens` DROP INDEX > `console_auth_tokens_token_hash_idx`; > > ALTER TABLE `nova`.`fixed_ips` DROP INDEX `address`; > > ALTER TABLE `nova`.`fixed_ips` DROP INDEX `network_id`; > > ALTER TABLE `nova`.`instance_actions` DROP INDEX `instance_uuid_idx`; > > ALTER TABLE `nova`.`instance_type_extra_specs` DROP INDEX > `instance_type_extra_specs_instance_type_id_key_idx`; > > ALTER TABLE `nova`.`instance_type_projects` DROP INDEX > `instance_type_id`; > > ALTER TABLE `nova`.`instances` DROP INDEX `uniq_instances0uuid`; > > ALTER TABLE `nova`.`instances` DROP INDEX `instances_project_id_idx`; > > ALTER TABLE `nova`.`inventories` DROP INDEX > `inventories_resource_provider_id_idx`; > > ALTER TABLE `nova`.`inventories` DROP INDEX > `inventories_resource_provider_resource_class_idx`; > > ALTER TABLE `nova`.`networks` DROP INDEX `networks_vlan_deleted_idx`; > > ALTER TABLE `nova`.`resource_providers` DROP INDEX > `resource_providers_name_idx`; > > ALTER TABLE `nova`.`resource_providers` DROP INDEX > `resource_providers_uuid_idx`; > > ALTER TABLE `nova_api`.`build_requests` DROP INDEX > `build_requests_instance_uuid_idx`; > > ALTER TABLE `nova_api`.`cell_mappings` DROP INDEX `uuid_idx`; > > ALTER TABLE `nova_api`.`flavor_extra_specs` DROP INDEX > `flavor_extra_specs_flavor_id_key_idx`; > > ALTER TABLE `nova_api`.`host_mappings` DROP INDEX `host_idx`; > > ALTER TABLE `nova_api`.`instance_mappings` DROP INDEX > `instance_uuid_idx`; > > ALTER TABLE `nova_api`.`inventories` DROP INDEX > `inventories_resource_provider_id_idx`; > > ALTER TABLE `nova_api`.`inventories` DROP INDEX > `inventories_resource_provider_resource_class_idx`; > > ALTER TABLE `nova_api`.`placement_aggregates` DROP INDEX > `ix_placement_aggregates_uuid`; > > ALTER TABLE `nova_api`.`project_user_quotas` DROP INDEX > `project_user_quotas_user_id_idx`; > > ALTER TABLE `nova_api`.`request_specs` DROP INDEX > `request_spec_instance_uuid_idx`; > > ALTER TABLE `nova_api`.`resource_providers` DROP INDEX > `resource_providers_name_idx`; > > ALTER TABLE `nova_api`.`resource_providers` DROP INDEX > `resource_providers_uuid_idx`; > > ALTER TABLE `nova_cell0`.`agent_builds` DROP INDEX > `agent_builds_hypervisor_os_arch_idx`; > > ALTER TABLE `nova_cell0`.`block_device_mapping` DROP INDEX > `block_device_mapping_instance_uuid_idx`; > > ALTER TABLE `nova_cell0`.`console_auth_tokens` DROP INDEX > `console_auth_tokens_token_hash_idx`; > > ALTER TABLE `nova_cell0`.`fixed_ips` DROP INDEX `address`; > > ALTER TABLE `nova_cell0`.`fixed_ips` DROP INDEX `network_id`; > > ALTER TABLE `nova_cell0`.`instance_actions` DROP INDEX > `instance_uuid_idx`; > > ALTER TABLE `nova_cell0`.`instance_type_extra_specs` DROP INDEX > `instance_type_extra_specs_instance_type_id_key_idx`; > > ALTER TABLE `nova_cell0`.`instance_type_projects` DROP INDEX > `instance_type_id`; > > ALTER TABLE 
`nova_cell0`.`instances` DROP INDEX `uniq_instances0uuid`; > > ALTER TABLE `nova_cell0`.`instances` DROP INDEX > `instances_project_id_idx`; > > ALTER TABLE `nova_cell0`.`inventories` DROP INDEX > `inventories_resource_provider_id_idx`; > > ALTER TABLE `nova_cell0`.`inventories` DROP INDEX > `inventories_resource_provider_resource_class_idx`; > > ALTER TABLE `nova_cell0`.`networks` DROP INDEX > `networks_vlan_deleted_idx`; > > ALTER TABLE `nova_cell0`.`resource_providers` DROP INDEX > `resource_providers_name_idx`; > > ALTER TABLE `nova_cell0`.`resource_providers` DROP INDEX > `resource_providers_uuid_idx`; > > ALTER TABLE `placement`.`inventories` DROP INDEX > `inventories_resource_provider_id_idx`; > > ALTER TABLE `placement`.`inventories` DROP INDEX > `inventories_resource_provider_resource_class_idx`; > > ALTER TABLE `placement`.`placement_aggregates` DROP INDEX > `ix_placement_aggregates_uuid`; > > ALTER TABLE `placement`.`resource_providers` DROP INDEX > `resource_providers_name_idx`; > > ALTER TABLE `placement`.`resource_providers` DROP INDEX > `resource_providers_uuid_idx`; > > > > > > > > # > ######################################################################## > > # Summary of indexes > > # > ######################################################################## > > > > # Size Duplicate Indexes 35810 > > # Total Duplicate Indexes 84 > > # Total Indexes 1792 > > > > > > (When adding more services (Telemetry, ...) there are even more indices > reported.) > > > > > Does it make sense to address this more broadly maybe? > Especially with all the changes happening for SQLAlchemy 2.x currently? > > > > > Regards > > > Christian > > > [1] https://mariadb.com/kb/en/mariadb-error-codes/ > [2] https://docs.percona.com/percona-toolkit/pt-duplicate-key-checker.html > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Fri Jun 16 14:06:49 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Fri, 16 Jun 2023 21:06:49 +0700 Subject: [nova] update_resources_interval parameter In-Reply-To: References: Message-ID: Hello. I think that is the reason why people don't use all quorum queue. https://bugs.launchpad.net/openstack-ansible/+bug/1607830 Nguyen Huu Khoi On Thu, Jun 15, 2023 at 4:55?PM Arnaud Morin wrote: > Hey team, > > I'd like to understand the stakes behing the update_resources_interval > parameter ([1]) > > We decided on our side to increase this value from 60sec to 600sec (see > [2]). > > What I understand is that is will "delay" the update of metrics on nova > side. > I mostly think that these metrics are used by filter scheduler to select > the best host when scheduling. > > Is there anything else it can affect? > > Cheers, > Arnaud. > > > [1] > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.update_resources_interval > [2] https://review.opendev.org/c/openstack/large-scale/+/886166 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Fri Jun 16 15:36:22 2023 From: mnasiadka at gmail.com (=?utf-8?Q?Micha=C5=82_Nasiadka?=) Date: Fri, 16 Jun 2023 08:36:22 -0700 Subject: [kolla] User forum Etherpad link Message-ID: <265D678F-32C0-4D7F-9377-3657081154DF@gmail.com> Hello Koalas, Thank you for attending the Kolla User forum on the Vancouver summit. You can check out the etherpad [1] for notes. 
[1]: https://etherpad.opendev.org/p/YVR23-kolla-user-forum Michal -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlandy at redhat.com Fri Jun 16 20:01:35 2023 From: rlandy at redhat.com (Ronelle Landy) Date: Fri, 16 Jun 2023 16:01:35 -0400 Subject: [tripleo] Gate blocker - CentOS 9 jobs In-Reply-To: References: Message-ID: On Wed, Jun 14, 2023 at 3:13?PM Ronelle Landy wrote: > Hello All, > > There is currently a failure on CentOS 9 jobs that is impacting check/gate > and periodic jobs - causing Tempest jobs to fail. > > The details of the issue are in: > https://bugs.launchpad.net/tripleo/+bug/2023764. > > A workaround is being tested. We will update this thread when the gate is > unblocked. > The gate blocker is cleared now. It is safe to recheck failed jobs. Thank you > > Thank you, > CI Team > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sun Jun 18 02:04:39 2023 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 17 Jun 2023 22:04:39 -0400 Subject: [OPENSTACK][rabbitmq] using quorum queues In-Reply-To: References: Message-ID: Great! This is good to know that Quorum is a good solution. Do you have a config to enable in kolla-ansible deployment? On Thu, Jun 15, 2023 at 5:43?AM Arnaud Morin wrote: > Hey, > > We are also using quorum in some regions and plan to enable quorum > everwhere. > > Note that we also manage to enable quorum for transient queues (using a > custom patch as it's not doable with current oslo.messaging, see my > request in [1]). > We also introduced some custom changes in py-amqp to handle correctly > the rabbit disconnections (see [2] and [3]). > > So far, the real improvment is achieved thanks to the combination of all > of these changes, enabling quorum queue only was not enough for us to > notice any improvment. > > The downside of quorum queues is that it consume more power on the > rabbit cluster: you need more IO, CPU, RAM and network bandwith for the > same number of queues (see [4]). > It has to be taken into account. > > Cheers, > Arnaud. > > [1] > https://lists.openstack.org/pipermail/openstack-discuss/2023-April/033343.html > [2] https://github.com/celery/py-amqp/pull/410 > [3] https://github.com/celery/py-amqp/pull/405 > [4] > https://plik.ovh/file/nHCny7psDCTrEm76/Pq9ASO9wUd8HRk4C/s_1686817000.png > > > > On 13.06.23 - 09:14, Sa Pham wrote: > > Dear Kh?i, > > > > Thanks for your reply. > > > > > > > > On Tue, Jun 13, 2023 at 9:05?AM Nguy?n H?u Kh?i < > nguyenhuukhoinw at gmail.com> > > wrote: > > > > > Hello. > > > Firstly, when I used the classic queue and sometimes, my rabbitmq > cluster > > > was broken, the computers showed state down and I needed to restart the > > > computer service to make it up. Secondly, 1 of 3 controller is down > but my > > > system still works although it is not very first as fully controller. > I ran > > > it for about 3 months compared with classic. My openstack is Yoga and > use > > > Kolla-Ansible as a deployment tool, > > > Nguyen Huu Khoi > > > > > > > > > On Tue, Jun 13, 2023 at 8:43?AM Sa Pham wrote: > > > > > >> Hi Kh?i, > > >> > > >> Why do you say using the quorum queue is more stable than the classic > > >> queue ? > > >> > > >> Thanks, > > >> > > >> > > >> > > >> On Tue, Jun 13, 2023 at 7:26?AM Nguy?n H?u Kh?i < > > >> nguyenhuukhoinw at gmail.com> wrote: > > >> > > >>> Hello Huettner, > > >>> I have used the quorum queue since March and it is ok until now. It > > >>> looks more stable than the classic queue. 
Some feedback to you. > > >>> Thank you. > > >>> Nguyen Huu Khoi. > > >>> > > >>> > > >>> > > >>> On Mon, May 8, 2023 at 1:14?PM Felix H?ttner > > > >>> wrote: > > >>> > > >>>> Hi Nguyen, > > >>>> > > >>>> > > >>>> > > >>>> we are using quorum queues for one of our deployments. So fare we > did > > >>>> not have any issue with them. They also seem to survive restarts > without > > >>>> issues (however reply queues are still broken afterwards in a small > amount > > >>>> of cases, but they are no quorum/mirrored queues anyway). > > >>>> > > >>>> > > >>>> > > >>>> So I would recommend them for everyone that creates a new cluster. > > >>>> > > >>>> > > >>>> > > >>>> -- > > >>>> > > >>>> Felix Huettner > > >>>> > > >>>> > > >>>> > > >>>> *From:* Nguy?n H?u Kh?i > > >>>> *Sent:* Saturday, May 6, 2023 4:29 AM > > >>>> *To:* OpenStack Discuss > > >>>> *Subject:* [OPENSTACK][rabbitmq] using quorum queues > > >>>> > > >>>> > > >>>> > > >>>> Hello guys. > > >>>> > > >>>> IS there any guy who uses the quorum queue for openstack? Could you > > >>>> give some feedback to compare with classic queue? > > >>>> > > >>>> Thank you. > > >>>> > > >>>> Nguyen Huu Khoi > > >>>> > > >>>> Diese E Mail enth?lt m?glicherweise vertrauliche Inhalte und ist nur > > >>>> f?r die Verwertung durch den vorgesehenen Empf?nger bestimmt. > > >>>> Sollten Sie nicht der vorgesehene Empf?nger sein, setzen Sie den > > >>>> Absender bitte unverz?glich in Kenntnis und l?schen diese E Mail. > > >>>> > > >>>> Hinweise zum Datenschutz finden Sie hier > > >>>> . > > >>>> > > >>>> > > >>>> This e-mail may contain confidential content and is intended only > for > > >>>> the specified recipient/s. > > >>>> If you are not the intended recipient, please inform the sender > > >>>> immediately and delete this e-mail. > > >>>> > > >>>> Information on data protection can be found here > > >>>> . > > >>>> > > >>> > > >> > > >> -- > > >> Sa Pham Dang > > >> Skype: great_bn > > >> Phone/Telegram: 0986.849.582 > > >> > > >> > > >> > > > > -- > > Sa Pham Dang > > Skype: great_bn > > Phone/Telegram: 0986.849.582 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sun Jun 18 02:45:49 2023 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 17 Jun 2023 22:45:49 -0400 Subject: [kolla] rabbitmq failed to build image using 2023.1 release In-Reply-To: References: Message-ID: Great! Michal, How did we miss this bug in CI ? I did install kolla using the "python3 -m pip install kolla==16.0.0" command. In this case, how do I upgrade it to fix bugs? Should I do a pip install --upgrade kolla? On Mon, Jun 12, 2023 at 6:15?AM Michal Arbet wrote: > https://review.opendev.org/c/openstack/kolla/+/885857 > Michal Arbet > Openstack Engineer > > Ultimum Technologies a.s. > Na Po???? 1047/26, 11000 Praha 1 > Czech Republic > > +420 604 228 897 > michal.arbet at ultimum.io > *https://ultimum.io * > > LinkedIn | Twitter > | Facebook > > > > po 12. 6. 2023 v 11:48 odes?latel Michal Arbet > napsal: > >> APT dependencies broken :( >> Michal Arbet >> Openstack Engineer >> >> Ultimum Technologies a.s. >> Na Po???? 1047/26, 11000 Praha 1 >> Czech Republic >> >> +420 604 228 897 >> michal.arbet at ultimum.io >> *https://ultimum.io * >> >> LinkedIn | >> Twitter | Facebook >> >> >> >> ne 11. 6. 2023 v 4:37 odes?latel Satish Patel >> napsal: >> >>> Folks, >>> >>> Do you know how to solve this? 
I am using release 2023.1 of kolla to >>> build images using ubuntu 22.04 >>> >>> root at docker-reg:/opt/kolla/etc/kolla# kolla-build --registry >>> docker-reg:4000 --config-file kolla-build.conf --debug --threads 1 >>> --skip-existing --push --cache --format none rabbitmq >>> INFO:kolla.common.utils:Using engine: docker >>> INFO:kolla.common.utils:Found the container image folder at >>> /usr/local/share/kolla/docker >>> INFO:kolla.common.utils:Added image rabbitmq to queue >>> INFO:kolla.common.utils:Attempt number: 1 to run task: >>> BuildTask(rabbitmq) >>> DEBUG:kolla.common.utils.rabbitmq:Processing >>> INFO:kolla.common.utils.rabbitmq:Building started at 2023-06-11 >>> 02:25:44.208880 >>> DEBUG:kolla.common.utils.rabbitmq:Turned 0 plugins into plugins archive >>> DEBUG:kolla.common.utils.rabbitmq:Turned 0 additions into additions >>> archive >>> INFO:kolla.common.utils.rabbitmq:Step 1/11 : FROM >>> docker-reg:4000/kolla/base:2023.1 >>> INFO:kolla.common.utils.rabbitmq: ---> 4551f4af8ddf >>> INFO:kolla.common.utils.rabbitmq:Step 2/11 : LABEL maintainer="Kolla >>> Project (https://launchpad.net/kolla)" name="rabbitmq" >>> build-date="20230611" >>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>> INFO:kolla.common.utils.rabbitmq: ---> 6c2ef10499f7 >>> INFO:kolla.common.utils.rabbitmq:Step 3/11 : RUN usermod --append --home >>> /var/lib/rabbitmq --groups kolla rabbitmq && mkdir -p /var/lib/rabbitmq >>> && chown -R 42439:42439 /var/lib/rabbitmq >>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>> INFO:kolla.common.utils.rabbitmq: ---> 29ef8940f40b >>> INFO:kolla.common.utils.rabbitmq:Step 4/11 : RUN echo 'Uris: >>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu' >>> >/etc/apt/sources.list.d/erlang.sources && echo 'Components: main' >>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Types: deb' >>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Suites: jammy' >>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Signed-By: >>> /etc/kolla/apt-keys/erlang-ppa.gpg' >>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Uris: >>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu' >>> >/etc/apt/sources.list.d/rabbitmq.sources && echo 'Components: main' >>> >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Types: deb' >>> >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Suites: jammy' >>> >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Signed-By: >>> /etc/kolla/apt-keys/rabbitmq.gpg' >>/etc/apt/sources.list.d/rabbitmq.sources >>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>> INFO:kolla.common.utils.rabbitmq: ---> 6d92a7342a90 >>> INFO:kolla.common.utils.rabbitmq:Step 5/11 : RUN apt-get --error-on=any >>> update && apt-get -y install --no-install-recommends logrotate >>> rabbitmq-server && apt-get clean && rm -rf /var/lib/apt/lists/* >>> INFO:kolla.common.utils.rabbitmq: ---> Running in 0deab7961445 >>> INFO:kolla.common.utils.rabbitmq:Get:1 >>> http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope >>> InRelease [5,463 B] >>> INFO:kolla.common.utils.rabbitmq:Get:2 http://archive.ubuntu.com/ubuntu >>> jammy-backports InRelease [108 kB] >>> INFO:kolla.common.utils.rabbitmq:Get:3 >>> http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [3,447 B] >>> INFO:kolla.common.utils.rabbitmq:Get:7 >>> http://ubuntu-cloud.archive.canonical.com/ubuntu >>> jammy-updates/antelope/main amd64 Packages [126 kB] >>> INFO:kolla.common.utils.rabbitmq:Get:8 >>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >>> 
jammy InRelease [5,152 B] >>> INFO:kolla.common.utils.rabbitmq:Get:9 >>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy >>> InRelease [18.1 kB] >>> INFO:kolla.common.utils.rabbitmq:Get:5 http://mirror.siena.edu/ubuntu >>> jammy-updates InRelease [119 kB] >>> INFO:kolla.common.utils.rabbitmq:Get:10 >>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >>> jammy/main amd64 Packages [9,044 B] >>> INFO:kolla.common.utils.rabbitmq:Get:6 http://ftp.usf.edu/pub/ubuntu >>> jammy-security InRelease [110 kB] >>> INFO:kolla.common.utils.rabbitmq:Get:11 http://archive.ubuntu.com/ubuntu >>> jammy-backports/universe amd64 Packages [27.0 kB] >>> INFO:kolla.common.utils.rabbitmq:Get:4 >>> https://archive.linux.duke.edu/ubuntu jammy InRelease [270 kB] >>> INFO:kolla.common.utils.rabbitmq:Get:12 http://archive.ubuntu.com/ubuntu >>> jammy-backports/main amd64 Packages [49.4 kB] >>> INFO:kolla.common.utils.rabbitmq:Get:14 http://ubuntu.osuosl.org/ubuntu >>> jammy-updates/main amd64 Packages [857 kB] >>> INFO:kolla.common.utils.rabbitmq:Get:17 >>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu >>> jammy/main amd64 Packages [8,167 B] >>> INFO:kolla.common.utils.rabbitmq:Get:16 >>> http://pubmirrors.dal.corespace.com/ubuntu jammy-security/universe >>> amd64 Packages [928 kB] >>> INFO:kolla.common.utils.rabbitmq:Get:15 >>> https://atl.mirrors.clouvider.net/ubuntu jammy-security/main amd64 >>> Packages [575 kB] >>> INFO:kolla.common.utils.rabbitmq:Get:19 >>> http://www.club.cc.cmu.edu/pub/ubuntu jammy/main amd64 Packages [1,792 >>> kB] >>> INFO:kolla.common.utils.rabbitmq:Get:18 >>> http://mirror.team-cymru.com/ubuntu jammy/universe amd64 Packages [17.5 >>> MB] >>> INFO:kolla.common.utils.rabbitmq:Get:13 >>> http://mirrors.syringanetworks.net/ubuntu-archive >>> jammy-updates/universe amd64 Packages [1,176 kB] >>> INFO:kolla.common.utils.rabbitmq:Fetched 23.7 MB in 6s (4,091 kB/s) >>> INFO:kolla.common.utils.rabbitmq:Reading package lists... >>> INFO:kolla.common.utils.rabbitmq:Reading package lists... >>> INFO:kolla.common.utils.rabbitmq:Building dependency tree... >>> INFO:kolla.common.utils.rabbitmq:Reading state information... >>> INFO:kolla.common.utils.rabbitmq:Some packages could not be installed. >>> This may mean that you have >>> INFO:kolla.common.utils.rabbitmq:requested an impossible situation or if >>> you are using the unstable >>> INFO:kolla.common.utils.rabbitmq:distribution that some required >>> packages have not yet been created >>> INFO:kolla.common.utils.rabbitmq:or been moved out of Incoming. 
>>> INFO:kolla.common.utils.rabbitmq:The following information may help to >>> resolve the situation: >>> INFO:kolla.common.utils.rabbitmq:The following packages have unmet >>> dependencies: >>> INFO:kolla.common.utils.rabbitmq: rabbitmq-server : Depends: erlang-base >>> (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >>> INFO:kolla.common.utils.rabbitmq: >>> erlang-base-hipe (< 1:26.0) but it is not installable or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: >>> erlang-crypto (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>> installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: >>> erlang-eldap (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>> installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: >>> erlang-inets (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>> installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: >>> erlang-mnesia (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>> installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: >>> erlang-os-mon (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>> installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: >>> erlang-parsetools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>> installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: >>> erlang-public-key (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>> installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: >>> erlang-runtime-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to >>> be installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: erlang-ssl >>> (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: >>> erlang-syntax-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to >>> be installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: >>> erlang-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>> installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq: Depends: >>> erlang-xmerl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>> installed or >>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>> (< 1:26.0) but it is not installable >>> INFO:kolla.common.utils.rabbitmq:E: Unable to correct problems, you have >>> held broken packages. >>> INFO:kolla.common.utils.rabbitmq: >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arnaud.morin at gmail.com Sun Jun 18 08:48:37 2023 From: arnaud.morin at gmail.com (Arnaud) Date: Sun, 18 Jun 2023 10:48:37 +0200 Subject: [OPENSTACK][rabbitmq] using quorum queues In-Reply-To: References: Message-ID: We are not using kolla on our side, sorry Le 18 juin 2023 04:04:39 GMT+02:00, Satish Patel a ?crit?: >Great! This is good to know that Quorum is a good solution. > >Do you have a config to enable in kolla-ansible deployment? > >On Thu, Jun 15, 2023 at 5:43?AM Arnaud Morin wrote: > >> Hey, >> >> We are also using quorum in some regions and plan to enable quorum >> everwhere. >> >> Note that we also manage to enable quorum for transient queues (using a >> custom patch as it's not doable with current oslo.messaging, see my >> request in [1]). >> We also introduced some custom changes in py-amqp to handle correctly >> the rabbit disconnections (see [2] and [3]). >> >> So far, the real improvment is achieved thanks to the combination of all >> of these changes, enabling quorum queue only was not enough for us to >> notice any improvment. >> >> The downside of quorum queues is that it consume more power on the >> rabbit cluster: you need more IO, CPU, RAM and network bandwith for the >> same number of queues (see [4]). >> It has to be taken into account. >> >> Cheers, >> Arnaud. >> >> [1] >> https://lists.openstack.org/pipermail/openstack-discuss/2023-April/033343.html >> [2] https://github.com/celery/py-amqp/pull/410 >> [3] https://github.com/celery/py-amqp/pull/405 >> [4] >> https://plik.ovh/file/nHCny7psDCTrEm76/Pq9ASO9wUd8HRk4C/s_1686817000.png >> >> >> >> On 13.06.23 - 09:14, Sa Pham wrote: >> > Dear Kh?i, >> > >> > Thanks for your reply. >> > >> > >> > >> > On Tue, Jun 13, 2023 at 9:05?AM Nguy?n H?u Kh?i < >> nguyenhuukhoinw at gmail.com> >> > wrote: >> > >> > > Hello. >> > > Firstly, when I used the classic queue and sometimes, my rabbitmq >> cluster >> > > was broken, the computers showed state down and I needed to restart the >> > > computer service to make it up. Secondly, 1 of 3 controller is down >> but my >> > > system still works although it is not very first as fully controller. >> I ran >> > > it for about 3 months compared with classic. My openstack is Yoga and >> use >> > > Kolla-Ansible as a deployment tool, >> > > Nguyen Huu Khoi >> > > >> > > >> > > On Tue, Jun 13, 2023 at 8:43?AM Sa Pham wrote: >> > > >> > >> Hi Kh?i, >> > >> >> > >> Why do you say using the quorum queue is more stable than the classic >> > >> queue ? >> > >> >> > >> Thanks, >> > >> >> > >> >> > >> >> > >> On Tue, Jun 13, 2023 at 7:26?AM Nguy?n H?u Kh?i < >> > >> nguyenhuukhoinw at gmail.com> wrote: >> > >> >> > >>> Hello Huettner, >> > >>> I have used the quorum queue since March and it is ok until now. It >> > >>> looks more stable than the classic queue. Some feedback to you. >> > >>> Thank you. >> > >>> Nguyen Huu Khoi. >> > >>> >> > >>> >> > >>> >> > >>> On Mon, May 8, 2023 at 1:14?PM Felix H?ttner >> >> > >>> wrote: >> > >>> >> > >>>> Hi Nguyen, >> > >>>> >> > >>>> >> > >>>> >> > >>>> we are using quorum queues for one of our deployments. So fare we >> did >> > >>>> not have any issue with them. They also seem to survive restarts >> without >> > >>>> issues (however reply queues are still broken afterwards in a small >> amount >> > >>>> of cases, but they are no quorum/mirrored queues anyway). >> > >>>> >> > >>>> >> > >>>> >> > >>>> So I would recommend them for everyone that creates a new cluster. 
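A note on the transient queues mentioned in the quote above: until oslo.messaging can declare them as quorum queues out of the box, one stopgap on the RabbitMQ side is a mirroring policy matching the classic transient queues. A rough sketch only — the policy name and the queue pattern are assumptions, adjust them to what your services actually create:

    rabbitmqctl set_policy --apply-to queues ha-transient '^(reply_|.*_fanout_)' \
        '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'

This only mirrors classic queues rather than converting them to quorum queues, and classic mirroring is deprecated upstream, so treat it as a bridge until the oslo.messaging change lands.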
>> > >>>> >> > >>>> >> > >>>> >> > >>>> -- >> > >>>> >> > >>>> Felix Huettner >> > >>>> >> > >>>> >> > >>>> >> > >>>> *From:* Nguy?n H?u Kh?i >> > >>>> *Sent:* Saturday, May 6, 2023 4:29 AM >> > >>>> *To:* OpenStack Discuss >> > >>>> *Subject:* [OPENSTACK][rabbitmq] using quorum queues >> > >>>> >> > >>>> >> > >>>> >> > >>>> Hello guys. >> > >>>> >> > >>>> IS there any guy who uses the quorum queue for openstack? Could you >> > >>>> give some feedback to compare with classic queue? >> > >>>> >> > >>>> Thank you. >> > >>>> >> > >>>> Nguyen Huu Khoi >> > >>>> >> > >>>> Diese E Mail enth?lt m?glicherweise vertrauliche Inhalte und ist nur >> > >>>> f?r die Verwertung durch den vorgesehenen Empf?nger bestimmt. >> > >>>> Sollten Sie nicht der vorgesehene Empf?nger sein, setzen Sie den >> > >>>> Absender bitte unverz?glich in Kenntnis und l?schen diese E Mail. >> > >>>> >> > >>>> Hinweise zum Datenschutz finden Sie hier >> > >>>> . >> > >>>> >> > >>>> >> > >>>> This e-mail may contain confidential content and is intended only >> for >> > >>>> the specified recipient/s. >> > >>>> If you are not the intended recipient, please inform the sender >> > >>>> immediately and delete this e-mail. >> > >>>> >> > >>>> Information on data protection can be found here >> > >>>> . >> > >>>> >> > >>> >> > >> >> > >> -- >> > >> Sa Pham Dang >> > >> Skype: great_bn >> > >> Phone/Telegram: 0986.849.582 >> > >> >> > >> >> > >> >> > >> > -- >> > Sa Pham Dang >> > Skype: great_bn >> > Phone/Telegram: 0986.849.582 >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Sun Jun 18 08:55:13 2023 From: arnaud.morin at gmail.com (Arnaud) Date: Sun, 18 Jun 2023 10:55:13 +0200 Subject: [nova] update_resources_interval parameter In-Reply-To: References: Message-ID: <60927B12-C22C-4F9D-BB4F-0B547E166DF9@gmail.com> Hello, you mean this is the reason why the transient queues are not HA? (And thus not quorum when using quorums). This is still a problem to me, when you lose a rabbit from the cluster. The probability to lose messages is very high. So the probability to have an instance in error is also very high (as a public cloud provider, our api is used a lot, there is a lot of creation/deletion of instances). Enabling HA for transient queues is helping in that situation. Le 16 juin 2023 16:06:49 GMT+02:00, "Nguy?n H?u Kh?i" a ?crit?: >Hello. >I think that is the reason why people don't use all quorum queue. > >https://bugs.launchpad.net/openstack-ansible/+bug/1607830 > >Nguyen Huu Khoi > > >On Thu, Jun 15, 2023 at 4:55?PM Arnaud Morin wrote: > >> Hey team, >> >> I'd like to understand the stakes behing the update_resources_interval >> parameter ([1]) >> >> We decided on our side to increase this value from 60sec to 600sec (see >> [2]). >> >> What I understand is that is will "delay" the update of metrics on nova >> side. >> I mostly think that these metrics are used by filter scheduler to select >> the best host when scheduling. >> >> Is there anything else it can affect? >> >> Cheers, >> Arnaud. >> >> >> [1] >> https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.update_resources_interval >> [2] https://review.opendev.org/c/openstack/large-scale/+/886166 >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maksim.malchuk at gmail.com Sun Jun 18 21:13:29 2023 From: maksim.malchuk at gmail.com (Maksim Malchuk) Date: Mon, 19 Jun 2023 00:13:29 +0300 Subject: [kolla] rabbitmq failed to build image using 2023.1 release In-Reply-To: References: Message-ID: Satish, you should wait until the issue is fixed (the related patch is on the review) and new images built. On Sun, Jun 18, 2023 at 5:53?AM Satish Patel wrote: > Great! Michal, > > How did we miss this bug in CI ? > > I did install kolla using the "python3 -m pip install kolla==16.0.0" > command. In this case, how do I upgrade it to fix bugs? Should I do a pip > install --upgrade kolla? > > On Mon, Jun 12, 2023 at 6:15?AM Michal Arbet > wrote: > >> https://review.opendev.org/c/openstack/kolla/+/885857 >> Michal Arbet >> Openstack Engineer >> >> Ultimum Technologies a.s. >> Na Po???? 1047/26, 11000 Praha 1 >> Czech Republic >> >> +420 604 228 897 >> michal.arbet at ultimum.io >> *https://ultimum.io * >> >> LinkedIn | >> Twitter | Facebook >> >> >> >> po 12. 6. 2023 v 11:48 odes?latel Michal Arbet >> napsal: >> >>> APT dependencies broken :( >>> Michal Arbet >>> Openstack Engineer >>> >>> Ultimum Technologies a.s. >>> Na Po???? 1047/26, 11000 Praha 1 >>> Czech Republic >>> >>> +420 604 228 897 >>> michal.arbet at ultimum.io >>> *https://ultimum.io * >>> >>> LinkedIn | >>> Twitter | Facebook >>> >>> >>> >>> ne 11. 6. 2023 v 4:37 odes?latel Satish Patel >>> napsal: >>> >>>> Folks, >>>> >>>> Do you know how to solve this? I am using release 2023.1 of kolla to >>>> build images using ubuntu 22.04 >>>> >>>> root at docker-reg:/opt/kolla/etc/kolla# kolla-build --registry >>>> docker-reg:4000 --config-file kolla-build.conf --debug --threads 1 >>>> --skip-existing --push --cache --format none rabbitmq >>>> INFO:kolla.common.utils:Using engine: docker >>>> INFO:kolla.common.utils:Found the container image folder at >>>> /usr/local/share/kolla/docker >>>> INFO:kolla.common.utils:Added image rabbitmq to queue >>>> INFO:kolla.common.utils:Attempt number: 1 to run task: >>>> BuildTask(rabbitmq) >>>> DEBUG:kolla.common.utils.rabbitmq:Processing >>>> INFO:kolla.common.utils.rabbitmq:Building started at 2023-06-11 >>>> 02:25:44.208880 >>>> DEBUG:kolla.common.utils.rabbitmq:Turned 0 plugins into plugins archive >>>> DEBUG:kolla.common.utils.rabbitmq:Turned 0 additions into additions >>>> archive >>>> INFO:kolla.common.utils.rabbitmq:Step 1/11 : FROM >>>> docker-reg:4000/kolla/base:2023.1 >>>> INFO:kolla.common.utils.rabbitmq: ---> 4551f4af8ddf >>>> INFO:kolla.common.utils.rabbitmq:Step 2/11 : LABEL maintainer="Kolla >>>> Project (https://launchpad.net/kolla)" name="rabbitmq" >>>> build-date="20230611" >>>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>>> INFO:kolla.common.utils.rabbitmq: ---> 6c2ef10499f7 >>>> INFO:kolla.common.utils.rabbitmq:Step 3/11 : RUN usermod --append >>>> --home /var/lib/rabbitmq --groups kolla rabbitmq && mkdir -p >>>> /var/lib/rabbitmq && chown -R 42439:42439 /var/lib/rabbitmq >>>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>>> INFO:kolla.common.utils.rabbitmq: ---> 29ef8940f40b >>>> INFO:kolla.common.utils.rabbitmq:Step 4/11 : RUN echo 'Uris: >>>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu' >>>> >/etc/apt/sources.list.d/erlang.sources && echo 'Components: main' >>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Types: deb' >>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Suites: jammy' >>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Signed-By: >>>> 
/etc/kolla/apt-keys/erlang-ppa.gpg' >>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Uris: >>>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu' >>>> >/etc/apt/sources.list.d/rabbitmq.sources && echo 'Components: main' >>>> >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Types: deb' >>>> >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Suites: jammy' >>>> >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Signed-By: >>>> /etc/kolla/apt-keys/rabbitmq.gpg' >>/etc/apt/sources.list.d/rabbitmq.sources >>>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>>> INFO:kolla.common.utils.rabbitmq: ---> 6d92a7342a90 >>>> INFO:kolla.common.utils.rabbitmq:Step 5/11 : RUN apt-get --error-on=any >>>> update && apt-get -y install --no-install-recommends logrotate >>>> rabbitmq-server && apt-get clean && rm -rf /var/lib/apt/lists/* >>>> INFO:kolla.common.utils.rabbitmq: ---> Running in 0deab7961445 >>>> INFO:kolla.common.utils.rabbitmq:Get:1 >>>> http://ubuntu-cloud.archive.canonical.com/ubuntu >>>> jammy-updates/antelope InRelease [5,463 B] >>>> INFO:kolla.common.utils.rabbitmq:Get:2 http://archive.ubuntu.com/ubuntu >>>> jammy-backports InRelease [108 kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:3 >>>> http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [3,447 B] >>>> INFO:kolla.common.utils.rabbitmq:Get:7 >>>> http://ubuntu-cloud.archive.canonical.com/ubuntu >>>> jammy-updates/antelope/main amd64 Packages [126 kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:8 >>>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >>>> jammy InRelease [5,152 B] >>>> INFO:kolla.common.utils.rabbitmq:Get:9 >>>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu jammy >>>> InRelease [18.1 kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:5 http://mirror.siena.edu/ubuntu >>>> jammy-updates InRelease [119 kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:10 >>>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >>>> jammy/main amd64 Packages [9,044 B] >>>> INFO:kolla.common.utils.rabbitmq:Get:6 http://ftp.usf.edu/pub/ubuntu >>>> jammy-security InRelease [110 kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:11 >>>> http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 >>>> Packages [27.0 kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:4 >>>> https://archive.linux.duke.edu/ubuntu jammy InRelease [270 kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:12 >>>> http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages >>>> [49.4 kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:14 http://ubuntu.osuosl.org/ubuntu >>>> jammy-updates/main amd64 Packages [857 kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:17 >>>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu >>>> jammy/main amd64 Packages [8,167 B] >>>> INFO:kolla.common.utils.rabbitmq:Get:16 >>>> http://pubmirrors.dal.corespace.com/ubuntu jammy-security/universe >>>> amd64 Packages [928 kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:15 >>>> https://atl.mirrors.clouvider.net/ubuntu jammy-security/main amd64 >>>> Packages [575 kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:19 >>>> http://www.club.cc.cmu.edu/pub/ubuntu jammy/main amd64 Packages [1,792 >>>> kB] >>>> INFO:kolla.common.utils.rabbitmq:Get:18 >>>> http://mirror.team-cymru.com/ubuntu jammy/universe amd64 Packages >>>> [17.5 MB] >>>> INFO:kolla.common.utils.rabbitmq:Get:13 >>>> http://mirrors.syringanetworks.net/ubuntu-archive >>>> jammy-updates/universe amd64 Packages [1,176 kB] >>>> INFO:kolla.common.utils.rabbitmq:Fetched 23.7 MB 
in 6s (4,091 kB/s) >>>> INFO:kolla.common.utils.rabbitmq:Reading package lists... >>>> INFO:kolla.common.utils.rabbitmq:Reading package lists... >>>> INFO:kolla.common.utils.rabbitmq:Building dependency tree... >>>> INFO:kolla.common.utils.rabbitmq:Reading state information... >>>> INFO:kolla.common.utils.rabbitmq:Some packages could not be installed. >>>> This may mean that you have >>>> INFO:kolla.common.utils.rabbitmq:requested an impossible situation or >>>> if you are using the unstable >>>> INFO:kolla.common.utils.rabbitmq:distribution that some required >>>> packages have not yet been created >>>> INFO:kolla.common.utils.rabbitmq:or been moved out of Incoming. >>>> INFO:kolla.common.utils.rabbitmq:The following information may help to >>>> resolve the situation: >>>> INFO:kolla.common.utils.rabbitmq:The following packages have unmet >>>> dependencies: >>>> INFO:kolla.common.utils.rabbitmq: rabbitmq-server : Depends: >>>> erlang-base (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>> installed or >>>> INFO:kolla.common.utils.rabbitmq: >>>> erlang-base-hipe (< 1:26.0) but it is not installable or >>>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>>> (< 1:26.0) but it is not installable >>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>> erlang-crypto (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>> installed or >>>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>>> (< 1:26.0) but it is not installable >>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>> erlang-eldap (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>> installed or >>>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>>> (< 1:26.0) but it is not installable >>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>> erlang-inets (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>> installed or >>>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>>> (< 1:26.0) but it is not installable >>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>> erlang-mnesia (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>> installed or >>>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>>> (< 1:26.0) but it is not installable >>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>> erlang-os-mon (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>> installed or >>>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>>> (< 1:26.0) but it is not installable >>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>> erlang-parsetools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>> installed or >>>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>>> (< 1:26.0) but it is not installable >>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>> erlang-public-key (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>> installed or >>>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>>> (< 1:26.0) but it is not installable >>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>> erlang-runtime-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to >>>> be installed or >>>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>>> (< 1:26.0) but it is not installable >>>> INFO:kolla.common.utils.rabbitmq: Depends: erlang-ssl >>>> (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or >>>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>>> (< 1:26.0) but it is not installable >>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>> erlang-syntax-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to >>>> be installed or >>>> INFO:kolla.common.utils.rabbitmq: esl-erlang >>>> (< 1:26.0) but it 
is not installable
>>>> INFO:kolla.common.utils.rabbitmq: Depends: erlang-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
>>>> INFO:kolla.common.utils.rabbitmq: esl-erlang (< 1:26.0) but it is not installable
>>>> INFO:kolla.common.utils.rabbitmq: Depends: erlang-xmerl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be installed or
>>>> INFO:kolla.common.utils.rabbitmq: esl-erlang (< 1:26.0) but it is not installable
>>>> INFO:kolla.common.utils.rabbitmq:E: Unable to correct problems, you have held broken packages.
>>>> INFO:kolla.common.utils.rabbitmq:
>>>

--
Regards,
Maksim Malchuk

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jamesleong123098 at gmail.com Sun Jun 18 22:33:59 2023
From: jamesleong123098 at gmail.com (James Leong)
Date: Sun, 18 Jun 2023 17:33:59 -0500
Subject: [kolla-ansible][zun] Deleting container within code without authentication
Message-ID:

Hi all,
I am currently playing around with the Zun component. I have deployed the
yoga version of OpenStack using kolla-ansible. Is there a way to delete a
container from within the code base, given the container id? It seems that
without the proper context or authentication information I am not able to
remove the container when calling the delete function in
zun/api/controllers/v1/containers.py, since the compute_api will not work.
Would it be possible to create fake admin info within the code to force the
container to be deleted? Apart from that, how can I create a compute API
object from scratch?

Best,
Jame

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nguyenhuukhoinw at gmail.com Sun Jun 18 23:51:18 2023
From: nguyenhuukhoinw at gmail.com (Nguyễn Hữu Khôi)
Date: Mon, 19 Jun 2023 06:51:18 +0700
Subject: [nova] update_resources_interval parameter
In-Reply-To: <60927B12-C22C-4F9D-BB4F-0B547E166DF9@gmail.com>
References:
Message-ID:

Hello.
I replied via the RabbitMQ quorum topic, but we can also set this as a
workaround:

[oslo_messaging_rabbit]
kombu_reconnect_delay = 0.5

Nguyen Huu Khoi

On Sun, Jun 18, 2023 at 3:55 PM Arnaud wrote:

> Hello, you mean this is the reason why the transient queues are not HA?
> (And thus not quorum when using quorums).
>
> This is still a problem to me: when you lose a rabbit from the cluster,
> the probability of losing messages is very high, so the probability of
> having an instance in error is also very high (as a public cloud
> provider, our API is used a lot; there is a lot of creation/deletion of
> instances).
>
> Enabling HA for transient queues is helping in that situation.
>
> Le 16 juin 2023 16:06:49 GMT+02:00, "Nguyễn Hữu Khôi"
> <nguyenhuukhoinw at gmail.com> wrote:
>> Hello.
>> I think that is the reason why people don't use quorum for all queues.
>>
>> https://bugs.launchpad.net/openstack-ansible/+bug/1607830
>>
>> Nguyen Huu Khoi
>>
>> On Thu, Jun 15, 2023 at 4:55 PM Arnaud Morin wrote:
>>
>>> Hey team,
>>>
>>> I'd like to understand the stakes behind the update_resources_interval
>>> parameter ([1])
>>>
>>> We decided on our side to increase this value from 60sec to 600sec
>>> (see [2]).
>>>
>>> What I understand is that it will "delay" the update of metrics on the
>>> nova side.
>>> I mostly think that these metrics are used by the filter scheduler to
>>> select the best host when scheduling.
>>>
>>> Is there anything else it can affect?
>>>
>>> Cheers,
>>> Arnaud.
>>>
>>> [1]
>>> https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.update_resources_interval
>>> [2] https://review.opendev.org/c/openstack/large-scale/+/886166
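On the [kolla-ansible][zun] question above: rather than faking token data by hand, the usual OpenStack pattern is to build an admin context and call the compute API with it directly. A rough, untested sketch against the yoga layout — the helper name and the method signatures here are assumptions, so check zun/common/context.py and zun/compute/api.py before relying on it:

    from zun.common import context as zun_context
    from zun.compute import api as compute_api
    from zun import objects

    # Admin context that bypasses keystone; if get_admin_context() does
    # not exist in your tree, constructing RequestContext(is_admin=True)
    # directly should be equivalent.
    ctx = zun_context.get_admin_context()

    # container_id is the UUID you already have.
    container = objects.Container.get_by_uuid(ctx, container_id)
    compute_api.API(ctx).container_delete(ctx, container, force=True)

Because this goes underneath the REST controller, the policy checks in containers.py are skipped entirely, which is exactly why it belongs in a one-off maintenance script rather than in the request path.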
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nguyenhuukhoinw at gmail.com Mon Jun 19 00:02:34 2023
From: nguyenhuukhoinw at gmail.com (Nguyễn Hữu Khôi)
Date: Mon, 19 Jun 2023 07:02:34 +0700
Subject: [OPENSTACK][rabbitmq] using quorum queues
In-Reply-To:
References:
Message-ID:

Hello,
With kolla-ansible, edit /etc/kolla/config/global.conf and add:

[oslo_messaging_rabbit]
rabbit_quorum_queue = true

but you need to destroy the existing RabbitMQ state (remove the rabbitmq
container and its volume) and redeploy a new one to make quorum queues
work.

Nguyen Huu Khoi

On Sun, Jun 18, 2023 at 9:04 AM Satish Patel wrote:

> Great! This is good to know that Quorum is a good solution.
>
> Do you have a config to enable in kolla-ansible deployment?
>
> On Thu, Jun 15, 2023 at 5:43 AM Arnaud Morin wrote:
>
>> Hey,
>>
>> We are also using quorum in some regions and plan to enable quorum
>> everywhere.
>>
>> Note that we also manage to enable quorum for transient queues (using a
>> custom patch, as it's not doable with current oslo.messaging; see my
>> request in [1]).
>> We also introduced some custom changes in py-amqp to handle the rabbit
>> disconnections correctly (see [2] and [3]).
>>
>> So far, the real improvement is achieved thanks to the combination of
>> all of these changes; enabling quorum queues alone was not enough for
>> us to notice any improvement.
>>
>> The downside of quorum queues is that they consume more power on the
>> rabbit cluster: you need more IO, CPU, RAM and network bandwidth for
>> the same number of queues (see [4]).
>> It has to be taken into account.
>>
>> Cheers,
>> Arnaud.
>>
>> [1]
>> https://lists.openstack.org/pipermail/openstack-discuss/2023-April/033343.html
>> [2] https://github.com/celery/py-amqp/pull/410
>> [3] https://github.com/celery/py-amqp/pull/405
>> [4]
>> https://plik.ovh/file/nHCny7psDCTrEm76/Pq9ASO9wUd8HRk4C/s_1686817000.png
>>
>> On 13.06.23 - 09:14, Sa Pham wrote:
>> > Dear Khôi,
>> >
>> > Thanks for your reply.
>> >
>> > On Tue, Jun 13, 2023 at 9:05 AM Nguyễn Hữu Khôi
>> > <nguyenhuukhoinw at gmail.com> wrote:
>> >
>> > > Hello.
>> > > Firstly, when I used the classic queue my rabbitmq cluster sometimes
>> > > broke: the computes showed state down and I needed to restart the
>> > > compute service to bring them up. Secondly, when 1 of 3 controllers
>> > > is down my system still works, although it is not as fast as with
>> > > all controllers up. I ran it for about 3 months compared with
>> > > classic. My openstack is Yoga and uses Kolla-Ansible as a
>> > > deployment tool.
>> > > Nguyen Huu Khoi
>> > >
>> > > On Tue, Jun 13, 2023 at 8:43 AM Sa Pham wrote:
>> > >
>> > >> Hi Khôi,
>> > >>
>> > >> Why do you say using the quorum queue is more stable than the
>> > >> classic queue?
>> > >>
>> > >> Thanks,
>> > >>
>> > >> On Tue, Jun 13, 2023 at 7:26 AM Nguyễn Hữu Khôi
>> > >> <nguyenhuukhoinw at gmail.com> wrote:
>> > >>
>> > >>> Hello Huettner,
>> > >>> I have used the quorum queue since March and it is ok until now.
>> > >>> It looks more stable than the classic queue. Some feedback to you.
>> > >>> Thank you.
>> > >>> Nguyen Huu Khoi.
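Spelled out, the override above works because kolla-ansible merges /etc/kolla/config/global.conf into the oslo.config files of every service, and a wipe is needed because an existing queue can never change its type in place. The workflow is roughly as follows (container and volume names assume a default kolla deployment; please test on a non-production cluster first):

    # /etc/kolla/config/global.conf
    [oslo_messaging_rabbit]
    rabbit_quorum_queue = true

    # on each controller, drop the old RabbitMQ state
    docker stop rabbitmq && docker rm rabbitmq && docker volume rm rabbitmq

    # redeploy rabbitmq and push the new config to all services
    kolla-ansible -i <inventory> deploy --tags rabbitmq
    kolla-ansible -i <inventory> reconfigure

Afterwards, docker exec rabbitmq rabbitmqctl list_queues name type | grep quorum should show the durable queues recreated as quorum queues.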
>> > >>> >> > >>> >> > >>> >> > >>> On Mon, May 8, 2023 at 1:14?PM Felix H?ttner >> >> > >>> wrote: >> > >>> >> > >>>> Hi Nguyen, >> > >>>> >> > >>>> >> > >>>> >> > >>>> we are using quorum queues for one of our deployments. So fare we >> did >> > >>>> not have any issue with them. They also seem to survive restarts >> without >> > >>>> issues (however reply queues are still broken afterwards in a >> small amount >> > >>>> of cases, but they are no quorum/mirrored queues anyway). >> > >>>> >> > >>>> >> > >>>> >> > >>>> So I would recommend them for everyone that creates a new cluster. >> > >>>> >> > >>>> >> > >>>> >> > >>>> -- >> > >>>> >> > >>>> Felix Huettner >> > >>>> >> > >>>> >> > >>>> >> > >>>> *From:* Nguy?n H?u Kh?i >> > >>>> *Sent:* Saturday, May 6, 2023 4:29 AM >> > >>>> *To:* OpenStack Discuss >> > >>>> *Subject:* [OPENSTACK][rabbitmq] using quorum queues >> > >>>> >> > >>>> >> > >>>> >> > >>>> Hello guys. >> > >>>> >> > >>>> IS there any guy who uses the quorum queue for openstack? Could you >> > >>>> give some feedback to compare with classic queue? >> > >>>> >> > >>>> Thank you. >> > >>>> >> > >>>> Nguyen Huu Khoi >> > >>>> >> > >>>> Diese E Mail enth?lt m?glicherweise vertrauliche Inhalte und ist >> nur >> > >>>> f?r die Verwertung durch den vorgesehenen Empf?nger bestimmt. >> > >>>> Sollten Sie nicht der vorgesehene Empf?nger sein, setzen Sie den >> > >>>> Absender bitte unverz?glich in Kenntnis und l?schen diese E Mail. >> > >>>> >> > >>>> Hinweise zum Datenschutz finden Sie hier >> > >>>> . >> > >>>> >> > >>>> >> > >>>> This e-mail may contain confidential content and is intended only >> for >> > >>>> the specified recipient/s. >> > >>>> If you are not the intended recipient, please inform the sender >> > >>>> immediately and delete this e-mail. >> > >>>> >> > >>>> Information on data protection can be found here >> > >>>> . >> > >>>> >> > >>> >> > >> >> > >> -- >> > >> Sa Pham Dang >> > >> Skype: great_bn >> > >> Phone/Telegram: 0986.849.582 >> > >> >> > >> >> > >> >> > >> > -- >> > Sa Pham Dang >> > Skype: great_bn >> > Phone/Telegram: 0986.849.582 >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Mon Jun 19 01:23:11 2023 From: tkajinam at redhat.com (Takashi Kajinami) Date: Mon, 19 Jun 2023 10:23:11 +0900 Subject: [puppet] Summary of discussions at PTG Vancouver June 2023 Message-ID: Hello, Thank you all for joining our PTG session ! It was really nice to meet some of the team members in person. The etherpad for the discussion can be found in the link below, but I'll share a summary of our discussions in this email. In case you have any questions/concerns then feel free to let me know. https://etherpad.opendev.org/p/vancouver-june2023-puppet-openstack - Status update - We share the status of each person. - Unfortunately all of the members attending the discussion have limited resources especially for development - *Agreement:* we focus on priorities and de-prioritize items which does not cause immediate problems/breackages - *Agreement: *we ensure we finish the prioritized items in a specific release to keep our maintenance simple - Puppet 8 support - We added unit/lint tests with Puppet 8 for early testing, but adding integration tests is currently blocked by some ruby dependencies not yet available for Ruby 3.1 which is required by Puppet 8. 
- All of the operating system versions do not provide Ruby 3.1 now - Puppet 7 EOL is not yet declared - *Agreement*: We leave this as non-priority for now and re-work on it once Ruby 3.1 is globally available - Adaptation to puppetlab-stdlib 9.0.0 - Some deprecated items were removed. We adapted our modules but are still waiting for update in the dependent modules - validate_legacy was deprecated and causes large warning no*w* - Functions from stdlib should be now namespace-d to avoid warnings after bump is done - *Agreement:* We prioritize replacing validate_legacy by typed parameters - *Agreement*: We pin stdlib to an older version for now but attempt to bump it early - Module modernizations - Typed parameters - Replacing validate_legacy needs to be prioritized now to adapt to puppetlab-stdlib 9.0.0 - Implementing type validations for openstack config options require further discussions. Handling of os_service_defualt would be the main topic we have to sort - We prefer consistent implementations for all openstack service modules, while we can attempt some changes early in a few "independent" modules such as extras, vswitch, qdr - *Agreement: *We de-prioritize implementing validations for config options - *Agreement: *We ensure implementations is distributed to all modules consistently in a single release - Hieradata - This is "modern" design pattern, and it's ideal to replace legacy params class by it - However we don't have urgent requirement to complete this work - We have to create the common structure for hieradata files maintained in each repositories - Some concerns have been raised mainly how we can pick up some values once this change is made - *Agreement: *We de-prioritize this work for now - *Agreement: *We can start with the flat files (pattern 1 in https://etherpad.opendev.org/p/puppet-hieradata-structure ) - *Agreement: *tkajinam will submit a few examples and we review how this impact the existing usage - *Agreement: *Similarly to typed parameters, we should coordinate this work to make the change consistently - Making distro/version specific logic selectable by parameters - This work is tightly related to the hieradata work - *Agreement: *We basically leave this work until the above hieradata work is completed, and de-prioritize this. - Branch retirements - We have multiple branches open now and aim to reduce number of branches - Red Hat is interested in keeping train open until 2023Q3 (tentative) and wallaby for some times for donwstream - Others do not have requirement to maintain old releases in EM status - *Agreement: *We aim to retire train/ussuri/branch after 2023Q3. Retiring further branches is subject to future discussions Thank you, Takashi Kajinami -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Mon Jun 19 02:55:43 2023 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 18 Jun 2023 22:55:43 -0400 Subject: [kolla] rabbitmq failed to build image using 2023.1 release In-Reply-To: References: Message-ID: Ok, thanks for the update. Just for my knowledge, how do I check it in review or in the build process. I saw the patch was merged so thought it must have kicked the gate and built images. On Sun, Jun 18, 2023 at 5:13?PM Maksim Malchuk wrote: > Satish, you should wait until the issue is fixed (the related patch is on > the review) and new images built. > > On Sun, Jun 18, 2023 at 5:53?AM Satish Patel wrote: > >> Great! Michal, >> >> How did we miss this bug in CI ? 
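On checking whether a merged fix has reached what you installed: a merged patch only lands on PyPI with the next point release, so compare versions first. A rough sketch of the pip side (16.0.0 and the branch name are just examples):

    pip show kolla              # what is installed
    pip index versions kolla    # what has been released
    # until a release containing the fix is cut, install from the stable branch:
    pip install --upgrade git+https://opendev.org/openstack/kolla@stable/2023.1

As for CI missing it: the breakage came from an external Erlang PPA publishing a newer version, not from any change in kolla itself, so nothing exercised the broken dependency until later jobs ran.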
>> >> I did install kolla using the "python3 -m pip install kolla==16.0.0" >> command. In this case, how do I upgrade it to fix bugs? Should I do a pip >> install --upgrade kolla? >> >> On Mon, Jun 12, 2023 at 6:15?AM Michal Arbet >> wrote: >> >>> https://review.opendev.org/c/openstack/kolla/+/885857 >>> Michal Arbet >>> Openstack Engineer >>> >>> Ultimum Technologies a.s. >>> Na Po???? 1047/26, 11000 Praha 1 >>> Czech Republic >>> >>> +420 604 228 897 >>> michal.arbet at ultimum.io >>> *https://ultimum.io * >>> >>> LinkedIn | >>> Twitter | Facebook >>> >>> >>> >>> po 12. 6. 2023 v 11:48 odes?latel Michal Arbet >>> napsal: >>> >>>> APT dependencies broken :( >>>> Michal Arbet >>>> Openstack Engineer >>>> >>>> Ultimum Technologies a.s. >>>> Na Po???? 1047/26, 11000 Praha 1 >>>> Czech Republic >>>> >>>> +420 604 228 897 >>>> michal.arbet at ultimum.io >>>> *https://ultimum.io * >>>> >>>> LinkedIn | >>>> Twitter | Facebook >>>> >>>> >>>> >>>> ne 11. 6. 2023 v 4:37 odes?latel Satish Patel >>>> napsal: >>>> >>>>> Folks, >>>>> >>>>> Do you know how to solve this? I am using release 2023.1 of kolla to >>>>> build images using ubuntu 22.04 >>>>> >>>>> root at docker-reg:/opt/kolla/etc/kolla# kolla-build --registry >>>>> docker-reg:4000 --config-file kolla-build.conf --debug --threads 1 >>>>> --skip-existing --push --cache --format none rabbitmq >>>>> INFO:kolla.common.utils:Using engine: docker >>>>> INFO:kolla.common.utils:Found the container image folder at >>>>> /usr/local/share/kolla/docker >>>>> INFO:kolla.common.utils:Added image rabbitmq to queue >>>>> INFO:kolla.common.utils:Attempt number: 1 to run task: >>>>> BuildTask(rabbitmq) >>>>> DEBUG:kolla.common.utils.rabbitmq:Processing >>>>> INFO:kolla.common.utils.rabbitmq:Building started at 2023-06-11 >>>>> 02:25:44.208880 >>>>> DEBUG:kolla.common.utils.rabbitmq:Turned 0 plugins into plugins archive >>>>> DEBUG:kolla.common.utils.rabbitmq:Turned 0 additions into additions >>>>> archive >>>>> INFO:kolla.common.utils.rabbitmq:Step 1/11 : FROM >>>>> docker-reg:4000/kolla/base:2023.1 >>>>> INFO:kolla.common.utils.rabbitmq: ---> 4551f4af8ddf >>>>> INFO:kolla.common.utils.rabbitmq:Step 2/11 : LABEL maintainer="Kolla >>>>> Project (https://launchpad.net/kolla)" name="rabbitmq" >>>>> build-date="20230611" >>>>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>>>> INFO:kolla.common.utils.rabbitmq: ---> 6c2ef10499f7 >>>>> INFO:kolla.common.utils.rabbitmq:Step 3/11 : RUN usermod --append >>>>> --home /var/lib/rabbitmq --groups kolla rabbitmq && mkdir -p >>>>> /var/lib/rabbitmq && chown -R 42439:42439 /var/lib/rabbitmq >>>>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>>>> INFO:kolla.common.utils.rabbitmq: ---> 29ef8940f40b >>>>> INFO:kolla.common.utils.rabbitmq:Step 4/11 : RUN echo 'Uris: >>>>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu' >>>>> >/etc/apt/sources.list.d/erlang.sources && echo 'Components: main' >>>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Types: deb' >>>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Suites: jammy' >>>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Signed-By: >>>>> /etc/kolla/apt-keys/erlang-ppa.gpg' >>>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Uris: >>>>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu' >>>>> >/etc/apt/sources.list.d/rabbitmq.sources && echo 'Components: main' >>>>> >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Types: deb' >>>>> >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Suites: jammy' >>>>> 
>>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Signed-By: >>>>> /etc/kolla/apt-keys/rabbitmq.gpg' >>/etc/apt/sources.list.d/rabbitmq.sources >>>>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>>>> INFO:kolla.common.utils.rabbitmq: ---> 6d92a7342a90 >>>>> INFO:kolla.common.utils.rabbitmq:Step 5/11 : RUN apt-get >>>>> --error-on=any update && apt-get -y install --no-install-recommends >>>>> logrotate rabbitmq-server && apt-get clean && rm -rf /var/lib/apt/lists/* >>>>> INFO:kolla.common.utils.rabbitmq: ---> Running in 0deab7961445 >>>>> INFO:kolla.common.utils.rabbitmq:Get:1 >>>>> http://ubuntu-cloud.archive.canonical.com/ubuntu >>>>> jammy-updates/antelope InRelease [5,463 B] >>>>> INFO:kolla.common.utils.rabbitmq:Get:2 >>>>> http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:3 >>>>> http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [3,447 B] >>>>> INFO:kolla.common.utils.rabbitmq:Get:7 >>>>> http://ubuntu-cloud.archive.canonical.com/ubuntu >>>>> jammy-updates/antelope/main amd64 Packages [126 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:8 >>>>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >>>>> jammy InRelease [5,152 B] >>>>> INFO:kolla.common.utils.rabbitmq:Get:9 >>>>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu >>>>> jammy InRelease [18.1 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:5 http://mirror.siena.edu/ubuntu >>>>> jammy-updates InRelease [119 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:10 >>>>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >>>>> jammy/main amd64 Packages [9,044 B] >>>>> INFO:kolla.common.utils.rabbitmq:Get:6 http://ftp.usf.edu/pub/ubuntu >>>>> jammy-security InRelease [110 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:11 >>>>> http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 >>>>> Packages [27.0 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:4 >>>>> https://archive.linux.duke.edu/ubuntu jammy InRelease [270 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:12 >>>>> http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages >>>>> [49.4 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:14 >>>>> http://ubuntu.osuosl.org/ubuntu jammy-updates/main amd64 Packages >>>>> [857 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:17 >>>>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu >>>>> jammy/main amd64 Packages [8,167 B] >>>>> INFO:kolla.common.utils.rabbitmq:Get:16 >>>>> http://pubmirrors.dal.corespace.com/ubuntu jammy-security/universe >>>>> amd64 Packages [928 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:15 >>>>> https://atl.mirrors.clouvider.net/ubuntu jammy-security/main amd64 >>>>> Packages [575 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:19 >>>>> http://www.club.cc.cmu.edu/pub/ubuntu jammy/main amd64 Packages >>>>> [1,792 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:18 >>>>> http://mirror.team-cymru.com/ubuntu jammy/universe amd64 Packages >>>>> [17.5 MB] >>>>> INFO:kolla.common.utils.rabbitmq:Get:13 >>>>> http://mirrors.syringanetworks.net/ubuntu-archive >>>>> jammy-updates/universe amd64 Packages [1,176 kB] >>>>> INFO:kolla.common.utils.rabbitmq:Fetched 23.7 MB in 6s (4,091 kB/s) >>>>> INFO:kolla.common.utils.rabbitmq:Reading package lists... >>>>> INFO:kolla.common.utils.rabbitmq:Reading package lists... >>>>> INFO:kolla.common.utils.rabbitmq:Building dependency tree... >>>>> INFO:kolla.common.utils.rabbitmq:Reading state information... 
>>>>> INFO:kolla.common.utils.rabbitmq:Some packages could not be installed. >>>>> This may mean that you have >>>>> INFO:kolla.common.utils.rabbitmq:requested an impossible situation or >>>>> if you are using the unstable >>>>> INFO:kolla.common.utils.rabbitmq:distribution that some required >>>>> packages have not yet been created >>>>> INFO:kolla.common.utils.rabbitmq:or been moved out of Incoming. >>>>> INFO:kolla.common.utils.rabbitmq:The following information may help to >>>>> resolve the situation: >>>>> INFO:kolla.common.utils.rabbitmq:The following packages have unmet >>>>> dependencies: >>>>> INFO:kolla.common.utils.rabbitmq: rabbitmq-server : Depends: >>>>> erlang-base (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>> installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> erlang-base-hipe (< 1:26.0) but it is not installable or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-crypto (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>> installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-eldap (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>> installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-inets (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>> installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-mnesia (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>> installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-os-mon (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>> installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-parsetools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>> installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-public-key (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>> installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-runtime-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to >>>>> be installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-ssl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>> installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-syntax-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to >>>>> be installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>> installed or >>>>> INFO:kolla.common.utils.rabbitmq: 
>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>> erlang-xmerl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>> installed or >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> esl-erlang (< 1:26.0) but it is not installable >>>>> INFO:kolla.common.utils.rabbitmq:E: Unable to correct problems, you >>>>> have held broken packages. >>>>> INFO:kolla.common.utils.rabbitmq: >>>>> >>>> > > -- > Regards, > Maksim Malchuk > > -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From tkajinam at redhat.com Mon Jun 19 03:45:34 2023 From: tkajinam at redhat.com (Takashi Kajinami) Date: Mon, 19 Jun 2023 12:45:34 +0900 Subject: [heat] Heat discussions at PTG Vancouver June 2023 Message-ID: 

Hello,

Thank you all who attended the PTG discussions last week at Vancouver. We were small in number, but it was nice to meet/see you in person and discuss how we can improve our projects.

I'll share a summary of the PTG discussions, and also a few more topics related to our projects which were discussed during the Forum sessions. Please let me know if you have any questions or concerns.

PTG sessions:
- We exchanged information about how we use heat downstream. Jonathan and Alan from Georgia Cyber Center explained their quite interesting usage: launching a lab environment for their end users.
- There are still a number of resource properties not yet supported by Heat, but we want to prioritize implementations according to actual needs. We suggested creating a storyboard story in case any user finds a missing property needed to cover their use case.
- Jonathan shared a very nice demo of their visualizer for heat stacks[1], which gives more meaningful visualization based on resource types. We agreed this could replace the current implementation in heat dashboard, and that we'll explore integrating it and deprecating the current implementation.

[1] https://gitlab.com/gacybercenter/open/openstack-top-graph

Forum sessions:
- In the SRBAC sessions the status of individual projects was shared. Heat is lagging behind now and we have to focus on completing step 1 (removing system scope).
- In the TC/Community Leaders communication, we agreed on setting a timeline to bump sqlalchemy to 2.0 (a followup will be sent separately). We removed usage of sqlalchemy-migrate as the first step toward adopting sqlalchemy 2.0, but we still need some work for full adoption. We first need to add some testing, as was already done in Neutron.

Thank you,
Takashi Kajinami
-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From katonalala at gmail.com Mon Jun 19 09:01:52 2023 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 19 Jun 2023 11:01:52 +0200 Subject: [neutron] Bug deputy report (June-12 - June-18) Message-ID: 

Hey Neutrinos,

I was the bug deputy last week, so here is the summary for last week.

GATE ISSUES: these seem to be related to the OVN version bump and https://review.opendev.org/c/openstack/neutron/+/883681:
* Gate: Fuctional test_virtual_port_host_update failes recently (#link https://bugs.launchpad.net/neutron/+bug/2023634 )
** *Unassigned*
* [trunk ports] subport doesn't reach status ACTIVE (#link https://bugs.launchpad.net/neutron/+bug/2024160 )
** *Unassigned*

HIGH
* n-d-r: Peering failing on mixed IPv4 + IPv6 updates (#link https://bugs.launchpad.net/neutron/+bug/2023632 )
** Dr.
Jens Harbott
* [OVN] Hash Ring nodes removed when "periodic worker" is killed (#link https://bugs.launchpad.net/neutron/+bug/2024205 )
** Lucas Alvares Gomes
* Adding a static route to router returns internal server error (#link https://bugs.launchpad.net/neutron/+bug/2024251 )
** *Unassigned*

MEDIUM
* [sqlalchemy-20] Remove redundant indexes of some tables (#link https://bugs.launchpad.net/neutron/+bug/2024044 )
** Rodolfo Alonso
* OVN: Removal of chassis results in unbalanced distribution of LRPs (#link https://bugs.launchpad.net/neutron/+bug/2023993 )
** *Unassigned*
* create_subnet policy allows users to create subnet in the shared networks (#link https://bugs.launchpad.net/neutron/+bug/2023679 )
** Slawek Kaplonski

Lajos Katona (lajoskatona)
-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From maksim.malchuk at gmail.com Mon Jun 19 10:13:33 2023 From: maksim.malchuk at gmail.com (Maksim Malchuk) Date: Mon, 19 Jun 2023 13:13:33 +0300 Subject: [kolla] rabbitmq failed to build image using 2023.1 release In-Reply-To: References: Message-ID: 

The review was merged 2 days ago. The images are built in a weekly manner.

On Mon, Jun 19, 2023 at 5:55 AM Satish Patel wrote: > Ok, thanks for the update. Just for my knowledge, how do I check it in > review or in the build process. I saw the patch was merged so thought it > must have kicked the gate and built images. > > On Sun, Jun 18, 2023 at 5:13 PM Maksim Malchuk > wrote: > >> Satish, you should wait until the issue is fixed (the related patch is >> on the review) and new images built. >> >> On Sun, Jun 18, 2023 at 5:53 AM Satish Patel >> wrote: >> >>> Great! Michal, >>> >>> How did we miss this bug in CI ? >>> >>> I did install kolla using the "python3 -m pip install kolla==16.0.0" >>> command. In this case, how do I upgrade it to fix bugs? Should I do a pip >>> install --upgrade kolla? >>> >>> On Mon, Jun 12, 2023 at 6:15 AM Michal Arbet >>> wrote: >>> >>>> https://review.opendev.org/c/openstack/kolla/+/885857 >>>> Michal Arbet >>>> Openstack Engineer >>>> >>>> Ultimum Technologies a.s. >>>> Na Poříčí 1047/26, 11000 Praha 1 >>>> Czech Republic >>>> >>>> +420 604 228 897 >>>> michal.arbet at ultimum.io >>>> *https://ultimum.io * >>>> >>>> LinkedIn | >>>> Twitter | Facebook >>>> >>>> >>>> >>>> On Mon, Jun 12, 2023 at 11:48, Michal Arbet < >>>> michal.arbet at ultimum.io> wrote: >>>> >>>>> APT dependencies broken :( >>>>> Michal Arbet >>>>> Openstack Engineer >>>>> >>>>> Ultimum Technologies a.s. >>>>> Na Poříčí 1047/26, 11000 Praha 1 >>>>> Czech Republic >>>>> >>>>> +420 604 228 897 >>>>> michal.arbet at ultimum.io >>>>> *https://ultimum.io * >>>>> >>>>> LinkedIn | >>>>> Twitter | Facebook >>>>> >>>>> >>>>> >>>>> On Sun, Jun 11, 2023 at 4:37, Satish Patel >>>>> wrote: >>>>> >>>>>> Folks, >>>>>> >>>>>> Do you know how to solve this?
I am using release 2023.1 of kolla to >>>>>> build images using ubuntu 22.04
>> -- >> Regards, >> Maksim Malchuk >> >> -- Regards, Maksim Malchuk -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From christian.rohmann at inovex.de Mon Jun 19 13:31:50 2023 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Mon, 19 Jun 2023 15:31:50 +0200 Subject: [ceilometer][gnocchi][telemetry] (User) frontend or tool to create reports on telemetry data (GPU, RAM, disk, network, ...) Message-ID: 

Hello OpenStack-Discuss,

I was wondering if there are any simple frontends or similar solutions to provide ceilometer's telemetry data, such as CPU, RAM, volume throughput and IOPS, network throughput, ... to users? Something like a basic horizon dashboard?

Or has anybody written something using the API to create some nice graphs? I know there is Cloudkitty, but that looks more targeted towards doing rating and billing with this kind of data.

Thanks and with regards

Christian

From noonedeadpunk at gmail.com Mon Jun 19 13:51:23 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Mon, 19 Jun 2023 15:51:23 +0200 Subject: [ceilometer][gnocchi][telemetry] (User) frontend or tool to create reports on telemetry data (GPU, RAM, disk, network, ...) In-Reply-To: References: Message-ID: 

I would say that you need to handle that on the publisher side. For Gnocchi there's a Grafana plugin [1] that can potentially be used for presenting data to users. I'm not aware of any direct Gnocchi integration with Horizon, though it would make total sense to me to have such a plugin, and there was even an old spec for that [2].

[1] https://grafana.com/grafana/plugins/gnocchixyz-gnocchi-datasource/
[2] https://blueprints.launchpad.net/horizon/+spec/horizon-gnocchi-graphs

On Mon, Jun 19, 2023 at 15:36, Christian Rohmann wrote:
> > Hello OpenStack-Discuss, > > I was wondering if there are any simple frontends or similar solutions > to provide ceilometer's telemetry data, such as CPU, RAM, volume > throughput and IOPS, network throughput, ... to users? Something like a > basic horizon dashboard? > > Or has anybody written something using the API to create some nice graphs? > I know there is Cloudkitty, but that looks more targeted towards doing > rating and billing with this kind of data. > > > Thanks and with regards > > Christian > >

From roberto.acosta at luizalabs.com Mon Jun 19 15:03:26 2023 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Mon, 19 Jun 2023 12:03:26 -0300 Subject: [neutron] - OVN heartbeat - short agent_down_time Message-ID: 

Hello Neutron folks,

In the Operators feedback session we discussed the OVN heartbeat and the use of "infinity" values for large-scale deployments, because we see a significant infrastructure impact when a short 'agent_down_time' is configured.

The merged patch [1] limited the maximum delay to 10 seconds. I understand the requirement to use random values to avoid load spikes, but why does this fix limit the heartbeat to 10 seconds? What is the goal of the agent_down_time parameter in this case? How will it work for someone who has hundreds of compute nodes / metadata agents?

Regards,
Roberto

[1] - https://review.opendev.org/c/openstack/neutron/+/883687
-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From sbauza at redhat.com Mon Jun 19 16:35:09 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 19 Jun 2023 18:35:09 +0200 Subject: [nova] Summary of discussions at Vancouver June 2023 (PTG + Forum meet&greet) Message-ID: 

Thanks, folks, for joining our PTG and also our meet&greet Forum session! It was again a productive week for us, as we were able to discuss with a lot of operators for around 4 hours (3h30 for the PTG and 30 mins for the Forum session).

Below you'll find the summary of all the topics we discussed during those hours, but you can also look at the etherpads directly:
* Nova meet&greet https://etherpad.opendev.org/p/nova-vancouver2023-meet-and-greet
* Nova PTG sessions https://etherpad.opendev.org/p/vancouver-june2023-nova

While there were unfortunately very few Nova maintainers and developers in both rooms, we saw a packed room for the meet&greet, and then at every hour of the PTG we had at least one operator wanting to discuss with us. Can't tell how much I love this; hopefully we'll continue to discuss this way afterwards.

Now, let me tell you the summary:

=== Short survey at the meet&greet ===
* most of the operators (~21) in the meet&greet were either running Train or Yoga. Some of them were already running 2023.1 Antelope. Wow.
* all of them were running libvirt, except one running the ironic driver.
* using virtual GPUs/mdevs and setting flavors to define CPU usage were the two most-used features (the next one is SR-IOV)
* accordingly, the most needed missing quotas are for PCI devices, obviously.

=== Pain points at the meet&greet ===
* Availability zones: basically, some operators use AZs massively and want to move instances off an AZ (even if the instance is pinned to it) for maintenance reasons. We continued to discuss this usecase at the PTG (see below).
* Update of userdata: we said this was coming with a proposed spec.
* Filtering flavors by a resource: this would require a nova spec.
* Hard affinity problems: we continued to discuss this at the PTG (see below).
* Ceph backend for Nova compute and the need for local ephemeral storage (case of GPUs): this should be discussed in a spec.
* attach/detach issues with BDMs: we said this is a known issue; a bug report has to be filed for proper triaging.
* state of CVEs and the fact that old releases are still impacted: this is known and relates to the state of Extended Maintenance releases, which was addressed by another Forum session.

=== The PTG ===
* Affinity/Anti-affinity migration problems with a hard policy: we explained the consensus we had at the Bobcat vPTG, and a Bloomberg operator kindly volunteered to capture this agreement in a backlog spec (tl;dr: allow operators to violate the policy and let the servergroup API show the violation). More to come hopefully soon.
* Exposing metrics: one operator explained how their current use of a Prometheus exporter is super slow, since it's using the Nova APIs for gathering usage metrics. We proposed that he look into using the Placement APIs for this case; he'll test and eventually come back to us if the existing Placement APIs aren't viable for him.
* memfd-backed memory backing per instance: there are multiple concerns that need to be addressed in a spec. For example, we need to ensure the flavor extraspec is driver-agnostic, we need to care about the defaults and the potential upgrade concerns (like changing from anonymous to memfd), and how to interact with the existing hugepages feature in Nova.
* using virtio9p instead of virtiofs for Manila shares: we discussed it intensively and eventually came to the conclusion that it wasn't worth the effort to implement. Let's revisit this decision by the next PTG if no progress has been made on the virtiofs migration feature gaps.
* Manila/Nova cross-project PTG discussion on Manila share support: we discussed the next Manila features (lock API and use of service roles). We agreed on the mandatory Nova configuration for service roles as a requirement for Manila shares usage. We also agreed on Manila storing the instance UUID as the semaphore for the lock API. FWIW, there was a packed table with 7 operators (all public clouds) *REALLY* interested in this feature [1].

That's it for the Nova-related bits at Vancouver, but other topics are worth a look too. You can find the list of all discussed Forum sessions here: https://wiki.openstack.org/wiki/Forum/Vanvouver2023
I found particular interest in the discussion on potentially abandoning Extended Maintenance, the OSC and SDK sessions, the live-migration usecases and problems, and a couple of others. I'd urge you to glance at all the etherpads' contents.

[1] https://photos.app.goo.gl/t1tSyk67GG6TRRW59

HTH,
-Sylvain
-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From smooney at redhat.com Mon Jun 19 17:01:44 2023 From: smooney at redhat.com (smooney at redhat.com) Date: Mon, 19 Jun 2023 18:01:44 +0100 Subject: [neutron] - OVN heartbeat - short agent_down_time In-Reply-To: References: Message-ID: <112d530c01fd2d8cc80d755753c8baef9112c348.camel@redhat.com> 

On Mon, 2023-06-19 at 12:03 -0300, Roberto Bartzen Acosta wrote:
> Hello Neutron folks,
> 
> In the Operators feedback session we discussed the OVN heartbeat and the
> use of "infinity" values for large-scale deployments, because we see a
> significant infrastructure impact when a short 'agent_down_time' is
> configured.

agent_down_time is intended to specify how long the heartbeat can be missed before the agent is considered down. It was not intended to control the interval at which the heartbeat is sent.

https://opendev.org/openstack/neutron/commit/628442aed7400251f12809a45605bd717f494c4e
introduced a correlation between the two, but it resulted in the agent incorrectly being considered down, and caused port binding failures, if the agent_down_time was set too large.

> 
> The merged patch [1] limited the maximum delay to 10 seconds. I understand
> the requirement to use random values to avoid load spikes, but why does
> this fix limit the heartbeat to 10 seconds? What is the goal of the
> agent_down_time parameter in this case? How will it work for someone who
> has hundreds of compute nodes / metadata agents?

The change in [1] should just change the delay before _update_chassis is invoked; that at least was the intent.
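In rough terms, the handler now does the following; this is a minimal sketch (the class shape and constructor plumbing are assumed for illustration, the real code is in the linked review):

from random import randint
from threading import Timer

from oslo_config import cfg  # agent_down_time is assumed to be registered by the agent


class SbGlobalUpdateEvent(object):
    """Sketch of the SB_Global update event handler."""

    def __init__(self, agent):
        self.agent = agent

    def run(self, event, row, old):
        # Clamp the jitter: at most 10 seconds and at least 3, no matter
        # how large agent_down_time is configured.
        max_delay = max(min(cfg.CONF.agent_down_time // 3, 10), 3)
        delay = randint(0, max_delay)
        # threading.Timer fires the callback exactly once after `delay`
        # seconds; it does not loop, so one event means one chassis update.
        Timer(delay, self.agent._update_chassis, [row]).start()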
We proposed him to look into using Placement APIs for such case, he'll test and eventually come back to us if the existing Placement APIs aren't viable for him. * memfd-backed memory backing per instance : there are multiple concerns that need to be addressed in a spec. For example, we need to ensure the flavor extraspec is driver-agnostic, we need to care about the defaults and the potential upgrade concerns (like changing from anonymous to memfd) and how to interact with the existing hugepages feature in Nova. * using virtio9p instead of virtiofs for Manila shares : we discussed it intensively to eventually come up to the conclusion this wasn't worth the effort to implement it. Let's revisit this decision by the next PTG if no progress has been made on virtiofs migration feature gaps. * Manila/Nova cross-project PTG discussion on Manila share support : we discussed on the next Manila features (lock API and use of service roles). We agreed on the mandatory Nova configuration for service roles as a requirement for Manila shares usage. We also agreed on Manila storing the instance UUID as the semaphore for the lock API. FWIW, there was a packed table with 7 operators (all public clouds) *REALLY* interested in this feature [1]. That's it for Nova-related bits at Vancouver but other topics were worth being discussed. You can find the list of all discussed Forum sessions here : https://wiki.openstack.org/wiki/Forum/Vanvouver2023 I've found particular interest in the ExtendedMaintainance potential abandon discussion, OSC and SDK sessions, live-migration usecases and problems and a couple of others. I'd urge you to glance at all etherpads' contents. [1] https://photos.app.goo.gl/t1tSyk67GG6TRRW59 HTH, -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Jun 19 17:01:44 2023 From: smooney at redhat.com (smooney at redhat.com) Date: Mon, 19 Jun 2023 18:01:44 +0100 Subject: [neutron] - OVN heartbead - short agent_down_time In-Reply-To: References: Message-ID: <112d530c01fd2d8cc80d755753c8baef9112c348.camel@redhat.com> On Mon, 2023-06-19 at 12:03 -0300, Roberto Bartzen Acosta wrote: > Hello Neutron folks, > > We discussed in the Operators feedback session about OVN heartbeat and the > use of "infinity" values for large-scale deployments because we have a > significant infrastructure impact when a short 'agent_down_time' is > configured. agent_down_time is intended to specify how long the heartbeat can be missed before the agent is considered down. it was not intented to contol the interval at which the heatbeat was sent. https://opendev.org/openstack/neutron/commit/628442aed7400251f12809a45605bd717f494c4e intoduced a colation between the two but it resulted in the agent incorrectly being considered down and causing port binding failures if the agent_down_time was set too large. > > The merged patch [1] limited the maximum delay to 10 seconds. I understand > the requirement to use random values to avoid load spikes, but why does > this fix limit the heartbeat to 10 seconds? What is the goal of the > agent_down_time parameter in this case? How will it work for someone who > has hundreds of compute nodes / metadata agents? the change in [1] shoudl just change the delay before _update_chassis is invoked that at least was the intent. 
im expecting the interval between heatbeats to be ratlimaited via the mechim that was used before https://opendev.org/openstack/neutron/commit/628442aed7400251f12809a45605bd717f494c4e?style=split&whitespace=show-all was implemented. i.e. whwen a SbGlobalUpdateEvent is generated now we are clamping the max wait to 10 seconds instead of cfg.CONF.agent_down_time // 2 which was causing port binding failures. the timer object will run the passed in fucntion after the timer interval has expired. https://docs.python.org/3/library/threading.html#timer-objects but it will not re run multiple times and the function we are invoking does not loop internally so only one update will happen per invocation of run. i believe the actual heatbeat/reporting interval is controlled by cfg.CONF.AGENT.report_interval https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/agent/metadata/agent.py#L313-L317 so i think if you want to reduce the interval in a large envionment to can do that by setting [AGENT] report_interval= im not that familiar with this code but that was my original understanding. the sllep before its rerun is calucated in oslo.service https://github.com/openstack/oslo.service/blob/1.38.0/oslo_service/loopingcall.py#L184-L194 https://github.com/openstack/oslo.service/blob/1.38.0/oslo_service/loopingcall.py#L154-L159 the neutron core team can correct me if that is incorrect but i would not expct this to negitivly impact large clouds. > > Regards, > Roberto > > [1] - https://review.opendev.org/c/openstack/neutron/+/883687 > From roberto.acosta at luizalabs.com Mon Jun 19 17:58:53 2023 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Mon, 19 Jun 2023 14:58:53 -0300 Subject: [neutron] - OVN heartbead - short agent_down_time In-Reply-To: <112d530c01fd2d8cc80d755753c8baef9112c348.camel@redhat.com> References: <112d530c01fd2d8cc80d755753c8baef9112c348.camel@redhat.com> Message-ID: Thanks for your feedback Sean. Em seg., 19 de jun. de 2023 ?s 14:01, escreveu: > On Mon, 2023-06-19 at 12:03 -0300, Roberto Bartzen Acosta wrote: > > Hello Neutron folks, > > > > We discussed in the Operators feedback session about OVN heartbeat and > the > > use of "infinity" values for large-scale deployments because we have a > > significant infrastructure impact when a short 'agent_down_time' is > > configured. > agent_down_time is intended to specify how long the heartbeat can be > missed before > the agent is considered down. it was not intented to contol the interval > at which the heatbeat > was sent. > > > https://opendev.org/openstack/neutron/commit/628442aed7400251f12809a45605bd717f494c4e > intoduced a colation between the two but it resulted in the agent > incorrectly being considered down > and causing port binding failures if the agent_down_time was set too large. > > > > The merged patch [1] limited the maximum delay to 10 seconds. I > understand > > the requirement to use random values to avoid load spikes, but why does > > this fix limit the heartbeat to 10 seconds? What is the goal of the > > agent_down_time parameter in this case? How will it work for someone who > > has hundreds of compute nodes / metadata agents? > the change in [1] shoudl just change the delay before _update_chassis is > invoked > that at least was the intent. 
im expecting the interval between heatbeats > to be ratlimaited > via the mechim that was used before > > https://opendev.org/openstack/neutron/commit/628442aed7400251f12809a45605bd717f494c4e?style=split&whitespace=show-all > was implemented. > > i.e. whwen a SbGlobalUpdateEvent is generated now we are clamping the max > wait to 10 seconds instead of > cfg.CONF.agent_down_time // 2 which was causing port binding failures. > > the timer object will run the passed in fucntion after the timer interval > has expired. > > https://docs.python.org/3/library/threading.html#timer-objects > > but it will not re run multiple times and the function we are invoking > does not loop internally > so only one update will happen per invocation of run. > > i believe the actual heatbeat/reporting interval is controlled by > cfg.CONF.AGENT.report_interval > > > https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/agent/metadata/agent.py#L313-L317 > > so i think if you want to reduce the interval in a large envionment to can > do that by setting > > [AGENT] > report_interval= > I agree that the mechanism for sending heartbeats is controlled by report_interval, however, from what I understand, their original idea would be to configure complementary values: report_interval and agent_down_time would be associated with the status of network agents. https://docs.openstack.org/neutron/2023.1/configuration/neutron.html report_interval: "Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time." agent_down_time: "Seconds to regard the agent is down; should be at least twice report_interval, to be sure the agent is down for good." > > im not that familiar with this code but that was my original understanding. > the sllep before its rerun is calucated in oslo.service > > https://github.com/openstack/oslo.service/blob/1.38.0/oslo_service/loopingcall.py#L184-L194 > > https://github.com/openstack/oslo.service/blob/1.38.0/oslo_service/loopingcall.py#L154-L159 > > the neutron core team can correct me if that is incorrect but i would not > expct this to negitivly impact large clouds. > Note 1: My point is that the function SbGlobalUpdateEvent seems to be using the agent_down_time disassociated from the original function ( double / half relation). Note 2: I'm curious to know the behavior of this modification with more than 200 chassis and with thousands of OVN routers. In this case, with many configurations being applied at the same time (a lot of events in SB_Global) and that require the agent running on Chassis to respond the report_interval at the same time as it is transitioning configs (probably millions of openflow flows entries). Is 10 seconds enough? > > > > > Regards, > > Roberto > > > > [1] - https://review.opendev.org/c/openstack/neutron/+/883687 > > > > -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From smooney at redhat.com Mon Jun 19 18:45:31 2023 From: smooney at redhat.com (smooney at redhat.com) Date: Mon, 19 Jun 2023 19:45:31 +0100 Subject: [neutron] - OVN heartbeat - short agent_down_time In-Reply-To: References: Message-ID: <477b9d63db04861c22a66a3e6db2769b099daad3.camel@redhat.com> 

On Mon, 2023-06-19 at 14:58 -0300, Roberto Bartzen Acosta wrote:
> Thanks for your feedback Sean.
> 
> On Mon, Jun 19, 2023 at 14:01, wrote:
> 
> > On Mon, 2023-06-19 at 12:03 -0300, Roberto Bartzen Acosta wrote:
> > > Hello Neutron folks,
> > > 
> > > In the Operators feedback session we discussed the OVN heartbeat and the
> > > use of "infinity" values for large-scale deployments, because we see a
> > > significant infrastructure impact when a short 'agent_down_time' is
> > > configured.
> > agent_down_time is intended to specify how long the heartbeat can be
> > missed before the agent is considered down. It was not intended to
> > control the interval at which the heartbeat is sent.
> > 
> > https://opendev.org/openstack/neutron/commit/628442aed7400251f12809a45605bd717f494c4e
> > introduced a correlation between the two, but it resulted in the agent
> > incorrectly being considered down, and caused port binding failures, if
> > the agent_down_time was set too large.
> > 
> > > The merged patch [1] limited the maximum delay to 10 seconds. I understand
> > > the requirement to use random values to avoid load spikes, but why does
> > > this fix limit the heartbeat to 10 seconds? What is the goal of the
> > > agent_down_time parameter in this case? How will it work for someone who
> > > has hundreds of compute nodes / metadata agents?
> > The change in [1] should just change the delay before _update_chassis is
> > invoked; that at least was the intent. I'm expecting the interval between
> > heartbeats to be rate-limited via the mechanism that was used before
> > https://opendev.org/openstack/neutron/commit/628442aed7400251f12809a45605bd717f494c4e?style=split&whitespace=show-all
> > was implemented.
> > 
> > i.e. when a SbGlobalUpdateEvent is generated, we are now clamping the max
> > wait to 10 seconds instead of cfg.CONF.agent_down_time // 2, which was
> > causing port binding failures.
> > 
> > The Timer object will run the passed-in function after the timer interval
> > has expired:
> > https://docs.python.org/3/library/threading.html#timer-objects
> > but it will not re-run multiple times, and the function we are invoking
> > does not loop internally, so only one update will happen per invocation
> > of run().
> > 
> > I believe the actual heartbeat/reporting interval is controlled by
> > cfg.CONF.AGENT.report_interval
> > https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/agent/metadata/agent.py#L313-L317
> > so I think if you want to adjust the interval in a large environment you
> > can do that by setting
> > 
> > [AGENT]
> > report_interval=
> 
> I agree that the mechanism for sending heartbeats is controlled by
> report_interval; however, from what I understand, the original idea was
> to configure complementary values: report_interval and agent_down_time
> are both tied to the status of network agents.
> 
> https://docs.openstack.org/neutron/2023.1/configuration/neutron.html
> report_interval: "Seconds between nodes reporting state to server; should
> be less than agent_down_time, best if it is half or less than
> agent_down_time."

So I think this was too aggressive, or incorrect advice. With that advice, if report_interval was 30 then agent_down_time should be at least 60, but I don't think that was actually conservative enough; a 3:1 ratio, i.e. report_interval: 30 and agent_down_time: 90, would have been more reasonable.

That was actually what I originally started with: just changing the delay from

randint(0, cfg.CONF.agent_down_time // 2)

to

randint(0, cfg.CONF.agent_down_time // 3)

but when discussing it on IRC we didn't think this needed to be configurable at all, so the suggestion was to change to randint(0, 10). I decided to blend both approaches and do

max_delay = max(min(cfg.CONF.agent_down_time // 3, 10), 3)
delay = randint(0, max_delay)

The code that was modified is controlling the jitter we apply on the nodes, not the rate at which the updates are sent; report_interval and agent_down_time should still be set to complementary values. cfg.CONF.agent_down_time, however, is not really a good input into that jitter calculation; if we wanted this to be tweakable it really should be its own config value.

> > I'm not that familiar with this code, but that was my original
> > understanding. The sleep before its re-run is calculated in oslo.service:
> > https://github.com/openstack/oslo.service/blob/1.38.0/oslo_service/loopingcall.py#L184-L194
> > https://github.com/openstack/oslo.service/blob/1.38.0/oslo_service/loopingcall.py#L154-L159
> > 
> > The neutron core team can correct me if that is incorrect, but I would
> > not expect this to negatively impact large clouds.
> 
> Note 1: My point is that the SbGlobalUpdateEvent function seems to use
> agent_down_time disassociated from its original purpose (the double/half
> relation to report_interval).
> 
> Note 2: I'm curious about the behavior of this modification with more
> than 200 chassis and thousands of OVN routers. In that case, with many
> configurations being applied at the same time (a lot of events in
> SB_Global) that require the agent running on the chassis to respond
> within the report_interval while it is transitioning configs (probably
> millions of OpenFlow flow entries), is 10 seconds enough?

Is up to 10 seconds of jitter enough? I think it's a more reasonable value than using agent_down_time divided by any fixed value.
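To make the numbers concrete, a quick standalone sketch comparing the old and new jitter upper bounds for a few agent_down_time settings (75 is the default cited below; the other values are only illustrative):

from random import randint

for agent_down_time in (75, 300, 3600):
    old_max = agent_down_time // 2                   # grew without bound
    new_max = max(min(agent_down_time // 3, 10), 3)  # clamped to 3..10
    print(agent_down_time, old_max, new_max, randint(0, new_max))

# 75   -> old bound 37,   new bound 10
# 300  -> old bound 150,  new bound 10
# 3600 -> old bound 1800, new bound 10

The larger you tuned agent_down_time, the longer the old code could defer the chassis update; the new bound stays within 10 seconds regardless of scale.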
> agent_down_time: "Seconds to regard the agent is down; should be at least > twice report_interval, to be sure the agent is down for good." so it think thsi was do aggressive or in correct advice. with that advice if report_interval was 30 then agent_down_time shoudl be at least 60 but i dont think that was actully consiveritive enough a 3:1 ratio i.e. report_interval:30 agent_down_time:90 woudl have been more reasonable. that was actully what i orgianlly starte with just changing the delay form randint(0, cfg.CONF.agent_down_time // 2) to randint(0, cfg.CONF.agent_down_time // 3) but when discussion it on irc we didnt think this needed to be configurable at all. so the suggestion was to change to randint(0, 10) i decided to blend both approches and do max_delay = max(min(cfg.CONF.agent_down_time // 3, 10), 3) delay = randint(0, max_delay) the code that was modified is contoling the jiter we apply to the nodes not the rate at which the updates are sent report_interval and agent_down_time shoudl still be set at complementary values. cfg.CONF.agent_down_time however is not really a good import into that jitter calculation if we wanted this to be tweakable it really should be its own config value. > > > > > > im not that familiar with this code but that was my original understanding. > > the sllep before its rerun is calucated in oslo.service > > > > https://github.com/openstack/oslo.service/blob/1.38.0/oslo_service/loopingcall.py#L184-L194 > > > > https://github.com/openstack/oslo.service/blob/1.38.0/oslo_service/loopingcall.py#L154-L159 > > > > the neutron core team can correct me if that is incorrect but i would not > > expct this to negitivly impact large clouds. > > > > Note 1: My point is that the function SbGlobalUpdateEvent seems to be using > the agent_down_time disassociated from the original function ( double / > half relation). > > Note 2: I'm curious to know the behavior of this modification with more > than 200 chassis and with thousands of OVN routers. In this case, with many > configurations being applied at the same time (a lot of events in > SB_Global) and that require the agent running on Chassis to respond the > report_interval at the same time as it is transitioning configs (probably > millions of openflow flows entries).? Is 10 seconds enough? is up to 10 seconds of jitter enough. i think its a more reasonable value then using agent_down_time devieed by any fixed value. 
report_interval default to 30 https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/conf/agent/common.py#L112 agent_down_time default to 75 https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/conf/agent/database/agents_db.py#L19-L22 so previosuly this would have been a range of 0-45 the jitter really should not exceed report_interval i could see repalceing 10 with cfg.CONF.report_interval so replace max_delay = max(min(cfg.CONF.agent_down_time // 3, 10), 3) delay = randint(0, max_delay) with max_delay = max(min(cfg.CONF.agent_down_time // 3, cfg.CONF.report_interval), 3) delay = randint(0, max_delay) but really f we do need this to be configurable then we shoudl just add a report_interval_jitter cofnig option and then we could simplfy it to delay = randint(0, cfg.CONF.report_interval_jitter) looking at the code we dont actually need to calculate a random jitter on each event either we could just do it once when the heartbeat is created by passing the delay as initial_delay https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/agent/metadata/agent.py#L313-L317 that way the updates will happen at a deterministic interval (cfg.CONF.report_interval) with a fixed random offset determined when the agent starts. im not currenlty planning on either makign this run only once when the agent start or intolducion a dedicated config option but i think ither would be fine. prior to https://review.opendev.org/c/openstack/neutron/+/883687 we were seeing ci failure due to the change in https://opendev.org/openstack/neutron/commit/628442aed7400251f12809a45605bd717f494c4e so instead of reviing it adn reintoducing the bug it was trying to fix i limited the max delyay but we dont know how it will affect large deployments. i would suggest startign a patch if you belive the current behavior will be probelmatic but keep in mind that addign too much jitter/delay can cause vm boots/migtrations to randomly fail leavign the instance in an error state. that is what our temest ci result where detechting and that was preventing use form mergin patches. in prodcution that would have resulted in operator having to manually fix things in the db and or rerunnign the migrations. for end users they woudl have either seen no valid host error when booting vms or the boots would have taken longer as we woudl have had to retry alternative hosts and hope we dont hit the "dead agent" issue while the agent is actully runing fine the heatbeat is just being delayed excessivly long. > > > > > > > > > > > Regards, > > > Roberto > > > > > > [1] - https://review.opendev.org/c/openstack/neutron/+/883687 > > > > > > > > From tobias.urdin at binero.com Mon Jun 19 23:04:29 2023 From: tobias.urdin at binero.com (Tobias Urdin) Date: Mon, 19 Jun 2023 23:04:29 +0000 Subject: [puppet] Summary of discussions at PTG Vancouver June 2023 In-Reply-To: References: Message-ID: <94602807-9540-4A51-8CE9-4AD954A45290@binero.com> It was a pleasure to meet you all, thanks Takashi for driving the meeting and putting together an action plan! Best regards Tobias On 18 Jun 2023, at 18:23, Takashi Kajinami wrote: Hello, Thank you all for joining our PTG session ! It was really nice to meet some of the team members in person. The etherpad for the discussion can be found in the link below, but I'll share a summary of our discussions in this email. In case you have any questions/concerns then feel free to let me know. 
https://etherpad.opendev.org/p/vancouver-june2023-puppet-openstack

* Status update
 * We shared the status of each person.
 * Unfortunately all of the members attending the discussion have limited resources, especially for development.
 * Agreement: we focus on priorities and de-prioritize items which do not cause immediate problems/breakages.
 * Agreement: we ensure we finish the prioritized items in a specific release, to keep our maintenance simple.
* Puppet 8 support
 * We added unit/lint tests with Puppet 8 for early testing, but adding integration tests is currently blocked by some ruby dependencies not yet available for Ruby 3.1, which is required by Puppet 8.
 * None of the operating system versions provide Ruby 3.1 now.
 * Puppet 7 EOL is not yet declared.
 * Agreement: We leave this as a non-priority for now and re-work on it once Ruby 3.1 is globally available.
* Adaptation to puppetlabs-stdlib 9.0.0
 * Some deprecated items were removed. We adapted our modules but are still waiting for updates in the dependent modules.
 * validate_legacy was deprecated and causes large warnings now.
 * Functions from stdlib should now be namespaced to avoid warnings after the bump is done.
 * Agreement: We prioritize replacing validate_legacy with typed parameters.
 * Agreement: We pin stdlib to an older version for now but attempt the bump early.
* Module modernizations
 * Typed parameters
  * Replacing validate_legacy needs to be prioritized now, to adapt to puppetlabs-stdlib 9.0.0.
  * Implementing type validations for openstack config options requires further discussion. Handling of os_service_default would be the main topic we have to sort out.
  * We prefer consistent implementations for all openstack service modules, while we can attempt some changes early in a few "independent" modules such as extras, vswitch, qdr.
  * Agreement: We de-prioritize implementing validations for config options.
  * Agreement: We ensure the implementation is distributed to all modules consistently in a single release.
 * Hieradata
  * This is a "modern" design pattern, and it would be ideal to replace the legacy params class with it.
  * However, we don't have an urgent requirement to complete this work.
  * We have to create the common structure for hieradata files maintained in each repository.
  * Some concerns have been raised, mainly about how we can pick up some values once this change is made.
  * Agreement: We de-prioritize this work for now.
  * Agreement: We can start with the flat files (pattern 1 in https://etherpad.opendev.org/p/puppet-hieradata-structure ).
  * Agreement: tkajinam will submit a few examples and we will review how this impacts the existing usage.
  * Agreement: Similarly to typed parameters, we should coordinate this work to make the change consistently.
 * Making distro/version specific logic selectable by parameters
  * This work is tightly related to the hieradata work.
  * Agreement: We basically leave this work until the above hieradata work is completed, and de-prioritize it.
* Branch retirements
 * We have multiple branches open now and aim to reduce the number of branches.
 * Red Hat is interested in keeping train open until 2023Q3 (tentative) and wallaby for some time for downstream.
 * Others do not have a requirement to maintain old releases in EM status.
 * Agreement: We aim to retire the train/ussuri branches after 2023Q3. Retiring further branches is subject to future discussions.

Thank you,
Takashi Kajinami
-------------- next part -------------- An HTML attachment was scrubbed...
URL: 

From knikolla at bu.edu Tue Jun 20 02:21:05 2023 From: knikolla at bu.edu (Nikolla, Kristi) Date: Tue, 20 Jun 2023 02:21:05 +0000 Subject: [tc] Technical Committee next weekly meeting on June 20, 2023 Message-ID: <2D1690D7-45F3-4C12-B863-5FED06D89600@bu.edu> 

Hi all,

This is a reminder that the next weekly Technical Committee meeting is to be held on Tuesday, June 20, 2023 at 1800 UTC on #openstack-tc on OFTC IRC.

Please find below the agenda for the meeting:
* Roll call
* Follow up on past action items
* Open Infra Summit retrospective
* Gate health check
* Open Discussion and Reviews
** https://review.opendev.org/q/projects:openstack/governance+is:open

More information can be found and items proposed at the following link:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

Thank you,
Kristi Nikolla
-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From satish.txt at gmail.com Tue Jun 20 02:38:18 2023 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 19 Jun 2023 22:38:18 -0400 Subject: [kolla] rabbitmq failed to build image using 2023.1 release In-Reply-To: References: Message-ID: 

I applied the patch in my local repo to test the build, but got the following error. Does the patch reference the wrong signing key?

INFO:kolla.common.utils.rabbitmq:W: GPG error: https://ppa1.novemberain.com/rabbitmq/rabbitmq-erlang/deb/ubuntu jammy InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E495BB49CC4BBE5B
INFO:kolla.common.utils.rabbitmq:E: The repository 'https://ppa1.novemberain.com/rabbitmq/rabbitmq-erlang/deb/ubuntu jammy InRelease' is not signed.
INFO:kolla.common.utils.rabbitmq:
INFO:kolla.common.utils.rabbitmq:Removing intermediate container 0b7cd0ffa782
ERROR:kolla.common.utils.rabbitmq:Error'd with the following message
ERROR:kolla.common.utils.rabbitmq:The command '/bin/sh -c apt-get --error-on=any update && apt-get -y install --no-install-recommends logrotate rabbitmq-server && apt-get clean && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100
INFO:kolla.common.utils:Attempt number: 4 to run task: BuildTask(rabbitmq)

On Mon, Jun 19, 2023 at 6:16 AM Maksim Malchuk wrote:
> The review was merged 2 days ago. The images are built in a weekly manner.
installed or >>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>> erlang-xmerl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>>>> installed or >>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>> INFO:kolla.common.utils.rabbitmq:E: Unable to correct problems, you >>>>>>> have held broken packages. >>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>> >>>>>> >>> >>> -- >>> Regards, >>> Maksim Malchuk >>> >>> > > -- > Regards, > Maksim Malchuk > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Tue Jun 20 04:26:44 2023 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 20 Jun 2023 00:26:44 -0400 Subject: [kolla] rabbitmq failed to build image using 2023.1 release In-Reply-To: References: Message-ID: Nevermind, I have to rebuild the base image to push the new key. On Mon, Jun 19, 2023 at 10:38?PM Satish Patel wrote: > I did patch in my local repo to test build but got the following error. > Did the patch have the wrong signature key? > > INFO:kolla.common.utils.rabbitmq:W: GPG error: > https://ppa1.novemberain.com/rabbitmq/rabbitmq-erlang/deb/ubuntu jammy > InRelease: The following signatures couldn't be verified because the public > key is not available: NO_PUBKEY E495BB49CC4BBE5B > INFO:kolla.common.utils.rabbitmq:E: The repository ' > https://ppa1.novemberain.com/rabbitmq/rabbitmq-erlang/deb/ubuntu jammy > InRelease' is not signed. > INFO:kolla.common.utils.rabbitmq: > INFO:kolla.common.utils.rabbitmq:Removing intermediate container > 0b7cd0ffa782 > ERROR:kolla.common.utils.rabbitmq:Error'd with the following message > ERROR:kolla.common.utils.rabbitmq:The command '/bin/sh -c apt-get > --error-on=any update && apt-get -y install --no-install-recommends > logrotate rabbitmq-server && apt-get clean && rm -rf /var/lib/apt/lists/*' > returned a non-zero code: 100 > INFO:kolla.common.utils:Attempt number: 4 to run task: BuildTask(rabbitmq) > > On Mon, Jun 19, 2023 at 6:16?AM Maksim Malchuk > wrote: > >> The review was merged 2 days ago. The images are built in a weekly maner. >> >> >> On Mon, Jun 19, 2023 at 5:55?AM Satish Patel >> wrote: >> >>> Ok, thanks for the update. Just for my knowledge, how do I check it in >>> review or in the build process. I saw the patch was merged so thought it >>> must have kicked the gate and built images. >>> >>> On Sun, Jun 18, 2023 at 5:13?PM Maksim Malchuk >>> wrote: >>> >>>> Satish, you should wait until the issue is fixed (the related patch is >>>> on the review) and new images built. >>>> >>>> On Sun, Jun 18, 2023 at 5:53?AM Satish Patel >>>> wrote: >>>> >>>>> Great! Michal, >>>>> >>>>> How did we miss this bug in CI ? >>>>> >>>>> I did install kolla using the "python3 -m pip install kolla==16.0.0" >>>>> command. In this case, how do I upgrade it to fix bugs? Should I do a pip >>>>> install --upgrade kolla? >>>>> >>>>> On Mon, Jun 12, 2023 at 6:15?AM Michal Arbet >>>>> wrote: >>>>> >>>>>> https://review.opendev.org/c/openstack/kolla/+/885857 >>>>>> Michal Arbet >>>>>> Openstack Engineer >>>>>> >>>>>> Ultimum Technologies a.s. >>>>>> Na Po???? 1047/26, 11000 Praha 1 >>>>>> Czech Republic >>>>>> >>>>>> +420 604 228 897 >>>>>> michal.arbet at ultimum.io >>>>>> *https://ultimum.io * >>>>>> >>>>>> LinkedIn | >>>>>> Twitter | Facebook >>>>>> >>>>>> >>>>>> >>>>>> po 12. 6. 
2023 v 11:48 odes?latel Michal Arbet < >>>>>> michal.arbet at ultimum.io> napsal: >>>>>> >>>>>>> APT dependencies broken :( >>>>>>> Michal Arbet >>>>>>> Openstack Engineer >>>>>>> >>>>>>> Ultimum Technologies a.s. >>>>>>> Na Po???? 1047/26, 11000 Praha 1 >>>>>>> Czech Republic >>>>>>> >>>>>>> +420 604 228 897 >>>>>>> michal.arbet at ultimum.io >>>>>>> *https://ultimum.io * >>>>>>> >>>>>>> LinkedIn | >>>>>>> Twitter | Facebook >>>>>>> >>>>>>> >>>>>>> >>>>>>> ne 11. 6. 2023 v 4:37 odes?latel Satish Patel >>>>>>> napsal: >>>>>>> >>>>>>>> Folks, >>>>>>>> >>>>>>>> Do you know how to solve this? I am using release 2023.1 of kolla >>>>>>>> to build images using ubuntu 22.04 >>>>>>>> >>>>>>>> root at docker-reg:/opt/kolla/etc/kolla# kolla-build --registry >>>>>>>> docker-reg:4000 --config-file kolla-build.conf --debug --threads 1 >>>>>>>> --skip-existing --push --cache --format none rabbitmq >>>>>>>> INFO:kolla.common.utils:Using engine: docker >>>>>>>> INFO:kolla.common.utils:Found the container image folder at >>>>>>>> /usr/local/share/kolla/docker >>>>>>>> INFO:kolla.common.utils:Added image rabbitmq to queue >>>>>>>> INFO:kolla.common.utils:Attempt number: 1 to run task: >>>>>>>> BuildTask(rabbitmq) >>>>>>>> DEBUG:kolla.common.utils.rabbitmq:Processing >>>>>>>> INFO:kolla.common.utils.rabbitmq:Building started at 2023-06-11 >>>>>>>> 02:25:44.208880 >>>>>>>> DEBUG:kolla.common.utils.rabbitmq:Turned 0 plugins into plugins >>>>>>>> archive >>>>>>>> DEBUG:kolla.common.utils.rabbitmq:Turned 0 additions into additions >>>>>>>> archive >>>>>>>> INFO:kolla.common.utils.rabbitmq:Step 1/11 : FROM >>>>>>>> docker-reg:4000/kolla/base:2023.1 >>>>>>>> INFO:kolla.common.utils.rabbitmq: ---> 4551f4af8ddf >>>>>>>> INFO:kolla.common.utils.rabbitmq:Step 2/11 : LABEL >>>>>>>> maintainer="Kolla Project (https://launchpad.net/kolla)" >>>>>>>> name="rabbitmq" build-date="20230611" >>>>>>>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>>>>>>> INFO:kolla.common.utils.rabbitmq: ---> 6c2ef10499f7 >>>>>>>> INFO:kolla.common.utils.rabbitmq:Step 3/11 : RUN usermod --append >>>>>>>> --home /var/lib/rabbitmq --groups kolla rabbitmq && mkdir -p >>>>>>>> /var/lib/rabbitmq && chown -R 42439:42439 /var/lib/rabbitmq >>>>>>>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>>>>>>> INFO:kolla.common.utils.rabbitmq: ---> 29ef8940f40b >>>>>>>> INFO:kolla.common.utils.rabbitmq:Step 4/11 : RUN echo 'Uris: >>>>>>>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu' >>>>>>>> >/etc/apt/sources.list.d/erlang.sources && echo 'Components: main' >>>>>>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Types: deb' >>>>>>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Suites: jammy' >>>>>>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Signed-By: >>>>>>>> /etc/kolla/apt-keys/erlang-ppa.gpg' >>>>>>>> >>/etc/apt/sources.list.d/erlang.sources && echo 'Uris: >>>>>>>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu' >>>>>>>> >/etc/apt/sources.list.d/rabbitmq.sources && echo 'Components: main' >>>>>>>> >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Types: deb' >>>>>>>> >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Suites: jammy' >>>>>>>> >>/etc/apt/sources.list.d/rabbitmq.sources && echo 'Signed-By: >>>>>>>> /etc/kolla/apt-keys/rabbitmq.gpg' >>/etc/apt/sources.list.d/rabbitmq.sources >>>>>>>> INFO:kolla.common.utils.rabbitmq: ---> Using cache >>>>>>>> INFO:kolla.common.utils.rabbitmq: ---> 6d92a7342a90 >>>>>>>> INFO:kolla.common.utils.rabbitmq:Step 5/11 : RUN apt-get >>>>>>>> 
--error-on=any update && apt-get -y install --no-install-recommends >>>>>>>> logrotate rabbitmq-server && apt-get clean && rm -rf /var/lib/apt/lists/* >>>>>>>> INFO:kolla.common.utils.rabbitmq: ---> Running in 0deab7961445 >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:1 >>>>>>>> http://ubuntu-cloud.archive.canonical.com/ubuntu >>>>>>>> jammy-updates/antelope InRelease [5,463 B] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:2 >>>>>>>> http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:3 >>>>>>>> http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [3,447 B] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:7 >>>>>>>> http://ubuntu-cloud.archive.canonical.com/ubuntu >>>>>>>> jammy-updates/antelope/main amd64 Packages [126 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:8 >>>>>>>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >>>>>>>> jammy InRelease [5,152 B] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:9 >>>>>>>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu >>>>>>>> jammy InRelease [18.1 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:5 >>>>>>>> http://mirror.siena.edu/ubuntu jammy-updates InRelease [119 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:10 >>>>>>>> https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu >>>>>>>> jammy/main amd64 Packages [9,044 B] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:6 >>>>>>>> http://ftp.usf.edu/pub/ubuntu jammy-security InRelease [110 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:11 >>>>>>>> http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 >>>>>>>> Packages [27.0 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:4 >>>>>>>> https://archive.linux.duke.edu/ubuntu jammy InRelease [270 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:12 >>>>>>>> http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 >>>>>>>> Packages [49.4 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:14 >>>>>>>> http://ubuntu.osuosl.org/ubuntu jammy-updates/main amd64 Packages >>>>>>>> [857 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:17 >>>>>>>> https://ppa.launchpadcontent.net/rabbitmq/rabbitmq-erlang/ubuntu >>>>>>>> jammy/main amd64 Packages [8,167 B] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:16 >>>>>>>> http://pubmirrors.dal.corespace.com/ubuntu jammy-security/universe >>>>>>>> amd64 Packages [928 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:15 >>>>>>>> https://atl.mirrors.clouvider.net/ubuntu jammy-security/main amd64 >>>>>>>> Packages [575 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:19 >>>>>>>> http://www.club.cc.cmu.edu/pub/ubuntu jammy/main amd64 Packages >>>>>>>> [1,792 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:18 >>>>>>>> http://mirror.team-cymru.com/ubuntu jammy/universe amd64 Packages >>>>>>>> [17.5 MB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Get:13 >>>>>>>> http://mirrors.syringanetworks.net/ubuntu-archive >>>>>>>> jammy-updates/universe amd64 Packages [1,176 kB] >>>>>>>> INFO:kolla.common.utils.rabbitmq:Fetched 23.7 MB in 6s (4,091 kB/s) >>>>>>>> INFO:kolla.common.utils.rabbitmq:Reading package lists... >>>>>>>> INFO:kolla.common.utils.rabbitmq:Reading package lists... >>>>>>>> INFO:kolla.common.utils.rabbitmq:Building dependency tree... >>>>>>>> INFO:kolla.common.utils.rabbitmq:Reading state information... >>>>>>>> INFO:kolla.common.utils.rabbitmq:Some packages could not be >>>>>>>> installed. 
This may mean that you have >>>>>>>> INFO:kolla.common.utils.rabbitmq:requested an impossible situation >>>>>>>> or if you are using the unstable >>>>>>>> INFO:kolla.common.utils.rabbitmq:distribution that some required >>>>>>>> packages have not yet been created >>>>>>>> INFO:kolla.common.utils.rabbitmq:or been moved out of Incoming. >>>>>>>> INFO:kolla.common.utils.rabbitmq:The following information may help >>>>>>>> to resolve the situation: >>>>>>>> INFO:kolla.common.utils.rabbitmq:The following packages have unmet >>>>>>>> dependencies: >>>>>>>> INFO:kolla.common.utils.rabbitmq: rabbitmq-server : Depends: >>>>>>>> erlang-base (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>>>>> installed or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> erlang-base-hipe (< 1:26.0) but it is not installable or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>>> erlang-crypto (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>>>>> installed or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>>> erlang-eldap (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>>>>> installed or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>>> erlang-inets (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>>>>> installed or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>>> erlang-mnesia (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>>>>> installed or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>>> erlang-os-mon (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>>>>> installed or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>>> erlang-parsetools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>>>>> installed or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>>> erlang-public-key (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>>>>> installed or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>>> erlang-runtime-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to >>>>>>>> be installed or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>>> erlang-ssl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be >>>>>>>> installed or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>>> erlang-syntax-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to >>>>>>>> be installed or >>>>>>>> INFO:kolla.common.utils.rabbitmq: >>>>>>>> esl-erlang (< 1:26.0) but it is not installable >>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends: >>>>>>>> 
erlang-tools (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be
>>>>>>>> installed or
>>>>>>>> INFO:kolla.common.utils.rabbitmq:
>>>>>>>> esl-erlang (< 1:26.0) but it is not installable
>>>>>>>> INFO:kolla.common.utils.rabbitmq: Depends:
>>>>>>>> erlang-xmerl (< 1:26.0) but 1:26.0.1-1rmq1ppa1~ubuntu22.04.1 is to be
>>>>>>>> installed or
>>>>>>>> INFO:kolla.common.utils.rabbitmq:
>>>>>>>> esl-erlang (< 1:26.0) but it is not installable
>>>>>>>> INFO:kolla.common.utils.rabbitmq:E: Unable to correct problems, you
>>>>>>>> have held broken packages.
>>>>>>>> INFO:kolla.common.utils.rabbitmq:
>>>>>>>>
>>>>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Maksim Malchuk
>>>>
>>>>
>>
>> --
>> Regards,
>> Maksim Malchuk
>>
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andrea.martra at oscct.it Tue Jun 20 08:57:44 2023
From: andrea.martra at oscct.it (Andrea Martra)
Date: Tue, 20 Jun 2023 10:57:44 +0200
Subject: OpenStack (cinder) volumes retyping on Ceph back-end
Message-ID: <7410546b-1db3-90a7-80f4-21efba09f08c@oscct.it>

Hello,
I have configured different storage back ends on OpenStack (Yoga release) using Ceph (version 17.2.4) with different pools (volumes, cloud-basic, shared-hosting-os, shared-hosting-homes, ...) for the RBD application. I created a different volume type pointing at each of the back ends, and everything works perfectly.

If I perform a "change volume type" from OpenStack on volumes attached to VMs, the system successfully migrates the volume from the source pool to the destination pool; at the end of the process the volume is visible in the new pool and removed from the old pool.

The problem appears when I reconfigure the VM to point at the new pool associated with the volume. (I did this through a resize of the VM, as I haven't found any other method that updates the information in the nova/cinder DB automatically; I also tried shutting the VM off, editing its XML through virsh edit, and starting it up again.) The volume then presented to the VM is exactly the version and content it had on the date of the retype: all data written and modified after the retype is lost. The VM continues to work perfectly in read/write after the retype, but the "new" volume created in the new pool is not the one receiving the writes, so when the VM is shut down all the changes are lost.

Do you have any idea how to check this, and how to proceed so as not to lose the data of the VMs whose volumes I have retyped? The data is being written somewhere (the I/O is apparently still going to the old pool/image even though it has already been removed), because the VMs keep working perfectly.

Thank you

-- 
Andrea Martra
+39 393 9048451

From ralonsoh at redhat.com Tue Jun 20 10:43:28 2023
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Tue, 20 Jun 2023 12:43:28 +0200
Subject: [neutron][ptg] PTG summary
Message-ID: 

Hello all:

Thank you for joining us during the Forum session and the PTG slots. It was very productive and we were able to check the pulse of OpenStack and Neutron in particular.

This time both the Forum session and the PTG were attended mainly by operators and users; the number of developers was reduced and this is something we need to improve. A healthy community is nourished by the active participation of its members, not only pushing new code but also reviewing and helping to fix new bugs.

=== Forum session ===
That was a good surprise for me to find so many people in the room.
I wasn't expecting that after checking the etherpad link provided before the meeting. During this short session, focused on issues/pain points of Neutron, we found out that:
* The ML2/OVN backend is increasing its adoption rate.
* Most users have deployments between Wallaby and Antelope.
* Demand for hardware offload is growing due to increasing network bandwidth requirements.
* BGP/eVPN is deployed in most of the environments.

From this session I would highlight some lessons learned for the next events:
* If the Forum session time slot is reduced (30 mins), attendee interventions should be limited to 2-3 minutes, enough to describe the issue. Any further discussion should be taken to the PTG sessions.
* Request an attendance "check-in" in the etherpad, to be prepared for the event.
* Request people to add their questions in the etherpad in advance.

=== PTG ===
We didn't have technical sessions but many questions about issues and possible bugs. These are the most relevant ones:

*** OVN L3 scheduler issue ***
This issue has been reproduced in an environment with more than 5 chassis with gateway ports. The router GW ports are assigned to the GW chassis using a manual scheduler implemented in Neutron (the default one is ``OVNGatewayLeastLoadedScheduler``). If one of the chassis is stopped, the GW ports should be re-assigned to the other GW chassis. This is happening, but all ports fall under the same chassis; the re-scheduling should share the ports among the other active chassis.
* Action item: I'll open a LP bug and investigate this issue.

*** Size of the OVN SB "HA_Chassis_Group" table ***
The OVN SB "HA_Chassis_Group" table grows indefinitely with each operation that creates a router and assigns a new external gateway (external network). This table never decreases in size.
* Action item: I'll open a LP bug, investigate this issue and, if this is a core OVN issue, report it.

*** Live migration with ML2/OVN ***
This is a common topic, and not only for ML2/OVN. The migration time depends on many factors (memory size, running applications, network bandwidth, etc.) that can slow down the migration and trigger a communication gap during the process.
* Action item: to create better documentation, both in Nova and Neutron, about the migration process, what has been done to improve it (for example, the OVN multiple port binding) and what factors will affect the migration.

*** ML2/OVN IPv6 DVR ***
This spec was approved during the last cycle [1]. The implementation [2] is under review.
* Action item: to review the patch (for Neutron reviewers)
* Action item: to implement the necessary tempest tests (for the feature developers)

*** BGP with ML2/OVS, exposing address blocks ***
This user has successfully deployed Neutron with ML2/OVS and n-d-r. This user is currently making public a certain set of FIPs. However, for other VMs without FIPs, the goal is to make the router GW port IP address public, using the address blocks functionality; this is not working according to the user.
* Action item: (for this user) to create a LP bug describing the architecture of the deployment, the configuration used and the API commands used to reproduce this issue.

*** Metadata service (any backend) ***
Neutron is in charge of deploying the Metadata service on the compute nodes. Each time the metadata HTTP server is called, it requests the instance and tenant ID from the Neutron API [3]. This method implies an RPC call. On "busy" compute nodes, where VMs are created and destroyed very fast, this RPC communication is a bottleneck.
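As a rough illustration of the caching idea in the action item below (a hypothetical sketch, not the actual ``CacheBackedPluginApi`` code; the ``rpc_client`` interface is invented for the example): resource updates pushed by the Neutron server keep a local cache warm, so the metadata service only pays an RPC round trip on a cache miss.

```
# Hypothetical sketch of a push-updated local cache with an RPC fallback,
# in the spirit of the OVS agent's CacheBackedPluginApi.
class CachedPortLookup:
    def __init__(self, rpc_client):
        self._rpc = rpc_client      # assumed to expose get_port_by_ip(ip)
        self._ports_by_ip = {}      # local cache, kept warm by server pushes

    def handle_port_updated(self, port):
        # Called for every port update pushed through the subscription.
        self._ports_by_ip[port['ip_address']] = port

    def handle_port_deleted(self, port):
        self._ports_by_ip.pop(port['ip_address'], None)

    def get_instance_and_tenant_id(self, ip_address):
        port = self._ports_by_ip.get(ip_address)
        if port is None:
            # Cache miss: only now do we pay for an RPC round trip.
            port = self._rpc.get_port_by_ip(ip_address)
            self._ports_by_ip[ip_address] = port
        return port['device_id'], port['tenant_id']
```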
* Action item: open a LP bug to implement the same ``CacheBackedPluginApi`` used in the OVS agent. This RPC cached class creates a set of subscriptions to the needed resources ("ports" in this case). The Neutron API will send the port updated info and cached locally; that makes unnecessary the RPC request if the resources are stored locally. *** ML2/OVN + Ironic nodes *** This user has deployed ML2/OVN with Ironic nodes, and is using ovn-bgp-agent with the eVPN driver to make public the private ports (IP and MACs) to the Ironic node ports. More information in [4]. *** BGP acceleration in ML2/OVN *** Many questions related to this topic, both with DPDK and HW offload. I would refer (once the link is available) to the talk "Enabling multi-cluster connectivity using dynamic routing via BGP in Openstack" given by Christophe Fontaine during this PTG. You'll find it very interesting how this new implementation moves all the packet processing to the OVS datapath (removing any Linux Bridge / iptables processing). The example provided in the talk refers to the use of DPDK. I hope this PTG was interesting for you! Don't hesitate to use the usual channels that are the mailing list and IRC. Remember we have the weekly Neutron meeting every Tuesday at 1400UTC. Regards. [1] https://specs.openstack.org/openstack/neutron-specs/specs/2023.1/ovn-ipv6-dvr.html [2]https://review.opendev.org/c/openstack/neutron/+/867513 [3] https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/agent/metadata/agent.py#L89 [4]https://ltomasbo.wordpress.com/2021/06/25/openstack-networking-with-evpn/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltomasbo at redhat.com Tue Jun 20 12:35:18 2023 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Tue, 20 Jun 2023 14:35:18 +0200 Subject: [neutron][ptg] PTG summary In-Reply-To: References: Message-ID: Just adding a couple of pointers below On Tue, Jun 20, 2023 at 12:53?PM Rodolfo Alonso Hernandez < ralonsoh at redhat.com> wrote: > Hello all: > > Thank you for joining us during the Forum session and the PTG slots. It > was very productive and we were able to check the pulse of OpenStack and > Neutron in particular. > > This time both the Forum session and the PTG were attended mainly by > operators and users; the number of developers was reduced and this is > something we need to improve. A healthy community is nourished by the > active participation of their members, not only pushing new code but > reviewing and helping fixing the new bugs. > > === Forum session === > That was a good surprise for me to find so many people in the room. I > wasn't expecting that after checking the etherpad link provided before the > meeting. During this short session, focused on issues/pain points of > Neutron, we could find out that: > * The ML2/OVN backend is increasing its adoption rate. > * Most users have deployments between Wallaby and Antelope. > * Hardware offload is increasing its demand due to the network BW > increasing requirements. > * BGP/eVPN is deployed in most of the environments. > > From this session I would highlight some lessons learned for next events: > * If the Forum session time slot is reduced (30 mins), the attendant > interventions should be limited to 2-3 minutes, enough to describe the > issue. Any further discussion should be taken in the PTG sessions. > * Request attendance "checkin" in the etherpad, to be prepared for the > event. 
> * Request people to add their questions in the etherpad in advance. > > === PTG === > We didn't have technical sessions but many questions about issues and > possible bugs. These are the most relevant ones: > > *** OVN L3 scheduler issue *** > This issue has been reproduced in an environment with more than 5 chassis > with gateway ports. The router GW ports are assigned to the GW chassis > using a manual scheduler implemented in Neutron (the default one is > ``OVNGatewayLeastLoadedScheduler``). If one of the chassis is stopped, the > GW ports should be re-assigned to the other GW chassis. This is happening > but all ports fall under the same one; this re-scheduling should share the > ports among the other active chassis. > * Action item: I'll open a LP bug and investigate this issue. > > *** Size of the OVN SB "HA_Chassis_Group" table *** > The OVN SB "HA_Chassis_Group" increases its size indefinitely with each > operation creating a router and assigning a new external gateway (external > network). This table never decreases, > * Action item: I'll open a LP bug, investigate this issue and if this is a > core OVN issue, report it. > > *** Live migration with ML2/OVN *** > This is a common topic and not only for ML2/OVN. The migration time has > many factors (memory size, applications running, network BW, etc) that > could slow down the migration time and trigger a communication gap during > this process. > * Action item: to create better documentation, both in Nova and Neutron, > about the migration process, what has been done to improve it (for example, > the OVN multiple port binding) and what factors will affect the migration. > > *** ML2/OVN IPv6 DVR *** > This spec was approved during the last cycle [1]. The implementation [2] > is under review. > * Action item: to review the patch (for Neutron reviewers) > * Action item: to implement the necessary tempest tests (for the feature > developers) > > *** BGP with ML2/OVS, exposing address blocks *** > This user has successfully deployed Neutron with ML2/OVS and n-d-r. This > user is currently making public a certain set of FIPs. However, for other > VMs without FIPs, the goal is to make the router GW port IP address public, > using the address blocks functionality; this is not working according to > the user. > * Action item: (for this user) to create a LP bug describing the > architecture of the deployment, the configuration used and the API commands > used to reproduce this issue. > > *** Metadata service (any backend) *** > Neutron is in charge of deploying the Metadata service on the compute > nodes. Each time the metadata HTTP server is called, it requests from the > Neutron API the instance and tenant ID [3]. This method implies a RPC call. > In "busy" compute nodes, where the VMs are created and destroyed very fast, > this RPC communication is a bottleneck. > * Action item: open a LP bug to implement the same > ``CacheBackedPluginApi`` used in the OVS agent. This RPC cached class > creates a set of subscriptions to the needed resources ("ports" in this > case). The Neutron API will send the port updated info and cached locally; > that makes unnecessary the RPC request if the resources are stored locally. > > *** ML2/OVN + Ironic nodes *** > This user has deployed ML2/OVN with Ironic nodes, and is using > ovn-bgp-agent with the eVPN driver to make public the private ports (IP and > MACs) to the Ironic node ports. More information in [4]. 
> There is an initial WIP implementation to add support for L2vni/evpn to the BGP driver, currently under review here [1] > > *** BGP acceleration in ML2/OVN *** > Many questions related to this topic, both with DPDK and HW offload. I > would refer (once the link is available) to the talk "Enabling > multi-cluster connectivity using dynamic routing via BGP in Openstack" > given by Christophe Fontaine during this PTG. You'll find it very > interesting how this new implementation moves all the packet processing to > the OVS datapath (removing any Linux Bridge / iptables processing). The > example provided in the talk refers to the use of DPDK. > The idea for the ovn-bgp-agent for this is described in [2], and initial (WIP) implementation is in [3] [1] https://review.opendev.org/c/openstack/ovn-bgp-agent/+/886090 [2] https://ltomasbo.wordpress.com/2023/01/09/how-to-make-the-ovn-bgp-agent-ready-for-hwol-and-dpdk/ [3] https://review.opendev.org/c/openstack/ovn-bgp-agent/+/881779 > > > I hope this PTG was interesting for you! Don't hesitate to use the usual > channels that are the mailing list and IRC. Remember we have the weekly > Neutron meeting every Tuesday at 1400UTC. > > Regards. > > [1] > https://specs.openstack.org/openstack/neutron-specs/specs/2023.1/ovn-ipv6-dvr.html > [2]https://review.opendev.org/c/openstack/neutron/+/867513 > [3] > https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/agent/metadata/agent.py#L89 > [4] > https://ltomasbo.wordpress.com/2021/06/25/openstack-networking-with-evpn/ > > -- LUIS TOM?S BOL?VAR Principal Software Engineer Red Hat Madrid, Spain ltomasbo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Tue Jun 20 13:01:06 2023 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Tue, 20 Jun 2023 15:01:06 +0200 Subject: [neutron][ptg] PTG summary In-Reply-To: References: Message-ID: Hello, Rodolfo, I have relevant information on one of the points discussed below, so just wanted to chime in. On Tue, Jun 20, 2023 at 12:44?PM Rodolfo Alonso Hernandez wrote: [ snip ] > *** OVN L3 scheduler issue *** > This issue has been reproduced in an environment with more than 5 chassis with gateway ports. The router GW ports are assigned to the GW chassis using a manual scheduler implemented in Neutron (the default one is ``OVNGatewayLeastLoadedScheduler``). If one of the chassis is stopped, the GW ports should be re-assigned to the other GW chassis. This is happening but all ports fall under the same one; this re-scheduling should share the ports among the other active chassis. > * Action item: I'll open a LP bug and investigate this issue. Background on why this is happening and a solution is being worked on in [5]. 5: https://review.opendev.org/c/openstack/neutron/+/874760 -- Frode Nordahl > *** Size of the OVN SB "HA_Chassis_Group" table *** > The OVN SB "HA_Chassis_Group" increases its size indefinitely with each operation creating a router and assigning a new external gateway (external network). This table never decreases, > * Action item: I'll open a LP bug, investigate this issue and if this is a core OVN issue, report it. > > *** Live migration with ML2/OVN *** > This is a common topic and not only for ML2/OVN. The migration time has many factors (memory size, applications running, network BW, etc) that could slow down the migration time and trigger a communication gap during this process. 
> * Action item: to create better documentation, both in Nova and Neutron, about the migration process, what has been done to improve it (for example, the OVN multiple port binding) and what factors will affect the migration. > > *** ML2/OVN IPv6 DVR *** > This spec was approved during the last cycle [1]. The implementation [2] is under review. > * Action item: to review the patch (for Neutron reviewers) > * Action item: to implement the necessary tempest tests (for the feature developers) > > *** BGP with ML2/OVS, exposing address blocks *** > This user has successfully deployed Neutron with ML2/OVS and n-d-r. This user is currently making public a certain set of FIPs. However, for other VMs without FIPs, the goal is to make the router GW port IP address public, using the address blocks functionality; this is not working according to the user. > * Action item: (for this user) to create a LP bug describing the architecture of the deployment, the configuration used and the API commands used to reproduce this issue. > > *** Metadata service (any backend) *** > Neutron is in charge of deploying the Metadata service on the compute nodes. Each time the metadata HTTP server is called, it requests from the Neutron API the instance and tenant ID [3]. This method implies a RPC call. In "busy" compute nodes, where the VMs are created and destroyed very fast, this RPC communication is a bottleneck. > * Action item: open a LP bug to implement the same ``CacheBackedPluginApi`` used in the OVS agent. This RPC cached class creates a set of subscriptions to the needed resources ("ports" in this case). The Neutron API will send the port updated info and cached locally; that makes unnecessary the RPC request if the resources are stored locally. > > *** ML2/OVN + Ironic nodes *** > This user has deployed ML2/OVN with Ironic nodes, and is using ovn-bgp-agent with the eVPN driver to make public the private ports (IP and MACs) to the Ironic node ports. More information in [4]. > > *** BGP acceleration in ML2/OVN *** > Many questions related to this topic, both with DPDK and HW offload. I would refer (once the link is available) to the talk "Enabling multi-cluster connectivity using dynamic routing via BGP in Openstack" given by Christophe Fontaine during this PTG. You'll find it very interesting how this new implementation moves all the packet processing to the OVS datapath (removing any Linux Bridge / iptables processing). The example provided in the talk refers to the use of DPDK. > > > I hope this PTG was interesting for you! Don't hesitate to use the usual channels that are the mailing list and IRC. Remember we have the weekly Neutron meeting every Tuesday at 1400UTC. > > Regards. 
> > [1]https://specs.openstack.org/openstack/neutron-specs/specs/2023.1/ovn-ipv6-dvr.html > [2]https://review.opendev.org/c/openstack/neutron/+/867513 > [3]https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/agent/metadata/agent.py#L89 > [4]https://ltomasbo.wordpress.com/2021/06/25/openstack-networking-with-evpn/ > -- Frode Nordahl From mahendra.paipuri at cnrs.fr Tue Jun 20 13:00:41 2023 From: mahendra.paipuri at cnrs.fr (PAIPURI Mahendra) Date: Tue, 20 Jun 2023 13:00:41 +0000 Subject: =?utf-8?B?UkU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> , Message-ID: Hello Ulrich, I am relaunching this discussion as I noticed that you gave a talk about this topic at OpenInfra Summit in Vancouver. Is it possible to share the presentation here? I hope the talks will be uploaded soon in YouTube. We are mainly interested in using MIG instances in Openstack cloud and I could not really find a lot of information by googling. If you could share your experiences, that would be great. Cheers. Regards Mahendra ________________________________ De : Ulrich Schwickerath Envoy? : lundi 16 janvier 2023 11:38:08 ? : openstack-discuss at lists.openstack.org Objet : Re: ??: Experience with VGPUs Hi, all, just to add to the discussion, at CERN we have recently deployed a bunch of A100 GPUs in PCI passthrough mode, and are now looking into improving their usage by using MIG. From the NOVA point of view things seem to work OK, we can schedule VMs requesting a VGPU, the client starts up and gets a license token from our NVIDIA license server (distributing license keys is our private cloud is relatively easy in our case). It's a PoC only for the time being, and we're not ready to put that forward as we're facing issues with CUDA on the client (it fails immediately in memory operations with 'not supported', still investigating why this happens). Once we get that working it would be nice to be able to have a more fine grained scheduling so that people can ask for MIG devices of different size. The other challenge is how to set limits on GPU resources. Once the above issues have been sorted out we may want to look into cyborg as well thus we are quite interested in first experiences with this. Kind regards, Ulrich On 13.01.23 21:06, Dmitriy Rabotyagov wrote: To have that said, deb/rpm packages they are providing doesn't help much, as: * There is no repo for them, so you need to download them manually from enterprise portal * They can't be upgraded anyway, as driver version is part of the package name. And each package conflicts with any another one. So you need to explicitly remove old package and only then install new one. And yes, you must stop all VMs before upgrading driver and no, you can't live migrate GPU mdev devices due to that now being implemented in qemu. So deb/rpm/generic driver doesn't matter at the end tbh. ??, 13 ???. 2023 ?., 20:56 Cedric >: Ended up with the very same conclusions than Dimitry regarding the use of Nvidia Vgrid for the VGPU use case with Nova, it works pretty well but: - respecting the licensing model as operationnal constraints, note that guests need to reach a license server in order to get a token (could be via the Nvidia SaaS service or on-prem) - drivers for both guest and hypervisor are not easy to implement and maintain on large scale. 
A year ago, hypervisors drivers were not packaged to Debian/Ubuntu, but builded though a bash script, thus requiering additional automatisation work and careful attention regarding kernel update/reboot of Nova hypervisors. Cheers On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov > wrote: > > You are saying that, like Nvidia GRID drivers are open-sourced while > in fact they're super far from being that. In order to download > drivers not only for hypervisors, but also for guest VMs you need to > have an account in their Enterprise Portal. It took me roughly 6 weeks > of discussions with hardware vendors and Nvidia support to get a > proper account there. And that happened only after applying for their > Partner Network (NPN). > That still doesn't solve the issue of how to provide drivers to > guests, except pre-build a series of images with these drivers > pre-installed (we ended up with making a DIB element for that [1]). > Not saying about the need to distribute license tokens for guests and > the whole mess with compatibility between hypervisor and guest drivers > (as guest driver can't be newer then host one, and HVs can't be too > new either). > > It's not that I'm protecting AMD, but just saying that Nvidia is not > that straightforward either, and at least on paper AMD vGPUs look > easier both for operators and end-users. > > [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid > > > > > As for AMD cards, AMD stated that some of their MI series card supports SR-IOV for vGPUs. However, those drivers are never open source or provided closed source to public, only large cloud providers are able to get them. So I don't really recommend getting AMD cards for vGPU unless you are able to get support from them. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Jun 20 13:09:39 2023 From: eblock at nde.ag (Eugen Block) Date: Tue, 20 Jun 2023 13:09:39 +0000 Subject: [cinder] in-use volume retype error "Expecting to find domain in project" Message-ID: <20230620130939.Horde.zeJtHd65RXC8oUCoV6Bcy1n@webmail.nde.ag> Hi, while trying to reproduce the issue mentioned in [1] I stumbled upon [2], I commented there but because of the "wishlist" importance I don't expect too much attention. I have a production cloud with Victoria where I see the same issue as described in [2], trying to retype an in-use volume results in a stack trace in cinder: 2023-06-20 12:45:52.900 1085571 ERROR oslo_messaging.rpc.server keystoneauth1.exceptions.http.BadRequest: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-6b412e96-a735-4b94-a5e6-8252dd949297) I have also a test cloud installed the same way as the production cloud with V and upgraded to Wallaby where this stack trace does not occur, retyping an in-use volume just works there. I tried the described workaround in [1] and added a [nova] section to the cinder.conf, and that worked. There is no such section in the W cloud. I checked the Wallaby release notes and the only thing related to retype with ceph as backend is [3]. Could that make the difference here? One difference between V and W I noticed was that in V I see two volumes with the same name (but a different UUID, of course) during the retype process while in W I see only one volume at all times. So I assume that W handles the retype differently and it doesn't require a nova section. 
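In case it helps others hitting the same trace: the workaround boils down to giving cinder-volume explicit, domain-scoped credentials for the calls it makes back to Nova during an in-use retype. A sketch of the kind of [nova] section I mean (the auth_url, credentials and domain names below are placeholders, adapt them to your cloud):

```
[nova]
interface = internal
auth_type = password
auth_url = https://keystone.example.com:5000/v3
username = cinder
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default
region_name = RegionOne
```

With the two domain options present, Keystone can scope the project correctly, and the 400 "Expecting to find domain in project" above should no longer be raised.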
I'd appreciate any comment. Thanks, Eugen [1] https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034160.html [2] https://bugs.launchpad.net/cinder/+bug/1941062 [3] https://bugs.launchpad.net/cinder/+bug/1886543 From thomas at goirand.fr Thu Jun 15 06:06:59 2023 From: thomas at goirand.fr (thomas at goirand.fr) Date: Wed, 14 Jun 2023 23:06:59 -0700 Subject: [nova] [keystone] Default logrotate configuration for apache2 In-Reply-To: References: Message-ID: What you describe is one the reasons I decided to use uwsgi for all openstack API services in Debian (including keystone) for which copytruncate is fine. Thomas On Jun 14, 2023 1:48 PM, hai wu wrote: It seems the default logrotate configuration for apache2 log files from vanilla OS installation is to do the following daily: postrotate if invoke-rc.d apache2 status > /dev/null 2>&1; then \ invoke-rc.d apache2 reload > /dev/null 2>&1; \ fi; endscript Sometimes I noticed that not all log entries would show up for the same day after apache2 got reloaded. Also it seems redhat openstack switched its logrotate config to use copytruncate instead of reloading apache2 iirc. Is there some known issues with reloading apache2 daily for logrotate config? Sometimes there are keystone 503 errors, and I am wondering if that's related to the logrotate default config to reload apache2 daily.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From acogoluegnes at gmail.com Thu Jun 15 07:08:03 2023 From: acogoluegnes at gmail.com (=?UTF-8?Q?Arnaud_Cogolu=C3=A8gnes?=) Date: Thu, 15 Jun 2023 09:08:03 +0200 Subject: missing erlang-base_25.3.2.2-1rmq1ppa1~ubuntu22.04.1_amd64.deb In-Reply-To: References: Message-ID: Follow-up discussion on the appropriate GitHub repository: https://github.com/rabbitmq/erlang-debian-package/discussions/33. On Thu, Jun 15, 2023 at 12:22?AM Micha? Nasiadka wrote: > Hi Arnaud, > > So basically the thing that is missing - is a place where Erlang 25.x is > available for Ubuntu/Debian arm64. > We maintain releases that we can?t repackage to 3.12 easily, or most > probably not at all - because it would most probably break existing user > deployments of OpenStack Kolla. > > Is that something RabbitMQ team could provide - maybe in a separate PPA > repo than the existing one [1] if it?s problematic to have it in the same > PPA? > > [1]: https://launchpad.net/~rabbitmq/+archive/ubuntu/rabbitmq-erlang > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anupam.panta at f1soft.com Sat Jun 17 18:37:07 2023 From: anupam.panta at f1soft.com (Anupam Panta) Date: Sat, 17 Jun 2023 18:37:07 +0000 Subject: Help Needed: Configuring OpenStack Glance with VMware Message-ID: Dear OpenStack Community, I hope this email finds you well. I am reaching out to seek assistance and guidance regarding the configuration of OpenStack integration with VMware. I have encountered some difficulties while trying to configure Glance, and I would greatly appreciate any help or insights you can provide. Here is a brief overview of the issue I am facing: * Configuration Challenge: I am currently working on integrating OpenStack with VMware as the backend infrastructure for image storage and management. * Specific Problem: While configuring Glance, I am stuck with an error message stating "Store for identifier vmware not found". 
* Configuration Attempts: I have reviewed the relevant documentation, followed the recommended steps, and tried changing parameters to configure the VMware store backend in Glance. However, despite my efforts, the error persists.

I kindly request your guidance and expertise to help me troubleshoot and resolve this issue. Any advice, best practices, or configuration examples related to Glance integration with VMware would be highly appreciated.

To provide you with more context, here are some relevant details of my environment: I am using a kolla installation for the OpenStack deployment.

My globals.yml file consists of:

```
#### GLANCE CONFIG
glance_backend_vmware: "yes"
glance_backend_insecure: "yes"

#### VMWARE CONFIG
vmware_vcenter_host_ip:
vmware_vcenter_host_username:
vmware_vcenter_host_password:
vmware_datastore_name: glance-datastore
vmware_vcenter_name:
vmware_vcenter_cluster_name: "openstack"
vmware_vcenter_datacenter_name:
vmware_vcenter_insecure: "True"
```

After running the reconfigure command, my glance-api container is stuck in a restart loop with the error message "Store for identifier vmware not found". I also tried changing parameters in the glance-api.conf file, specifically `default_backend = vsphere` and `stores = file, http, swift, vmware`.

Your assistance will not only help me but also contribute to the collective knowledge of the community. Thank you in advance for your time and support. I look forward to your valuable insights and suggestions.

Regards,

Anupam Panta | LinkedIn
System Engineer
anupam.panta at f1soft.com
9802079915
Movers & Shakes Tower, Pulchowk, Nepal

Disclaimer
This e-mail and any attachments may contain confidential and privileged information. If you are not the intended recipient, please notify the sender immediately by return e-mail, followed by deleting this e-mail and destroying any copies thereof. Any dissemination or use of this information by a person other than the intended recipient is unauthorized and may be illegal unless consented to by the sender. F1soft International Private Limited or its employees are not responsible for any auto-generated spurious messages that you may receive from F1soft email addresses.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Outlook-fqipa5bs.png
Type: image/png
Size: 7818 bytes
Desc: Outlook-fqipa5bs.png
URL: 

From baptiste.jonglez at inria.fr Mon Jun 19 17:32:37 2023
From: baptiste.jonglez at inria.fr (Baptiste Jonglez)
Date: Mon, 19 Jun 2023 19:32:37 +0200
Subject: [ironic] [networking-generic-switch] NGS performance and moving to an agent-based design
Message-ID: 

Hello,

We have been hitting performance issues with networking-generic-switch (NGS), the Neutron ML2 plugin that performs dynamic reconfiguration of physical switches, most notably used by Ironic.

In our quest to improve NGS performance, we have designed and implemented an agent-based system for NGS, and we would now like to upstream it.
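As a rough illustration of the shape of this change (a hypothetical sketch, not the actual NGS code nor the implementation we are proposing): today the ML2 driver blocks the Neutron API worker on slow switch I/O, while an agent-based design only enqueues the change and lets a per-switch agent apply it asynchronously and batch updates:

```
# Hypothetical sketch only; class and method names are illustrative.
class InlineSwitchDriver:
    """Current shape: the ML2 driver blocks on slow switch I/O."""
    def update_port_postcommit(self, port_id, vlan, switch_session):
        # e.g. several SSH round trips to the switch, per port
        switch_session.set_access_vlan(port_id, vlan)

class AgentBackedSwitchDriver:
    """Proposed shape: the driver only enqueues a message; a separate
    agent owns the switch session and can batch/serialize changes."""
    def __init__(self, queue):
        self._queue = queue  # e.g. an RPC cast target per switch

    def update_port_postcommit(self, port_id, vlan, switch_session=None):
        self._queue.put(('set_access_vlan', port_id, vlan))
```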
Since this is a fairly big change, and some parts of the design such as HA are not completely clear yet, feel free to join the discussion on the RFE: https://bugs.launchpad.net/networking-generic-switch/+bug/2024385 Regards, Baptiste -- Baptiste Jonglez Research Engineer, Inria STACK team From ralonsoh at redhat.com Tue Jun 20 13:41:51 2023 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 20 Jun 2023 15:41:51 +0200 Subject: [neutron][ptg] PTG summary In-Reply-To: References: Message-ID: Hello: Luis, thanks for the update, the links and sharing the WIP implementation. Frode, how this method is improving the LRP scheduling? The callback is doing the same logic as before. Furthermore, if we are calculating the GW chassis for the LRP within the same DB transaction, that means the local IDL cache won't be updated until the end of this transaction. Sorry, I might be missing something. Regards. On Tue, Jun 20, 2023 at 3:02?PM Frode Nordahl wrote: > Hello, Rodolfo, > > I have relevant information on one of the points discussed below, so > just wanted to chime in. > > On Tue, Jun 20, 2023 at 12:44?PM Rodolfo Alonso Hernandez > wrote: > > [ snip ] > > > *** OVN L3 scheduler issue *** > > This issue has been reproduced in an environment with more than 5 > chassis with gateway ports. The router GW ports are assigned to the GW > chassis using a manual scheduler implemented in Neutron (the default one is > ``OVNGatewayLeastLoadedScheduler``). If one of the chassis is stopped, the > GW ports should be re-assigned to the other GW chassis. This is happening > but all ports fall under the same one; this re-scheduling should share the > ports among the other active chassis. > > * Action item: I'll open a LP bug and investigate this issue. > > Background on why this is happening and a solution is being worked on in > [5]. > > 5: https://review.opendev.org/c/openstack/neutron/+/874760 > > -- > Frode Nordahl > > > *** Size of the OVN SB "HA_Chassis_Group" table *** > > The OVN SB "HA_Chassis_Group" increases its size indefinitely with each > operation creating a router and assigning a new external gateway (external > network). This table never decreases, > > * Action item: I'll open a LP bug, investigate this issue and if this is > a core OVN issue, report it. > > > > *** Live migration with ML2/OVN *** > > This is a common topic and not only for ML2/OVN. The migration time has > many factors (memory size, applications running, network BW, etc) that > could slow down the migration time and trigger a communication gap during > this process. > > * Action item: to create better documentation, both in Nova and Neutron, > about the migration process, what has been done to improve it (for example, > the OVN multiple port binding) and what factors will affect the migration. > > > > *** ML2/OVN IPv6 DVR *** > > This spec was approved during the last cycle [1]. The implementation [2] > is under review. > > * Action item: to review the patch (for Neutron reviewers) > > * Action item: to implement the necessary tempest tests (for the feature > developers) > > > > *** BGP with ML2/OVS, exposing address blocks *** > > This user has successfully deployed Neutron with ML2/OVS and n-d-r. This > user is currently making public a certain set of FIPs. However, for other > VMs without FIPs, the goal is to make the router GW port IP address public, > using the address blocks functionality; this is not working according to > the user. 
> > * Action item: (for this user) to create a LP bug describing the > architecture of the deployment, the configuration used and the API commands > used to reproduce this issue. > > > > *** Metadata service (any backend) *** > > Neutron is in charge of deploying the Metadata service on the compute > nodes. Each time the metadata HTTP server is called, it requests from the > Neutron API the instance and tenant ID [3]. This method implies a RPC call. > In "busy" compute nodes, where the VMs are created and destroyed very fast, > this RPC communication is a bottleneck. > > * Action item: open a LP bug to implement the same > ``CacheBackedPluginApi`` used in the OVS agent. This RPC cached class > creates a set of subscriptions to the needed resources ("ports" in this > case). The Neutron API will send the port updated info and cached locally; > that makes unnecessary the RPC request if the resources are stored locally. > > > > *** ML2/OVN + Ironic nodes *** > > This user has deployed ML2/OVN with Ironic nodes, and is using > ovn-bgp-agent with the eVPN driver to make public the private ports (IP and > MACs) to the Ironic node ports. More information in [4]. > > > > *** BGP acceleration in ML2/OVN *** > > Many questions related to this topic, both with DPDK and HW offload. I > would refer (once the link is available) to the talk "Enabling > multi-cluster connectivity using dynamic routing via BGP in Openstack" > given by Christophe Fontaine during this PTG. You'll find it very > interesting how this new implementation moves all the packet processing to > the OVS datapath (removing any Linux Bridge / iptables processing). The > example provided in the talk refers to the use of DPDK. > > > > > > I hope this PTG was interesting for you! Don't hesitate to use the usual > channels that are the mailing list and IRC. Remember we have the weekly > Neutron meeting every Tuesday at 1400UTC. > > > > Regards. > > > > [1] > https://specs.openstack.org/openstack/neutron-specs/specs/2023.1/ovn-ipv6-dvr.html > > [2]https://review.opendev.org/c/openstack/neutron/+/867513 > > [3] > https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/agent/metadata/agent.py#L89 > > [4] > https://ltomasbo.wordpress.com/2021/06/25/openstack-networking-with-evpn/ > > > > > -- > Frode Nordahl > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Jun 20 13:47:33 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 20 Jun 2023 15:47:33 +0200 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: Le mar. 20 juin 2023 ? 15:12, PAIPURI Mahendra a ?crit : > Hello Ulrich, > > > I am relaunching this discussion as I noticed that you gave a talk about > this topic at OpenInfra Summit in Vancouver. Is it possible to share the > presentation here? I hope the talks will be uploaded soon in YouTube. > > > We are mainly interested in using MIG instances in Openstack cloud and I > could not really find a lot of information by googling. If you could share > your experiences, that would be great. > > > Due to scheduling conflicts, I wasn't able to attend Ulrich's session but his feedback will be greatly listened to by me. FWIW, there was also a short session about how to enable MIG and play with Nova at the OpenInfra stage (and that one I was able to attend it), and it was quite seamless. 
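To give a sense of what "seamless" means in practice, the host-side setup is only a handful of commands. A rough sketch only (not taken from that talk; the PCI addresses and the MIG profile IDs are made-up examples for an A100, and the sriov-manage path depends on how the GRID driver package was installed):

```bash
# 1) Let the GRID driver create the SR-IOV virtual functions for the GPU
/usr/lib/nvidia/sriov-manage -e 0000:41:00.0

# 2) Partition the card into MIG instances, e.g. two 3g.20gb slices
#    (profile ID 9 on an A100-40GB), plus their compute instances
nvidia-smi mig -cgi 9,9 -C

# 3) Each VF now exposes the matching nvidia mdev types; a profile that
#    fits reports available_instances = 1, i.e. a single vGPU per VF
cat /sys/bus/pci/devices/0000:41:00.4/mdev_supported_types/nvidia-*/available_instances
```

The mdev type the VFs report is then what you list in nova.conf under `[devices] enabled_mdev_types` (older releases call it `enabled_vgpu_types`).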
What exact information are you looking for ? The idea with MIG is that you need to create SRIOV VFs above the MIG instances using sriov-manage script provided by nvidia so that the mediated devices will use those VFs as the base PCI devices to be used for Nova. Cheers. > > > Regards > > Mahendra > ------------------------------ > *De :* Ulrich Schwickerath > *Envoy? :* lundi 16 janvier 2023 11:38:08 > *? :* openstack-discuss at lists.openstack.org > *Objet :* Re: ??: Experience with VGPUs > > > Hi, all, > > just to add to the discussion, at CERN we have recently deployed a bunch > of A100 GPUs in PCI passthrough mode, and are now looking into improving > their usage by using MIG. From the NOVA point of view things seem to work > OK, we can schedule VMs requesting a VGPU, the client starts up and gets a > license token from our NVIDIA license server (distributing license keys is > our private cloud is relatively easy in our case). It's a PoC only for the > time being, and we're not ready to put that forward as we're facing issues > with CUDA on the client (it fails immediately in memory operations with > 'not supported', still investigating why this happens). > > Once we get that working it would be nice to be able to have a more fine > grained scheduling so that people can ask for MIG devices of different > size. The other challenge is how to set limits on GPU resources. Once the > above issues have been sorted out we may want to look into cyborg as well > thus we are quite interested in first experiences with this. > > Kind regards, > > Ulrich > On 13.01.23 21:06, Dmitriy Rabotyagov wrote: > > To have that said, deb/rpm packages they are providing doesn't help much, > as: > * There is no repo for them, so you need to download them manually from > enterprise portal > * They can't be upgraded anyway, as driver version is part of the package > name. And each package conflicts with any another one. So you need to > explicitly remove old package and only then install new one. And yes, you > must stop all VMs before upgrading driver and no, you can't live migrate > GPU mdev devices due to that now being implemented in qemu. So > deb/rpm/generic driver doesn't matter at the end tbh. > > > ??, 13 ???. 2023 ?., 20:56 Cedric : > >> >> Ended up with the very same conclusions than Dimitry regarding the use of >> Nvidia Vgrid for the VGPU use case with Nova, it works pretty well but: >> >> - respecting the licensing model as operationnal constraints, note that >> guests need to reach a license server in order to get a token (could be via >> the Nvidia SaaS service or on-prem) >> - drivers for both guest and hypervisor are not easy to implement and >> maintain on large scale. A year ago, hypervisors drivers were not packaged >> to Debian/Ubuntu, but builded though a bash script, thus requiering >> additional automatisation work and careful attention regarding kernel >> update/reboot of Nova hypervisors. >> >> Cheers >> >> >> On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov < >> noonedeadpunk at gmail.com> wrote: >> > >> > You are saying that, like Nvidia GRID drivers are open-sourced while >> > in fact they're super far from being that. In order to download >> > drivers not only for hypervisors, but also for guest VMs you need to >> > have an account in their Enterprise Portal. It took me roughly 6 weeks >> > of discussions with hardware vendors and Nvidia support to get a >> > proper account there. And that happened only after applying for their >> > Partner Network (NPN). 
>> > That still doesn't solve the issue of how to provide drivers to >> > guests, except pre-build a series of images with these drivers >> > pre-installed (we ended up with making a DIB element for that [1]). >> > Not saying about the need to distribute license tokens for guests and >> > the whole mess with compatibility between hypervisor and guest drivers >> > (as guest driver can't be newer then host one, and HVs can't be too >> > new either). >> > >> > It's not that I'm protecting AMD, but just saying that Nvidia is not >> > that straightforward either, and at least on paper AMD vGPUs look >> > easier both for operators and end-users. >> > >> > [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid >> > >> > > >> > > As for AMD cards, AMD stated that some of their MI series card >> supports SR-IOV for vGPUs. However, those drivers are never open source or >> provided closed source to public, only large cloud providers are able to >> get them. So I don't really recommend getting AMD cards for vGPU unless you >> are able to get support from them. >> > > >> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Tue Jun 20 13:53:37 2023 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Tue, 20 Jun 2023 15:53:37 +0200 Subject: [neutron][ptg] PTG summary In-Reply-To: References: Message-ID: On Tue, Jun 20, 2023 at 3:42?PM Rodolfo Alonso Hernandez wrote: > > Hello: > > Luis, thanks for the update, the links and sharing the WIP implementation. > > Frode, how this method is improving the LRP scheduling? The callback is doing the same logic as before. Furthermore, if we are calculating the GW chassis for the LRP within the same DB transaction, that means the local IDL cache won't be updated until the end of this transaction. Sorry, I might be missing something. The essence of the change once done with it is that the scheduler code will be called while ovsdbapp is applying the transaction (i.e. from within run_idl). At that point, the IDL will be updated for every operation performed ref [0][1][2], which means they will impact each other. What currently happens is that the Neutron code adds in-flight operations to the ovsdbapp transaction (which is essentially a stack of operations to perform), without taking into account what's in that transaction. I'm in the process of updating the current proposal in line with the discussion I've had with Terry on that review, so you'll have something to look at within the next day or so. 0: https://github.com/openvswitch/ovs/blob/e3ba0be48ca457ab3a1c9f1e3522e82218eca0f9/python/ovs/db/idl.py#L1316 1: https://github.com/openvswitch/ovs/blob/e3ba0be48ca457ab3a1c9f1e3522e82218eca0f9/python/ovs/db/idl.py#L1400 2: https://github.com/openvswitch/ovs/blob/1f47d73996b0c565f9ce035c899a042f2ea394a6/python/ovs/db/idl.py#L2083 -- Frode Nordahl > Regards. > > On Tue, Jun 20, 2023 at 3:02?PM Frode Nordahl wrote: >> >> Hello, Rodolfo, >> >> I have relevant information on one of the points discussed below, so >> just wanted to chime in. >> >> On Tue, Jun 20, 2023 at 12:44?PM Rodolfo Alonso Hernandez >> wrote: >> >> [ snip ] >> >> > *** OVN L3 scheduler issue *** >> > This issue has been reproduced in an environment with more than 5 chassis with gateway ports. The router GW ports are assigned to the GW chassis using a manual scheduler implemented in Neutron (the default one is ``OVNGatewayLeastLoadedScheduler``). 
If one of the chassis is stopped, the GW ports should be re-assigned to the other GW chassis. This is happening but all ports fall under the same one; this re-scheduling should share the ports among the other active chassis. >> > * Action item: I'll open a LP bug and investigate this issue. >> >> Background on why this is happening and a solution is being worked on in [5]. >> >> 5: https://review.opendev.org/c/openstack/neutron/+/874760 >> >> -- >> Frode Nordahl >> >> > *** Size of the OVN SB "HA_Chassis_Group" table *** >> > The OVN SB "HA_Chassis_Group" increases its size indefinitely with each operation creating a router and assigning a new external gateway (external network). This table never decreases, >> > * Action item: I'll open a LP bug, investigate this issue and if this is a core OVN issue, report it. >> > >> > *** Live migration with ML2/OVN *** >> > This is a common topic and not only for ML2/OVN. The migration time has many factors (memory size, applications running, network BW, etc) that could slow down the migration time and trigger a communication gap during this process. >> > * Action item: to create better documentation, both in Nova and Neutron, about the migration process, what has been done to improve it (for example, the OVN multiple port binding) and what factors will affect the migration. >> > >> > *** ML2/OVN IPv6 DVR *** >> > This spec was approved during the last cycle [1]. The implementation [2] is under review. >> > * Action item: to review the patch (for Neutron reviewers) >> > * Action item: to implement the necessary tempest tests (for the feature developers) >> > >> > *** BGP with ML2/OVS, exposing address blocks *** >> > This user has successfully deployed Neutron with ML2/OVS and n-d-r. This user is currently making public a certain set of FIPs. However, for other VMs without FIPs, the goal is to make the router GW port IP address public, using the address blocks functionality; this is not working according to the user. >> > * Action item: (for this user) to create a LP bug describing the architecture of the deployment, the configuration used and the API commands used to reproduce this issue. >> > >> > *** Metadata service (any backend) *** >> > Neutron is in charge of deploying the Metadata service on the compute nodes. Each time the metadata HTTP server is called, it requests from the Neutron API the instance and tenant ID [3]. This method implies a RPC call. In "busy" compute nodes, where the VMs are created and destroyed very fast, this RPC communication is a bottleneck. >> > * Action item: open a LP bug to implement the same ``CacheBackedPluginApi`` used in the OVS agent. This RPC cached class creates a set of subscriptions to the needed resources ("ports" in this case). The Neutron API will send the port updated info and cached locally; that makes unnecessary the RPC request if the resources are stored locally. >> > >> > *** ML2/OVN + Ironic nodes *** >> > This user has deployed ML2/OVN with Ironic nodes, and is using ovn-bgp-agent with the eVPN driver to make public the private ports (IP and MACs) to the Ironic node ports. More information in [4]. >> > >> > *** BGP acceleration in ML2/OVN *** >> > Many questions related to this topic, both with DPDK and HW offload. I would refer (once the link is available) to the talk "Enabling multi-cluster connectivity using dynamic routing via BGP in Openstack" given by Christophe Fontaine during this PTG. 
You'll find it very interesting how this new implementation moves all the packet processing to the OVS datapath (removing any Linux Bridge / iptables processing). The example provided in the talk refers to the use of DPDK. >> > >> > >> > I hope this PTG was interesting for you! Don't hesitate to use the usual channels that are the mailing list and IRC. Remember we have the weekly Neutron meeting every Tuesday at 1400UTC. >> > >> > Regards. >> > >> > [1]https://specs.openstack.org/openstack/neutron-specs/specs/2023.1/ovn-ipv6-dvr.html >> > [2]https://review.opendev.org/c/openstack/neutron/+/867513 >> > [3]https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/agent/metadata/agent.py#L89 >> > [4]https://ltomasbo.wordpress.com/2021/06/25/openstack-networking-with-evpn/ >> > >> >> >> -- >> Frode Nordahl >> -- Frode Nordahl From mahendra.paipuri at cnrs.fr Tue Jun 20 14:11:09 2023 From: mahendra.paipuri at cnrs.fr (Mahendra Paipuri) Date: Tue, 20 Jun 2023 16:11:09 +0200 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: Thanks Sylvain for the pointers. One of the questions we have is: can we create MIG profiles on the host and then attach each one or more profile(s) to VMs? This bug [1] reports that once we attach one profile to a VM, rest of MIG profiles become unavailable. From what you have said about using SR-IOV and VFs, I guess this should be possible. I think you are talking about "vGPUs with OpenStack Nova" talk on OpenInfra stage. I will look into it once the videos will be online. [1] https://bugs.launchpad.net/nova/+bug/2008883 Thanks Regards Mahendra On 20/06/2023 15:47, Sylvain Bauza wrote: > > > Le?mar. 20 juin 2023 ??15:12, PAIPURI Mahendra > a ?crit?: > > Hello Ulrich, > > > I am relaunching this discussion as I noticed that you gave a talk > about this topic?at OpenInfra Summit in Vancouver. Is it possible > to share the presentation here? I hope the talks will be uploaded > soon in YouTube. > > > We are mainly interested in using MIG instances in Openstack cloud > and I could not really find a lot of information?by googling. If > you could share your experiences, that would be great. > > > > Due to scheduling conflicts, I wasn't able to attend Ulrich's session > but his feedback will be greatly listened to by me. > > FWIW, there was also a short session about how to enable MIG and play > with Nova at the OpenInfra stage (and that one I was able to attend > it), and it was quite seamless. What exact information are you looking > for ? > The idea with MIG is that you need to create SRIOV VFs above the MIG > instances using sriov-manage script provided by nvidia so that the > mediated devices will use those VFs as the base PCI devices to be used > for Nova. > > Cheers. > > > Regards > > Mahendra > > ------------------------------------------------------------------------ > *De :* Ulrich Schwickerath > *Envoy? :* lundi 16 janvier 2023 11:38:08 > *? :* openstack-discuss at lists.openstack.org > *Objet :* Re: ??: Experience with VGPUs > > Hi, all, > > just to add to the discussion, at CERN we have recently deployed a > bunch of A100 GPUs in PCI passthrough mode, and are now looking > into improving their usage by using MIG. 
From the NOVA point of > view things seem to work OK, we can schedule VMs requesting a > VGPU, the client starts up and gets a license token from our > NVIDIA license server (distributing license keys is our private > cloud is relatively easy in our case). It's a PoC only for the > time being, and we're not ready to put that forward as we're > facing issues with CUDA on the client (it fails immediately in > memory operations with 'not supported', still investigating why > this happens). > > Once we get that working it would be nice to be able to have a > more fine grained scheduling so that people can ask for MIG > devices of different size. The other challenge is how to set > limits on GPU resources. Once the above issues have been sorted > out we may want to look into cyborg as well thus we are quite > interested in first experiences with this. > > Kind regards, > > Ulrich > > On 13.01.23 21:06, Dmitriy Rabotyagov wrote: >> To have that said, deb/rpm packages they are providing doesn't >> help much, as: >> * There is no repo for them, so you need to download them >> manually from enterprise portal >> * They can't be upgraded anyway, as driver version is part of the >> package name. And each package conflicts with any another one. So >> you need to explicitly remove old package and only then install >> new one. And yes, you must stop all VMs before upgrading driver >> and no, you can't live migrate GPU mdev devices due to that now >> being implemented in qemu. So deb/rpm/generic driver doesn't >> matter at the end tbh. >> >> >> ??, 13 ???. 2023 ?., 20:56 Cedric : >> >> >> Ended up with the very same conclusions than Dimitry >> regarding the use of Nvidia Vgrid for the VGPU use case with >> Nova, it works pretty well but: >> >> - respecting the licensing model as operationnal constraints, >> note that guests need to reach a license server in order to >> get a token (could be via the Nvidia SaaS service or on-prem) >> - drivers for both guest and hypervisor are not easy to >> implement and maintain on large scale. A year ago, >> hypervisors drivers were not packaged to Debian/Ubuntu, but >> builded though a bash script, thus requiering additional >> automatisation work and careful attention regarding kernel >> update/reboot of Nova hypervisors. >> >> Cheers >> >> >> On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov >> wrote: >> > >> > You are saying that, like Nvidia GRID drivers are >> open-sourced while >> > in fact they're super far from being that. In order to download >> > drivers not only for hypervisors, but also for guest VMs >> you need to >> > have an account in their Enterprise Portal. It took me >> roughly 6 weeks >> > of discussions with hardware vendors and Nvidia support to >> get a >> > proper account there. And that happened only after applying >> for their >> > Partner Network (NPN). >> > That still doesn't solve the issue of how to provide drivers to >> > guests, except pre-build a series of images with these drivers >> > pre-installed (we ended up with making a DIB element for >> that [1]). >> > Not saying about the need to distribute license tokens for >> guests and >> > the whole mess with compatibility between hypervisor and >> guest drivers >> > (as guest driver can't be newer then host one, and HVs >> can't be too >> > new either). >> > >> > It's not that I'm protecting AMD, but just saying that >> Nvidia is not >> > that straightforward either, and at least on paper AMD >> vGPUs look >> > easier both for operators and end-users. 
>> > >> > [1] >> https://github.com/citynetwork/dib-elements/tree/main/nvgrid >> > >> > > >> > > As for AMD cards, AMD stated that some of their MI series >> card supports SR-IOV for vGPUs. However, those drivers are >> never open source or provided closed source to public, only >> large cloud providers are able to get them. So I don't really >> recommend getting AMD cards for vGPU unless you are able to >> get support from them. >> > > >> > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril at redhat.com Tue Jun 20 14:55:13 2023 From: cyril at redhat.com (Cyril Roelandt) Date: Tue, 20 Jun 2023 16:55:13 +0200 Subject: Help Needed: Configuring OpenStack Glance with VMware In-Reply-To: References: Message-ID: Hello, On 2023-06-17 18:37, Anupam Panta wrote: > > I am using kolla installation for openstack deployment. My global.yml file consists of: > ``` > #### GLANCE CONFIG > glance_backend_vmware: "yes" > glance_backend_insecure: "yes" > > > #### VMWARE CONFIG > vmware_vcenter_host_ip: > vmware_vcenter_host_username: > vmware_vcenter_host_password: > vmware_datastore_name: glance-datastore > vmware_vcenter_name: > vmware_vcenter_cluster_name: "openstack" > vmware_vcenter_datacenter_name: > vmware_vcenter_insecure: "True" > ``` > > After running reconfigure command, my glance-api container stuck in restart loop, with error message "Store for identifier vmware not found". I also tried changing parameters in glance-api.conf file, specific parameters `default_backend= vsphere`, `stores= file, http, swift, vmware`. The "stores" config option has been deprecated for a while. Your config should look something like this: [DEFAULT] enabled_backends = foo:vmware, bar:cinder [glance_store] default_backend = foo [foo] vmware_option = value ... [bar] cinder_option = value Of course you could also only define a single backend in "enabled_backends". Regards, Cyril From sbauza at redhat.com Tue Jun 20 15:27:12 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 20 Jun 2023 17:27:12 +0200 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: Le mar. 20 juin 2023 ? 16:31, Mahendra Paipuri a ?crit : > Thanks Sylvain for the pointers. > > One of the questions we have is: can we create MIG profiles on the host > and then attach each one or more profile(s) to VMs? This bug [1] reports > that once we attach one profile to a VM, rest of MIG profiles become > unavailable. From what you have said about using SR-IOV and VFs, I guess > this should be possible. > Correct, what you need is to create first the VFs using sriov-manage and then you can create the MIG instances. Once you create the MIG instances using the profiles you want, you will see that the related available_instances for the nvidia mdev type (by looking at sysfs) will say that you can have a single vGPU for this profile. Then, you can use that mdev type with Nova using nova.conf. That being said, while this above is simple, the below talk was saying more about how to correctly use the GPU by the host so please wait :-) > I think you are talking about "vGPUs with OpenStack Nova" talk on > OpenInfra stage. I will look into it once the videos will be online. > Indeed. -S > [1] https://bugs.launchpad.net/nova/+bug/2008883 > > Thanks > > Regards > > Mahendra > On 20/06/2023 15:47, Sylvain Bauza wrote: > > > > Le mar. 20 juin 2023 ? 
15:12, PAIPURI Mahendra a ?crit :
> [ snip ]
-------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Jun 20 15:33:53 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 20 Jun 2023 17:33:53 +0200 Subject: [nova][tempest] Hold your rechecks Message-ID: As far as I can tell, all nova-live-migration job runs are getting a FAILURE [1] due to [2] so please hold your rechecks. Thanks to Rodolfo, we have a workaround for it https://review.opendev.org/c/openstack/tempest/+/886496 so please wait until this change is merged. Tempest cores, it would be nice if you can look at the above change quickly :) Thanks, -Sylvain [1] https://zuul.openstack.org/builds?job_name=nova-live-migration&skip=0 [2] https://bugs.launchpad.net/nova/+bug/2024160 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ralonsoh at redhat.com Tue Jun 20 15:36:53 2023 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 20 Jun 2023 17:36:53 +0200 Subject: [neutron][ptg] PTG summary In-Reply-To: References: Message-ID: Hello Frode: The local IDL cache is not updated during a transaction but after it. In order to make this transaction aware of the previous operations that are being stacked in this transaction, you should make the scheduler conditional to the command results. In other words: this is not a transactional database operation. The "write" commands are stored in the transaction while the "read" operations use the local cache, which is *not* updated. In a nutshell, the result is the same. Once you commit the "write" operations, the DB server will push the updates to the local caches. Regards. On Tue, Jun 20, 2023 at 3:53?PM Frode Nordahl wrote: > On Tue, Jun 20, 2023 at 3:42?PM Rodolfo Alonso Hernandez > wrote: > > > > Hello: > > > > Luis, thanks for the update, the links and sharing the WIP > implementation. > > > > Frode, how this method is improving the LRP scheduling? The callback is > doing the same logic as before. Furthermore, if we are calculating the GW > chassis for the LRP within the same DB transaction, that means the local > IDL cache won't be updated until the end of this transaction. Sorry, I > might be missing something. > > The essence of the change once done with it is that the scheduler code > will be called while ovsdbapp is applying the transaction (i.e. from > within run_idl). At that point, the IDL will be updated for every > operation performed ref [0][1][2], which means they will impact each > other. > > What currently happens is that the Neutron code adds in-flight > operations to the ovsdbapp transaction (which is essentially a stack > of operations to perform), without taking into account what's in that > transaction. > > I'm in the process of updating the current proposal in line with the > discussion I've had with Terry on that review, so you'll have > something to look at within the next day or so. > > 0: > https://github.com/openvswitch/ovs/blob/e3ba0be48ca457ab3a1c9f1e3522e82218eca0f9/python/ovs/db/idl.py#L1316 > 1: > https://github.com/openvswitch/ovs/blob/e3ba0be48ca457ab3a1c9f1e3522e82218eca0f9/python/ovs/db/idl.py#L1400 > 2: > https://github.com/openvswitch/ovs/blob/1f47d73996b0c565f9ce035c899a042f2ea394a6/python/ovs/db/idl.py#L2083 > > -- > Frode Nordahl > > > Regards. > > > > On Tue, Jun 20, 2023 at 3:02?PM Frode Nordahl < > frode.nordahl at canonical.com> wrote: > >> > >> Hello, Rodolfo, > >> > >> I have relevant information on one of the points discussed below, so > >> just wanted to chime in. > >> > >> On Tue, Jun 20, 2023 at 12:44?PM Rodolfo Alonso Hernandez > >> wrote: > >> > >> [ snip ] > >> > >> > *** OVN L3 scheduler issue *** > >> > This issue has been reproduced in an environment with more than 5 > chassis with gateway ports. The router GW ports are assigned to the GW > chassis using a manual scheduler implemented in Neutron (the default one is > ``OVNGatewayLeastLoadedScheduler``). If one of the chassis is stopped, the > GW ports should be re-assigned to the other GW chassis. This is happening > but all ports fall under the same one; this re-scheduling should share the > ports among the other active chassis. > >> > * Action item: I'll open a LP bug and investigate this issue. > >> > >> Background on why this is happening and a solution is being worked on > in [5]. 
> >> 5: https://review.opendev.org/c/openstack/neutron/+/874760
> >> --
> >> Frode Nordahl
> >> [ snip ]
-------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Tue Jun 20 15:55:45 2023 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Tue, 20 Jun 2023 17:55:45 +0200 Subject: [neutron][ptg] PTG summary In-Reply-To: References: Message-ID: tir. 20. jun. 2023, 17:37 skrev Rodolfo Alonso Hernandez < ralonsoh at redhat.com>:
> Hello Frode:
>
> The local IDL cache is not updated during a transaction but after it. In
> order to make this transaction aware of the previous operations that are
> being stacked in this transaction, you should make the scheduler
> conditional to the command results.
> [ snip ]
It is true they are updated after the transaction, but the in-memory representation is also updated as operations are applied by the OVS Python IDL library (see references). That is kind of the whole point of the IDL. But as I said, we need to be careful about what lookups are done in neutron context and what lookups are done in ovsdbapp run_idl context. You can also look at the two reviews that does this and their functional tests if you need more proof :) Better yet, wait until tomorrow and you'll have two reviewable changes that does this and thereby fixed the issue.
--
Frode Nordahl
> Regards.
>
> On Tue, Jun 20, 2023 at 3:53?PM Frode Nordahl > wrote:
> [ snip ]
-------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Tue Jun 20 17:07:57 2023 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Tue, 20 Jun 2023 19:07:57 +0200 Subject: Experience with VGPUs In-Reply-To: References: Message-ID: <2DD18791-4BFD-4FF0-AAAC-77D8C18FB138@me.com> An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Tue Jun 20 17:40:25 2023 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 20 Jun 2023 13:40:25 -0400 Subject: [kolla] nova-compute image build failed with invalid signature Message-ID: Folks, Just wanted to check with folks if I did anything wrong here. Building 2023.1 image of nova-compute and encounter errors.
root at docker-reg:/opt/kolla/etc/kolla# kolla-build --registry docker-reg:4000 --config-file kolla-build.conf --debug --cache --threads 1 --skip-existing --push --format none nova-compute INFO:kolla.common.utils:Using engine: docker INFO:kolla.common.utils:Found the container image folder at /usr/local/share/kolla/docker INFO:kolla.common.utils:Added image nova-compute-ironic to queue INFO:kolla.common.utils:Added image nova-compute to queue INFO:kolla.common.utils:Attempt number: 1 to run task: BuildTask(nova-compute-ironic) DEBUG:kolla.common.utils.nova-compute-ironic:Processing INFO:kolla.common.utils.nova-compute-ironic:Building started at 2023-06-20 17:36:33.771585 DEBUG:kolla.common.utils.nova-compute-ironic:Turned 0 plugins into plugins archive DEBUG:kolla.common.utils.nova-compute-ironic:Turned 0 additions into additions archive INFO:kolla.common.utils.nova-compute-ironic:Step 1/4 : FROM docker-reg:4000/kolla/nova-base:2023.1 INFO:kolla.common.utils.nova-compute-ironic: ---> 0ebe71c922b4 INFO:kolla.common.utils.nova-compute-ironic:Step 2/4 : LABEL maintainer="Kolla Project (https://launchpad.net/kolla)" name="nova-compute-ironic" build-date="20230620" INFO:kolla.common.utils.nova-compute-ironic: ---> Using cache INFO:kolla.common.utils.nova-compute-ironic: ---> 5209e0cc6403 INFO:kolla.common.utils.nova-compute-ironic:Step 3/4 : RUN apt-get --error-on=any update && apt-get -y install --no-install-recommends genisoimage nvme-cli && apt-get clean && rm -rf /var/lib/apt/lists/* INFO:kolla.common.utils.nova-compute-ironic: ---> Running in 196bc3715e45 INFO:kolla.common.utils.nova-compute-ironic:Get:1 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope InRelease [5,463 B] INFO:kolla.common.utils.nova-compute-ironic:Get:2 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB] INFO:kolla.common.utils.nova-compute-ironic:Err:1 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope InRelease INFO:kolla.common.utils.nova-compute-ironic: At least one invalid signature was encountered. INFO:kolla.common.utils.nova-compute-ironic:Get:3 http://mirrors.ubuntu.com/mirrors.txt Mirrorlist [3,463 B] INFO:kolla.common.utils.nova-compute-ironic:Get:6 http://archive.ubuntu.com/ubuntu jammy-security InRelease [110 kB] INFO:kolla.common.utils.nova-compute-ironic:Err:2 http://archive.ubuntu.com/ubuntu jammy-backports InRelease INFO:kolla.common.utils.nova-compute-ironic: At least one invalid signature was encountered. INFO:kolla.common.utils.nova-compute-ironic:Get:5 https://mirror.dal.nexril.net/ubuntu jammy-updates InRelease [119 kB] INFO:kolla.common.utils.nova-compute-ironic:Err:6 http://archive.ubuntu.com/ubuntu jammy-security InRelease INFO:kolla.common.utils.nova-compute-ironic: At least one invalid signature was encountered. INFO:kolla.common.utils.nova-compute-ironic:Get:4 http://ftp.usf.edu/pub/ubuntu jammy InRelease [270 kB] INFO:kolla.common.utils.nova-compute-ironic:Err:5 https://mirror.dal.nexril.net/ubuntu jammy-updates InRelease INFO:kolla.common.utils.nova-compute-ironic: At least one invalid signature was encountered. INFO:kolla.common.utils.nova-compute-ironic:Err:4 http://ftp.usf.edu/pub/ubuntu jammy InRelease INFO:kolla.common.utils.nova-compute-ironic: At least one invalid signature was encountered. INFO:kolla.common.utils.nova-compute-ironic:Reading package lists... 
INFO:kolla.common.utils.nova-compute-ironic:W: GPG error: http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope InRelease: At least one invalid signature was encountered. INFO:kolla.common.utils.nova-compute-ironic:E: The repository 'http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope InRelease' is not signed. INFO:kolla.common.utils.nova-compute-ironic:W: GPG error: http://archive.ubuntu.com/ubuntu jammy-backports InRelease: At least one invalid signature was encountered. INFO:kolla.common.utils.nova-compute-ironic:E: The repository 'http://archive.ubuntu.com/ubuntu jammy-backports InRelease' is not signed. INFO:kolla.common.utils.nova-compute-ironic:W: GPG error: http://archive.ubuntu.com/ubuntu jammy-security InRelease: At least one invalid signature was encountered. INFO:kolla.common.utils.nova-compute-ironic:E: The repository 'mirror://mirrors.ubuntu.com/mirrors.txt jammy-security InRelease' is not signed. INFO:kolla.common.utils.nova-compute-ironic:W: GPG error: https://mirror.dal.nexril.net/ubuntu jammy-updates InRelease: At least one invalid signature was encountered. INFO:kolla.common.utils.nova-compute-ironic:E: The repository 'mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates InRelease' is not signed. INFO:kolla.common.utils.nova-compute-ironic:W: GPG error: http://ftp.usf.edu/pub/ubuntu jammy InRelease: At least one invalid signature was encountered. INFO:kolla.common.utils.nova-compute-ironic:E: The repository 'mirror://mirrors.ubuntu.com/mirrors.txt jammy InRelease' is not signed. INFO:kolla.common.utils.nova-compute-ironic: INFO:kolla.common.utils.nova-compute-ironic:Removing intermediate container 196bc3715e45 ERROR:kolla.common.utils.nova-compute-ironic:Error'd with the following message ERROR:kolla.common.utils.nova-compute-ironic:The command '/bin/sh -c apt-get --error-on=any update && apt-get -y install --no-install-recommends genisoimage nvme-cli && apt-get clean && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100 INFO:kolla.common.utils:Attempt number: 2 to run task: BuildTask(nova-compute-ironic) DEBUG:kolla.common.utils.nova-compute-ironic:Processing Full trace - https://paste.opendev.org/show/b3IYgcUXsvg0kxVpEaLM/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From maksim.malchuk at gmail.com Tue Jun 20 19:11:52 2023 From: maksim.malchuk at gmail.com (Maksim Malchuk) Date: Tue, 20 Jun 2023 22:11:52 +0300 Subject: [kolla] nova-compute image build failed with invalid signature In-Reply-To: References: Message-ID: Looks like a network issue. Try to manually download any InRelease file using curl from any running container. On Tue, Jun 20, 2023 at 8:48?PM Satish Patel wrote: > Folks, > > Just wanted to check with folks if I did anything wrong here. Building > 2023.1 image of nova-compute and encounter errors. 
> [ snip ]
> Full trace - https://paste.opendev.org/show/b3IYgcUXsvg0kxVpEaLM/
--
Regards,
Maksim Malchuk
-------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Tue Jun 20 21:00:41 2023 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 20 Jun 2023 17:00:41 -0400 Subject: [kolla] nova-compute image build failed with invalid signature In-Reply-To: References: Message-ID: Hi Maksim, Very sorry but it turned out to be a disk full issue :( sorry for the spamming. On Tue, Jun 20, 2023 at 3:12?PM Maksim Malchuk wrote: > Looks like a network issue. Try to manually download any InRelease file > using curl from any running container. > > On Tue, Jun 20, 2023 at 8:48?PM Satish Patel wrote: >> Folks, >> >> Just wanted to check with folks if I did anything wrong here. Building >> 2023.1 image of nova-compute and encounter errors.
>> [ snip ]
>> INFO:kolla.common.utils.nova-compute-ironic:W: GPG error: http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope InRelease: At least one invalid signature was encountered. >> INFO:kolla.common.utils.nova-compute-ironic:E: The repository 'http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-updates/antelope InRelease' is not signed. >> INFO:kolla.common.utils.nova-compute-ironic:W: GPG error: http://archive.ubuntu.com/ubuntu jammy-backports InRelease: At least one invalid signature was encountered. >> INFO:kolla.common.utils.nova-compute-ironic:E: The repository 'http://archive.ubuntu.com/ubuntu jammy-backports InRelease' is not signed. >> INFO:kolla.common.utils.nova-compute-ironic:W: GPG error: http://archive.ubuntu.com/ubuntu jammy-security InRelease: At least one invalid signature was encountered. >> INFO:kolla.common.utils.nova-compute-ironic:E: The repository 'mirror://mirrors.ubuntu.com/mirrors.txt jammy-security InRelease' is not signed. >> INFO:kolla.common.utils.nova-compute-ironic:W: GPG error: https://mirror.dal.nexril.net/ubuntu jammy-updates InRelease: At least one invalid signature was encountered. >> INFO:kolla.common.utils.nova-compute-ironic:E: The repository 'mirror://mirrors.ubuntu.com/mirrors.txt jammy-updates InRelease' is not signed. >> INFO:kolla.common.utils.nova-compute-ironic:W: GPG error: http://ftp.usf.edu/pub/ubuntu jammy InRelease: At least one invalid signature was encountered. >> INFO:kolla.common.utils.nova-compute-ironic:E: The repository 'mirror://mirrors.ubuntu.com/mirrors.txt jammy InRelease' is not signed. >> INFO:kolla.common.utils.nova-compute-ironic: >> INFO:kolla.common.utils.nova-compute-ironic:Removing intermediate container 196bc3715e45 >> ERROR:kolla.common.utils.nova-compute-ironic:Error'd with the following message >> ERROR:kolla.common.utils.nova-compute-ironic:The command '/bin/sh -c apt-get --error-on=any update && apt-get -y install --no-install-recommends genisoimage nvme-cli && apt-get clean && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100 >> INFO:kolla.common.utils:Attempt number: 2 to run task: BuildTask(nova-compute-ironic) >> DEBUG:kolla.common.utils.nova-compute-ironic:Processing >> >> >> Full trace - https://paste.opendev.org/show/b3IYgcUXsvg0kxVpEaLM/ >> > > > -- > Regards, > Maksim Malchuk > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbaker at redhat.com Wed Jun 21 03:49:02 2023 From: sbaker at redhat.com (Steve Baker) Date: Wed, 21 Jun 2023 15:49:02 +1200 Subject: [ironic] [networking-generic-switch] NGS performance and moving to an agent-based design In-Reply-To: References: Message-ID: Baptiste presented a talk for this at the Vancouver summit last week which clearly lays out the evolution of this, I'd recommend watching the video when it is published, titled: Scaling bare-metal network reconfiguration for a large-scale research infrastructure: a journey with Neutron and Networking-Generic-Switch On 20/06/23 05:32, Baptiste Jonglez wrote: > Hello, > > We have been hitting performance issues with networking-generic-switch > (NGS), the Neutron ML2 plugin that performs dynamic reconfiguration of > physical switches, most notably used by Ironic. > > In our quest to improve NGS performance, we have designed and implemented > an agent-based system for NGS, and we would now like to upstream it. 
>
> Since this is a fairly big change, and some parts of the design such as HA
> are not completely clear yet, feel free to join the discussion on the RFE:
>
> https://bugs.launchpad.net/networking-generic-switch/+bug/2024385
>
> Regards,
> Baptiste
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From satish.txt at gmail.com  Wed Jun 21 04:07:48 2023
From: satish.txt at gmail.com (Satish Patel)
Date: Wed, 21 Jun 2023 00:07:48 -0400
Subject: [kolla-ansible][nova] nova upgrade failed from zed to antelope
Message-ID: 

Folks,

I'm upgrading from Zed to Antelope using kolla-ansible and encountered the
following error:

TASK [nova : Upgrade status check result]
************************************************************************************************************************************************************
fatal: [kolla-infra-1]: FAILED! => {"changed": false, "msg": ["There was
an upgrade status check failure!", "See the detail at
https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks
"]}
fatal: [kolla-infra-2]: FAILED! => {"changed": false, "msg": ["There was
an upgrade status check failure!", "See the detail at
https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks
"]}
fatal: [kolla-infra-3]: FAILED! => {"changed": false, "msg": ["There was
an upgrade status check failure!", "See the detail at
https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks
"]}

After running the upgrade check command manually in the nova-api container,
I got the following error:

(nova-api)[root at kolla-infra-2 /]# nova-status upgrade check
Modules with known eventlet monkey patching issues were imported prior to
eventlet monkey patching: urllib3. This warning can usually be ignored if
the caller is only importing and not executing nova code.
+---------------------------------------------------------------------+
| Upgrade Check Results |
+---------------------------------------------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+---------------------------------------------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+---------------------------------------------------------------------+
| Check: Cinder API |
| Result: Success |
| Details: None |
+---------------------------------------------------------------------+
| Check: Policy File JSON to YAML Migration |
| Result: Success |
| Details: None |
+---------------------------------------------------------------------+
| Check: Older than N-1 computes |
| Result: Success |
| Details: None |
+---------------------------------------------------------------------+
| Check: hw_machine_type unset |
| Result: Success |
| Details: None |
+---------------------------------------------------------------------+
| Check: Service User Token Configuration |
| Result: Failure |
| Details: Service user token configuration is required for all Nova |
| services. For more details see the following: https://docs |
| .openstack.org/latest/nova/admin/configuration/service- |
| user-token.html |
+---------------------------------------------------------------------+

The service user token failure references the following doc [1]. Do I need
to configure a service user token in order to upgrade Nova?

[1]
https://docs.openstack.org/nova/latest/admin/configuration/service-user-token.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From sbauza at redhat.com Wed Jun 21 08:06:43 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 21 Jun 2023 10:06:43 +0200 Subject: [kolla-ansible][nova] nova upgrade failed from zed to antelope In-Reply-To: References: Message-ID: Le mer. 21 juin 2023 ? 06:16, Satish Patel a ?crit : > Folks, > > I'am upgrading zed to antelope using kolla-ansible and encount following > error > > TASK [nova : Upgrade status check result] > ************************************************************************************************************************************************************ > fatal: [kolla-infra-1]: FAILED! => {"changed": false, "msg": ["There was > an upgrade status check failure!", "See the detail at > https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks > "]} > fatal: [kolla-infra-2]: FAILED! => {"changed": false, "msg": ["There was > an upgrade status check failure!", "See the detail at > https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks > "]} > fatal: [kolla-infra-3]: FAILED! => {"changed": false, "msg": ["There was > an upgrade status check failure!", "See the detail at > https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks > "]} > > > After running upgrade check command manually on nova-api container got > following error > > (nova-api)[root at kolla-infra-2 /]# nova-status upgrade check > Modules with known eventlet monkey patching issues were imported prior to > eventlet monkey patching: urllib3. This warning can usually be ignored if > the caller is only importing and not executing nova code. > +---------------------------------------------------------------------+ > | Upgrade Check Results | > +---------------------------------------------------------------------+ > | Check: Cells v2 | > | Result: Success | > | Details: None | > +---------------------------------------------------------------------+ > | Check: Placement API | > | Result: Success | > | Details: None | > +---------------------------------------------------------------------+ > | Check: Cinder API | > | Result: Success | > | Details: None | > +---------------------------------------------------------------------+ > | Check: Policy File JSON to YAML Migration | > | Result: Success | > | Details: None | > +---------------------------------------------------------------------+ > | Check: Older than N-1 computes | > | Result: Success | > | Details: None | > +---------------------------------------------------------------------+ > | Check: hw_machine_type unset | > | Result: Success | > | Details: None | > +---------------------------------------------------------------------+ > | Check: Service User Token Configuration | > | Result: Failure | > | Details: Service user token configuration is required for all Nova | > | services. For more details see the following: https://docs | > | .openstack.org/latest/nova/admin/configuration/service- | > | user-token.html | > +---------------------------------------------------------------------+ > > Service user token reference to the following doc [1] . Do I need to > configure token users in order to upgrade nova? > > As you can read in the documentation below, yes indeed it became mandatory by the 27.1.0 release. In general, if you have a question about a new release, you can look at the release notes by https://docs.openstack.org/releasenotes/nova/2023.1.html#relnotes-27-1-0-stable-2023-1 and you'll see what is changed. 
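For reference, a minimal sketch of the [service_user] section the check is
looking for in nova.conf (the endpoint and credentials below are
placeholders, not values from this thread; adjust them to your deployment):

[service_user]
send_service_user_token = true
auth_type = password
auth_url = http://<keystone-internal-endpoint>:5000
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = <nova-service-user-password>
valid_interfaces = internal

As the check output says, this is required for all Nova services, not just
nova-api.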
HTH,
-Sylvain

[1]
> https://docs.openstack.org/nova/latest/admin/configuration/service-user-token.html
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From maksim.malchuk at gmail.com  Wed Jun 21 08:15:25 2023
From: maksim.malchuk at gmail.com (Maksim Malchuk)
Date: Wed, 21 Jun 2023 11:15:25 +0300
Subject: Re: [kolla-ansible][nova] nova upgrade failed from zed to antelope
In-Reply-To: 
References: 
Message-ID: 

Hi Satish,

It's very strange, because the fix for the service token for Nova was
merged a month ago (
https://review.opendev.org/q/I2189dafca070accfd8efcd4b8cc4221c6decdc9f).
Maybe you have a custom configuration which overrides nova.conf?

On Wed, Jun 21, 2023 at 7:16?AM Satish Patel wrote:

> Folks,
>
> I'am upgrading zed to antelope using kolla-ansible and encount following
> error
>
> TASK [nova : Upgrade status check result]
> ************************************************************************************************************************************************************
> fatal: [kolla-infra-1]: FAILED! => {"changed": false, "msg": ["There was
> an upgrade status check failure!", "See the detail at
> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks
> "]}
> fatal: [kolla-infra-2]: FAILED! => {"changed": false, "msg": ["There was
> an upgrade status check failure!", "See the detail at
> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks
> "]}
> fatal: [kolla-infra-3]: FAILED! => {"changed": false, "msg": ["There was
> an upgrade status check failure!", "See the detail at
> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks
> "]}
>
>
> After running upgrade check command manually on nova-api container got
> following error
>
> (nova-api)[root at kolla-infra-2 /]# nova-status upgrade check
> Modules with known eventlet monkey patching issues were imported prior to
> eventlet monkey patching: urllib3. This warning can usually be ignored if
> the caller is only importing and not executing nova code.
> +---------------------------------------------------------------------+
> | Upgrade Check Results |
> +---------------------------------------------------------------------+
> | Check: Cells v2 |
> | Result: Success |
> | Details: None |
> +---------------------------------------------------------------------+
> | Check: Placement API |
> | Result: Success |
> | Details: None |
> +---------------------------------------------------------------------+
> | Check: Cinder API |
> | Result: Success |
> | Details: None |
> +---------------------------------------------------------------------+
> | Check: Policy File JSON to YAML Migration |
> | Result: Success |
> | Details: None |
> +---------------------------------------------------------------------+
> | Check: Older than N-1 computes |
> | Result: Success |
> | Details: None |
> +---------------------------------------------------------------------+
> | Check: hw_machine_type unset |
> | Result: Success |
> | Details: None |
> +---------------------------------------------------------------------+
> | Check: Service User Token Configuration |
> | Result: Failure |
> | Details: Service user token configuration is required for all Nova |
> | services. For more details see the following: https://docs |
> | .openstack.org/latest/nova/admin/configuration/service- |
> | user-token.html |
> +---------------------------------------------------------------------+
>
> Service user token reference to the following doc [1] . Do I need to
> configure token users in order to upgrade nova?
>
> [1]
> https://docs.openstack.org/nova/latest/admin/configuration/service-user-token.html
>
>
>

--
Regards,
Maksim Malchuk

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From senrique at redhat.com  Wed Jun 21 08:50:42 2023
From: senrique at redhat.com (Sofia Enriquez)
Date: Wed, 21 Jun 2023 09:50:42 +0100
Subject: Cinder Bug Report 2023-06-21
Message-ID: 

Hello Argonauts,

Cinder Bug Meeting Etherpad

*Wishlist*

- Restore RBD Incremental Backup Fails When Using the --container Argument.
  - *Status*: Fix proposed to master.

*Low*

- Hitachi: wrong exception when deleted volume is busy.
  - *Status*: Fix proposed to master.

Cheers,
--
Sofía Enriquez
she/her
Software Engineer
Red Hat PnT
IRC: @enriquetaso
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From antony at edgebricks.com  Wed Jun 21 09:45:37 2023
From: antony at edgebricks.com (Antony P)
Date: Wed, 21 Jun 2023 15:15:37 +0530
Subject: HOT template help
Message-ID: 

Hi team,

In my user data I set:

jenkins_password=$(sudo cat /var/lib/jenkins/secrets/initialAdminPassword)

I want to pass this value on as a stack output, along the lines of:

Output:
  value: password

How can I resolve this?

--Warm Regards
Antony P
Jr. DevOps Engineer
antony at edgebricks.com
Mob No: +919498079898
www.edgebricks.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From mrunge at matthias-runge.de  Wed Jun 21 10:13:37 2023
From: mrunge at matthias-runge.de (Matthias Runge)
Date: Wed, 21 Jun 2023 12:13:37 +0200
Subject: Re: [ceilometer][gnocchi][telemetry] (User) frontend or tool to
 create reports on telemetry data (GPU, RAM, disk, network, ...)
In-Reply-To: 
References: 
Message-ID: <247e9ba9-6045-ace9-09ff-3d40039039e7@matthias-runge.de>

Hi,

There was an integration with ceilometer ages ago, but it was perceived as
unbearably slow. I can't find code references anymore.

The issue with using Grafana is that it doesn't do any user-based
filtering.

Matthias

On 19/06/2023 15:51, Dmitriy Rabotyagov wrote:
> I would say that you need to do that on the publisher side. For
> Gnocchi there's a Grafana plugin [1] that can be potentially used for
> representing data to users. I'm not aware of any Gnocchi integration
> with Horizon directly. Though it would make total sense to me to have
> such plugin and there was even an old spec for that [2]
>
> [1] https://grafana.com/grafana/plugins/gnocchixyz-gnocchi-datasource/
> [2] https://blueprints.launchpad.net/horizon/+spec/horizon-gnocchi-graphs
>
> ??, 19 ???. 2023??. ? 15:36, Christian Rohmann :
>>
>> Hello OpenStack-Discuss,
>>
>> I was wondering if there are any simple frontends or similar solutions
>> to provide ceilometer's telemetry data, such as CPU, RAM, volume
>> throughput and IOPs, network throughput, ... to users? Something like a
>> basic horizon dashboard?
>>
>> Or does anybody have written something using the API to create some nice
>> graphs?
>> I know there is Cloudkitty, but that looks more targeted towards doing
>> rating and billing with this kind of data?
>>
>>
>> Thanks and with regards
>>
>> Christian
>>
>>
>
--
Matthias Runge

From kkchn.in at gmail.com  Wed Jun 21 11:33:53 2023
From: kkchn.in at gmail.com (KK CHN)
Date: Wed, 21 Jun 2023 17:03:53 +0530
Subject: DRaaS question
Message-ID: 

List,

I am looking for solutions using OpenStack similar to:

1. VMware VCDR (VMware Cloud Disaster Recovery) as a pilot-light DR
solution with hassle-free failover and failback.

2. AWS Cloud Replication Agent (Cloud Endurance), an operational DR
solution in the cloud.

(Both 1 and 2 are capable of acting as an operational DR in the cloud and
serving from AWS, as demonstrated in a PoC.)

Any solutions or directives to achieve a similar or better setup using
OpenStack techniques and components? Please enlighten me with your ideas.

Thank you,
Krishane.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From satish.txt at gmail.com  Wed Jun 21 13:02:20 2023
From: satish.txt at gmail.com (Satish Patel)
Date: Wed, 21 Jun 2023 09:02:20 -0400
Subject: Re: [kolla-ansible][nova] nova upgrade failed from zed to antelope
In-Reply-To: 
References: 
Message-ID: 

Hi Maksim,

This is all I have in my config/; I don't have any overrides.

(venv-kolla) root at kolla-infra-1:~# ls -l /etc/kolla/config/
total 8
-rw-r--r-- 1 root root  187 Apr 30 04:11 global.conf
drwxr-xr-x 2 root root 4096 May  3 01:38 neutron

(venv-kolla) root at kolla-infra-1:~# cat /etc/kolla/config/global.conf
[oslo_messaging_rabbit]
kombu_reconnect_delay=0.5
rabbit_transient_queues_ttl=60

On Wed, Jun 21, 2023 at 4:15?AM Maksim Malchuk wrote:

> Hi Satish,
>
> It very strange because the fix for service token for the Nova was merged
> a month ago (
> https://review.opendev.org/q/I2189dafca070accfd8efcd4b8cc4221c6decdc9f)
> Maybe you have custom configuration which overrides nova.conf ?
>
> On Wed, Jun 21, 2023 at 7:16?AM Satish Patel wrote:
>
>> Folks,
>>
>> I'am upgrading zed to antelope using kolla-ansible and encount following
>> error
>>
>> TASK [nova : Upgrade status check result]
>> ************************************************************************************************************************************************************
>> fatal: [kolla-infra-1]: FAILED! => {"changed": false, "msg": ["There was
>> an upgrade status check failure!", "See the detail at
>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks
>> "]}
>> fatal: [kolla-infra-2]: FAILED! => {"changed": false, "msg": ["There was
>> an upgrade status check failure!", "See the detail at
>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks
>> "]}
>> fatal: [kolla-infra-3]: FAILED! => {"changed": false, "msg": ["There was
>> an upgrade status check failure!", "See the detail at
>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks
>> "]}
>>
>>
>> After running upgrade check command manually on nova-api container got
>> following error
>>
>> (nova-api)[root at kolla-infra-2 /]# nova-status upgrade check
>> Modules with known eventlet monkey patching issues were imported prior
>> to eventlet monkey patching: urllib3. This warning can usually be ignored
>> if the caller is only importing and not executing nova code.
>> +---------------------------------------------------------------------+
>> | Upgrade Check Results |
>> +---------------------------------------------------------------------+
>> | Check: Cells v2 |
>> | Result: Success |
>> | Details: None |
>> +---------------------------------------------------------------------+
>> | Check: Placement API |
>> | Result: Success |
>> | Details: None |
>> +---------------------------------------------------------------------+
>> | Check: Cinder API |
>> | Result: Success |
>> | Details: None |
>> +---------------------------------------------------------------------+
>> | Check: Policy File JSON to YAML Migration |
>> | Result: Success |
>> | Details: None |
>> +---------------------------------------------------------------------+
>> | Check: Older than N-1 computes |
>> | Result: Success |
>> | Details: None |
>> +---------------------------------------------------------------------+
>> | Check: hw_machine_type unset |
>> | Result: Success |
>> | Details: None |
>> +---------------------------------------------------------------------+
>> | Check: Service User Token Configuration |
>> | Result: Failure |
>> | Details: Service user token configuration is required for all Nova |
>> | services. For more details see the following: https://docs |
>> | .openstack.org/latest/nova/admin/configuration/service- |
>> | user-token.html |
>> +---------------------------------------------------------------------+
>>
>> Service user token reference to the following doc [1] . Do I need to
>> configure token users in order to upgrade nova?
>>
>> [1]
>> https://docs.openstack.org/nova/latest/admin/configuration/service-user-token.html
>>
>>
>>
>
> --
> Regards,
> Maksim Malchuk
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From christian.rohmann at inovex.de  Wed Jun 21 13:02:56 2023
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Wed, 21 Jun 2023 15:02:56 +0200
Subject: [ceilometer][cinder] How to properly set up Block Storage meters /
 cinder-volume-usage-audit
Message-ID: 

Hello OpenStack-Discuss,

when setting up telemetry (ceilometer, gnocchi, aodh) I followed the
documentation at
https://docs.openstack.org/ceilometer/latest/install/cinder/install-cinder-ubuntu.html
to set up sending metrics for cinder to ceilometer.

1) The instruction to run cinder-volume-usage-audit as a timer / cron every
5 minutes

> */5 * * * * /path/to/cinder-volume-usage-audit --send_actions

has me confused. This would cause the usage data for the last full month to
be sent to the notifications queue every 5 minutes. There is no progress
tracker or pointer limiting this to only send data that has not yet been
sent.
Looking at how openstack-ansible configures this
(https://github.com/openstack/openstack-ansible-os_cinder/blob/1af3003e163e09f917dd124ae874f1bea6fe2c6b/tasks/main.yml#L147)
it seems they were aware that running this without bounds and every 5
minutes is wrong. I also opened a bug
(https://bugs.launchpad.net/ceilometer/+bug/2024475) about this.

2) Extending on 1), I believe setting

> volume_usage_audit_period

in the cinder configuration to e.g. month or hour will already set the
interval for which the data is emitted. Then the important thing is to only
run cinder-volume-usage-audit at the same frequency.
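To make 2) concrete, this is the kind of aligned setup I mean (the period
and path are only examples; cinder-volume-usage-audit also appears to
accept explicit --start_time / --end_time arguments to bound the reported
window):

# cinder.conf
[DEFAULT]
volume_usage_audit_period = hour

# cron: run once per audit period (here hourly), shortly after the period
# closes, instead of every 5 minutes:
10 * * * * /path/to/cinder-volume-usage-audit --send_actions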
3) Looking at the "--send_actions" option, I am wondering if the
"create/delete" actions for volumes, snapshots and backups are not sent by
cinder anyway, and why they would need to be sent again, in batch, by
cinder-volume-usage-audit? I am referring to

> [oslo_messaging_notifications]
> driver = messagingv2

being enabled.
This might also be the reasoning behind this bug,
https://bugs.launchpad.net/openstack-ansible/+bug/1968734, asking for this
parameter to be put behind a config switch to avoid duplicate actions being
sent.

4) Thinking about the whole setup, more questions arise:

* What happens when I run this more than once in the same interval? What
will ceilometer do with the (redundant) data?
* I am also wondering how well this single piece of software scales and how
it could be split up further.
* Is there no way to implement a pointer to allow running this command more
often and only send "new data"? Just like a timestamp up to which data was
already sent.
* What about any locking / coordination to enable multiple instances?

Regards

Christian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From geert.geurts at linuxfabrik.ch  Wed Jun 21 07:09:37 2023
From: geert.geurts at linuxfabrik.ch (Geert Geurts)
Date: Wed, 21 Jun 2023 09:09:37 +0200
Subject: versioning hidden container accounting
Message-ID: <8bcbf2b769fa9fb8e46b9e269e6d63fcd98870db.camel@linuxfabrik.ch>

Hello,

I've implemented the newer type of versioning (x-versions-enabled: True)
for a container. The documentation
(https://docs.openstack.org/swift/latest/middleware.html#object-versioning)
states that actual data is written to a hidden container.
Now my questions are: would it be possible to set the account for the data
of the current ("is_latest": true) objects to a different account? Would it
be possible to set headers on the event that 'current' changes? Can I
see/change the properties of this "hidden" container?

Thanks a lot for any help/pointers!

Best regards,
Geert

From ulrich.schwickerath at cern.ch  Wed Jun 21 14:10:26 2023
From: ulrich.schwickerath at cern.ch (Ulrich Schwickerath)
Date: Wed, 21 Jun 2023 16:10:26 +0200
Subject: Re: Experience with VGPUs
In-Reply-To: <2DD18791-4BFD-4FF0-AAAC-77D8C18FB138@me.com>
References: <2DD18791-4BFD-4FF0-AAAC-77D8C18FB138@me.com>
Message-ID: 

Hi, all,

Sylvain explained quite well how to do it technically. We have a PoC
running; however, we still have some stability issues, as mentioned at the
summit. We're running the NVIDIA virtualisation drivers on the hypervisors
and the guests, which requires a license from NVIDIA. In our configuration
we are still quite limited in the sense that we have to configure all cards
in the same hypervisor in the same way, that is, with the same MIG
partitioning. Also, it is not possible to attach more than one device to a
single VM.

As mentioned in the presentation, we are a bit behind with Nova, and in the
process of fixing this as we speak. Because of that we had to do a couple
of backports in Nova to make it work, which we hope to be able to get rid
of through the ongoing upgrades.

Let me see if I can make the slides available here.

Cheers, Ulrich

On 20/06/2023 19:07, Oliver Weinmann wrote:
> Hi everyone,
>
> Jumping into this topic again. Unfortunately I haven?t had time yet to
> test Nvidia VGPU in OpenStack but in VMware Vsphere. What our users
> complain most about is the inflexibility since you have to use the
> same profile on all vms that use the gpu. One user mentioned to try
> SLURM.
I know there is no official OpenStack project for SLURM but I > wonder if anyone else tried this approach? If I understood correctly > this would also not require any Nvidia subscription since you > passthrough the GPU to a single instance and you don?t use VGPU nor MIG. > > Cheers, > Oliver > > Von meinem iPhone gesendet > >> Am 20.06.2023 um 17:34 schrieb Sylvain Bauza : >> >> ? >> >> >> Le?mar. 20 juin 2023 ??16:31, Mahendra Paipuri >> a ?crit?: >> >> Thanks Sylvain for the pointers. >> >> One of the questions we have is: can we create MIG profiles on >> the host and then attach each one or more profile(s) to VMs? This >> bug [1] reports that once we attach one profile to a VM, rest of >> MIG profiles become unavailable. From what you have said about >> using SR-IOV and VFs, I guess this should be possible. >> >> >> Correct, what you need is to create first the VFs using sriov-manage >> and then you can create the MIG instances. >> Once you create the MIG instances using the profiles you want, you >> will see that the related available_instances for the nvidia mdev >> type (by looking at sysfs) will say that you can have a single vGPU >> for this profile. >> Then, you can use that mdev type with Nova using nova.conf. >> >> That being said, while this above is simple, the below talk was >> saying more about how to correctly use the GPU by the host so please >> wait :-) >> >> I think you are talking about "vGPUs with OpenStack Nova" talk on >> OpenInfra stage. I will look into it once the videos will be online. >> >> >> Indeed. >> -S >> >> [1] https://bugs.launchpad.net/nova/+bug/2008883 >> >> Thanks >> >> Regards >> >> Mahendra >> >> On 20/06/2023 15:47, Sylvain Bauza wrote: >>> >>> >>> Le?mar. 20 juin 2023 ??15:12, PAIPURI Mahendra >>> a ?crit?: >>> >>> Hello Ulrich, >>> >>> >>> I am relaunching this discussion as I noticed that you gave >>> a talk about this topic?at OpenInfra Summit in Vancouver. Is >>> it possible to share the presentation here? I hope the talks >>> will be uploaded soon in YouTube. >>> >>> >>> We are mainly interested in using MIG instances in Openstack >>> cloud and I could not really find a lot of information?by >>> googling. If you could share your experiences, that would be >>> great. >>> >>> >>> >>> Due to scheduling conflicts, I wasn't able to attend Ulrich's >>> session but his feedback will be greatly listened to by me. >>> >>> FWIW, there was also a short session about how to enable MIG and >>> play with Nova at the OpenInfra stage (and that one I was able >>> to attend it), and it was quite seamless. What exact information >>> are you looking for ? >>> The idea with MIG is that you need to create SRIOV VFs above the >>> MIG instances using sriov-manage script provided by nvidia so >>> that the mediated devices will use those VFs as the base PCI >>> devices to be used for Nova. >>> >>> Cheers. >>> >>> >>> Regards >>> >>> Mahendra >>> >>> ------------------------------------------------------------------------ >>> *De :* Ulrich Schwickerath >>> *Envoy? :* lundi 16 janvier 2023 11:38:08 >>> *? :* openstack-discuss at lists.openstack.org >>> *Objet :* Re: ??: Experience with VGPUs >>> >>> Hi, all, >>> >>> just to add to the discussion, at CERN we have recently >>> deployed a bunch of A100 GPUs in PCI passthrough mode, and >>> are now looking into improving their usage by using MIG. 
>>> From the NOVA point of view things seem to work OK, we can >>> schedule VMs requesting a VGPU, the client starts up and >>> gets a license token from our NVIDIA license server >>> (distributing license keys is our private cloud is >>> relatively easy in our case). It's a PoC only for the time >>> being, and we're not ready to put that forward as we're >>> facing issues with CUDA on the client (it fails immediately >>> in memory operations with 'not supported', still >>> investigating why this happens). >>> >>> Once we get that working it would be nice to be able to have >>> a more fine grained scheduling so that people can ask for >>> MIG devices of different size. The other challenge is how to >>> set limits on GPU resources. Once the above issues have been >>> sorted out we may want to look into cyborg as well thus we >>> are quite interested in first experiences with this. >>> >>> Kind regards, >>> >>> Ulrich >>> >>> On 13.01.23 21:06, Dmitriy Rabotyagov wrote: >>>> To have that said, deb/rpm packages they are providing >>>> doesn't help much, as: >>>> * There is no repo for them, so you need to download them >>>> manually from enterprise portal >>>> * They can't be upgraded anyway, as driver version is part >>>> of the package name. And each package conflicts with any >>>> another one. So you need to explicitly remove old package >>>> and only then install new one. And yes, you must stop all >>>> VMs before upgrading driver and no, you can't live migrate >>>> GPU mdev devices due to that now being implemented in qemu. >>>> So deb/rpm/generic driver doesn't matter at the end tbh. >>>> >>>> >>>> ??, 13 ???. 2023 ?., 20:56 Cedric : >>>> >>>> >>>> Ended up with the very same conclusions than Dimitry >>>> regarding the use of Nvidia Vgrid for the VGPU use case >>>> with Nova, it works pretty well but: >>>> >>>> - respecting the licensing model as operationnal >>>> constraints, note that guests need to reach a license >>>> server in order to get a token (could be via the Nvidia >>>> SaaS service or on-prem) >>>> - drivers for both guest and hypervisor are not easy to >>>> implement and maintain on large scale. A year ago, >>>> hypervisors drivers were not packaged to Debian/Ubuntu, >>>> but builded though a bash script, thus requiering >>>> additional automatisation work and careful attention >>>> regarding kernel update/reboot of Nova hypervisors. >>>> >>>> Cheers >>>> >>>> >>>> On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov >>>> wrote: >>>> > >>>> > You are saying that, like Nvidia GRID drivers are >>>> open-sourced while >>>> > in fact they're super far from being that. In order >>>> to download >>>> > drivers not only for hypervisors, but also for guest >>>> VMs you need to >>>> > have an account in their Enterprise Portal. It took >>>> me roughly 6 weeks >>>> > of discussions with hardware vendors and Nvidia >>>> support to get a >>>> > proper account there. And that happened only after >>>> applying for their >>>> > Partner Network (NPN). >>>> > That still doesn't solve the issue of how to provide >>>> drivers to >>>> > guests, except pre-build a series of images with >>>> these drivers >>>> > pre-installed (we ended up with making a DIB element >>>> for that [1]). >>>> > Not saying about the need to distribute license >>>> tokens for guests and >>>> > the whole mess with compatibility between hypervisor >>>> and guest drivers >>>> > (as guest driver can't be newer then host one, and >>>> HVs can't be too >>>> > new either). 
>>>> > >>>> > It's not that I'm protecting AMD, but just saying >>>> that Nvidia is not >>>> > that straightforward either, and at least on paper >>>> AMD vGPUs look >>>> > easier both for operators and end-users. >>>> > >>>> > [1] >>>> https://github.com/citynetwork/dib-elements/tree/main/nvgrid >>>> > >>>> > > >>>> > > As for AMD cards, AMD stated that some of their MI >>>> series card supports SR-IOV for vGPUs. However, those >>>> drivers are never open source or provided closed source >>>> to public, only large cloud providers are able to get >>>> them. So I don't really recommend getting AMD cards for >>>> vGPU unless you are able to get support from them. >>>> > > >>>> > >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From knikolla at bu.edu Wed Jun 21 14:26:39 2023 From: knikolla at bu.edu (Nikolla, Kristi) Date: Wed, 21 Jun 2023 14:26:39 +0000 Subject: [keystone] Updates to core reviewers Message-ID: <3403DBC1-63E6-41E1-8D8F-EDE182EB691A@bu.edu> Hi all, Some updates to the core reviewers in Keystone repos. Dave Wilde (d34dh0r53) has been Keystone PTL for the Antelope and Bobcat cycle and has done an amazing work. Dave is now a core reviewer in Keystone. Artem Goncharov (gtema) is the PTL of OpenStack SDK, he is willing to help out with the Keystoneauth library, that being a core part of the SDK and CLI. Thank you Artem, you are now a core reviewer of Keystoneauth. Lance Bragstad (lbragstad) has been inactive for a bit over a year now and I believe doesn't work on OpenStack anymore. Due to that I am removing as a core reviewer from Keystone. Lance has been a contributor to Keystone and OpenStack for a really really long time and was an essential part of the Keystone team, serving as its PTL too. Best, Kristi Nikolla From dwilde at redhat.com Wed Jun 21 14:35:52 2023 From: dwilde at redhat.com (Dave Wilde) Date: Wed, 21 Jun 2023 09:35:52 -0500 Subject: [keystone] Updates to core reviewers In-Reply-To: <3403DBC1-63E6-41E1-8D8F-EDE182EB691A@bu.edu> References: <3403DBC1-63E6-41E1-8D8F-EDE182EB691A@bu.edu> Message-ID: <1634869b-ec25-40a1-b1c1-ffd6e668eff0@Spark> Thank you Kristi! I'd like to echo the huge thanks for Lance and also welcome Artem to the core team for keystoneauth.??Thank you for offering to help with that project! And as a shameless plug for the Keystone Reviewathons, Artem if you (or anyone else) are interested in joining the reviewathons we have them every Friday at 14:00 UTC on Google Meet.??I can send out calendar invites to anyone interested in attending. /Dave On Jun 21, 2023 at 9:30 AM -0500, Nikolla, Kristi , wrote: > Hi all, > > Some updates to the core reviewers in Keystone repos. > > Dave Wilde (d34dh0r53) has been Keystone PTL for the Antelope and Bobcat cycle and has done an amazing work. Dave is now a core reviewer in Keystone. > > Artem Goncharov (gtema) is the PTL of OpenStack SDK, he is willing to help out with the Keystoneauth library, that being a core part of the SDK and CLI. Thank you Artem, you are now a core reviewer of Keystoneauth. > > Lance Bragstad (lbragstad) has been inactive for a bit over a year now and I believe doesn't work on OpenStack anymore. Due to that I am removing as a core reviewer from Keystone. Lance has been a contributor to Keystone and OpenStack for a really really long time and was an essential part of the Keystone team, serving as its PTL too. > > Best, > Kristi Nikolla > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maksim.malchuk at gmail.com Wed Jun 21 14:47:50 2023 From: maksim.malchuk at gmail.com (Maksim Malchuk) Date: Wed, 21 Jun 2023 17:47:50 +0300 Subject: [kolla-ansible][nova] nova upgrade failed from zed to antelope In-Reply-To: References: Message-ID: Seems strange. If you have the correct latest images, and can reproduce the issue, then create the bug report on launchpad. On Wed, Jun 21, 2023 at 4:02?PM Satish Patel wrote: > Hi Maksim, > > This is all I have in my config/ I don't have any override. > > (venv-kolla) root at kolla-infra-1:~# ls -l /etc/kolla/config/ > total 8 > -rw-r--r-- 1 root root 187 Apr 30 04:11 global.conf > drwxr-xr-x 2 root root 4096 May 3 01:38 neutron > > > (venv-kolla) root at kolla-infra-1:~# cat /etc/kolla/config/global.conf > [oslo_messaging_rabbit] > kombu_reconnect_delay=0.5 > rabbit_transient_queues_ttl=60 > > > > On Wed, Jun 21, 2023 at 4:15?AM Maksim Malchuk > wrote: > >> Hi Satish, >> >> It very strange because the fix for service token for the Nova was merged >> a month ago ( >> https://review.opendev.org/q/I2189dafca070accfd8efcd4b8cc4221c6decdc9f) >> Maybe you have custom configuration which overrides nova.conf ? >> >> On Wed, Jun 21, 2023 at 7:16?AM Satish Patel >> wrote: >> >>> Folks, >>> >>> I'am upgrading zed to antelope using kolla-ansible and encount following >>> error >>> >>> TASK [nova : Upgrade status check result] >>> ************************************************************************************************************************************************************ >>> fatal: [kolla-infra-1]: FAILED! => {"changed": false, "msg": ["There was >>> an upgrade status check failure!", "See the detail at >>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks >>> "]} >>> fatal: [kolla-infra-2]: FAILED! => {"changed": false, "msg": ["There was >>> an upgrade status check failure!", "See the detail at >>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks >>> "]} >>> fatal: [kolla-infra-3]: FAILED! => {"changed": false, "msg": ["There was >>> an upgrade status check failure!", "See the detail at >>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks >>> "]} >>> >>> >>> After running upgrade check command manually on nova-api container got >>> following error >>> >>> (nova-api)[root at kolla-infra-2 /]# nova-status upgrade check >>> Modules with known eventlet monkey patching issues were imported prior >>> to eventlet monkey patching: urllib3. This warning can usually be ignored >>> if the caller is only importing and not executing nova code. 
>>> +---------------------------------------------------------------------+ >>> | Upgrade Check Results | >>> +---------------------------------------------------------------------+ >>> | Check: Cells v2 | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: Placement API | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: Cinder API | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: Policy File JSON to YAML Migration | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: Older than N-1 computes | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: hw_machine_type unset | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: Service User Token Configuration | >>> | Result: Failure | >>> | Details: Service user token configuration is required for all Nova | >>> | services. For more details see the following: https://docs | >>> | .openstack.org/latest/nova/admin/configuration/service- | >>> | user-token.html | >>> +---------------------------------------------------------------------+ >>> >>> Service user token reference to the following doc [1] . Do I need to >>> configure token users in order to upgrade nova? >>> >>> [1] >>> https://docs.openstack.org/nova/latest/admin/configuration/service-user-token.html >>> >>> >>> >>> >> >> -- >> Regards, >> Maksim Malchuk >> >> -- Regards, Maksim Malchuk -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Wed Jun 21 14:48:08 2023 From: satish.txt at gmail.com (Satish Patel) Date: Wed, 21 Jun 2023 10:48:08 -0400 Subject: [kolla-ansible][nova] nova upgrade failed from zed to antelope In-Reply-To: References: Message-ID: Just to close this loop, after adding the following in /etc/kolla/nova-api/nova.conf fixed my issue. But still need to understand what went wrong in upgrade path [service_user] send_service_user_token = true auth_url = http://10.30.50.10:5000 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = nova password = AAABBBCCCDDDEEE cafile = region_name = RegionOne valid_interfaces = internal On Wed, Jun 21, 2023 at 9:02?AM Satish Patel wrote: > Hi Maksim, > > This is all I have in my config/ I don't have any override. > > (venv-kolla) root at kolla-infra-1:~# ls -l /etc/kolla/config/ > total 8 > -rw-r--r-- 1 root root 187 Apr 30 04:11 global.conf > drwxr-xr-x 2 root root 4096 May 3 01:38 neutron > > > (venv-kolla) root at kolla-infra-1:~# cat /etc/kolla/config/global.conf > [oslo_messaging_rabbit] > kombu_reconnect_delay=0.5 > rabbit_transient_queues_ttl=60 > > > > On Wed, Jun 21, 2023 at 4:15?AM Maksim Malchuk > wrote: > >> Hi Satish, >> >> It very strange because the fix for service token for the Nova was merged >> a month ago ( >> https://review.opendev.org/q/I2189dafca070accfd8efcd4b8cc4221c6decdc9f) >> Maybe you have custom configuration which overrides nova.conf ? 
>> >> On Wed, Jun 21, 2023 at 7:16?AM Satish Patel >> wrote: >> >>> Folks, >>> >>> I'am upgrading zed to antelope using kolla-ansible and encount following >>> error >>> >>> TASK [nova : Upgrade status check result] >>> ************************************************************************************************************************************************************ >>> fatal: [kolla-infra-1]: FAILED! => {"changed": false, "msg": ["There was >>> an upgrade status check failure!", "See the detail at >>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks >>> "]} >>> fatal: [kolla-infra-2]: FAILED! => {"changed": false, "msg": ["There was >>> an upgrade status check failure!", "See the detail at >>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks >>> "]} >>> fatal: [kolla-infra-3]: FAILED! => {"changed": false, "msg": ["There was >>> an upgrade status check failure!", "See the detail at >>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks >>> "]} >>> >>> >>> After running upgrade check command manually on nova-api container got >>> following error >>> >>> (nova-api)[root at kolla-infra-2 /]# nova-status upgrade check >>> Modules with known eventlet monkey patching issues were imported prior >>> to eventlet monkey patching: urllib3. This warning can usually be ignored >>> if the caller is only importing and not executing nova code. >>> +---------------------------------------------------------------------+ >>> | Upgrade Check Results | >>> +---------------------------------------------------------------------+ >>> | Check: Cells v2 | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: Placement API | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: Cinder API | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: Policy File JSON to YAML Migration | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: Older than N-1 computes | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: hw_machine_type unset | >>> | Result: Success | >>> | Details: None | >>> +---------------------------------------------------------------------+ >>> | Check: Service User Token Configuration | >>> | Result: Failure | >>> | Details: Service user token configuration is required for all Nova | >>> | services. For more details see the following: https://docs | >>> | .openstack.org/latest/nova/admin/configuration/service- | >>> | user-token.html | >>> +---------------------------------------------------------------------+ >>> >>> Service user token reference to the following doc [1] . Do I need to >>> configure token users in order to upgrade nova? >>> >>> [1] >>> https://docs.openstack.org/nova/latest/admin/configuration/service-user-token.html >>> >>> >>> >>> >> >> -- >> Regards, >> Maksim Malchuk >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ulrich.schwickerath at cern.ch Wed Jun 21 15:30:40 2023 From: ulrich.schwickerath at cern.ch (Ulrich Schwickerath) Date: Wed, 21 Jun 2023 17:30:40 +0200 Subject: Experience with VGPUs In-Reply-To: References: <2DD18791-4BFD-4FF0-AAAC-77D8C18FB138@me.com> Message-ID: <045f750d-223b-5c90-2097-0d67d0352faf@cern.ch> Hi, again, here's a link to my slides: https://cernbox.cern.ch/s/v3YCyJjrZZv55H2 Let me know if it works. Cheers, Ulrich On 21/06/2023 16:10, Ulrich Schwickerath wrote: > > Hi, all, > > Sylvain explained quite well how to do it technically. We have a PoC > running, however, still have some stability issues, as mentioned on > the summit. We're running the NVIDIA virtualisation drivers on the > hypervisors and the guests, which requires a license from NVIDIA. In > our configuration we are still quite limited in the sense that we have > to configure all cards in the same hypervisor in the same way, that is > the same MIG partitioning. Also, it is not possible to attach more > than one device to a single VM. > > As mentioned in the presentation we are a bit behind with Nova, and in > the process of fixing this as we speak. Because of that we had to do a > couple of back ports in Nova to make it work, which we hope to be able > to get rid of by the ongoing upgrades. > > Let me? see if I can make the slides available here. > > Cheers, Ulrich > > On 20/06/2023 19:07, Oliver Weinmann wrote: >> Hi everyone, >> >> Jumping into this topic again. Unfortunately I haven?t had time yet >> to test Nvidia VGPU in OpenStack but in VMware Vsphere. What our >> users complain most about is the inflexibility since you have to use >> the same profile on all vms that use the gpu. One user mentioned to >> try SLURM. I know there is no official OpenStack project for SLURM >> but I wonder if anyone else tried this approach? If I understood >> correctly this would also not require any Nvidia subscription since >> you passthrough the GPU to a single instance and you don?t use VGPU >> nor MIG. >> >> Cheers, >> Oliver >> >> Von meinem iPhone gesendet >> >>> Am 20.06.2023 um 17:34 schrieb Sylvain Bauza : >>> >>> ? >>> >>> >>> Le?mar. 20 juin 2023 ??16:31, Mahendra Paipuri >>> a ?crit?: >>> >>> Thanks Sylvain for the pointers. >>> >>> One of the questions we have is: can we create MIG profiles on >>> the host and then attach each one or more profile(s) to VMs? >>> This bug [1] reports that once we attach one profile to a VM, >>> rest of MIG profiles become unavailable. From what you have said >>> about using SR-IOV and VFs, I guess this should be possible. >>> >>> >>> Correct, what you need is to create first the VFs using sriov-manage >>> and then you can create the MIG instances. >>> Once you create the MIG instances using the profiles you want, you >>> will see that the related available_instances for the nvidia mdev >>> type (by looking at sysfs) will say that you can have a single vGPU >>> for this profile. >>> Then, you can use that mdev type with Nova using nova.conf. >>> >>> That being said, while this above is simple, the below talk was >>> saying more about how to correctly use the GPU by the host so please >>> wait :-) >>> >>> I think you are talking about "vGPUs with OpenStack Nova" talk >>> on OpenInfra stage. I will look into it once the videos will be >>> online. >>> >>> >>> Indeed. >>> -S >>> >>> [1] https://bugs.launchpad.net/nova/+bug/2008883 >>> >>> Thanks >>> >>> Regards >>> >>> Mahendra >>> >>> On 20/06/2023 15:47, Sylvain Bauza wrote: >>>> >>>> >>>> Le?mar. 
20 juin 2023 ??15:12, PAIPURI Mahendra >>>> a ?crit?: >>>> >>>> Hello Ulrich, >>>> >>>> >>>> I am relaunching this discussion as I noticed that you gave >>>> a talk about this topic?at OpenInfra Summit in Vancouver. >>>> Is it possible to share the presentation here? I hope the >>>> talks will be uploaded soon in YouTube. >>>> >>>> >>>> We are mainly interested in using MIG instances in >>>> Openstack cloud and I could not really find a lot of >>>> information?by googling. If you could share your >>>> experiences, that would be great. >>>> >>>> >>>> >>>> Due to scheduling conflicts, I wasn't able to attend Ulrich's >>>> session but his feedback will be greatly listened to by me. >>>> >>>> FWIW, there was also a short session about how to enable MIG >>>> and play with Nova at the OpenInfra stage (and that one I was >>>> able to attend it), and it was quite seamless. What exact >>>> information are you looking for ? >>>> The idea with MIG is that you need to create SRIOV VFs above >>>> the MIG instances using sriov-manage script provided by nvidia >>>> so that the mediated devices will use those VFs as the base PCI >>>> devices to be used for Nova. >>>> >>>> Cheers. >>>> >>>> >>>> Regards >>>> >>>> Mahendra >>>> >>>> ------------------------------------------------------------------------ >>>> *De :* Ulrich Schwickerath >>>> *Envoy? :* lundi 16 janvier 2023 11:38:08 >>>> *? :* openstack-discuss at lists.openstack.org >>>> *Objet :* Re: ??: Experience with VGPUs >>>> >>>> Hi, all, >>>> >>>> just to add to the discussion, at CERN we have recently >>>> deployed a bunch of A100 GPUs in PCI passthrough mode, and >>>> are now looking into improving their usage by using MIG. >>>> From the NOVA point of view things seem to work OK, we can >>>> schedule VMs requesting a VGPU, the client starts up and >>>> gets a license token from our NVIDIA license server >>>> (distributing license keys is our private cloud is >>>> relatively easy in our case). It's a PoC only for the time >>>> being, and we're not ready to put that forward as we're >>>> facing issues with CUDA on the client (it fails immediately >>>> in memory operations with 'not supported', still >>>> investigating why this happens). >>>> >>>> Once we get that working it would be nice to be able to >>>> have a more fine grained scheduling so that people can ask >>>> for MIG devices of different size. The other challenge is >>>> how to set limits on GPU resources. Once the above issues >>>> have been sorted out we may want to look into cyborg as >>>> well thus we are quite interested in first experiences with >>>> this. >>>> >>>> Kind regards, >>>> >>>> Ulrich >>>> >>>> On 13.01.23 21:06, Dmitriy Rabotyagov wrote: >>>>> To have that said, deb/rpm packages they are providing >>>>> doesn't help much, as: >>>>> * There is no repo for them, so you need to download them >>>>> manually from enterprise portal >>>>> * They can't be upgraded anyway, as driver version is part >>>>> of the package name. And each package conflicts with any >>>>> another one. So you need to explicitly remove old package >>>>> and only then install new one. And yes, you must stop all >>>>> VMs before upgrading driver and no, you can't live migrate >>>>> GPU mdev devices due to that now being implemented in >>>>> qemu. So deb/rpm/generic driver doesn't matter at the end tbh. >>>>> >>>>> >>>>> ??, 13 ???. 
2023 ?., 20:56 Cedric : >>>>> >>>>> >>>>> Ended up with the very same conclusions than Dimitry >>>>> regarding the use of Nvidia Vgrid for the VGPU use >>>>> case with Nova, it works pretty well but: >>>>> >>>>> - respecting the licensing model as operationnal >>>>> constraints, note that guests need to reach a license >>>>> server in order to get a token (could be via the >>>>> Nvidia SaaS service or on-prem) >>>>> - drivers for both guest and hypervisor are not easy >>>>> to implement and maintain on large scale. A year ago, >>>>> hypervisors drivers were not packaged to >>>>> Debian/Ubuntu, but builded though a bash script, thus >>>>> requiering additional automatisation work and careful >>>>> attention regarding kernel update/reboot of Nova >>>>> hypervisors. >>>>> >>>>> Cheers >>>>> >>>>> >>>>> On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov >>>>> wrote: >>>>> > >>>>> > You are saying that, like Nvidia GRID drivers are >>>>> open-sourced while >>>>> > in fact they're super far from being that. In order >>>>> to download >>>>> > drivers not only for hypervisors, but also for guest >>>>> VMs you need to >>>>> > have an account in their Enterprise Portal. It took >>>>> me roughly 6 weeks >>>>> > of discussions with hardware vendors and Nvidia >>>>> support to get a >>>>> > proper account there. And that happened only after >>>>> applying for their >>>>> > Partner Network (NPN). >>>>> > That still doesn't solve the issue of how to provide >>>>> drivers to >>>>> > guests, except pre-build a series of images with >>>>> these drivers >>>>> > pre-installed (we ended up with making a DIB element >>>>> for that [1]). >>>>> > Not saying about the need to distribute license >>>>> tokens for guests and >>>>> > the whole mess with compatibility between hypervisor >>>>> and guest drivers >>>>> > (as guest driver can't be newer then host one, and >>>>> HVs can't be too >>>>> > new either). >>>>> > >>>>> > It's not that I'm protecting AMD, but just saying >>>>> that Nvidia is not >>>>> > that straightforward either, and at least on paper >>>>> AMD vGPUs look >>>>> > easier both for operators and end-users. >>>>> > >>>>> > [1] >>>>> https://github.com/citynetwork/dib-elements/tree/main/nvgrid >>>>> > >>>>> > > >>>>> > > As for AMD cards, AMD stated that some of their MI >>>>> series card supports SR-IOV for vGPUs. However, those >>>>> drivers are never open source or provided closed >>>>> source to public, only large cloud providers are able >>>>> to get them. So I don't really recommend getting AMD >>>>> cards for vGPU unless you are able to get support from >>>>> them. >>>>> > > >>>>> > >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Wed Jun 21 16:12:13 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 21 Jun 2023 18:12:13 +0200 Subject: Experience with VGPUs In-Reply-To: <045f750d-223b-5c90-2097-0d67d0352faf@cern.ch> References: <2DD18791-4BFD-4FF0-AAAC-77D8C18FB138@me.com> <045f750d-223b-5c90-2097-0d67d0352faf@cern.ch> Message-ID: I can recall in quite recent release notes in Nvidia drivers, that now they do allow attaching multiple vGPUs to a single VM, but I can recall Sylvain said that is not exactly as it sounds like and there're severe limitations to this advertised feature. Also I think in MIG mode it's possible to split GPU in a subset of supported (but different) flavors, though I have close to no idea how scheduling would be done in this case. 
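On the scheduling question: Nova's generic mdev support, added in Xena, can
cope with a heterogeneous MIG split if each MIG-backed mdev type is mapped
to its own Placement resource class. A rough sketch, assuming an A100 and
using hypothetical PCI addresses and mdev type names (check sysfs on your
own host for the real ones):

    # enable the SR-IOV VFs for the physical GPU, then carve heterogeneous
    # MIG instances, e.g. one 3g.20gb and two 1g.5gb (profile IDs 9 and 19
    # on an A100)
    /usr/lib/nvidia/sriov-manage -e 0000:41:00.0
    nvidia-smi mig -cgi 9,19,19 -C

    # each VF should now expose an mdev type matching one MIG instance,
    # with available_instances = 1
    ls /sys/class/mdev_bus/*/mdev_supported_types/*/available_instances

Then, in nova.conf on the compute node, each type gets its own custom
resource class:

    [devices]
    enabled_mdev_types = nvidia-699, nvidia-700

    [mdev_nvidia-699]
    device_addresses = 0000:41:00.4
    mdev_class = CUSTOM_VGPU_3G20GB

    [mdev_nvidia-700]
    device_addresses = 0000:41:00.5,0000:41:00.6
    mdev_class = CUSTOM_VGPU_1G5GB

Flavors can then request resources:CUSTOM_VGPU_3G20GB=1 or
resources:CUSTOM_VGPU_1G5GB=1, and the scheduler places instances through
Placement as usual.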
On Wed, Jun 21, 2023, 17:36 Ulrich Schwickerath wrote: > Hi, again, > > here's a link to my slides: > > https://cernbox.cern.ch/s/v3YCyJjrZZv55H2 > > Let me know if it works. > > Cheers, Ulrich > > > On 21/06/2023 16:10, Ulrich Schwickerath wrote: > > Hi, all, > > Sylvain explained quite well how to do it technically. We have a PoC > running, however, still have some stability issues, as mentioned on the > summit. We're running the NVIDIA virtualisation drivers on the hypervisors > and the guests, which requires a license from NVIDIA. In our configuration > we are still quite limited in the sense that we have to configure all cards > in the same hypervisor in the same way, that is the same MIG partitioning. > Also, it is not possible to attach more than one device to a single VM. > > As mentioned in the presentation we are a bit behind with Nova, and in the > process of fixing this as we speak. Because of that we had to do a couple > of back ports in Nova to make it work, which we hope to be able to get rid > of by the ongoing upgrades. > > Let me see if I can make the slides available here. > > Cheers, Ulrich > On 20/06/2023 19:07, Oliver Weinmann wrote: > > Hi everyone, > > Jumping into this topic again. Unfortunately I haven?t had time yet to > test Nvidia VGPU in OpenStack but in VMware Vsphere. What our users > complain most about is the inflexibility since you have to use the same > profile on all vms that use the gpu. One user mentioned to try SLURM. I > know there is no official OpenStack project for SLURM but I wonder if > anyone else tried this approach? If I understood correctly this would also > not require any Nvidia subscription since you passthrough the GPU to a > single instance and you don?t use VGPU nor MIG. > > Cheers, > Oliver > > Von meinem iPhone gesendet > > Am 20.06.2023 um 17:34 schrieb Sylvain Bauza > : > > ? > > > Le mar. 20 juin 2023 ? 16:31, Mahendra Paipuri > a ?crit : > >> Thanks Sylvain for the pointers. >> >> One of the questions we have is: can we create MIG profiles on the host >> and then attach each one or more profile(s) to VMs? This bug [1] reports >> that once we attach one profile to a VM, rest of MIG profiles become >> unavailable. From what you have said about using SR-IOV and VFs, I guess >> this should be possible. >> > > Correct, what you need is to create first the VFs using sriov-manage and > then you can create the MIG instances. > Once you create the MIG instances using the profiles you want, you will > see that the related available_instances for the nvidia mdev type (by > looking at sysfs) will say that you can have a single vGPU for this profile. > Then, you can use that mdev type with Nova using nova.conf. > > That being said, while this above is simple, the below talk was saying > more about how to correctly use the GPU by the host so please wait :-) > >> I think you are talking about "vGPUs with OpenStack Nova" talk on >> OpenInfra stage. I will look into it once the videos will be online. >> > > Indeed. > -S > >> [1] https://bugs.launchpad.net/nova/+bug/2008883 >> >> Thanks >> >> Regards >> >> Mahendra >> On 20/06/2023 15:47, Sylvain Bauza wrote: >> >> >> >> Le mar. 20 juin 2023 ? 15:12, PAIPURI Mahendra >> a ?crit : >> >>> Hello Ulrich, >>> >>> >>> I am relaunching this discussion as I noticed that you gave a talk about >>> this topic at OpenInfra Summit in Vancouver. Is it possible to share the >>> presentation here? I hope the talks will be uploaded soon in YouTube. 
>>> >>> >>> We are mainly interested in using MIG instances in Openstack cloud and I >>> could not really find a lot of information by googling. If you could share >>> your experiences, that would be great. >>> >>> >>> >> Due to scheduling conflicts, I wasn't able to attend Ulrich's session but >> his feedback will be greatly listened to by me. >> >> FWIW, there was also a short session about how to enable MIG and play >> with Nova at the OpenInfra stage (and that one I was able to attend it), >> and it was quite seamless. What exact information are you looking for ? >> The idea with MIG is that you need to create SRIOV VFs above the MIG >> instances using sriov-manage script provided by nvidia so that the mediated >> devices will use those VFs as the base PCI devices to be used for Nova. >> >> Cheers. >>> >>> >>> Regards >>> >>> Mahendra >>> ------------------------------ >>> *De :* Ulrich Schwickerath >>> *Envoy? :* lundi 16 janvier 2023 11:38:08 >>> *? :* openstack-discuss at lists.openstack.org >>> *Objet :* Re: ??: Experience with VGPUs >>> >>> >>> Hi, all, >>> >>> just to add to the discussion, at CERN we have recently deployed a bunch >>> of A100 GPUs in PCI passthrough mode, and are now looking into improving >>> their usage by using MIG. From the NOVA point of view things seem to work >>> OK, we can schedule VMs requesting a VGPU, the client starts up and gets a >>> license token from our NVIDIA license server (distributing license keys is >>> our private cloud is relatively easy in our case). It's a PoC only for the >>> time being, and we're not ready to put that forward as we're facing issues >>> with CUDA on the client (it fails immediately in memory operations with >>> 'not supported', still investigating why this happens). >>> >>> Once we get that working it would be nice to be able to have a more fine >>> grained scheduling so that people can ask for MIG devices of different >>> size. The other challenge is how to set limits on GPU resources. Once the >>> above issues have been sorted out we may want to look into cyborg as well >>> thus we are quite interested in first experiences with this. >>> >>> Kind regards, >>> >>> Ulrich >>> On 13.01.23 21:06, Dmitriy Rabotyagov wrote: >>> >>> To have that said, deb/rpm packages they are providing doesn't help >>> much, as: >>> * There is no repo for them, so you need to download them manually from >>> enterprise portal >>> * They can't be upgraded anyway, as driver version is part of the >>> package name. And each package conflicts with any another one. So you need >>> to explicitly remove old package and only then install new one. And yes, >>> you must stop all VMs before upgrading driver and no, you can't live >>> migrate GPU mdev devices due to that now being implemented in qemu. So >>> deb/rpm/generic driver doesn't matter at the end tbh. >>> >>> >>> ??, 13 ???. 2023 ?., 20:56 Cedric : >>> >>>> >>>> Ended up with the very same conclusions than Dimitry regarding the use >>>> of Nvidia Vgrid for the VGPU use case with Nova, it works pretty well but: >>>> >>>> - respecting the licensing model as operationnal constraints, note that >>>> guests need to reach a license server in order to get a token (could be via >>>> the Nvidia SaaS service or on-prem) >>>> - drivers for both guest and hypervisor are not easy to implement and >>>> maintain on large scale. 
A year ago, hypervisors drivers were not packaged
>>>> to Debian/Ubuntu, but builded though a bash script, thus requiering
>>>> additional automatisation work and careful attention regarding kernel
>>>> update/reboot of Nova hypervisors.
>>>>
>>>> Cheers
>>>>
>>>>
>>>> On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov <
>>>> noonedeadpunk at gmail.com> wrote:
>>>> >
>>>> > You are saying that, like Nvidia GRID drivers are open-sourced while
>>>> > in fact they're super far from being that. In order to download
>>>> > drivers not only for hypervisors, but also for guest VMs you need to
>>>> > have an account in their Enterprise Portal. It took me roughly 6 weeks
>>>> > of discussions with hardware vendors and Nvidia support to get a
>>>> > proper account there. And that happened only after applying for their
>>>> > Partner Network (NPN).
>>>> > That still doesn't solve the issue of how to provide drivers to
>>>> > guests, except pre-build a series of images with these drivers
>>>> > pre-installed (we ended up with making a DIB element for that [1]).
>>>> > Not saying about the need to distribute license tokens for guests and
>>>> > the whole mess with compatibility between hypervisor and guest drivers
>>>> > (as guest driver can't be newer then host one, and HVs can't be too
>>>> > new either).
>>>> >
>>>> > It's not that I'm protecting AMD, but just saying that Nvidia is not
>>>> > that straightforward either, and at least on paper AMD vGPUs look
>>>> > easier both for operators and end-users.
>>>> >
>>>> > [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid
>>>> >
>>>> > >
>>>> > > As for AMD cards, AMD stated that some of their MI series card
>>>> supports SR-IOV for vGPUs. However, those drivers are never open source or
>>>> provided closed source to public, only large cloud providers are able to
>>>> get them. So I don't really recommend getting AMD cards for vGPU unless you
>>>> are able to get support from them.
>>>> > >
>>>> >
>>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jay at gr-oss.io Wed Jun 21 16:25:01 2023
From: jay at gr-oss.io (Jay Faulkner)
Date: Wed, 21 Jun 2023 09:25:01 -0700
Subject: [ironic] Summit PTG Summary
Message-ID: 

Hey all,

As indicated before the summit, the Ironic PTG was dedicated to in-person
collaboration, hacking, and design chats, but was not intended for any
final decision making due to not all of our community being present.

Full, contemporaneous notes are at
https://etherpad.opendev.org/p/ironic-openinfra-2023 -- this is meant to be
a high level summary.

For the summit, there were several extremely well-attended Ironic talks.
Thank you to all of those who gave talks! In addition, I, and other members
of the Ironic community, were able to connect with many people using Ironic
quietly in production with a large amount of success. As always, the Ironic
community strongly encourages people with success stories to loudly
communicate about them :D.

Ironic did have a single forum session, where we met with several
operators, answering questions and in some places providing solutions to
people struggling with a problem. The full notes from the session,
primarily consisting of a census of Ironic installs, are accessible from
the above linked etherpad.

As for the PTG sessions, there were a few topics. These are extremely rough
outlines of what was discussed; but again, no specific decisions were made.
- networking-generic-switch
-- Several contributors, including Baptiste Jonglez and John Garbutt,
discussed options for scaling NGS further up in the future, including
enhancing it to support other protocols, such as VXLAN. Some of these
discussions have already moved to the list, and I encourage folks to engage
with Baptiste to make our network tooling scale even more.

- Future of Ironic
-- We spoke for a while about the Ironic vision document created in Rocky,
targeting approximately now:
https://docs.openstack.org/ironic/latest/contributor/vision.html -- we've
accomplished many of the items on the list, but what's next?
-- Possibilities brainstormed included:
--- Enhanced network interfaces that use SDN or DPU orchestration to
configure baremetal networks
--- more distinct support for composable hardware
--- expanding Ironic standalone use cases
--- getting more directly connected with communities like Metal3
integrating into Ironic
--- scaling down Ironic into a tool useful at smaller scale (a niche that a
tool like cobbler has a strong hold on today)
--- a terraform driver designed to call Ironic directly
-- I think we should update the vision document with some of these ideas so
we can use it as a measuring stick in 5-6 years, like we were able to use
the Rocky vision document this time.

Finally, we closed up the summit with an Ironic dinner, with 16 attendees
from various companies, use cases, and backgrounds with one thing in
common: we all need bare metal servers :). If you're wondering who the
faces of Ironic are, here are some of us:
https://twitter.com/jayofdoom/status/1669531671937384449 :).

I'll say, on a personal note, it was extremely nice to get to see many of
my old and new friends in the OpenInfra community face to face for the
first time in years. The absence of those unable to travel was felt deeply
as well, and I hope we'll be able to reconnect in person in the future.

Thanks,
Jay Faulkner
Ironic PTL
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com Wed Jun 21 16:30:05 2023
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 21 Jun 2023 09:30:05 -0700
Subject: [keystone] Updates to core reviewers
In-Reply-To: <1634869b-ec25-40a1-b1c1-ffd6e668eff0@Spark>
References: <3403DBC1-63E6-41E1-8D8F-EDE182EB691A@bu.edu>
 <1634869b-ec25-40a1-b1c1-ffd6e668eff0@Spark>
Message-ID: <188decaafdc.c132404f303490.5961001138156308626@ghanshyammann.com>

Thanks Kristi for doing the changes. Thanks Dave and Artem for helping with
keystone maintenance.

-gmann

 ---- On Wed, 21 Jun 2023 07:35:52 -0700 Dave Wilde wrote ---
 > Thank you Kristi! I'd like to echo the huge thanks to Lance and also
 > welcome Artem to the core team for keystoneauth. Thank you for offering
 > to help with that project!
 >
 > And as a shameless plug for the Keystone Reviewathons, Artem if you (or
 > anyone else) are interested in joining the reviewathons we have them
 > every Friday at 14:00 UTC on Google Meet. I can send out calendar
 > invites to anyone interested in attending.
 > /Dave
 > On Jun 21, 2023 at 9:30 AM -0500, Nikolla, Kristi knikolla at bu.edu, wrote:
 > Hi all,
 >
 > Some updates to the core reviewers in Keystone repos.
 >
 > Dave Wilde (d34dh0r53) has been Keystone PTL for the Antelope and Bobcat
 > cycles and has done amazing work. Dave is now a core reviewer in
 > Keystone.
 >
 > Artem Goncharov (gtema) is the PTL of OpenStack SDK; he is willing to
 > help out with the Keystoneauth library, that being a core part of the
 > SDK and CLI.
 > Thank you Artem, you are now a core reviewer of Keystoneauth.
 >
 > Lance Bragstad (lbragstad) has been inactive for a bit over a year now
 > and I believe doesn't work on OpenStack anymore. Due to that, I am
 > removing him as a core reviewer from Keystone. Lance has been a
 > contributor to Keystone and OpenStack for a really long time and was an
 > essential part of the Keystone team, serving as its PTL too.
 >
 > Best,
 > Kristi Nikolla

From gmann at ghanshyammann.com Wed Jun 21 18:11:56 2023
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 21 Jun 2023 11:11:56 -0700
Subject: [all][ops][rbac] RBAC discussion summary from Vancouver summit
Message-ID: <188df27eebd.126e7adff315950.7816177899197775871@ghanshyammann.com>

Hello Everyone,

Most of you know we had the RBAC discussion at the Vancouver summit on
Tuesday.
- https://etherpad.opendev.org/p/rbac-operator-feedback-vancouver2023

This forum session started with an update on the current progress of the
RBAC work, which you can find in the Etherpad
- https://etherpad.opendev.org/p/rbac-operator-feedback-vancouver2023#L46

After the progress updates, we collected feedback from operators. I am
writing the summary here; feel free to add anything I missed:

Admin Access:
===========
* Keep the current Admin (legacy admin, meaning admin in any project)
untouched.
* There is a need for second-level support along with the current admin,
and this can be done with the project manager.
* Public Cloud use case:
----------------------------
** The currently defined RBAC goal does not solve the public cloud use case
of having a domain admin manage a specific domain's users/projects/roles.
** If the domain admin can assign an admin role to any user under their
domain, then that user gets admin access across all the projects on the
service side (even if the projects are in different domains, as the service
side does not have domain boundaries).
** We do not have any better solution for now, but one idea is to have the
domain manager allow managing the users/projects/roles in their domain
except for 'Admin' role assignment. The current legacy admin (admin across
all domains) can handle the 'Admin' role assignment. In other words, the
domain manager behaves as a domain-level admin and the legacy admin as a
global admin. Feel free to propose other solutions if you have any.

Member Access:
============
* Project Member is a good fix and is actually useful for enforcing the
'member' role.

Reader Access:
============
* Project Reader is one of the most useful roles in this change.
* There is an ask for a global reader, which is essentially what the system
reader was in the past. This can be useful for performing an audit of the
complete cloud. One idea is to do this with a special role in Keystone. I
will write the spec for it, and we can talk about it there.

-gmann

From ihrachys at redhat.com Wed Jun 21 21:00:16 2023
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Wed, 21 Jun 2023 17:00:16 -0400
Subject: [neutron] - OVN heartbeat - short agent_down_time
In-Reply-To: 
References: 
Message-ID: 

On Mon, Jun 19, 2023 at 11:04 AM Roberto Bartzen Acosta <
roberto.acosta at luizalabs.com> wrote:

> Hello Neutron folks,
>
> We discussed in the Operators feedback session about OVN heartbeat and the
> use of "infinity" values for large-scale deployments, because we have a
> significant infrastructure impact when a short 'agent_down_time' is
> configured.
>

This is tangentially related, but note that using "infinity" values for
agent_down_time is unsafe:
https://bugzilla.redhat.com/show_bug.cgi?id=2215407 (depending on whether
your "infinity" value is larger than ~15 days, assuming 32-bit ints are
used on your platform).

>
> The merged patch [1] limited the maximum delay to 10 seconds. I understand
> the requirement to use random values to avoid load spikes, but why does
> this fix limit the heartbeat to 10 seconds? What is the goal of the
> agent_down_time parameter in this case? How will it work for someone who
> has hundreds of compute nodes / metadata agents?
>
> Regards,
> Roberto
>
> [1] - https://review.opendev.org/c/openstack/neutron/+/883687
>
> "This message is intended only for the addresses listed in the initial
> header. If you are not listed among the addresses in the header, please
> completely disregard the content of this message; copying, forwarding
> and/or carrying out the actions mentioned therein is immediately void and
> prohibited."
>
> "Although Magazine Luiza takes all reasonable precautions to ensure that
> no virus is present in this e-mail, the company cannot accept
> responsibility for any loss or damage caused by this e-mail or its
> attachments."
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From juliaashleykreger at gmail.com Wed Jun 21 23:31:55 2023
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Wed, 21 Jun 2023 16:31:55 -0700
Subject: [ironic] Summit PTG Summary
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 21, 2023 at 9:29 AM Jay Faulkner wrote:

> Hey all,
>
> As indicated before the summit, the Ironic PTG was dedicated to in-person
> collaboration, hacking, and design chats, but was not intended for any
> final decision making due to not all of our community being present.
>
> Full, contemporaneous notes are at
> https://etherpad.opendev.org/p/ironic-openinfra-2023 -- this is meant to
> be a high level summary.
>
> For the summit, there were several extremely well-attended Ironic talks.
> Thank you to all of those who gave talks! In addition, I, and other members
> of the Ironic community, were able to connect with many people using Ironic
> quietly in production with a large amount of success. As always, the Ironic
> community strongly encourages people with success stories to loudly
> communicate about them :D.
>
> Ironic did have a single forum session, where we met with several
> operators, answering questions and in some places providing solutions to
> people struggling with a problem. The full notes from the session,
> primarily consisting of a census of Ironic installs, are accessible from
> the above linked etherpad.
>
> As for the PTG sessions, there were a few topics. These are extremely
> rough outlines of what was discussed; but again, no specific decisions were
> made.
> - networking-generic-switch
> -- Several contributors, including Baptiste Jonglez and John Garbutt,
> discussed options for scaling NGS further up in the future, including
> > - Future of Ironic
> -- We spoke for a while about the Ironic vision document created in
> Rocky, targeting approximately now:
> https://docs.openstack.org/ironic/latest/contributor/vision.html -- we've
> accomplished many of the items on the list, but what's next?
>

I agree it is time to add a new entry for a new date to our vision
document.

> -- Possibilities brainstormed included:
> --- Enhanced network interfaces that use SDN or DPU orchestration to
> configure baremetal networks
>

I also had an interesting conversation about allowing NGS to be invoked
directly. We've sort of mused with the idea before, but I think maybe
someone hacking on it might be the next logical step. Also, there was a
number of different interests in the use of DPUs, from "hiding" LACP from
the workload to wanting to do VXLAN or geneve tunnel termination. Given I'm
leading work to try and get us into a place to better support things such
as this, I expect it will be a topic we'll be revisiting in the next few
months.

> --- more distinct support for composable hardware
> --- expanding Ironic standalone use cases
> --- getting more directly connected with communities like Metal3
> integrating into Ironic
> --- scaling down Ironic into a tool useful at smaller scale (that a tool
> like cobbler has a strong hold on today)
>

Would a one-shot deploy command be of value?

> --- terraform driver designed to call Ironic directly
>

Consider this a solid +1! I was *really* surprised by the number of
conversations I had where someone had a large BMaaS sort of cloud, and
where they were presently using Terraform to drive Nova to ask for nodes
from Ironic. Flavor sprawl also came up as an issue, because of the need
for fairly exact matching to represent the physical architectures required
for multi-machine workloads. I think it would be wise of us to try and push
that capability forward so operators could leverage Terraform with a whole
deployed cloud or an Ironic-only BMaaS cloud. At the same time, I would
encourage us to think about other integrations as well, and generally aim
to make it easier to tie into processes and tools where operators presently
take multiple steps.

> -- I think we should update the vision document with some of these ideas
> so we can use it as a measuring stick in 5-6 years, like we were able to
> use the Rocky vision document this time.
>
> Finally, we closed up the summit with an Ironic dinner, with 16 attendees
> from various companies, use cases, and backgrounds with one thing in
> common: we all need bare metal servers :). If you're wondering who the
> faces of Ironic are, here are some of us
> https://twitter.com/jayofdoom/status/1669531671937384449 :).
>
> I'll say, on a personal note, it was extremely nice to get to see many of
> my old and new friends in the OpenInfra community face to face for the
> first time in years. The absence of those unable to travel was felt deeply
> as well, and I hope we'll be able to reconnect in person in the future.
>
> Thanks,
> Jay Faulkner
> Ironic PTL
>
-------------- next part --------------
An HTML attachment was scrubbed...
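For context on what "calling Ironic directly" looks like today, a
standalone deploy is already only a couple of commands; the node name,
image URL and checksum below are placeholders, and the one-shot idea would
essentially collapse these into a single call:

    # assumes a standalone Ironic reachable without Keystone
    export OS_AUTH_TYPE=none
    export OS_ENDPOINT=http://ironic.example.com:6385

    openstack baremetal node set node-0 \
      --instance-info image_source=http://images.example.com/ubuntu.qcow2 \
      --instance-info image_checksum=<checksum of the image>
    openstack baremetal node deploy node-0

A Terraform provider speaking to the same REST API would let operators skip
the Nova and flavor layer entirely for pure BMaaS clouds, which matches the
use cases described above.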
URL: From gmann at ghanshyammann.com Thu Jun 22 03:23:32 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 21 Jun 2023 20:23:32 -0700 Subject: [nova][tempest] Hold your rechecks In-Reply-To: References: Message-ID: <188e120ef34.10e78b94b325482.4897050829082639566@ghanshyammann.com> ---- On Tue, 20 Jun 2023 08:33:53 -0700 Sylvain Bauza wrote --- > As far as I can tell, all nova-live-migration job runs are getting a FAILURE [1] due to [2] so please hold your rechecks. > Thanks to Rodolfo, we have a workaround for it https://review.opendev.org/c/openstack/tempest/+/886496 so please wait until this change is merged. This is merged, please recheck the tests for this failure. -gmann > Tempest cores, it would be nice if you can look at the above change quickly :) > Thanks,-Sylvain > [1] https://zuul.openstack.org/builds?job_name=nova-live-migration&skip=0[2] https://bugs.launchpad.net/nova/+bug/2024160 From alsotoes at gmail.com Thu Jun 22 04:05:56 2023 From: alsotoes at gmail.com (Alvaro Soto) Date: Wed, 21 Jun 2023 22:05:56 -0600 Subject: DRaaS question In-Reply-To: References: Message-ID: Hello, I'm unfamiliar with the 2 options you mentioned to build a comparison matrix. But take a look at the freezer project. https://docs.openstack.org/freezer/latest/ Cheers. --- Alvaro Soto. Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you. ---------------------------------------------------------- Great people talk about ideas, ordinary people talk about things, small people talk... about other people. On Wed, Jun 21, 2023, 5:33 AM KK CHN wrote: > List, > > Looking for solutions using OpenStack Similar to > > 1. VMWare VCDR( VMware Cloud Disaster Recovery) as a Pilot Light DR > Solution hassle free Failover and Failback. > > 2. AWS Cloud Replication Agent( Cloud Endurance ) Operational DR > solution in Cloud > > ( Both 1 and 2 have capabilities of acting as an Operational DR in Cloud > and serving both from AWS demonstrated as a PoC ). > > Any solution/directives to achieve a similar to better solution using > OpenStack techniques and components? > > Please enlighten with your ideas. > > Thank you > Krishane. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zakhar at gmail.com Thu Jun 22 04:48:20 2023 From: zakhar at gmail.com (Zakhar Kirpichenko) Date: Thu, 22 Jun 2023 07:48:20 +0300 Subject: DRaaS question In-Reply-To: References: Message-ID: Isn't freezer basically abandonware? /Z On Thu, 22 Jun 2023 at 07:10, Alvaro Soto wrote: > Hello, > I'm unfamiliar with the 2 options you mentioned to build a comparison > matrix. But take a look at the freezer project. > > https://docs.openstack.org/freezer/latest/ > > Cheers. > --- > Alvaro Soto. > > Note: My work hours may not be your work hours. Please do not feel the > need to respond during a time that is not convenient for you. > ---------------------------------------------------------- > Great people talk about ideas, > ordinary people talk about things, > small people talk... about other people. > > On Wed, Jun 21, 2023, 5:33 AM KK CHN wrote: > >> List, >> >> Looking for solutions using OpenStack Similar to >> >> 1. VMWare VCDR( VMware Cloud Disaster Recovery) as a Pilot Light DR >> Solution hassle free Failover and Failback. >> >> 2. 
AWS Cloud Replication Agent( Cloud Endurance ) Operational DR >> solution in Cloud >> >> ( Both 1 and 2 have capabilities of acting as an Operational DR in Cloud >> and serving both from AWS demonstrated as a PoC ). >> >> Any solution/directives to achieve a similar to better solution using >> OpenStack techniques and components? >> >> Please enlighten with your ideas. >> >> Thank you >> Krishane. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Thu Jun 22 08:08:01 2023 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Thu, 22 Jun 2023 13:38:01 +0530 Subject: instance console something went wrong, connection is closed | Wallaby DCN In-Reply-To: References: Message-ID: Hi, Please find the below log: [root at dcn01-hci-1 libvirt]# cat virtqemud.log 2023-06-22 07:40:01.575+0000: 350319: error : virNetSocketReadWire:1804 : End of file while reading data: Input/output error 2023-06-22 07:40:01.575+0000: 350319: error : virNetSocketWriteWire:1844 : Cannot write data: Broken pipe I think this is causing the problem of not getting the instance console. With regards, Swogat Pradhan On Fri, Jun 2, 2023 at 11:27?AM Swogat Pradhan wrote: > Update: > If the i am performing any activity like migration or resize of an > instance whose console is accessible, the console becomes inaccessible > giving out the following error : something went wrong, connection is closed > > The was 1 other instance whose console was not accessible and i did a > shelve and unshelve and suddenly the instance console became accessible. > > This is a peculiar behavior and i don't understand where is the issue . > > With regards, > Swogat Pradhan > > On Fri, Jun 2, 2023 at 11:19?AM Swogat Pradhan > wrote: > >> Hi, >> I am creating instances in my DCN site and i am unable to get the console >> sometimes, error: something went wrong, connection is closed >> >> I have 3 instances now running on my hci02 node and there is console >> access on 1 of the vm's and the rest two i am not getting the console, i >> have used the same flavor, same image same security group for the VM's. >> >> Please suggest what can be done. >> >> With regards, >> Swogat Pradhan >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Thu Jun 22 08:19:39 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 22 Jun 2023 10:19:39 +0200 Subject: Experience with VGPUs In-Reply-To: References: <2DD18791-4BFD-4FF0-AAAC-77D8C18FB138@me.com> <045f750d-223b-5c90-2097-0d67d0352faf@cern.ch> Message-ID: Le mer. 21 juin 2023 ? 18:23, Dmitriy Rabotyagov a ?crit : > I can recall in quite recent release notes in Nvidia drivers, that now > they do allow attaching multiple vGPUs to a single VM, but I can recall > Sylvain said that is not exactly as it sounds like and there're severe > limitations to this advertised feature. > > That's the problem with this feature enablement in Nova : we mostly depend on a very specific external Linux driver. So, tbc, if you want to use vGPU, please rather look at the Nvidia documentation *before* :) About multiple vGPUs, Nvidia says it depends on the GPU architecture (and that was changing since the last years) : (quoting Nvidia here) *The supported vGPUs depend on the architecture of the GPU on which the vGPUs reside: * - *For GPUs based on the NVIDIA Volta architecture and later GPU architectures, all Q-series and C-series vGPUs are supported. 
On GPUs that support the Multi-Instance GPU (MIG) feature, both time-sliced and MIG-backed vGPUs are supported. * - *For GPUs based on the NVIDIA Pascal? architecture, only Q-series and C-series vGPUs that are allocated all of the physical GPU's frame buffer are supported. * - *For GPUs based on the NVIDIA NVIDIA Maxwell? graphic architecture, only Q-series vGPUs that are allocated all of the physical GPU's frame buffer are supported. * *You can assign multiple vGPUs with differing amounts of frame buffer to a single VM, provided the board type and the series of all the vGPUs is the same. For example, you can assign an A40-48C vGPU and an A40-16C vGPU to the same VM. However, you cannot assign an A30-8C vGPU and an A16-8C vGPU to the same VM. * https://docs.nvidia.com/grid/latest/grid-vgpu-release-notes-red-hat-el-kvm/index.html#multiple-vgpu-support-vgpus As a reminder, you can find the vGPU types here https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#virtual-gpu-types-grid-reference Basically, what changed is that with the latest Volta and Ampere architecture, Nvidia was able to provide different vGPUs with sliced frame buffer recently, while previously Nvidia was only able to pin a vGPU taking the whole pGPU frame buffer to a single VM, which was actually limiting de facto the instance to only have one single vGPU attached (or having a second vGPU attached from another pGPU, which is non trivial to schedule) For that reason, we initially limited the VGPU allocation requests to a maximum of 1 in Nova since it was horribly depending on hardware, but I eventually tried to propose to remove that limitation with https://review.opendev.org/c/openstack/nova/+/845757 which would need some further work and testing (which is nearly impossible with upstream CI since the nvidia drivers are proprietary and licensed). Some operator wanting to lift that current limitation would get all my attention if he/she would volunteer for *testing* such patch. Ping me on IRC #openstack-nova (bauzas) and we could proceed quickly. > Also I think in MIG mode it's possible to split GPU in a subset of > supported (but different) flavors, though I have close to no idea how > scheduling would be done in this case. > > This is quite simple : you need to create different MIG instances using different heterogenous profiles and you'll see then that *some* mdev types will accordingly have an inventory of 1. You could then use some new feature we introduced in Xena, which allows the nova libvirt driver to create different custom resource classes : https://specs.openstack.org/openstack/nova-specs/specs/xena/implemented/generic-mdevs.html Again, testing this on real production is the crux of the problem. We provided as many functional tests as we were able in order to verify such things, but getting a real MIG-backed GPU and setting the confs appropriately is something we are missing and which would be useful for tracking bugs. Last point, I'm more than open to collaborating with CERN or any other operator wanting to stabilize the vGPU feature enablement in Nova. I know that the existing feature presents a quite long list of bug reports and has some severe limitations, but I'd be more happy with having some guidance from the operators on how and what to stabilize. -Sylvain On Wed, Jun 21, 2023, 17:36 Ulrich Schwickerath > wrote: > >> Hi, again, >> >> here's a link to my slides: >> >> https://cernbox.cern.ch/s/v3YCyJjrZZv55H2 >> >> Let me know if it works. 
>> >> Cheers, Ulrich >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mahendra.paipuri at cnrs.fr Thu Jun 22 08:43:25 2023 From: mahendra.paipuri at cnrs.fr (Mahendra Paipuri) Date: Thu, 22 Jun 2023 10:43:25 +0200 Subject: Experience with VGPUs In-Reply-To: References: <2DD18791-4BFD-4FF0-AAAC-77D8C18FB138@me.com> Message-ID: Hello all, Thanks @Ulrich for sharing the presentation. Very informative!! One question : if I understood correctly, *time-sliced *vGPUs *absolutely need* GRID drivers and licensed clients for the vGPUs to work in the guests. For the MIG partitioning, there is *no need* to install GRID drivers in the guest and also *no need* to have licensed clients. Could you confirm if this is the actual case? Cheers. Regards Mahendra On 21/06/2023 16:10, Ulrich Schwickerath wrote: > > Hi, all, > > Sylvain explained quite well how to do it technically. We have a PoC > running, however, still have some stability issues, as mentioned on > the summit. We're running the NVIDIA virtualisation drivers on the > hypervisors and the guests, which requires a license from NVIDIA. In > our configuration we are still quite limited in the sense that we have > to configure all cards in the same hypervisor in the same way, that is > the same MIG partitioning. Also, it is not possible to attach more > than one device to a single VM. > > As mentioned in the presentation we are a bit behind with Nova, and in > the process of fixing this as we speak. Because of that we had to do a > couple of back ports in Nova to make it work, which we hope to be able > to get rid of by the ongoing upgrades. > > Let me? see if I can make the slides available here. > > Cheers, Ulrich > > On 20/06/2023 19:07, Oliver Weinmann wrote: >> Hi everyone, >> >> Jumping into this topic again. Unfortunately I haven?t had time yet >> to test Nvidia VGPU in OpenStack but in VMware Vsphere. What our >> users complain most about is the inflexibility since you have to use >> the same profile on all vms that use the gpu. One user mentioned to >> try SLURM. I know there is no official OpenStack project for SLURM >> but I wonder if anyone else tried this approach? If I understood >> correctly this would also not require any Nvidia subscription since >> you passthrough the GPU to a single instance and you don?t use VGPU >> nor MIG. >> >> Cheers, >> Oliver >> >> Von meinem iPhone gesendet >> >>> Am 20.06.2023 um 17:34 schrieb Sylvain Bauza : >>> >>> ? >>> >>> >>> Le?mar. 20 juin 2023 ??16:31, Mahendra Paipuri >>> a ?crit?: >>> >>> Thanks Sylvain for the pointers. >>> >>> One of the questions we have is: can we create MIG profiles on >>> the host and then attach each one or more profile(s) to VMs? >>> This bug [1] reports that once we attach one profile to a VM, >>> rest of MIG profiles become unavailable. From what you have said >>> about using SR-IOV and VFs, I guess this should be possible. >>> >>> >>> Correct, what you need is to create first the VFs using sriov-manage >>> and then you can create the MIG instances. >>> Once you create the MIG instances using the profiles you want, you >>> will see that the related available_instances for the nvidia mdev >>> type (by looking at sysfs) will say that you can have a single vGPU >>> for this profile. >>> Then, you can use that mdev type with Nova using nova.conf. 
>>> >>> That being said, while this above is simple, the below talk was >>> saying more about how to correctly use the GPU by the host so please >>> wait :-) >>> >>> I think you are talking about "vGPUs with OpenStack Nova" talk >>> on OpenInfra stage. I will look into it once the videos will be >>> online. >>> >>> >>> Indeed. >>> -S >>> >>> [1] https://bugs.launchpad.net/nova/+bug/2008883 >>> >>> Thanks >>> >>> Regards >>> >>> Mahendra >>> >>> On 20/06/2023 15:47, Sylvain Bauza wrote: >>>> >>>> >>>> Le?mar. 20 juin 2023 ??15:12, PAIPURI Mahendra >>>> a ?crit?: >>>> >>>> Hello Ulrich, >>>> >>>> >>>> I am relaunching this discussion as I noticed that you gave >>>> a talk about this topic?at OpenInfra Summit in Vancouver. >>>> Is it possible to share the presentation here? I hope the >>>> talks will be uploaded soon in YouTube. >>>> >>>> >>>> We are mainly interested in using MIG instances in >>>> Openstack cloud and I could not really find a lot of >>>> information?by googling. If you could share your >>>> experiences, that would be great. >>>> >>>> >>>> >>>> Due to scheduling conflicts, I wasn't able to attend Ulrich's >>>> session but his feedback will be greatly listened to by me. >>>> >>>> FWIW, there was also a short session about how to enable MIG >>>> and play with Nova at the OpenInfra stage (and that one I was >>>> able to attend it), and it was quite seamless. What exact >>>> information are you looking for ? >>>> The idea with MIG is that you need to create SRIOV VFs above >>>> the MIG instances using sriov-manage script provided by nvidia >>>> so that the mediated devices will use those VFs as the base PCI >>>> devices to be used for Nova. >>>> >>>> Cheers. >>>> >>>> >>>> Regards >>>> >>>> Mahendra >>>> >>>> ------------------------------------------------------------------------ >>>> *De :* Ulrich Schwickerath >>>> *Envoy? :* lundi 16 janvier 2023 11:38:08 >>>> *? :* openstack-discuss at lists.openstack.org >>>> *Objet :* Re: ??: Experience with VGPUs >>>> >>>> Hi, all, >>>> >>>> just to add to the discussion, at CERN we have recently >>>> deployed a bunch of A100 GPUs in PCI passthrough mode, and >>>> are now looking into improving their usage by using MIG. >>>> From the NOVA point of view things seem to work OK, we can >>>> schedule VMs requesting a VGPU, the client starts up and >>>> gets a license token from our NVIDIA license server >>>> (distributing license keys is our private cloud is >>>> relatively easy in our case). It's a PoC only for the time >>>> being, and we're not ready to put that forward as we're >>>> facing issues with CUDA on the client (it fails immediately >>>> in memory operations with 'not supported', still >>>> investigating why this happens). >>>> >>>> Once we get that working it would be nice to be able to >>>> have a more fine grained scheduling so that people can ask >>>> for MIG devices of different size. The other challenge is >>>> how to set limits on GPU resources. Once the above issues >>>> have been sorted out we may want to look into cyborg as >>>> well thus we are quite interested in first experiences with >>>> this. >>>> >>>> Kind regards, >>>> >>>> Ulrich >>>> >>>> On 13.01.23 21:06, Dmitriy Rabotyagov wrote: >>>>> To have that said, deb/rpm packages they are providing >>>>> doesn't help much, as: >>>>> * There is no repo for them, so you need to download them >>>>> manually from enterprise portal >>>>> * They can't be upgraded anyway, as driver version is part >>>>> of the package name. 
And each package conflicts with any >>>>> another one. So you need to explicitly remove old package >>>>> and only then install new one. And yes, you must stop all >>>>> VMs before upgrading driver and no, you can't live migrate >>>>> GPU mdev devices due to that now being implemented in >>>>> qemu. So deb/rpm/generic driver doesn't matter at the end tbh. >>>>> >>>>> >>>>> ??, 13 ???. 2023 ?., 20:56 Cedric : >>>>> >>>>> >>>>> Ended up with the very same conclusions than Dimitry >>>>> regarding the use of Nvidia Vgrid for the VGPU use >>>>> case with Nova, it works pretty well but: >>>>> >>>>> - respecting the licensing model as operationnal >>>>> constraints, note that guests need to reach a license >>>>> server in order to get a token (could be via the >>>>> Nvidia SaaS service or on-prem) >>>>> - drivers for both guest and hypervisor are not easy >>>>> to implement and maintain on large scale. A year ago, >>>>> hypervisors drivers were not packaged to >>>>> Debian/Ubuntu, but builded though a bash script, thus >>>>> requiering additional automatisation work and careful >>>>> attention regarding kernel update/reboot of Nova >>>>> hypervisors. >>>>> >>>>> Cheers >>>>> >>>>> >>>>> On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov >>>>> wrote: >>>>> > >>>>> > You are saying that, like Nvidia GRID drivers are >>>>> open-sourced while >>>>> > in fact they're super far from being that. In order >>>>> to download >>>>> > drivers not only for hypervisors, but also for guest >>>>> VMs you need to >>>>> > have an account in their Enterprise Portal. It took >>>>> me roughly 6 weeks >>>>> > of discussions with hardware vendors and Nvidia >>>>> support to get a >>>>> > proper account there. And that happened only after >>>>> applying for their >>>>> > Partner Network (NPN). >>>>> > That still doesn't solve the issue of how to provide >>>>> drivers to >>>>> > guests, except pre-build a series of images with >>>>> these drivers >>>>> > pre-installed (we ended up with making a DIB element >>>>> for that [1]). >>>>> > Not saying about the need to distribute license >>>>> tokens for guests and >>>>> > the whole mess with compatibility between hypervisor >>>>> and guest drivers >>>>> > (as guest driver can't be newer then host one, and >>>>> HVs can't be too >>>>> > new either). >>>>> > >>>>> > It's not that I'm protecting AMD, but just saying >>>>> that Nvidia is not >>>>> > that straightforward either, and at least on paper >>>>> AMD vGPUs look >>>>> > easier both for operators and end-users. >>>>> > >>>>> > [1] >>>>> https://github.com/citynetwork/dib-elements/tree/main/nvgrid >>>>> > >>>>> > > >>>>> > > As for AMD cards, AMD stated that some of their MI >>>>> series card supports SR-IOV for vGPUs. However, those >>>>> drivers are never open source or provided closed >>>>> source to public, only large cloud providers are able >>>>> to get them. So I don't really recommend getting AMD >>>>> cards for vGPU unless you are able to get support from >>>>> them. >>>>> > > >>>>> > >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Thu Jun 22 08:58:17 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 22 Jun 2023 10:58:17 +0200 Subject: Experience with VGPUs In-Reply-To: References: <2DD18791-4BFD-4FF0-AAAC-77D8C18FB138@me.com> Message-ID: Le jeu. 22 juin 2023 ? 10:43, Mahendra Paipuri a ?crit : > Hello all, > > Thanks @Ulrich for sharing the presentation. Very informative!! 
> > One question : if I understood correctly, *time-sliced *vGPUs *absolutely > need* GRID drivers and licensed clients for the vGPUs to work in the > guests. For the MIG partitioning, there is *no need* to install GRID > drivers in the guest and also *no need* to have licensed clients. Could > you confirm if this is the actual case? > Again, I'm not part of nVidia, neither I'm paid from them, but you can look at their GRID licensing here : https://docs.nvidia.com/grid/latest/grid-licensing-user-guide/index.html If you also look at the nvidia docs for RHEL support, you need a vCS (virtualComputeServer) licence for Ampere MIG profiles like C-series : https://docs.nvidia.com/grid/latest/grid-vgpu-release-notes-red-hat-el-kvm/index.html#hardware-configuration Cheers. > > Regards > > Mahendra > On 21/06/2023 16:10, Ulrich Schwickerath wrote: > > Hi, all, > > Sylvain explained quite well how to do it technically. We have a PoC > running, however, still have some stability issues, as mentioned on the > summit. We're running the NVIDIA virtualisation drivers on the hypervisors > and the guests, which requires a license from NVIDIA. In our configuration > we are still quite limited in the sense that we have to configure all cards > in the same hypervisor in the same way, that is the same MIG partitioning. > Also, it is not possible to attach more than one device to a single VM. > > As mentioned in the presentation we are a bit behind with Nova, and in the > process of fixing this as we speak. Because of that we had to do a couple > of back ports in Nova to make it work, which we hope to be able to get rid > of by the ongoing upgrades. > > Let me see if I can make the slides available here. > > Cheers, Ulrich > On 20/06/2023 19:07, Oliver Weinmann wrote: > > Hi everyone, > > Jumping into this topic again. Unfortunately I haven?t had time yet to > test Nvidia VGPU in OpenStack but in VMware Vsphere. What our users > complain most about is the inflexibility since you have to use the same > profile on all vms that use the gpu. One user mentioned to try SLURM. I > know there is no official OpenStack project for SLURM but I wonder if > anyone else tried this approach? If I understood correctly this would also > not require any Nvidia subscription since you passthrough the GPU to a > single instance and you don?t use VGPU nor MIG. > > Cheers, > Oliver > > Von meinem iPhone gesendet > > Am 20.06.2023 um 17:34 schrieb Sylvain Bauza > : > > ? > > > Le mar. 20 juin 2023 ? 16:31, Mahendra Paipuri > a ?crit : > >> Thanks Sylvain for the pointers. >> >> One of the questions we have is: can we create MIG profiles on the host >> and then attach each one or more profile(s) to VMs? This bug [1] reports >> that once we attach one profile to a VM, rest of MIG profiles become >> unavailable. From what you have said about using SR-IOV and VFs, I guess >> this should be possible. >> > > Correct, what you need is to create first the VFs using sriov-manage and > then you can create the MIG instances. > Once you create the MIG instances using the profiles you want, you will > see that the related available_instances for the nvidia mdev type (by > looking at sysfs) will say that you can have a single vGPU for this profile. > Then, you can use that mdev type with Nova using nova.conf. > > That being said, while this above is simple, the below talk was saying > more about how to correctly use the GPU by the host so please wait :-) > >> I think you are talking about "vGPUs with OpenStack Nova" talk on >> OpenInfra stage. 
I will look into it once the videos will be online. >> > > Indeed. > -S > >> [1] https://bugs.launchpad.net/nova/+bug/2008883 >> >> Thanks >> >> Regards >> >> Mahendra >> On 20/06/2023 15:47, Sylvain Bauza wrote: >> >> >> >> Le mar. 20 juin 2023 ? 15:12, PAIPURI Mahendra >> a ?crit : >> >>> Hello Ulrich, >>> >>> >>> I am relaunching this discussion as I noticed that you gave a talk about >>> this topic at OpenInfra Summit in Vancouver. Is it possible to share the >>> presentation here? I hope the talks will be uploaded soon in YouTube. >>> >>> >>> We are mainly interested in using MIG instances in Openstack cloud and I >>> could not really find a lot of information by googling. If you could share >>> your experiences, that would be great. >>> >>> >>> >> Due to scheduling conflicts, I wasn't able to attend Ulrich's session but >> his feedback will be greatly listened to by me. >> >> FWIW, there was also a short session about how to enable MIG and play >> with Nova at the OpenInfra stage (and that one I was able to attend it), >> and it was quite seamless. What exact information are you looking for ? >> The idea with MIG is that you need to create SRIOV VFs above the MIG >> instances using sriov-manage script provided by nvidia so that the mediated >> devices will use those VFs as the base PCI devices to be used for Nova. >> >> Cheers. >>> >>> >>> Regards >>> >>> Mahendra >>> ------------------------------ >>> *De :* Ulrich Schwickerath >>> *Envoy? :* lundi 16 janvier 2023 11:38:08 >>> *? :* openstack-discuss at lists.openstack.org >>> *Objet :* Re: ??: Experience with VGPUs >>> >>> >>> Hi, all, >>> >>> just to add to the discussion, at CERN we have recently deployed a bunch >>> of A100 GPUs in PCI passthrough mode, and are now looking into improving >>> their usage by using MIG. From the NOVA point of view things seem to work >>> OK, we can schedule VMs requesting a VGPU, the client starts up and gets a >>> license token from our NVIDIA license server (distributing license keys is >>> our private cloud is relatively easy in our case). It's a PoC only for the >>> time being, and we're not ready to put that forward as we're facing issues >>> with CUDA on the client (it fails immediately in memory operations with >>> 'not supported', still investigating why this happens). >>> >>> Once we get that working it would be nice to be able to have a more fine >>> grained scheduling so that people can ask for MIG devices of different >>> size. The other challenge is how to set limits on GPU resources. Once the >>> above issues have been sorted out we may want to look into cyborg as well >>> thus we are quite interested in first experiences with this. >>> >>> Kind regards, >>> >>> Ulrich >>> On 13.01.23 21:06, Dmitriy Rabotyagov wrote: >>> >>> To have that said, deb/rpm packages they are providing doesn't help >>> much, as: >>> * There is no repo for them, so you need to download them manually from >>> enterprise portal >>> * They can't be upgraded anyway, as driver version is part of the >>> package name. And each package conflicts with any another one. So you need >>> to explicitly remove old package and only then install new one. And yes, >>> you must stop all VMs before upgrading driver and no, you can't live >>> migrate GPU mdev devices due to that now being implemented in qemu. So >>> deb/rpm/generic driver doesn't matter at the end tbh. >>> >>> >>> ??, 13 ???. 
>>>> Ended up with the very same conclusions as Dmitriy regarding the use
>>>> of Nvidia vGRID for the vGPU use case with Nova: it works pretty
>>>> well, but:
>>>>
>>>> - respecting the licensing model is an operational constraint; note
>>>> that guests need to reach a license server in order to get a token
>>>> (either via the Nvidia SaaS service or on-prem).
>>>> - drivers for both guest and hypervisor are not easy to deploy and
>>>> maintain at large scale. A year ago, hypervisor drivers were not
>>>> packaged for Debian/Ubuntu, but built through a bash script, thus
>>>> requiring additional automation work and careful attention regarding
>>>> kernel updates/reboots of Nova hypervisors.
>>>>
>>>> Cheers
>>>>
>>>> On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov wrote:
>>>> >
>>>> > You are saying that as if Nvidia GRID drivers are open-sourced,
>>>> > while in fact they're super far from being that. In order to
>>>> > download drivers, not only for hypervisors but also for guest VMs,
>>>> > you need to have an account on their Enterprise Portal. It took me
>>>> > roughly 6 weeks of discussions with hardware vendors and Nvidia
>>>> > support to get a proper account there, and that happened only after
>>>> > applying for their Partner Network (NPN).
>>>> > That still doesn't solve the issue of how to provide drivers to
>>>> > guests, except by pre-building a series of images with these
>>>> > drivers pre-installed (we ended up making a DIB element for that
>>>> > [1]). Not to mention the need to distribute license tokens to
>>>> > guests and the whole mess with compatibility between hypervisor and
>>>> > guest drivers (the guest driver can't be newer than the host one,
>>>> > and HVs can't be too new either).
>>>> >
>>>> > It's not that I'm defending AMD, but just saying that Nvidia is not
>>>> > that straightforward either, and at least on paper AMD vGPUs look
>>>> > easier for both operators and end-users.
>>>> >
>>>> > [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid
>>>> >
>>>> > > As for AMD cards, AMD stated that some of their MI series cards
>>>> > > support SR-IOV for vGPUs. However, those drivers have never been
>>>> > > open source or provided as closed source to the public; only
>>>> > > large cloud providers are able to get them. So I don't really
>>>> > > recommend getting AMD cards for vGPU unless you are able to get
>>>> > > support from them.

From Danny.Webb at thehutgroup.com Thu Jun 22 09:21:08 2023
From: Danny.Webb at thehutgroup.com (Danny Webb)
Date: Thu, 22 Jun 2023 09:21:08 +0000
Subject: DRaaS question

We've been looking at Trilio (https://trilio.io/); it's a paid solution,
but its functionality seems pretty good.
________________________________
From: Zakhar Kirpichenko
Sent: 22 June 2023 05:48
To: openstack-discuss
Subject: Re: DRaaS question

Isn't freezer basically abandonware?

/Z

On Thu, 22 Jun 2023 at 07:10, Alvaro Soto wrote:

Hello,
I'm unfamiliar with the 2 options you mentioned, so I can't build a
comparison matrix. But take a look at the freezer project.

https://docs.openstack.org/freezer/latest/

Cheers.
---
Alvaro Soto.

Note: My work hours may not be your work hours. Please do not feel the
need to respond during a time that is not convenient for you.
----------------------------------------------------------
Great people talk about ideas,
ordinary people talk about things,
small people talk... about other people.

On Wed, Jun 21, 2023, 5:33 AM KK CHN wrote:

List,

Looking for solutions using OpenStack similar to:

1. VMware VCDR (VMware Cloud Disaster Recovery) as a Pilot Light DR
solution with hassle-free failover and failback.

2. AWS Cloud Replication Agent (CloudEndure) as an operational DR
solution in the cloud.

(Both 1 and 2 are capable of acting as an operational DR in the cloud and
serving from AWS, as demonstrated in a PoC.)

Any solutions/directives to achieve a similar or better solution using
OpenStack techniques and components?

Please enlighten us with your ideas.

Thank you
Krishane.

From zakhar at gmail.com Thu Jun 22 09:22:56 2023
From: zakhar at gmail.com (Zakhar Kirpichenko)
Date: Thu, 22 Jun 2023 12:22:56 +0300
Subject: DRaaS question

Yes, I think Trilio took freezer and turned it into a commercial product.
Unfortunately, the freezer project itself appears to be dead.

/Z

On Thu, 22 Jun 2023 at 12:21, Danny Webb wrote:

> We've been looking at Trilio (https://trilio.io/); it's a paid solution,
> but its functionality seems pretty good.
> [...]

From skaplons at redhat.com Thu Jun 22 11:02:23 2023
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 22 Jun 2023 13:02:23 +0200
Subject: [all][ops][rbac] RBAC discussion summary from Vancouver summit
Message-ID: <2435427.0eHbjRvo1U@p1>

Hi,

On Wednesday, 21 June 2023 at 20:11:56 CEST, Ghanshyam Mann wrote:
> Hello Everyone,
>
> Most of you know we had the RBAC discussion at the Vancouver summit on
> Tuesday.
> - https://etherpad.opendev.org/p/rbac-operator-feedback-vancouver2023
>
> This forum session started with the current progress of the RBAC work,
> which you can find in the etherpad:
> - https://etherpad.opendev.org/p/rbac-operator-feedback-vancouver2023#L46
>
> After the progress updates, we moved on to feedback from operators. I am
> writing the summary; feel free to add anything I missed to capture here:
>
> Admin Access:
> ===========
> * Keep the current admin (legacy admin means admin in any project)
> untouched.
> * There is a need for second-level support along with the current admin,
> and this can be done with the project manager.
> * Public cloud use case:
> ----------------------------
> ** The currently defined RBAC goal does not solve the public cloud use
> case of having a domain admin manage a specific domain's
> users/projects/roles.
> ** If the domain admin can assign the admin role to any user under their
> domain, then that user gets admin access across all the projects on the
> service side (even projects in different domains, as the service side
> has no domain boundaries).
> ** We do not have any better solution for now, but one idea is to let
> the domain manager manage the users/projects/roles in their domain
> except for 'admin' role assignment. The current legacy admin (admin
> across all domains) can handle the 'admin' role assignment. I mean, the
> domain manager behaves as a domain-level admin and the legacy admin as a
> global admin. Feel free to write up other solutions if you have any.
>
> Member Access:
> ============
> * Project member is a good fix, and useful to actually enforce the
> 'member' role.
>
> Reader Access:
> ============
> * Project reader is one of the most useful roles in this change.
>
> * There is an ask for a global reader, which is nothing but what the
> system reader was in the past. This can be useful for performing an
> audit of the complete cloud. One idea is to do this with a special role
> in Keystone. I will write the spec for it, and we can talk about it
> there.

Will this somehow impact the existing phases in
https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html,
or will this global reader role be considered as the next "phase" after
the project manager (phase 3) is done?

>
> -gmann

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat

From dtantsur at protonmail.com Thu Jun 22 12:15:33 2023
From: dtantsur at protonmail.com (Dmitry Tantsur)
Date: Thu, 22 Jun 2023 12:15:33 +0000
Subject: [ironic] Summit PTG Summary
Message-ID: <245f0d8f-6065-7fea-922c-86b68f2b0ecb@protonmail.com>

Hi!

On 6/21/23 18:25, Jay Faulkner wrote:
> Hey all,
>
> As indicated before the summit, the Ironic PTG was dedicated to
> in-person collaboration, hacking, and design chats, and was not intended
> for any final decision making, since not all of our community was
> present.
>
> Full, contemporaneous notes are at
> https://etherpad.opendev.org/p/ironic-openinfra-2023 -- this is meant to
> be a high-level summary.
>
> For the summit, there were several extremely well-attended Ironic talks.
> Thank you to all of those who gave talks!
> In addition, I and other members of the Ironic community were able to
> connect with many people using Ironic quietly in production with a large
> amount of success. As always, the Ironic community strongly encourages
> people with success stories to communicate loudly about them :D.
>
> Ironic had a single forum session, where we met with several operators,
> answering questions and in some cases providing solutions for people
> struggling with a problem. The full notes from the session, primarily
> consisting of a census of Ironic installs, are accessible from the
> above-linked etherpad.
>
> As for the PTG sessions, there were a few topics. These are extremely
> rough outlines of what was discussed; again, no specific decisions were
> made.
> - networking-generic-switch
> -- Several contributors, including Baptiste Jonglez and John Garbutt,
> discussed options for scaling NGS further up in the future, including
> enhancing it to support other protocols, such as VXLAN. Some of these
> discussions have already moved to the list, and I encourage folks to
> engage with Baptiste to make our network tooling scale even more.
>
> - Future of Ironic
> -- We spoke for a while about the Ironic vision document created in
> Rocky, targeting approximately now:
> https://docs.openstack.org/ironic/latest/contributor/vision.html --
> we've accomplished many of the items on the list, but what's next?
> -- Possibilities brainstormed included:
> --- Enhanced network interfaces that use SDN or DPU orchestration to
> configure baremetal networks

I can haz standalone switch management? :)

> --- more distinct support for composable hardware
> --- expanding Ironic standalone use cases
> --- getting more directly connected with communities like Metal3
> integrating into Ironic
> --- scaling down Ironic into a tool useful at smaller scale (a niche
> that a tool like Cobbler has a strong hold on today)

Not clear from the etherpad: is it a question of scale or UX? I'm not
sure a Bifrost installation is much larger than MAAS, etc. (although we
can always optimize it further, e.g. add sqlite as an option).

The etherpad mentions "Easy enough", and ease of use is definitely the
field where Ironic leaves a lot to be desired. Possibilities that come to
mind:
1) My "deployment API" proposal,
2) Adding a strict schema to our API and cleaning up confusing JSON
fields,
3) An image building service,
4) The already mentioned network management,
5) More high-level API actions/workflows.

> --- terraform driver designed to call Ironic directly

We have something like this, admittedly limited to our use case:
https://github.com/openshift-metal3/terraform-provider-ironic/

Dmitry

> -- I think we should update the vision document with some of these
> ideas, so we can use it as a measuring stick in 5-6 years, like we were
> able to use the Rocky vision document this time.
>
> Finally, we closed out the summit with an Ironic dinner, with 16
> attendees from various companies, use cases, and backgrounds, with one
> thing in common: we all need bare metal servers :). If you're wondering
> who the faces of Ironic are, here are some of us:
> https://twitter.com/jayofdoom/status/1669531671937384449 :).
>
> I'll say, on a personal note, it was extremely nice to see many of my
> old and new friends in the OpenInfra community face to face for the
> first time in years. The absence of those unable to travel was felt
> deeply as well, and I hope we'll be able to reconnect in person in the
> future.
> Thanks,
> Jay Faulkner
> Ironic PTL

From dmellado at redhat.com Thu Jun 22 12:42:07 2023
From: dmellado at redhat.com (Daniel Mellado)
Date: Thu, 22 Jun 2023 14:42:07 +0200
Subject: [all] PyCharm/JetBrains licenses renewed

Hi all o/

This is just to let you know that the open source licenses for
PyCharm/JetBrains have been renewed for one extra year, until Aug 2024.

Best!

Daniel Mellado

From roberto.acosta at luizalabs.com Thu Jun 22 14:58:50 2023
From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta)
Date: Thu, 22 Jun 2023 11:58:50 -0300
Subject: [neutron] - OVN heartbeat - short agent_down_time

I understand Nova's requirements, but the OVN heartbeat has a significant
impact on the Southbound database. We have a related topic about this on
the etherpad (Vancouver PTG):
- "(labedz) OVN heartbeat mechanism - a big mechanism with significant
infrastructure impact. Why do we need to be on the OVN southbound with
Neutron?"

Sean mentioned some reasons to keep the metadata heartbeat mechanism:
- "i would suggest starting a patch if you believe the current behavior
will be problematic, but keep in mind that adding too much jitter/delay
can cause vm boots/migrations to randomly fail, leaving the instance in
an error state."

Maybe we shouldn't try to get the network agent status without
considering the OVN backend impact. OVN can take a long time processing
messages from the ovs-vswitchd daemon on a chassis (OVSDB transactions).
In this case, the ovn-controller is still blocked by the unix socket
between ovn-controller <-> ovs-vswitchd, and during this sync the
ovn-controller cannot process any "heartbeat" because it is busy with the
last cfg. In other words, the time to bump the heartbeat cfg depends
heavily on the number of resources in use (scaling).

This specific patch is related to the "OVN metadata agent" heartbeat and
uses neutron:ovn-metadata-sb-cfg to bump the nb_cfg config number:

    table = ('Chassis_Private' if self.agent.has_chassis_private
             else 'Chassis')
    self.agent.sb_idl.db_set(
        table, self.agent.chassis, ('external_ids', {
            ovn_const.OVN_AGENT_METADATA_SB_CFG_KEY:
                str(row.nb_cfg)})).execute()

As I understand it, this is very similar to the "OVN controller agent"
heartbeat, but in the ovn-controller case we are talking about
neutron:liveness_check_at to bump cfg on the NB_Global table:

    last_ping = self.nb_ovn.nb_global.external_ids.get(
        ovn_const.OVN_LIVENESS_CHECK_EXT_ID_KEY)

In both cases, to transition cfg numbers we need ovn-controller
availability... I suppose that being able to customize this value is
better for large-scale cases. It seems to me that's what we talked about
in Vancouver, Rodolfo (scalability vs reliability). OVN needs to evolve
with I-P (incremental processing) to respond faster to configuration
changes, but until that happens, we'll have to live with bigger
timeouts...
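The server-side knob under discussion lives in neutron.conf. A minimal
sketch for a large deployment follows; the value is only an example, and
per the bug Ihar points to below, an "infinity"-style value must still
fit in a signed 32-bit integer:

    [neutron.conf]
    [DEFAULT]
    # seconds without a heartbeat before the server reports an agent as
    # dead; for OVN this maps onto the nb_cfg bump described above
    agent_down_time = 600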
On Wed, Jun 21, 2023 at 18:00, Ihar Hrachyshka wrote:

> On Mon, Jun 19, 2023 at 11:04 AM Roberto Bartzen Acosta wrote:
>
>> Hello Neutron folks,
>>
>> We discussed in the operators feedback session the OVN heartbeat and
>> the use of "infinity" values for large-scale deployments, because we
>> have a significant infrastructure impact when a short 'agent_down_time'
>> is configured.
>
> This is tangentially related, but note that using "infinity" values for
> agent_down_time is unsafe:
> https://bugzilla.redhat.com/show_bug.cgi?id=2215407 (depending on
> whether your "infinity" value is larger than ~15 days, assuming 32-bit
> ints are used on your platform).
>
>> The merged patch [1] limited the maximum delay to 10 seconds. I
>> understand the requirement to use random values to avoid load spikes,
>> but why does this fix limit the heartbeat to 10 seconds? What is the
>> goal of the agent_down_time parameter in this case? How will it work
>> for someone who has hundreds of compute nodes / metadata agents?
>>
>> Regards,
>> Roberto
>>
>> [1] - https://review.opendev.org/c/openstack/neutron/+/883687

From fungi at yuggoth.org Thu Jun 22 17:06:39 2023
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 22 Jun 2023 17:06:39 +0000
Subject: DRaaS question
Message-ID: <20230622170638.uofih5fcnqa264vq@yuggoth.org>

On 2023-06-22 12:22:56 +0300 (+0300), Zakhar Kirpichenko wrote:
> Yes, I think Trilio took freezer and turned it into a commercial
> product. Unfortunately, the freezer project itself appears to be
> dead.
[...]

If anyone is or decides to become a Trilio customer, maybe let the
salespeople you interact with there know that you'd greatly appreciate it
if they joined the OpenInfra Foundation and/or contributed to upstream
maintenance of the OpenStack projects their business depends on.
-- 
Jeremy Stanley

From jay at gr-oss.io Thu Jun 22 17:25:10 2023
From: jay at gr-oss.io (Jay Faulkner)
Date: Thu, 22 Jun 2023 10:25:10 -0700
Subject: [ironic] Summit PTG Summary

> > --- scaling down Ironic into a tool useful at smaller scale (a niche
> > that a tool like Cobbler has a strong hold on today)
>
> Not clear from the etherpad: is it a question of scale or UX? I'm not
> sure a Bifrost installation is much larger than MAAS, etc. (although we
> can always optimize it further, e.g. add sqlite as an option).
Yes to all? It's more that this is a space where we have very little
market penetration, and I'd like to solve that. I think making progress
here will require a nexus of documentation, simplification, and fighting
back against false perceptions, but it's worthwhile because of the size
of the potential growth.

> The etherpad mentions "Easy enough", and ease of use is definitely the
> field where Ironic leaves a lot to be desired. Possibilities that come
> to mind:
> 1) My "deployment API" proposal,
> 2) Adding a strict schema to our API and cleaning up confusing JSON
> fields,
> 3) An image building service,
> 4) The already mentioned network management,
> 5) More high-level API actions/workflows.

These are all really good ideas that go in the direction I was thinking
:).

- Jay Faulkner
Ironic PTL

From juliaashleykreger at gmail.com Thu Jun 22 17:44:05 2023
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Thu, 22 Jun 2023 10:44:05 -0700
Subject: DRaaS question

On Thu, Jun 22, 2023 at 10:14 AM Jeremy Stanley wrote:

> If anyone is or decides to become a Trilio customer, maybe let the
> salespeople you interact with there know that you'd greatly appreciate
> it if they joined the OpenInfra Foundation and/or contributed to
> upstream maintenance of the OpenStack projects their business depends
> on.

Or perhaps make it a condition of future business; after all, we are more
powerful together.

From raubvogel at gmail.com Thu Jun 22 18:11:03 2023
From: raubvogel at gmail.com (Mauricio Tavares)
Date: Thu, 22 Jun 2023 14:11:03 -0400
Subject: DRaaS question

According to https://github.com/openstack/freezer, the last change was
some 4 months ago.

On Thu, Jun 22, 2023 at 5:28 AM Zakhar Kirpichenko wrote:
>
> Yes, I think Trilio took freezer and turned it into a commercial
> product. Unfortunately, the freezer project itself appears to be dead.
> [...]
From fungi at yuggoth.org Thu Jun 22 19:08:36 2023
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 22 Jun 2023 19:08:36 +0000
Subject: DRaaS question
Message-ID: <20230622190836.bhhpnz7aucibzwjn@yuggoth.org>

On 2023-06-22 14:11:03 -0400 (-0400), Mauricio Tavares wrote:
> According to https://github.com/openstack/freezer, the last change was
> some 4 months ago
[...]

Yes, but if you filter out changes which were merely adjustments to
packaging, fixing typos, adding or altering tests, log and exception
tweaks, et cetera, the last substantive bug fix or feature addition I see
was in September of 2020.
-- 
Jeremy Stanley

From melwittt at gmail.com Thu Jun 22 19:12:31 2023
From: melwittt at gmail.com (melanie witt)
Date: Thu, 22 Jun 2023 12:12:31 -0700
Subject: instance console something went wrong, connection is closed |
	Wallaby DCN
Message-ID: <55eff3d6-b852-840e-80e0-26cbb5a58aac@gmail.com>

On 06/22/23 01:08, Swogat Pradhan wrote:
> Hi,
> Please find the log below:
> [root at dcn01-hci-1 libvirt]# cat virtqemud.log
> 2023-06-22 07:40:01.575+0000: 350319: error : virNetSocketReadWire:1804
> : End of file while reading data: Input/output error
> 2023-06-22 07:40:01.575+0000: 350319: error : virNetSocketWriteWire:1844
> : Cannot write data: Broken pipe
>
> I think this is causing the problem of not getting the instance console.

When you say "instance console", are you referring to an interactive
console like VNC, or are you talking about the console log for the
instance?

If it's the interactive console: if you have a console open and then
migrate the instance, that console will not be moved along with the
instance. When a user requests a console, the console proxy service
establishes a connection to the compute host where the instance is
located. The proxy doesn't know when an instance has been moved, though,
so if the instance is moved, the user will need to request a new console
(which will establish a new connection to the new compute host).

Is that the behavior you are seeing?

-melwitt

> On Fri, Jun 2, 2023 at 11:27 AM Swogat Pradhan wrote:
>
> Update:
> If I am performing any activity like migration or resize of an instance
> whose console is accessible, the console becomes inaccessible, giving
> the following error: something went wrong, connection is closed.
>
> There was one other instance whose console was not accessible, and I
> did a shelve and unshelve, and suddenly the instance console became
> accessible.
>
> This is a peculiar behavior and I don't understand where the issue is.
> With regards,
> Swogat Pradhan
>
> On Fri, Jun 2, 2023 at 11:19 AM Swogat Pradhan wrote:
>
> Hi,
> I am creating instances in my DCN site and I am sometimes unable to get
> the console; error: something went wrong, connection is closed.
>
> I have 3 instances now running on my hci02 node, and there is console
> access on one of the VMs; on the other two I am not getting the console.
> I have used the same flavor, same image, and same security group for
> the VMs.
>
> Please suggest what can be done.
>
> With regards,
> Swogat Pradhan

From gmann at ghanshyammann.com Thu Jun 22 19:30:46 2023
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 22 Jun 2023 12:30:46 -0700
Subject: [all][ops][rbac] RBAC discussion summary from Vancouver summit
Message-ID: <188e49679bf.acfa44fe4643.1857367833365437497@ghanshyammann.com>

---- On Thu, 22 Jun 2023 04:02:23 -0700 Slawek Kaplonski wrote ----
> Hi,
>
> On Wednesday, 21 June 2023 at 20:11:56 CEST, Ghanshyam Mann wrote:
> > [...]
> > Reader Access:
> > ============
> > * Project reader is one of the most useful roles in this change.
> >
> > * There is an ask for a global reader, which is nothing but what the
> > system reader was in the past. This can be useful for performing an
> > audit of the complete cloud. One idea is to do this with a special
> > role in Keystone. I will write the spec for it, and we can talk about
> > it there.
>
> Will this somehow impact the existing phases in
> https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html,
> or will this global reader role be considered as the next "phase" after
> the project manager (phase 3) is done?

I will say not to add this in phase-1. It could be good to do before the
service/manager role, but we can discuss this in the meeting (or at the
next cycle PTG). For this cycle there is no change in the plan; let's
complete phase-1 (project personas) for every project.

One thing to note is that this new global reader will not add any
backward incompatibility to the current (phase-1) defaults, because it
will be an additional role in logical OR with the existing defaults (so
no deprecation of the current defaults).
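For illustration, such an OR'd default could look roughly like the
following in a policy.yaml; "global_reader" is only a placeholder role
name, since the real one is still to be defined in the Keystone spec
mentioned above:

    [policy.yaml]
    # hypothetical sketch: the existing project-reader default keeps
    # working, and a cloud-wide audit role is added in logical OR
    "os_compute_api:servers:show": "(role:reader and project_id:%(project_id)s) or role:global_reader"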
-gmann

> --
> Slawek Kaplonski
> Principal Software Engineer
> Red Hat

From alsotoes at gmail.com Thu Jun 22 23:14:49 2023
From: alsotoes at gmail.com (Alvaro Soto)
Date: Thu, 22 Jun 2023 17:14:49 -0600
Subject: DRaaS question

IMHO, about 5 years ago it was a nice-looking project, but with so many
bugs, bypassing the OpenStack API, and not using the backend HA OS stack,
it was a real pain to make it work in the long run. I hope that it's a
better product now, but one thing that hasn't changed for sure is the
cost, maybe because there are not many options out there that can do the
same.

Please, freezer project, don't die on us!!!!

Cheers!

On Thu, Jun 22, 2023 at 1:14 PM Jeremy Stanley wrote:

> Yes, but if you filter out changes which were merely adjustments to
> packaging, fixing typos, adding or altering tests, log and exception
> tweaks, et cetera, the last substantive bug fix or feature addition I
> see was in September of 2020.

-- 
Alvaro Soto

From swogatpradhan22 at gmail.com Fri Jun 23 03:07:50 2023
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Fri, 23 Jun 2023 08:37:50 +0530
Subject: instance console something went wrong, connection is closed |
	Wallaby DCN

Hi Mel,
Thank you for your response.
I am facing issues with the instance console (VNC) in the OpenStack
dashboard. Most of the time I shelve and then unshelve the instance to
get the console, but there are some VMs I created that are not working
even after a shelve/unshelve.

I have used the same director to deploy one central and two edge sites in
total. This issue is happening on a single edge site. Cold migration also
helps in some situations.
With regards,
Swogat Pradhan

On Fri, Jun 23, 2023 at 12:42 AM melanie witt wrote:

> When you say "instance console", are you referring to an interactive
> console like VNC, or are you talking about the console log for the
> instance?
> [...]
> Is that the behavior you are seeing?
>
> -melwitt

From garcetto at gmail.com Fri Jun 23 07:23:46 2023
From: garcetto at gmail.com (garcetto)
Date: Fri, 23 Jun 2023 09:23:46 +0200
Subject: [kolla] help working config cinder and cinder-backup on different
	nfs servers and exports

good morning,
I am having trouble understanding how to configure different NFS shares
on different NFS servers: one for cinder and ANOTHER one for
cinder-backup. These are my files (I have already checked that both NFS
exports are reachable and work fine read-write inside OpenStack; kolla
latest version on Ubuntu 22 hosts, all-in-one deploy):

[globals.yaml]
...
enable_cinder: "yes"
enable_cinder_backend_nfs: "yes"
enable_cinder_backup: "yes"
cinder_backup_driver: "nfs"
cinder_backup_share: "192.168.100.100:/exports/backup"
cinder_backup_mount_options_nfs: "vers=3"

[config/nfs_shares]
192.168.100.100:/exports/backup
192.168.200.200:/exports/volumes

thank you.
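One likely wrinkle in the files above, assuming (not verified against
this deployment) that kolla-ansible feeds config/nfs_shares only to the
cinder-volume NFS backend while cinder-backup mounts the share named in
cinder_backup_share: the backup export should then not be listed in
nfs_shares at all, i.e.:

    [config/nfs_shares]
    # cinder-volume backends only; cinder-backup picks up its share
    # from cinder_backup_share in globals.yaml
    192.168.200.200:/exports/volumes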
From sbauza at redhat.com Fri Jun 23 09:34:59 2023
From: sbauza at redhat.com (Sylvain Bauza)
Date: Fri, 23 Jun 2023 11:34:59 +0200
Subject: [nova] Last spec review day next Tuesday!

Hey folks,

As a reminder [1], we will have a spec review day next Tuesday, June
27th. Sharpen your pens and your Gerrit patches, because it will be the
last spec review day for this cycle, and the spec approval freeze will be
on July 6th [2]! Make sure you have everything uploaded so we can look at
it during this day! After July 6th, no new features will be accepted if
they need a specific spec.

-Sylvain

[1] https://releases.openstack.org/bobcat/schedule.html#b-nova-spec-review-day
[2] https://releases.openstack.org/bobcat/schedule.html#b-nova-spec-freeze

From ralonsoh at redhat.com Fri Jun 23 10:26:11 2023
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Fri, 23 Jun 2023 12:26:11 +0200
Subject: [neutron] Neutron drivers meeting cancelled

Hello Neutrinos:

Due to the lack of agenda [1], today's meeting is cancelled.

Have a nice weekend!

[1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers

From kchamart at redhat.com Fri Jun 23 14:10:41 2023
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Fri, 23 Jun 2023 16:10:41 +0200
Subject: Next minimum libvirt / QEMU versions for "C" release (2024)

Hi, folks!

The last time we incremented the minimum versions for libvirt and QEMU
was for the "Wallaby" release[1]. Although we advertised the
NEXT_MIN_{LIBVIRT,QEMU} versions to be libvirt 7.0.0 and QEMU 5.2.0, we
never actually bumped them. So, for "Bobcat", we'll bump the MIN libvirt
and QEMU versions to:

    MIN_LIBVIRT_VERSION = (7, 0, 0)
    MIN_QEMU_VERSION = (5, 2, 0)

For the upcoming "C" release (2024), we announce the
NEXT_MIN_{LIBVIRT,QEMU} versions to be the following:

    NEXT_MIN_LIBVIRT_VERSION = (8, 0, 0)
    NEXT_MIN_QEMU_VERSION = (6, 2, 0)

Hope that sounds fine. If anyone has concerns or further comments, please
raise them on this thread or add them to the patch below:

    https://review.opendev.org/c/openstack/nova/+/886825
    Pick next min libvirt / QEMU versions for "C" (2024) release

Rationale
---------

We picked the above NEXT_MIN versions based on Ubuntu "Jammy" (22.04).
For comparison, these are the versions shipped in Debian 11 ("Bullseye"),
Ubuntu 22.04 ("Jammy"), and CentOS 9 Stream:

- Ubuntu 22.04 (Jammy):
  - libvirt-daemon: 8.0.0-1ubuntu7.5
  - qemu-system-x86: 6.2+dfsg-2ubuntu6.11

- Debian 11 (Bullseye):
  - libvirt: 7.0.0-3+deb11u2
  - qemu: 5.2+dfsg-11+deb11u2

- CentOS Stream 9:
  - libvirt-daemon-kvm-9.3.0-2.el9.x86_64.rpm
  - qemu-kvm-8.0.0-4.el9.x86_64.rpm

[1] https://opendev.org/openstack/nova/commit/95724bbaef6 -- libvirt:
Bump MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION,
2020-09-28

-- 
/kashyap
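For operators wanting to check where a given hypervisor stands against
these minimums, one quick way (the output shown is an illustrative Jammy
host, not a prescription):

    $ virsh version
    Compiled against library: libvirt 8.0.0
    Using library: libvirt 8.0.0
    Using API: QEMU 8.0.0
    Running hypervisor: QEMU 6.2.0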
From zigo at debian.org Fri Jun 23 14:17:26 2023
From: zigo at debian.org (Thomas Goirand)
Date: Fri, 23 Jun 2023 16:17:26 +0200
Subject: Next minimum libvirt / QEMU versions for "C" release (2024)

Hi,

Repeating what I wrote on IRC.

On 6/23/23 16:10, Kashyap Chamarthy wrote:
> - Debian 11 (Bullseye):
>   - libvirt: 7.0.0-3+deb11u2
>   - qemu: 5.2+dfsg-11+deb11u2

As far as Debian is concerned, the last version of OpenStack supported on
Bullseye was Zed (as with every fourth OpenStack release, and for each
new Debian release, there's a version of Zed that I maintain for both
Bullseye and Bookworm, to offer easier transitions). Everything above
that needs to run on Bookworm. I will *not* do any backports for Debian
11, which is already the past for me. In this case, for Debian 12 (aka
Bookworm) we have:

- Debian 12 (Bookworm):
  - libvirt: 9.0.0
  - qemu: 7.2

So feel free to bump up to that. If needed, I can even do backports
myself for these key components whenever they reach Testing; probably
soon libvirt 9.4.0 (which is already in Experimental, and will probably
soon reach Unstable, then Testing).

I hope this helps,
Cheers,

Thomas Goirand (zigo)

From pdeore at redhat.com Fri Jun 23 14:27:17 2023
From: pdeore at redhat.com (Pranali Deore)
Date: Fri, 23 Jun 2023 19:57:17 +0530
Subject: [Glance] Weekly Meeting Cancelled

Hello,

The Glance weekly meeting for next week, Thursday 29th June, has been
cancelled; instead we are going to have a review party on 28th June.

Please refer to the meeting etherpad [1] for more details.

[1]: https://etherpad.opendev.org/p/glance-team-meeting-agenda

Thanks and regards,
~Pranali

From noonedeadpunk at gmail.com Fri Jun 23 15:08:08 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Fri, 23 Jun 2023 17:08:08 +0200
Subject: Next minimum libvirt / QEMU versions for "C" release (2024)

I wonder, what is the reason for such diversification between B and C
then? As C will be the next SLURP [1] release, A->C upgrades must be
supported, so doing a minimal version bump twice doesn't make too much
sense to me, as long as no platform we have in the PTI is affected. Also,
my assumption was that all version bumps (or deprecations) should ideally
happen during non-SLURP releases. So what's the reason not to do this now
and wait for C?

[1] https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html

On Fri, 23 Jun 2023 at 16:20, Thomas Goirand wrote:
>
> As far as Debian is concerned, the last version of OpenStack supported
> on Bullseye was Zed [...] Everything above that needs to run on
> Bookworm. [...] So feel free to bump up to that.
From garcetto at gmail.com Fri Jun 23 15:48:52 2023
From: garcetto at gmail.com (garcetto)
Date: Fri, 23 Jun 2023 17:48:52 +0200
Subject: [cinder-backup][kolla] mix cinder-volume and cinder-backup types
	(nfs, ceph)

good evening,
is it possible to mix a cinder-volume backend (say ceph) and a
cinder-backup backend (say nfs)? and also do incremental backups?
thank you

From mnasiadka at gmail.com Fri Jun 23 16:55:15 2023
From: mnasiadka at gmail.com (Michał Nasiadka)
Date: Fri, 23 Jun 2023 18:55:15 +0200
Subject: [kolla] User Forum Vancouver

Hello Koalas,

Thank you for attending the User Forum at the Vancouver 2023 Summit.
We had a fruitful conversation around Kolla-Ansible performance and
additional actions that can be taken to improve day-2 operations and the
stability of production environments.

Link to etherpad: https://etherpad.opendev.org/p/YVR23-kolla-user-forum

See you next time! I once again encourage everybody to attend the Kolla
weekly meetings [1] or write messages to the mailing list.

[1]: https://meetings.opendev.org/#Kolla_Team_Meeting

Michal

From jesper at schmitz.computer Fri Jun 23 17:22:01 2023
From: jesper at schmitz.computer (Jesper Schmitz Mouridsen)
Date: Fri, 23 Jun 2023 19:22:01 +0200
Subject: [cinder-backup][kolla] mix cinder-volume and cinder-backup types
	(nfs, ceph)

https://github.com/jsm222/cinder-backup-benji-driver

On 23 June 2023 at 17:48:52 CEST, garcetto wrote:
> good evening,
> is it possible to mix a cinder-volume backend (say ceph) and a
> cinder-backup backend (say nfs)?
> and also do incremental backups?
> thank you

From garcetto at gmail.com Fri Jun 23 17:57:47 2023
From: garcetto at gmail.com (garcetto)
Date: Fri, 23 Jun 2023 19:57:47 +0200
Subject: [cinder-backup][kolla] mix cinder-volume and cinder-backup types
	(nfs, ceph)

thank you, but is there no way to do it simply with cinder-backup?

On Fri, Jun 23, 2023 at 7:22 PM Jesper Schmitz Mouridsen wrote:
> https://github.com/jsm222/cinder-backup-benji-driver
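A sketch of what the mix could look like in kolla-ansible's globals.yaml,
assuming (worth verifying against the kolla docs) that the Ceph volume
backend and the NFS backup driver are independent switches; an
incremental backup is then requested per backup from the CLI:

    [globals.yaml]
    enable_cinder: "yes"
    enable_cinder_backend_ceph: "yes"   # cinder-volume on Ceph RBD
    enable_cinder_backup: "yes"
    cinder_backup_driver: "nfs"         # cinder-backup on NFS
    cinder_backup_share: "192.168.100.100:/exports/backup"
    cinder_backup_mount_options_nfs: "vers=3"

    $ openstack volume backup create --incremental <volume>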
From garcetto at gmail.com Fri Jun 23 18:09:11 2023
From: garcetto at gmail.com (garcetto)
Date: Fri, 23 Jun 2023 20:09:11 +0200
Subject: [kolla] do cinder volumes work on ceph rbd erasure-coded pools?

good evening,
trying to make it work (Ubuntu 22, kolla all-in-one, external Ceph with
both rbd replicated AND erasure-coded pools). With replicated pools,
cinder-volumes works; no way with EC pools. cephx and access have already
been checked, all ok... any idea? is it supported?
thank you

From ces.eduardo98 at gmail.com Fri Jun 23 19:42:09 2023
From: ces.eduardo98 at gmail.com (Carlos Silva)
Date: Fri, 23 Jun 2023 16:42:09 -0300
Subject: [manila] Vancouver PTG and Forum

Hello, Zorillas!

I would like to share with you a brief summary of the PTG and the Manila
forum from last week's OpenInfra Summit.

*Manila Forum:*
In the Manila operator forum session [0], we showed how the Ceph driver
could support clustered NFS. We had feedback from operators in different
areas. Many operators in the room were interested in VirtioFS, so they
can consume native CephFS without having to expose the cluster to user
VMs/containers. The discussion extended into the Nova forum session the
next day.

*PTG:*
Open discussion / operator feedback:
- It would be great if we had this bug [1] fixed.
  - Ceph mon IP changes not reflecting on exported shares
  - We are targeting the fix for this bug at Bobcat-2
- Interest in share backups:
  - CERN is looking for a share backup approach similar to what Cinder
    currently does. We have one blueprint open for it, and the
    implementation is currently in progress [2]
  - This can likely integrate with a tool they are developing internally:
    CBACK
  - CERN will also provide feedback on the implementation of this
    feature and help us shape it.

For more details, please check the Manila PTG etherpad [3].

We joined the Nova team to discuss the Manila work for supporting
VirtioFS. Please check out the notes for the discussion in the Nova
summary [4].

[0] https://etherpad.opendev.org/p/manila-vancouver-forum-2023
[1] https://bugs.launchpad.net/manila/+bug/1996793
[2] https://review.opendev.org/c/openstack/manila/+/343980
[3] https://etherpad.opendev.org/p/manila-openinfra-2023
[4] https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034152.html

Thanks,
carloss

From melwittt at gmail.com Fri Jun 23 20:30:36 2023
From: melwittt at gmail.com (melanie witt)
Date: Fri, 23 Jun 2023 13:30:36 -0700
Subject: instance console something went wrong, connection is closed |
	Wallaby DCN
Message-ID: <3a7830e4-9f09-210c-c1c5-0ee27d8945ff@gmail.com>

On 06/22/23 20:07, Swogat Pradhan wrote:
> Hi Mel,
> Thank you for your response.
> I am facing issues with the instance console (VNC) in the OpenStack
> dashboard. Most of the time I shelve and then unshelve the instance to
> get the console.
> [...]

OK, you didn't mention whether requesting a new console with 'openstack
console url show --vnc <server>' gets you a working console after a
migration (or another event where you see the console stop working). I'm
trying to determine whether the behavior you're seeing is expected or a
bug.

After an instance is moved to a different compute node than the one it
was on when the console was started, that console is not expected to
work anymore, and a new console needs to be started.

Can you give steps for reproducing the issue? Maybe that will provide
more clarity.

-melwitt
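In practice that looks like the following (the server name is a
placeholder):

    # after the instance has been migrated/resized/unshelved, ask the
    # proxy for a fresh console pointing at the new compute host
    $ openstack console url show --vnc my-server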
> On Fri, Jun 23, 2023 at 12:42 AM melanie witt wrote:
>
> > When you say "instance console", are you referring to an interactive
> > console like VNC, or are you talking about the console log for the
> > instance?
> > [...]

From satish.txt at gmail.com Fri Jun 23 22:26:48 2023
From: satish.txt at gmail.com (Satish Patel)
Date: Fri, 23 Jun 2023 18:26:48 -0400
Subject: [glance][nova] nova.exception.ImageNotAuthorized: Not authorized
	for image

Folks,

I am running kolla-ansible on a small environment with Ceph. I am getting
the following error when performing a VM snapshot. This started happening
after I upgraded from Yoga to Zed. Any idea what changed here?
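One hedged reading of the 403 buried in the traceback below ("It's not
allowed to add locations if locations are invisible"): glance-api refuses
location updates when show_multiple_locations is disabled, and nova's
Ceph snapshot path registers the new image by adding an RBD location. A
sketch of the override in kolla-ansible, assuming the usual custom-config
location and with the exact option interplay to be verified against the
Zed glance release notes:

    [/etc/kolla/config/glance.conf]
    [DEFAULT]
    show_multiple_locations = True
    show_image_direct_url = True

followed by a reconfigure of the glance containers (e.g. kolla-ansible
reconfigure --tags glance).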
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return self._client.call(
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 191, in call
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     result = getattr(controller, method)(*args, **kwargs)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 503, in add_location
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     response = self._send_image_update_request(image_id, add_patch)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/utils.py", line 670, in inner
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return RequestIdProxy(wrapped(*args, **kwargs))
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 483, in _send_image_update_request
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     resp, body = self.http_client.patch(url, headers=hdrs,
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystoneauth1/adapter.py", line 407, in patch
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return self.request(url, 'PATCH', **kwargs)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 380, in request
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return self._handle_response(resp)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 120, in _handle_response
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     raise exc.from_response(resp, resp.content)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server glanceclient.exc.HTTPForbidden: HTTP 403 Forbidden: It's not allowed to add locations if locations are invisible.
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/exception_wrapper.py", line 65, in wrapped
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     with excutils.save_and_reraise_exception():
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     self.force_reraise()
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     raise self.value
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/exception_wrapper.py", line 63, in wrapped
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 164, in decorated_function
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     with excutils.save_and_reraise_exception():
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     self.force_reraise()
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     raise self.value
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 155, in decorated_function
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/utils.py", line 1439, in decorated_function
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 211, in decorated_function
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     with excutils.save_and_reraise_exception():
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     self.force_reraise()
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     raise self.value
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 201, in decorated_function
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 231, in decorated_function
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     with excutils.save_and_reraise_exception():
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     self.force_reraise()
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     raise self.value
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 228, in decorated_function
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return function(self, context, image_id, instance,
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 4219, in snapshot_instance
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     self._snapshot_instance(context, image_id, instance,
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 4252, in _snapshot_instance
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     self.driver.snapshot(context, instance, image_id,
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/virt/libvirt/driver.py", line 3116, in snapshot
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     with excutils.save_and_reraise_exception():
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     self.force_reraise()
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     raise self.value
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/virt/libvirt/driver.py", line 3045, in snapshot
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     self._image_api.update(context, image_id, metadata,
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 1243, in update
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return session.update(context, image_id, image_info, data=data,
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 693, in update
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     _reraise_translated_image_exception(image_id)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     raise new_exc.with_traceback(exc_trace)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 691, in update
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     image = self._update_v2(context, sent_service_image_meta, data)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 705, in _update_v2
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     image = self._add_location(context, image_id, location)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 561, in _add_location
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return self._client.call(
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 191, in call
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     result = getattr(controller, method)(*args, **kwargs)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 503, in add_location
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     response = self._send_image_update_request(image_id, add_patch)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/utils.py", line 670, in inner
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return RequestIdProxy(wrapped(*args, **kwargs))
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 483, in _send_image_update_request
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     resp, body = self.http_client.patch(url, headers=hdrs,
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystoneauth1/adapter.py", line 407, in patch
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return self.request(url, 'PATCH', **kwargs)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 380, in request
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return self._handle_response(resp)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 120, in _handle_response
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     raise exc.from_response(resp, resp.content)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server nova.exception.ImageNotAuthorized: Not authorized for image 6d39ead7-e543-4ab6-b54c-78ca16421242.
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server

From gmann at ghanshyammann.com  Fri Jun 23 22:45:33 2023
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 23 Jun 2023 15:45:33 -0700
Subject: [ptl][tc][winstackers] Final call for Winstackers PTL and maintainers
In-Reply-To:
References:
Message-ID: <188ea6f2a53.ee65931886747.6600003948918544663@ghanshyammann.com>

As there is no volunteer to maintain this project, I have proposed the
retirement - https://review.opendev.org/c/openstack/governance/+/886880

-gmann

 ---- On Thu, 13 Apr 2023 07:54:12 -0700 James Page wrote ---
 > Hi All
 >
 > As announced by Lucian last November (see [0]), Cloudbase Solutions are
 > no longer in a position to maintain support for running OpenStack on
 > Windows and have also ceased operation of their 3rd party CI for the
 > Windows support across a number of OpenStack projects.
 > This situation has resulted in the Winstackers project becoming PTL-less
 > for the 2023.2 cycle, with no volunteers responding to the TC's call to
 > fill this role and take this feature in OpenStack forward (see [1]).
 > This is the final call for any maintainers to step forward if this
 > feature is important to them in OpenStack.
 > The last user survey in 2022 indicated that 2% of respondents were
 > running on Hyper-V, so this might be important enough to warrant a
 > commitment from someone operating OpenStack on Windows to maintain these
 > features going forward.
 > Here is a reminder from Lucian's original email of the full list of
 > projects which are impacted in some way:
 > * nova hyper-v driver - in-tree plus out-of-tree compute-hyperv driver
 > * os-win - common Windows library for OpenStack
 > * neutron hyperv ml2 plugin and agent
 > * ovs on Windows and neutron ovs agent support
 > * cinder drivers - SMB and Windows iSCSI
 > * os-brick Windows connectors - iSCSI, FC, SMB, RBD
 > * ceilometer Windows poller
 > * manila Windows driver
 > * glance Windows support
 > * freerdp gateway
 > The lack of 3rd party CI for testing all of this really needs to be
 > addressed as well.
 > If no maintainers are forthcoming between now and the next PTG in June,
 > the TC will need to officially retire the project and start the process
 > of removing support for Windows across the various projects that support
 > this operating system in some way - either directly or through the use
 > of os-win.
 > For clarity, this call refers to the use of the Hyper-V virtualisation
 > driver and associated Windows server components to provide Windows-based
 > OpenStack hypervisors, and does not relate to the ability to run Windows
 > images as guests on OpenStack.
 > Regards
 > James
 > [0] https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031044.html
 > [1] https://lists.openstack.org/pipermail/openstack-discuss/2023-March/032888.html

From smooney at redhat.com  Fri Jun 23 22:58:06 2023
From: smooney at redhat.com (smooney at redhat.com)
Date: Fri, 23 Jun 2023 23:58:06 +0100
Subject: Next minimum libvirt / QEMU versions for "C" release (2024)
In-Reply-To:
References:
Message-ID:

On Fri, 2023-06-23 at 17:08 +0200, Dmitriy Rabotyagov wrote:
> I wonder what is the reason for such diversification between B and C then?
the short version is that nova maintains two version constants for
libvirt/qemu in our code base:
our current minimum and our future minimum. So in the B cycle we are
updating our current minimum to what was our planned future minimum, and
deciding what to set the next announced future minimum to.
> As C will be the next SLURP [1] release, meaning A->C upgrades must be
> supported. So doing a minimal version bump 2 times doesn't make too
> much sense to me, as long as any platform we have in PTI is not
> affected.
> Also my assumption was that all version bumps (or deprecations)
> ideally should happen during non-SLURP releases.
there is actually no agreement on that specifically, but if we were to
adapt the nova policy to the SLURP release cycle we would want to ensure
that when we select a next min it is shipped in the release notes of a
SLURP release, and then yes, the actual bump would happen in the non-SLURP
or next SLURP release. The cadence of when we actually do a min bump tends
to be every 2-4 releases.
>
> So what's the reason not to do this now and wait for C?
we can't without breaking our policy of always providing at least one
release of notice before increasing our min.

the version bump we are doing for Bobcat was meant to be done in Zed and
Antelope but got pushed out for different reasons, mainly gate instability
and review bandwidth.

we may or may not actually do a version bump in C, but by advertising that
our future min versions will move to what is shipped in Ubuntu 22.04, it
means that if we do decide to bump in C we will actually be testing with
the min supported version. Canonical, assuming they hold to their normal
pattern, will release C/2024.1 on 24.04 and also support it on 22.04 for
upgrades. We will want to keep support for the 22.04 libvirt/qemu to
ensure a smooth upgrade, but we can raise our min to align with what is
shipped on 22.04 and what is tested in our CI by doing a second version
bump in CI.

The D/2024.2 cycle should see us move to 24.04 as a new base for CI, and
we can re-evaluate moving to something more modern again at that point.

nova generally supports quite a broad range of libvirt/qemu versions, so
while it helps reduce our tech debt when we increase our min version, in
reality most distros end up shipping a much newer version than we
technically support.
>
> [1] https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html
>
> Fri, 23 Jun 2023 at 16:20, Thomas Goirand :
> >
> > Hi,
> >
> > Repeating what I wrote on IRC.
> >
> > On 6/23/23 16:10, Kashyap Chamarthy wrote:
> > >   - Debian 11 (Bullseye):
> > >     - libvirt: 7.0.0-3+deb11u2
> > >     - qemu: 5.2+dfsg-11+deb11u2
> >
> > As far as Debian is concerned, the last version of OpenStack supported
> > on Bullseye was Zed (like every 4 OpenStack releases, and for each new
> > Debian release, there's a version of Zed that I maintain for both
> > Bullseye and Bookworm, to offer easier transitions). Everything above
> > that needs to run on Bookworm. I will *not* do any backport for Debian
> > 11, which is already the past for me. In this case, for Debian 12 (aka
> > Bookworm) we have:
> >
> >   - Debian 12 (Bookworm):
> >     - libvirt: 9.0.0
> >     - qemu: 7.2
> >
> > So feel free to bump up to that. If needed, I can even do backports
> > myself for these key components, whenever they reach Testing. Probably
> > soon libvirt 9.4.0 (which is already in Experimental, and will
> > probably soon reach Unstable, then Testing).
> >
> > I hope this helps,
> > Cheers,
> >
> > Thomas Goirand (zigo)
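(The two constants smooney describes are the pair nova's libvirt driver
keeps in nova/virt/libvirt/driver.py, MIN_LIBVIRT_VERSION and
NEXT_MIN_LIBVIRT_VERSION, with QEMU equivalents. A minimal sketch of the
pattern, using illustrative version numbers rather than nova's real values:)

    # Sketch of the two-constant pattern described above; the version
    # numbers below are placeholders, not nova's actual values.
    MIN_LIBVIRT_VERSION = (7, 0, 0)       # hard floor for this release
    NEXT_MIN_LIBVIRT_VERSION = (8, 0, 0)  # floor announced for a future release

    def check_host_libvirt(host_version):
        # Tuples compare element-wise, so (7, 2, 0) < (8, 0, 0) holds.
        if host_version < MIN_LIBVIRT_VERSION:
            raise RuntimeError('libvirt %s is below the supported minimum %s'
                               % (host_version, MIN_LIBVIRT_VERSION))
        if host_version < NEXT_MIN_LIBVIRT_VERSION:
            print('WARNING: libvirt %s will be unsupported once the minimum '
                  'is raised to %s' % (host_version, NEXT_MIN_LIBVIRT_VERSION))

(Bumping "current" to the previously announced "future" value is what gives
operators at least one release of warning before a floor actually moves.)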
From noonedeadpunk at gmail.com  Sat Jun 24 00:58:43 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Sat, 24 Jun 2023 02:58:43 +0200
Subject: Next minimum libvirt / QEMU versions for "C" release (2024)
In-Reply-To:
References:
Message-ID:

Thanks for such a detailed explanation, that makes sense indeed for me.

On Sat, Jun 24, 2023, 00:58 wrote:

> On Fri, 2023-06-23 at 17:08 +0200, Dmitriy Rabotyagov wrote:
> > I wonder what is the reason for such diversification between B and C
> > then?
> the short version is that nova maintains two version constants for
> libvirt/qemu in our code base: our current minimum and our future
> minimum. So in the B cycle we are updating our current minimum to what
> was our planned future minimum, and deciding what to set the next
> announced future minimum to.
> > As C will be the next SLURP [1] release, meaning A->C upgrades must be
> > supported. So doing a minimal version bump 2 times doesn't make too
> > much sense to me, as long as any platform we have in PTI is not
> > affected.
> > Also my assumption was that all version bumps (or deprecations)
> > ideally should happen during non-SLURP releases.
> there is actually no agreement on that specifically, but if we were to
> adapt the nova policy to the SLURP release cycle we would want to ensure
> that when we select a next min it is shipped in the release notes of a
> SLURP release, and then yes, the actual bump would happen in the
> non-SLURP or next SLURP release. The cadence of when we actually do a min
> bump tends to be every 2-4 releases.
> >
> > So what's the reason not to do this now and wait for C?
> we can't without breaking our policy of always providing at least one
> release of notice before increasing our min.
>
> the version bump we are doing for Bobcat was meant to be done in Zed and
> Antelope but got pushed out for different reasons, mainly gate
> instability and review bandwidth.
>
> we may or may not actually do a version bump in C, but by advertising
> that our future min versions will move to what is shipped in Ubuntu
> 22.04, it means that if we do decide to bump in C we will actually be
> testing with the min supported version. Canonical, assuming they hold to
> their normal pattern, will release C/2024.1 on 24.04 and also support it
> on 22.04 for upgrades. We will want to keep support for the 22.04
> libvirt/qemu to ensure a smooth upgrade, but we can raise our min to
> align with what is shipped on 22.04 and what is tested in our CI by
> doing a second version bump in CI.
>
> The D/2024.2 cycle should see us move to 24.04 as a new base for CI, and
> we can re-evaluate moving to something more modern again at that point.
>
> nova generally supports quite a broad range of libvirt/qemu versions, so
> while it helps reduce our tech debt when we increase our min version, in
> reality most distros end up shipping a much newer version than we
> technically support.
>
> >
> > [1] https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html
> >
> > Fri, 23 Jun 2023 at 16:20, Thomas Goirand :
> > >
> > > Hi,
> > >
> > > Repeating what I wrote on IRC.
> > > On 6/23/23 16:10, Kashyap Chamarthy wrote:
> > > >   - Debian 11 (Bullseye):
> > > >     - libvirt: 7.0.0-3+deb11u2
> > > >     - qemu: 5.2+dfsg-11+deb11u2
> > >
> > > As far as Debian is concerned, the last version of OpenStack
> > > supported on Bullseye was Zed (like every 4 OpenStack releases, and
> > > for each new Debian release, there's a version of Zed that I
> > > maintain for both Bullseye and Bookworm, to offer easier
> > > transitions). Everything above that needs to run on Bookworm. I will
> > > *not* do any backport for Debian 11, which is already the past for
> > > me. In this case, for Debian 12 (aka Bookworm) we have:
> > >
> > >   - Debian 12 (Bookworm):
> > >     - libvirt: 9.0.0
> > >     - qemu: 7.2
> > >
> > > So feel free to bump up to that. If needed, I can even do backports
> > > myself for these key components, whenever they reach Testing.
> > > Probably soon libvirt 9.4.0 (which is already in Experimental, and
> > > will probably soon reach Unstable, then Testing).
> > >
> > > I hope this helps,
> > > Cheers,
> > >
> > > Thomas Goirand (zigo)

From jamesleong123098 at gmail.com  Sat Jun 24 01:07:38 2023
From: jamesleong123098 at gmail.com (James Leong)
Date: Fri, 23 Jun 2023 20:07:38 -0500
Subject: [kolla-ansible][zun] Deleting container through api
Message-ID:

Hi all,
I am currently playing around with the Zun component. I have deployed the
yoga version of OpenStack using kolla-ansible. Is there an API to delete a
container based on its ID without any user context?

Best,
James
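(Zun's REST API still requires a keystone token for every call, so
"without any user context" in practice means authenticating with an admin
or service credential. A rough sketch of deleting a container by ID with
keystoneauth1; the 'container' service type and the /v1/containers/{id}
path are assumptions to verify against your deployment, and all
credentials and URLs below are placeholders:)

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Placeholder admin credentials; substitute your own cloud's values.
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)

    container_id = '<container-uuid>'
    # DELETE /v1/containers/{id}; the Zun endpoint is resolved from the
    # service catalog via the endpoint_filter.
    resp = sess.delete('/v1/containers/%s' % container_id,
                       endpoint_filter={'service_type': 'container'})
    print(resp.status_code)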
From swogatpradhan22 at gmail.com  Sun Jun 25 10:56:43 2023
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Sun, 25 Jun 2023 16:26:43 +0530
Subject: instance console something went wrong, connection is closed | Wallaby DCN
In-Reply-To:
References: <55eff3d6-b852-840e-80e0-26cbb5a58aac@gmail.com> <3a7830e4-9f09-210c-c1c5-0ee27d8945ff@gmail.com>
Message-ID:

Nova-novncproxy.log:
2023-06-25 10:55:32.487 515433 INFO nova.console.websocketproxy [-] 172.25.201.205 - - [25/Jun/2023 10:55:32] 172.25.201.205: Plain non-SSL (ws://) WebSocket connection
2023-06-25 10:55:32.489 515433 INFO nova.console.websocketproxy [-] 172.25.201.205 - - [25/Jun/2023 10:55:32] 172.25.201.205: Path: '/?token=f985582a-47ae-48f6-bfb3-477dae02c369'
2023-06-25 10:55:33.131 515433 INFO nova.console.websocketproxy [req-ee483ce0-1d1a-446e-8e03-da371daa977d - - - - -] 34368: connect info: ConsoleAuthToken(access_url_base='https://bolt.bdxworld.com:13080/vnc_auto.html',console_type='novnc',created_at=2023-06-25T10:55:31Z,host='172.25.221.228',id=3579,instance_uuid=cdcfe890-4f15-4091-abc1-9fe0cf7eff85,internal_access_path=None,port=5900,token='***',updated_at=None)
2023-06-25 10:55:33.135 515433 INFO nova.console.websocketproxy [req-ee483ce0-1d1a-446e-8e03-da371daa977d - - - - -] 34368: connecting to: 172.25.221.228:5900
2023-06-25 10:55:33.419 515433 INFO nova.console.securityproxy.rfb [req-ee483ce0-1d1a-446e-8e03-da371daa977d - - - - -] Finished security handshake, resuming normal proxy mode using secured socket
2023-06-25 10:55:33.456 515433 INFO nova.console.websocketproxy [req-ee483ce0-1d1a-446e-8e03-da371daa977d - - - - -] handler exception: [Errno 107] Transport endpoint is not connected
2023-06-25 10:55:34.830 515436 INFO nova.console.websocketproxy [-] 172.25.201.205 - - [25/Jun/2023 10:55:34] 172.25.201.205: Plain non-SSL (ws://) WebSocket connection
2023-06-25 10:55:34.832 515436 INFO nova.console.websocketproxy [-] 172.25.201.205 - - [25/Jun/2023 10:55:34] 172.25.201.205: Path: '/?token=f985582a-47ae-48f6-bfb3-477dae02c369'
2023-06-25 10:55:35.458 515436 INFO nova.console.websocketproxy [req-c9f5d4de-a4e3-4040-9fcf-dc42d5f0b0d3 - - - - -] 34369: connect info: ConsoleAuthToken(access_url_base='https://bolt.bdxworld.com:13080/vnc_auto.html',console_type='novnc',created_at=2023-06-25T10:55:31Z,host='172.25.221.228',id=3579,instance_uuid=cdcfe890-4f15-4091-abc1-9fe0cf7eff85,internal_access_path=None,port=5900,token='***',updated_at=None)
2023-06-25 10:55:35.461 515436 INFO nova.console.websocketproxy [req-c9f5d4de-a4e3-4040-9fcf-dc42d5f0b0d3 - - - - -] 34369: connecting to: 172.25.221.228:5900
2023-06-25 10:55:35.753 515436 INFO nova.console.securityproxy.rfb [req-c9f5d4de-a4e3-4040-9fcf-dc42d5f0b0d3 - - - - -] Finished security handshake, resuming normal proxy mode using secured socket
2023-06-25 10:55:35.790 515436 INFO nova.console.websocketproxy [req-c9f5d4de-a4e3-4040-9fcf-dc42d5f0b0d3 - - - - -] handler exception: [Errno 107] Transport endpoint is not connected

On Sun, Jun 25, 2023 at 4:20 PM Swogat Pradhan wrote:

> Hi,
> After doing a console url show after migration, I am still unable to
> access the console.
>
> My site consists of 1 central site and 2 DCN sites. Consoles for central
> and DCN02 are working fine without any issues.
> But when I am creating an instance for DCN01 the console for the instance
> is not coming up (attached image for reference).
>
> Today I created 3 different VMs using the same flavor, image, and
> security group; the instances were created on the same compute host. The
> console was not accessible, so I shelved and unshelved all 3 instances,
> after which I was able to access the console for 2 of those VMs, and I am
> still unable to access the console of the 3rd VM no matter what I do.
>
> With regards,
> Swogat Pradhan
>
> On Sat, Jun 24, 2023 at 2:00 AM melanie witt wrote:
>
>> On 06/22/23 20:07, Swogat Pradhan wrote:
>> > Hi Mel,
>> > Thank you for your response.
>> > I am facing issues with the instance console (vnc) in the openstack
>> > dashboard. Most of the time I shelve the instance and unshelve the
>> > instance to get the console.
>> > But there are some VMs I created which are not working even after
>> > shelve/unshelve.
>> >
>> > I have used the same director to deploy a total of a central and 2
>> > edge sites.
>> > This issue is happening on a single edge site.
>> > Cold migration also helps in some situations.
>>
>> OK, you didn't mention whether requesting a new console 'openstack
>> console url show --vnc ' gets you a working console after a
>> migration (or other event where you see the console stop working). I'm
>> trying to determine whether the behavior you're seeing is expected or a
>> bug. After an instance is moved to a different compute node than the one
>> it was on when the console was started, that console is not expected to
>> work anymore. And a new console needs to be started.
>>
>> Can you give steps for reproducing the issue? Maybe that will provide
>> more clarity.
>>
>> -melwitt
>>
>> > On Fri, Jun 23, 2023 at 12:42 AM melanie witt wrote:
>> >
>> > On 06/22/23 01:08, Swogat Pradhan wrote:
>> > > Hi,
>> > > Please find the below log:
>> > > [root at dcn01-hci-1 libvirt]# cat virtqemud.log
>> > > 2023-06-22 07:40:01.575+0000: 350319: error : virNetSocketReadWire:1804
>> > > : End of file while reading data: Input/output error
>> > > 2023-06-22 07:40:01.575+0000: 350319: error : virNetSocketWriteWire:1844
>> > > : Cannot write data: Broken pipe
>> > >
>> > > I think this is causing the problem of not getting the instance
>> > > console.
>> >
>> > When you say "instance console" are you referring to an interactive
>> > console like VNC or are you talking about the console log for the
>> > instance?
>> >
>> > If it's the interactive console, if you have a console open and then
>> > migrate the instance, that console will not be moved along with the
>> > instance. When a user requests a console, the console proxy service
>> > establishes a connection to the compute host where the instance is
>> > located. The proxy doesn't know when an instance has been moved
>> > though, so if the instance is moved, the user will need to request a
>> > new console (which will establish a new connection to the new compute
>> > host).
>> >
>> > Is that the behavior you are seeing?
>> >
>> > -melwitt
>> >
>> > > On Fri, Jun 2, 2023 at 11:27 AM Swogat Pradhan wrote:
>> > >
>> > > Update:
>> > > If I am performing any activity like migration or resize of an
>> > > instance whose console is accessible, the console becomes
>> > > inaccessible, giving out the following error: something went wrong,
>> > > connection is closed
>> > >
>> > > There was 1 other instance whose console was not accessible, and I
>> > > did a shelve and unshelve and suddenly the instance console became
>> > > accessible.
>> > >
>> > > This is a peculiar behavior and I don't understand where the issue
>> > > is.
>> > >
>> > > With regards,
>> > > Swogat Pradhan
>> > >
>> > > On Fri, Jun 2, 2023 at 11:19 AM Swogat Pradhan wrote:
>> > >
>> > > Hi,
>> > > I am creating instances in my DCN site and I am unable to get
>> > > the console sometimes, error: something went wrong, connection
>> > > is closed
>> > >
>> > > I have 3 instances now running on my hci02 node and there is
>> > > console access on 1 of the VMs; for the other two I am not
>> > > getting the console. I have used the same flavor, same image,
>> > > and same security group for the VMs.
>> > >
>> > > Please suggest what can be done.
>> > >
>> > > With regards,
>> > > Swogat Pradhan
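(As melwitt explains above, a console opened before a migration does not
follow the instance, so a fresh console has to be requested. Besides
'openstack console url show --vnc <server>', the same thing can be done
against Nova's remote-consoles API (microversion 2.6 or later); a hedged
sketch, with placeholder credentials and endpoint:)

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Placeholder credentials; substitute your own cloud's values.
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)

    server_id = 'cdcfe890-4f15-4091-abc1-9fe0cf7eff85'  # instance from the log above
    # POST /servers/{id}/remote-consoles allocates a new console token and
    # returns a fresh noVNC URL pointing at the instance's current host.
    resp = sess.post('/servers/%s/remote-consoles' % server_id,
                     json={'remote_console': {'protocol': 'vnc',
                                              'type': 'novnc'}},
                     endpoint_filter={'service_type': 'compute'},
                     headers={'OpenStack-API-Version': 'compute 2.6'})
    print(resp.json()['remote_console']['url'])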
From bryan at raksmart.com  Sun Jun 25 03:54:51 2023
From: bryan at raksmart.com (Bryan Huang)
Date: Sun, 25 Jun 2023 03:54:51 +0000
Subject: Neutron BGP agent advertisement and l3/openvswitch-agent problems (zed)
Message-ID:

Dear folks,

Recently we met some neutron networking problems in our environment; the
OpenStack version is Zed, deployed with kolla-ansible.

1. The Neutron BGP agent doesn't advertise floating IPs to the BGP peer
when the floating IPs are used for port forwarding, but floating IPs
attached to a VM/container are advertised correctly. So the question is:
is this scenario supported by the BGP agent, and if not, when will it be
supported - is it in the plan?

2. iptables rule restore errors in l3-agent and openvswitch-agent (a bug
was reported in launchpad: https://bugs.launchpad.net/neutron/+bug/2024976)

Openstack version: zed/stable
OS version: Ubuntu 22.04.2 LTS
Kernel version: 5.15.0-75-generic #82-Ubuntu
Deployment: kolla-ansible

openvswitch-agent log:
2023-06-23 15:54:58.616 7 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None req-4440bce1-8c07-4243-ac1b-2566b406a30a - - - - - -] Error while processing VIF ports: neutron_lib.exceptions.ProcessExecutionError: Exit code: 2; Cmd: ['iptables-restore', '-n']; Stdin:
# Generated by iptables_manager
*filter
:FORWARD - [0:0]
:INPUT - [0:0]
:OUTPUT - [0:0]
:neutron-filter-top - [0:0]
:neutron-openvswi-FORWARD - [0:0]
:neutron-openvswi-INPUT - [0:0]
:neutron-openvswi-OUTPUT - [0:0]
:neutron-openvswi-local - [0:0]
:neutron-openvswi-sg-chain - [0:0]
:neutron-openvswi-sg-fallback - [0:0]
-I FORWARD 1 -j neutron-filter-top
-I FORWARD 2 -j neutron-openvswi-FORWARD
-I INPUT 1 -j neutron-openvswi-INPUT
-I OUTPUT 1 -j neutron-filter-top
-I OUTPUT 2 -j neutron-openvswi-OUTPUT
-I neutron-filter-top 1 -j neutron-openvswi-local
-I neutron-openvswi-FORWARD 1 -m physdev --physdev-out tap2fcacaf9-9d --physdev-is-bridged -m comment --comment "Accept all packets when port is trusted." -j ACCEPT
-I neutron-openvswi-FORWARD 2 -m physdev --physdev-in tap2fcacaf9-9d --physdev-is-bridged -m comment --comment "Accept all packets when port is trusted." -j ACCEPT
-I neutron-openvswi-FORWARD 3 -m physdev --physdev-out tap8c64cce3-ea --physdev-is-bridged -m comment --comment "Accept all packets when port is trusted." -j ACCEPT
-I neutron-openvswi-FORWARD 4 -m physdev --physdev-in tap8c64cce3-ea --physdev-is-bridged -m comment --comment "Accept all packets when port is trusted." -j ACCEPT
-I neutron-openvswi-sg-chain 1 -j ACCEPT
-I neutron-openvswi-sg-fallback 1 -m comment --comment "Default drop rule for unmatched traffic." -j DROP
COMMIT
# Completed by iptables_manager
# Generated by iptables_manager
*raw
:OUTPUT - [0:0]
:PREROUTING - [0:0]
:neutron-openvswi-OUTPUT - [0:0]
:neutron-openvswi-PREROUTING - [0:0]
-I OUTPUT 1 -j neutron-openvswi-OUTPUT
-I PREROUTING 1 -j neutron-openvswi-PREROUTING
COMMIT
# Completed by iptables_manager
; Stdout: ; Stderr: iptables-restore v1.8.7 (nf_tables): Couldn't load match `physdev':No such file or directory
Error occurred at line: 19
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
l3-agent log:
2023-06-23 16:15:49.545 33 ERROR neutron.agent.linux.iptables_manager [-] Failure applying iptables rules: neutron_lib.exceptions.ProcessExecutionError: Exit code: 2; Cmd: ['ip', 'netns', 'exec', 'qrouter-0f0e60d0-bf51-4361-901b-4b998201b44b', 'iptables-restore', '-n']; Stdin:
# Generated by iptables_manager
*filter
:FORWARD - [0:0]
:INPUT - [0:0]
:OUTPUT - [0:0]
:neutron-filter-top - [0:0]
:neutron-l3-agent-FORWARD - [0:0]
:neutron-l3-agent-INPUT - [0:0]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-local - [0:0]
:neutron-l3-agent-scope - [0:0]
-I FORWARD 1 -j neutron-filter-top
-I FORWARD 2 -j neutron-l3-agent-FORWARD
-I INPUT 1 -j neutron-l3-agent-INPUT
-I OUTPUT 1 -j neutron-filter-top
-I OUTPUT 2 -j neutron-l3-agent-OUTPUT
-I neutron-filter-top 1 -j neutron-l3-agent-local
-I neutron-l3-agent-FORWARD 1 -j neutron-l3-agent-scope
-I neutron-l3-agent-scope 1 -m mark --mark 0x1/0xffff -j DROP
COMMIT
# Completed by iptables_manager
# Generated by iptables_manager
*mangle
:FORWARD - [0:0]
:INPUT - [0:0]
:OUTPUT - [0:0]
:POSTROUTING - [0:0]
:PREROUTING - [0:0]
:neutron-l3-agent-FORWARD - [0:0]
:neutron-l3-agent-INPUT - [0:0]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-POSTROUTING - [0:0]
:neutron-l3-agent-PREROUTING - [0:0]
:neutron-l3-agent-float-snat - [0:0]
:neutron-l3-agent-floatingip - [0:0]
:neutron-l3-agent-mark - [0:0]
:neutron-l3-agent-scope - [0:0]
-I FORWARD 1 -j neutron-l3-agent-FORWARD
-I INPUT 1 -j neutron-l3-agent-INPUT
-I OUTPUT 1 -j neutron-l3-agent-OUTPUT
-I POSTROUTING 1 -j neutron-l3-agent-POSTROUTING
-I PREROUTING 1 -j neutron-l3-agent-PREROUTING
-I neutron-l3-agent-PREROUTING 1 -j neutron-l3-agent-mark
-I neutron-l3-agent-PREROUTING 2 -j neutron-l3-agent-scope
-I neutron-l3-agent-PREROUTING 3 -m connmark ! --mark 0x0/0xffff0000 -j CONNMARK --restore-mark --nfmask 0xffff0000 --ctmask 0xffff0000
-I neutron-l3-agent-PREROUTING 4 -j neutron-l3-agent-floatingip
-I neutron-l3-agent-PREROUTING 5 -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j MARK --set-xmark 0x1/0xffff
-I neutron-l3-agent-float-snat 1 -m connmark --mark 0x0/0xffff0000 -j CONNMARK --save-mark --nfmask 0xffff0000 --ctmask 0xffff0000
COMMIT
# Completed by iptables_manager
# Generated by iptables_manager
*nat
:OUTPUT - [0:0]
:POSTROUTING - [0:0]
:PREROUTING - [0:0]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-POSTROUTING - [0:0]
:neutron-l3-agent-PREROUTING - [0:0]
:neutron-l3-agent-float-snat - [0:0]
:neutron-l3-agent-snat - [0:0]
:neutron-postrouting-bottom - [0:0]
-I OUTPUT 1 -j neutron-l3-agent-OUTPUT
-I POSTROUTING 1 -j neutron-l3-agent-POSTROUTING
-I POSTROUTING 2 -j neutron-postrouting-bottom
-I PREROUTING 1 -j neutron-l3-agent-PREROUTING
-I neutron-l3-agent-POSTROUTING 1 ! -o rfp-0f0e60d0-b -m conntrack ! --ctstate DNAT -j ACCEPT
-I neutron-l3-agent-PREROUTING 1 -d 137.175.31.207/32 -i rfp-0f0e60d0-b -j DNAT --to-destination 10.10.0.246
-I neutron-l3-agent-float-snat 1 -s 10.10.0.246/32 -j SNAT --to-source 137.175.31.207 --random-fully
-I neutron-l3-agent-snat 1 -j neutron-l3-agent-float-snat
-I neutron-postrouting-bottom 1 -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
COMMIT
# Completed by iptables_manager
# Generated by iptables_manager
*raw
:OUTPUT - [0:0]
:PREROUTING - [0:0]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-PREROUTING - [0:0]
-I OUTPUT 1 -j neutron-l3-agent-OUTPUT
-I PREROUTING 1 -j neutron-l3-agent-PREROUTING
COMMIT
# Completed by iptables_manager
; Stdout: ; Stderr: iptables-restore v1.8.7 (nf_tables): Couldn't load match `mark':No such file or directory
Error occurred at line: 19

And we checked that the x_tables kernel module is loaded:
# lsmod | grep x_tables
x_tables 53248 12 xt_conntrack,nft_compat,xt_tcpudp,xt_physdev,xt_nat,xt_comment,ip6_tables,xt_connmark,xt_CT,ip_tables,xt_REDIRECT,xt_mark
(neutron-l3-agent)[neutron at compute06 usr]$ find . -name "*mark.so"
./lib/x86_64-linux-gnu/xtables/libxt_connmark.so
./lib/x86_64-linux-gnu/xtables/libxt_mark.so
./lib/x86_64-linux-gnu/xtables/libebt_mark.so
(neutron-l3-agent)[neutron at compute06 usr]$ find . -name "*physdev.so"
./lib/x86_64-linux-gnu/xtables/libxt_physdev.so

Has anyone met these problems before, and what is the solution to resolve
them? Thanks in advance.

Sincerely,
Bryan

From 2292613444 at qq.com  Sun Jun 25 02:19:21 2023
From: 2292613444 at qq.com (无数的星球)
Date: Sun, 25 Jun 2023 10:19:21 +0800
Subject: the nova errors (version queens)
Message-ID:

I reported an error on the compute node after building nova:

2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager [req-a85e1436-e5a4-4395-ba5c-899d3b21b7bc - - - - -] Error updating resources for node compute.: KeyError: 'allocations'
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager Traceback (most recent call last):
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7578, in update_available_resource_for_node
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 720, in update_available_resource
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager     return f(*args, **kwargs)
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 771, in _update_available_resource
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager     context, self.compute_nodes[nodename], migrations)
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 1277, in _remove_deleted_instances_allocations
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager     context, cn.uuid) or {}
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 64, in wrapper
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager     return f(self, *a, **k)
2023-06-23 10:04:25.334 15207 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 1785, in get_allocations_for_resource_provider

After this error occurs, once neutron is built and I try to create a
virtual machine, the creation fails and the controller reports
NoValidHost: No valid host was found.

2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager Traceback (most recent call last):
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1165, in schedule_and_build_instances
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     instance_uuids, return_alternates=True)
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 760, in _schedule_instances
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     return_alternates=return_alternates)
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 793, in wrapped
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     return func(*args, **kwargs)
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 53, in select_destinations
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     instance_uuids, return_objects, return_alternates)
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     return getattr(self.instance, __name)(*args, **kwargs)
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     instance_uuids, return_objects, return_alternates)
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 158, in select_destinations
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     return cctxt.call(ctxt, 'select_destinations', **msg_args)
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 174, in call
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     retry=self.retry)
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 131, in _send
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     timeout=timeout, retry=retry)
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 625, in send
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     retry=retry)
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     raise result
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager NoValidHost_Remote: No valid host was found.
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager Traceback (most recent call last):
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 229, in inner
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     return func(*args, **kwargs)
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 139, in select_destinations
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager     raise exception.NoValidHost(reason="")
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager NoValidHost: No valid host was found.
2023-06-22 10:57:15.649 3732 ERROR nova.conductor.manager

Note: As the logs were obtained at different times, please ignore the
timing issue. The following lists my nova configuration files; please
review them. Thank you. The system is CentOS 7 (2009).

When the compute error (Error updating resources for node compute.:
KeyError: 'allocations') occurs, the controller reports:

2023-06-24 10:45:51.133 96992 INFO nova.osapi_compute.wsgi.server [-] 192.168.80.20 "GET /resource_providers/6304d158-7a38-4a61-b6f6-f16c22e27b53/allocations HTTP/1.1" status: 300 len: 720 time: 0.0006239

For placement, the fix applied was: Listen 8778
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nova(compute).conf
Type: application/octet-stream
Size: 1881 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nova(controller).conf
Type: application/octet-stream
Size: 2393 bytes
Desc: not available
URL:
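(An HTTP 300 on a /resource_providers/... URL usually means the request
reached placement's version-discovery document rather than the API itself,
which points at an endpoint or port misconfiguration; the "Listen 8778"
fix mentioned above is consistent with that. A quick hedged check from
Python, with placeholder credentials, to confirm placement answers
properly:)

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Placeholder credentials; substitute your own cloud's values.
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)

    resp = sess.get('/resource_providers',
                    endpoint_filter={'service_type': 'placement'})
    # A healthy placement returns 200 with a JSON body; a 300 here means
    # the request is hitting version discovery, so re-check the keystone
    # endpoint and the httpd vhost (e.g. the Listen 8778 stanza).
    print(resp.status_code, resp.text[:200])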
From swogatpradhan22 at gmail.com  Sun Jun 25 10:50:13 2023
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Sun, 25 Jun 2023 16:20:13 +0530
Subject: instance console something went wrong, connection is closed | Wallaby DCN
In-Reply-To: <3a7830e4-9f09-210c-c1c5-0ee27d8945ff@gmail.com>
References: <55eff3d6-b852-840e-80e0-26cbb5a58aac@gmail.com> <3a7830e4-9f09-210c-c1c5-0ee27d8945ff@gmail.com>
Message-ID:

Hi,
After doing a console url show after migration, I am still unable to
access the console.

My site consists of 1 central site and 2 DCN sites. Consoles for central
and DCN02 are working fine without any issues.
But when I am creating an instance for DCN01 the console for the instance
is not coming up (attached image for reference).

Today I created 3 different VMs using the same flavor, image, and security
group; the instances were created on the same compute host. The console
was not accessible, so I shelved and unshelved all 3 instances, after
which I was able to access the console for 2 of those VMs, and I am still
unable to access the console of the 3rd VM no matter what I do.

With regards,
Swogat Pradhan

On Sat, Jun 24, 2023 at 2:00 AM melanie witt wrote:

> On 06/22/23 20:07, Swogat Pradhan wrote:
> > Hi Mel,
> > Thank you for your response.
> > I am facing issues with the instance console (vnc) in the openstack
> > dashboard. Most of the time I shelve the instance and unshelve the
> > instance to get the console.
> > But there are some VMs I created which are not working even after
> > shelve/unshelve.
> >
> > I have used the same director to deploy a total of a central and 2
> > edge sites.
> > This issue is happening on a single edge site.
> > Cold migration also helps in some situations.
>
> OK, you didn't mention whether requesting a new console 'openstack
> console url show --vnc ' gets you a working console after a
> migration (or other event where you see the console stop working). I'm
> trying to determine whether the behavior you're seeing is expected or a
> bug. After an instance is moved to a different compute node than the one
> it was on when the console was started, that console is not expected to
> work anymore. And a new console needs to be started.
>
> Can you give steps for reproducing the issue? Maybe that will provide
> more clarity.
>
> -melwitt
>
> > On Fri, Jun 23, 2023 at 12:42 AM melanie witt wrote:
> >
> > On 06/22/23 01:08, Swogat Pradhan wrote:
> > > Hi,
> > > Please find the below log:
> > > [root at dcn01-hci-1 libvirt]# cat virtqemud.log
> > > 2023-06-22 07:40:01.575+0000: 350319: error : virNetSocketReadWire:1804
> > > : End of file while reading data: Input/output error
> > > 2023-06-22 07:40:01.575+0000: 350319: error : virNetSocketWriteWire:1844
> > > : Cannot write data: Broken pipe
> > >
> > > I think this is causing the problem of not getting the instance
> > > console.
> >
> > When you say "instance console" are you referring to an interactive
> > console like VNC or are you talking about the console log for the
> > instance?
> >
> > If it's the interactive console, if you have a console open and then
> > migrate the instance, that console will not be moved along with the
> > instance. When a user requests a console, the console proxy service
> > establishes a connection to the compute host where the instance is
> > located. The proxy doesn't know when an instance has been moved
> > though, so if the instance is moved, the user will need to request a
> > new console (which will establish a new connection to the new compute
> > host).
> >
> > Is that the behavior you are seeing?
> >
> > -melwitt
> >
> > > On Fri, Jun 2, 2023 at 11:27 AM Swogat Pradhan wrote:
> > >
> > > Update:
> > > If I am performing any activity like migration or resize of an
> > > instance whose console is accessible, the console becomes
> > > inaccessible, giving out the following error: something went wrong,
> > > connection is closed
> > >
> > > There was 1 other instance whose console was not accessible, and I
> > > did a shelve and unshelve and suddenly the instance console became
> > > accessible.
> > >
> > > This is a peculiar behavior and I don't understand where the issue
> > > is.
> > >
> > > With regards,
> > > Swogat Pradhan
> > >
> > > On Fri, Jun 2, 2023 at 11:19 AM Swogat Pradhan wrote:
> > >
> > > Hi,
> > > I am creating instances in my DCN site and I am unable to get
> > > the console sometimes, error: something went wrong, connection
> > > is closed
> > >
> > > I have 3 instances now running on my hci02 node and there is
> > > console access on 1 of the VMs; for the other two I am not
> > > getting the console. I have used the same flavor, same image,
> > > and same security group for the VMs.
> > >
> > > Please suggest what can be done.
> > >
> > > With regards,
> > > Swogat Pradhan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: console.png
Type: image/png
Size: 44778 bytes
Desc: not available
URL:

From satish.txt at gmail.com  Mon Jun 26 03:02:39 2023
From: satish.txt at gmail.com (Satish Patel)
Date: Sun, 25 Jun 2023 23:02:39 -0400
Subject: [glance][nova] nova.exception.ImageNotAuthorized: Not authorized for image
In-Reply-To:
References:
Message-ID:

Folks,

The following option fixed my problem, but it has some security
implications:

show_multiple_locations=True
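(For reference, with kolla-ansible this kind of glance override is
typically dropped into the custom config overlay rather than edited inside
the container; a hedged sketch, assuming the default node_custom_config
layout:)

    # /etc/kolla/config/glance.conf  (merged into glance-api.conf on the
    # next "kolla-ansible reconfigure"; the path assumes the default
    # node_custom_config location)
    [DEFAULT]
    show_multiple_locations = True

(The security caveat is real: exposing image locations lets API users see
and modify backend location URLs on images, which is why the option is
disabled by default and should be paired with restrictive policy.)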
On Fri, Jun 23, 2023 at 6:26 PM Satish Patel wrote:

> Folks,
>
> I am running kolla-ansible on small environments with ceph. I am getting
> the following error when performing a VM snapshot.
>
> This started happening after I upgraded from yoga to Zed. Any idea what
> changed here?
>
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return self._client.call(
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 191, in call
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     result = getattr(controller, method)(*args, **kwargs)
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 503, in add_location
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     response = self._send_image_update_request(image_id, add_patch)
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/utils.py", line 670, in inner
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return RequestIdProxy(wrapped(*args, **kwargs))
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 483, in _send_image_update_request
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     resp, body = self.http_client.patch(url, headers=hdrs,
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystoneauth1/adapter.py", line 407, in patch
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return self.request(url, 'PATCH', **kwargs)
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 380, in request
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return self._handle_response(resp)
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 120, in _handle_response
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     raise exc.from_response(resp, resp.content)
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server glanceclient.exc.HTTPForbidden: HTTP 403 Forbidden: It's not allowed to add locations if locations are invisible.
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server During handling > of the above exception, another exception occurred: > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server Traceback (most > recent call last): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/server.py", > line 165, in _process_incoming > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server res = > self.dispatcher.dispatch(message) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/dispatcher.py", > line 309, in dispatch > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return > self._do_dispatch(endpoint, method, ctxt, args) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/dispatcher.py", > line 229, in _do_dispatch > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server result = > func(ctxt, **new_args) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/exception_wrapper.py", > line 65, in wrapped > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server with > excutils.save_and_reraise_exception(): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", > line 227, in __exit__ > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", > line 200, in force_reraise > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server raise > self.value > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/exception_wrapper.py", > line 63, in wrapped > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return > f(self, context, *args, **kw) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", > line 164, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server with > excutils.save_and_reraise_exception(): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", > line 227, in __exit__ > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", > line 200, in force_reraise > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server raise > self.value > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", > line 155, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return > function(self, context, *args, **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/utils.py", > line 1439, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return > 
function(self, context, *args, **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", > line 211, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server with > excutils.save_and_reraise_exception(): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", > line 227, in __exit__ > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", > line 200, in force_reraise > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server raise > self.value > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", > line 201, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return > function(self, context, *args, **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", > line 231, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server with > excutils.save_and_reraise_exception(): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", > line 227, in __exit__ > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", > line 200, in force_reraise > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server raise > self.value > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", > line 228, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return > function(self, context, image_id, instance, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", > line 4219, in snapshot_instance > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self._snapshot_instance(context, image_id, instance, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", > line 4252, in _snapshot_instance > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.driver.snapshot(context, instance, image_id, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/virt/libvirt/driver.py", > line 3116, in snapshot > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server with > excutils.save_and_reraise_exception(): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", > line 227, in __exit__ > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", > line 200, in force_reraise > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server raise > self.value > 2023-06-23 
22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/virt/libvirt/driver.py", > line 3045, in snapshot > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self._image_api.update(context, image_id, metadata, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", > line 1243, in update > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return > session.update(context, image_id, image_info, data=data, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", > line 693, in update > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > _reraise_translated_image_exception(image_id) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", > line 1031, in _reraise_translated_image_exception > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server raise > new_exc.with_traceback(exc_trace) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", > line 691, in update > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server image = > self._update_v2(context, sent_service_image_meta, data) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", > line 705, in _update_v2 > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server image = > self._add_location(context, image_id, location) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", > line 561, in _add_location > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return > self._client.call( > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", > line 191, in call > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server result = > getattr(controller, method)(*args, **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", > line 503, in add_location > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server response = > self._send_image_update_request(image_id, add_patch) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/utils.py", > line 670, in inner > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return > RequestIdProxy(wrapped(*args, **kwargs)) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", > line 483, in _send_image_update_request > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server resp, body = > self.http_client.patch(url, headers=hdrs, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/keystoneauth1/adapter.py", > line 407, in patch > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return > self.request(url, 'PATCH', **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", > line 380, in request 
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     return
> self._handle_response(resp)
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File
> "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py",
> line 120, in _handle_response
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server     raise
> exc.from_response(resp, resp.content)
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server
> nova.exception.ImageNotAuthorized: Not authorized for image
> 6d39ead7-e543-4ab6-b54c-78ca16421242.
> 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server
>

From thierry at openstack.org  Mon Jun 26 06:51:25 2023
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 26 Jun 2023 08:51:25 +0200
Subject: [largescale-sig] Next meeting: June 28, 8utc
Message-ID: <6304dd67-9a45-8c10-deec-b9bde0792847@openstack.org>

Hi everyone,

The Large Scale SIG will be meeting for the last time before the
Northern hemisphere summer this Wednesday in #openstack-operators on
OFTC IRC, at 8 UTC, our APAC+EU-friendly time. I will be chairing.

You can double-check how that UTC time translates locally at:
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20230628T08

Feel free to add topics to the agenda:
https://etherpad.opendev.org/p/large-scale-sig-meeting

Regards,

-- 
Thierry Carrez

From zigo at debian.org  Mon Jun 26 07:38:06 2023
From: zigo at debian.org (Thomas Goirand)
Date: Mon, 26 Jun 2023 09:38:06 +0200
Subject: [manila] Vancouver PTG and Forum
In-Reply-To: 
References: 
Message-ID: <498c82b7-1fce-eb37-b8e1-1a3a20d67ebf@debian.org>

On 6/23/23 21:42, Carlos Silva wrote:
> Many operators in the room were interested in VirtIOFS so they
> can consume Native CephFS without having to expose the cluster to user
> VMs/containers.

Hi,

FYI, I missed the session, though I am also very interested in this
driver... Is there a roadmap for it somewhere?

Cheers,

Thomas Goirand (zigo)

From tomas at leypold.cz  Mon Jun 26 09:27:28 2023
From: tomas at leypold.cz (Tomas Leypold)
Date: Mon, 26 Jun 2023 11:27:28 +0200
Subject: Kolla iser (RDMA) support
Message-ID: <3d-64995a00-1-27151a00@243891857>

Hi,

I have a kolla-ansible deployment, and I am trying to switch from the LVM
iSCSI protocol to iSER (RDMA).

On the host machine, I have installed rdma-core, which enabled kernel
module loading for the NIC card we are using, the "BCM57414 NetXtreme-E
10Gb/25Gb RDMA Ethernet Controller". I have verified the function of RDMA
with ib_write_bw, which worked fine.

The first problem I encountered with Kolla is the absence of a necessary
package, ibverbs-providers, which includes the userspace drivers for RDMA.
Without it, cinder_volume will not attach a volume. I did an image rebuild
with the ibverbs-providers package included. I will file an issue for this
later.

Then I just changed target_protocol = iser in the cinder_volume config
file (the backend section is sketched below for reference).

The current issue I am facing is that the instance is stuck at the
"Spawning" phase.
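The relevant backend section now looks roughly like this. This is a sketch
only: apart from target_protocol, the driver, helper, volume group and IP
values below are illustrative placeholders, assuming the stock LVM driver
with the LIO target helper:

[lvm-1]
# sketch: everything except target_protocol is a placeholder/assumption
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_helper = lioadm
target_protocol = iser
target_ip_address = 10.47.1.25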
In the nova_compute log, I see the following lines: 2023-06-26 10:57:36.204 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] ==> connect_volume: call "{'self': , 'connection_properties': {'target_discovered': False, 'target_portal': '10.47.1.25:3260', 'target_iqn': 'iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26', 'target_lun': 1, 'volume_id': 'bc382b4c-dac4-4ca6-a260-0d450d6c4e26', 'auth_method': 'CHAP', 'auth_username': '8n7WyZ2wZFNd7B33rfYw', 'auth_password': '***', 'encrypted': False, 'qos_specs': None, 'access_mode': 'rw': False}}" trace_logging_wrapper /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/utils.py:174 2023-06-26 10:57:36.208 7 DEBUG os_brick.initiator.connectors.base [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Acquiring lock "connect_to_iscsi_portal-10.47.1.25:3260-iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26" by "os_brick.initiator.connectors.iscsi.ISCSIConnector._connect_to_iscsi_portal_unsafe" inner /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/base.py:68 2023-06-26 10:57:36.208 7 DEBUG os_brick.initiator.connectors.base [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Lock "connect_to_iscsi_portal-10.47.1.25:3260-iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26" acquired by "os_brick.initiator.connectors.iscsi.ISCSIConnector._connect_to_iscsi_portal_unsafe" :: waited 0.001s inner /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/base.py:73 2023-06-26 10:57:36.209 7 INFO os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Trying to connect to iSCSI portal 10.47.1.25:3260 2023-06-26 10:57:36.219 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm (): stdout= stderr=iscsiadm: No records found _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:57:36.227 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--interface', 'iser', '--op', 'new'): stdout=New iSCSI node [iser:[hw=,ip=,net_if=,iscsi_if=iser] 10.47.1.25,3260,-1 iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26] added stderr= _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:57:36.234 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--op', 'update', '-n', 'node.session.scan', '-v', 'manual'): stdout= stderr= _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:57:36.241 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default 
default] iscsiadm ('--op', 'update', '-n', 'node.session.auth.authmethod', '-v', 'CHAP'): stdout= stderr= _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:57:36.247 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--op', 'update', '-n', 'node.session.auth.username', '-v', '8n7WyZ2wZFNd7B33rfYw'): stdout= stderr= _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:57:36.254 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--op', 'update', '-n', 'node.session.auth.password', '-v', '***'): stdout= stderr= _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:57:36.260 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('-m', 'session'): stdout= stderr=iscsiadm: No active sessions. _run_iscsiadm_bare /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1182 2023-06-26 10:57:36.261 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsi session list stdout= stderr=iscsiadm: No active sessions. _run_iscsi_session /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1171 2023-06-26 10:57:36.261 7 WARNING os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions. 2 min later it timeouts: 2023-06-26 10:59:38.377 7 WARNING os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Failed to login iSCSI target iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26 on portal 10.47.1.25:3260 (exit code 8).: oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 2023-06-26 10:59:38.378 7 DEBUG os_brick.initiator.connectors.base [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Lock "connect_to_iscsi_portal-10.47.1.25:3260-iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26" "released" by "os_brick.initiator.connectors.iscsi.ISCSIConnector._connect_to_iscsi_portal_unsafe" :: held 122.170s inner /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/base.py:87 2023-06-26 10:59:38.378 7 WARNING os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Failed to connect to iSCSI portal 10.47.1.25:3260. 
2023-06-26 10:59:38.378 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Getting connected devices for (ips,iqns,luns)=[('10.47.1.25:3260', 'iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26', 1)] _get_connection_devices /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:819 2023-06-26 10:59:38.394 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('-m', 'session'): stdout= stderr=iscsiadm: No active sessions. _run_iscsiadm_bare /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1182 2023-06-26 10:59:38.394 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsi session list stdout= stderr=iscsiadm: No active sessions. _run_iscsi_session /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1171 2023-06-26 10:59:38.394 7 WARNING os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions. 2023-06-26 10:59:38.394 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Resulting device map defaultdict(. at 0x7f4dd43230a0>, {('10.47.1.25:3260', 'iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26'): (set(), set())}) _get_connection_devices /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:852 2023-06-26 10:59:38.394 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Disconnecting from: [('10.47.1.25:3260', 'iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26')] _disconnect_connection /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1160 2023-06-26 10:59:38.401 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--op', 'update', '-n', 'node.startup', '-v', 'manual'): stdout= stderr= _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:59:38.407 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--logout',): stdout= stderr=iscsiadm: No matching sessions found _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:59:38.413 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--op', 'delete'): stdout= stderr= _run_iscsiadm 
/var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:59:38.414 7 DEBUG os_brick.utils [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Finished call to 'os_brick.initiator.connectors.iscsi.ISCSIConnector._connect_single_volume' after 122.207(s), this was the 1st time calling it. log_it /var/lib/kolla/venv/lib/python3.10/site-packages/tenacity/after.py:30 2023-06-26 10:59:38.415 7 DEBUG os_brick.utils [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Retrying os_brick.initiator.connectors.iscsi.ISCSIConnector._connect_single_volume in 1.0 seconds as it raised VolumeDeviceNotFound: Volume device not found at .. log_it /var/lib/kolla/venv/lib/python3.10/site-packages/tenacity/before_sleep.py:40 2023-06-26 10:59:39.393 7 DEBUG oslo_service.periodic_task [None req-a325bda7-8910-43e3-90a9-6ae0dc8ae6a7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /var/lib/kolla/venv/lib/python3.10/site-packages/oslo_service/periodic_task.py:210 2023-06-26 10:59:39.416 7 WARNING os_brick.initiator.connectors.base [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Service needs to call os_brick.setup() before connecting volumes, if it doesn't it will break on the next release 2023-06-26 10:59:39.416 7 DEBUG os_brick.initiator.connectors.base [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Acquiring lock "connect_to_iscsi_portal-10.47.1.25:3260-iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26" by "os_brick.initiator.connectors.iscsi.ISCSIConnector._connect_to_iscsi_portal_unsafe" inner /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/base.py:68 2023-06-26 10:59:39.417 7 DEBUG os_brick.initiator.connectors.base [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Lock "connect_to_iscsi_portal-10.47.1.25:3260-iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26" acquired by "os_brick.initiator.connectors.iscsi.ISCSIConnector._connect_to_iscsi_portal_unsafe" :: waited 0.001s inner /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/base.py:73 2023-06-26 10:59:39.418 7 INFO os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] Trying to connect to iSCSI portal 10.47.1.25:3260 2023-06-26 10:59:39.426 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm (): stdout= stderr=iscsiadm: No records found _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:59:39.435 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--interface', 'iser', '--op', 'new'): stdout=New iSCSI node [iser:[hw=,ip=,net_if=,iscsi_if=iser] 10.47.1.25,3260,-1 
iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26] added stderr= _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:59:39.441 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--op', 'update', '-n', 'node.session.scan', '-v', 'manual'): stdout= stderr= _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:59:39.447 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--op', 'update', '-n', 'node.session.auth.authmethod', '-v', 'CHAP'): stdout= stderr= _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:59:39.454 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--op', 'update', '-n', 'node.session.auth.username', '-v', '8n7WyZ2wZFNd7B33rfYw'): stdout= stderr= _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:59:39.454 7 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib/python3/dist-packages/ovs/poller.py:263 2023-06-26 10:59:39.461 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('--op', 'update', '-n', 'node.session.auth.password', '-v', '***'): stdout= stderr= _run_iscsiadm /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1026 2023-06-26 10:59:39.467 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm ('-m', 'session'): stdout= stderr=iscsiadm: No active sessions. _run_iscsiadm_bare /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1182 2023-06-26 10:59:39.467 7 DEBUG os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsi session list stdout= stderr=iscsiadm: No active sessions. _run_iscsi_session /var/lib/kolla/venv/lib/python3.10/site-packages/os_brick/initiator/connectors/iscsi.py:1171 2023-06-26 10:59:39.468 7 WARNING os_brick.initiator.connectors.iscsi [None req-67fc8494-95ce-45ca-972b-63d0833da0b2 c10f2f77bb9d43a0832dcc9262d63943 49fccac069b847a98fe3aae84ecf9aa4 - - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions. In this log, the problem is the line "Failed to login to iSCSI target", but I am not sure why this is happening. Does anyone have any ideas about what could be wrong? Thanks! 
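For completeness, the failing login can be reproduced by hand, outside of
nova, with something like the following (a sketch: the portal and IQN are
taken from the log above, and the CHAP credentials from the cinder config
would need to be set on the node record first):

# portal/IQN from the log above; CHAP credentials must be configured first
sudo iscsiadm -m discovery -t sendtargets -p 10.47.1.25:3260
sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-bc382b4c-dac4-4ca6-a260-0d450d6c4e26 \
    -p 10.47.1.25:3260 -I iser --login

The same login with -I default instead of -I iser should show whether plain
iSCSI to the same target works.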
---
Best regards,
Tomas Leypold

From wodel.youchi at gmail.com  Mon Jun 26 10:07:21 2023
From: wodel.youchi at gmail.com (wodel youchi)
Date: Mon, 26 Jun 2023 11:07:21 +0100
Subject: [kolla-ansible][yoga] How can I modify the logrotate configuration
 for kolla to save some space
Message-ID: 

Hi,

Is there a way to tweak kolla's logrotate configuration, to save up some
space on /var/log/kolla?

Regards.

From doug at stackhpc.com  Mon Jun 26 10:27:31 2023
From: doug at stackhpc.com (Doug Szumski)
Date: Mon, 26 Jun 2023 11:27:31 +0100
Subject: [kolla-ansible][yoga] How can I modify the logrotate configuration
 for kolla to save some space
In-Reply-To: 
References: 
Message-ID: 

On 26/06/2023 11:07, wodel youchi wrote:
> Hi,
>
> Is there a way to tweak kolla's logrotate configuration, to save up
> some space on /var/log/kolla?

You could override some of the cron settings?
https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/common/defaults/main.yml#L129

From wodel.youchi at gmail.com  Mon Jun 26 10:44:15 2023
From: wodel.youchi at gmail.com (wodel youchi)
Date: Mon, 26 Jun 2023 11:44:15 +0100
Subject: [kolla-ansible][yoga] How can I modify the logrotate configuration
 for kolla to save some space
In-Reply-To: 
References: 
Message-ID: 

Hi,

Can this do the job: putting a modified logrotate.conf file in
/etc/kolla/config/cron/ and then redeploying kolla?

I have another question: the maxsize parameter does not exist in the
logrotate man page, just size and minsize?

Regards.

On Mon, Jun 26, 2023 at 11:27 AM, Doug Szumski wrote:

> On 26/06/2023 11:07, wodel youchi wrote:
> > Hi,
> >
> > Is there a way to tweak kolla's logrotate configuration, to save up some
> > space on /var/log/kolla?
>
> You could override some of the cron settings?
> https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/common/defaults/main.yml#L129
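A sketch of the override route Doug mentions, set in /etc/kolla/globals.yml
and applied with kolla-ansible reconfigure. The variable names below are
assumptions based on the linked defaults file and should be verified
against your release:

# assumption: variable names as in the linked defaults file; verify per release
cron_logrotate_rotation_interval: "daily"
cron_logrotate_rotation_count: 7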
From mnasiadka at gmail.com  Mon Jun 26 10:50:03 2023
From: mnasiadka at gmail.com (Michał Nasiadka)
Date: Mon, 26 Jun 2023 12:50:03 +0200
Subject: [kolla] Weekly meetings in July 2023
Message-ID: <381F2692-31DD-4795-A096-AD1ABF0FCDA8@gmail.com>

Hello Koalas,

We decided last week that, due to my absence from 1st July, we're
cancelling the weekly meetings for the first three weeks of July - i.e. we
will meet on 26th of July for the only July meeting.

NOTE: This Wednesday's meeting is not cancelled.

In case of queries or additional topics - please reach out on
#openstack-kolla.

Best regards,
Michal

From michal.arbet at ultimum.io  Mon Jun 26 11:00:25 2023
From: michal.arbet at ultimum.io (Michal Arbet)
Date: Mon, 26 Jun 2023 13:00:25 +0200
Subject: [kolla-ansible][nova] nova upgrade failed from zed to antelope
In-Reply-To: 
References: 
Message-ID: 

Hi,

They just added a check for the service user; check the log:

+---------------------------------------------------------------------+
| Upgrade Check Results                                               |
+---------------------------------------------------------------------+
| Check: Cells v2                                                     |
| Result: Success                                                     |
| Details: None                                                       |
+---------------------------------------------------------------------+
| Check: Placement API                                                |
| Result: Success                                                     |
| Details: None                                                       |
+---------------------------------------------------------------------+
| Check: Cinder API                                                   |
| Result: Success                                                     |
| Details: None                                                       |
+---------------------------------------------------------------------+
| Check: Policy File JSON to YAML Migration                           |
| Result: Success                                                     |
| Details: None                                                       |
+---------------------------------------------------------------------+
| Check: Older than N-1 computes                                      |
| Result: Success                                                     |
| Details: None                                                       |
+---------------------------------------------------------------------+
| Check: hw_machine_type unset                                        |
| Result: Success                                                     |
| Details: None                                                       |
+---------------------------------------------------------------------+
| Check: Service User Token Configuration                             |
| Result: Failure                                                     |
| Details: Service user token configuration is required for all Nova |
|   services. For more details see the following: https://docs       |
|   .openstack.org/latest/nova/admin/configuration/service-          |
|   user-token.html                                                  |
+---------------------------------------------------------------------+

If we want to fix this in kolla-ansible, we should generate another config
file for the container which actually runs the nova upgrade status check,
run the container with this config, and if it passes, it's OK. The problem
is that we are running the upgrade check before config generation. If you
run kolla-ansible genconfig before the upgrade, it will pass on host1 as
well...
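In other words, roughly (the inventory path here is illustrative):

# inventory path is illustrative
kolla-ansible -i ./multinode genconfig
kolla-ansible -i ./multinode upgrade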
Michal Arbet
Openstack Engineer

Ultimum Technologies a.s.
Na Poříčí 1047/26, 11000 Praha 1
Czech Republic

+420 604 228 897
michal.arbet at ultimum.io
https://ultimum.io

LinkedIn | Twitter | Facebook

On Wed, Jun 21, 2023 at 4:54 PM, Satish Patel wrote:

> Just to close this loop: adding the following in
> /etc/kolla/nova-api/nova.conf fixed my issue. But I still need to
> understand what went wrong in the upgrade path.
>
> [service_user]
> send_service_user_token = true
> auth_url = http://10.30.50.10:5000
> auth_type = password
> project_domain_id = default
> user_domain_id = default
> project_name = service
> username = nova
> password = AAABBBCCCDDDEEE
> cafile =
> region_name = RegionOne
> valid_interfaces = internal
>
> On Wed, Jun 21, 2023 at 9:02 AM Satish Patel wrote:
>
>> Hi Maksim,
>>
>> This is all I have in my config/ I don't have any override.
>>
>> (venv-kolla) root at kolla-infra-1:~# ls -l /etc/kolla/config/
>> total 8
>> -rw-r--r-- 1 root root  187 Apr 30 04:11 global.conf
>> drwxr-xr-x 2 root root 4096 May  3 01:38 neutron
>>
>> (venv-kolla) root at kolla-infra-1:~# cat /etc/kolla/config/global.conf
>> [oslo_messaging_rabbit]
>> kombu_reconnect_delay=0.5
>> rabbit_transient_queues_ttl=60
>>
>> On Wed, Jun 21, 2023 at 4:15 AM Maksim Malchuk wrote:
>>
>>> Hi Satish,
>>>
>>> It's very strange, because the fix for the service token for Nova was
>>> merged a month ago
>>> (https://review.opendev.org/q/I2189dafca070accfd8efcd4b8cc4221c6decdc9f).
>>> Maybe you have a custom configuration which overrides nova.conf?
>>>
>>> On Wed, Jun 21, 2023 at 7:16 AM Satish Patel wrote:
>>>
>>>> Folks,
>>>>
>>>> I'm upgrading zed to antelope using kolla-ansible and encountered the
>>>> following error:
>>>>
>>>> TASK [nova : Upgrade status check result]
>>>> ************************************************************************************************************************************************************
>>>> fatal: [kolla-infra-1]: FAILED! => {"changed": false, "msg": ["There
>>>> was an upgrade status check failure!", "See the detail at
>>>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks"]}
>>>> fatal: [kolla-infra-2]: FAILED! => {"changed": false, "msg": ["There
>>>> was an upgrade status check failure!", "See the detail at
>>>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks"]}
>>>> fatal: [kolla-infra-3]: FAILED! => {"changed": false, "msg": ["There
>>>> was an upgrade status check failure!", "See the detail at
>>>> https://docs.openstack.org/nova/latest/cli/nova-status.html#nova-status-checks"]}
>>>>
>>>> After running the upgrade check command manually on the nova-api
>>>> container, I got the following error:
>>>>
>>>> (nova-api)[root at kolla-infra-2 /]# nova-status upgrade check
>>>> Modules with known eventlet monkey patching issues were imported prior
>>>> to eventlet monkey patching: urllib3. This warning can usually be
>>>> ignored if the caller is only importing and not executing nova code.
>>>> +---------------------------------------------------------------------+
>>>> | Upgrade Check Results                                               |
>>>> +---------------------------------------------------------------------+
>>>> | Check: Cells v2                                                     |
>>>> | Result: Success                                                     |
>>>> | Details: None                                                       |
>>>> +---------------------------------------------------------------------+
>>>> | Check: Placement API                                                |
>>>> | Result: Success                                                     |
>>>> | Details: None                                                       |
>>>> +---------------------------------------------------------------------+
>>>> | Check: Cinder API                                                   |
>>>> | Result: Success                                                     |
>>>> | Details: None                                                       |
>>>> +---------------------------------------------------------------------+
>>>> | Check: Policy File JSON to YAML Migration                           |
>>>> | Result: Success                                                     |
>>>> | Details: None                                                       |
>>>> +---------------------------------------------------------------------+
>>>> | Check: Older than N-1 computes                                      |
>>>> | Result: Success                                                     |
>>>> | Details: None                                                       |
>>>> +---------------------------------------------------------------------+
>>>> | Check: hw_machine_type unset                                        |
>>>> | Result: Success                                                     |
>>>> | Details: None                                                       |
>>>> +---------------------------------------------------------------------+
>>>> | Check: Service User Token Configuration                             |
>>>> | Result: Failure                                                     |
>>>> | Details: Service user token configuration is required for all Nova |
>>>> |   services. For more details see the following: https://docs       |
>>>> |   .openstack.org/latest/nova/admin/configuration/service-          |
>>>> |   user-token.html                                                  |
>>>> +---------------------------------------------------------------------+
>>>>
>>>> The failing service user token check references the following doc [1].
>>>> Do I need to configure the service user token in order to upgrade nova?
>>>>
>>>> [1]
>>>> https://docs.openstack.org/nova/latest/admin/configuration/service-user-token.html
>>>
>>> --
>>> Regards,
>>> Maksim Malchuk

From pierre at stackhpc.com  Mon Jun 26 12:14:29 2023
From: pierre at stackhpc.com (Pierre Riteau)
Date: Mon, 26 Jun 2023 14:14:29 +0200
Subject: [blazar] Cancelling next two IRC meetings
Message-ID: 

Hello,

I am cancelling the next two Blazar IRC meetings due to being out of the
office. The next meeting will be on July 27.

Best wishes,
Pierre Riteau (priteau)

From knikolla at bu.edu  Mon Jun 26 14:34:05 2023
From: knikolla at bu.edu (Nikolla, Kristi)
Date: Mon, 26 Jun 2023 14:34:05 +0000
Subject: [tc] Technical Committee next weekly meeting on June 27, 2023
Message-ID: <25FD8182-387C-4431-8B5A-0CA8FA43D419@bu.edu>

Hi all,

This is a reminder that the next weekly Technical Committee meeting is to
be held on Tuesday, Jun 27, 2023 at 1800 UTC on #openstack-tc on OFTC IRC.

Please propose items to the agenda by editing the wiki page at
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

At the end of the day I will send out an email with the finalized agenda.

Thank you,
Kristi Nikolla

From ces.eduardo98 at gmail.com  Mon Jun 26 14:53:50 2023
From: ces.eduardo98 at gmail.com (Carlos Silva)
Date: Mon, 26 Jun 2023 11:53:50 -0300
Subject: [manila] Vancouver PTG and Forum
In-Reply-To: <498c82b7-1fce-eb37-b8e1-1a3a20d67ebf@debian.org>
References: 
Message-ID: 

Hi,

We have been tracking the roadmap for the driver with blueprints in the
Manila launchpad. As for VirtioFS support, the goal is to complete the
Manila side as soon as we can in this release (Bobcat). In Manila, we have
two main things being worked on: allowing shares to be locked against
deletion [1], and access rule visibility and deletion restrictions [2].
The corresponding blueprints are available in [3].

[1] https://review.opendev.org/c/openstack/manila-specs/+/881894
[2] https://review.opendev.org/c/openstack/manila-specs/+/881934
[3] https://blueprints.launchpad.net/openstack/manila?searchtext=virtiofs

Regards,
carloss

On Mon, Jun 26, 2023 at 04:43, Thomas Goirand wrote:

> On 6/23/23 21:42, Carlos Silva wrote:
> > Many operators in the room were interested in VirtIOFS so they
> > can consume Native CephFS without having to expose the cluster to user
> > VMs/containers.
>
> Hi,
>
> FYI, I missed the session, though I am also very interested in this
> driver... Is there a roadmap for it somewhere?
>
> Cheers,
>
> Thomas Goirand (zigo)
URL: From rosmaita.fossdev at gmail.com Mon Jun 26 15:24:27 2023 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 26 Jun 2023 11:24:27 -0400 Subject: [glance][nova] nova.exception.ImageNotAuthorized: Not authorized for image In-Reply-To: References: Message-ID: On 6/25/23 11:02 PM, Satish Patel wrote: > Folks, > > Following options fixed my problem but it has some security?issues > involved. > show_multiple_locations=True See OSSN-0090 for a discussion of the security issues involved: https://wiki.openstack.org/wiki/OSSN/OSSN-0090 > > On Fri, Jun 23, 2023 at 6:26?PM Satish Patel > wrote: > > Folks, > > I am running kolla-ansible?on small environments with ceph. When I > am getting the following error when performing a VM snapshot. > > This started happening?after I upgraded from yoga to Zed. Any idea > what changed here? > > > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > self._client.call( > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 191, in call > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? result > = getattr(controller, method)(*args, **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 503, in add_location > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > response = self._send_image_update_request(image_id, add_patch) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/utils.py", line 670, in inner > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > RequestIdProxy(wrapped(*args, **kwargs)) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 483, in _send_image_update_request > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? resp, > body = self.http_client.patch(url, headers=hdrs, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/keystoneauth1/adapter.py", line 407, in patch > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > self.request(url, 'PATCH', **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 380, in request > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > self._handle_response(resp) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 120, in _handle_response > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? raise > exc.from_response(resp, resp.content) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > glanceclient.exc.HTTPForbidden: HTTP 403 Forbidden: It's not > allowed to add locations if locations are invisible. > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server During > handling of the above exception, another exception occurred: > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server Traceback > (most recent call last): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? 
File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? res = > self.dispatcher.dispatch(message) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > self._do_dispatch(endpoint, method, ctxt, args) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? result > = func(ctxt, **new_args) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/exception_wrapper.py", line 65, in wrapped > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? with > excutils.save_and_reraise_exception(): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__ > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? raise > self.value > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/exception_wrapper.py", line 63, in wrapped > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > f(self, context, *args, **kw) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 164, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? with > excutils.save_and_reraise_exception(): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__ > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? raise > self.value > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 155, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > function(self, context, *args, **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/utils.py", line 1439, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > function(self, context, *args, **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 211, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? 
with > excutils.save_and_reraise_exception(): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__ > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? raise > self.value > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 201, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > function(self, context, *args, **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 231, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? with > excutils.save_and_reraise_exception(): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__ > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? raise > self.value > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 228, in decorated_function > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > function(self, context, image_id, instance, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 4219, in snapshot_instance > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self._snapshot_instance(context, image_id, instance, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 4252, in _snapshot_instance > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.driver.snapshot(context, instance, image_id, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/virt/libvirt/driver.py", line 3116, in snapshot > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? with > excutils.save_and_reraise_exception(): > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__ > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? raise > self.value > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? 
File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/virt/libvirt/driver.py", line 3045, in snapshot > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > self._image_api.update(context, image_id, metadata, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 1243, in update > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > session.update(context, image_id, image_info, data=data, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 693, in update > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > _reraise_translated_image_exception(image_id) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? raise > new_exc.with_traceback(exc_trace) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 691, in update > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? image > = self._update_v2(context, sent_service_image_meta, data) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 705, in _update_v2 > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? image > = self._add_location(context, image_id, location) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 561, in _add_location > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > self._client.call( > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 191, in call > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? result > = getattr(controller, method)(*args, **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 503, in add_location > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > response = self._send_image_update_request(image_id, add_patch) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/utils.py", line 670, in inner > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > RequestIdProxy(wrapped(*args, **kwargs)) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 483, in _send_image_update_request > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? resp, > body = self.http_client.patch(url, headers=hdrs, > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/keystoneauth1/adapter.py", line 407, in patch > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > self.request(url, 'PATCH', **kwargs) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? 
File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 380, in request > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? return > self._handle_response(resp) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? File > "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 120, in _handle_response > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server ? ? raise > exc.from_response(resp, resp.content) > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > nova.exception.ImageNotAuthorized: Not authorized for image > 6d39ead7-e543-4ab6-b54c-78ca16421242. > 2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server > > From haleyb.dev at gmail.com Mon Jun 26 19:05:07 2023 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 26 Jun 2023 15:05:07 -0400 Subject: Neutron BGP agent advertisement and l3/openvswitch-agent problems (zed) In-Reply-To: References: Message-ID: <0673c7ef-c191-2912-dd71-6236e8e4890b@gmail.com> On 6/24/23 11:58 PM, Bryan Huang wrote: > Dear folks, > > Recently, we met some neutron networking problems in our envrionment, > openstack version is zed, and kolla-ansible as the deployment tool. > > 1. Neutron BGP agent doesn't advertise the floating IPs to the BGP > peer, in case of the floating IPs were served for port forwarding, > but the floating IPs attached to VM/Container were advertised > correctly. so the question is *this scenario supported by BGP > agent*, if not when will it be supported, is it in the plan? Someone more familiar with that agent will have to help you here. > 2. iptable rules restoring error in l3-agent and openvswitch-agent (A > bug was reported in launchpad: > https://bugs.launchpad.net/neutron/+bug/2024976 > ) > Bug #2024976 ?iptable rules restoring error in l3-agent and open...? > : Bugs : neutron > Openstack version: zed/stable OS version: Ubuntu 22.04.2 LTS Kernel > version: 5.15.0-75-generic #82-Ubuntu Deployment: kolla-ansible > iptable rules restoring error in l3-agent and openvswitch-agent: ??? > ???openvswitch-agnet log: 2023-06-23 15:54:58.616 7 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent > [None req-4440bce1-8c07-4243-ac1b-2566b406a30a - - - - - -] Error > while processing VIF ports: > neutron_lib.exceptions.ProcessExecutionError: Exit code: 2; Cmd: [... > bugs.launchpad.net > > > *??????openvswitch-agnet log:* > > 2023-06-23 15:54:58.616 7 ERROR > neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent > [None req-4440bce1-8c07-4243-ac1b-2566b406a30a - - - - - -] > Error while processing VIF ports: > neutron_lib.exceptions.ProcessExecutionError: Exit code: 2; Cmd: > ['iptables-restore', '-n']; Stdin: # Generated by iptables_manager This is most likely due to a system update, as iptables is being replaced by nftables I've seen this happen. You should be able to fix this with update-alternatives, this is my working system: $ sudo update-alternatives --config iptables There are 2 choices for the alternative iptables (providing /usr/sbin/iptables). 
-Brian

From sbauza at redhat.com  Tue Jun 27 08:43:17 2023
From: sbauza at redhat.com (Sylvain Bauza)
Date: Tue, 27 Jun 2023 10:43:17 +0200
Subject: [nova] Last Spec review day next Tuesday !
In-Reply-To:
References:
Message-ID:

As a reminder, this is today :)

On Fri, Jun 23, 2023 at 11:34 AM, Sylvain Bauza wrote:

> Hey folks,
>
> As a reminder [1], we will have a spec review day next Tuesday, June 27th.
> Sharpen your pens and your Gerrit patches, because it will be the last
> spec review day for this cycle and the Spec Approval Freeze will be on
> July 6th [2]!
>
> Make sure you have everything uploaded so we can look at it during this
> day! After July 6th, no new features will be accepted if they need a
> specific spec.
>
> -Sylvain
>
> [1] https://releases.openstack.org/bobcat/schedule.html#b-nova-spec-review-day
> [2] https://releases.openstack.org/bobcat/schedule.html#b-nova-spec-freeze

From skaplons at redhat.com  Tue Jun 27 10:39:34 2023
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Tue, 27 Jun 2023 12:39:34 +0200
Subject: [neutron] Bug deputy report - week of Jun 19th
Message-ID: <6492238.8pY88D2gUS@p1>

Hi,

I was bug deputy last week; here is a summary of the new bugs opened in Neutron:

## Critical ##
* https://bugs.launchpad.net/neutron/+bug/2024903 - [neutron-lib] FT "test_negative_update_floatingip_port_forwarding" failing randomly very often - already fixed by Rodolfo,
* https://bugs.launchpad.net/neutron/+bug/2025126 - Functional MySQL sync test failing with oslo/sqlalchemy master branch - not assigned

## High ##
* https://bugs.launchpad.net/neutron/+bug/2024674 - Unit tests fail with oslo_db.exception.DBNonExistentTable: (sqlite3.OperationalError) no such table: ml2_geneve_allocations when run with low concurrency or on loaded systems - assigned to Yatin
* https://bugs.launchpad.net/neutron/+bug/2025129 - DvrLocalRouter init references namespace before it is created - not assigned

## Medium ##
* https://bugs.launchpad.net/neutron/+bug/2024381 - keepalived fails to start after updating DVR-HA internal network MTU - assigned to Anton Kurbatov, fix proposed,
* https://bugs.launchpad.net/neutron/+bug/2024912 - [ovn-octavia-provider] Updating status on incorrect pool when HM delete - assigned to Fernando
* https://bugs.launchpad.net/neutron/+bug/2025056 - Router ports without IP addresses shouldn't be allowed to be deleted using the port API directly - assigned to slaweq, fix proposed

## Wishlist ##
* https://bugs.launchpad.net/neutron/+bug/2024502 - Tempest: add scenario to validate that stateless SG rules are working in presence of Load Balancer attached to the same network - assigned to slaweq,
* https://bugs.launchpad.net/neutron/+bug/2024621 - tempest: add scenario for reverse DNS resolution - not assigned - I marked it as a low-hanging-fruit task, as I think writing such a test case is a good way to start contributing to Neutron,

## RFEs ##
* https://bugs.launchpad.net/neutron/+bug/2024581 - [RFE] Caching instance id in the metadata agent to have less RPC messages sent to server - not assigned
* https://bugs.launchpad.net/neutron/+bug/2024921 - [RFE] Formalize use of subnet service-type for draining subnets - not assigned
* https://bugs.launchpad.net/neutron/+bug/2025055 - [rfe][ml2] Add a new API that supports cloning a specified security group - not assigned

## Needs triage ##
* https://bugs.launchpad.net/ubuntu/+source/neutron-dynamic-routing/+bug/2024510 - Address on SNAT port won't be advertised by BGP speaker - not assigned
* https://bugs.launchpad.net/neutron/+bug/2024481 - [ndr] neutron-bgp-dragent is racy when a service restart is made just before a speaker is added - frickler is already taking care of this one, but it still needs more triaging

## Incomplete ##
* https://bugs.launchpad.net/neutron/+bug/2024976 - iptable rules restoring error in l3-agent and openvswitch-agent - already taken care of by Brian; it seems to be a deployment issue,

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat

From molenkam at uwo.ca  Tue Jun 27 11:37:14 2023
From: molenkam at uwo.ca (Gary Molenkamp)
Date: Tue, 27 Jun 2023 07:37:14 -0400
Subject: SNAT failure with OVN under Antelope
Message-ID: <34841bdd-bb95-ff9a-258b-cfae26558627@uwo.ca>

Good morning. I'm having a problem with SNAT routing under OVN, but I'm
not sure whether something is misconfigured or my understanding of how
OVN is architected is simply wrong.

I've built a Zed cloud, since upgraded to Antelope, using the Neutron
manual install method here:
https://docs.openstack.org/neutron/latest/install/ovn/manual_install.html
I'm using a multi-tenant configuration with Geneve, and the flat
provider network is present on each hypervisor. Each hypervisor is
connected to the physical provider network along with the tenant
network, and is tagged as an external chassis under OVN:

    br-int exists, as does br-provider
    ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw

For most cases, distributed FIP based connectivity is working without
issue, but I'm having an issue where VMs without a FIP are not always
able to use the SNAT services of the tenant network router.

Scenario:
    Internal network named cs3319, with subnet 172.31.100.0/23
    Has a router named cs3319_router with external gateway set (SNAT enabled)
    This network has 3 VMs:
      - #1 has a FIP and can be accessed externally
      - #2 has no FIP, can be accessed via VM1, and can access
        external resources via SNAT (ie OS repos, DNS, etc)
      - #3 has no FIP, can be accessed via VM1, but has no external
        SNAT connectivity

From what I can tell, the chassis config is correct; compute05 is the
hypervisor, and the faulty VM has a port binding on this hypervisor:

ovn-sbctl show
...
Chassis "8e0fa17c-e480-4b60-9015-bd8833412561"
    hostname: compute05.cloud.sci.uwo.ca
    Encap geneve
        ip: "192.168.0.105"
        options: {csum="true"}
    Port_Binding "7a5257eb-caea-45bf-b48c-620c5dff4b39"
    Port_Binding "50e16602-78e6-429b-8c2f-e7e838ece1b4"
    Port_Binding "f121c9f4-c3fe-4ea9-b754-a809be95a3fd"

The router has the candidate gateways, and the snat set:

ovn-nbctl show 92df19a7-4ebe-43ea-b233-f4e9f5a46e7c
router 92df19a7-4ebe-43ea-b233-f4e9f5a46e7c (neutron-389439b5-07f8-44b6-a35b-c76651b48be5) (aka cs3319_public_router)
    port lrp-44ae1753-845e-4822-9e3d-a41e0469e257
        mac: "fa:16:3e:9a:db:d8"
        networks: ["129.100.21.94/22"]
        gateway chassis: [5c039d38-70b2-4ee6-9df1-596f82c68106 99facd23-ad17-4b68-a8c2-1ff6da15ac5f 1694116c-6d30-4c31-b5ea-0f411878316e 2a4bbaf9-228a-462e-8970-0cdbf59086e6 9332c61b-93e1-4a70-9547-701a014bfd98]
    port lrp-509bba37-fa06-42d6-9210-2342045490db
        mac: "fa:16:3e:ff:0f:3b"
        networks: ["172.31.100.1/23"]
    nat 11e0565a-4695-4f67-b4ee-101f1b1b9a4f
        external ip: "129.100.21.94"
        logical ip: "172.31.100.0/23"
        type: "snat"
    nat 21e4be02-d81c-46e8-8fa8-3f94edb4aed1
        external ip: "129.100.21.87"
        logical ip: "172.31.100.49"
        type: "dnat_and_snat"

Each network agent on the hypervisors shows the OVN controller up:

    OVN Controller Gateway agent | compute05.cloud.sci.uwo.ca | | :-) | UP | ovn-controller

The OVS vswitch on the hypervisor looks correct afaict, and the OVN
ports' BFD status is forwarding to all other hypervisors, ie:

    Port ovn-2a4bba-0
        Interface ovn-2a4bba-0
            type: geneve
            options: {csum="true", key=flow, remote_ip="192.168.0.106"}
            bfd_status: {diagnostic="No Diagnostic", flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}

Any advice on where to look would be appreciated.

PS. Version info:
    Neutron 22.0.0-1
    OVN 22.12
neutron options:
    enable_distributed_floating_ip = true
    ovn_l3_scheduler = leastloaded

Thanks
Gary

-- 
Gary Molenkamp                  Science Technology Services
Systems/Cloud Administrator     University of Western Ontario
molenkam at uwo.ca                 http://sts.sci.uwo.ca
(519) 661-2111 x86882           (519) 661-3566

From wodel.youchi at gmail.com  Tue Jun 27 11:53:15 2023
From: wodel.youchi at gmail.com (wodel youchi)
Date: Tue, 27 Jun 2023 12:53:15 +0100
Subject: Cloudkitty support for Opensearch on Yoga
Message-ID:

Hi,

Reading the release notes for CloudKitty, I came across this:

"Support for using Elasticsearch as a storage backend is being deprecated
in the Antelope release in favour of OpenSearch. We will try to keep
CloudKitty compatible with both solutions. However, we will only test with
OpenSearch."

What is the state of OpenSearch support in CloudKitty?
If it is fully supported, does OpenStack Yoga offer the same support?

Regards.

From pierre at stackhpc.com  Tue Jun 27 12:24:30 2023
From: pierre at stackhpc.com (Pierre Riteau)
Date: Tue, 27 Jun 2023 14:24:30 +0200
Subject: Cloudkitty support for Opensearch on Yoga
In-Reply-To:
References:
Message-ID:

Hi,

OpenSearch support for CloudKitty has been proposed here:
https://review.opendev.org/c/openstack/cloudkitty/+/880739

It still needs to be tested in the context of a migration from
Elasticsearch to OpenSearch (using a Kolla Ansible deployment). Once this
is done, the code will be backported to Yoga, which is the first Kolla
Ansible release that supports OpenSearch.

Best regards,
Pierre Riteau (priteau)

On Tue, 27 Jun 2023 at 13:57, wodel youchi wrote:

> Hi,
>
> Reading the release notes for CloudKitty, I came across this:
> [...]
>
> Regards.
From nguyenhuukhoinw at gmail.com  Tue Jun 27 13:05:44 2023
From: nguyenhuukhoinw at gmail.com (Nguyễn Hữu Khôi)
Date: Tue, 27 Jun 2023 20:05:44 +0700
Subject: [cinder]cinder qos question.
Message-ID:

Hello guys.
I am trying to use Cinder QoS, but I have a question:
- I added a QoS spec after a volume had been created, and the QoS did not
apply to that volume.
- I created a new volume with an associated QoS spec and it was applied,
but when I later changed the extra specs in the QoS, the volume kept the
old specs.
Is there any way to make the QoS apply in the two cases above?
Many thanks.

From ralonsoh at redhat.com  Tue Jun 27 13:59:28 2023
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Tue, 27 Jun 2023 15:59:28 +0200
Subject: SNAT failure with OVN under Antelope
In-Reply-To: <34841bdd-bb95-ff9a-258b-cfae26558627@uwo.ca>
References: <34841bdd-bb95-ff9a-258b-cfae26558627@uwo.ca>
Message-ID:

Hello Gary:

If you have two VMs in the same network, both without FIPs, and one is
working but not the other, I would just compare the Neutron and OVN
resources of both ports 1:1 (I guess each VM has a single port). I would
start with the OVN NAT registers. You should also check the internal VM
routing table.

Apart from that, you should also trace the VM traffic, to know where it is
dropped. Maybe the traffic is sent correctly outside the GW port but never
gets back (in that case, check your underlying network configuration). Or,
as you commented, the SNAT is not working for this specific port.
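For the tracing part, ovn-trace against the logical topology is usually
the fastest way to see where a packet is dropped. A sketch of the kind of
command I mean, run from a node that can reach the OVN southbound DB (the
switch name, port UUID, MACs and IPs below are only placeholders for the
values of your failing VM):

ovn-trace --summary <logical-switch> 'inport == "<vm-port-uuid>" && eth.src == <vm-mac> && eth.dst == <router-port-mac> && ip4.src == <vm-ip> && ip4.dst == 8.8.8.8 && ip.ttl == 64'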
> Chassis "8e0fa17c-e480-4b60-9015-bd8833412561" > hostname: compute05.cloud.sci.uwo.ca > Encap geneve > ip: "192.168.0.105" > options: {csum="true"} > Port_Binding "7a5257eb-caea-45bf-b48c-620c5dff4b39" > Port_Binding "50e16602-78e6-429b-8c2f-e7e838ece1b4" > Port_Binding "f121c9f4-c3fe-4ea9-b754-a809be95a3fd" > > The router has the candidate gateways, and the snat set: > > ovn-nbctl show 92df19a7-4ebe-43ea-b233-f4e9f5a46e7c > router 92df19a7-4ebe-43ea-b233-f4e9f5a46e7c > (neutron-389439b5-07f8-44b6-a35b-c76651b48be5) (aka cs3319_public_router) > port lrp-44ae1753-845e-4822-9e3d-a41e0469e257 > mac: "fa:16:3e:9a:db:d8" > networks: ["129.100.21.94/22"] > gateway chassis: [5c039d38-70b2-4ee6-9df1-596f82c68106 > 99facd23-ad17-4b68-a8c2-1ff6da15ac5f > 1694116c-6d30-4c31-b5ea-0f411878316e > 2a4bbaf9-228a-462e-8970-0cdbf59086e6 9332c61b-93e1-4a70-9547-701a014bfd98] > port lrp-509bba37-fa06-42d6-9210-2342045490db > mac: "fa:16:3e:ff:0f:3b" > networks: ["172.31.100.1/23"] > nat 11e0565a-4695-4f67-b4ee-101f1b1b9a4f > external ip: "129.100.21.94" > logical ip: "172.31.100.0/23" > type: "snat" > nat 21e4be02-d81c-46e8-8fa8-3f94edb4aed1 > external ip: "129.100.21.87" > logical ip: "172.31.100.49" > type: "dnat_and_snat" > > Each network agent on the hypervisors shows the ovn controller up : > OVN Controller Gateway agent | compute05.cloud.sci.uwo.ca > | | :-) | UP | ovn-controller > > The ovs vswitch on the hypervisor looks correct afaict and ovn ports bfd > status are all forwarding to other hypervisors. ie: > Port ovn-2a4bba-0 > Interface ovn-2a4bba-0 > type: geneve > options: {csum="true", key=flow, > remote_ip="192.168.0.106"} > bfd_status: {diagnostic="No Diagnostic", > flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", > remote_state=up, state=up} > > > Any advice on where to look would be appreciated. > > PS. Version info: > Neutron 22.0.0-1 > OVN 22.12 > > neutron options: > enable_distributed_floating_ip = true > ovn_l3_scheduler = leastloaded > > > > Thanks > Gary > > > > -- > Gary Molenkamp Science Technology Services > Systems/Cloud Administrator University of Western Ontario > molenkam at uwo.ca http://sts.sci.uwo.ca > (519) 661-2111 x86882 (519) 661-3566 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Tue Jun 27 14:33:50 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Tue, 27 Jun 2023 21:33:50 +0700 Subject: [cinder]cinder qos question. In-Reply-To: References: Message-ID: Hello guys, To apply qos on existing volumes or modify qos on volume type, I need live migrate instances. Is there any way to achieve this without my solution? Nguyen Huu Khoi On Tue, Jun 27, 2023 at 8:05?PM Nguy?n H?u Kh?i wrote: > Hello guys. > I am trying to use cinder qos. > But i have question. > - I added qos after a volume was create and I see that qos did not apply > to this volume. > - i create new volume with assosiate qos then it is ok but when i changed > extra specs in qos, volume keep old specs. > Is there any way to make qos apply with two above cases. > Many thanks. > -------------- next part -------------- An HTML attachment was scrubbed... 
From ykarel at redhat.com  Tue Jun 27 14:37:45 2023
From: ykarel at redhat.com (Yatin Karel)
Date: Tue, 27 Jun 2023 20:07:45 +0530
Subject: SNAT failure with OVN under Antelope
In-Reply-To: <34841bdd-bb95-ff9a-258b-cfae26558627@uwo.ca>
References: <34841bdd-bb95-ff9a-258b-cfae26558627@uwo.ca>
Message-ID:

Hi Gary,

On top of what Rodolfo said:

On Tue, Jun 27, 2023 at 5:15 PM Gary Molenkamp wrote:

> Good morning. I'm having a problem with SNAT routing under OVN, but I'm
> not sure whether something is misconfigured or my understanding of how
> OVN is architected is simply wrong.
> [...]
>     br-int exists, as does br-provider
>     ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw

Any specific reason to enable the gateway on compute nodes? Generally it's
recommended to use controller/network nodes as gateways. What's your env
(number of controller, network and compute nodes)?

> For most cases, distributed FIP based connectivity is working without
> issue, but I'm having an issue where VMs without a FIP are not always
> able to use the SNAT services of the tenant network router.
> [...]
>       - #3 has no FIP, can be accessed via VM1, but has no external
>         SNAT connectivity

Considering it works for some VMs but not for others, the point above
about enable-chassis-as-gw could be related. Is the working VM hosted on
compute05 or on some other compute node? Where is the gateway router port
scheduled (you can check ovn-sbctl show for the cr-lrp- port)?
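For example, something like this should show which chassis currently hosts
the gateway port (adjust the -B context value as needed):

ovn-sbctl show | grep -B5 cr-lrp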
> Chassis "8e0fa17c-e480-4b60-9015-bd8833412561" > hostname: compute05.cloud.sci.uwo.ca > Encap geneve > ip: "192.168.0.105" > options: {csum="true"} > Port_Binding "7a5257eb-caea-45bf-b48c-620c5dff4b39" > Port_Binding "50e16602-78e6-429b-8c2f-e7e838ece1b4" > Port_Binding "f121c9f4-c3fe-4ea9-b754-a809be95a3fd" > > The router has the candidate gateways, and the snat set: > > ovn-nbctl show 92df19a7-4ebe-43ea-b233-f4e9f5a46e7c > router 92df19a7-4ebe-43ea-b233-f4e9f5a46e7c > (neutron-389439b5-07f8-44b6-a35b-c76651b48be5) (aka cs3319_public_router) > port lrp-44ae1753-845e-4822-9e3d-a41e0469e257 > mac: "fa:16:3e:9a:db:d8" > networks: ["129.100.21.94/22"] > gateway chassis: [5c039d38-70b2-4ee6-9df1-596f82c68106 > 99facd23-ad17-4b68-a8c2-1ff6da15ac5f > 1694116c-6d30-4c31-b5ea-0f411878316e > 2a4bbaf9-228a-462e-8970-0cdbf59086e6 9332c61b-93e1-4a70-9547-701a014bfd98] > port lrp-509bba37-fa06-42d6-9210-2342045490db > mac: "fa:16:3e:ff:0f:3b" > networks: ["172.31.100.1/23"] > nat 11e0565a-4695-4f67-b4ee-101f1b1b9a4f > external ip: "129.100.21.94" > logical ip: "172.31.100.0/23" > type: "snat" > nat 21e4be02-d81c-46e8-8fa8-3f94edb4aed1 > external ip: "129.100.21.87" > logical ip: "172.31.100.49" > type: "dnat_and_snat" > > Each network agent on the hypervisors shows the ovn controller up : > OVN Controller Gateway agent | compute05.cloud.sci.uwo.ca > | | :-) | UP | ovn-controller > > The ovs vswitch on the hypervisor looks correct afaict and ovn ports bfd > status are all forwarding to other hypervisors. ie: > Port ovn-2a4bba-0 > Interface ovn-2a4bba-0 > type: geneve > options: {csum="true", key=flow, > remote_ip="192.168.0.106"} > bfd_status: {diagnostic="No Diagnostic", > flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", > remote_state=up, state=up} > > > Any advice on where to look would be appreciated. > > I have seen mtu specific issues in the past, would be good to rule out any mtu issue with working and non working cases. PS. Version info: > Neutron 22.0.0-1 > OVN 22.12 > > neutron options: > enable_distributed_floating_ip = true > ovn_l3_scheduler = leastloaded > > > > Thanks > Gary > > > > -- > Gary Molenkamp Science Technology Services > Systems/Cloud Administrator University of Western Ontario > molenkam at uwo.ca http://sts.sci.uwo.ca > (519) 661-2111 x86882 (519) 661-3566 > > > Thanks and Regards Yatin Karel -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.acosta at luizalabs.com Tue Jun 27 15:18:22 2023 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Tue, 27 Jun 2023 12:18:22 -0300 Subject: SNAT failure with OVN under Antelope In-Reply-To: References: <34841bdd-bb95-ff9a-258b-cfae26558627@uwo.ca> Message-ID: Hi Gary, Em ter., 27 de jun. de 2023 ?s 11:47, Yatin Karel escreveu: > Hi Gary, > > On top what Rodolfo said > On Tue, Jun 27, 2023 at 5:15?PM Gary Molenkamp wrote: > >> Good morning, I'm having a problem with snat routing under OVN but I'm >> not sure if something is mis-configured or just my understanding of how >> OVN is architected is wrong. >> >> I've built a Zed cloud, since upgraded to Antelope, using the Neutron >> Manual install method here: >> https://docs.openstack.org/neutron/latest/install/ovn/manual_install.html >> I'm using a multi-tenent configuration using geneve and the flat >> provider network is present on each hypervisor. Each hypervisor is >> connected to the physical provider network, along with the tenent >> network and is tagged as an external chassis under OVN. 
>>     br-int exists, as does br-provider
>>     ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw
>
> Any specific reason to enable the gateway on compute nodes? Generally
> it's recommended to use controller/network nodes as gateways. What's
> your env (number of controller, network and compute nodes)?

Wouldn't it be interesting to enable-chassis-as-gw on the compute nodes in
case you want to use DVR? If that's the case, you need to map the external
bridge (ovs-vsctl set open . external-ids:ovn-bridge-mappings=...); via
ansible this is created automatically, but in the manual installation I
didn't see any mention of it.

The problem is basically that the port of the OVN LRP may not be in the
same chassis as the VM that failed (since the CR-LRP will be where the
first VM of that network was created). The suggestion is to remove the
enable-chassis-as-gw from the compute nodes to allow the VM to forward
traffic via tunneling/Geneve to the chassis where the LRP resides:

ovs-vsctl remove open . external-ids ovn-cms-options="enable-chassis-as-gw"
ovs-vsctl remove open . external-ids ovn-bridge-mappings
ip link set br-provider-name down
ovs-vsctl del-br br-provider-name
systemctl restart ovn-controller
systemctl restart openvswitch-switch

>> For most cases, distributed FIP based connectivity is working without
>> issue, but I'm having an issue where VMs without a FIP are not always
>> able to use the SNAT services of the tenant network router.
>> [...]
>
> Considering it works for some VMs but not for others, the point above
> about enable-chassis-as-gw could be related. Is the working VM hosted
> on compute05 or on some other compute node? Where is the gateway router
> port scheduled (you can check ovn-sbctl show for the cr-lrp- port)?
>
>> From what I can tell, the chassis config is correct; compute05 is the
>> hypervisor, and the faulty VM has a port binding on this hypervisor:
>>
>> ovn-sbctl show
>> ...
>> Chassis "8e0fa17c-e480-4b60-9015-bd8833412561" >> hostname: compute05.cloud.sci.uwo.ca >> Encap geneve >> ip: "192.168.0.105" >> options: {csum="true"} >> Port_Binding "7a5257eb-caea-45bf-b48c-620c5dff4b39" >> Port_Binding "50e16602-78e6-429b-8c2f-e7e838ece1b4" >> Port_Binding "f121c9f4-c3fe-4ea9-b754-a809be95a3fd" >> >> The router has the candidate gateways, and the snat set: >> >> ovn-nbctl show 92df19a7-4ebe-43ea-b233-f4e9f5a46e7c >> router 92df19a7-4ebe-43ea-b233-f4e9f5a46e7c >> (neutron-389439b5-07f8-44b6-a35b-c76651b48be5) (aka cs3319_public_router) >> port lrp-44ae1753-845e-4822-9e3d-a41e0469e257 >> mac: "fa:16:3e:9a:db:d8" >> networks: ["129.100.21.94/22"] >> gateway chassis: [5c039d38-70b2-4ee6-9df1-596f82c68106 >> 99facd23-ad17-4b68-a8c2-1ff6da15ac5f >> 1694116c-6d30-4c31-b5ea-0f411878316e >> 2a4bbaf9-228a-462e-8970-0cdbf59086e6 9332c61b-93e1-4a70-9547-701a014bfd98] >> port lrp-509bba37-fa06-42d6-9210-2342045490db >> mac: "fa:16:3e:ff:0f:3b" >> networks: ["172.31.100.1/23"] >> nat 11e0565a-4695-4f67-b4ee-101f1b1b9a4f >> external ip: "129.100.21.94" >> logical ip: "172.31.100.0/23" >> type: "snat" >> nat 21e4be02-d81c-46e8-8fa8-3f94edb4aed1 >> external ip: "129.100.21.87" >> logical ip: "172.31.100.49" >> type: "dnat_and_snat" >> >> Each network agent on the hypervisors shows the ovn controller up : >> OVN Controller Gateway agent | compute05.cloud.sci.uwo.ca >> | | :-) | UP | ovn-controller >> >> The ovs vswitch on the hypervisor looks correct afaict and ovn ports bfd >> status are all forwarding to other hypervisors. ie: >> Port ovn-2a4bba-0 >> Interface ovn-2a4bba-0 >> type: geneve >> options: {csum="true", key=flow, >> remote_ip="192.168.0.106"} >> bfd_status: {diagnostic="No Diagnostic", >> flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", >> remote_state=up, state=up} >> >> >> Any advice on where to look would be appreciated. >> >> I have seen mtu specific issues in the past, would be good to rule out > any mtu issue with working and non working cases. > > PS. Version info: >> Neutron 22.0.0-1 >> OVN 22.12 >> >> neutron options: >> enable_distributed_floating_ip = true >> ovn_l3_scheduler = leastloaded >> >> >> >> Thanks >> Gary >> >> >> >> -- >> Gary Molenkamp Science Technology Services >> Systems/Cloud Administrator University of Western Ontario >> molenkam at uwo.ca http://sts.sci.uwo.ca >> (519) 661-2111 x86882 (519) 661-3566 >> >> >> Thanks and Regards > Yatin Karel > -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Tue Jun 27 15:43:05 2023 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 27 Jun 2023 17:43:05 +0200 Subject: [qa] Cancelling office hour July 4th Message-ID: Hello everyone, I'll be on PTO starting this Thursday (June 29th) until July 10th. Therefore I'm gonna cancel our next office hour on July 4th. In case of emergencies, contact me via email - I'll be off the IRC. 
Thanks,

-- 
Martin Kopec
Principal Software Quality Engineer
Red Hat EMEA
IM: kopecmartin

From molenkam at uwo.ca  Tue Jun 27 17:20:39 2023
From: molenkam at uwo.ca (Gary Molenkamp)
Date: Tue, 27 Jun 2023 13:20:39 -0400
Subject: SNAT failure with OVN under Antelope
In-Reply-To:
References: <34841bdd-bb95-ff9a-258b-cfae26558627@uwo.ca>
Message-ID: <536805d2-f334-e94f-1415-2984a219cb65@uwo.ca>

On 2023-06-27 11:18, Roberto Bartzen Acosta wrote:
> Hi Gary,
>
> Wouldn't it be interesting to enable-chassis-as-gw on the compute
> nodes in case you want to use DVR? If that's the case, you need to
> map the external bridge (ovs-vsctl set open .
> external-ids:ovn-bridge-mappings=...); via ansible this is created
> automatically, but in the manual installation I didn't see any
> mention of it.

Our intention was to distribute the routing on our OVN cloud to take
advantage of DVR, as our provider network is just a tagged vlan in our
physical infrastructure. This avoids requiring dedicated network node(s)
and creates fewer bottlenecks. I had not set up any ovn-bridge-mappings,
as it was not mentioned in the manual install. I will look into it.

> The problem is basically that the port of the OVN LRP may not be in
> the same chassis as the VM that failed (since the CR-LRP will be
> where the first VM of that network was created). The suggestion is to
> remove the enable-chassis-as-gw from the compute nodes to allow the
> VM to forward traffic via tunneling/Geneve to the chassis where the
> LRP resides.

I forced a similar VM onto the same chassis as the working VM, and it was
able to communicate out. If we do want to keep multiple chassis as
gateways, would that be addressed with the ovn-bridge-mappings?

> ovs-vsctl remove open . external-ids ovn-cms-options="enable-chassis-as-gw"
> ovs-vsctl remove open . external-ids ovn-bridge-mappings
> ip link set br-provider-name down
> ovs-vsctl del-br br-provider-name
> systemctl restart ovn-controller
> systemctl restart openvswitch-switch

-- 
Gary Molenkamp                  Science Technology Services
Systems Administrator           University of Western Ontario
molenkam at uwo.ca                 http://sts.sci.uwo.ca
(519) 661-2111 x86882           (519) 661-3566
From molenkam at uwo.ca  Tue Jun 27 18:13:47 2023
From: molenkam at uwo.ca (Gary Molenkamp)
Date: Tue, 27 Jun 2023 14:13:47 -0400
Subject: SNAT failure with OVN under Antelope
In-Reply-To:
References: <34841bdd-bb95-ff9a-258b-cfae26558627@uwo.ca>
Message-ID: <8e291eeb-5845-e6b4-8778-fc7f889064f2@uwo.ca>

Thanks for the pointers; it looks like I'm starting to narrow it down.
Something is still confusing me, though.

> Wouldn't it be interesting to enable-chassis-as-gw on the compute
> nodes in case you want to use DVR? [...]
>
> The problem is basically that the port of the OVN LRP may not be in
> the same chassis as the VM that failed (since the CR-LRP will be
> where the first VM of that network was created). The suggestion is to
> remove the enable-chassis-as-gw from the compute nodes to allow the
> VM to forward traffic via tunneling/Geneve to the chassis where the
> LRP resides.

How does one support both use-case types?

If I want to use DVR via each compute node, then I must create the
br-provider bridge, set the chassis as a gateway, and map the bridge.
This seems to be breaking forwarding to the OVN LRP: the hypervisor/VM
with the working LRP works, but any other hypervisor is not tunneling
via Geneve.

Thanks as always, this is very informative.

Gary

From roberto.acosta at luizalabs.com  Tue Jun 27 18:15:20 2023
From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta)
Date: Tue, 27 Jun 2023 15:15:20 -0300
Subject: SNAT failure with OVN under Antelope
In-Reply-To: <536805d2-f334-e94f-1415-2984a219cb65@uwo.ca>
References: <34841bdd-bb95-ff9a-258b-cfae26558627@uwo.ca> <536805d2-f334-e94f-1415-2984a219cb65@uwo.ca>
Message-ID:

On Tue, Jun 27, 2023 at 2:20 PM Gary Molenkamp wrote:
> Our intention was to distribute the routing on our OVN cloud to take
> advantage of DVR, as our provider network is just a tagged vlan in our
> physical infrastructure. This avoids requiring dedicated network
> node(s) and creates fewer bottlenecks. I had not set up any
> ovn-bridge-mappings, as it was not mentioned in the manual install. I
> will look into it.
> [...]
> I forced a similar VM onto the same chassis as the working VM, and it
> was able to communicate out. If we do want to keep multiple chassis as
> gateways, would that be addressed with the ovn-bridge-mappings?

Verify your ml2 config file:

cat /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_vlan]
network_vlan_ranges = vlan:101:200,vlan:301:400

Note the name used to map the vlan ranges; in this example it is "vlan".

On the compute nodes, check whether the external bridge (usually called
br-provider) exists, or create it:

ovs-vsctl --no-wait -- --may-exist add-br br-provider -- set bridge br-provider protocols=OpenFlow10,OpenFlow12,OpenFlow13,OpenFlow14,OpenFlow15
ovs-vsctl --no-wait br-set-external-id br-provider bridge-id br-provider

Set the ovn-bridge-mappings using the network_vlan_ranges name and the OVS
external bridge name (it is exactly the same configuration applied on the
gateway node):

ovs-vsctl set open . external-ids:ovn-bridge-mappings=vlan:br-provider

Don't forget to enable DVR for FIPs:

vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ovn]
enable_distributed_floating_ip = True
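You can double-check that the mapping was applied with (same
bridge/physnet names as in the example above):

ovs-vsctl get open . external-ids:ovn-bridge-mappings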
> ovs-vsctl remove open . external-ids ovn-cms-options="enable-chassis-as-gw"
> [...]
>
> -- 
> Gary Molenkamp                  Science Technology Services
> Systems Administrator           University of Western Ontario
> molenkam at uwo.ca                 http://sts.sci.uwo.ca
> (519) 661-2111 x86882           (519) 661-3566

From jay at gr-oss.io  Tue Jun 27 18:29:17 2023
From: jay at gr-oss.io (Jay Faulkner)
Date: Tue, 27 Jun 2023 11:29:17 -0700
Subject: [tc] July 4 TC Meeting cancelled; next meeting July 11
Message-ID:

Hi all,

The next Technical Committee meeting, as currently scheduled, would fall
on July 4th, a US holiday. For this reason, we've cancelled the July 4th
meeting. The next scheduled meeting of the TC will be July 11th.

As always, if you need anything from us in the meantime, we're in
#openstack-tc.

Thanks,
Jay Faulkner
Technical Committee Vice-Chair

From roberto.acosta at luizalabs.com  Tue Jun 27 19:02:34 2023
From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta)
Date: Tue, 27 Jun 2023 16:02:34 -0300
Subject: SNAT failure with OVN under Antelope
In-Reply-To: <8e291eeb-5845-e6b4-8778-fc7f889064f2@uwo.ca>
References: <34841bdd-bb95-ff9a-258b-cfae26558627@uwo.ca> <8e291eeb-5845-e6b4-8778-fc7f889064f2@uwo.ca>
Message-ID:

On Tue, Jun 27, 2023 at 3:22 PM Gary Molenkamp wrote:

> Thanks for the pointers; it looks like I'm starting to narrow it down.
> Something is still confusing me, though.
> [...]
The suggestion is to remove the > enable-chassis-as-gw from the compute nodes to allow the VM to forward > traffic via tunneling/Geneve to the chassis where the LRP resides. > > ovs-vsctl remove open . external-ids > ovn-cms-options="enable-chassis-as-gw" ovs-vsctl remove open . > external-ids ovn-bridge-mappings ip link set br-provider-name down ovs-vsctl > del-br br-provider-name systemctl restart ovn-controller systemctl > restart openvswitch-switch > > > How does one support both use-case types? > > If I want to use DVR via each compute node, then I must create the > br-provider bridge, set the chassis as a gateway and map the bridge. This > seems to be breaking forwarding to the OVN LRP. The hypervisor/VM with > the working LRP works but any other hypervisor is not tunneling via Geneve. > https://docs.openstack.org/neutron/zed/ovn/faq/index.html The E/W traffic is "completely distributed in all cases." for OVN driver... It is natively supported and should work via openflow / tunneling / Geneve without any issues. The problem is that when you set the enable-chassis-as-gw flag you enable gateway router port scheduling for a chassis that may not have an external bridge mapped (and this breaks external traffic). You can trace the traffic where the VM is and check where it is breaking via datapath command: ovs-dpctl dump-flows But if you are facing problems on east/west traffic, please check your OVN settings (example): ovs-vsctl list open_vswitch - external_ids : {ovn-encap-ip="192.168.200.10", ovn-encap-type="geneve", ovn-remote="tcp:192.168.200.200:6642"}) ...and make sure geneve tunnels are established between all hypervisors (example): root at comp1:~# ovs-vsctl show Bridge br-int .... Port ovn-2e4ed2-0 Interface ovn-2e4ed2-0 type: geneve options: {csum="true", key=flow, remote_ip="192.168.200.11"} Port ovn-fc7744-0 Interface ovn-fc7744-0 type: geneve options: {csum="true", key=flow, remote_ip="192.168.200.30"} > Thanks as always, this is very informative. > > Gary > > > -- > Gary Molenkamp Science Technology Services > Systems Administrator University of Western Ontariomolenkam at uwo.ca http://sts.sci.uwo.ca > (519) 661-2111 x86882 (519) 661-3566 > > -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkloppenborg at rwts.com.au Tue Jun 27 19:34:43 2023 From: kkloppenborg at rwts.com.au (Karl Kloppenborg) Date: Tue, 27 Jun 2023 19:34:43 +0000 Subject: [Neutron Community] Neutron OVN vs OVS Message-ID: Hi Neutron community, Firstly, apologies if this has been asked before, a cursory search of the list rendered no information. At a high level, is OVN replacing OVS in terms of the preferred standard neutron deployment? I am seeing more and more about OVN on the mailing list and wondering if I should be investigating it to supersede OVS standalone. Again, apologies if this is vague/asked before. Kind Regards, Karl Kloppenborg. Openstack-Helm Team. 
From mnasiadka at gmail.com  Tue Jun 27 20:07:02 2023
From: mnasiadka at gmail.com (Michał Nasiadka)
Date: Tue, 27 Jun 2023 22:07:02 +0200
Subject: [kolla] Transitioning stable/wallaby to EOL
Message-ID: <9B38B0C5-5917-4055-862B-F2B6F4AFCB02@gmail.com>

Hello Koalas,

At our weekly meeting last week we decided that, due to the core team
size, we're unable to support that many stable branches - and with the
recent release of Antelope (2023.1) we are going to transition the
Wallaby branch to End of Life.

That means the branch will be removed, and a wallaby-eol tag will be
available for those that are still using that code for their deployments.

Best regards,
Michal

From haleyb.dev at gmail.com  Tue Jun 27 20:36:06 2023
From: haleyb.dev at gmail.com (Brian Haley)
Date: Tue, 27 Jun 2023 16:36:06 -0400
Subject: [Neutron Community] Neutron OVN vs OVS
In-Reply-To:
References:
Message-ID: <4aa775cd-b9b6-914f-0e2c-2e938ab8226b@gmail.com>

Hi Karl,

On 6/27/23 3:34 PM, Karl Kloppenborg wrote:
> Hi Neutron community,
>
> Firstly, apologies if this has been asked before; a cursory search of
> the list rendered no information.
>
> At a high level, is OVN replacing OVS in terms of the preferred standard
> Neutron deployment?
>
> I am seeing more and more about OVN on the mailing list and wondering if
> I should be investigating it to supersede OVS standalone.

Yes, in the Ussuri cycle a large effort was made to move all the
networking-ovn code into the Neutron tree, instead of having it be an
out-of-tree driver [0]. And finally in the Yoga cycle it was made the
default for devstack [1].

Since then the two major distros, Ubuntu and Red Hat, have made OVN the
default in their deployment tools, so going forward it is safe to assume
it will eventually be in the majority for installations.
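As a side note, a quick way to tell which backend an existing deployment
is running (assuming you have admin credentials) is the agent list - with
ML2/OVN you will see OVN Controller agents instead of the classic
L3/DHCP/Open vSwitch agents:

$ openstack network agent list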
-Brian

[0] https://blueprints.launchpad.net/neutron/+spec/neutron-ovn-merge
[1] https://review.opendev.org/c/openstack/devstack/+/791436

From kkloppenborg at rwts.com.au  Tue Jun 27 20:37:49 2023
From: kkloppenborg at rwts.com.au (Karl Kloppenborg)
Date: Tue, 27 Jun 2023 20:37:49 +0000
Subject: [Neutron Community] Neutron OVN vs OVS
In-Reply-To: <4aa775cd-b9b6-914f-0e2c-2e938ab8226b@gmail.com>
References: <4aa775cd-b9b6-914f-0e2c-2e938ab8226b@gmail.com>
Message-ID:

Hi Brian,

Thanks for a really detailed response. In light of that, I will discuss
moving openstack-helm in line with this with my teams.

Thanks,
Karl.

From: Brian Haley
Date: Wednesday, 28 June 2023 at 6:36 am
To: Karl Kloppenborg, openstack-discuss at lists.openstack.org
Subject: Re: [Neutron Community] Neutron OVN vs OVS

Hi Karl,

On 6/27/23 3:34 PM, Karl Kloppenborg wrote:
> Hi Neutron community,
> [...]
> I am seeing more and more about OVN on the mailing list and wondering if
> I should be investigating it to supersede OVS standalone.

Yes, in the Ussuri cycle a large effort was made to move all the
networking-ovn code into the Neutron tree, instead of having it be an
out-of-tree driver [0]. And finally in the Yoga cycle it was made the
default for devstack [1].

Since then the two major distros, Ubuntu and Red Hat, have made OVN the
default in their deployment tools, so going forward it is safe to assume
it will eventually be in the majority for installations.
de 2023 ?s 15:22, Gary Molenkamp > escreveu: > > Thanks for the pointers, itlooks like I'm starting to narrow it > down.? Something still confusing me, though. > >> >> I've built a Zed cloud, since upgraded to Antelope, using >> the Neutron >> Manual install method here: >> https://docs.openstack.org/neutron/latest/install/ovn/manual_install.html >> I'm using a multi-tenent configuration using geneve and >> the flat >> provider network is present on each hypervisor. Each >> hypervisor is >> connected to the physical provider network, along with >> the tenent >> network and is tagged as an external chassis under OVN. >> ???????? br-int exists, as does br-provider >> ???? ??? ovs-vsctl set open . >> external-ids:ovn-cms-options=enable-chassis-as-gw >> >> >> Any specific reason to enable gateway on compute nodes? >> Generally it's recommended to use controller/network nodes as >> gateway. What's your env(number of controllers, network, >> compute nodes)? >> >> >> Wouldn't it be interesting to enable-chassis-as-gw on the compute >> nodes, just in case you want to use DVR: If that's the case, you >> need to map the external bridge (ovs-vsctl set open . >> external-ids:ovn-bridge-mappings=...) via ansible this is created >> automatically, but in the manual installation I didn't see any >> mention of it. >> The problem is basically that the port of the OVN LRP may not be >> in the same chassis as the VM that failed (since the CR-LRP will >> be where the first VM of that network will be created).?The >> suggestion is to remove the enable-chassis-as-gw from the compute >> nodes to allow the VM to forward traffic via tunneling/Geneve to >> the chassis where the LRP resides. >> >> ovs-vsctl remove open . external-ids >> ovn-cms-options="enable-chassis-as-gw" ovs-vsctl remove open . >> external-ids ovn-bridge-mappings ip link set br-provider-name >> down ovs-vsctl del-br br-provider-namesystemctl restart >> ovn-controller systemctl restart openvswitch-switch >> > > How does one support both use-case types? > > If I want to use DVR via each compute node, then I must create the > br-provider bridge, set the chassis as a gateway and map the > bridge.? This seems to be breaking forwarding to the OVN LRP.??? > The hypervisor/VM with the working LRP works but any other > hypervisor is not tunneling via Geneve. > > > https://docs.openstack.org/neutron/zed/ovn/faq/index.html > The?E/W traffic is "completely distributed in all cases." for OVN > driver... It is natively supported and should work via openflow / > tunneling / Geneve without any issues. > > The problem is that when you set the enable-chassis-as-gw?flag you > enable gateway router port scheduling for a chassis that may not have > an external bridge mapped (and this breaks external traffic). E/W traffic looks good and each compute shows forwarding connections to the other compute. Each compute has the proper external bridge mapped.? ie: external_ids??????? : {hostname=compute05.cloud.sci.uwo.ca, ovn-bridge-mappings="provider:br-provider", ovn-cms-options=enable-chassis-as-gw, ovn-encap-ip="192.168.0.105", ovn-encap-type=geneve, ovn-remote="tcp:172.31.102.100:6642", rundir="/var/run/openvswitch", system-id="8e0fa17c-e480-4b60-9015-bd8833412561"} Likewise all geneve tunnels between the compute nodes are established. 
--
Gary Molenkamp
Science Technology Services
Systems Administrator
University of Western Ontario
molenkam at uwo.ca
http://sts.sci.uwo.ca
(519) 661-2111 x86882
(519) 661-3566

From jay at gr-oss.io Wed Jun 28 16:54:36 2023
From: jay at gr-oss.io (Jay Faulkner)
Date: Wed, 28 Jun 2023 09:54:36 -0700
Subject: [ironic] Capping off storyboard migration
Message-ID:

Hey all,

I've written a small tool[1] which should (I think; I can't test it all the way until I'm ready to actually change a thing) iterate through stories in a storyboard project, and close any open tasks as invalid with a message.

I'd like to point this script at all the Ironic-related Storyboard projects, closing all the issues with a message like the following:

"Hello Ironic contributor, thank you for filing this bug! We have migrated to the Launchpad bugtracker, located at https://bugs.launchpad.net/ironic. If this bug remains valid, please open an issue in Launchpad with the URL to this story for context."

Are folks onboard for having all the tasks closed as invalid in these Ironic-related projects? Any feedback on the messaging?

Thanks,
Jay Faulkner

P.S. For other projects who might want to use the tool as well, wait until I use it for Ironic to help me smooth out any rough edges and then have at it!

1: https://github.com/jayofdoom/sb-issue-closer

From tkajinam at redhat.com Thu Jun 29 04:46:09 2023
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Thu, 29 Jun 2023 13:46:09 +0900
Subject: [oslo][largescale-sig] HTTP base direct RPC oslo.messaging driver contribution
In-Reply-To: <926bd380c381258c1885782ee5735d@cweb02.nmdf.nhnsystem.com>
References: <926bd380c381258c1885782ee5735d@cweb02.nmdf.nhnsystem.com>
Message-ID:

Stephen and I from oslo core met the LINE team at Vancouver and discussed this topic, and in short we (at least Stephen and I) agreed with the proposal. Let me dump some of the topics we discussed there.
# Sorry, I thought I had posted this somewhere earlier...

Some concerns were raised about adding consul as a new core component, but we agreed with using it for the initial implementation. There were some suggestions to rely on the existing service records stored in the DB, but that would need some amount of work to address the following points:
- Not all OpenStack services store service records in the DB (e.g. ceilometer).
- Even where services do store service records in the DB, the available information there is not enough:
  - We have to maintain a mapping between host and reachable endpoint.
  - Some services spawn workers and need an endpoint *per worker*, and we need the endpoint list for all workers.

Random port assignment from a range is another topic we discussed there, and we eventually agreed that this is the required approach, mainly because it's hard to statically allocate port numbers for all services running on a single node (the required number of ports can differ based on services/workers/etc). So we agreed that the proposed way to dynamically allocate available ports from the given range is a good solution.

Also, LINE has already deployed this feature in their production and it has been proven to work well at scale. As we don't see any technical blockers at this moment, we agreed with moving forward with the proposed architecture.
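As a minimal illustration of that dynamic port allocation approach (a sketch only, not the LINE driver's actual code; the helper name and port range are made up):

    import socket

    def bind_first_free_port(host, port_range):
        # Try each port in the configured range until one binds;
        # the OS rejects ports already claimed by another worker.
        for port in port_range:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                sock.bind((host, port))
                sock.listen(128)
                return sock, port
            except OSError:
                sock.close()
        raise RuntimeError("no free port left in the configured range")

    # e.g. each worker claims its own port from the 10000-10099 range
    listener, port = bind_first_free_port("0.0.0.0", range(10000, 10100))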
There can be some improvement, like adding a driver mechanism for the cluster backend, but we would avoid expanding our current scope until someone is really interested in working on such topics.

Finally, we suggested updating the test plan and adding a devstack (at least single-node) job in the gate so that we can properly maintain the feature.

Other people might have additional thoughts, but I hope the above outcomes make the current proposal more clear to all.

Thank you,
Takashi

On Sat, Jun 10, 2023 at 6:14 PM Masahito Muroi wrote:
> Hi all,
>
> We have pushed the spec. Please feel free to review it.
> https://review.opendev.org/c/openstack/oslo-specs/+/885809
>
> best regards,
> Masahito
>
> -----Original Message-----
> From: "Jay Faulkner"
> To: "Julia Kreger"
> Cc: "Masahito Muroi"; "Takashi Kajinami" <tkajinam at redhat.com>; openstack-discuss; "Herve Beraud" <hberaud at redhat.com>; "Arnaud Morin"
> Sent: 2023/06/07 (Wed) 07:31 (GMT+09:00)
> Subject: Re: [oslo][largescale-sig] HTTP base direct RPC oslo.messaging driver contribution
>
> I'm interested in this as well, please add me to the spec if you need additional brains :). I'll also be at the summit if you'd like to discuss any of it in person.
>
> --
> Jay Faulkner
> Ironic PTL
>
> On Tue, Jun 6, 2023 at 3:14 PM Julia Kreger wrote:
>> Jumping in because the thread has been rather reminiscent of the json-rpc messaging feature ironic carries so our users don't have to run with rabbit. I suspect Ironic might be happy to propose it to oslo.messaging if this http driver is acceptable.
>>
>> Please feel free to add me as a reviewer on the spec.
>>
>> -Julia
>>
>> On Tue, Jun 6, 2023 at 2:10 PM Masahito Muroi wrote:
>>> Hi,
>>>
>>> Thank you everyone for the kind replies.
>>>
>>> I got the PTG situation. Submitting the spec seems to be a nice first step.
>>>
>>> We don't have a public repository for the driver because of internal repository structure reasons. The repository is really tied to the current internal repository structure; cleaning it up would take time, so we didn't do that extra work.
>>>
>>> best regards.
>>> Masahito
>>>
>>> -----Original Message-----
>>> From: "Takashi Kajinami"
>>> To: "Masahito Muroi"
>>> Cc: openstack-discuss; "Herve Beraud" <hberaud at redhat.com>
>>> Sent: 2023/06/06 (Tue) 18:50 (GMT+09:00)
>>> Subject: Re: [oslo] HTTP base direct RPC oslo.messaging driver contribution
>>>
>>> Hello,
>>>
>>> This is very interesting and I agree that a spec would be the good way to move this forward.
>>>
>>> We have not requested oslo sessions in the upcoming PTG, but Stephen and I are attending it so we will be available for the discussion.
>>>
>>> Because some other cores such as Herve won't be there, we'd need to continue further discussions after the PTG in spec review, but if that early in-person discussion sounds helpful for you then I'll reserve a table.
>>>
>>> Thank you,
>>> Takashi
>>>
>>> On Tue, Jun 6, 2023 at 4:48 PM Herve Beraud wrote:
>>>> Hello,
>>>>
>>>> Indeed, Oslo doesn't have PTG sessions.
>>>>
>>>> Best regards
>>>>
>>>> On Mon, Jun 5, 2023 at 10:42, Masahito Muroi wrote:
>>>>> Hello Herve,
>>>>>
>>>>> Thank you for the quick reply. Let us prepare the spec and submit it.
>>>>>
>>>>> btw, does the oslo team have a PTG at the upcoming summit? We'd like to get quick feedback on the spec if time allows in the PTG. But it looks like the oslo team won't have a PTG there.
>>>>>
>>>>> best regards,
>>>>> Masahito
>>>>>
>>>>> -----Original Message-----
>>>>> From: "Herve Beraud"
>>>>> To: "Masahito Muroi"
>>>>> Cc: openstack-discuss
>>>>> Sent: 2023/06/05 (Mon) 17:21 (GMT+09:00)
>>>>> Subject: Re: [oslo] HTTP base direct RPC oslo.messaging driver contribution
>>>>>
>>>>> Hello Masahito,
>>>>>
>>>>> Submission to oslo-spec is a good starting point.
>>>>>
>>>>> Best regards
>>>>>
>>>>> On Mon, Jun 5, 2023 at 10:04, Masahito Muroi wrote:
>>>>>> Hi oslo team,
>>>>>>
>>>>>> We'd like to contribute an HTTP base direct RPC driver to the oslo.messaging community. We have developed the HTTP base driver internally. We have been using the driver in production with over 10K hypervisors now.
>>>>>>
>>>>>> I checked the IRC meeting log of the oslo team[1], but there is no regular meeting in 2023. Is it okay to submit an oslo-spec[2] to propose the driver directly, or is there another good place to discuss the feature before submitting a spec?
>>>>>>
>>>>>> 1. https://meetings.opendev.org/#Oslo_Team_Meeting
>>>>>> 2. https://opendev.org/openstack/oslo-specs
>>>>>>
>>>>>> best regards,
>>>>>> Masahito
>>>>
>>>> --
>>>> Hervé Beraud
>>>> Senior Software Engineer at Red Hat
>>>> irc: hberaud
>>>> https://github.com/4383/

From juliaashleykreger at gmail.com Thu Jun 29 13:46:27 2023
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Thu, 29 Jun 2023 06:46:27 -0700
Subject: [ironic] Capping off storyboard migration
Message-ID:

I'm semi-onboard with closing the items out. Surely we could determine some logic? Maybe, dunno.

No concern with the messaging.

-Julia

On Wed, Jun 28, 2023 at 9:59 AM Jay Faulkner wrote:
> [...]

From Arne.Wiebalck at cern.ch Thu Jun 29 14:30:07 2023
From: Arne.Wiebalck at cern.ch (Arne Wiebalck)
Date: Thu, 29 Jun 2023 14:30:07 +0000
Subject: [ironic] Capping off storyboard migration
Message-ID:

Hi,

I used storyboard not only for bugs, but also to jot down some ideas for potential Ironic features/improvements, sometimes even with initial thoughts on how to implement them. Maybe others have done this as well, and I wonder if we would lose some of these ideas by just marking all open items as invalid.

Cheers,
Arne

________________________________________
From: Julia Kreger
Sent: Thursday, 29 June 2023 15:46
To: Jay Faulkner
Cc: OpenStack Discuss
Subject: Re: [ironic] Capping off storyboard migration
> [...]
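Mechanically, such a closer can be quite small. A rough sketch against the Storyboard REST API follows; the endpoint paths, field names and token handling here are assumptions from memory, not Jay's actual tool (see the sb-issue-closer link above for the real thing):

    import requests

    SB_API = "https://storyboard.openstack.org/api/v1"
    HEADERS = {"Authorization": "Bearer <oauth-token>"}  # hypothetical token
    MESSAGE = ("We have migrated to the Launchpad bugtracker at "
               "https://bugs.launchpad.net/ironic. If this bug remains valid, "
               "please re-file there with the URL to this story for context.")

    def close_open_tasks(project_id):
        # Fetch the project's active stories ...
        stories = requests.get(f"{SB_API}/stories",
                               params={"project_id": project_id,
                                       "status": "active"},
                               headers=HEADERS).json()
        for story in stories:
            # ... leave the explanatory comment on each story ...
            requests.post(f"{SB_API}/stories/{story['id']}/comments",
                          json={"content": MESSAGE}, headers=HEADERS)
            # ... and mark every task that is not merged as invalid.
            tasks = requests.get(f"{SB_API}/stories/{story['id']}/tasks",
                                 headers=HEADERS).json()
            for task in tasks:
                if task["status"] != "merged":
                    requests.put(f"{SB_API}/tasks/{task['id']}",
                                 json={"status": "invalid"}, headers=HEADERS)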
From jay at gr-oss.io Thu Jun 29 14:31:28 2023
From: jay at gr-oss.io (Jay Faulkner)
Date: Thu, 29 Jun 2023 07:31:28 -0700
Subject: [ironic] Capping off storyboard migration
Message-ID:

If you have a specific suggestion, I'm game to hear it -- I don't have any ideas what kind of logic we could apply to this other than having a human perform toil, and I don't think that's a valuable use of my time (or really anyone else's) after doing a spot check of open issues in some of the projects.

-Jay

On Thu, Jun 29, 2023 at 6:46 AM Julia Kreger wrote:
> [...]

From jay at gr-oss.io Thu Jun 29 15:01:55 2023
From: jay at gr-oss.io (Jay Faulkner)
Date: Thu, 29 Jun 2023 08:01:55 -0700
Subject: [ironic] Capping off storyboard migration
Message-ID:

My thought is actually the opposite -- right now those ideas are open, in storyboard, sitting there for absolutely nobody to read. We don't look in it anymore at all.

If we close them, and put a message on them encouraging movement over to the launchpad, my hope is that any of those issues you've found valuable, once you see it's closed, you may go reopen it with a link in launchpad.

Let's be clear though: the status quo is not okay. We have old bugs in both storyboard and launchpad. Bugs in our community, right now, are rarely used and are not providing value. By keeping the status quo, we are implying to our users that we are thinking about those bugs -- when we aren't. I'd rather have a fresh start than persist the status quo.

-Jay

On Thu, Jun 29, 2023 at 7:39 AM Arne Wiebalck wrote:
> [...]

From Arne.Wiebalck at cern.ch Thu Jun 29 15:31:22 2023
From: Arne.Wiebalck at cern.ch (Arne Wiebalck)
Date: Thu, 29 Jun 2023 15:31:22 +0000
Subject: [ironic] Capping off storyboard migration
Message-ID:

Hi Jay,

My point is mostly the distinction between bugs and feature ideas:

- bugs may not be relevant anymore, submitters may have moved on, no one is looking at storyboard to solve them ... so, yes, fresh start!

- suggestions may be worth keeping: if submitters have moved on, there is no one to vouch for an idea upon ticket closure, and it will be lost.

Now, I have no idea if storyboard is 99% outdated bugs or full of really cool ideas ... nor do I have a brilliant (read: time-neutral) idea how to find the ideas worth keeping :)

Cheers,
Arne

________________________________________
From: Jay Faulkner
Sent: Thursday, 29 June 2023 17:01
To: Arne Wiebalck
Cc: OpenStack Discuss
Subject: Re: [ironic] Capping off storyboard migration
> [...]

From radoslaw.piliszek at gmail.com Thu Jun 29 16:18:43 2023
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Thu, 29 Jun 2023 18:18:43 +0200
Subject: [ironic] Capping off storyboard migration
Message-ID:

Sharing my 2 cents:

Well, you can always record what the script does and archive the list of closed stories with their titles, descriptions and comments so that anyone in the future could come back to them. If it's not a huge burden on the repo, you can even commit it somewhere to keep forever. FWIW, this different format might even help correlate them quickly to similar ideas of others. ;-)

Radek
-yoctozepto

On Thu, 29 Jun 2023 at 17:33, Arne Wiebalck wrote:
> [...]

From kkloppenborg at rwts.com.au Thu Jun 29 17:08:30 2023
From: kkloppenborg at rwts.com.au (Karl Kloppenborg)
Date: Thu, 29 Jun 2023 17:08:30 +0000
Subject: [IRONIC] Firewall drivers / implementation
Message-ID:

Hi Team,

We have Ironic deployed and configured to deploy baremetal on vlans attached to the neutron routers of a tenancy/project.

However, when assigned a floating IP, there's no firewall and the server is completely exposed.

I cannot seem to see any information on Ironic firewalls; how are others achieving this?

Any suggestions would be greatly appreciated.

Thanks,
Karl Kloppenborg.
Openstack-Helm Team.
From atidor12 at gmail.com Thu Jun 29 17:22:19 2023
From: atidor12 at gmail.com (altidor JB)
Date: Thu, 29 Jun 2023 13:22:19 -0400
Subject: [Neutron] Tap as a service
Message-ID:

Hello,

Can anyone help me with the setup of Tap as a Service on OpenStack with Juju and MAAS? I've seen tutorials for installing it on DevStack but nothing for large-scale setups.

So far I've tried doing a similar process to this: https://zhuanlan.zhihu.com/p/101599786 - basically, install the apt packages on the neutron_api lxd and on the compute nodes, with the configuration done as in the above tutorial. Sounds simple enough. But the tap_services and tap_flows stay in Down state.

Any ideas??? Also, I'm open to another way to do port mirroring if this doesn't work.

JB

From melwittt at gmail.com Thu Jun 29 18:59:20 2023
From: melwittt at gmail.com (melanie witt)
Date: Thu, 29 Jun 2023 11:59:20 -0700
Subject: instance console something went wrong, connection is closed | Wallaby DCN
In-Reply-To:
References: <55eff3d6-b852-840e-80e0-26cbb5a58aac@gmail.com> <3a7830e4-9f09-210c-c1c5-0ee27d8945ff@gmail.com>
Message-ID:

On 06/25/23 03:50, Swogat Pradhan wrote:
> Hi,
> After doing a console url show after migration, I am still unable to access the console.
>
> My site consists of 1 central site and 2 DCN sites. Consoles for central and DCN02 are working fine without any issues. But when I am creating an instance for DCN01, the console for the instance is not coming up (attached image for reference).
>
> Today I created 3 different VMs using the same flavor, image and security group; the instances were created on the same compute host. The console was not accessible, so I shelved and unshelved all 3 instances, after which I was able to access the console for 2 of those VMs and am still unable to access the console of the 3rd VM no matter what I do.

Apologies for the delayed reply. It sounds like there may be some type of problem with regard to the network connection from the novnc console proxy service to the DCN01 site, rather than something with the console proxy itself, given that things work fine with DCN02 and central. You're seeing "Cannot write data: Broken pipe" in the connection between the compute host and the console proxy, showing a connection being broken after being established.

As for why shelve and unshelve sometimes helps, it may be because the network port bindings are updated to inactive during the shelve and then updated to active during the unshelve. Something about redoing the port bindings is helping the situation sometimes. It may be worthwhile to check if there is anything different between the networks/ports of the instances with working consoles vs the instances with non-working consoles.

-melwitt

> On Sat, Jun 24, 2023 at 2:00 AM melanie witt wrote:
>> On 06/22/23 20:07, Swogat Pradhan wrote:
>>> Hi Mel,
>>> Thank you for your response.
>>> I am facing issues with the instance console (vnc) in the openstack dashboard. Most of the time I shelve the instance and unshelve the instance to get the console. But there are some VMs I created which are not working even after shelve/unshelve.
>>>
>>> I have used the same director to deploy a total of a central and 2 edge sites. This issue is happening on a single edge site. Cold migration also helps in some situations.
>>
>> OK, you didn't mention whether requesting a new console 'openstack console url show --vnc <server>' gets you a working console after a migration (or other event where you see the console stop working). I'm trying to determine whether the behavior you're seeing is expected or a bug. After an instance is moved to a different compute node than the one it was on when the console was started, that console is not expected to work anymore. And a new console needs to be started.
>>
>> Can you give steps for reproducing the issue? Maybe that will provide more clarity.
>>
>> -melwitt
>>
>>> On Fri, Jun 23, 2023 at 12:42 AM melanie witt wrote:
>>>> On 06/22/23 01:08, Swogat Pradhan wrote:
>>>>> Hi,
>>>>> Please find the below log:
>>>>> [root at dcn01-hci-1 libvirt]# cat virtqemud.log
>>>>> 2023-06-22 07:40:01.575+0000: 350319: error : virNetSocketReadWire:1804 : End of file while reading data: Input/output error
>>>>> 2023-06-22 07:40:01.575+0000: 350319: error : virNetSocketWriteWire:1844 : Cannot write data: Broken pipe
>>>>>
>>>>> I think this is causing the problem of not getting the instance console.
>>>>
>>>> When you say "instance console" are you referring to an interactive console like VNC, or are you talking about the console log for the instance?
>>>>
>>>> If it's the interactive console: if you have a console open and then migrate the instance, that console will not be moved along with the instance. When a user requests a console, the console proxy service establishes a connection to the compute host where the instance is located. The proxy doesn't know when an instance has been moved though, so if the instance is moved, the user will need to request a new console (which will establish a new connection to the new compute host).
>>>>
>>>> Is that the behavior you are seeing?
>>>>
>>>> -melwitt
>>>>
>>>>> On Fri, Jun 2, 2023 at 11:27 AM Swogat Pradhan wrote:
>>>>>> Update:
>>>>>> If I am performing any activity like migration or resize of an instance whose console is accessible, the console becomes inaccessible, giving out the following error: something went wrong, connection is closed
>>>>>>
>>>>>> There was 1 other instance whose console was not accessible, and I did a shelve and unshelve and suddenly the instance console became accessible.
>>>>>>
>>>>>> This is a peculiar behavior and I don't understand where the issue is.
>>>>>>
>>>>>> With regards,
>>>>>> Swogat Pradhan
>>>>>>
>>>>>> On Fri, Jun 2, 2023 at 11:19 AM Swogat Pradhan wrote:
>>>>>>> Hi,
>>>>>>> I am creating instances in my DCN site and I am unable to get the console sometimes, error: something went wrong, connection is closed
>>>>>>>
>>>>>>> I have 3 instances now running on my hci02 node and there is console access on 1 of the VMs; for the other two I am not getting the console. I have used the same flavor, same image, same security group for the VMs.
>>>>>>>
>>>>>>> Please suggest what can be done.
>>>>>>>
>>>>>>> With regards,
>>>>>>> Swogat Pradhan

From skaplons at redhat.com Fri Jun 30 07:21:47 2023
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Fri, 30 Jun 2023 09:21:47 +0200
Subject: [IRONIC] Firewall drivers / implementation
Message-ID: <5116769.M0otBKz7qh@p1>

Hi,

On Thursday, 29 June 2023 at 19:08:30 CEST, Karl Kloppenborg wrote:
> [...]

For a firewall on the Neutron router level there is the neutron-fwaas project [1]. Did you check that?

[1] https://docs.openstack.org/neutron/latest/admin/fwaas-v2-scenario.html

--
Slawek Kaplonski
Principal Software Engineer
Red Hat

From katonalala at gmail.com Fri Jun 30 07:28:23 2023
From: katonalala at gmail.com (Lajos Katona)
Date: Fri, 30 Jun 2023 09:28:23 +0200
Subject: [Neutron] Tap as a service
Message-ID:

Hi,

If you have technical questions I can help (I hope), but sadly I don't know if deployment tool integration is ready for tap-as-a-service.

Lajos Katona (lajoskatona)

altidor JB wrote (on Thu, Jun 29, 2023 at 19:31):
> [...]

From alex at sec.in.tum.de Fri Jun 30 08:11:05 2023
From: alex at sec.in.tum.de (Alexander Luedtke)
Date: Fri, 30 Jun 2023 10:11:05 +0200
Subject: Problem to login to skyline
Message-ID:

Hello Everyone,

I have made a setup for testing OpenStack Antelope on CentOS. Nearly everything works fine; in particular, I can log in to the Horizon dashboard without problems.

But when I installed Skyline (via the tar package on the OpenStack page, which starts nginx and gunicorn), it looks good - meaning the login page shows up and no errors are reported anywhere. Just when I try to log in, it doesn't work - I just get a login error on the webpage which says that the login credentials are wrong! Funny thing: not one error in any of the skyline logs, not even in debug mode. Keystone doesn't show an error either.

Maybe someone here had a similar problem and can give me a hint - I honestly have no idea where else to look.
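One quick way to rule the credentials in or out here, independent of Skyline (a generic Keystone check, not Skyline-specific; hosts and names are placeholders):

    # Issue a token directly against Keystone with the exact credentials
    # typed into the Skyline login page; if this fails too, the problem
    # is the credentials/domain, not Skyline.
    openstack --os-auth-url http://<keystone-host>:5000/v3 \
        --os-username <user> --os-password <password> \
        --os-user-domain-name Default \
        --os-project-domain-name Default \
        --os-project-name <project> \
        token issue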
Thanks a lot,

Alex

From ralonsoh at redhat.com Fri Jun 30 08:28:24 2023
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Fri, 30 Jun 2023 10:28:24 +0200
Subject: [neutron] Neutron drivers meeting at 1400UTC
Message-ID:

Hello Neutrinos:

This is a heads-up for today's meeting at 1400 UTC. We have 3 RFE proposals today: https://wiki.openstack.org/wiki/Meetings/NeutronDrivers.

See you later!

From bxzhu_5355 at 163.com Fri Jun 30 08:45:04 2023
From: bxzhu_5355 at 163.com (Boxiang Zhu)
Date: Fri, 30 Jun 2023 16:45:04 +0800
Subject: Problem to login to skyline
Message-ID: <6EEFE1E9-043B-4895-B036-48D48228B4DF@163.com>

Hi Alex,

First, you can open a new issue about this[0]. We will track it : )

Then, you need to provide more info, such as how you run the skyline-console and skyline-apiserver, the config of your skyline-apiserver, and so on. From your description, you got them from tar packages, rather than installing them with kolla-ansible (a deploy tool).

Thanks,
Boxiang

[0]: https://bugs.launchpad.net/skyline-apiserver/+bugs

> On Jun 30, 2023, at 16:11, Alexander Luedtke wrote:
>> [...]

From christian.rohmann at inovex.de Fri Jun 30 09:00:08 2023
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Fri, 30 Jun 2023 11:00:08 +0200
Subject: [OPENSTACK][rabbitmq] using quorum queues
Message-ID:

On 18/06/2023 04:04, Satish Patel wrote:
> Great! This is good to know that Quorum is a good solution.

Everybody in this thread seems to have good experiences with using quorum queues, and RabbitMQ themselves clearly communicate that quorum queues are the future and classic queues will be gone soon:

* "Classic mirrored queues were deprecated in RabbitMQ version 3.9" - https://www.rabbitmq.com/migrate-mcq-to-qq.html
* https://blog.rabbitmq.com/posts/2021/08/4.0-deprecation-announcements/#removal-of-classic-queue-mirroring

I wonder why there is not more effort to make this shift, by ...

a) making it an OpenStack TC goal for all services to use quorum queues by default and to initialize any new queues as quorum queues. Especially since "An oslo.messaging-compatible message queue" is one of the base services [2] overseen by the TC. RabbitMQ comes up time and again when talking about operational issues when running OpenStack. Steps to ensure this vital piece runs as smoothly as possible are certainly worth discussing, in my humble opinion.

I don't know if anyone on the TC reads this, but does it make sense to propose such a goal to https://opendev.org/openstack/governance ?

b) having deployment tooling like openstack-ansible, kolla-ansible, ... support setting the oslo.messaging options [1] or even make quorum queues the new default.

Regards

Christian

[1] https://docs.openstack.org/releasenotes/oslo.messaging/yoga.html#relnotes-12-13-0-stable-yoga
[2] https://governance.openstack.org/tc/reference/base-services.html#current-list-of-base-services

From noonedeadpunk at gmail.com Fri Jun 30 09:45:55 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Fri, 30 Jun 2023 11:45:55 +0200
Subject: [OPENSTACK][rabbitmq] using quorum queues
Message-ID:

Hey,

1. I'm not sure that's a proper community goal, simply because it boils down to individual deployments and affects only a minority of projects, like oslo.messaging and the deployment projects. Moreover, this affects only the rabbit driver, which is the most popular option in oslo.messaging but not the only one. Usually a community goal is used when you need to apply changes to the absolute majority of services, like in the case of SQLAlchemy. So here I think we should rather discuss the default behaviour of the rabbit driver in oslo.messaging and the default value of rabbit_quorum_queue, which is basically up to the oslo team to decide.

2. In OpenStack-Ansible we already agreed at the latest PTG to make quorum queues the new default and introduce a migration to them. So it should be included in the 2023.2 (Bobcat) release, keeping the upgrade path at the very least to 2024.1, which will be the next SLURP.

On Fri, Jun 30, 2023 at 11:05, Christian Rohmann wrote:
> [...]
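For reference, the oslo.messaging switch being discussed here is a single option on the rabbit driver (per the Yoga release notes linked above); roughly:

    [oslo_messaging_rabbit]
    # Create new queues as RabbitMQ quorum queues instead of
    # classic (mirrored) queues; existing queues must be migrated.
    rabbit_quorum_queue = true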
From bram.kranendonk at nl.team.blue Fri Jun 30 11:48:15 2023
From: bram.kranendonk at nl.team.blue (Bram Kranendonk)
Date: Fri, 30 Jun 2023 11:48:15 +0000
Subject: [kolla] kolla-toolbox Ubuntu 22 Erlang RMQ dependencies
Message-ID: <73508178567141aab7bfd076400780a7@nl.team.blue>

Hi all,

I'm trying to build the kolla-toolbox image for Ubuntu 22 source for 2023.1 using kolla v16.0.0. This is however not possible due to broken packages for Erlang/RabbitMQ (tried without the package cache):

INFO:kolla.common.utils.kolla-toolbox: rabbitmq-server : Depends: erlang-base (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  erlang-base-hipe (< 1:26.0) but it is not installable or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-crypto (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-eldap (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-inets (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-mnesia (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-os-mon (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-parsetools (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-public-key (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-runtime-tools (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-ssl (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-syntax-tools (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-tools (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:  Depends: erlang-xmerl (< 1:26.0) but 1:26.0.2-1rmq1ppa1~ubuntu22.04.1 is to be installed or
INFO:kolla.common.utils.kolla-toolbox:  esl-erlang (< 1:26.0) but it is not installable
INFO:kolla.common.utils.kolla-toolbox:E: Unable to correct problems, you have held broken packages.

I saw a recent commit that pins the RabbitMQ version, but this commit was reverted later on. Does anyone have an idea how I can fix this issue?

Thanks,

Bram Kranendonk
System Engineer
Oostmaaslaan 71 (15e etage)
3063 AN Rotterdam
The Netherlands

From atidor12 at gmail.com Fri Jun 30 12:10:41 2023
From: atidor12 at gmail.com (altidor JB)
Date: Fri, 30 Jun 2023 08:10:41 -0400
Subject: [Neutron] Tap as a service
Message-ID:

Thanks! Basically I need to port-mirror traffic to an IDS instance. TaaS seemed the most promising option. I couldn't find a documented way of installing it on OpenStack with Juju and MAAS, and tried to do something similar to the DevStack multi-node setup (link). But my tap flows and ports stay down. Do you know of a better option to port-mirror?

My logic was: install TaaS on the neutron lxd and all the nodes + the necessary configurations. Is there something I'm missing?

JB

On Fri, Jun 30, 2023, 03:28 Lajos Katona wrote:
> [...]

From juliaashleykreger at gmail.com Fri Jun 30 13:15:31 2023
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Fri, 30 Jun 2023 06:15:31 -0700
Subject: [IRONIC] Firewall drivers / implementation
In-Reply-To: <5116769.M0otBKz7qh@p1>
References: <5116769.M0otBKz7qh@p1>
Message-ID:

Thanks for the pointer Slawek!

I am wondering if the OP is thinking of security groups, and if so, that is done through an ML2 plugin mechanism at the switch-level configuration. However, very few ML2 plugins have supported applying security groups to switches, because the translation can be difficult or the switches don't support packet inspection without performance degradation.

On Fri, Jun 30, 2023 at 12:27 AM Slawek Kaplonski wrote:
> [...]
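Coming back to the kolla-toolbox Erlang conflict above: one workaround worth trying (an assumption based on the version constraint in the error, not a verified kolla fix) is to pin Erlang below 26 before rabbitmq-server is installed, e.g. with an apt preferences file baked into the image build:

    # /etc/apt/preferences.d/erlang
    # Keep Erlang on the 25.x series so rabbitmq-server's "< 1:26.0"
    # dependency can be satisfied from the Erlang/RabbitMQ PPA.
    Package: erlang*
    Pin: version 1:25.*
    Pin-Priority: 1001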
From elod.illes at est.tech Fri Jun 30 14:34:52 2023
From: elod.illes at est.tech (Előd Illés)
Date: Fri, 30 Jun 2023 14:34:52 +0000
Subject: [release] Release countdown for week R-13, July 03-07
Message-ID:

Development Focus
-----------------

The Bobcat-2 milestone is next week, on July 6th, 2023! 2023.2 Bobcat-related specs should now be finalized so that teams can move to implementation ASAP. Some teams observe specific deadlines on the second milestone (mostly spec freezes): please refer to https://releases.openstack.org/bobcat/schedule.html for details.

General Information
-------------------

Libraries need to be released at least once per milestone period. Next week, the release team will propose releases for any library that has not been otherwise released since milestone 1. PTLs and release liaisons, please watch for these and give a +1 to acknowledge them. If there is some reason to hold off on a release, let us know that as well. A +1 would be appreciated, but if we do not hear anything at all by the end of the week, we will assume things are OK to proceed.

Remember that non-library deliverables that follow the cycle-with-intermediary release model should have an intermediary release before milestone-2. Those that haven't will be proposed to switch to the cycle-with-rc model, which is more suited to deliverables that are released only once per cycle.

Next week is also the deadline to freeze the contents of the final release. All new '2023.2 Bobcat' deliverables need to have a deliverable file in https://opendev.org/openstack/releases/src/branch/master/deliverables and need to have done a release by milestone-2. Changes proposing those deliverables for inclusion in 2023.2 Bobcat have been posted; please update them with an actual release request before the milestone-2 deadline if you plan on including that deliverable in 2023.2 Bobcat, or -1 if you need one more cycle to be ready.

Upcoming Deadlines & Dates
--------------------------

Bobcat-2 Milestone: July 6th, 2023
Final 2023.2 Bobcat release: October 4th, 2023

From jay at gr-oss.io Fri Jun 30 17:03:53 2023
From: jay at gr-oss.io (Jay Faulkner)
Date: Fri, 30 Jun 2023 10:03:53 -0700
Subject: [ironic] Capping off storyboard migration
Message-ID:

Hey Radek,

One of the nice things is that the stories will all still be in storyboard, along with their tasks and all associated information. Automatically closing the issues does not remove the history, but instead changes the status quo from "users/contributors who submitted those are being ignored" to "users/contributors who submitted those now understand where to go to get attention".

Given there's also a giant backlog of older launchpad bugs to triage before we are "caught up" as a project, I intend on placing extra effort there. Ironic's bug-tracking situation is an unfortunate reality, and I think taking action -- even if it's not the ideal action -- is better than maintaining the status quo.
-JayF

From kkloppenborg at rwts.com.au Fri Jun 30 18:05:58 2023
From: kkloppenborg at rwts.com.au (Karl Kloppenborg)
Date: Fri, 30 Jun 2023 18:05:58 +0000
Subject: [IRONIC] Firewall drivers / implementation
Message-ID:

Hi Team!

Firstly, thank you for your replies, I really appreciate it. It's probably worth me outlining how we do this currently. I have attached a screenshot of the network topology as seen from the horizon dashboard. The "Management" node is a VM; "HCI-POC-1" is a Bare Metal instance.

Essentially, each tenancy has:

1. Two neutron routers set up and attached to the external NORTHBOUND VLAN network, where neutron gets an external public IP.
   * The routers are separated into IPv4 and IPv6 due to BGP limitations in OpenStack currently.
2. A VXLAN tenancy network created and attached to the neutron routers; this is used for VM connectivity.
3. A VLAN bare metal network created with the VLAN ID being the same as the VXLAN ID; this is then attached to the IPv4 router (we only support IPv4 on BM currently).
   * This then uses networking-generic-switch to allow ironic to configure the switchports as needed.

So when a floating IP is assigned to a BM node, it's attached on the neutron router itself.

Naturally, iptables security groups don't work in this setup because at no point does any flow pass a br-int or compute node; the traffic traverses the L3 agent and its neutron router before going directly back out the VLAN BM network and hitting the relevant node.

However, could firewalling be achieved by using the Open vSwitch firewall driver?
https://docs.openstack.org/neutron/latest/admin/config-ovsfwdriver.html

How are others achieving this? Surely people are just leaving BMs out of the mix when it comes to security groups!

Also, I should mention, for those following along with this, we're offering paid consulting time on this issue for anyone who feels they're up for it!

Thanks,
Karl.

From: Julia Kreger
Date: Friday, 30 June 2023 at 11:15 pm
To: Slawek Kaplonski
Cc: openstack-discuss at lists.openstack.org, Karl Kloppenborg
Subject: Re: [IRONIC] Firewall drivers / implementation
> [...]
[Attachments scrubbed: Screenshot 2023-07-01 at 3.46.09 am.png and Screenshot 2023-07-01 at 3.58.47 am.png - the network topology screenshots referenced above]
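On the Open vSwitch firewall driver question: that driver is enabled in the OVS agent's own config (a sketch; the file name varies by deployment), but note that it only filters ports the OVS agent itself plugs on a node. Under the setup described above, where bare-metal ports are programmed on the physical switch via networking-generic-switch, it would presumably not apply to the bare-metal traffic - that applicability note is an assumption about this deployment, not a verified result:

    [securitygroup]
    # e.g. in openvswitch_agent.ini on nodes running the OVS agent;
    # enforces security groups as OpenFlow rules on br-int.
    firewall_driver = openvswitch
    enable_security_group = true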