[neutron][neutron-dynamic-routing] Call for maintainers
Hi, During the last virtual PTG we discussed the health of the Neutron stadium projects (again), and it seems that the neutron-dynamic-routing project is slowly heading toward the same state as neutron-fwaas in Ussuri: there are basically no active maintainers of the project left in our community. In the Ussuri cycle Ryan Tidwell from SUSE was taking care of it, but AFAIK he is not able to do that anymore in the Victoria cycle. So, if you are using this project or are interested in it, please contact me by email or on IRC if you want to take care of it. This usually doesn't mean a lot of work every day, but we need someone we can ask for help, e.g. when the gate is broken or when a new bug is reported and needs to be triaged. -- Slawek Kaplonski Senior software engineer Red Hat
Hello Slawek, We are users of neutron-dynamic-routing and would like to see the project continue. We have very limited resources, but I will check with our manager to see if we can help with this. Hopefully others will step up as well so we can share the work. Best regards Tobias ________________________________________ From: Slawek Kaplonski <skaplons@redhat.com> Sent: Tuesday, June 16, 2020 9:44 PM To: OpenStack Discuss ML Subject: [neutron][neutron-dynamic-routing] Call for maintainers [...]
Hi folks,

Every time I get an ironic switchover I end up with a few resource provider errors as follows:

2020-06-10 05:11:28.129 75837 INFO nova.compute.resource_tracker [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] Compute node record created for sc-ironic06.nvc.nvidia.com:7c95f255-3c54-46ab-87cf-0b1707971e9c with uuid: 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e

2020-06-10 05:11:28.502 75837 ERROR nova.scheduler.client.report [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] [req-d8af2589-8c75-427c-a7b1-d5270840a4c8] Failed to create resource provider record in placement API for UUID 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e. Got 409: {"errors": [{"status": 409, "request_id": "req-d8af2589-8c75-427c-a7b1-d5270840a4c8", "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider name: 7c95f255-3c54-46ab-87cf-0b1707971e9c already exists. ", "title": "Conflict"}]}.

2020-06-10 05:12:49.463 75837 ERROR nova.scheduler.client.report [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] [req-ffd13abc-08f3-47cd-a224-a07183b066ec] Failed to create resource provider record in placement API for UUID 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e. Got 409: {"errors": [{"status": 409, "request_id": "req-ffd13abc-08f3-47cd-a224-a07183b066ec", "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider name: 7c95f255-3c54-46ab-87cf-0b1707971e9c already exists. ", "title": "Conflict"}]}.

So far the only way that works for me to fix these is to un-enroll and then re-enroll the node. Is there a simpler way to fix this?

Thanks, Fred.
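To make the failure mode concrete, here is a toy model of the check placement appears to be applying (illustrative only, not actual nova or placement code): resource-provider names must be unique, with the ironic driver the provider is named after the ironic node UUID, and the provider UUID comes from the freshly created compute-node record. After a switchover nova creates a new compute-node record (new provider UUID) but reuses the same name, so every create attempt hits the stale provider and returns 409:

```python
# Toy model of placement's provider-name uniqueness check.
# Illustrative only; not nova/placement code.
class Conflict(Exception):
    pass

class Placement:
    def __init__(self):
        self.by_uuid = {}   # provider uuid -> name
        self.by_name = {}   # provider name -> uuid

    def create_provider(self, uuid, name):
        # Mirrors the HTTP 409 from the logs:
        # "Conflicting resource provider name: <name> already exists."
        if name in self.by_name and self.by_name[name] != uuid:
            raise Conflict("409: conflicting resource provider name: %s" % name)
        self.by_uuid[uuid] = name
        self.by_name[name] = uuid

placement = Placement()
# With the ironic driver, the provider *name* is the ironic node UUID.
ironic_node = "7c95f255-3c54-46ab-87cf-0b1707971e9c"
placement.create_provider("old-compute-node-uuid", ironic_node)

# After the switchover: new compute-node record (new provider UUID),
# same provider name -> conflict on every retry.
try:
    placement.create_provider("1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e", ironic_node)
except Conflict as exc:
    print(exc)
```

Under this model, un-enrolling and re-enrolling works because it removes the stale records and frees the name, which is consistent with the workaround Fred describes.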
Hi Fred: You'll need to update to grab this change: https://review.opendev.org/#/c/675496/ Thanks, Mohammed On Thu, Jun 18, 2020 at 2:29 PM fsbiz@yahoo.com <fsbiz@yahoo.com> wrote:
[...]
-- Mohammed Naser VEXXHOST, Inc.
Thanks Mohammed. Seems relevant, but the patches were applied only to train, ussuri and master. I'm still on Queens; not sure if the patches are relevant to Queens as well. Regards, Fred. On Friday, June 19, 2020, 01:59:27 PM PDT, Mohammed Naser <mnaser@vexxhost.com> wrote: [...]
Hi Fred, we're also on Queens and face the same problem. Looking at the linked patch from Mohammed, the code relies on https://github.com/openstack/nova/commit/6865baccd3825bf891a763627693b8b299e... to work. So I guess we will apply both in our environment. Have a nice day, Johannes On 6/20/20 2:22 AM, fsbiz@yahoo.com wrote:
[...]
-- Johannes Kulik IT Architecture Senior Specialist *SAP SE *| Rosenthaler Str. 30 | 10178 Berlin | Germany
Sorry, I misread the commit out of excitement for a fix. The one I mentioned is already applied in Queens, but the exceptions we see differ from the one caught in https://review.opendev.org/#/c/675496/ Creating a node in Queens is fine and doesn't raise the DBDuplicateEntry exception, because the UUID of the ironic node is actually the hypervisor_hostname of the compute node in Queens - this might have changed later on. So a unique index on the compute node's UUID doesn't help. Johannes On 6/22/20 10:09 AM, Johannes Kulik wrote:
[...]
-- Johannes Kulik IT Architecture Senior Specialist *SAP SE *| Rosenthaler Str. 30 | 10178 Berlin | Germany
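Johannes's point about the unique index can be sketched with a toy model (an assumption-laden illustration, not the real nova schema or exception flow): a uniqueness constraint on the compute node's uuid column never inspects hypervisor_hostname, which in Queens carries the ironic node UUID. The recreated record therefore sails past the index, and the collision only surfaces later as placement's 409:

```python
# Toy sketch, not the real nova schema: the uniqueness the linked fix
# relies on is on the compute node's uuid, but the value that actually
# collides in Queens is hypervisor_hostname (the ironic node UUID),
# which that index never checks.
compute_nodes = []   # rows: (uuid, hypervisor_hostname)
uuids_seen = set()   # stands in for the unique index on uuid

def insert_compute_node(uuid, hypervisor_hostname):
    if uuid in uuids_seen:
        # Stand-in for the DBDuplicateEntry the fix would catch.
        raise ValueError("DBDuplicateEntry on uuid")
    uuids_seen.add(uuid)
    compute_nodes.append((uuid, hypervisor_hostname))

ironic_node = "7c95f255-3c54-46ab-87cf-0b1707971e9c"
insert_compute_node("old-cn-uuid", ironic_node)

# Recreated record after the switchover: new uuid, same hostname.
# The uuid index is satisfied, so no DBDuplicateEntry is raised...
insert_compute_node("1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e", ironic_node)

# ...and two rows now claim the same hostname; the duplicate only
# shows up later as the 409 from placement.
dupes = [hn for _, hn in compute_nodes].count(ironic_node)
print(dupes)  # 2
```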
This commit from rocky https://github.com/openstack/nova/commit/9f28727eb75e05e07bad51b6eecce667d09... should make https://github.com/openstack/nova/commit/8b007266f438ec0a5a797d05731cce6f2b1... work for Queens users. Sorry for the stream of mails. Johannes On 6/22/20 3:02 PM, Johannes Kulik wrote:
[...]
-- Johannes Kulik IT Architecture Senior Specialist *SAP SE *| Rosenthaler Str. 30 | 10178 Berlin | Germany
On 2020-06-22 15:09:57 +0200 (+0200), Johannes Kulik wrote:
This commit from rocky https://github.com/openstack/nova/commit/9f28727eb75e05e07bad51b6eecce667d09... should make https://github.com/openstack/nova/commit/8b007266f438ec0a5a797d05731cce6f2b1... work for queens users. [...]
Now that you've done the legwork to determine this solves the problem, perhaps we can backport those commits. Briefly skimming the diffs, they look like they should be safe to cherry-pick to stable/queens if anyone has time and interest in doing so. -- Jeremy Stanley
Hi, Thanks a lot. That's great news. You can sync with Ryan about details but TBH I don't think it requires a lot of additional work to keep it running. On Thu, Jun 18, 2020 at 02:55:45PM +0000, Tobias Urdin wrote:
[...]
-- Slawek Kaplonski Senior software engineer Red Hat
Hello Slawek, Thanks. I will see if I can catch Ryan on IRC, if he's still hanging around there, or find out whether anybody here knows a valid email address for him. Best regards Tobias ________________________________________ From: Slawek Kaplonski <skaplons@redhat.com> Sent: Thursday, June 18, 2020 10:38 PM To: Tobias Urdin Cc: OpenStack Discuss ML Subject: Re: [neutron][neutron-dynamic-routing] Call for maintainers
[...]
participants (6)
- fsbiz@yahoo.com
- Jeremy Stanley
- Johannes Kulik
- Mohammed Naser
- Slawek Kaplonski
- Tobias Urdin