Error creating octavia amphora

Michael Johnson johnsomor at gmail.com
Thu Jun 9 20:49:43 UTC 2022


This isn't an issue with how you are creating the load balancer. It's a
nova error booting a VM, and most likely a configuration issue in the
deployment. It sounds like it is a charms bug.

At boot time, we only plug the lb-mgmt-network into the amphora instance. All
of the required settings for this are in the octavia.conf.
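
For reference, the management network is set via amp_boot_network_list in the
same section of octavia.conf (the value below is just a placeholder for your
lb-mgmt-net ID):

[controller_worker]
amp_boot_network_list = <lb-mgmt-net ID>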

The first thing to check is the security group list Octavia is configured to use:
[controller_worker]
amp_secgroup_list
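
Given the traceback you posted, I would expect that list to include the
security group ID Nova is complaining about, something like:

[controller_worker]
amp_secgroup_list = 0b683c75-d900-4e45-acb2-8bc321580666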

For each security group ID in that list, do an "openstack security group
show <ID>" and note the project ID that owns the group.

Then, also in the octavia.conf, check the project ID used to create the
amphora instances in nova:
[service_auth]
project_id

If project_id is not specified, then the configured project_name will be used
instead; look up its ID with "openstack project show <name>".

All of these project IDs must match.

You can also look up the project ID of the amphora VM with "openstack server
show" while it's attempting to boot and compare that to the security group
project ID, just to make sure the current running configuration is also
correct (as mentioned in a previous email).
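
For example, using the instance ID from your traceback (it may only exist for
a short window while nova retries the scheduling):

$ openstack server show 524ae27b-1542-4c2d-9118-138d9e7f3770 -c project_id -f value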

I haven't seen this error before, but I also don't use charms to deploy
OpenStack.

Michael

On Thu, Jun 9, 2022 at 7:37 AM Russell Stather <
Russell.Stather at ignitiongroup.co.za> wrote:

> Hi
>
> I've built new security groups to make sure they are in the correct
> project. I can start a new server manually using these security groups.
>
> The error is coming from the nova compute node. It's the same error: it
> can't find the security group.
>
> I am creating the load balancer and specifying the project on the command
> line even.
>
> openstack loadbalancer create --name lb25 --vip-subnet-id admin_subnet1
> --project 5e168b652a374a02aff855a5e250b7f8
>
> Kind of running out of ideas.
>
>
> ------------------------------
> *From:* Brendan Shephard <bshephar at redhat.com>
> *Sent:* 09 June 2022 13:16
> *To:* Russell Stather <Russell.Stather at ignitiongroup.co.za>
> *Cc:* openstack-discuss at lists.openstack.org <
> openstack-discuss at lists.openstack.org>
> *Subject:* Re: Error creating octavia amphora
>
> Hey Russell,
>
> Are you able to share the outputs from:
> $ openstack server show 524ae27b-1542-4c2d-9118-138d9e7f3770 -c id -c
> project_id -c security_groups -c status -f yaml
>
> And:
> $ openstack security group show 0b683c75-d900-4e45-acb2-8bc321580666 -c id
> -c project_id -f yaml
>
> I agree with James; my assumption is that we will find they aren't in the
> same project, so Nova can't use that security group for the Amphorae. I'm
> not familiar with charmed OpenStack, but it does look like James is from
> Canonical and might be able to advise on the specifics.
>
> All the best,
>
> Brendan Shephard
>
> Software Engineer
>
> Red Hat APAC <https://www.redhat.com>
>
> 193 N Quay
>
> Brisbane City QLD 4000
>
>
> On Thu, Jun 9, 2022 at 8:48 PM Russell Stather <
> Russell.Stather at ignitiongroup.co.za> wrote:
>
> Hi
>
> I am seeing the error below when creating a load balancer. The security
> group does exist, and it is the correct group (tagged with charm-octavia).
>
> What am I missing here to resolve this?
>
> 2022-06-09 10:10:03.789 678859 ERROR oslo_messaging.rpc.server
> octavia.common.exceptions.ComputeBuildException: Failed to build compute
> instance due to: {'code': 500, 'created': '2022-06-09T10:09:59Z',
> 'message': 'Exceeded maximum number of retries. Exceeded max scheduling
> attempts 3 for instance 524ae27b-1542-4c2d-9118-138d9e7f3770. Last
> exception: Security group 0b683c75-d900-4e45-acb2-8bc321580666 not found.',
> 'details': 'Traceback (most recent call last):\n  File
> "/usr/lib/python3/dist-packages/nova/conductor/manager.py", line 654, in
> build_instances\n    scheduler_utils.populate_retry(\n  File
> "/usr/lib/python3/dist-packages/nova/scheduler/utils.py", line 989, in
> populate_retry\n    raise
> exception.MaxRetriesExceeded(reason=msg)\nnova.exception.MaxRetriesExceeded:
> Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for
> instance 524ae27b-1542-4c2d-9118-138d9e7f3770. Last exception: Security
> group 0b683c75-d900-4e45-acb2-8bc321580666 not found.\n'}
> 2022-06-09 10:10:03.789 678859 ERROR oslo_messaging.rpc.server
>
>