[victoria][octavia] Operating status OFFLINE

Michael Johnson johnsomor at gmail.com
Wed Feb 17 02:36:46 UTC 2021


Hi Malik,

If the load balancer is functioning as expected but its operating
status is not updating to ONLINE, it likely means your lb-mgmt-net
network is not working correctly: the health heartbeat messages from
the amphora are not making it back to the Octavia controllers.
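
One thing worth double-checking is the [health_manager] section of
octavia.conf. The amphorae send their heartbeats to the endpoints
listed in controller_ip_port_list, and the packets are signed with
heartbeat_key, so if either of those is unset the heartbeats have
nowhere to go (or will be rejected). A minimal sketch -- the address
below is a placeholder and must be replaced with your controller's IP
on the lb-mgmt-net, and the key is just an example:

  [health_manager]
  bind_ip = 0.0.0.0
  bind_port = 5555
  # placeholder: use the controller's IP on the lb-mgmt-net
  controller_ip_port_list = 192.0.2.10:5555
  heartbeat_key = insecure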

Check that your controllers can receive the UDP heartbeat packets on
port 5555 over the lb-mgmt-net.
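
For example, on a controller you can watch for the heartbeats with
tcpdump ("any" is just a catch-all; use the specific interface that
carries the lb-mgmt-net on your host if you know it):

  tcpdump -nn -i any udp port 5555

If nothing arrives, you can test the path itself by sending a
throwaway UDP packet toward the controller from any host on the
lb-mgmt-net (again, the address below is a placeholder for your
controller's lb-mgmt-net IP):

  echo test | nc -u -w1 192.0.2.10 5555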

Michael

On Mon, Feb 15, 2021 at 9:25 PM Malik Obaid <malikobaidadil at gmail.com> wrote:
>
> Hi,
>
> I have configured Octavia using the OpenStack Victoria release on Ubuntu 20.04.
>
> Everything is working as expected: load balancing, health monitoring, etc.
>
> The issue I am facing is that the operating_status of the load balancer, listener, pool, and its members always stays OFFLINE.
>
> I am new to OpenStack and would really appreciate your help in this regard.
>
> -------------------------------------------------------------------------
>
> loadbalancer status:
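> (output like the below can be retrieved with "openstack loadbalancer status show testlb")
>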
> {
>     "loadbalancer": {
>         "id": "b33d7de7-4bf6-4815-a151-5ca7a7a3a40e",
>         "name": "testlb",
>         "operating_status": "OFFLINE",
>         "provisioning_status": "ACTIVE",
>         "listeners": [
>             {
>                 "id": "03972160-dd11-4eac-855b-870ee9ee909b",
>                 "name": "testlistener",
>                 "operating_status": "OFFLINE",
>                 "provisioning_status": "ACTIVE",
>                 "pools": [
>                     {
>                         "id": "5ad57d34-a9e7-4aa1-982d-e528b38c84ed",
>                         "name": "testpool",
>                         "provisioning_status": "ACTIVE",
>                         "operating_status": "OFFLINE",
>                         "health_monitor": {
>                             "id": "3524ea0f-ce2b-4957-9967-27d71033f964",
>                             "name": "testhm",
>                             "type": "HTTP",
>                             "provisioning_status": "ACTIVE",
>                             "operating_status": "ONLINE"
>                         },
>                         "members": [
>                             {
>                                 "id": "25b94432-6464-4245-a5db-6ecedb286721",
>                                 "name": "",
>                                 "operating_status": "OFFLINE",
>                                 "provisioning_status": "ACTIVE",
>                                 "address": "192.168.100.44",
>                                 "protocol_port": 80
>                             },
>                             {
>                                 "id": "9157600c-280e-4eb9-a9aa-6b683da76420",
>                                 "name": "",
>                                 "operating_status": "OFFLINE",
>                                 "provisioning_status": "ACTIVE",
>                                 "address": "192.168.100.90",
>                                 "protocol_port": 80
>                             }
>                         ]
>                     }
>                 ]
>             }
>         ]
>     }
> }
>
> ----------------------------------------------------------------------------------------
>
> octavia.conf
>
> # create new
> [DEFAULT]
> # RabbitMQ connection info
> transport_url = rabbit://openstack:password@172.16.30.46
>
> [api_settings]
> # IP address this host listens on
> bind_host = 172.16.30.46
> bind_port = 9876
> auth_strategy = keystone
> api_base_uri = http://172.16.30.46:9876
>
> # MariaDB connection info
> [database]
> connection = mysql+pymysql://octavia:password@172.16.30.45/octavia
>
> [health_manager]
> bind_ip = 0.0.0.0
> bind_port = 5555
>
> # Keystone auth info
> [keystone_authtoken]
> www_authenticate_uri = http://172.16.30.46:5000
> auth_url = http://172.16.30.46:5000
> memcached_servers = 172.16.30.46:11211
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> project_name = service
> username = octavia
> password = servicepassword
>
> # specify certificates created on [2]
> [certificates]
> ca_private_key = /etc/octavia/certs/private/server_ca.key.pem
> ca_certificate = /etc/octavia/certs/server_ca.cert.pem
> server_certs_key_passphrase = insecure-key-do-not-use-this-key
> ca_private_key_passphrase = not-secure-passphrase
>
> # specify certificates created on [2]
> [haproxy_amphora]
> server_ca = /etc/octavia/certs/server_ca-chain.cert.pem
> client_cert = /etc/octavia/certs/private/client.cert-and-key.pem
>
> # specify certificates created on [2]
> [controller_worker]
> client_ca = /etc/octavia/certs/client_ca.cert.pem
> amp_image_tag = Amphora
> # specify [flavor] ID for Amphora instances
> amp_flavor_id = 200
> # specify security group ID for Amphora instances
> amp_secgroup_list = 4fcb5a29-06b3-4a5d-8804-23c4670c200e
> # specify network ID to boot Amphora instances (the example below specifies the public network [public])
> amp_boot_network_list = 7d6af354-206f-4c30-a0d6-dcf0f7f35f08
> network_driver = allowed_address_pairs_driver
> compute_driver = compute_nova_driver
> amphora_driver = amphora_haproxy_rest_driver
>
> [oslo_messaging]
> topic = octavia_prov
>
> # Keystone auth info
> [service_auth]
> auth_url = http://172.16.30.46:5000
> memcached_servers = 172.16.30.46:11211
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> project_name = service
> username = octavia
> password = servicepassword
>
> -----------------------------------------------------------------------
>
> Regards,
>
> Malik Obaid


