Why don't you just spin up a cirros image, attach it to octavia-mgmt-net, and try debugging with ping etc.? There is no magic configuration in OVS to attach interfaces; it will just work when you run the playbook to deploy Octavia.
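For example, something like this (a rough sketch; the image and flavor names are placeholders for whatever exists in your deployment):

openstack server create --image cirros --flavor m1.tiny --network lb-mgmt-net lb-mgmt-debug
# then, from a controller that has the management interface:
ping <the fixed IP nova assigned to lb-mgmt-debug>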

It always works for me without any funky configuration. Just make sure your VLAN tag etc. is correct and properly tagged in the switch fabric.



On Mon, Feb 5, 2024 at 10:46 AM Jayesh Chaudhari <jayesh.chaudhari1990@gmail.com> wrote:
Somehow during deployment it's skipping this include in config-host.yml:
- include_tasks: hm-interface.yml
  when:
    - octavia_auto_configure | bool
    - octavia_network_type == "tenant"
    - inventory_hostname in groups[octavia_services['octavia-health-manager']['group']]

which has the task that adds the Octavia port to br-int:
- name: Add Octavia port to openvswitch br-int
  vars:
    port_mac: "{{ port_info.port.mac_address }}"
    port_id: "{{ port_info.id }}"
  become: True
  command: >
    docker exec openvswitch_vswitchd ovs-vsctl --may-exist
    add-port br-int {{ octavia_network_interface }}
    -- set Interface {{ octavia_network_interface }} type=internal
    -- set Interface {{ octavia_network_interface }} external-ids:iface-status=active
    -- set Interface {{ octavia_network_interface }} external-ids:attached-mac={{ port_mac }}
    -- set Interface {{ octavia_network_interface }} external-ids:iface-id={{ port_id }}
    -- set Interface {{ octavia_network_interface }} external-ids:skip_cleanup=true
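
A quick way to check whether that port was ever added (sketch):

docker exec openvswitch_vswitchd ovs-vsctl list-ports br-int

vlan.2140 does not show up in that list on my controllers.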

On Mon, Feb 5, 2024 at 8:52 PM Jayesh Chaudhari <jayesh.chaudhari1990@gmail.com> wrote:
My environment is a POC, and I am using a VLAN based network type.

As suggested, the vlan.2140 interface is configured with an IP on the controller nodes only.
But I don't see the interface getting configured on the OVS bridge.
Do you have an idea in which version that change was merged?

On Mon, Feb 5, 2024 at 6:54 PM Satish Patel <satish.txt@gmail.com> wrote:
Is this a production environment or your home lab? There are two ways to set up the octavia mgmt network.

For development I mostly go with the tenant type, and for production-like multi-node deployments I pick a VLAN based network type.
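Roughly, in globals.yml (option names from memory, so double-check the docs for your release):

octavia_network_type: "tenant"     # dev: kolla plugs the health-manager port into br-int for you
# or
octavia_network_type: "provider"   # VLAN/flat provider network: you bring up the tagged interface yourself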

In your case, make sure you have the vlan.2140 interface with an IP on all controller nodes. On compute nodes you don't need to create that interface, because OVS will handle it on the provider bridge.
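Something like this on each controller (sketch; bond0 is a placeholder for your parent interface, and the address must be a free one in lb-mgmt-subnet outside the allocation pool):

ip link add link bond0 name vlan.2140 type vlan id 2140
ip addr add 10.145.50.130/26 dev vlan.2140
ip link set vlan.2140 up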

On Mon, Feb 5, 2024 at 8:05 AM Jayesh Chaudhari <jayesh.chaudhari1990@gmail.com> wrote:
Thanks, Eugen, for the prompt response.

config_drive is set to True in the amphora VM.
I tried to reach the amphora VM from my controller node, but it's not reachable.
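Roughly what I tried (9443 is the default amphora agent port that octavia-worker connects to):

openstack loadbalancer amphora list -c id -c lb_network_ip
ping <lb_network_ip>
curl -k https://<lb_network_ip>:9443/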
I found some old articles where they manually created a port on the OVS bridge for LBaaS (https://cloudbase.it/openstack-on-arm64-lbaas/).

Is it still recommended? If it is, should it be done on both controller and compute nodes?



On Mon, Feb 5, 2024 at 4:42 PM Eugen Block <eblock@nde.ag> wrote:
Hi,

can you verify whether your amphora instance has config_drive enabled?

control01:~ # openstack loadbalancer amphora list --long

gives you the list including "compute_id", which is the nova instance,
then check:

control01:~ # openstack server show <UUID> | grep config_drive
| config_drive                        | True

Usually, with external (provider) networks the instances require the
config-drive to get their network configuration.
You could also use a customized image for the amphorae to be able to
log in and inspect errors; that's how we usually do it.
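
A rough sketch of how we build one (using the diskimage-create tooling from
the octavia repo; check the script's flags in your checkout, e.g. for baking
in credentials, and note the "service" project name is an assumption):

git clone https://opendev.org/openstack/octavia -b stable/yoga
cd octavia/diskimage-create
./diskimage-create.sh -i ubuntu-minimal -s 3
openstack image create amphora-x64-haproxy \
  --file amphora-x64-haproxy.qcow2 --disk-format qcow2 \
  --container-format bare --tag amphora --private --project service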

Regards,
Eugen

Quoting Jayesh Chaudhari <jayesh.chaudhari1990@gmail.com>:

> Folks,
>
> I have set up a kolla-ansible Yoga OpenStack deployment and configured
> Octavia using VLAN.
> But when I try to create an LB, it gets stuck in pending create and
> eventually fails. In the logs I can see the octavia-worker is unable to
> connect to the amphora instance.
>
> Is there any sanity check I can do to verify my implementation is correct?
> Or am I missing something? Please advise.
>
> My configuration in globals.yml:
> enable_octavia: yes
> octavia_network_interface: "vlan.2140"
> octavia_auto_configure: yes
> octavia_amp_flavor:
>   name: "amphora"
>   is_public: yes
>   vcpus: 2
>   ram: 1024
>   disk: 5
> octavia_amp_security_groups:
>   mgmt-sec-group:
>     name: "lb-mgmt-sec-grp"
>     rules:
>       - protocol: icmp
>       - protocol: tcp
>         src_port: 22
>         dst_port: 22
> octavia_amp_network:
>   name: lb-mgmt-net
>   provider_network_type: vlan
>   provider_segmentation_id: 2140
>   provider_physical_network: physnet1
>   external: false
>   shared: false
>   subnet:
>     name: lb-mgmt-subnet
>     cidr: "10.145.50.128/26"
>     allocation_pool_start: "10.145.50.135"
>     allocation_pool_end: "10.145.50.190"
>     gateway_ip: "10.145.50.129"
>     enable_dhcp: yes
> octavia_amp_image_tag: "amphora"
> octavia_loadbalancer_topology: "SINGLE"
>
>
> Thanks,
> Jayesh