[OSA][NEUTRON]

Dmitriy Rabotyagov noonedeadpunk at gmail.com
Sat Aug 19 04:47:44 UTC 2023


Hey,

One very important question right away: which network driver are you using,
OVS or OVN? If you're just relying on the defaults and don't really know,
then which version of OSA are you using?
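
In case it helps, a couple of quick ways to check (the `/etc/openstack_deploy` path assumes a standard OSA layout on the deploy host):

```shell
# On the deploy host: see whether the ML2 backend was set explicitly.
# If nothing is set, you're on the release default for your OSA version.
grep -R "neutron_plugin_type" /etc/openstack_deploy/

# From the utility container (or anywhere with admin credentials):
# OVS shows "Open vSwitch agent" / "L3 agent" entries,
# OVN shows "OVN Controller agent" entries instead.
openstack network agent list
```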

The configuration and the advice that follow will differ depending on the
choice of network driver.
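
If it turns out to be OVS with VXLAN tenant networks, one thing to look for in `openstack_user_config.yml` is a tunnel overlay network, since without one the VXLAN traffic between hosts has nowhere to go. A rough sketch of such an entry, where the `tunnel` queue name, the `br-vxlan` bridge, the interface, and the address range are assumptions based on the common OSA example layout, not your actual environment:

```yaml
# Sketch only: a VXLAN tunnel network for OVS tenant networking.
cidr_networks:
  tunnel: 172.29.240.0/22        # assumed range; must be routable between hosts

global_overrides:
  provider_networks:
    - network:
        container_bridge: br-vxlan   # assumed bridge name
        container_type: veth
        container_interface: eth10   # assumed container interface
        ip_from_q: tunnel
        type: vxlan
        range: "1:1000"
        net_name: vxlan
        group_binds:
          - neutron_openvswitch_agent
```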

On Sat, Aug 19, 2023, 00:54 Murilo Morais <murilo at evocorp.com.br> wrote:

> Good evening everyone!
>
> When using a tenant network, I can't get instances to communicate when they
> are on different hosts. I believe I'm forgetting to configure something
> related to network_hosts, but I'm not sure what. The exact difference
> between "network_hosts", "network-infra_hosts" and "network-agent_hosts"
> wasn't clear to me.
>
> I'm using the following configuration:
> openstack_user_config.yml:
> cidr_networks:
>   container: ...
>   storage: ...
>
>
> used_ips:
>   [...]
>
>
> global_overrides:
>   internal_lb_vip_address: ...
>   external_lb_vip_address: ...
>
>   management_bridge: br-mgmt
>   no_containers: false
>   provider_networks:
>     - network:
>         container_bridge: br-mgmt
>         container_interface: eth1
>         container_type: veth
>         ip_from_q: container
>         is_container_address: true
>         type: raw
>         group_binds:
>           - all_containers
>           - hosts
>
>     - network:
>         container_bridge: br-provider
>         container_type: veth
>         type: vlan
>         range: "100:200"
>         network_interface: "eno2"
>         net_name: vlan
>         group_binds:
>           - neutron_openvswitch_agent
>
>     - network:
>         container_bridge: "br-flat"
>         container_type: "veth"
>         type: "flat"
>         network_interface: "trunk.103"
>         net_name: "router"
>         group_binds:
>           - neutron_openvswitch_agent
>
>     - network:
>         container_bridge: br-storage
>         container_interface: eth2
>         container_type: veth
>         ip_from_q: storage
>         type: raw
>         group_binds:
>           - glance_api
>           - cinder_api
>           - cinder_volume
>           - nova_compute
>           - ceph-mon
>           - ceph-osd
>
> shared-infra_hosts:
>   dcn2:
>     ip: ...2
>
> coordination_hosts:
>   dcn2:
>     ip: ...2
>
> repo-infra_hosts:
>   dcn2:
>     ip: ...2
>
> haproxy_hosts:
>   dcn2:
>     ip: ...2
>
> identity_hosts:
>   dcn2:
>     ip: ...2
>
> storage-infra_hosts:
>   dcn2:
>     ip: ...2
>
> storage_hosts:
>   dcn2:
>     ip: ...2
>
> image_hosts:
>   dcn2:
>     ip: ...2
>
> placement-infra_hosts:
>   dcn2:
>     ip: ...2
>
> compute-infra_hosts:
>   dcn2:
>     ip: ...2
>
> dashboard_hosts:
>   dcn2:
>     ip: ...2
>
> network_hosts:
>   dcn2:
>     ip: ...2
>
> compute_hosts:
>   dcn2:
>     ip: ...2
>   dcn3:
>     ip: ...3
>   dcn8:
>     ip: ...14
>   dcn10:
>     ip: ...19
>
>
>
> Example of how I launch the instances:
> network create network_test
> subnet create --network network_test --subnet-range 192.168.0.0/24
> subnet_test
> server create --flavor m1.large --image
> debian-11-genericcloud-amd64-20230601-1398 --network network_test
> --key-name my_key --use-config-drive debian_test1
> server create --flavor m1.large --image
> debian-11-genericcloud-amd64-20230601-1398 --network network_test
> --key-name my_key --use-config-drive debian_test2
>
> Whenever they end up on different hosts, they don't communicate, even if I
> configure the IPs manually; but the moment they are on the same host, the
> ping starts to work.
>