OpenStack instances unable to reach the internet
I have been banging my head against this issue for quite some time now, and I am hoping that somebody can help me see what I have misconfigured. Here is the situation.

I have tried setting up several OpenStack installs, and in none of them am I able to reach the internet from the instances running inside OpenStack. All of them are on Rocky Linux 9.5, running the Dalmatian version of OpenStack, installed using Packstack. (I have tried an all-in-one, with everything on a single node, and a cluster with the controller/networking on one node and the compute running on separate nodes.) I can get cirros instances running on the backend network within OpenStack and can ssh to them via the floating IP from the host without issues. A cirros instance can ping other cirros instances on the OpenStack-controlled networks, both on the backend network (subnet in the 192.168.100.0/24 range) and on the external public network (subnet in the 172.24.4.0/24 range).

Even in the simplest configuration, where everything is on a single node (compute/networking/controller/etc.), everything from instance creation to VM migration seems to work just fine.

However, the cirros instances cannot ping anything on the internet -- for example, I am unable to ping 8.8.8.8 or even the gateway that the host can reach. (The host has an IP used to reach Horizon on the 10.61.157.0/24 subnet on our internal LAN.)

From the cirros instance I can ping all interfaces on the backend network (192.168.100.0/24) and on the public network (172.24.4.0/24), and can even ping the host IP address (10.61.157.59). However, I can't ping anything beyond that (i.e. the 10.61.157.1 gateway does not respond to pings from the cirros instance).

From the host command line, everything external is reachable just fine (i.e. it can ping its gateway 10.61.157.1, 8.8.8.8, and anything else external).

Here is the "ip addr" output from the host:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 00:1d:d8:b7:1e:df brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21d:d8ff:feb7:1edf/64 scope link
       valid_lft forever preferred_lft forever
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 56:d8:51:53:10:b0 brd ff:ff:ff:ff:ff:ff
4: br-int: <BROADCAST,MULTICAST> mtu 1442 qdisc noop state DOWN group default qlen 1000
    link/ether 62:8c:b0:c2:3e:5a brd ff:ff:ff:ff:ff:ff
5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:1d:d8:b7:1e:df brd ff:ff:ff:ff:ff:ff
    inet 10.61.157.59/24 brd 10.61.157.255 scope global br-ex
       valid_lft forever preferred_lft forever
    inet 172.24.4.1/24 brd 172.24.4.255 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::2446:29ff:fe58:6541/64 scope link
       valid_lft forever preferred_lft forever
6: tapeffe7cc1-56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:e2:66:5d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fee2:665d/64 scope link
       valid_lft forever preferred_lft forever
7: tap65170f1f-e0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 3a:26:09:cf:3d:4f brd ff:ff:ff:ff:ff:ff link-netns ovnmeta-65170f1f-e837-4b5e-a471-13e783e6b48e
    inet6 fe80::3826:9ff:fecf:3d4f/64 scope link
       valid_lft forever preferred_lft forever

And the output from "ovs-vsctl show":

46294eda-8a39-4775-875d-ef6ec9929346
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port tap65170f1f-e0
            Interface tap65170f1f-e0
        Port patch-br-int-to-provnet-371bdb7c-d2f3-4e70-aca9-1c4838c57e24
            Interface patch-br-int-to-provnet-371bdb7c-d2f3-4e70-aca9-1c4838c57e24
                type: patch
                options: {peer=patch-provnet-371bdb7c-d2f3-4e70-aca9-1c4838c57e24-to-br-int}
        Port tapeffe7cc1-56
            Interface tapeffe7cc1-56
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        fail_mode: standalone
        Port patch-provnet-371bdb7c-d2f3-4e70-aca9-1c4838c57e24-to-br-int
            Interface patch-provnet-371bdb7c-d2f3-4e70-aca9-1c4838c57e24-to-br-int
                type: patch
                options: {peer=patch-br-int-to-provnet-371bdb7c-d2f3-4e70-aca9-1c4838c57e24}
        Port br-ex
            Interface br-ex
                type: internal
        Port eth0
            Interface eth0
    ovs_version: "3.3.1"

Here is the info for the networks/subnets currently configured.

From "openstack network list --long":

  ID: 65170f1f-e837-4b5e-a471-13e783e6b48e  Name: backend_network  Status: ACTIVE  Project: c5f408c4daed4fb2afefa44fa039fe26  State: UP  Shared: False  Subnets: b1e5aef4-77da-4b0b-9b94-5aaf395484e0  Network Type: geneve  Router Type: Internal
  ID: b8e7a305-65f0-44ea-93aa-cc6ec741d408  Name: public           Status: ACTIVE  Project: c5f408c4daed4fb2afefa44fa039fe26  State: UP  Shared: False  Subnets: 6cff8f70-9dc8-4f17-b780-32cf2aedb017  Network Type: flat    Router Type: External
  ID: c0c67da4-22bf-4dd2-b4c2-2edcb41ffb97  Name: private          Status: ACTIVE  Project: 2967b1fe70794fa88cbb3de7227cec77  State: UP  Shared: False  Subnets: caed16ae-22ff-427a-a370-6b891755bfa7  Network Type: geneve  Router Type: Internal

From "openstack subnet list --long":

  ID: 6cff8f70-9dc8-4f17-b780-32cf2aedb017  Name: public_subnet   Network: b8e7a305-65f0-44ea-93aa-cc6ec741d408  Subnet: 172.24.4.0/24     DHCP: False  Allocation Pool: 172.24.4.2-172.24.4.254          IP Version: 4  Gateway: 172.24.4.1
  ID: b1e5aef4-77da-4b0b-9b94-5aaf395484e0  Name: backend_subnet  Network: 65170f1f-e837-4b5e-a471-13e783e6b48e  Subnet: 192.168.100.0/24  DHCP: True   Allocation Pool: 192.168.100.100-192.168.100.250  IP Version: 4  Gateway: 192.168.100.1
  ID: caed16ae-22ff-427a-a370-6b891755bfa7  Name: private_subnet  Network: c0c67da4-22bf-4dd2-b4c2-2edcb41ffb97  Subnet: 10.0.0.0/24       DHCP: True   Allocation Pool: 10.0.0.2-10.0.0.254              IP Version: 4  Gateway: 10.0.0.1

From "openstack floating ip list":

  ID: 61d98a1c-f19a-451f-9353-6cb8305a0c2f  Floating IP Address: 172.24.4.175  Fixed IP Address: 192.168.100.128  Port: effe7cc1-566d-4574-b4e3-d9f440b7765e  Floating Network: b8e7a305-65f0-44ea-93aa-cc6ec741d408  Project: c5f408c4daed4fb2afefa44fa039fe26

From "openstack server list":

  ID: e42372d2-946d-4c90-bf2d-1e36cda80e2f  Name: test3  Status: ACTIVE  Networks: backend_network=172.24.4.175, 192.168.100.128  Image: N/A (booted from volume)  Flavor: m1.tiny

Any assistance in helping me figure out what I am missing to get external access would be greatly appreciated. Thank you!
Just a ping to see if anybody has any suggestions on what to look at to debug why my instances in OpenStack are unable to reach anything on the internet.
Hi, although I can't comment on the geneve network type (never used that), your config looks fine from my point of view, but I also don't know how the whole setup would look on a single-node env. My best guess would be the network stack behind the OpenStack node. Is there any sort of routing between the floating network and your gateway into the outside world? I would probably try to trace the ICMP packets through the stack and see where they get lost.
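As a rough sketch of such a packet walk on the all-in-one host (interface names are taken from the ip addr / ovs-vsctl output above, and 172.24.4.175 is the floating IP from the listings; adjust as needed for your setup):

# While pinging 8.8.8.8 from the cirros instance:

# 1. Do the echo requests show up on the external bridge at all?
tcpdump -nei br-ex icmp

# 2. Do they make it onto the physical NIC, and with which source address?
tcpdump -nei eth0 icmp

# 3. Watch traffic for the instance's floating IP specifically:
tcpdump -nei eth0 host 172.24.4.175

Where the requests stop, and the source address they carry when they leave, should narrow down whether the problem is inside OVN/OVS or upstream of the host.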
Eugen, thank you very much for your response.

What network type do you typically use? flat? vxlan?

I don't believe there is routing between the controller/compute node and the gateway... and from my testing that is where the pings are getting lost.

Here is an example from the cirros instance pinging the controller/compute node (successfully) and the external gateway (not successfully):

$ ping 10.61.157.59
PING 10.61.157.59 (10.61.157.59): 56 data bytes
64 bytes from 10.61.157.59: seq=0 ttl=63 time=1.378 ms
64 bytes from 10.61.157.59: seq=1 ttl=63 time=1.317 ms
64 bytes from 10.61.157.59: seq=2 ttl=63 time=0.901 ms
^C
--- 10.61.157.59 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.901/1.198/1.378 ms

$ ping 10.61.157.1
PING 10.61.157.1 (10.61.157.1): 56 data bytes
^C
--- 10.61.157.1 ping statistics ---
9 packets transmitted, 0 packets received, 100% packet loss

How would I go about determining what is causing the packets to get lost at that point? From the host 10.61.157.59 itself, it CAN ping the 10.61.157.1 gateway just fine... and all external sites (e.g. 8.8.8.8).

Here is the traceroute output from the controller/compute node, showing a successful run with only one hop, so I don't think there is anything between the host and the gateway network-wise:

# traceroute 10.61.157.1
traceroute to 10.61.157.1 (10.61.157.1), 30 hops max, 60 byte packets
 1  _gateway (10.61.157.1)  0.374 ms  0.322 ms  0.297 ms
We use flat, vlan and vxlan. Did you check if the instance's security group allows ICMP traffic? Is there anything else potentially blocking traffic, like firewalld, AppArmor, SELinux and so on? I guess looking at iptables could help here as well, but I'm not sure I can really help with that.
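For reference, a minimal sketch of how the security group rules can be checked and, if needed, extended with the CLI (the group name "default" is an assumption; substitute the group actually attached to the instance):

# List the rules of the instance's security group:
openstack security group rule list default

# Add ICMP and SSH ingress rules if they are missing:
openstack security group rule create --protocol icmp --ingress default
openstack security group rule create --protocol tcp --dst-port 22 --ingress default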
Yes, the security group allows ICMP traffic as well as SSH traffic on top of the other default security group settings. SELinux is running in permissive mode (so it shouldn't be blocking anything) and firewalld is disabled. We don't use AppArmor.

I am also not an iptables expert, but I haven't seen anything in the rules that jumps out at me as being problematic. Here are the iptables rules:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source                destination
ACCEPT     tcp  --  l21652.ldschurch.org  anywhere   multiport dports amqps,amqp /* 001 amqp incoming amqp_10.61.157.59 */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports fs-agent /* 001 aodh-api incoming aodh_api */
ACCEPT     tcp  --  l21652.ldschurch.org  anywhere   multiport dports iscsi-target /* 001 cinder incoming cinder_10.61.157.59 */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports 8776 /* 001 cinder-api incoming cinder_api */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports armtechdaemon /* 001 glance incoming glance_api */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports 8041 /* 001 gnocchi-api incoming gnocchi_api */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports http /* 001 horizon 80 incoming */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports commplex-main /* 001 keystone incoming keystone */
ACCEPT     tcp  --  l21652.ldschurch.org  anywhere   multiport dports mysql /* 001 mariadb incoming mariadb_10.61.157.59 */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports 9696 /* 001 neutron server incoming neutron_server_10.61.157.59 */
ACCEPT     udp  --  l21652.ldschurch.org  anywhere   udp dpt:geneve /* 001 neutron tunnel port incoming neutron_tunnel_10.61.157.59_10.61.157.59 */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports 8773,8774,8775,uec /* 001 nova api incoming nova_api */
ACCEPT     tcp  --  l21652.ldschurch.org  anywhere   multiport dports rfb:cvsup /* 001 nova compute incoming nova_compute */
ACCEPT     tcp  --  l21652.ldschurch.org  anywhere   multiport dports 16509,49152:49215 /* 001 nova qemu migration incoming nova_qemu_migration_10.61.157.59_10.61.157.59 */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports 6080 /* 001 novncproxy incoming */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports 6641 /* 001 ovn northd incoming ovn_northd_10.61.157.59 */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports 6642 /* 001 ovn southd incoming ovn_southd_10.61.157.59 */
ACCEPT     tcp  --  l21652.ldschurch.org  anywhere   tcp dpt:redis /* 001 redis service incoming redis service from 10.61.157.59 */
ACCEPT     tcp  --  anywhere              anywhere   multiport dports webcache /* 001 swift proxy incoming swift_proxy */
ACCEPT     tcp  --  l21652.ldschurch.org  anywhere   multiport dports x11,6001,6002,rsync /* 001 swift storage and rsync incoming swift_storage_and_rsync_10.61.157.59 */
ACCEPT     all  --  anywhere              anywhere   state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere              anywhere
ACCEPT     all  --  anywhere              anywhere
ACCEPT     tcp  --  anywhere              anywhere   state NEW tcp dpt:ssh
REJECT     all  --  anywhere              anywhere   reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source                destination
ACCEPT     all  --  anywhere              anywhere   /* 000 forward in */
ACCEPT     all  --  anywhere              anywhere   /* 000 forward out */
REJECT     all  --  anywhere              anywhere   reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source                destination

Thanks!
Collin
Do you see actual packet drops or rejections if you run 'watch iptables -nvL'? I don't have those REJECT rules in either my test clusters or our production cluster:
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
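If it is unclear what those REJECT rules are actually catching, one non-destructive way to find out is to insert a temporary LOG rule directly above them -- a rough sketch (the rule positions below are assumptions and have to be adapted to the numbering your chains actually show):

# Show the current rule numbers:
iptables -nvL INPUT --line-numbers
iptables -nvL FORWARD --line-numbers

# Insert a LOG rule just above the final REJECT (position 25 in INPUT and 3 in FORWARD assumed here):
iptables -I INPUT 25 -j LOG --log-prefix "INPUT-REJECTED: " --log-level 4
iptables -I FORWARD 3 -j LOG --log-prefix "FWD-REJECTED: " --log-level 4

# Watch what gets logged while pinging from the instance:
journalctl -kf | grep REJECTED

# Remove the temporary rules again afterwards:
iptables -D INPUT -j LOG --log-prefix "INPUT-REJECTED: " --log-level 4
iptables -D FORWARD -j LOG --log-prefix "FWD-REJECTED: " --log-level 4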
Eugen, thank you again for your response. Your help is greatly appreciated.

When I run iptables -nvL, I see that the REJECT rule in the INPUT chain does have some packets that hit it, but the one in the FORWARD chain has a 0 packet count. However, when I ping the 10.61.157.1 gateway address or 8.8.8.8 from the cirros instance, that counter does not increment, so I don't think that is the issue 🙁

I believe that those REJECT rules are the "default" ones that take effect if no other rule applies. Is my understanding correct there? I am willing to try to make adjustments to my iptables rules, but am not sure how to do so without breaking things. Do you have any suggestions?

Here is the output of iptables -nvL:

Every 2.0s: iptables -nvL                    l21652.ldschurch.org: Tue Feb  4 08:05:49 2025

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in  out  source        destination
5843K 1115M ACCEPT  6    --  *   *    10.61.157.59  0.0.0.0/0   multiport dports 5671,5672 /* 001 amqp incoming amqp_10.61.157.59 */
    0     0 ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 8042 /* 001 aodh-api incoming aodh_api */
 268K   21M ACCEPT  6    --  *   *    10.61.157.59  0.0.0.0/0   multiport dports 3260 /* 001 cinder incoming cinder_10.61.157.59 */
36713 4659K ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 8776 /* 001 cinder-api incoming cinder_api */
 1409  293K ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 9292 /* 001 glance incoming glance_api */
 150K   30M ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 8041 /* 001 gnocchi-api incoming gnocchi_api */
 7846 1851K ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 80 /* 001 horizon 80 incoming */
 246K   31M ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 5000 /* 001 keystone incoming keystone */
  23M 3860M ACCEPT  6    --  *   *    10.61.157.59  0.0.0.0/0   multiport dports 3306 /* 001 mariadb incoming mariadb_10.61.157.59 */
 155K   62M ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 9696 /* 001 neutron server incoming neutron_server_10.61.157.59 */
    0     0 ACCEPT  17   --  *   *    10.61.157.59  0.0.0.0/0   udp dpt:6081 /* 001 neutron tunnel port incoming neutron_tunnel_10.61.157.59_10.61.157.59 */
 163K   32M ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 8773,8774,8775,8778 /* 001 nova api incoming nova_api */
 177K   11M ACCEPT  6    --  *   *    10.61.157.59  0.0.0.0/0   multiport dports 5900:5999 /* 001 nova compute incoming nova_compute */
33021 8785K ACCEPT  6    --  *   *    10.61.157.59  0.0.0.0/0   multiport dports 16509,49152:49215 /* 001 nova qemu migration incoming nova_qemu_migration_10.61.157.59_10.61.157.59 */
 171K 9554K ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 6080 /* 001 novncproxy incoming */
 928K   84M ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 6641 /* 001 ovn northd incoming ovn_northd_10.61.157.59 */
1544K  139M ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 6642 /* 001 ovn southd incoming ovn_southd_10.61.157.59 */
7224K 1214M ACCEPT  6    --  *   *    10.61.157.59  0.0.0.0/0   tcp dpt:6379 /* 001 redis service incoming redis service from 10.61.157.59 */
    6   428 ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   multiport dports 8080 /* 001 swift proxy incoming swift_proxy */
17015 2244K ACCEPT  6    --  *   *    10.61.157.59  0.0.0.0/0   multiport dports 6000,6001,6002,873 /* 001 swift storage and rsync incoming swift_storage_and_rsync_10.61.157.59 */
  43M 8270M ACCEPT  0    --  *   *    0.0.0.0/0     0.0.0.0/0   state RELATED,ESTABLISHED
   20  1456 ACCEPT  1    --  *   *    0.0.0.0/0     0.0.0.0/0
11403  684K ACCEPT  0    --  lo  *    0.0.0.0/0     0.0.0.0/0
   72  3800 ACCEPT  6    --  *   *    0.0.0.0/0     0.0.0.0/0   state NEW tcp dpt:22
 5880 1317K REJECT  0    --  *   *    0.0.0.0/0     0.0.0.0/0   reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in     out    source     destination
    0     0 ACCEPT  0    --  br-ex  *      0.0.0.0/0  0.0.0.0/0  /* 000 forward in */
    0     0 ACCEPT  0    --  *      br-ex  0.0.0.0/0  0.0.0.0/0  /* 000 forward out */
    0     0 REJECT  0    --  *      *      0.0.0.0/0  0.0.0.0/0  reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 83M packets, 16G bytes)
 pkts bytes target  prot opt in  out  source  destination

Thanks!
Collin
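Since the FORWARD counters stay at zero, it may also be worth checking whether the host is set up to forward or NAT the floating range at all. A rough sketch of host-side inspection commands (only an inspection, not a fix -- whether masquerading 172.24.4.0/24 is appropriate depends on how that network is meant to reach the LAN):

# Is IP forwarding enabled on the host?
sysctl net.ipv4.ip_forward

# Is there any NAT rule covering the floating range 172.24.4.0/24?
iptables -t nat -S | grep -i -e masquerade -e 172.24.4

# Which route would a packet towards the internet take from this host?
ip route get 8.8.8.8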
I did some digging into the packets that are acted on by that REJECT rule by adding a logging rule just before it. All of the packets hitting that rule are broadcast packets sent to the whole subnet over UDP from a machine outside of OpenStack... so that REJECT rule does not seem to be the reason I am unable to reach the gateway or beyond.

Any other ideas?

Thanks!
Collin
You're right, it seems to be some sort of default which I don't see in my environments. But for now I'm out of ideas; I'd probably start to look at tcpdumps, maybe they reveal something.
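One more host-side test that might narrow this down, given the earlier question about routing between the floating network and the outside world: ping an external address from the host while forcing the source address to the br-ex address on the floating range. If the normal ping works but the source-forced one gets no replies, the upstream network has no return route (or NAT) for 172.24.4.0/24, and replies to the instances can never come back. A rough sketch:

# Normal ping from the host (uses the 10.61.157.59 source) -- known to work:
ping -c3 8.8.8.8

# Same ping, but sourced from the floating-network address on br-ex:
ping -c3 -I 172.24.4.1 8.8.8.8

# Same test towards the LAN gateway:
ping -c3 -I 172.24.4.1 10.61.157.1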
I did some digging into the packets that are acted on by that REJECT rule by adding a logging rule just before it.
All of the packets that are hitting that rule are broadcast packets sent to the whole subnet over udp from a machine outside of openstack. . .so that REJECT rule does not seem to be the reason for me not being able to reach the gateway or beyond. . .
Any other ideas?
Thanks! Collin ________________________________ From: Collin Linford <clinford@familysearch.org> Sent: Tuesday, February 4, 2025 8:14 AM To: Eugen Block <eblock@nde.ag> Cc: openstack-discuss@lists.openstack.org <openstack-discuss@lists.openstack.org> Subject: Re: [Ext:] Re: openstack instances unable to reach the internet
Eugen,
Thank you again for your response. Your help is greatly appreciated.
When I run iptables -nvL, I see that the REJECT in the input chain does have some packets that hit that rule, but the one in the FORWARD chain has a 0 packet count.
However, when I ping the 10.61.157.1 gateway address or 8.8.8.8 from the cirros instance, that counter does not increment, so I don't think that is the issue 🙁
I believe that those REJECT rules are the "default" ones that take effect if no other rule applies. Is my understanding correct there?
I am willing to try to make adjustments to my iptables rules, but am not sure how without breaking things. Do you have any suggestions?
Here is the output of the iptables -nvL
Every 2.0s: iptables -nvL
l21652.ldschurch.org: Tue Feb 4 08:05:49 2025
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
5843K 1115M ACCEPT 6 -- * * 10.61.157.59 0.0.0.0/0 multiport dports 5671,5672 /* 001 amqp incoming amqp_10.61.157.59 */
    0     0 ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 8042 /* 001 aodh-api incoming aodh_api */
 268K   21M ACCEPT 6 -- * * 10.61.157.59 0.0.0.0/0 multiport dports 3260 /* 001 cinder incoming cinder_10.61.157.59 */
36713 4659K ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 8776 /* 001 cinder-api incoming cinder_api */
 1409  293K ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 9292 /* 001 glance incoming glance_api */
 150K   30M ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 8041 /* 001 gnocchi-api incoming gnocchi_api */
 7846 1851K ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 80 /* 001 horizon 80 incoming */
 246K   31M ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 5000 /* 001 keystone incoming keystone */
  23M 3860M ACCEPT 6 -- * * 10.61.157.59 0.0.0.0/0 multiport dports 3306 /* 001 mariadb incoming mariadb_10.61.157.59 */
 155K   62M ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 9696 /* 001 neutron server incoming neutron_server_10.61.157.59 */
    0     0 ACCEPT 17 -- * * 10.61.157.59 0.0.0.0/0 udp dpt:6081 /* 001 neutron tunnel port incoming neutron_tunnel_10.61.157.59_10.61.157.59 */
 163K   32M ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 8773,8774,8775,8778 /* 001 nova api incoming nova_api */
 177K   11M ACCEPT 6 -- * * 10.61.157.59 0.0.0.0/0 multiport dports 5900:5999 /* 001 nova compute incoming nova_compute */
33021 8785K ACCEPT 6 -- * * 10.61.157.59 0.0.0.0/0 multiport dports 16509,49152:49215 /* 001 nova qemu migration incoming nova_qemu_migration_10.61.157.59_10.61.157.59 */
 171K 9554K ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 6080 /* 001 novncproxy incoming */
 928K   84M ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 6641 /* 001 ovn northd incoming ovn_northd_10.61.157.59 */
1544K  139M ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 6642 /* 001 ovn southd incoming ovn_southd_10.61.157.59 */
7224K 1214M ACCEPT 6 -- * * 10.61.157.59 0.0.0.0/0 tcp dpt:6379 /* 001 redis service incoming redis service from 10.61.157.59 */
    6   428 ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 8080 /* 001 swift proxy incoming swift_proxy */
17015 2244K ACCEPT 6 -- * * 10.61.157.59 0.0.0.0/0 multiport dports 6000,6001,6002,873 /* 001 swift storage and rsync incoming swift_storage_and_rsync_10.61.157.59 */
  43M 8270M ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
   20  1456 ACCEPT 1 -- * * 0.0.0.0/0 0.0.0.0/0
11403  684K ACCEPT 0 -- lo * 0.0.0.0/0 0.0.0.0/0
   72  3800 ACCEPT 6 -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
 5880 1317K REJECT 0 -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
    0     0 ACCEPT 0 -- br-ex * 0.0.0.0/0 0.0.0.0/0 /* 000 forward in */
    0     0 ACCEPT 0 -- * br-ex 0.0.0.0/0 0.0.0.0/0 /* 000 forward out */
    0     0 REJECT 0 -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 83M packets, 16G bytes)
 pkts bytes target prot opt in out source destination
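(If it would help, I can also zero the counters and re-run the ping from the instance, so we can see exactly which chain, if any, increments -- something along these lines:)

# reset the packet counters, then ping 8.8.8.8 from the cirros instance
iptables -Z INPUT
iptables -Z FORWARD
# afterwards, check whether either REJECT counter moved
iptables -nvL FORWARD
iptables -nvL INPUT | grep REJECT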
Thanks!
Collin

________________________________
From: Eugen Block <eblock@nde.ag>
Sent: Tuesday, February 4, 2025 2:42 AM
To: Collin Linford <clinford@familysearch.org>
Cc: openstack-discuss@lists.openstack.org <openstack-discuss@lists.openstack.org>
Subject: Re: [Ext:] Re: openstack instances unable to reach the internet
Do you see actual packet drops or rejections if you run 'watch iptables -nvL'? I don't have those REJECT rules in my test clusters or in our production cluster:
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
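If it is easier to read, you could also watch just the FORWARD chain, with counters and rule numbers (only a suggestion, I don't know your exact rule layout):

watch -n 2 'iptables -nvL FORWARD --line-numbers'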
Quoting Collin Linford <clinford@familysearch.org>:
Yes, the security group allows icmp traffic as well as ssh traffic on top of the other default security group settings.
Selinux is running in permissive mode (so shouldn't be blocking anything) and firewalld is disabled.
We don't use apparmor.
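For completeness, this is roughly how I checked those (standard commands, nothing exotic):

# confirm SELinux mode and that firewalld is not running
getenforce
sestatus
systemctl is-active firewalld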
I'm not an iptables expert either, but I haven't seen anything in the rules that jumps out at me as problematic.
Here are the iptables rules:

# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports amqps,amqp /* 001 amqp incoming amqp_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports fs-agent /* 001 aodh-api incoming aodh_api */
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports iscsi-target /* 001 cinder incoming cinder_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports 8776 /* 001 cinder-api incoming cinder_api */
ACCEPT tcp -- anywhere anywhere multiport dports armtechdaemon /* 001 glance incoming glance_api */
ACCEPT tcp -- anywhere anywhere multiport dports 8041 /* 001 gnocchi-api incoming gnocchi_api */
ACCEPT tcp -- anywhere anywhere multiport dports http /* 001 horizon 80 incoming */
ACCEPT tcp -- anywhere anywhere multiport dports commplex-main /* 001 keystone incoming keystone */
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports mysql /* 001 mariadb incoming mariadb_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports 9696 /* 001 neutron server incoming neutron_server_10.61.157.59 */
ACCEPT udp -- l21652.ldschurch.org anywhere udp dpt:geneve /* 001 neutron tunnel port incoming neutron_tunnel_10.61.157.59_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports 8773,8774,8775,uec /* 001 nova api incoming nova_api */
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports rfb:cvsup /* 001 nova compute incoming nova_compute */
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports 16509,49152:49215 /* 001 nova qemu migration incoming nova_qemu_migration_10.61.157.59_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports 6080 /* 001 novncproxy incoming */
ACCEPT tcp -- anywhere anywhere multiport dports 6641 /* 001 ovn northd incoming ovn_northd_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports 6642 /* 001 ovn southd incoming ovn_southd_10.61.157.59 */
ACCEPT tcp -- l21652.ldschurch.org anywhere tcp dpt:redis /* 001 redis service incoming redis service from 10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports webcache /* 001 swift proxy incoming swift_proxy */
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports x11,6001,6002,rsync /* 001 swift storage and rsync incoming swift_storage_and_rsync_10.61.157.59 */
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* 000 forward in */
ACCEPT all -- anywhere anywhere /* 000 forward out */
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Thanks!
Collin

________________________________
From: Eugen Block <eblock@nde.ag>
Sent: Monday, February 3, 2025 8:20 AM
To: openstack-discuss@lists.openstack.org <openstack-discuss@lists.openstack.org>
Subject: [Ext:] Re: openstack instances unable to reach the internet
We use flat, vlan and vxlan. Did you check whether the instance's security group allows icmp traffic? Is there anything else potentially blocking traffic, like firewalld, AppArmor, SELinux and so on? I guess looking at iptables could help here as well, but I'm not sure I can really help with that.
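To rule out the security group quickly, something like the following should show (or add) the ICMP rule -- replace "default" with whatever group your instance actually uses:

openstack security group rule list default
openstack security group rule create --protocol icmp --ingress default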
Quoting collinl@churchofjesuschrist.org:
Eugen, Thank you very much for your response.
What network type do you typically use? flat? vxlan?
I don't believe there is any routing hop between the controller/compute node and the gateway, and from my testing that is exactly where the pings are getting lost.
Here is an example from the cirros instance pinging the controller/compute node (successfully), and the external gateway (not successfully):

$ ping 10.61.157.59
PING 10.61.157.59 (10.61.157.59): 56 data bytes
64 bytes from 10.61.157.59: seq=0 ttl=63 time=1.378 ms
64 bytes from 10.61.157.59: seq=1 ttl=63 time=1.317 ms
64 bytes from 10.61.157.59: seq=2 ttl=63 time=0.901 ms
^C
--- 10.61.157.59 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.901/1.198/1.378 ms

$ ping 10.61.157.1
PING 10.61.157.1 (10.61.157.1): 56 data bytes
^C
--- 10.61.157.1 ping statistics ---
9 packets transmitted, 0 packets received, 100% packet loss
How would I go about determining what is causing the packets to get lost at that point?
From the host 10.61.157.59 itself, it CAN ping the 10.61.157.1 gateway just fine, as well as everything external (e.g. 8.8.8.8).
Here is the traceroute output from the controller/compute node, showing a successful result with only one hop, so I don't think there is anything network-wise between the host and the gateway:

# traceroute 10.61.157.1
traceroute to 10.61.157.1 (10.61.157.1), 30 hops max, 60 byte packets
 1  _gateway (10.61.157.1)  0.374 ms  0.322 ms  0.297 ms
Thanks again for the response. I have already been trying to do some packet captures, but again, this is not an area I am strong in, and (so far) I don't see any clues. Here are the icmp packets that I captured while trying to ping the gateway from the instance (captured on the controller host, on all interfaces):

No. Time Source Destination Protocol Length Info
3724 2025-02-05 08:39:47.548638 192.168.100.128 10.61.157.1 ICMP 104 Echo (ping) request id=0x9e03, seq=0/0, ttl=64 (no response found!)
3725 2025-02-05 08:39:47.549273 172.24.4.175 10.61.157.1 ICMP 104 Echo (ping) request id=0x9e03, seq=0/0, ttl=63 (no response found!)
3987 2025-02-05 08:39:48.547496 192.168.100.128 10.61.157.1 ICMP 104 Echo (ping) request id=0x9e03, seq=1/256, ttl=64 (no response found!)
3988 2025-02-05 08:39:48.547528 172.24.4.175 10.61.157.1 ICMP 104 Echo (ping) request id=0x9e03, seq=1/256, ttl=63 (no response found!)
4266 2025-02-05 08:39:49.548150 192.168.100.128 10.61.157.1 ICMP 104 Echo (ping) request id=0x9e03, seq=2/512, ttl=64 (no response found!)
4267 2025-02-05 08:39:49.548182 172.24.4.175 10.61.157.1 ICMP 104 Echo (ping) request id=0x9e03, seq=2/512, ttl=63 (no response found!)
4488 2025-02-05 08:39:50.549341 192.168.100.128 10.61.157.1 ICMP 104 Echo (ping) request id=0x9e03, seq=3/768, ttl=64 (no response found!)
4489 2025-02-05 08:39:50.549373 172.24.4.175 10.61.157.1 ICMP 104 Echo (ping) request id=0x9e03, seq=3/768, ttl=63 (no response found!)
4817 2025-02-05 08:39:51.550211 192.168.100.128 10.61.157.1 ICMP 104 Echo (ping) request id=0x9e03, seq=4/1024, ttl=64 (no response found!)
4818 2025-02-05 08:39:51.550239 172.24.4.175 10.61.157.1 ICMP 104 Echo (ping) request id=0x9e03, seq=4/1024, ttl=63 (no response found!)

I can see the ping on two interfaces (the one with the source of 192.168.100.128 is one of the tap interfaces, and the one with the source of 172.24.4.175 is the br-ex interface) as the ping goes from the instance through the OVS networking stack and out the br-ex interface, but all I can see is "no response found". I'm not sure what to do to figure out where the packets are actually going, and why the response isn't coming back.
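One idea I still have (purely a guess on my part) is to capture only on the physical NIC, to confirm whether the echo request with the 172.24.4.175 source ever actually leaves the box, and whether anything at all comes back for it:

# on the controller/network node, watch only the physical uplink (eth0 is the port in br-ex)
tcpdump -ni eth0 icmp and host 10.61.157.1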
On 2/5/25 6:40 PM, collinl@churchofjesuschrist.org wrote: ...
No. Time Source Destination Protocol Length Info
3724 2025-02-05 08:39:47.548638 192.168.100.128 10.61.157.1 ICMP 104 Echo (ping) request id=0x9e03, seq=0/0, ttl=64 (no response found!)
Either the routing is wrong, or a firewall is blocking it.
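(If it is routing: one thing worth checking is whether replies to the 172.24.4.0/24 floating range have any way back at all. That subnet only exists on br-ex, so unless the upstream gateway has a route for 172.24.4.0/24 pointing at the host, or the host SNATs that range, the echo replies have nowhere to go. Purely as a test sketch on the host, assuming nothing else already NATs that range:)

# see whether any SNAT/MASQUERADE exists for the floating range
iptables -t nat -nvL POSTROUTING
# test only: masquerade the floating range behind the host address
iptables -t nat -A POSTROUTING -s 172.24.4.0/24 ! -d 172.24.4.0/24 -j MASQUERADE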
Thank you tjoen. Here is my route table from the host:

# ip route
default via 10.61.157.1 dev br-ex
10.61.157.0/24 dev br-ex scope link
172.24.4.0/24 dev br-ex proto kernel scope link src 172.24.4.1

Does that look correct? (I have attempted to change the scope to global for both the 10.61.157.0/24 and 172.24.4.0/24 routes, but that didn't seem to make a difference.) As for the firewall, firewalld is disabled/dead, so it isn't a (local) firewall -- at least as far as I can tell.

Thanks!
Collin
As I have been unable to get it working with packstack, I took a different route and did an AIO install via openstack-ansible, using this quickstart guide: https://docs.openstack.org/openstack-ansible/2024.2/user/aio/quickstart.html

I just wanted to report that I did have to make several adjustments for that install to work for me.

I found that before running "openstack-ansible openstack.osa.setup_infrastructure", I had to make sure to enable the Rocky extras repo. There is probably a much better way to do it than I did, but this is what I did: in /etc/ansible/ansible_collections/openstack/osa/roles/glusterfs/vars/redhat.yml I changed this:

glusterfs_server_dnf_enable: []

to this:

glusterfs_server_dnf_enable:
  - extras

And before running "openstack-ansible openstack.osa.setup_openstack", I had to modify the /etc/ansible/ansible_collections/openstack/osa/playbooks/neutron.yml file by adding this:

  tasks:
    - name: Enable the extras repository
      command: dnf config-manager --enable extras

Also, the tempest jobs all fail for me, so I got past that by commenting out the following entries in the /etc/ansible/ansible_collections/openstack/osa/playbooks/setup_openstack.yml file:

# - name: Importing tempest playbook
#   import_playbook: openstack.osa.tempest

# - name: Importing rally playbook
#   import_playbook: openstack.osa.rally

After making those changes to the openstack-ansible code, the instance I spun up is able to reach the internet as expected. Thank you to everyone who responded trying to help me figure this out.
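(For anyone hitting the same repo problem: a quick way to confirm the extras repository is actually enabled on the host before running the playbooks -- just a sanity check, not something from the quickstart guide itself:)

# verify the repo state, and enable it host-wide if needed
dnf repolist --enabled | grep -i extras
dnf config-manager --enable extras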
participants (4)
- Collin Linford
- collinl@churchofjesuschrist.org
- Eugen Block
- tjoen