Yes, the security group allows ICMP traffic as well as SSH traffic, on top of the default security group settings.
SELinux is running in permissive mode (so it shouldn't be blocking anything), and firewalld is disabled.
We don't use AppArmor.
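For reference, I checked those with roughly the following on the controller/compute node (I'm using the default security group here; the group name may differ):
# openstack security group rule list default
# getenforce
# systemctl is-enabled firewalld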
I'm not an iptables expert either, but nothing in the rules jumps out at me as problematic. Here are the iptables rules:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports amqps,amqp /* 001 amqp incoming amqp_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports fs-agent /* 001 aodh-api incoming aodh_api */
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports iscsi-target /* 001 cinder incoming cinder_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports 8776 /* 001 cinder-api incoming cinder_api */
ACCEPT tcp -- anywhere anywhere multiport dports armtechdaemon /* 001 glance incoming glance_api */
ACCEPT tcp -- anywhere anywhere multiport dports 8041 /* 001 gnocchi-api incoming gnocchi_api */
ACCEPT tcp -- anywhere anywhere multiport dports http /* 001 horizon 80 incoming */
ACCEPT tcp -- anywhere anywhere multiport dports commplex-main /* 001 keystone incoming keystone */
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports mysql /* 001 mariadb incoming mariadb_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports 9696 /* 001 neutron server incoming neutron_server_10.61.157.59 */
ACCEPT udp -- l21652.ldschurch.org anywhere udp dpt:geneve /* 001 neutron tunnel port incoming neutron_tunnel_10.61.157.59_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports 8773,8774,8775,uec /* 001 nova api incoming nova_api */
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports rfb:cvsup /* 001 nova compute incoming nova_compute */
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports 16509,49152:49215 /* 001 nova qemu migration incoming nova_qemu_migration_10.61.157.59_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports 6080 /* 001 novncproxy incoming */
ACCEPT tcp -- anywhere anywhere multiport dports 6641 /* 001 ovn northd incoming ovn_northd_10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports 6642 /* 001 ovn southd incoming ovn_southd_10.61.157.59 */
ACCEPT tcp -- l21652.ldschurch.org anywhere tcp dpt:redis /* 001 redis service incoming redis service from 10.61.157.59 */
ACCEPT tcp -- anywhere anywhere multiport dports webcache /* 001 swift proxy incoming swift_proxy */
ACCEPT tcp -- l21652.ldschurch.org anywhere multiport dports x11,6001,6002,rsync /* 001 swift storage and rsync incoming swift_storage_and_rsync_10.61.157.59 */
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* 000 forward in */
ACCEPT all -- anywhere anywhere /* 000 forward out */
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
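One caveat: iptables -L only shows the filter table. If it would help, I can also pull the nat table (where any SNAT/MASQUERADE rules for the instances would live) and the per-rule packet counters, with something like:
# iptables -t nat -L -n -v
# iptables -L -n -v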
Thanks!
Collin
From: Eugen Block <eblock@nde.ag>
Sent: Monday, February 3, 2025 8:20 AM
To: openstack-discuss@lists.openstack.org <openstack-discuss@lists.openstack.org>
Subject: [Ext:] Re: openstack instances unable to reach the internet
We use flat, vlan and vxlan.
Did you check if the instance's security group allows ICMP traffic? Is
there anything else potentially blocking traffic, like firewalld,
AppArmor, SELinux and so on?
I guess looking at iptables could help here as well, but I'm not sure
if I can really help with that.
Quoting collinl@churchofjesuschrist.org:
> Eugen, thank you very much for your response.
>
> What network type do you typically use? flat? vxlan?
>
> I don't believe that there is routing between the controller/compute
> node and the gateway...and from my testing, that is where the pings
> are getting lost.
>
> Here is an example from the cirros instance pinging the
> controller/compute node (successfully) and the external gateway (not
> successfully):
> $ ping 10.61.157.59
> PING 10.61.157.59 (10.61.157.59): 56 data bytes
> 64 bytes from 10.61.157.59: seq=0 ttl=63 time=1.378 ms
> 64 bytes from 10.61.157.59: seq=1 ttl=63 time=1.317 ms
> 64 bytes from 10.61.157.59: seq=2 ttl=63 time=0.901 ms
> ^C
> --- 10.61.157.59 ping statistics ---
> 3 packets transmitted, 3 packets received, 0% packet loss
> round-trip min/avg/max = 0.901/1.198/1.378 ms
> $ ping 10.61.157.1
> PING 10.61.157.1 (10.61.157.1): 56 data bytes
> ^C
> --- 10.61.157.1 ping statistics ---
> 9 packets transmitted, 0 packets received, 100% packet loss
>
> How would I go about determining what is causing the packets to get
> lost at that point?
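> (For example, I imagine running tcpdump on the node while pinging
> from the instance would show whether the echo requests ever leave
> the node, something like:
> # tcpdump -i any -nn icmp and host 10.61.157.1
> though I'm not sure if that's the right approach.)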
>
>> From the host 10.61.157.59 itself, it CAN ping the 10.61.157.1
>> gateway just fine...and all sites externally (e.g. 8.8.8.8)
>
> Here is the traceroute output from the controller/compute node,
> showing a successful trace with only one hop...so I don't
> think there is anything between the host and the gateway, network-wise:
> # traceroute 10.61.157.1
> traceroute to 10.61.157.1 (10.61.157.1), 30 hops max, 60 byte packets
> 1 _gateway (10.61.157.1) 0.374 ms 0.322 ms 0.297 ms