About the use of security groups with neutron ports

ahmed.zaky.abdallah at gmail.com
Fri Dec 27 14:38:03 UTC 2019


Thank you very much, Slawek.

If I have multiple configuration files, how do I know which one neutron is currently loading?

For example, in my environment I have:

*	ml2_conf.ini
*	ml2_conf_odl.ini 
*	ml2_conf_sriov.ini 
*	openvswitch_agent.ini 
*	sriov_agent.ini

[root at overcloud-controller-0 cbis-admin]# cd /etc/neutron/plugins/ml2/
[root at overcloud-controller-0 ml2]# ls
ml2_conf.ini  ml2_conf_odl.ini  ml2_conf_sriov.ini  openvswitch_agent.ini  sriov_agent.ini

Which one of these is used?
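(I assume one way to check is to look at the --config-file arguments each neutron service or agent is started with on the node, e.g.:

    ps -ef | grep neutron-openvswitch-agent
    ps -ef | grep neutron-sriov-nic-agent

and see which of the files under /etc/neutron/plugins/ml2/ show up there, but please correct me if there is a better way.)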

 

Cheers,

Ahmed

-----Original Message-----
From: Slawek Kaplonski <skaplons at redhat.com> 
Sent: Friday, December 27, 2019 10:28 AM
To: ahmed.zaky.abdallah at gmail.com
Cc: openstack-discuss at lists.openstack.org
Subject: Re: About the use of security groups with neutron ports

 

Hi,

 

> On 27 Dec 2019, at 00:14, ahmed.zaky.abdallah at gmail.com wrote:

> 

> Hi All,

>  

> I am trying to wrap my head around something I came across in one of the OpenStack deployments. I am running Telco VNFs; one of them has several VMs using SR-IOV interfaces.

>  

> On one of my VNFs on OpenStack, I mistakenly defined the IPv6 address of the Gm bearer interface to be exactly the same as the IPv6 gateway. As I hate re-onboarding, I decided to embark on the journey of changing the IPv6 address of the Gm bearer interface manually on the application side, and everything went fine.

>  

> After two weeks, my customer started complaining about one-way RTP flow. The customer was reluctant to blame the operation I had carried out, because everything had worked smoothly after my modification.

> After days of investigation, I remembered that I have port security enabled, which means AAP (Allowed-Address-Pairs) are defined per vPort (the AAP contain the floating IP address of the VM so that the security rules allow traffic to and from this VIP). I gave it a try and edited the AAP to include the correct new IPv6 address. Once I did that, everything started working fine.
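> 
> (For reference, I believe the AAP edit boils down to the equivalent of the following, where the port UUID and the IPv6 address are placeholders:
> 
>     openstack port set --allowed-address ip-address=2001:db8::10 <gm-port-uuid>
> 
> as far as I know, --allowed-address appends a new entry to the port's existing allowed address pairs.)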

>  

> The only logical explanation at the time was that the security group rules really are enforced.

>  

> Now, I am trying to understand how the iptables rules are really applied. I did some digging, and it seems we can control the firewall driver at two levels:

>  

>             • Nova compute 

>             • ML2 plugin

>  

> I was curious and checked nova.conf; it already has the following line: firewall_driver=nova.virt.firewall.NoopFirewallDriver

>  

> However, checking the ML2 plugin configuration, I found the following:

>  

>     [securitygroup]
> 
>     #
>     # From neutron.ml2
>     #
> 
>     # Driver for security groups firewall in the L2 agent (string value)
>     #firewall_driver = <None>
>     firewall_driver = openvswitch

>  

> So, I am jumping to the conclusion that the ML2 plugin is the one responsible for enforcing the firewall rules in my case.

>  

> Have you had a similar experience?

> Is my assumption correct that, if I comment out the ML2 plugin firewall driver, port security has no effect at all and security groups won't be enforced?

 

The firewall_driver config option has to be set to some value. You can set firewall_driver to “noop” to disable this feature completely for all ports.

But please remember that you need to set it on the agent's side, i.e. on the compute nodes, not on the neutron-server side.

Also, if you want to disable it only for some ports, you can set “port_security_enabled” to False on those ports; then security groups will not be applied to such a port, and you will not need to configure any additional IPs in allowed address pairs for it.
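
For example (just a sketch; the file is the openvswitch_agent.ini from your listing and the port ID is a placeholder), on each compute node:

    [securitygroup]
    firewall_driver = noop

and restart the neutron-openvswitch-agent afterwards. Or, for a single port, with the openstack CLI (security groups usually have to be removed from the port before port security can be disabled):

    openstack port set --no-security-group --disable-port-security <port-uuid>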

 

>  

> Cheers,

> Ahmed

 

— 

Slawek Kaplonski

Senior software engineer

Red Hat

 
