[openstack-dev] [Neutron] firewall_driver and ML2 and vif_security discussion

Amir Sadoughi amir.sadoughi at RACKSPACE.COM
Thu Jan 16 19:25:04 UTC 2014


Hi all,

I just want to make sure I understand the plan and its consequences. I’m on board with the YAGNI principle of hardwiring mechanism drivers to return their firewall_driver types for now. 

However, after (A), (B), and (C) are completed, to allow for Open vSwitch-based security groups (blueprint ovs-firewall-driver), is it correct to say that we’ll need some mechanism by which the ML2 mechanism driver knows about its agents and each agent’s configured firewall_driver, i.e. additional RPC communication?

From yesterday’s meeting: <http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html>

16:44:17 <rkukura> I've suggested that the L2 agent could get the vif_security info from its firewall_driver, and include this in its agents_db info
16:44:39 <rkukura> then the bound MD would return this as the vif_security for the port
16:45:47 <rkukura> existing agents_db RPC would send it from agent to server and store it in the agents_db table
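
To make sure I’m reading that right, the agent side would look roughly
like this (just a sketch; the vif_security key in the configurations
dict is the proposed attribute, not something that exists today):

    # The L2 agent asks its configured firewall_driver what vif_security
    # it provides, and includes that in the configurations dict it already
    # sends to the server via the existing agents_db report_state RPC.
    class IptablesHybridFirewallDriver(object):
        # e.g. an iptables-based driver filters in the agent
        vif_security = {'port_filter': True}

    firewall_driver = IptablesHybridFirewallDriver()

    agent_state = {
        'binary': 'neutron-openvswitch-agent',
        'host': 'compute-1',
        'agent_type': 'Open vSwitch agent',
        'configurations': {
            # existing keys (bridge_mappings, tunnel types, ...) omitted
            'vif_security': firewall_driver.vif_security,
        },
    }
    # The server stores agent_state in the agents_db table, and the bound
    # MD can later return configurations['vif_security'] as the port's
    # vif_security.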

Does the above suggestion change with the plan as it stands now? From Nachi’s response, it seemed like maybe we should support concurrent firewall_driver instances in a single agent, i.e. not statically configure firewall_driver in the agent, but let the MD choose the firewall_driver for the port based on which firewall_drivers the agent supports.
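
If we went that way, I’d imagine the server side doing something like
this per port (again only a sketch with made-up names, not an actual
patch):

    # The bound MD looks at the firewall_drivers the agent reported via
    # agents_db and picks one (plus its vif_security) for the port,
    # instead of both sides statically configuring firewall_driver.
    SUPPORTED_FIREWALL_DRIVERS = [
        ('ovs_firewall', {'port_filter': True}),
        ('iptables_hybrid', {'port_filter': True}),
    ]

    def choose_firewall_driver(agent):
        reported = agent.get('configurations', {}).get('firewall_drivers', [])
        for name, vif_security in SUPPORTED_FIREWALL_DRIVERS:
            if name in reported:
                return name, vif_security
        # the agent reported nothing usable: fall back to no filtering
        return None, {'port_filter': False}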

Thanks,

Amir


On Jan 16, 2014, at 11:42 AM, Nachi Ueno <nachi at ntti3.com> wrote:

> Hi Mathieu, Bob
> 
> Thank you for your reply
> OK let's do (A) - (C) for now.
> 
> (A) Remove firewall_driver from server side
>     Remove Noop <-- I'll write a patch for this
> 
> (B) update ML2 with extend_port_dict <-- Bob will push a new review for this
> 
> (C) Fix vif_security patch using (1) and (2). <-- I'll update the
> patch after (A) and (B) are merged
>     # config is hardwired for each mech driver for now
> 
> (Optional D) Rethink firewall_driver config in the agent
> 
> 
> 
> 
> 
> 2014/1/16 Robert Kukura <rkukura at redhat.com>:
>> On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
>>> Hi,
>>> 
>>> Your proposals make sense. Having the firewall driver configure so
>>> many things looks pretty strange.
>> 
>> Agreed. I fully support proposed fix 1, adding enable_security_group
>> config, at least for ml2. I'm not sure whether making this sort of
>> change to the openvswitch or linuxbridge plugins at this stage is needed.
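>>
>> Concretely I'd expect the option to end up looking something like this
>> (just a sketch; the group name and default are illustrative):
>>
>>     from oslo.config import cfg
>>
>>     security_group_opts = [
>>         cfg.BoolOpt('enable_security_group', default=True,
>>                     help='Whether neutron security groups are enabled'),
>>     ]
>>     cfg.CONF.register_opts(security_group_opts, 'SECURITYGROUP')
>>
>> so deployments that don't want neutron security groups at all can
>> simply turn them off on both server and agent.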
>> 
>> 
>>> Enabling security groups should be a plugin/MD decision, not a driver decision.
>> 
>> I'm not so sure I support proposed fix 2, removing firewall_driver
>> configuration. I think with proposed fix 1, firewall_driver becomes an
>> agent-only configuration variable, which seems fine to me, at least for
>> now. The people working on ovs-firewall-driver need something like this
>> to choose between their new driver and the iptables driver. Each L2
>> agent could obviously revisit this later if needed.
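>>
>> In other words, the agent keeps doing roughly what it does now (a
>> sketch; the registration, path and default are illustrative):
>>
>>     from oslo.config import cfg
>>     from neutron.openstack.common import importutils
>>
>>     cfg.CONF.register_opts(
>>         [cfg.StrOpt('firewall_driver',
>>                     default='neutron.agent.linux.iptables_firewall.'
>>                             'OVSHybridIptablesFirewallDriver')],
>>         'SECURITYGROUP')
>>
>>     # The agent instantiates whatever firewall driver the operator
>>     # configured; ovs-firewall-driver would just add one more class
>>     # for this option to point at.
>>     firewall = importutils.import_object(
>>         cfg.CONF.SECURITYGROUP.firewall_driver)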
>> 
>>> 
>>> For ML2, in a first implementation, having vif security based on
>>> vif_type looks good too.
>> 
>> I'm not convinced to support proposed fix 3, basing ml2's vif_security
>> on the value of vif_type. It seems to me that if vif_type were all that
>> determined how nova handles security groups, there would be no need for
>> either the old capabilities or new vif_security port attribute.
>> 
>> I think each ML2 bound MechanismDriver should be able to supply whatever
>> vif_security (or capabilities) value it needs. It should be free to
>> determine that however it wants. It could be made configurable on the
>> server-side as Mathieu suggests below, or could be kept configurable in
>> the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
>> the server as I have previously suggested.
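>>
>> In code terms the bound MechanismDriver would then do something
>> roughly like this (a sketch, not the actual agents_db schema):
>>
>>     # Pull vif_security out of the agents_db info the agent reported,
>>     # falling back to "no port filtering" for agents that don't
>>     # report anything.
>>     def vif_security_for_agent(agent):
>>         configurations = agent.get('configurations', {})
>>         return configurations.get('vif_security', {'port_filter': False})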
>> 
>> As an initial step, until we really have multiple firewall drivers to
>> choose from, I think we can just hardwire each agent-based
>> MechanismDriver to return the correct vif_security value for its normal
>> firewall driver, as we currently do for the capabilities attribute.
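>>
>> That is, for now each such MechanismDriver would just carry something
>> like the following and return it for every port it binds (the value
>> shown is illustrative):
>>
>>     # hardwired, mirroring what we do today for capabilities
>>     VIF_SECURITY = {'port_filter': True}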
>> 
>> Also note that I really like the extend_port_dict() MechanismDriver
>> methods in Nachi's current patch set. This is a much nicer way for the
>> bound MechanismDriver to return binding-specific attributes than what
>> ml2 currently does for vif_type and capabilities. I'm working on a patch
>> taking that part of Nachi's code, fixing a few things, and extending it
>> to handle the vif_type attribute as well as the current capabilities
>> attribute. I'm hoping to post at least a WIP version of this today.
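>>
>> For reference, the shape of that hook is roughly the following
>> (simplified from my reading of the patch; the exact signature may
>> differ):
>>
>>     # A bound mechanism driver extends the port dict with its
>>     # binding-specific attributes when the port is serialized.
>>     class ExampleMechanismDriver(object):
>>         def extend_port_dict(self, session, port_db, port_dict):
>>             port_dict['binding:vif_type'] = 'ovs'
>>             port_dict['binding:vif_security'] = {'port_filter': True}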
>> 
>> I do support hardwiring the other plugins to return specific
>> vif_security values, but those values may need to depend on the value of
>> enable_security_group from proposal 1.
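>>
>> e.g. something along the lines of (a sketch; assumes the option is
>> registered as in the earlier snippet):
>>
>>     from oslo.config import cfg
>>
>>     # A monolithic plugin derives its hardwired vif_security from the
>>     # enable_security_group option of proposal 1.
>>     vif_security = {
>>         'port_filter': cfg.CONF.SECURITYGROUP.enable_security_group,
>>     }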
>> 
>> -Bob
>> 
>>> Once OVSFirewallDriver is available, the firewall drivers that
>>> the operator wants to use should be in a MD config file/section, and
>>> the ovs MD could bind one of those firewall drivers during
>>> port_create/update/get.
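>>>
>>> Something like a per-MD option, e.g. (names are made up):
>>>
>>>     from oslo.config import cfg
>>>
>>>     # Hypothetical option for the ovs mechanism driver listing the
>>>     # firewall drivers it is allowed to bind to a port.
>>>     ovs_md_opts = [
>>>         cfg.ListOpt('firewall_drivers', default=['iptables_hybrid']),
>>>     ]
>>>     cfg.CONF.register_opts(ovs_md_opts, 'ml2_mech_ovs')
>>>
>>> and the MD would pick one of those per port at binding time.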
>>> 
>>> Best,
>>> Mathieu
>>> 
>>> On Wed, Jan 15, 2014 at 6:29 PM, Nachi Ueno <nachi at ntti3.com> wrote:
>>>> Hi folks
>>>> 
>>>> Security groups for the OVS agent (ovs plugin or ML2) are currently broken,
>>>> so we need the vif_security port binding to fix this
>>>> (https://review.openstack.org/#/c/21946/)
>>>> 
>>>> We discussed the architecture for this in the ML2 weekly meetings, and
>>>> I want to continue the discussion here.
>>>> 
>>>> Here is my proposal for how to fix it.
>>>> 
>>>> https://docs.google.com/presentation/d/1ktF7NOFY_0cBAhfqE4XjxVG9yyl88RU_w9JcNiOukzI/edit#slide=id.p
>>>> 
>>>> Best
>>>> Nachi
>>>> 
>> 
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



