[openstack-dev] [Neutron] firewall_driver and ML2 and vif_security discussion

Nachi Ueno nachi at ntti3.com
Thu Jan 16 22:35:17 UTC 2014


Thanks, Kyle!

2014/1/16 Kyle Mestery <mestery at siliconloons.com>:
> On Jan 16, 2014, at 4:27 PM, Nachi Ueno <nachi at ntti3.com> wrote:
>
>> Hi Bob, Kyle
>>
>> I pushed (A) https://review.openstack.org/#/c/67281/.
>> so could you review it?
>>
> Just did, looks good Nachi, thanks!
>
>> 2014/1/16 Robert Kukura <rkukura at redhat.com>:
>>> On 01/16/2014 03:13 PM, Kyle Mestery wrote:
>>>>
>>>> On Jan 16, 2014, at 1:37 PM, Nachi Ueno <nachi at ntti3.com> wrote:
>>>>
>>>>> Hi Amir
>>>>>
>>>>> 2014/1/16 Amir Sadoughi <amir.sadoughi at rackspace.com>:
>>>>>> Hi all,
>>>>>>
>>>>>> I just want to make sure I understand the plan and its consequences. I’m on board with the YAGNI principle of hardwiring mechanism drivers to return their firewall_driver types for now.
>>>>>>
>>>>>> However, after (A), (B), and (C) are completed, to allow for Open vSwitch-based security groups (blueprint ovs-firewall-driver) is it correct to say: we’ll need to implement a method such that the ML2 mechanism driver is aware of its agents and each of the agents' configured firewall_driver? i.e. additional RPC communication?
>>>>>>
>>>>>> From yesterday’s meeting: <http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html>
>>>>>>
>>>>>> 16:44:17 <rkukura> I've suggested that the L2 agent could get the vif_security info from its firewall_driver, and include this in its agents_db info
>>>>>> 16:44:39 <rkukura> then the bound MD would return this as the vif_security for the port
>>>>>> 16:45:47 <rkukura> existing agents_db RPC would send it from agent to server and store it in the agents_db table
>>>>>>
>>>>>> Does the above suggestion change with the plan as-is now? From Nachi’s response, it seemed like maybe we should support concurrent firewall_driver instances in a single agent. i.e. don’t statically configure firewall_driver in the agent, but let the MD choose the firewall_driver for the port based on what firewall_drivers the agent supports.
>>>
>>> I don't see the need for anything that complex, although it could
>>> certainly be done in any MD+agent that needed it.
>>>
>>> I personally feel statically configuring a firewall driver for an L2
>>> agent is sufficient right now, and all ports handled by that agent will
>>> use that firewall driver.
>>>
>>> Clearly, different kinds of L2 agents that coexist within a deployment
>>> may use different firewall drivers. For example, linuxbridge-agent might
>>> use iptables-firewall-driver, openvswitch-agent might use
>>> ovs-firewall-driver, and hyperv-agent might use something else.
>>>
>>> I can also imagine cases where different instances of the same kind of
>>> L2 agent on different nodes might use different firewall drivers. Just
>>> as a hypothetical example, let's say that the ovs-firewall-driver
>>> requires new OVS features (maybe connection tracking). A deployment
>>> might have this new OVS feature available on some of its nodes, but not
>>> on others. It could be useful to configure openvswitch-agent on the
>>> nodes with the new OVS version to use ovs-firewall-driver, and configure
>>> openvswitch-agent on the nodes without the new OVS version to use
>>> iptables-firewall-driver. That kind of flexibility seems best supported
>>> by simply configuring the firewall driver in /ovs_neutron_plugin.ini on
>>> each node, which is what we currently do.
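>>>
>>> Purely as an illustration of that per-node flexibility (the
>>> ovs-firewall-driver class path below is hypothetical, and the section
>>> name is just how I'd expect it to look, not a settled interface):
>>>
>>>   # ovs_neutron_plugin.ini on a node with the new OVS feature:
>>>   [securitygroup]
>>>   firewall_driver = neutron.agent.ovs_firewall.OVSFirewallDriver
>>>
>>>   # ovs_neutron_plugin.ini on a node without it:
>>>   [securitygroup]
>>>   firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver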
>>>
>>>>>
>>>>> Let's say we have OpenFlowBasedFirewallDriver and
>>>>> IptablesBasedFirewallDriver in the future.
>>>>> I believe there is no use case for letting the user select such an
>>>>> implementation detail per host.
>>>
>>> I suggested a hypothetical use case above. Not sure how important it is,
>>> but I'm hesitant to rule it out without good reason.
>>
>> Our community's resources are limited, so we should focus on specific use
>> cases and functionality.
>> If there is no strong supporter for this use case, we shouldn't do it.
>> We should take the simplest implementation for the use cases we focus on.
>>
>>>>> So it is enough if we have a config option
>>>>> security_group_mode=(openflow or iptables) in the OVS MD configuration,
>>>>> and then update vif_security based on this value.
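>>>>>
>>>>> A rough sketch of what I mean (the option name, the keys in the
>>>>> vif_security dict, and the helper function are placeholders, not a
>>>>> concrete proposal):
>>>>>
>>>>>   # Hypothetical mapping from a server-side security_group_mode
>>>>>   # setting to the vif_security dict the OVS MD would put on the port.
>>>>>   VIF_SECURITY_BY_MODE = {
>>>>>       'iptables': {'port_filter': True, 'hybrid_plug': True},
>>>>>       'openflow': {'port_filter': True, 'hybrid_plug': False},
>>>>>   }
>>>>>
>>>>>   def get_vif_security(mode):
>>>>>       # 'mode' would come from the OVS MD's config section.
>>>>>       return VIF_SECURITY_BY_MODE[mode]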
>>>
>>> This is certainly one way the MD+agent combination could do it. It would
>>> require some RPC to transmit the choice of driver or mode to the agent.
>>> But I really don't think the MD and server have any business worrying
>>> about which firewall driver class runs in the L2 agent. Theoretically,
>>> the agent could be written in java;-). And don't forget that users may
>>> want to plug in a custom firewall driver class instead.
>>>
>>> I think these are the options, in descending order of my current preference:
>>>
>>> 1) Configure firewall_driver only in the agent and pass vif_security
>>> from the agent to the server. Each L2 agent gets the vif_security value
>>> from its configured driver and includes it in the agents_db RPC data.
>>> The MD copies the vif_security value from the agents_db to the port
>>> dictionary (roughly sketched after option 4 below).
>>>
>>> 2) Configure firewall_driver only in the agent but hardwire the
>>> vif_security value for each MD. This is a reasonable short-term solution
>>> until we actually have multiple firewall drivers that can work with a
>>> single MD+agent.
>>>
>>> 3) Configure firewall_driver only in the agent and configure the
>>> vif_security value for each MD in the server. This is a slight
>>> improvement on #2 but doesn't handle the use case above. It seems more
>>> complicated and error prone for the user than #1.
>>>
>>> 4) Configure the firewall_driver or security_group_mode for each MD in
>>> the server. This would mean some new RPC is needed for the agent to
>>> fetch this from the server at startup. This could be problematic if
>>> the server isn't running when the L2 agent starts.
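>>>
>>> To make option 1 concrete, it might look roughly like this (method and
>>> field names are guesses based on the discussion, not an actual patch):
>>>
>>>   # Agent side: report the driver's vif_security via the existing
>>>   # agents_db state report (assumes the firewall driver exposes a
>>>   # vif_security attribute).
>>>   def get_agent_configurations(firewall_driver):
>>>       return {'vif_security': firewall_driver.vif_security}
>>>
>>>   # Server side: the bound MD copies the value from the agent's
>>>   # agents_db record into the port dictionary.
>>>   def extend_port_dict(port, agent_db_entry):
>>>       port['vif_security'] = agent_db_entry['configurations']['vif_security']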
>>
>> Let's discuss this more once there is an OpenFlow-based security group
>> implementation.
>>
>> These are my thoughts on the general architecture:
>> - We should be able to manage such agent network behavior via the Agent
>> Resource REST API in the server.
>> - The server should control the agents.
>> - Agents should have only RPC connection information.
>>
>> So I'm +1 for option 4. The agent can't work without the server anyway,
>> and it can wait until it is connected to a server.
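>>
>> For example, a minimal sketch of the agent-waits-for-server behavior I
>> have in mind (the RPC call name here is made up):
>>
>>   import time
>>
>>   def wait_for_firewall_config(rpc_client, interval=5):
>>       # Keep retrying the (hypothetical) RPC until the server answers;
>>       # the agent has nothing useful to do before that anyway.
>>       while True:
>>           try:
>>               return rpc_client.get_firewall_config()
>>           except Exception:
>>               time.sleep(interval)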
>>
>>>>>
>>>> I agree with your thinking here Nachi. Leaving this as a global
>>>> configuration makes the most sense.
>>>>
>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Amir
>>>>>>
>>>>>>
>>>>>> On Jan 16, 2014, at 11:42 AM, Nachi Ueno <nachi at ntti3.com> wrote:
>>>>>>
>>>>>>> Hi Mathieu, Bob
>>>>>>>
>>>>>>> Thank you for your reply
>>>>>>> OK let's do (A) - (C) for now.
>>>>>>>
>>>>>>> (A) Remove firewall_driver from the server side
>>>>>>>   Remove Noop <-- I'll write a patch for this
>>>
>>> This gets replaced with the enable_security_group server config, right?
>>>
>>>>>>>
>>>>>>> (B) Update ML2 with extend_port_dict <-- Bob will push a new review for this
>>>>>>>
>>>>>>> (C) Fix the vif_security patch using (1) and (2). <-- I'll update the
>>>>>>> patch after (A) and (B) are merged
>>>>>>>   # config is hardwired for each mech driver for now
>>>
>>> I completely agree with doing A, B, and C now. My understanding is that
>>> this is equivalent to my option 2 above.
>>>
>>>>>>>
>>>>>>> (Optional D) Rethink the firewall_driver config in the agent
>>>
>>> See above for my current view on that. But a decision on D can be
>>> deferred for now, at least until we have a choice of firewall drivers.
>>>
>>> -Bob
>>>
>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> 2014/1/16 Robert Kukura <rkukura at redhat.com>:
>>>>>>>> On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> Your proposals make sense. Having the firewall driver configure so
>>>>>>>>> many things looks pretty strange.
>>>>>>>>
>>>>>>>> Agreed. I fully support proposed fix 1, adding an enable_security_group
>>>>>>>> config option, at least for ml2. I'm not sure whether making this sort of
>>>>>>>> change to the openvswitch or linuxbridge plugins at this stage is needed.
>>>>>>>>
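>>>>>>>> One way fix 1 might look on the server side (a sketch; the group name,
>>>>>>>> default, and help text are my guesses, not a settled interface):
>>>>>>>>
>>>>>>>>   from oslo.config import cfg
>>>>>>>>
>>>>>>>>   security_group_opts = [
>>>>>>>>       cfg.BoolOpt('enable_security_group', default=True,
>>>>>>>>                   help='Controls whether neutron security groups '
>>>>>>>>                        'are enabled on this server/plugin.'),
>>>>>>>>   ]
>>>>>>>>   cfg.CONF.register_opts(security_group_opts, 'SECURITYGROUP')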
>>>>>>>>
>>>>>>>>> Enabling security groups should be a plugin/MD decision, not a driver decision.
>>>>>>>>
>>>>>>>> I'm not so sure I support proposed fix 2, removing firewall_driver
>>>>>>>> configuration. I think with proposed fix 1, firewall_driver becomes an
>>>>>>>> agent-only configuration variable, which seems fine to me, at least for
>>>>>>>> now. The people working on ovs-firewall-driver need something like this
>>>>>>>> to choose between their new driver and the iptables driver. Each L2
>>>>>>>> agent could obviously revisit this later if needed.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> For ML2, in a first implementation, having vif security based on
>>>>>>>>> vif_type looks good too.
>>>>>>>>
>>>>>>>> I'm not convinced to support proposed fix 3, basing ml2's vif_security
>>>>>>>> on the value of vif_type. It seems to me that if vif_type was all that
>>>>>>>> determines how nova handles security groups, there would be no need for
>>>>>>>> either the old capabilities or new vif_security port attribute.
>>>>>>>>
>>>>>>>> I think each ML2 bound MechanismDriver should be able to supply whatever
>>>>>>>> vif_security (or capabilities) value it needs. It should be free to
>>>>>>>> determine that however it wants. It could be made configurable on the
>>>>>>>> server-side as Mathieu suggests below, or could be kept configurable in
>>>>>>>> the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
>>>>>>>> the server as I have previously suggested.
>>>>>>>>
>>>>>>>> As an initial step, until we really have multiple firewall drivers to
>>>>>>>> choose from, I think we can just hardwire each agent-based
>>>>>>>> MechanismDriver to return the correct vif_security value for its normal
>>>>>>>> firewall driver, as we currently do for the capabilities attribute.
>>>>>>>>
>>>>>>>> Also note that I really like the extend_port_dict() MechanismDriver
>>>>>>>> methods in Nachi's current patch set. This is a much nicer way for the
>>>>>>>> bound MechanismDriver to return binding-specific attributes than what
>>>>>>>> ml2 currently does for vif_type and capabilities. I'm working on a patch
>>>>>>>> taking that part of Nachi's code, fixing a few things, and extending it
>>>>>>>> to handle the vif_type attribute as well as the current capabilities
>>>>>>>> attribute. I'm hoping to post at least a WIP version of this today.
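>>>>>>>>
>>>>>>>> For the hardwired case, the shape I have in mind is roughly this (a
>>>>>>>> sketch of the extend_port_dict() idea, not Nachi's actual code; the
>>>>>>>> class, signature, and vif_security contents are assumptions):
>>>>>>>>
>>>>>>>>   # Hypothetical stand-in for an ML2 agent-based mechanism driver.
>>>>>>>>   class OvsMechanismDriverSketch(object):
>>>>>>>>
>>>>>>>>       # Hardwired until this agent has more than one firewall driver.
>>>>>>>>       vif_security = {'port_filter': True}
>>>>>>>>
>>>>>>>>       def extend_port_dict(self, port, port_context):
>>>>>>>>           # Only the bound MD fills in binding-specific attributes.
>>>>>>>>           port['vif_security'] = self.vif_security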
>>>>>>>>
>>>>>>>> I do support hardwiring the other plugins to return specific
>>>>>>>> vif_security values, but those values may need to depend on the value of
>>>>>>>> enable_security_group from proposal 1.
>>>>>>>>
>>>>>>>> -Bob
>>>>>>>>
>>>>>>>>> Once OVSfirewallDriver is available, the firewall drivers that
>>>>>>>>> the operator wants to use should be in an MD config file/section, and
>>>>>>>>> the ovs MD could bind one of the firewall drivers during
>>>>>>>>> port_create/update/get.
>>>>>>>>>
>>>>>>>>> Best,
>>>>>>>>> Mathieu
>>>>>>>>>
>>>>>>>>> On Wed, Jan 15, 2014 at 6:29 PM, Nachi Ueno <nachi at ntti3.com> wrote:
>>>>>>>>>> Hi folks
>>>>>>>>>>
>>>>>>>>>> Security groups for the OVS agent (ovs plugin or ML2) are currently broken,
>>>>>>>>>> so we need the vif_security port binding to fix this
>>>>>>>>>> (https://review.openstack.org/#/c/21946/)
>>>>>>>>>>
>>>>>>>>>> We discussed the architecture for this at the ML2 weekly meeting, and
>>>>>>>>>> I want to continue the discussion here.
>>>>>>>>>>
>>>>>>>>>> Here is my proposal for how to fix it.
>>>>>>>>>>
>>>>>>>>>> https://docs.google.com/presentation/d/1ktF7NOFY_0cBAhfqE4XjxVG9yyl88RU_w9JcNiOukzI/edit#slide=id.p
>>>>>>>>>>
>>>>>>>>>> Best
>>>>>>>>>> Nachi
>>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>


