[openstack-dev] [neutron][ml2] Mechanism drivers ! OpenvSwich or Linuxbridge or both of them?

Kevin Benton kevin at benton.pub
Thu Jan 5 12:01:24 UTC 2017


The mechanism drivers populate the vif details that tell nova how it's
supposed to set up the VM port. So the linux bridge driver tells it the port
type is linux bridge[1] and the OVS driver tells it the type is OVS.
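That population step can be sketched roughly like this (illustrative Python, not Neutron source; the vif_type strings are the well-known values, and the details dict is simplified to just the port-filter capability):

```python
# Illustrative sketch, NOT actual Neutron code: the vif_type and vif
# details a mechanism driver would populate on a bound port.
VIF_TYPE_BRIDGE = "bridge"  # reported by the linuxbridge driver
VIF_TYPE_OVS = "ovs"        # reported by the openvswitch driver

def vif_details_for(driver_name):
    """Return the (vif_type, vif_details) pair a driver populates."""
    if driver_name == "linuxbridge":
        return VIF_TYPE_BRIDGE, {"port_filter": True}
    if driver_name == "openvswitch":
        return VIF_TYPE_OVS, {"port_filter": True}
    raise ValueError("unknown driver: %s" % driver_name)
```

Nova then reads the vif_type off the bound port and picks the matching plug routine.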

So if you have both loaded and the OVS agent is running on the compute node,
the following steps will happen:

* nova sends a port update populating the host_id of the compute node the
port will be on
* ML2 processes the update, starts the port binding operation, and calls
each driver
* The linux bridge mech driver will see that it has no active agents on
that host, so it will not bind the port
* The openvswitch mech driver will see that it does have an active agent,
so it will bind the port and populate the details indicating it's an OVS
port
* The updated port, with the vif details indicating that it's an OVS port,
will be returned to Nova, and Nova will wire up the port for OVS
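The steps above can be modeled in a few lines (a toy model, not the real ML2 code; the agent type names and the inventory dict are made up for illustration):

```python
# Toy model of the binding loop described above (not Neutron source).
# Each driver binds only when an agent of its type is alive on the host.

AGENTS = {  # hypothetical agent inventory: host -> running agent types
    "compute-1": {"Open vSwitch agent"},
}

DRIVERS = [  # (driver name, agent type it requires, vif_type it reports)
    ("linuxbridge", "Linux bridge agent", "bridge"),
    ("openvswitch", "Open vSwitch agent", "ovs"),
]

def bind_port(host):
    """Return (driver_name, vif_type) from the first driver that binds."""
    for name, agent_type, vif_type in DRIVERS:
        if agent_type in AGENTS.get(host, set()):
            return name, vif_type  # driver binds, populates vif details
    return None  # no driver bound the port; the binding fails
```

With only the OVS agent alive on compute-1, the linuxbridge driver declines and the openvswitch driver binds, which is the sequence described above.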




[1] https://github.com/openstack/neutron/blob/bcd6fddb127f4fe3f7ce3415f5b5e0da910e0e0b/neutron/plugins/ml2/drivers/linuxbridge/mech_driver/mech_linuxbridge.py#L40-L43
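As an aside, loading both drivers on the Neutron server is just a matter of listing them in the ML2 plugin config (the path shown is the conventional location; adjust for your deployment):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (conventional path)
[ml2]
# Drivers are tried in order during port binding; each one only binds
# when an agent of its type is alive on the target host.
mechanism_drivers = openvswitch,linuxbridge
```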

On Wed, Jan 4, 2017 at 7:51 PM, zhi <changzhi1990 at gmail.com> wrote:

> Hi, Kevin. If I load the openvswitch and linuxbridge mechanism drivers in
> the neutron server, and run the ovs-agent on the compute nodes, what does
> the openvswitch mechanism driver do? What does the linuxbridge mechanism
> driver do? I think there must be some differences between the openvswitch
> and the linuxbridge mechanism drivers, but I can't see the exact
> difference between the two when running the ovs-agent on compute nodes.
>
> 2017-01-04 16:16 GMT+08:00 Kevin Benton <kevin at benton.pub>:
>
>> Note that with the openvswitch and linuxbridge mechanism drivers, it is
>> safe to have both loaded on the Neutron server at the same time, since
>> each driver will only bind a port if it has an agent of its type running
>> on the host.
>>
>> On Fri, Dec 30, 2016 at 1:24 PM, Sławek Kapłoński <slawek at kaplonski.pl>
>> wrote:
>>
>>> Hello,
>>>
>>> I don't know what hierarchical port binding is, but regarding mechanism
>>> drivers: you should use the mechanism driver that matches the L2 agent
>>> you are running on your compute/network nodes. If you have the OVS L2
>>> agent, then you should enable the openvswitch mechanism driver.
>>> In general, both of those drivers do similar work on the neutron-server
>>> side: they check whether the proper agent type is running on the host
>>> and whether the other conditions required to bind the port are met.
>>> Mechanism drivers can also carry some additional information about the
>>> backend driver; for example, there is info about the supported QoS rule
>>> types for each backend driver (OVS, Linuxbridge and SR-IOV).
>>>
>>> BTW, IMHO you should send such questions to
>>> openstack at lists.openstack.org
>>>
>>> --
>>> Best regards / Pozdrawiam
>>> Sławek Kapłoński
>>> slawek at kaplonski.pl
>>>
>>> On Fri, 30 Dec 2016, zhi wrote:
>>>
>>> > Hi, all
>>> >
>>> > First of all, Happy New Year, everyone!
>>> >
>>> > I have a question about mechanism drivers when using ML2 driver.
>>> >
>>> > When should I use the openvswitch mechanism driver?
>>> >
>>> > When should I use the linuxbridge mechanism driver?
>>> >
>>> > And when should I use both the openvswitch and linuxbridge mechanism drivers?
>>> >
>>> > In my opinion, the ML2 plugin supports hierarchical port binding. By
>>> > using hierarchical port binding, Neutron will know every binding in
>>> > the network topology, won't it? If yes, where can I find all of that
>>> > binding info? And what is the relationship between hierarchical port
>>> > binding and mechanism drivers?
>>> >
>>> >
>>> > Hope for your reply.
>>> >
>>> > Thanks
>>> > Zhi Chang
>>>
>>> > __________________________________________________________________________
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>
>

