[openstack-dev] [Neutron] ML2 extensions info propagation

Mohammad Banikazemi mb at us.ibm.com
Thu May 8 15:11:20 UTC 2014


Hi Mathieu,

Yes, the enhancement of the get_device_details method sounds like an
interesting and useful option.
The point of using drivers in the agent to support extensions is to make
the agent more modular and to let each agent selectively support only the
extensions it needs. If we take the approach you are suggesting and
eliminate or reduce the use of extension-specific RPCs, how can we achieve
the modularity goal above? Is there a way to make these options work
together? More broadly, what would be the impact of your proposal on the
modularity of the agent (if any)?

Please note that, as per the discussion during the ML2 meeting yesterday,
we are going to have a single etherpad for each of the ML2 sessions. The
etherpad for the Modular Layer 2 Agent session can be found at [2] in your
original email below. We may reorganize the information that is already
there, but please do add your comments there.

Thanks,

Mohammad




From:	Mathieu Rohon <mathieu.rohon at gmail.com>
To:	OpenStack Development Mailing List
            <openstack-dev at lists.openstack.org>,
Date:	05/07/2014 10:25 AM
Subject:	[openstack-dev] [Neutron] ML2 extensions info propagation



Hi ML2er and others,

I'm following the discussions around ML2 for the summit. Unfortunately I
won't attend the summit, so I'll try to participate through the
mailing list and etherpads.

I'm especially interested in extension support by the Mechanism Driver [1]
and the Modular agent [2]. During the Juno cycle I'll work on the ability
to propagate IPVPN information (route targets) down to the agent, so
that the agent can manage MPLS encapsulation.
I think the easiest way to do that is to enhance the
get_device_details() RPC message so that the dict it sends includes the
network extension information for the port concerned.
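To make the idea concrete, here is a minimal sketch of what such an
enriched get_device_details() reply could look like. The function name
matches the RPC being discussed, but the dict keys (e.g. a 'bgpvpn'
namespace carrying 'route_targets') are purely illustrative assumptions,
not the actual Neutron RPC schema:

```python
# Hypothetical sketch: merging per-extension data into the core port
# details dict that the agent already receives today.

def get_device_details(base_details, extension_info):
    """Return the core port details enriched with extension data.

    base_details: the dict the agent receives today (port id,
        network id, segmentation info, ...).
    extension_info: mapping of extension alias -> serialized data for
        the port's network, e.g. {'bgpvpn': {'route_targets': [...]}}.
    """
    details = dict(base_details)
    # Namespace each extension's data under its alias so that
    # extension keys can never collide with core keys.
    for alias, data in extension_info.items():
        details[alias] = data
    return details


details = get_device_details(
    {'port_id': 'p1', 'network_id': 'n1', 'network_type': 'vxlan'},
    {'bgpvpn': {'route_targets': ['64512:1']}})
```

With this shape, an MPLS-aware agent can look up details['bgpvpn']
while agents that do not know the extension simply ignore the extra key.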

Moreover, I think this approach could be generalized: get_device_details()
should return a serialized view of the port that includes the information
from every extension (security groups, port binding, ...). Whenever the
core data model or an extension's data model is modified, the agent would
receive a port_update() carrying the updated serialization. This way we
could get rid of the security-group and l2pop RPCs, and the modular agent
would no longer need one driver per extension just to register that
extension's RPC callbacks.

This information should also be stored in the ML2 driver context. When a
port is created, the ML2 plugin calls super() to create the core data
model, which returns a dict without extension information, because the
extension attributes in the REST call have not been processed yet. Once
the plugin has processed its core extensions, it should call the
MD-registered extensions as proposed by Nader in [4], and then call
make_port_dict() (with extensions), or an equivalent serialization
function, to create the driver context. This serialization function
would be used by the get_device_details() RPC callback too.
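The plugin-side flow above can be sketched as follows. All class and
method names here are hypothetical stand-ins (FakeMl2Plugin,
process_create_port, ...), not the actual ML2 code; the point is only
to show the ordering: core model first, then registered extension
drivers, then a single serialization that seeds the driver context:

```python
class FakeExtensionDriver:
    """Stands in for one MD-registered extension driver ([4])."""

    def process_create_port(self, port_data, port_dict):
        # Copy this extension's attribute from the REST request
        # into the serialized result, if present.
        if 'route_targets' in port_data:
            port_dict['route_targets'] = port_data['route_targets']


class FakeMl2Plugin:
    def __init__(self, extension_drivers):
        self.extension_drivers = extension_drivers

    def _create_core_port(self, port_data):
        # Stands in for super().create_port(): core attributes only,
        # no extension information yet.
        return {'id': 'p1', 'name': port_data.get('name', '')}

    def make_port_dict(self, core_port, port_data):
        # Serialize core + extension data in one place. This dict
        # seeds the driver context, and the same function could back
        # the get_device_details() RPC reply.
        port_dict = dict(core_port)
        for driver in self.extension_drivers:
            driver.process_create_port(port_data, port_dict)
        return port_dict

    def create_port(self, port_data):
        core_port = self._create_core_port(port_data)
        return self.make_port_dict(core_port, port_data)


plugin = FakeMl2Plugin([FakeExtensionDriver()])
result = plugin.create_port({'name': 'vm1',
                             'route_targets': ['64512:1']})
```

Because make_port_dict() is the single serialization point, a later
port_update() only needs to re-run it and push the new dict to the agent.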

Regards,

Mathieu

[1]https://etherpad.openstack.org/p/ML2_mechanismdriver_extensions_support
[2]https://etherpad.openstack.org/p/juno-neutron-modular-l2-agent
[3]http://summit.openstack.org/cfp/details/240
[4]https://review.openstack.org/#/c/89211/

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
