<div dir="ltr"><div class="gmail_extra"><br><br><div class="gmail_quote">On 11 April 2014 19:11, Robert Kukura <span dir="ltr"><<a href="mailto:kukura@noironetworks.com" target="_blank">kukura@noironetworks.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><div class="">
<br>
<div>On 4/10/14, 6:35 AM, Salvatore Orlando
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div>The bug for documenting the 'multi-provider' API extension
is still open [1].<br>
</div>
<div>The bug report has a good deal of information, but perhaps
it might be worth also documenting how ML2 uses the segment
information, as this might be useful to understand when one
should use the 'provider' extension and when instead the
'multi-provider' would be a better fit.</div>
<div><br>
</div>
<div>Unfortunately I do not understand enough how ML2 handles
multi-segment networks, so I hope somebody from the ML2 team
can chime in.</div>
</div>
</blockquote></div>
>
> Here's a quick description of ML2 port binding, including how
> multi-segment networks are handled:
>
>    Port binding is how the ML2 plugin determines the mechanism driver
>    that handles the port, the network segment to which the port is
>    attached, and the values of the binding:vif_type and
>    binding:vif_details port attributes. Its inputs are the
>    binding:host_id and binding:profile port attributes, as well as the
>    segments of the port's network.
>
>    When port binding is triggered, each registered mechanism driver's
>    bind_port() function is called, in the order specified in the
>    mechanism_drivers config variable, until one succeeds in binding or
>    all have been tried. If none succeed, the binding:vif_type attribute
>    is set to 'binding_failed'. In bind_port(), each mechanism driver
>    checks whether it can bind the port on the binding:host_id host,
>    using any of the network's segments and honoring any requirements it
>    understands in binding:profile. If it can bind the port, the
>    mechanism driver calls PortContext.set_binding() from within
>    bind_port(), passing the chosen segment's ID, the values for
>    binding:vif_type and binding:vif_details, and, optionally, the
>    port's status.
>
>    A common base class for mechanism drivers supporting L2 agents
>    implements bind_port() by iterating over the segments and calling a
>    try_to_bind_segment_for_agent() function that decides whether the
>    port can be bound based on the agents_db info periodically reported
>    via RPC by that specific L2 agent. For the 'flat' and 'vlan' network
>    segment types, try_to_bind_segment_for_agent() checks whether the L2
>    agent on the host has a mapping from the segment's physical_network
>    value to a bridge or interface. For tunnel network segment types, it
>    checks whether the L2 agent has that tunnel type enabled.
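
To make that flow a bit more concrete, here is a rough, self-contained
Python sketch of the logic. It is illustrative only, not actual Neutron
code: PortContext here is a minimal stand-in, FakeL2AgentDriver and its
dictionary keys are invented for the example, and only bind_port(),
set_binding(), and try_to_bind_segment_for_agent() mirror the names used
in the description above.

    # Illustrative sketch only -- NOT actual Neutron code. It just models
    # the flow described above: the ML2 plugin tries each registered
    # mechanism driver in order, and a driver that can bind the port calls
    # set_binding() with the chosen segment, vif_type, and vif_details.

    class PortContext(object):
        """Minimal stand-in for ML2's PortContext (names are assumed)."""

        def __init__(self, host, segments, agents_db):
            self.host = host            # binding:host_id
            self.segments = segments    # the network's segments
            self.agents_db = agents_db  # info reported via RPC by L2 agents
            self.bound_segment = None
            self.vif_type = 'binding_failed'
            self.vif_details = {}

        def set_binding(self, segment_id, vif_type, vif_details):
            self.bound_segment = segment_id
            self.vif_type = vif_type
            self.vif_details = vif_details

    class FakeL2AgentDriver(object):
        """Models the common L2-agent base class behaviour described above."""

        vif_type = 'ovs'                # example value

        def bind_port(self, context):
            agent = context.agents_db.get(context.host)
            if not agent or not agent.get('alive'):
                return
            for segment in context.segments:
                if self.try_to_bind_segment_for_agent(context, segment, agent):
                    return

        def try_to_bind_segment_for_agent(self, context, segment, agent):
            mappings = agent.get('bridge_mappings', {})
            tunnel_types = agent.get('tunnel_types', [])
            if segment['network_type'] in ('flat', 'vlan'):
                # flat/vlan: the agent must map this physical_network to a
                # bridge or interface.
                ok = segment.get('physical_network') in mappings
            else:
                # tunnel types: the agent must have this tunnel type enabled.
                ok = segment['network_type'] in tunnel_types
            if ok:
                context.set_binding(segment['id'], self.vif_type,
                                    {'port_filter': True})
            return ok

    def bind(context, mechanism_drivers):
        """Try drivers in mechanism_drivers order until one binds the port."""
        for driver in mechanism_drivers:
            driver.bind_port(context)
            if context.bound_segment is not None:
                return
        # No driver succeeded; binding:vif_type stays 'binding_failed'.

    # Example: an L2 agent on 'compute1' with a physnet1 mapping binds the
    # vlan segment.
    ctx = PortContext(
        host='compute1',
        segments=[{'id': 'seg-1', 'network_type': 'vlan',
                   'physical_network': 'physnet1', 'segmentation_id': 101}],
        agents_db={'compute1': {'alive': True,
                                'bridge_mappings': {'physnet1': 'br-eth1'}}})
    bind(ctx, [FakeL2AgentDriver()])
    assert ctx.bound_segment == 'seg-1' and ctx.vif_type == 'ovs'

With a multi-segment network, the segments list simply has more than one
entry, and the driver binds to the first segment the agent can actually
reach.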

> Note that, although ML2 can manage binding to multi-segment networks,
> neutron does not manage bridging between the segments of a multi-segment
> network. This is assumed to be done administratively.

Thanks Bob. I think your description above is the answer I was looking for.
<div bgcolor="#FFFFFF" text="#000000">
<br>
Finally, at least in ML2, the providernet and multiprovidernet
extensions are two different APIs to supply/view the same underlying
information. The older providernet extension can only deal with
single-segment networks, but is easier to use. The newer
multiprovidernet extension handles multi-segment networks and
potentially supports an extensible set of a segment properties, but
is more cumbersome to use, at least from the CLI. Either extension
can be used to create single-segment networks with ML2. Currently,
ML2 network operations return only the providernet attributes
(provider:network_type, provider:physical_network, and
provider:segmentation_id) for single-segment networks, and only the
multiprovidernet attribute (segments) for multi-segment networks. It
could be argued that all attributes should be returned from all
operations, with a provider:network_type value of 'multi-segment'
returned when the network has multiple segments. A blueprint in the
works for juno that lets each ML2 type driver define whatever
segment properties make sense for that type may lead to eventual
deprecation of the providernet extension.<br>
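
As a reference point for the difference between the two extensions, here
is roughly what the two request bodies look like when creating networks
through the API. This is only an illustrative sketch based on the
attribute names above; the network names, types, physical networks, and
segmentation IDs are made-up example values.

    # Single-segment network via the older providernet extension
    # (illustrative values only).
    single_segment_body = {
        "network": {
            "name": "net1",
            "provider:network_type": "vlan",
            "provider:physical_network": "physnet1",
            "provider:segmentation_id": 101,
        }
    }

    # The same kind of information expressed through the multiprovidernet
    # extension's 'segments' attribute, here for a two-segment network.
    multi_segment_body = {
        "network": {
            "name": "net2",
            "segments": [
                {"provider:network_type": "vlan",
                 "provider:physical_network": "physnet1",
                 "provider:segmentation_id": 102},
                {"provider:network_type": "gre",
                 "provider:segmentation_id": 5000},
            ],
        }
    }

Per the paragraph above, a subsequent GET on the first network would
return the three provider:* attributes, while a GET on the second would
return only the segments attribute.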

> Hope this helps,
>
> -Bob
>
>> Salvatore
>>
>> [1] https://bugs.launchpad.net/openstack-api-site/+bug/1242019

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev