[Openstack-operators] [neutron] multiple external networks on the same host NIC

Uwe Sauter uwe.sauter.de at gmail.com
Tue Apr 28 06:04:04 UTC 2015


Adam,

depending on your current setup and what you are trying to do, there are different possibilities.

The easiest would be if you want transparent VLANs, meaning that neither Neutron nor your VM guests know about VLANs. Then you would have one bridge (earlier: br-join) where all the tagging would take place. The external interface would be configured as a trunk while each connecting interface is tagged with the one VLAN ID for its network (from Neutron's view still "outside").
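A minimal sketch of that transparent setup with Open vSwitch (eth1, br-join, ext-net1/ext-net2 and the VLAN IDs are placeholder names and values, not anything from your environment):

```shell
# Bridge that does all the tagging; the physical NIC is added without
# a tag, so it acts as a trunk port carrying all VLANs.
ovs-vsctl add-br br-join
ovs-vsctl add-port br-join eth1

# One internal access port per external network, tagged with that
# network's VLAN ID. Neutron and the guests only ever see untagged
# traffic on ext-net1/ext-net2; br-join adds/strips the tags.
ovs-vsctl add-port br-join ext-net1 tag=101 -- set interface ext-net1 type=internal
ovs-vsctl add-port br-join ext-net2 tag=102 -- set interface ext-net2 type=internal
```

These are config fragments against a running OVS, so treat them as an outline rather than a script.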

If you want Neutron to manage VLANs, then I'd have to think a bit more about the setup. But in this case, a bit more information about your setup would help, too.
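For reference, the Neutron-managed variant would roughly look like this (a sketch assuming the ML2 plugin with the OVS agent; the physnet name, bridge name and VLAN range are made-up examples):

```shell
# /etc/neutron/plugins/ml2/ml2_conf.ini (excerpt)
# [ml2]
# type_drivers = flat,vlan
#
# [ml2_type_vlan]
# network_vlan_ranges = physext:100:199
#
# [ovs]
# bridge_mappings = physext:br-ex

# With that in place, Neutron tags each external network itself:
neutron net-create ext-net1 --router:external \
    --provider:network_type vlan \
    --provider:physical_network physext \
    --provider:segmentation_id 101
```

Again just a sketch, not a drop-in config.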


Regards,

    Uwe

Am 28. April 2015 04:44:33 MESZ, schrieb Adam Lawson <alawson at aqorn.com>:
>So quickly since I'm working on a similar use case:
>
>What are the requirements to implement multiple external networks on the
>same NIC if we *can* use VLAN tags? Is it as simple as adding the external
>network to Neutron the same way we did with the existing external network
>and trunk that subnet via VLAN #nnn? Are there any special Neutron
>handlers for traffic on one VLAN versus another?
>
>
>*Adam Lawson*
>
>AQORN, Inc.
>427 North Tatnall Street
>Ste. 58461
>Wilmington, Delaware 19801-2230
>Toll-free: (844) 4-AQORN-NOW ext. 101
>International: +1 302-387-4660
>Direct: +1 916-246-2072
>
>
>On Mon, Apr 27, 2015 at 10:22 AM, Uwe Sauter <uwe.sauter.de at gmail.com>
>wrote:
>
>>
>> >>
>> >>
>> >> if I understood George's answer correctly he suggested one bridge
>> >> (br-join, either OVS or linux bridge) to connect other bridges
>> >> via patch links, one for each external network you'd like to create.
>> >> These second level bridges are then used for the Neutron
>> >> configuration:
>> >>
>> >>                 br-ext1 -> Neutron
>> >>                /
>> >>             patch-link
>> >>              /
>> >> ethX -- br-join
>> >>              \
>> >>             patch-link
>> >>                \
>> >>                 br-ext2 -> Neutron
>> >>
>> >>
>> >>
>> >> I suggested using an OVS bridge because there it'd be possible to
>> >> stay away from the performance-wise worse patch-links and Linux
>> >> bridges and use "internal" interfaces to connect to Neutron directly
>> >> – which on second thought won't work if Neutron expects a
>> >> bridge in that place.
>> >>
>> >> What I suggested later on is that you probably don't need any second
>> >> level bridge at all. Just create a second/third external
>> >> network with appropriate CIDR. As long as those networks are
>> >> externally connected to your interface (and thus the bridge) you
>> >> should be good to go.
>> >
>> > In parallel emails we have established that I have to do what you have
>> > drawn.  I need to do that on the node(s) that run L3
>> > agents.  Do I need to modify the bridge_mappings, flat_networks, or
>> > network_vlan_ranges configuration statements on the
>> > other nodes (compute hosts)?
>> >
>> > Thanks,
>> > Mike
>> >
>>
>> I think you just need to create the cascading bridges with their
>> inter-connects, then tell Neutron the association
>> between secondary bridge (e.g. br-ext1, br-ext2) and external network.
>> Then create (!) the external networks and restart
>> Neutron.
>>
>> Concerning your intra-cloud networking I don't think you need to
>> reconfigure anything as long as this is already working.
>> Compute hosts shouldn't be affected as it's not their business to know
>> about external networks.
>>
>>
>> Regards,
>>
>>         Uwe
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>>
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
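
P.S. The cascading-bridge setup drawn above might be created roughly like this (a sketch with OVS; ethX and all bridge/port names are placeholders):

```shell
# Joining bridge with the physical NIC
ovs-vsctl add-br br-join
ovs-vsctl add-port br-join ethX

# Secondary bridge br-ext1, connected to br-join via a patch-link
ovs-vsctl add-br br-ext1
ovs-vsctl add-port br-join join-to-ext1 \
    -- set interface join-to-ext1 type=patch options:peer=ext1-to-join
ovs-vsctl add-port br-ext1 ext1-to-join \
    -- set interface ext1-to-join type=patch options:peer=join-to-ext1

# Repeat for br-ext2, then associate the secondary bridges in the
# agent config, e.g.  bridge_mappings = ext1:br-ext1,ext2:br-ext2
```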

-- 
This message was sent from my Android mobile phone with K-9 Mail.

