[openstack-dev] [neutron] [linuxbridge] Multiple VXLAN multicast groups

Jiří Kotlín jiri.kotlin at ultimum.io
Mon Jun 6 11:24:30 UTC 2016


Hi,

unfortunately, the straightforward mapping is not usable for us; we need our
own distribution of addresses according to the VNI.

For example: two data centers connected with VLANs, both with a Cisco ASR-9K
at L3.

The routers have their own VNI-to-multicast-group mappings:

Router(config-if)# member vni 6010-6030 multicast-group 225.1.1.1

So we can create a network with a VNI, e.g. 6011, and it will get the right
multicast address.

We also need to create networks on demand, without reconfiguring the routers
on each change.
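
To illustrate, here is a hypothetical sketch of the kind of range-based
mapping we have in mind, mirroring the router configuration above (this is
not existing Neutron code; the table contents are invented):

import netaddr

# Hypothetical range-based VNI-to-group table, mirroring e.g.
#   member vni 6010-6030 multicast-group 225.1.1.1
# on the router.
VNI_GROUP_RANGES = [
    ((6010, 6030), netaddr.IPAddress('225.1.1.1')),
    ((6031, 6050), netaddr.IPAddress('225.1.1.2')),
]

def group_for_vni(vni):
    """Return the multicast group configured for this VNI's range."""
    for (low, high), group in VNI_GROUP_RANGES:
        if low <= vni <= high:
            return str(group)
    raise ValueError('no multicast group configured for VNI %d' % vni)

# group_for_vni(6011) -> '225.1.1.1', matching the router's mapping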

Jiří Kotlín
Developer

Ultimum Technologies s.r.o.
Na Poříčí 1047/26, 11000 Praha 1
Czech Republic

+420 602 288 358
jiri.kotlin at ultimum.io
https://ultimum.io


2016-06-06 11:21 GMT+02:00 Kevin Benton <kevin at benton.pub>:

> Just to be clear, it's not random. It follows a masking pattern, so it is
> possible to know which address a given VNI will use. And if you use a /8
> prefix, the VNIs will have a straightforward 1:1 mapping to multicast
> addresses.
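>
> A minimal sketch of that masking scheme (an illustration, not necessarily
> the exact agent code): the low bits of the VNI select an address inside
> the configured prefix, so the mapping is deterministic.
>
> import netaddr
>
> def vxlan_group_for_vni(vni, group_cidr):
>     # The VNI's low bits index into the group prefix; a /8 leaves
>     # 24 host bits, matching the 24-bit VNI space, hence the
>     # straightforward 1:1 mapping.
>     net = netaddr.IPNetwork(group_cidr)
>     return str(net.network + (vni & (net.size - 1)))
>
> # vxlan_group_for_vni(6011, '239.0.0.0/8') -> '239.0.23.123'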
> On Jun 6, 2016 01:35, "Jiří Kotlín" <jiri.kotlin at ultimum.io> wrote:
>
>> Hi,
>>
>> yes, sorry - I was not specific enough, and the RFE should really be
>> reworded.
>>
>> Our goal is to have the ability to control the VNI-to-multicast-address
>> distribution in some explicit way, not randomly.
>>
>> Considering that support for multiple addresses is already implemented in
>> the Linux bridge agent, I suppose implementing this feature should not
>> cause any problems.
>>
>> Thanks a lot for the hint and the reply; I have tested the CIDR feature
>> but forgot to mention it in the RFE.
>>
>> 2016-06-06 9:36 GMT+02:00 Kevin Benton <kevin at benton.pub>:
>>
>>> The Linux bridge agent does support using multiple VXLAN groups. You can
>>> specify a prefix for 'vxlan_group', and the VNIs will be spread across the
>>> multicast addresses in that prefix.[1]
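>>>
>>> For example (a sketch; assuming the [vxlan] section of the Linux bridge
>>> agent configuration, with a CIDR value as described above):
>>>
>>> [vxlan]
>>> enable_vxlan = true
>>> # A CIDR here, instead of a single address, spreads the VNIs
>>> # across the whole prefix:
>>> vxlan_group = 239.1.0.0/16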
>>>
>>> The only difference between that and what your RFE proposes is explicit
>>> control over which multicast address is associated with each VNI. If that
>>> is a hard requirement, the RFE needs to be reworded to say so, since that
>>> is the only gap between your proposal and what Linux Bridge has now.
>>>
>>>
>>> 1.
>>> https://github.com/openstack/neutron/blob/d8ae9cf4755416ca65108112a60e8b2e67607daf/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py#L34-L42
>>>
>>> On Mon, Jun 6, 2016 at 12:06 AM, Jiří Kotlín <jiri.kotlin at ultimum.io>
>>> wrote:
>>>
>>>> Hi linuxbridge experts,
>>>>
>>>> the ability to define multiple VXLAN groups can be very useful in
>>>> practice. Is there a design rationale for why vxlan_group was made a
>>>> single attribute?
>>>>
>>>> More info is in this RFE:
>>>> https://bugs.launchpad.net/bugs/1579068
>>>>
>>>> Thank you in advance for any help you can provide.