[openstack-dev] [neutron] Re: [Blueprint vlan-aware-vms] VLAN aware VMs

Erik Moe emoe264 at gmail.com
Wed Jan 8 08:58:33 UTC 2014


I feel that we are getting quite far away from supporting my use case. The
use case: a VM wants to connect to several 'normal' Neutron networks from
one vNIC. VLANs are proposed in the blueprint because they are a common way
to separate 'networks' on a single link. They are just a means of connecting
to different Neutron networks; they place no requirements on the method used
for tenant separation inside Neutron. The ability for the user to specify
the VID is there because, for this use case, the service would be used by
ordinary tenants, and we would preferably not expose Neutron internals
(which might not use VLANs at all for tenant separation). Also, several VMs
could specify the same VID for connecting to different Neutron networks, to
avoid dependencies between tenants.
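
To make the intent concrete, here is a minimal sketch of what the
user-facing request could look like. It is purely illustrative: the
'trunk_mappings' attribute and the UUIDs are invented for the example and
are not part of Neutron today.

    # neutron_client: an authenticated neutronclient.v2_0.client.Client.
    # Hypothetical port-create request: the tenant picks VIDs on its vNIC
    # and maps each one to an ordinary Neutron network. The VIDs are local
    # to the port, so another VM may reuse the same VIDs for different
    # networks. 'trunk_mappings' is an invented attribute.
    body = {
        'port': {
            'network_id': 'NET-UNTAGGED-UUID',   # untagged traffic
            'trunk_mappings': {
                100: 'NET-A-UUID',               # VID 100 -> network A
                200: 'NET-B-UUID',               # VID 200 -> network B
            },
        }
    }
    port = neutron_client.create_port(body)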

We would like to have this functionality close to the vNIC, without
requiring an extra 'hop' in the network, for reasons of latency, throughput,
and fault management. The strange-looking optimizations are there because of
this.

Also, for this use case, the APIs could be cleaner from a user perspective.

Maybe we should break out this use case from the L2-gateway?

/Erik



On Mon, Dec 23, 2013 at 10:09 PM, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:

> I think we have two different cases here - one where a 'trunk' network
> passes all VLANs, which is potentially supportable by anything that's not
> based on VLANs for separation, and one where a trunk can't feasibly do that
> but where we could make it pass a restricted set of VLANs by mapping.
>
> In the former case, obviously we need no special awareness of the nature
> of the network to implement an L2 gateway.
>
> In the latter case, we're looking at a specialisation of networks, one
> where you would first create them with a set of VLANs you wanted to pass
> (and - presumably - the driver would say 'ah, I must allocate multiple
> VLANs to this network rather than just one').  You've jumped in with two
> optimisations on top of that:
>
> - we can precalculate the VLANs the network needs to pass in some cases,
> because it's the union of the VLANs that the L2 gateways on that network
> know about (sketched below)
> - we can use L2 gateways to make the mapping from 'tenant' VLANs to
> 'overlay' VLANs
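
For illustration, a minimal sketch of the first optimisation, assuming each
L2 gateway object knows the set of VLANs it bridges (all names here are
invented):

    # The set of VLANs a trunk network must pass is the union of the
    # VLANs known to the L2 gateways attached to that network.
    def vlans_to_pass(l2_gateways):
        required = set()
        for gw in l2_gateways:
            required |= set(gw.known_vlans)   # hypothetical attribute
        return required
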
>
> They're good ideas, but they impose limitations on trunk networks that
> aren't actually necessary in a number of solutions.
>
> I wonder if we should try the general case first with e.g. a
> Linuxbridge/GRE based infrastructure, and then add the optimisations
> afterwards.  If I were going to do that optimisation I'd start with the
> capability mechanism and add the ability to let the tenant specify the
> specific VLAN tags which must be passed (as you normally would on a
> physical switch). I'd then have two port types - a user-facing one that
> ensures the entry and exit mapping is made on the port, and an
> administrative one which exposes that mapping internally and lets the
> client code (e.g. the L2 gateway) do the mapping itself.  But I think it
> would be complicated, and may even have more complexity than is
> immediately apparent (e.g. we're effectively allocating a cluster-wide
> network to get backbone segmentation IDs for each VLAN we pass, which is
> new and different), hence my thought that we should start with the easy
> case first, just to have something working, and see how the tenant API feels.  We
> could do this with a basic bit of gateway code running on a system using
> Linuxbridge + GRE, I think - the key seems to be avoiding VLANs in the
> overlay and then the problem is drastically simplified.
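
To illustrate the two port types, a rough sketch of the entry/exit mapping
(assuming one backbone segmentation ID is allocated per passed VLAN; every
name here is invented):

    # User-facing port: frames enter tagged with a tenant VID and are
    # carried on the matching backbone segment; the reverse mapping is
    # applied on the way out.
    class UserTrunkPort(object):
        def __init__(self, vid_to_segment):
            self.vid_to_segment = vid_to_segment
            self.segment_to_vid = dict(
                (seg, vid) for vid, seg in vid_to_segment.items())

        def ingress(self, vid):
            return self.vid_to_segment[vid]      # map on entry

        def egress(self, segment):
            return self.segment_to_vid[segment]  # unmap on exit

    # Administrative port: no mapping is applied; client code (e.g. an
    # L2 gateway) sees the raw mapping and applies it itself.
    class AdminTrunkPort(object):
        def __init__(self, vid_to_segment):
            self.vid_to_segment = vid_to_segment  # exposed, not applied
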
> --
> Ian.
>
>
> On 21 December 2013 23:00, Erik Moe <emoe264 at gmail.com> wrote:
>
>> Hi Ian,
>>
>> I think your VLAN trunking capability proposal could be a good thing, so
>> that the user can request a Neutron network that can trunk VLANs without
>> caring about detailed information regarding which VLANs to pass. This
>> could be used for cases where the user wants to pass VLANs between
>> endpoints on an L2 network, etc.
>>
>> For the use case where a VM wants to connect to several "normal" Neutron
>> networks using VLANs, I would prefer a solution that does not require a
>> Neutron trunk network, possibly by connecting an L2 gateway directly to
>> the Neutron vNIC port, or some other solution. IMHO it would be good to
>> map each VLAN to its Neutron network as early as possible.
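
As a rough sketch of what mapping "as early as possible" could mean at the
vNIC (every name here is invented for illustration):

    # Each tagged frame is steered straight onto the Neutron network
    # registered for its VID; no trunk network is ever created.
    def demux_frame(frame, vid_to_network):
        vid = frame.vlan_id                 # hypothetical attribute
        network = vid_to_network[vid]
        network.deliver(frame.untagged())   # strip the tag on entry
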
>>
>> Thanks,
>> Erik
>>
>>
>>
>> On Thu, Dec 19, 2013 at 2:15 PM, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
>>
>>> On 19 December 2013 06:35, Isaku Yamahata <isaku.yamahata at gmail.com> wrote:
>>>
>>>>
>>>> Hi Ian.
>>>>
>>>> I can't see your proposal. Can you please make it public viewable?
>>>>
>>>
>>> Crap, sorry - fixed.
>>>
>>>
>>>> > Even before I read the document I could list three use cases.  Erik's
>>>> > covered some of them himself.
>>>>
>>>> I'm not against trunking.
>>>> I'm trying to understand which requirements need a "trunk network", as
>>>> shown in figure 1, in addition to an "L2 gateway" directly connected to
>>>> the VM via a "trunk port".
>>>>
>>>
>>> No problem, just putting the information there for you.
>>>
>>> --
>>> Ian.
>>>