[openstack-dev] [neutron] Vlan aware VMs or trunking
Kevin Benton
kevin at benton.pub
Wed Dec 7 17:34:52 UTC 2016
>It only works when the whole switch is dedicated to a single customer; it
will not work when several customers share the same switch.
Do you know which vendors have this limitation? I know the Broadcom chipsets
didn't prevent this (we allowed VLAN rewrites scoped to ports at Big
Switch). If it's common on Cisco/Juniper then I guess we are stuck
reflecting bad hardware in the API. :(
On Wed, Dec 7, 2016 at 9:22 AM, Vasyl Saienko <vsaienko at mirantis.com> wrote:
>
>
> On Wed, Dec 7, 2016 at 7:12 PM, Kevin Benton <kevin at benton.pub> wrote:
>
>>
>>
>> On Wed, Dec 7, 2016 at 8:47 AM, Vasyl Saienko <vsaienko at mirantis.com>
>> wrote:
>>
>>> @Armando: IMO the spec [0] is not about enabling trunks for baremetal.
>>> This spec is rather about trying to make a user request with any network
>>> configuration (any number of requested NICs) deployable on ANY Ironic
>>> node (even when the number of hardware interfaces is less than the number
>>> of networks requested for the instance) by implicitly creating Neutron
>>> trunks on the fly.
>>>
>>> I have concerns about it and left a comment [1]. The guaranteed number of
>>> NICs on a hardware server should be available to the user via the Nova
>>> flavor information. The user should decide on their own whether they need
>>> a trunk, as their image may not even support trunking. I suggest that
>>> creating trunks implicitly (without the user's knowledge) shouldn't happen.
>>>
>>> The current trunks implementation in Neutron will work just fine for the
>>> baremetal case with one small addition:
>>>
>>> 1. segmentation_type and segmentation_id should not be mandatory API
>>> fields, at least for the case where the provider segmentation is VLAN.
>>>
>>> 2. The user still needs to know which segmentation_id was picked in order
>>> to configure it on the instance side. (Not sure if this is done
>>> automatically via network metadata at the moment.) So it should be
>>> inherited from the network's provider:segmentation_id and visible to the
>>> user. (A minimal sketch of the calls involved follows below.)
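For illustration, a minimal sketch of what points 1 and 2 mean against
today's trunk API, assuming the standard Neutron /v2.0 trunk endpoints; the
endpoint, token and UUIDs are placeholders, not values from this thread:

    import requests

    NEUTRON_URL = "http://controller:9696/v2.0"     # placeholder endpoint
    HEADERS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}
    TRUNK_ID = "trunk-uuid"
    SUBPORT_ID = "subport-uuid"

    # Point 1: today the add_subports call must carry both segmentation fields.
    requests.put(NEUTRON_URL + "/trunks/" + TRUNK_ID + "/add_subports",
                 headers=HEADERS,
                 json={"sub_ports": [{"port_id": SUBPORT_ID,
                                      "segmentation_type": "vlan",
                                      "segmentation_id": 300}]})

    # Point 2: the user can read back which segmentation_id the subport
    # carries by showing the trunk.
    trunk = requests.get(NEUTRON_URL + "/trunks/" + TRUNK_ID,
                         headers=HEADERS).json()["trunk"]
    for sp in trunk["sub_ports"]:
        print(sp["port_id"], sp["segmentation_type"], sp["segmentation_id"])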
>>>
>>>
>>> @Kevin: Having VLAN mapping support on the switch will not solve the
>>> problem described in scenario 3, where multiple users pick the same
>>> segmentation_id for different networks and their instances are spawned
>>> onto baremetal nodes attached to the same switch.
>>>
>>> I don't see any option other than to enforce uniqueness of
>>> segmentation_id on the Neutron side for the baremetal case.
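As a rough illustration of what such a check implies, a hypothetical
validator sketch follows; existing_subports_on_switch() and the switch
identifier are made up for the example and do not exist in Neutron:

    # Hypothetical sketch of the proposed per-switch uniqueness check; this
    # is not existing Neutron code.
    def validate_subport_on_switch(switch_id, new_segmentation_id,
                                   existing_subports_on_switch):
        """Reject a subport whose VLAN is already in use on this ToR switch.

        existing_subports_on_switch(switch_id) stands in for whatever lookup
        (e.g. via ML2 binding details) would list the subports already bound
        to ports on the given switch.
        """
        used = {sp["segmentation_id"]
                for sp in existing_subports_on_switch(switch_id)}
        if new_segmentation_id in used:
            raise ValueError("segmentation_id %d already in use on switch %s"
                             % (new_segmentation_id, switch_id))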
>>>
>>
>> Well, unless there is a limitation in the switch hardware, VLAN mapping is
>> scoped to each individual port, so users can pick the same local
>> segmentation_id. The point of the feature on switches is for when you have
>> customers who specify their own VLANs and you need to map them to service
>> provider VLANs (i.e. what is happening here).
>>
>
> It only works when the whole switch is dedicated to a single customer; it
> will not work when several customers share the same switch.
>
>
>>
>>
>>>
>>> Reference:
>>>
>>> [0] https://review.openstack.org/#/c/277853/
>>> [1] https://review.openstack.org/#/c/277853/10/specs/approved/VLAN-aware-baremetal-instances.rst@35
>>>
>>> On Wed, Dec 7, 2016 at 6:14 PM, Kevin Benton <kevin at benton.pub> wrote:
>>>
>>>> Just to be clear, in this case the switches don't support VLAN
>>>> translation (e.g. [1])? Because that also solves the problem you are
>>>> running into. This is the preferable path for bare metal because it avoids
>>>> exposing provider details to users and doesn't tie you to VLANs on the
>>>> backend.
>>>>
>>>> 1. http://ipcisco.com/vlan-mapping-vlan-translation-%E2%80%93-part-2/
>>>>
>>>> On Wed, Dec 7, 2016 at 7:49 AM, Armando M. <armamig at gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 7 December 2016 at 04:02, Vasyl Saienko <vsaienko at mirantis.com>
>>>>> wrote:
>>>>>
>>>>>> Armando, Kevin,
>>>>>>
>>>>>> Thanks for your comments.
>>>>>>
>>>>>> To be clearer: we are trying to use the Neutron trunks implementation
>>>>>> with baremetal servers (Ironic). Baremetal servers are plugged into a
>>>>>> ToR (Top of Rack) switch. User images are spawned directly onto
>>>>>> hardware.
>>>>>> Ironic uses Neutron ML2 drivers to plug baremetal servers into Neutron
>>>>>> networks (effectively changing the VLAN on the switch port to the
>>>>>> segmentation_id of the Neutron network; scenario 1 in the attachment).
>>>>>> Ironic works with VLAN segmentation only for now, but some vendor ML2
>>>>>> drivers like Arista allow using VXLAN (in this case a VXLAN-to-VLAN
>>>>>> mapping is created on the switch). Different users may have baremetal
>>>>>> servers connected to the same ToR switch.
>>>>>>
>>>>>> Trying to apply the current Neutron trunking model leads to the
>>>>>> following problems:
>>>>>>
>>>>>> *Scenario 2: single-user scenario, creating VMs with trunk and
>>>>>> non-trunk ports.*
>>>>>>
>>>>>> - User creates two networks:
>>>>>> net-1: (provider:segmentation_id: 100)
>>>>>> net-2: (provider:segmentation_id: 101)
>>>>>>
>>>>>> - User creates one trunk:
>>>>>> port0 - parent port in net-1
>>>>>> port1 - subport in net-2 with user-defined segmentation_id: 300
>>>>>>
>>>>>> - User boots VMs:
>>>>>> BM1: with trunk (connected to ToR Fa0/1)
>>>>>> BM4: in net-2 (connected to ToR Fa0/4)
>>>>>>
>>>>>> - VLANs on the switch are configured as follows:
>>>>>> Fa0/1 - trunk, native 100, allowed vlan 300
>>>>>> Fa0/4 - access vlan 101
>>>>>>
>>>>>> *Problem:* BM1 has no access to BM4 on net-2
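Roughly, the calls behind scenario 2 look like the sketch below (standard
network/port/trunk endpoints assumed; the endpoint, token and names are
placeholders). It shows where the mismatch comes from: the user picks 300
for the subport, while the switch access port of BM4 uses the provider VLAN
of net-2 (101):

    import requests

    NEUTRON = "http://controller:9696/v2.0"          # placeholder endpoint
    HDRS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}

    def post(path, body):
        return requests.post(NEUTRON + path, headers=HDRS, json=body).json()

    # Suppose Neutron assigns provider VLANs 100 and 101, as in the scenario.
    net1 = post("/networks", {"network": {"name": "net-1"}})["network"]
    net2 = post("/networks", {"network": {"name": "net-2"}})["network"]

    port0 = post("/ports", {"port": {"network_id": net1["id"]}})["port"]  # parent
    port1 = post("/ports", {"port": {"network_id": net2["id"]}})["port"]  # subport

    # The trunk carries the user-chosen VLAN 300 for the subport, even though
    # net-2 is provider VLAN 101 on the access port of BM4.
    post("/trunks", {"trunk": {"name": "bm1-trunk",
                               "port_id": port0["id"],
                               "sub_ports": [{"port_id": port1["id"],
                                              "segmentation_type": "vlan",
                                              "segmentation_id": 300}]}})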
>>>>>>
>>>>>>
>>>>>> *Scenario 3: multi-user scenario, creating VMs with trunks.*
>>>>>>
>>>>>> - User1 creates two networks:
>>>>>> net-1: (provider:segmentation_id: 100)
>>>>>> net-2: (provider:segmentation_id: 101)
>>>>>>
>>>>>> - User2 creates two networks:
>>>>>> net-3: (provider:segmentation_id: 200)
>>>>>> net-4: (provider:segmentation_id: 201)
>>>>>>
>>>>>> - User1 creates one trunk:
>>>>>> port0 - parent port in net-1
>>>>>> port1 - subport in net-2 with user-defined segmentation_id: 300
>>>>>>
>>>>>> - User2 creates one trunk:
>>>>>> port0 - parent port in net-3
>>>>>> port1 - subport in net-4 with user-defined segmentation_id: 300
>>>>>>
>>>>>> - User1 boots a VM:
>>>>>> BM1: with trunk (connected to ToR Fa0/1)
>>>>>>
>>>>>> - User2 boots a VM:
>>>>>> BM4: with trunk (connected to ToR Fa0/4)
>>>>>>
>>>>>> - VLANs on the switch are configured as follows:
>>>>>> Fa0/1 - trunk, native 100, allowed vlan 300
>>>>>> Fa0/4 - trunk, native 200, allowed vlan 300
>>>>>>
>>>>>> *Problem:* User1's BM1 has access to User2's BM4. There is a conflict
>>>>>> in the VLAN mapping: provider VLAN 101 should be mapped to user VLAN
>>>>>> 300, and provider VLAN 201 should also be mapped to VLAN 300.
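Spelling out the scenario 3 conflict as data (illustrative only, not code
from any driver): the ToR would need two contradictory meanings for customer
VLAN 300 unless the translation is per-port:

    # Scenario 3 as configured on the ToR when there is no per-port VLAN
    # translation (illustrative data only).
    tor_ports = {
        "Fa0/1": {"native_vlan": 100, "allowed_vlans": {300}},  # User1 trunk
        "Fa0/4": {"native_vlan": 200, "allowed_vlans": {300}},  # User2 trunk
    }

    # The intended translation is contradictory if it must be switch-wide:
    # user VLAN 300 -> provider VLAN 101 for User1, but 300 -> 201 for User2.
    # Without per-port translation, tag 300 becomes one switch-wide broadcast
    # domain, so User1's BM1 and User2's BM4 can reach each other.
    intended_mapping = {"Fa0/1": {300: 101}, "Fa0/4": {300: 201}}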
>>>>>>
>>>>>>
>>>>>> Making segmentation_id on a trunk subport optional and inheriting it
>>>>>> from the subport network's segmentation_id solves such problems.
>>>>>> According to the original spec, both segmentation_type and
>>>>>> segmentation_id are optional [0].
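For what it's worth, a hypothetical sketch of what the proposed request
shape would be; this is not valid against the current API, where both
segmentation fields are mandatory on subports, and the endpoint, token and
UUIDs are placeholders:

    import requests

    NEUTRON = "http://controller:9696/v2.0"          # placeholder endpoint
    HDRS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}

    # Hypothetical request under the proposal: the segmentation fields are
    # simply omitted.
    requests.put(NEUTRON + "/trunks/trunk-uuid/add_subports",
                 headers=HDRS,
                 json={"sub_ports": [{"port_id": "subport-uuid"}]})
    # The VLAN would be inherited from the subport network's
    # provider:segmentation_id and reported back on the trunk.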
>>>>>>
>>>>>> Does Neutron/Nova place information about the user's VLAN onto the
>>>>>> instance via network metadata?
>>>>>>
>>>>>> Reference:
>>>>>> [0] https://review.openstack.org/#/c/308521/1/specs/newton/vlan-aware-vms.rst@118
>>>>>>
>>>>>
>>>>> Ah, I was actually going to add the following:
>>>>>
>>>>> Whether segmentation type and segmentation ID are mandatory or not
>>>>> depends on the driver in charge of the trunk. This is because for use
>>>>> cases like Ironic, as you point out, these details may be inferred from
>>>>> the underlying network.
>>>>>
>>>>> However, we have not tackled the Ironic use case just yet, for the main
>>>>> reason that the Ironic spec [1] is still WIP. So as far as Newton is
>>>>> concerned, Ironic is not on the list of supported use cases for
>>>>> vlan-aware-vms yet [2]. The reason we have not tackled it yet is the
>>>>> 'nuisance' that the specific driver is known to the trunk plugin only at
>>>>> the time a parent port is bound, and we hadn't come up with a clean and
>>>>> elegant way to develop a validator that takes this into account. I'll
>>>>> file a bug report to make sure this won't fall through the cracks. It'll
>>>>> be tagged with 'trunk'.
>>>>>
>>>>> [1] https://review.openstack.org/#/c/277853/
>>>>> [2] https://github.com/openstack/neutron/blob/master/neutron/services/trunk/rules.py#L215
>>>>>
>>>>> Cheers,
>>>>> Armando
>>>>>
>>>>>
>>>>>>
>>>>>> Thanks in advance,
>>>>>> Vasyl Saienko
>>>>>>
>>>>>> On Tue, Dec 6, 2016 at 7:08 PM, Armando M. <armamig at gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 6 December 2016 at 08:49, Vasyl Saienko <vsaienko at mirantis.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hello Neutron Community,
>>>>>>>>
>>>>>>>>
>>>>>>>> I've found that the nice feature vlan-aware-vms was implemented in
>>>>>>>> Newton [0].
>>>>>>>> However, the usage of this feature by regular users is impossible,
>>>>>>>> unless I'm missing something.
>>>>>>>>
>>>>>>>> If I understand correctly, it should work in the following way:
>>>>>>>>
>>>>>>>> 1. It is possible to group Neutron ports into trunks.
>>>>>>>> 2. When a trunk is created, a parent port must be defined:
>>>>>>>>    Only one port can be the parent.
>>>>>>>>    The segmentation of the parent port is set as the native
>>>>>>>>    (untagged) VLAN on the trunk.
>>>>>>>> 3. Other ports may be added as subports to an existing trunk.
>>>>>>>>    When a subport is added to a trunk, *segmentation_type* and
>>>>>>>>    *segmentation_id* must be specified.
>>>>>>>>    The segmentation_id of the subport is set as an allowed VLAN on
>>>>>>>>    the trunk. (A rough sketch of these calls follows below.)
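As a rough sketch, steps 2 and 3 map onto calls like the following,
assuming the /v2.0 trunk endpoints; the endpoint, token and UUIDs are
placeholders:

    import requests

    NEUTRON = "http://controller:9696/v2.0"          # placeholder endpoint
    HDRS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}

    # Step 2: create the trunk around an existing parent port.
    trunk = requests.post(NEUTRON + "/trunks", headers=HDRS,
                          json={"trunk": {"name": "trunk0",
                                          "port_id": "parent-port-uuid"}}
                          ).json()["trunk"]

    # Step 3: add another port as a subport; segmentation_type and
    # segmentation_id are mandatory here today.
    requests.put(NEUTRON + "/trunks/" + trunk["id"] + "/add_subports",
                 headers=HDRS,
                 json={"sub_ports": [{"port_id": "subport-port-uuid",
                                      "segmentation_type": "vlan",
                                      "segmentation_id": 300}]})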
>>>>>>>>
>>>>>>>> A non-admin user does not know anything about *segmentation_type*
>>>>>>>> and *segmentation_id*.
>>>>>>>>
>>>>>>>
>>>>>>> Segmentation type and ID are used to multiplex/demultiplex traffic
>>>>>>> in/out of the guest associated with a particular trunk. Aside from the
>>>>>>> fact that the only supported type is VLAN at the moment (if ever), the
>>>>>>> IDs are user-provided to uniquely identify the traffic coming in/out
>>>>>>> of the trunked networks so that it can reach the appropriate VLAN
>>>>>>> interface within the guest. The documentation [1] is still in flight,
>>>>>>> but it clarifies this point.
>>>>>>>
>>>>>>> HTH
>>>>>>> Armando
>>>>>>>
>>>>>>> [1] https://review.openstack.org/#/c/361776
>>>>>>>
>>>>>>>
>>>>>>>> So it is strange that those fields are mandatory when a subport is
>>>>>>>> added to a trunk. Furthermore, they may conflict with the subport
>>>>>>>> network's segmentation_id and type. Why can't we inherit
>>>>>>>> segmentation_type and segmentation_id from the network settings of
>>>>>>>> the subport?
>>>>>>>>
>>>>>>>> References:
>>>>>>>> [0] https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
>>>>>>>> [1] https://review.openstack.org/#/c/361776/15/doc/networking-guide/source/config-trunking.rst
>>>>>>>> [2] https://etherpad.openstack.org/p/trunk-api-dump-newton
>>>>>>>>
>>>>>>>> Thanks in advance,
>>>>>>>> Vasyl Saienko
>>>>>>>>