[openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Richard Woo richardwoo2003 at gmail.com
Mon Nov 3 22:55:45 UTC 2014


Hello, will this topic be discussed in the design session?

Richard

On Mon, Nov 3, 2014 at 10:36 PM, Erik Moe <erik.moe at ericsson.com> wrote:

>
>
> I created an etherpad and added use cases (so far just the ones in your
> email).
>
> https://etherpad.openstack.org/p/tenant_vlans
>
> /Erik
>
> *From:* Erik Moe [mailto:erik.moe at ericsson.com]
> *Sent:* den 2 november 2014 23:12
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
> *From:* Ian Wells [mailto:ijw.ubuntu at cack.org.uk]
>
> *Sent:* den 31 oktober 2014 23:35
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
> On 31 October 2014 06:29, Erik Moe <erik.moe at ericsson.com> wrote:
>
> I thought Monday’s network meeting agreed that “VLAN aware VMs”, trunk
> networks + L2GW were different use cases.
>
> Still, I get the feeling that the proposals are being put up against each
> other.
>
> I think we agreed they were different, or at least the light was beginning
> to dawn on the differences, but Maru's point was that if we really want to
> decide what specs we have we need to show use cases not just for each spec
> independently, but also include use cases where e.g. two specs are required
> and the third doesn't help, so as to show that *all* of them are needed.
> In fact, I suggest that first we do that - here - and then we meet up one
> lunchtime and attack the specs in etherpad before submitting them.  In
> theory we could have them reviewed and approved by the end of the week.
> (This theory may not be very realistic, but it's good to set lofty goals,
> my manager tells me.)
>
> Ok, let’s try. I hope your theory turns out to be realistic. :)
>
>  Here are some examples of why bridging between Neutron internal networks
> using a trunk network and L2GW should, IMO, be avoided. I am still fine
> with bridging to external networks.
>
> Assume a VM with a trunk port wants to use a floating IP on a specific
> VLAN. The router has to be created on a Neutron network behind the L2GW,
> since the Neutron router cannot handle VLANs. (Maybe not a very common use
> case, but it shows the kind of issues you can get into.)
>
> neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
>
> The code that checks whether the port is valid has to be able to traverse
> the L2GW. Handling of the VM’s IP addresses will most likely be affected,
> since the VM port is connected to several broadcast domains. Alternatively,
> a new API can be created.
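>
> For concreteness, a minimal sketch of that workaround with the standard
> neutron CLI (the network, router and external-network names are
> placeholders, and the L2GW wiring itself is assumed to already exist):
>
>   # Neutron network sitting behind the L2GW; the router and floating IP
>   # live here, because the Neutron router cannot terminate VLANs itself.
>   neutron net-create behind-gw-net
>   neutron subnet-create --name behind-gw-subnet behind-gw-net 10.0.0.0/24
>   neutron router-create gw-router
>   neutron router-interface-add gw-router behind-gw-subnet
>   neutron router-gateway-set gw-router public
>
>   # Associating the floating IP with the VM port is then the usual call;
>   # the port-validity check behind it is what would have to traverse the
>   # L2GW.
>   neutron floatingip-create public
>   neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID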
>
> Now, this is a very good argument for 'trunk ports', yes.  It's not
> actually an argument against bridging between networks.  I think the
> bridging case addresses use cases (generally NFV use cases) where you're
> not interested in Openstack managing addresses - often because you're
> forwarding traffic rather than being an endpoint, and/or you plan on
> disabling all firewalling for speed reasons, but perhaps because you wish
> to statically configure an address rather than use DHCP.  The point is
> that, in the absence of a need for address-aware functions, you don't
> really care much about ports, and in fact configuring ports with many
> addresses may simply be overhead.  Also, as you say, this doesn't address
> the external bridging use case where what you're bridging to is not
> necessarily in Openstack's domain of control.
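>
> For reference, such a forwarding-only port is usually prepared today by
> switching off Neutron’s address-aware machinery on it. A sketch, assuming
> the stock CLI ($PORT_ID is a placeholder, and note that some plugins
> restrict the 0.0.0.0/0 wildcard in allowed-address-pairs):
>
>   # Drop security groups so forwarded traffic is not filtered out.
>   neutron port-update $PORT_ID --no-security-groups
>
>   # Allow the port to send/receive addresses Neutron did not allocate to it.
>   neutron port-update $PORT_ID --allowed-address-pairs type=dict list=true \
>       ip_address=0.0.0.0/0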
>
> I know that many NFVs currently prefer to manage everything themselves. At
> the same time, IMO they should be encouraged to become Neutronified.
>
>  In “VLAN aware VMs” the trunk port MAC address has to be globally unique,
> since it can be connected to any network; other ports still only have to be
> unique per network. But for L2GW all MAC addresses have to be globally
> unique, since they might be bridged together at a later stage.
>
> I'm not sure that that's particularly a problem - any VM with a port will
> have one globally unique MAC address.  I wonder if I'm missing the point
> here, though.
>
> Ok, this was probably too specific, sorry. Neutron can reuse MAC addresses
> among Neutron networks. But I guess this is configurable.
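>
> For reference, the generation prefix is the base_mac option in
> neutron.conf, and (if I read the schema right) uniqueness is only enforced
> per network, which is what allows the reuse:
>
>   # neutron.conf, [DEFAULT] section; stock value shown. The fixed leading
>   # octets are combined with randomized trailing octets per port.
>   base_mac = fa:16:3e:00:00:00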
>
>  Also, some implementations might not be able to take the VID into account
> when doing MAC address learning, forcing at least unique MACs on a trunk
> network.
>
> If an implementation struggles with VLANs then the logical thing to do
> would be not to implement them in that driver.  Which is fine: I would
> expect (for instance) LB-driver networking to work for this and leave
> OVS-driver networking to never work for this, because there's little point
> in fixing it.
>
> Same as above, this is related to reuse of MAC addresses.
>
>  The benefit of “VLAN aware VMs” is integration with existing Neutron
> services.
>
> The benefits of trunk networks are lower consumption of Neutron networks
> and less management per VLAN.
>
> Actually, the benefit of trunk networks is:
>
> - if I use an infrastructure where all networks are trunks, I can find out
> that a network is a trunk
>
> - if I use an infrastructure where no networks are trunks, I can find out
> that a network is not a trunk
>
> - if I use an infrastructure where trunk networks are more expensive, my
> operator can price accordingly
>
> And, again, this is all entirely independent of either VLAN-aware ports or
> L2GW blocks.
>
> Both are true. I was referring to “true” trunk networks; you were referring
> to your additions, right?
>
>  The benefit of L2GW is the ease of doing network stitching.
>
> There are other benefits to the different proposals; the point is that it
> might be beneficial to have all of the solutions.
>
> I totally agree with this.
>
> So, use cases that come to mind:
>
> 1. I want to pass VLAN-encapped traffic from VM A to VM B.  I do not know
> at network setup time what VLANs I will use.
> case A: I'm simulating a network with routers in.  The router config is
> not under my control, so I don't know addresses or the number of VLANs in
> use.  (Yes, this use case exists, search for 'Cisco VIRL'.)
> case B: NFV scenarios where the VNF orchestrator decides how few or many
> VLANs are used, where the endpoints may or may not be addressed, and where
> the addresses are selected by the VNF manager.  (For instance, every time I
> add a customer to a VNF service I create another VLAN on an internal link.
> The orchestrator is intelligent and selects the VLAN; telling Openstack the
> details is needless overhead.)
>
>   - this use case set suggests VLAN trunks, but says nothing about
> anything else.
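>
> Worth noting: nothing special is needed in the guest for this use case; it
> is plain VLAN subinterface handling, e.g. with iproute2 (eth0, VID 100 and
> the address are examples):
>
>   # Inside VM A: send traffic tagged with VLAN 100 out of eth0.
>   ip link add link eth0 name eth0.100 type vlan id 100
>   ip addr add 192.0.2.1/24 dev eth0.100
>   ip link set eth0.100 up
>
> The point of the use case is that Neutron never needs to learn that VID 100
> exists.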
>
> 2. Service VMs, where I'm attaching one VM to many networks so that I can
> use that VM to implement many instances of the same service.  Either the VM
> won't hotplug VIFs, or it won't hotplug enough VIFs (max # VIFs << max #
> VLANs).
>
>   - this use case set suggests bringing multiple networks into a single
> port, which is the trunk port use case
>
>   - addressing would likely be Openstack's responsibility, again
> suggesting trunk ports
>
>   - this use case could equally be solved using an L2GW and a trunk
> network, but that would require more API calls and doesn't add much value
>
>
>
> 3. An external service appliance, where I'm attaching one external port to
> many networks so that I can use that appliance to implement many instances
> of the same service.
>
>   - given the external service is probably on a provider network, this
> suggests that I want to composite multiple tenant networks to a trunked
> (external) network, indicating an L2GW or an external port specific
> extension
>
>   - I would probably like the addresses to be under the control of
> Openstack (so that I can take a tenant network address and prevent it from
> being re-used), implying that the tenant-side ports can have addresses.
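>
> Equally hypothetical, an L2GW-style API for this could look something like
> the following (no such commands exist in Neutron today; names and VIDs are
> illustrative):
>
>   # Hypothetical: declare the appliance attachment as a gateway, then
>   # stitch tenant networks onto VLANs of the trunked provider port.
>   neutron l2-gateway-create appliance-gw
>   neutron l2-gateway-connection-create appliance-gw tenant-net-1 --segmentation-id 201
>   neutron l2-gateway-connection-create appliance-gw tenant-net-2 --segmentation-id 202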
>
> 4. I want to connect multiple VMs to a trunk network, and some VMs to
> individual VLANs in the same network
>
> (seems useful with my network engineer hat on, but I'm struggling to think
> of a concrete example)
>
>   - works best with L2GW; also works with two trunk ports
>
> 5. An anti-use-case: I want to send Neutron's networking into a death
> spiral by making a forwarding loop
>
>   - the L2GW allows you to do this (connect trunk port to access port); we
> may want to avoid this 'feature'
>
> Yes, the loop one also came to my mind. :)
>
> Here’s a future use-case: I am an NFV using ports with and without VLANs.
> Now I want QoS.
>
> That's still coming out as 'we probably only require one of trunk ports
> and L2GWs, but both is nicer'.  Also, I know that we'd like not to make a
> mistake here, but we've been talking about this for at least 18 months, so
> I would prefer that we try for at least one and ideally both of these
> solutions and risk deprecating them later rather than sitting on the fence
> for another six months.
>
> Agree.
>
>  Platforms that have issues forking off VLANs at the VM port level could
> get around this with a trunk network + L2GW, but would need more hacks if
> integration with other parts of Neutron is needed.
>
> My inclination is that the L2GW should not try to take advantage of the
> encap in a VLAN-based underlay.  It makes it more efficient but ties it too
> closely to the actual physical implementation of the network.
>
> Not sure I follow; I’ll re-read this tomorrow…
>
>  Platforms that have issues implementing trunk networks could get around
> this using “VLAN aware VMs”, but would be forced to manage every VLAN
> separately as a Neutron network. On platforms that have both, the user can
> select the method depending on what is needed.
>
> Thanks,
>
> Erik
>
>
> --
>
> Ian.
>
> /Erik
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev