[Openstack] Mixed ml2 drivers in single Openstack setup

Kevin Benton kevin at benton.pub
Wed Nov 2 17:22:05 UTC 2016


Yes, if you represent the different network partitions yourself using host
aggregates, then it should be doable.

If users are launching VMs themselves with the API/CLI/Horizon, they will
have to know which host aggregate to pick for each network.
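
For example, a rough sketch of how that could look with the nova CLI (the
aggregate, zone, flavor, and image names below are only placeholders):

    # Group the legacy flat-network computes into their own aggregate + AZ
    nova aggregate-create agg-legacy-flat legacy-flat
    nova aggregate-add-host agg-legacy-flat compute-old-01

    # Group the new vendor-driver computes the same way
    nova aggregate-create agg-new-vendor new-vendor
    nova aggregate-add-host agg-new-vendor compute-new-01

    # Users then pick the zone that matches the network they attach to
    nova boot --flavor m1.small --image cirros \
        --nic net-id=<flat-net-uuid> \
        --availability-zone legacy-flat test-vm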

On Nov 2, 2016 02:04, "Chris" <contact at progbau.de> wrote:

> Hello,
>
>
>
> Thanks for the answers, they really helped us understand things in more
> depth.
>
>
>
> So to summarize a bit:
>
> It’s possible to use different type drivers in a single Openstack setup as
> long as each network is created with just one specific type driver.
>
> The Nova scheduler is not aware of the different type drivers; I think
> this can be addressed by creating availability zones/aggregates.
>
>
>
> With this information it sounds doable. Anything we are missing?
>
>
>
>
>
> Cheers
>
> Chris
>
>
>
> *From:* Kevin Benton [mailto:kevin at benton.pub]
> *Sent:* Wednesday, November 02, 2016 03:19
> *To:* Neil Jerram <neil at tigera.io>
> *Cc:* Remo Mattei <remo at italy1.com>; Chris <contact at progbau.de>;
> openstack at lists.openstack.org; Soputhi Sea <puthi at live.com>
> *Subject:* Re: [Openstack] Mixed ml2 drivers in single Openstack setup
>
>
>
> >- If you configure multiple type drivers, the first one will always be
> used, subject to resource availability. E.g. with 'vlan,vxlan', you would
> only start using VXLAN if all 4094 VLANs were already in use.
>
>
>
> Not quite. It will only use a type driver automatically if it's in
> 'tenant_network_types'. You can have all the type drivers you want
> configured in 'type_drivers' and they will only be available to admins who
> manually specify the type in a network creation request.
>
>
>
> So if you wanted to allow the creation of new networks and set them to a
> specific type while leaving the old networks untouched, you would just
> remove 'flat' from the 'tenant_network_types' option.
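>
> As a rough sketch (the vendor mechanism driver name below is only a
> placeholder for whatever your plugin ships), the [ml2] section of
> ml2_conf.ini could then look something like this:
>
>     [ml2]
>     # 'flat' stays loaded so existing flat networks keep working,
>     # but it is no longer handed out for tenant-created networks
>     type_drivers = flat,vlan,vxlan
>     tenant_network_types = vxlan
>     mechanism_drivers = openvswitch,examplevendor
>
> An admin can still create a flat network explicitly, e.g.:
>
>     neutron net-create legacy-net --provider:network_type flat \
>         --provider:physical_network physnet1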
>
>
>
>
>
> >- One issue that I suspect to be lurking is: what if the L3 plugin/agent
> that you need for a port - e.g. to map a floating IP onto it - depends on
> the mechanism driver that is used to bind that port?
>
>
>
> The reference L3 plugin doesn't have a problem with this because it's not
> the L3 agent's responsibility to do L2 wiring for its ports. It just
> creates the ports and lets the L2 agent (or other mech driver) deal with
> wiring the VIFs.
>
>
>
>
>
> It's not clear in the original email, but if the original flat networks
> are not available to the new compute nodes, you will encounter issues if
> you try to boot a VM onto one of the flat networks and it gets scheduled to
> a new compute node. It will result in a binding failure (assuming the
> vendor ML2 driver was written correctly).
>
>
>
> Currently, Nova scheduling doesn't account for networks that aren't
> available to every compute node in the datacenter. So if the intention is
> to have the flat network available to the old compute nodes, and then a
> different network available to the new compute nodes, it's not really going
> to work.
>
>
>
> On Tue, Nov 1, 2016 at 5:35 AM, Neil Jerram <neil at tigera.io> wrote:
>
> In principle... In the fully general case I think there are issues
> lurking, so it would indeed be interesting to hear about real experience
> from people who have done this.
>
>
>
> FWIW here's what I think I understand about mixed ML2 configurations:
>
>
>
> - We're talking about 'type' drivers and 'mechanism' drivers. Type drivers
> are about how instance data is transported between compute hosts. Mechanism
> drivers are about what happens on each compute host to connect instance
> data into that ('type') system.
>
>
>
> - If you configure multiple type drivers, the first one will always be
> used, subject to resource availability. E.g. with 'vlan,vxlan', you would
> only start using VXLAN if all 4094 VLANs were already in use.
>
>
>
> - OTOH if you configure multiple mechanism drivers, they will be asked in
> turn if they want to handle connecting up (aka binding) each new Neutron
> port. So different mechanism drivers can in principle handle different
> kinds of port (a minimal config sketch follows after these points).
>
>
>
> - One issue that I suspect to be lurking is: what if the L3 plugin/agent
> that you need for a port - e.g. to map a floating IP onto it - depends on
> the mechanism driver that is used to bind that port?
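>
> To make the mechanism-driver point concrete (the driver names here are only
> illustrative), with something like
>
>     [ml2]
>     mechanism_drivers = openvswitch,linuxbridge
>
> ML2 asks the openvswitch driver first to bind each port; if it declines
> (e.g. no OVS agent is running on that host), the linuxbridge driver gets
> the next try.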
>
>
>
> I hope those points are useful to start discussing this interesting topic
> further.
>
>
>
>     Neil
>
>
>
>
>
> *From: *Remo Mattei
>
> *Sent: *Tuesday, 1 November 2016 06:09
>
> *To: *Chris
>
> *Cc: *openstack at lists.openstack.org; Soputhi Sea
>
> *Subject: *Re: [Openstack] Mixed ml2 drivers in single Openstack setup
>
>
>
> You can mix plugins with ML2. You will have to look at the options, but it
> is designed for that.
>
>
> Sent from my iPad
>
>
> On Oct 31, 2016, at 9:08 PM, Chris <contact at progbau.de> wrote:
>
> Hello,
>
>
>
> We currently use the flat ml2 driver in our Openstack setup (around 500
> nodes).
>
> We now want to change to a vendor-specific ml2 network driver. So the
> legacy compute nodes will use the flat network, and compute nodes added in
> the future will use networks based on the new ml2 vendor plugin.
>
>
>
> Question: is it possible to mix two network setups as described above? We
> don’t use regions; could they be a possible solution?
>
>
>
> Any help and answers are appreciated.
>
>
>
> Cheers
>
> Chris
>
>
>
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>

