[openstack-dev] [Fuel][TripleO] NIC bonding for OpenStack

Robert Collins robertc at robertcollins.net
Tue Feb 11 23:42:34 UTC 2014


On 12 February 2014 05:42, Andrey Danin <adanin at mirantis.com> wrote:
> Hi Openstackers,
>
>
> We are working on link aggregation support in Fuel. We wonder what the
> most desirable types of bonding in datacenters are these days. We had
> some issues (see below) with an OVS bond in LACP mode, and it turned out
> that standard Linux bonding (attached to OVS bridges) was a better
> option in our setup.

OVS implements SLB bonding as well as LACP, so we shouldn't need
standard Linux bonding at all - I'd rather keep things simple if
possible - having more moving parts than we need is a problem :).
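
For reference, both modes are a one-liner with ovs-vsctl - bridge and
NIC names here are just placeholders:

  # SLB bonding: no switch-side configuration needed
  ovs-vsctl add-bond br-ex bond0 eth0 eth1 bond_mode=balance-slb

  # LACP: the switch ports must be configured for 802.3ad as well
  ovs-vsctl add-bond br-ex bond0 eth0 eth1 bond_mode=balance-tcp lacp=active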

> I want to hear your opinion, guys. What types of bonding do you think are
> better now in terms of stability and performance, so that we can properly
> support them for OpenStack installations.

We'll depend heavily on operator feedback here - Jay has forwarded
this to the operators list, so let's see what they say.

> Also, we are wondering if there are any plans to support bonding in
> TripleO, and how you guys would like to see it implemented? What is the
> general approach for such complex network configurations in TripleO? We
> would love to extract this piece from Fuel and make it fully
> independent, so that the larger community can use it and we could work
> on it collaboratively. Right now it is already granular and can be
> reused in other projects, implemented as a separate puppet module:
> https://github.com/stackforge/fuel-library/tree/master/deployment/puppet/l23network.

Yes, we'd like to support bonding.

I think we need this modelled in Neutron to do it properly, though I'm
just drafting up a schema for us to model this manually in heat in the
interim (will be at
https://etherpad.openstack.org/p/tripleo-network-configuration
shortly).

Ideally:
 - we use LACP active mode on all ports
 - in-instance logic configures bonding when two Ethernet ports are
plugged into the same switch
 - nova's BM driver puts all NIC ports on the same L2 network, if
requested by the user.
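
The detection half of that in-instance logic could be as simple as
comparing LLDP chassis IDs - a rough sketch, assuming lldpd is running
in the instance, with placeholder interface names:

  # If both NICs see the same chassis via LLDP, they're on the same
  # switch and safe to bond.
  c0=$(lldpctl -f keyvalue eth0 | grep -m1 'chassis.*mac' | cut -d= -f2)
  c1=$(lldpctl -f keyvalue eth1 | grep -m1 'chassis.*mac' | cut -d= -f2)
  if [ "$c0" = "$c1" ]; then
      echo "same switch - bond eth0 and eth1"
  fi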

However, we don't really want one IP per port when bonding. That will
make DHCP hard - we'd have to have
vswitch
  port nic0
  port nic1
  ...
  port nicN

and then

ovs-vsctl add-port vswitch dhcp1
ovs-vsctl set interface dhcp1 type=internal
ovs-vsctl set interface dhcp1 mac="$macofnic1"

...

ovs-vsctl add-port vswitch dhcpN
ovs-vsctl set interface dhcpN type=internal
ovs-vsctl set interface dhcpN mac="$macofnicN"

But the virtual switch only has one MAC - the lowest by default, IIRC
- so we'd lose the excess IPs and become unreachable unless other folk
reimplement the heuristic the bridge uses to pick the right IP. Ugh.
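
(The bridge MAC can at least be pinned rather than left to that
heuristic - this is standard OVS configuration:

  ovs-vsctl set bridge vswitch other-config:hwaddr="$macofnic0"

but that still doesn't solve one IP per port.)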

So my long term plan is:
 - Ironic knows about the NICs
 - nova boot specifies which NICs are bonded (1)
 - Neutron gets one port for each bonding group, with one IP and *all*
the possible MACs - so it can answer DHCP for whichever one the server
DHCPs from
 - we include all the MACs in a vendor DHCP option, and then the
in-instance logic can build a bridge from that + explicit in-instance
modelling we might do for e.g. heat.

(1): because in an SDN world we might bond things differently on
different deploys
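
The build step could then look very roughly like this, assuming the
MAC list has already been pulled out of that (hypothetical) vendor
DHCP option into $BOND_MACS, lowercased:

  # Map each advertised MAC back to a local interface name, then
  # enslave the matching interfaces to one OVS bond.
  ifaces=""
  for mac in $BOND_MACS; do
      ifaces="$ifaces $(ip -o link | awk -v m="$mac" \
          '$0 ~ m {gsub(":", "", $2); print $2}')"
  done
  ovs-vsctl --may-exist add-br br-bond0
  ovs-vsctl add-bond br-bond0 bond0 $ifaces bond_mode=balance-tcp lacp=active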

> Description of the problem with LACP we ran into:
>
> https://etherpad.openstack.org/p/LACP_issue

Yeah, matching the configuration on both ends is important :). The
port overload is particularly interesting - scalability issues galore.

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


