[Openstack-operators] Active/passive nova-network failover results in both controllers ARPing for gateway addresses

Jay Pipes jaypipes at gmail.com
Wed Oct 29 12:57:41 UTC 2014


Hi Mike, I'm no networking or HA expert, but I've added some comments 
inline and cc'd Florian Haas (who *is* an HA expert!) to see if he can 
help you out...

On 10/29/2014 12:34 AM, Mike Smith wrote:
> I’ve been running nova-network in VLAN mode as an active/passive
> cluster resource (corosync + rgmanager) on my OpenStack Havana and
> Folsom controller pairs for a good long while.  This week I found an
> oddity that I hadn’t noticed before, and I’d like to ask the
> community about it.
>
> When nova-network starts up, it of course launches a dnsmasq process
> for each network, which listens on the .1 address of the assigned
> network and acts as the gateway for that network.   When the
> nova-network service is moved to the passive node, nova-network
> starts up dnsmasq processes on that node as well, again listening on
> the .1 addresses.   However, since now both nodes have the .1
> addresses configured, they basically take turns ARPing for the
> addresses and stealing the traffic from each other.  VMs will route
> through the “active” node for a minute or so and then suddenly start
> routing through the “passive” node.  Then the cycle repeats.   Among
> other things, this results in only one controller at a time being
> able to reach the VMs and adds latency to VM traffic when the shift
> happens.

It sounds like your failover is not actually completing. In other 
words, it sounds like your previously active node is never marked as 
fully down, which is what should clear the way for the backup/passive 
node to take over. I would expect some minimal disruption during 
failover while ARP table entries are repopulated once the old active 
node stops answering, but it's the "Then the cycle repeats." part 
that has me questioning things...
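
If you want to confirm what's happening on the wire, tcpdump on the 
bridge will show you each node's gratuitous ARPs claiming the gateway. 
A quick sketch (I'm assuming br100 as the bridge, eth0 as a client 
interface, and 10.0.0.1 as one of your .1 gateways; substitute your 
own names):

   # Watch ARP traffic for the gateway; -e prints the source MAC,
   # so you can see which controller is claiming the address
   tcpdump -n -e -i br100 'arp and host 10.0.0.1'

   # From a VM or another host on the VLAN: see who answers for
   # the gateway right now (iputils arping)
   arping -c 3 -I eth0 10.0.0.1

If the replies flip between two MAC addresses, the controllers really 
are fighting over the address rather than just racing once during 
failover.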

> To stop this, I had to manually remove the VLAN interfaces from the
> bridges, bring down the bridges, then delete the bridges from the
> now-passive node.  Things then returned to normal, with all traffic
> flowing through the “active” controller and both controllers being
> able to reach the VMs.
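
For the archives, that per-network teardown is roughly the following 
(a sketch, assuming vlan100 bridged into br100; adjust the names and 
loop over all of your networks):

   # Pull the VLAN interface out of the bridge
   brctl delif br100 vlan100
   # Take the bridge (and the .1 gateway address living on it) down
   ip link set br100 down
   # Remove the bridge itself
   brctl delbr br100
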
>
> I have not seen anything in the HA guides about how people are
> preventing this situation from occurring - nothing about killing off
> dnsmasq or tearing down these network interfaces to prevent the ARP
> wars.  Has anybody else out there experienced this?  How are people
> handling the situation?
>
> I am considering bringing up arptables to block ARP for the gateway
> addresses when cluster failover happens, or alternatively automating
> the tear-down of these gateway addresses.  Am I missing something
> here?
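
The arptables idea is basically the same trick LVS direct-routing 
setups use to keep their real servers quiet about the VIP. A rough, 
untested sketch for the passive node (10.0.0.1 again standing in for 
one of your .1 gateway addresses):

   # Don't answer ARP requests asking who has the gateway
   arptables -A INPUT -d 10.0.0.1 -j DROP
   # Don't send ARP replies or gratuitous ARPs claiming the gateway
   arptables -A OUTPUT -s 10.0.0.1 -j DROP

Your resource agent would need to add these on demotion and flush them 
(arptables -F) on promotion, or you trade one outage for another.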

I'll let Florian talk about what is expected of the networking layer 
during failover, but I'll just say that we used multi-host nova-network 
in our Folsom deployments to great effect. It was incredibly reliable, 
and the nice thing about it was that if nova-network went down on a 
compute node, it only affected the VMs running on that particular 
compute node. A simple (re)start of the nova-network daemon was enough 
to bring tenant networking back up on that node, with no disruption in 
service to VMs on other compute nodes. The downside was that each 
compute node used an extra public IP address...
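
If memory serves, enabling it amounted to a flag on each compute node 
(each of which runs its own nova-network and nova-api-metadata) plus 
creating the networks as multi-host. That's from memory, so 
double-check against the docs for your release:

   # nova.conf on every compute node
   multi_host=True

   # networks created with the multi-host flag
   nova-manage network create --label=private \
       --fixed_range_v4=10.0.0.0/24 --multi_host=T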

Anyway, just something to think about. The DVR functionality in Neutron 
is attempting to reach parity with nova-network's multi-host mode, so 
if you're interested in this area, it's something to keep an eye on.

All the best,
-jay

> Thanks,
>
> Mike Smith
> Principal Engineer, Website Systems
> Overstock.com


