[openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)
brian.haley at hp.com
Thu Jan 29 21:34:19 UTC 2015
On 01/29/2015 03:55 AM, Kevin Benton wrote:
>>Why would users want to change an active port's IP address anyway?
> Re-addressing. It's not common, but the entire reason I brought this up is
> because a user was moving an instance to another subnet on the same network and
> stranded one of their VMs.
>> I worry about setting a default config value to handle a very unusual use case.
> Changing a static lease is something that works on normal networks so I don't
> think we should break it in Neutron without a really good reason.
How is Neutron breaking this? If I move a port on my physical switch to a
different subnet, can you still communicate with the host sitting on it?
Probably not, since it has a view of the world (next-hop router) that no longer
exists, and the network won't route packets for its old IP address to the new
location. It has to wait for its current DHCP lease to tick down to the point
where it will use broadcast to get a new one, after which point it will work.
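For context, RFC 2131's default timers are what make that wait so long: a
client does not attempt a unicast renewal until T1 (half the lease) and does
not fall back to broadcast until T2 (87.5% of the lease). A quick sketch of
what that means for the long lease discussed below:

```python
# RFC 2131 default timers, in seconds:
#   T1 (renew, unicast)     = 0.5   * lease
#   T2 (rebind, broadcast)  = 0.875 * lease
def dhcp_timers(lease):
    """Return (T1, T2) for a given lease time in seconds."""
    return 0.5 * lease, 0.875 * lease

# With a 10-day lease, the client waits ~5 days before even trying to
# renew, and ~8.75 days before broadcasting to find a new server.
t1, t2 = dhcp_timers(10 * 24 * 3600)
```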
> Right now, the big reason to keep a high lease time that I agree with is that it
> buys operators lots of dnsmasq downtime without affecting running clients. To
> get the best of both worlds we can set DHCP option 58 (a.k.a dhcp-renewal-time
> or T1) to 240 seconds. Then the lease time can be left to be something large
> like 10 days to allow for tons of DHCP server downtime without affecting running
> clients.
> There are two issues with this approach. First, some simple dhcp clients don't
> honor that dhcp option (e.g. the one with Cirros), but it works with dhclient so
> it should work on CentOS, Fedora, etc (I verified it works on Ubuntu). This
> isn't a big deal because the worst case is what we have already (half of the
> lease time). The second issue is that dnsmasq hardcodes that option, so a patch
> would be required to allow it to be specified in the options file. I am happy to
> submit the patch required there so that isn't a big deal either.
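For reference, Neutron's DHCP agent already writes a per-network options file
that dnsmasq reads; if the proposed dnsmasq patch landed, the entry might look
like the following. This is hypothetical syntax, since stock dnsmasq hardcodes
T1 as described above, and the tag and path are illustrative:

```
# hypothetical entry in the per-network opts file the DHCP agent writes
# (e.g. /var/lib/neutron/dhcp/<network-id>/opts), assuming a patched
# dnsmasq that accepts option 58 (dhcp-renewal-time / T1) here
tag:tag0,option:T1,240
```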
Does it work on Windows VMs too? People run those in clouds as well. The point
is that if we don't know whether all the DHCP clients will support it, then it's
a non-starter, since there's no way to tell from the server side.
> If we implement that fix, the remaining issue is Brian's other comment about too
> much DHCP traffic. I've been doing some packet captures and the standard
> request/reply for a renewal is 2 unicast packets totaling about 725 bytes.
> Assuming 10,000 VMs renewing every 240 seconds, there will be an average of 242
> kbps background traffic across the entire network. Even at a density of 50 VMs,
> that's only 1.2 kbps per compute node. If that's still too much, then the
> deployer can adjust the value upwards, but that's hardly a reason to have a high
> default.
"... then the deployer can adjust the value upwards...", hmm, can they adjust it
downwards as well? :)
> That just leaves the logging problem. Since we require a change to dnsmasq
> anyway, perhaps we could also request an option to suppress logs from renewals?
> If that's not adequate, I think 2 log entries per vm every 240 seconds is really
> only a concern for operators with large clouds and they should have the
> knowledge required to change a config file anyway. ;-)
I'm glad you're willing to "boil the ocean" to try and get the default changed,
but is all this really worth it when all you have to do is edit the config file
in your deployment? That's why the value is there in the first place.
Sorry, I'm still unconvinced we need to do anything more than document this.