[openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

Kevin Benton blak111 at gmail.com
Tue Feb 3 23:48:02 UTC 2015


> If you had created a second network and subnet this would have been
dropped (different broadcast domain).

Well, that update wouldn't have been allowed at the API. You can't use a
fixed IP from a subnet on a network that your port isn't attached to.
Changing a neutron port to a different network is not what we are talking
about here.
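
For anyone who wants to verify the API-level rejection, here is a rough
sketch with python-neutronclient (the UUIDs are placeholders and the exact
exception class may vary by client version):

    from neutronclient.common import exceptions
    from neutronclient.v2_0 import client

    # Placeholder IDs - substitute real UUIDs from your deployment.
    PORT_ID = '<port-uuid>'
    OTHER_SUBNET_ID = '<subnet-uuid-on-a-different-network>'

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')
    try:
        # Request a fixed IP from a subnet whose network the port is
        # not attached to; the server refuses this with a 400 before
        # anything reaches the data plane.
        neutron.update_port(PORT_ID, {'port': {
            'fixed_ips': [{'subnet_id': OTHER_SUBNET_ID}]}})
    except exceptions.BadRequest as e:
        print('Rejected at the API: %s' % e)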

> I said that's a bad design because other things can cause it to go
offline, for example:

Yet people do it anyway, which is why I referenced the EC2 example. People
can deal with outages caused by unexpected failures. The outage we are
talking about is part of a normal API call and it doesn't make any sense to
the user.

> If it takes 10 minutes for them to re-create their instance elsewhere
that cannot be blamed on neutron, even if it was our API call that caused
it to go offline.

The outage can still be blamed on Neutron. What you are implying here is
that instead of improving the usability of Neutron, we just give up and
tell users that they should have known better. I don't like supporting a
project with that kind of approach to usability. It leads to unhappy users
and it reflects poorly on the quality of the project.

>The difference in a port IP change API call is that it requires action on
the VMs part that neutron can't trigger immediately.

We know why these are different because we understand how Neutron works
internally, but there is no reason to expect a user to know that. From a
user's perspective, one API call to change an IP (floating IP) works as
expected, while the other (port IP) has a huge, variable delay.

>How is warning the user about this a bad thing?

We can and should make a note of this behavior, but it's not enough IMO.
Users don't read the documentation for these kinds of things until they hit
an issue. We can update the Neutron server to return the DHCP interval to
the Neutron client and update the client to output these warnings, but it's
still a bit late at that point since we are telling the user, "You just
broke your VM for 0-$(1/2 dhcp lease) hours. If you need it sooner,
hopefully you have console access or are fine with a forced restart."
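
To sketch what that client-side warning could look like (purely
hypothetical - the current API returns no such field, so both the field
and the helper below are made up for illustration):

    # Hypothetical client-side warning, assuming the server were
    # extended to return a 'dhcp_lease_duration' field with the subnet.
    def warn_after_ip_update(subnet):
        lease = subnet.get('dhcp_lease_duration')  # not a real field today
        if lease:
            # Clients renew at T1 = lease/2 by default, so the worst
            # case wait is lease/2 seconds, i.e. lease/120 minutes.
            print('Warning: the instance may not pick up this change '
                  'for up to %d minutes (DHCP renewal).' % (lease // 120))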

>There is no delay in the API call here, the port was updated just as the
user requested.

I never said there was a delay in the API call. I am talking about how long
it takes for that to take effect on the data plane. For it to take full
effect, the VMs need to get the information from the DHCP server. The long
default lease we have now means they won't get the information for hours on
average, which is the long delay I am referring to.
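
To put rough numbers on that (assuming the default 86400-second lease and
the standard renewal time of half the lease, per RFC 2131):

    # Back-of-the-envelope delay for a running VM to learn its new IP.
    lease = 86400                 # default dhcp_lease_duration, seconds
    t1 = lease // 2               # clients renew at T1 = lease/2
    print('worst case: %.1f hours' % (t1 / 3600.0))       # 12.0
    print('on average: %.1f hours' % (t1 / 2 / 3600.0))   # 6.0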


>And adding a DHCP option to tell them to renew more frequently doesn't fix
the problem, it only lessens it to ~(interval/2) - that might not be
acceptable to users and they need to know the danger.

In the very first email in this thread, I pointed out that this is only
reducing the time. I don't think that was ever up for debate. The danger
exists already and warning them with whatever mechanism you had in mind
is orthogonal to my proposal to reduce the downtime.
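
As an aside, a guest running ISC dhclient can already opt into faster
renewals on its own, with no server-side change - a hedged example, since
other DHCP clients (e.g. the one in Cirros) use different mechanisms or
none at all:

    # /etc/dhcp/dhclient.conf inside the guest: override whatever T1
    # the server sends and attempt renewal every 240 seconds
    # (see dhclient.conf(5) for the supersede statement).
    supersede dhcp-renewal-time 240;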

>This is the one point I've been trying to get across in this whole
discussion - these are advanced options that users need to take caution
with, neutron can only do so much.

Neutron is completely responsible for the management of the DHCP server in
this case. We have a lot of room for improvement here. I don't think we
should throw in the towel yet.
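
For reference, the only knob operators have today is the lease duration
itself in neutron.conf; the separate renewal-interval option discussed in
this thread would be a new addition:

    [DEFAULT]
    # Lease handed out by the Neutron-managed dnsmasq, in seconds.
    # Default is 86400 (1 day); -1 means infinite leases.
    dhcp_lease_duration = 86400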

On Tue, Feb 3, 2015 at 8:53 AM, Brian Haley <brian.haley at hp.com> wrote:

> On 02/03/2015 05:10 AM, Kevin Benton wrote:
> >>The unicast DHCP will make it to the "wire", but if you've renumbered
> >>the subnet either a) the DHCP server won't respond because its IP has
> >>changed as well; or b) the DHCP server won't respond because there is
> >>no mapping for the VM on its old subnet.
> >
> > We aren't changing the DHCP server's IP here. The process that I saw
> > was to add a subnet and start moving VMs over. It's not 'b' either,
> > because the server generates a DHCPNAK in response, which will
> > immediately cause the client to release/renew. I have verified this
> > behavior already and recorded a packet capture for you.[1]
> >
> > In the capture, the renewal value is 4 seconds. I captured one renewal
> > before the IP address change from 99.99.99.5 to 10.0.0.25 took place.
> > You can see on the next renewal, the DHCP server immediately generates
> > a NAK. The client then releases its address, requests a new one,
> > assigns it and ACKs within a couple of seconds.
>
> Thanks for the trace.  So one thing I noticed is that this unicast DHCP
> only got to the server since you created a second subnet on this network
> (dest MAC of packet was that of same router interface).  If you had
> created a second network and subnet this would have been dropped
> (different broadcast domain).  These little differences are things users
> need to know because they lead to heads banging on desks :(
>
> >>This would happen if the AZ their VM was in went offline as well, at
> >>which point they would change their design to be more cloud-aware than
> >>it was.  Let's not heap all the blame on neutron - the user is tasked
> >>with vetting that their decisions meet the requirements they desire by
> >>thoroughly testing it.
> >
> > An availability zone going offline is not the same as an API operation
> > that takes a day to apply. In an internal cloud, maintenance for AZs
> > can be advertised and planned around by tenants running single-AZ
> > services. Even if you want to reference a public cloud, look how much
> > of the Internet breaks when Amazon's us-east-1a or us-east-1d AZs have
> > issues. Even though people are supposed to be bringing cattle to the
> > cloud, a huge portion already have pets that they are attached to or
> > that they can't convert into cattle.
>
> You completely missed the context of my reply, Kevin - an AZ failure is
> not a planned event.  You said people bring pets along, and rebooting
> them is painful.  I said that's a bad design because other things can
> cause it to go offline, for example:
>
>         1. Compute node failure
>         2. Network node failure
>         3. Router/switch failure
>         4. Internet failure
>         ...
>         99. API call
>
> All the user knows is they can't reach their VM - the cause doesn't
> matter when they can't sell their widgets to customers because their
> site is down.  If it takes 10 minutes for them to re-create their
> instance elsewhere that cannot be blamed on neutron, even if it was our
> API call that caused it to go offline.
>
> > If our floating IP 'associate' action took 12 hours to take effect on
> > a running instance, would telling users to reboot their instances to
> > apply floating IPs faster be okay? I would certainly heap the blame on
> > Neutron there.
>
> The difference in a port IP change API call is that it requires action
> on the VM's part that neutron can't trigger immediately.  It's still
> asynchronous like a floating IP call, but the delay is typically going
> to be longer.  All we can say is it will take from (0 -> interval)
> seconds.  How is warning the user about this a bad thing?
>
> >>How about a big (*) next to all the things that could cause issues?  :)
> >
> > You want to put it next to all of the API calls to put the burden on
> > the users. I want to put it next to the DHCP renewal interval in the
> > config files to put the burden on the operators. :)
> >
> > (*) Increasing this value will increase the delay between API calls
> > and when they take effect on the data plane for any that depend on
> > DHCP to relay the information. (e.g. port IP/subnet changes, port dhcp
> > option changes, subnet gateways, subnet routes, subnet DNS servers,
> > etc)
>
> There is no delay in the API call here, the port was updated just as
> the user requested.  Since they can't see into my config file (unless
> they look at their lease info or run a tcpdump trace) they are
> essentially making a blind change that immediately affects their
> instance.
>
> And adding a DHCP option to tell them to renew more frequently doesn't
> fix the problem, it only lessens it to ~(interval/2) - that might not
> be acceptable to users and they need to know the danger.  This is the
> one point I've been trying to get across in this whole discussion -
> these are advanced options that users need to take caution with,
> neutron can only do so much.
>
> -Brian
>
>
> > 1. http://paste.openstack.org/show/166048/
> >
> >
> > On Mon, Feb 2, 2015 at 8:57 AM, Brian Haley <brian.haley at hp.com> wrote:
> >
> >     Kevin,
> >
> >     I think we are finally converging.  One of the points I've been
> >     trying to make is that users are playing with fire when they start
> >     playing with some of these port attributes, and given the tool we
> >     have to work with (DHCP), the instantiation of these changes cannot
> >     be made seamlessly to a VM.  That's life in the cloud, and most of
> >     these things can (and should) be designed around.
> >
> >     On 02/02/2015 06:48 AM, Kevin Benton wrote:
> >     >> The only thing this discussion has convinced me of is that
> >     >> allowing users to change the fixed IP address on a neutron port
> >     >> leads to a bad user-experience.
> >     >
> >     > Not as bad as having to delete a port and create another one on
> >     > the same network just to change addresses though...
> >     >
> >     >> Even with an 8-minute renew time you're talking up to a
> >     >> 7-minute blackout (87.5% of lease time before using broadcast).
> >     >
> >     > I suggested 240 seconds renewal time, which is up to 4 minutes
> >     > of connectivity outage. This doesn't have anything to do with
> >     > lease time, and unicast DHCP will work because the spoof rules
> >     > allow DHCP client traffic before restricting to specific IPs.
> >
> >     The unicast DHCP will make it to the "wire", but if you've
> >     renumbered the subnet either a) the DHCP server won't respond
> >     because its IP has changed as well; or b) the DHCP server won't
> >     respond because there is no mapping for the VM on its old subnet.
> >
> >     >> Most would have rebooted long before then, true?  Cattle not
> >     >> pets, right?
> >     >
> >     > Only in an ideal world that I haven't encountered with customer
> >     > deployments. Many enterprise deployments end up bringing pets
> >     > along where reboots aren't always free. The time taken to
> >     > relaunch programs and restore state can end up being 10 minutes+
> >     > if it's something like a VDI deployment or dev environment where
> >     > someone spends a lot of time working on one VM.
> >
> >     This would happen if the AZ their VM was in went offline as well,
> >     at which point they would change their design to be more
> >     cloud-aware than it was.  Let's not heap all the blame on neutron -
> >     the user is tasked with vetting that their decisions meet the
> >     requirements they desire by thoroughly testing it.
> >
> >     >> Changing the lease time is just papering-over the real bug -
> >     >> neutron doesn't support seamless changes in IP addresses on
> >     >> ports, since it totally relies on the dhcp configuration
> >     >> settings a deployer has chosen.
> >     >
> >     > It doesn't need to be seamless, but it certainly shouldn't be
> >     > useless. Connectivity interruptions can be expected with IP
> >     > changes (e.g. I've seen changes in elastic IPs on EC2 interrupt
> >     > connectivity to an instance for up to 2 minutes), but an entire
> >     > day of downtime is awful.
> >
> >     Yes, I agree, an entire day of downtime is bad.
> >
> >     > One of the things I'm getting at is that a deployer shouldn't
> >     > be choosing such high lease times and we are encouraging it with
> >     > a high default. You are arguing for infrequent renewals to work
> >     > around excessive logging, which is just an implementation
> >     > problem that should be addressed with a patch to your logging
> >     > collector (de-duplication) or to dnsmasq (don't log renewals).
> >
> >     My #1 deployment problem was around control-plane upgrade, not
> >     logging:
> >
> >     "During a control-plane upgrade or outage, having a short DHCP
> >     lease time will take all your VMs offline.  The old value of 2
> >     minutes is not a realistic value for an upgrade, and I don't think
> >     8 minutes is much better.  Yes, when DHCP is down you can't boot a
> >     new VM, but as long as customers can get to their existing VMs
> >     they're pretty happy and won't scream bloody murder."
> >
> >     >> Documenting a VM reboot is necessary, or even deprecating this
> >     >> (you won't like that) are sounding better to me by the minute.
> >     >
> >     > If this is an approach you really want to go with, then we
> >     > should at least be consistent and deprecate the extra dhcp
> >     > options extension (or at least the ability to update ports' dhcp
> >     > options). Updating subnet attributes like gateway_ip,
> >     > dns_nameservers, and host_routes should be thrown out as well.
> >     > All of these things depend on the DHCP server to deliver updated
> >     > information and are hindered by renewal times. Why discriminate
> >     > against IP updates on a port? A failure to receive many of those
> >     > other types of changes could result in just as severe of a
> >     > connection disruption.
> >
> >     How about a big (*) next to all the things that could cause
> >     issues? :)  We've completely "loaded the gun" exposing all these
> >     attributes to the general user when only the network-aware
> >     power-user should be playing with them.
> >
> >     (*) Changing these attributes could cause VMs to become
> >     unresponsive for a long period of time depending on the deployment
> >     settings, and should be used with caution.  Sometimes a VM reboot
> >     will be required to regain connectivity.
> >
> >     > In summary, the information the DHCP server gives to clients is
> >     > not static. Unless we eliminate updates to everything in the
> >     > Neutron API that results in different DHCP lease information, my
> >     > suggestion is that we include a new option for the renewal
> >     > interval and have the default set <5 minutes. We can leave the
> >     > lease default at 1 day so the amount of time a DHCP server can
> >     > be offline without impacting running clients can stay the same.
> >
> >     I'm fine with adding Option 58, even though it only lessens the
> >     effect of this problem, doesn't truly fix it, and might not work
> >     with all clients (like in Cirros).
> >
> >     -Brian
> >
> >     > On Fri, Jan 30, 2015 at 8:00 AM, Brian Haley <brian.haley at hp.com> wrote:
> >     >
> >     > Kevin,
> >     >
> >     > The only thing this discussion has convinced me of is that
> >     > allowing users to change the fixed IP address on a neutron port
> >     > leads to a bad user-experience. Even with an 8-minute renew time
> >     > you're talking up to a 7-minute blackout (87.5% of lease time
> >     > before using broadcast).  This is time that customers are paying
> >     > for.  Most would have rebooted long before then, true?  Cattle
> >     > not pets, right?
> >     >
> >     > Changing the lease time is just papering-over the real bug -
> >     > neutron doesn't support seamless changes in IP addresses on
> >     > ports, since it totally relies on the dhcp configuration settings
> >     > a deployer has chosen.  Bickering over the lease time doesn't fix
> >     > that non-deterministic recovery for the VM.  Documenting a VM
> >     > reboot is necessary, or even deprecating this (you won't like
> >     > that) are sounding better to me by the minute.
> >     >
> >     > Is there anyone else that has used, or has customers using, this
> >     > part of the neutron API?  Can they share their experiences?
> >     >
> >     > -Brian
> >     >
> >     >
> >     > On 01/30/2015 07:26 AM, Kevin Benton wrote:
> >     >>> But they will if we document it well, which is what Salvatore
> >     >>> suggested.
> >     >>
> >     >> I don't think this is a good approach, and it's a big part of
> >     >> why I started this thread. Most of the deployers/operators I
> >     >> have worked with only read the bare minimum documentation to
> >     >> get a Neutron deployment working and they only adjust the
> >     >> settings necessary for basic functionality.
> >     >>
> >     >> We have an overwhelming amount of configuration options, and
> >     >> adding a note specifying that a particular setting for DHCP
> >     >> leases has been optimized to reduce logging at the cost of long
> >     >> downtimes during port IP address updates is a waste of time and
> >     >> effort on our part.
> >     >>
> >     >>> I think the current default value is also more indicative of
> >     >>> something you'd find in your house, or at work - i.e. stable
> >     >>> networks.
> >     >>
> >     >> Tenants don't care what the DHCP lease time is or that it
> >     >> matches what they would see from a home router. They only care
> >     >> about connectivity.
> >     >>
> >     >>> One solution is to disallow this operation.
> >     >>
> >     >> I want this feature to be useful in deployments by default, not
> >     >> strip it away. You can probably do this with
> >     >> /etc/neutron/policy.json without a code change if you wanted to
> >     >> block it in a deployment like yours where you have such a high
> >     >> lease time.
> >     >>
> >     >>> Perhaps letting the user set it, but allow the admin to set
> >     >>> the valid range for min/max?  And if they don't specify they
> >     >>> get the default?
> >     >>
> >     >> Tenants wouldn't have any reason to adjust this default. They
> >     >> would be even less likely than the operator to know about this
> >     >> weird relationship between a DHCP setting and the amount of time
> >     >> they lose connectivity after updating their ports' IPs.
> >     >>
> >     >>> It impacts anyone that hasn't changed from the default since
> >     >>> July 2013 and later (Havana), since if they don't notice, they
> >     >>> might get bitten by it.
> >     >>
> >     >> Keep in mind that what I am suggesting with the
> >     >> lease-renewal-time would be separate from the lease expiration
> >     >> time. The only difference that an operator would see on upgrade
> >     >> (if using the defaults) is increased DHCP traffic and more logs
> >     >> to syslog from dnsmasq. The lease time would still be the same,
> >     >> so the downtime windows for DHCP agents would be maintained.
> >     >> That is much less of an impact than many of the non-config
> >     >> changes we make between cycles.
> >     >>
> >     >> To clarify, even with the option for dhcp-renewal-time I am
> >     >> proposing, you are still opposed to setting it to anything low
> >     >> because of logging and the ~24 bps background DHCP traffic per
> >     >> VM?
> >     >>
> >     >> On Thu, Jan 29, 2015 at 7:11 PM, Brian Haley <brian.haley at hp.com> wrote:
> >     >>
> >     >> On 01/29/2015 05:28 PM, Kevin Benton wrote:
> >     >>>> How is Neutron breaking this?  If I move a port on my
> >     >>>> physical switch to a different subnet, can you still
> >     >>>> communicate with the host sitting on it?  Probably not since
> >     >>>> it has a view of the world (next-hop router) that no longer
> >     >>>> exists, and the network won't route packets for its old IP
> >     >>>> address to the new location.  It has to wait for its current
> >     >>>> DHCP lease to tick down to the point where it will use
> >     >>>> broadcast to get a new one, after which point it will work.
> >     >>>
> >     >>> That's not just moving to a different subnet. That's moving to
> >     >>> a different broadcast domain. Neutron supports multiple subnets
> >     >>> per network (broadcast domain). An address on either subnet
> >     >>> will work. The router has two interfaces into the network, one
> >     >>> on each subnet.[2]
> >     >>>
> >     >>>
> >     >>>> Does it work on Windows VMs too?  People run those in clouds
> >     >>>> too.  The point is that if we don't know if all the DHCP
> >     >>>> clients will support it then it's a non-starter since there's
> >     >>>> no way to tell from the server side.
> >     >>>
> >     >>> It appears they do.[1] Even for clients that don't, the worst
> >     >>> case scenario is just that they are stuck where we are now.
> >     >>>
> >     >>>> "... then the deployer can adjust the value upwards...", hmm,
> >     >>>> can they adjust it downwards as well?  :)
> >     >>>
> >     >>> Yes, but most people doing initial openstack deployments don't
> >     >>> and wouldn't think to without understanding the intricacies of
> >     >>> the security groups filtering in Neutron.
> >     >>
> >     >> But they will if we document it well, which is what Salvatore
> >     >> suggested.
> >     >>
> >     >>>> I'm glad you're willing to "boil the ocean" to try and get
> >     >>>> the default changed, but is all this really worth it when all
> >     >>>> you have to do is edit the config file in your deployment?
> >     >>>> That's why the value is there in the first place.
> >     >>>
> >     >>> The default value is basically incompatible with port IP
> >     >>> changes. We shouldn't be shipping defaults that lead to
> >     >>> half-broken functionality. What I'm understanding is that the
> >     >>> current default value is to work around shortcomings in
> >     >>> dnsmasq. This is an example of implementation details leaking
> >     >>> out and leading to bad UX.
> >     >>
> >     >> I think the current default value is also more indicative of
> >     >> something you'd find in your house, or at work - i.e. stable
> >     >> networks.
> >     >>
> >     >> I had another thought on this, Kevin, hoping that we could come
> >     >> to some resolution, because sure, shipping broken functionality
> >     >> isn't great.  But here's the rub - how do we make a change in a
> >     >> fixed IP work in *all* deployments?  Since the end-user can't
> >     >> set this value, they'll run into this problem in my deployment,
> >     >> or any other that has some not-very-short lease time.  One
> >     >> solution is to disallow this operation.  The other is to fix
> >     >> neutron to make this work better (I don't know what that
> >     >> involves, but there's bound to be a way).  Perhaps letting the
> >     >> user set it, but allow the admin to set the valid range for
> >     >> min/max?  And if they don't specify they get the default?
> >     >>
> >     >>> If we had an option to configure how often iptables rules were
> >     >>> refreshed to match their security group, there is no way we
> >     >>> would have a default of 12 hours. This is essentially the same
> >     >>> level of connectivity interruption, it just happens to be a
> >     >>> narrow use case so it hasn't been getting any attention.
> >     >>>
> >     >>> To flip your question around, why do you care if the default
> >     >>> is lower? You already adjust it beyond the 1 day default in
> >     >>> your deployment, so how would a different default impact you?
> >     >>
> >     >> It impacts anyone that hasn't changed from the default since
> >     >> July 2013 and later (Havana), since if they don't notice, they
> >     >> might get bitten by it.
> >     >>
> >     >> -Brian
> >     >>
> >     >>
> >     >>>
> >     >>> 1. http://support.microsoft.com/kb/121005
> >     >>> 2. Similar to using the "secondary" keyword on Cisco devices.
> >     >>>    Or just the "ip addr add" command on linux.
> >     >>>
> >     >>> On Thu, Jan 29, 2015 at 1:34 PM, Brian Haley <brian.haley at hp.com> wrote:
> >     >>>
> >     >>> On 01/29/2015 03:55 AM, Kevin Benton wrote:
> >     >>>>> Why would users want to change an active port's IP address
> >     >>>>> anyway?
> >     >>>>
> >     >>>> Re-addressing. It's not common, but the entire reason I
> >     >>>> brought this up is because a user was moving an instance to
> >     >>>> another subnet on the same network and stranded one of their
> >     >>>> VMs.
> >     >>>>
> >     >>>>> I worry about setting a default config value to handle a
> >     >>>>> very unusual use case.
> >     >>>>
> >     >>>> Changing a static lease is something that works on normal
> >     >>>> networks so I don't think we should break it in Neutron
> >     >>>> without a really good reason.
> >     >>>
> >     >>> How is Neutron breaking this?  If I move a port on my physical
> >     >>> switch to a different subnet, can you still communicate with
> >     >>> the host sitting on it?  Probably not since it has a view of
> >     >>> the world (next-hop router) that no longer exists, and the
> >     >>> network won't route packets for its old IP address to the new
> >     >>> location.  It has to wait for its current DHCP lease to tick
> >     >>> down to the point where it will use broadcast to get a new
> >     >>> one, after which point it will work.
> >     >>>
> >     >>>> Right now, the big reason to keep a high lease time that I
> >     >>>> agree with is that it buys operators lots of dnsmasq downtime
> >     >>>> without affecting running clients. To get the best of both
> >     >>>> worlds we can set DHCP option 58 (a.k.a dhcp-renewal-time or
> >     >>>> T1) to 240 seconds. Then the lease time can be left to be
> >     >>>> something large like 10 days to allow for tons of DHCP server
> >     >>>> downtime without affecting running clients.
> >     >>>>
> >     >>>> There are two issues with this approach. First, some simple
> >     >>>> dhcp clients don't honor that dhcp option (e.g. the one with
> >     >>>> Cirros), but it works with dhclient so it should work on
> >     >>>> CentOS, Fedora, etc (I verified it works on Ubuntu). This
> >     >>>> isn't a big deal because the worst case is what we have
> >     >>>> already (half of the lease time). The second issue is that
> >     >>>> dnsmasq hardcodes that option, so a patch would be required
> >     >>>> to allow it to be specified in the options file. I am happy
> >     >>>> to submit the patch required there so that isn't a big deal
> >     >>>> either.
> >     >>>
> >     >>> Does it work on Windows VMs too?  People run those in clouds
> >     >>> too.  The point is that if we don't know if all the DHCP
> >     >>> clients will support it then it's a non-starter since there's
> >     >>> no way to tell from the server side.
> >     >>>
> >     >>>> If we implement that fix, the remaining issue is Brian's
> >     >>>> other comment about too much DHCP traffic. I've been doing
> >     >>>> some packet captures and the standard request/reply for a
> >     >>>> renewal is 2 unicast packets totaling about 725 bytes.
> >     >>>> Assuming 10,000 VMs renewing every 240 seconds, there will be
> >     >>>> an average of 242 kbps background traffic across the entire
> >     >>>> network. Even at a density of 50 VMs, that's only 1.2 kbps
> >     >>>> per compute node. If that's still too much, then the deployer
> >     >>>> can adjust the value upwards, but that's hardly a reason to
> >     >>>> have a high default.
> >     >>>
> >     >>> "... then the deployer can adjust the value upwards...", hmm,
> >     >>> can they adjust it downwards as well?  :)
> >     >>>
> >     >>>> That just leaves the logging problem. Since we require a
> >     >>>> change to dnsmasq anyway, perhaps we could also request an
> >     >>>> option to suppress logs from renewals? If that's not
> >     >>>> adequate, I think 2 log entries per vm every 240 seconds is
> >     >>>> really only a concern for operators with large clouds and
> >     >>>> they should have the knowledge required to change a config
> >     >>>> file anyway. ;-)
> >     >>>
> >     >>> I'm glad you're willing to "boil the ocean" to try and get the
> >     >>> default changed, but is all this really worth it when all you
> >     >>> have to do is edit the config file in your deployment?  That's
> >     >>> why the value is there in the first place.
> >     >>>
> >     >>> Sorry, I'm still unconvinced we need to do anything more than
> >     >>> document this.
> >     >>>
> >     >>> -Brian



-- 
Kevin Benton