[Openstack-operators] Request for feedback on DHCP IP usage

Jay Pipes jaypipes at gmail.com
Mon Oct 6 17:48:55 UTC 2014

On 10/06/2014 06:11 AM, Mike Kolesnik wrote:
>> On 10/06/2014 04:09 AM, Mike Kolesnik wrote:
>>> Now, I know the 1st solution seems very appealing but thinking of it
>>> further
>>> reveals very serious limitations:
>>> * No HA for DHCP agents is possible (more prone to certain race
>>> conditions).
>> eventually those will just be bugs, and bugs can be fixed
>>> * DHCP IP can't be reached from outside the cloud.
>> that's a feature :)
>>> * You will just see a single port per subnet in Neutron, without
>>> granularity of
>>> the host binding (but perhaps it's not that bad).
>> may be an issue for monitoring: I will have more ports deployed than
>> registered in my db.
>> I don't know if it's *really* an issue, but it still doesn't sound good
>>> * This solution will initially be tied only to the OVS mechanism driver;
>>> every other
>>> driver or 3rd party plugin will have to support it individually in some
>>> way.
>>> So basically my question is - which solution would you prefer as a cloud
>>> op?
>> option 2 is a no go for me, I can't waste that many IPs
>>> Is it that bad to consume more than 1 IP, given that we're talking about
>>> private
>>> isolated networks?
>> not always; all the VMs we deploy in the prod environment have public IPs
>> and speak freely to the internet. No NAT, no LBaaS.
> So basically the DHCP server is also consuming a public IP?

Yes. And, for the record, this is how nova-network in multi-host mode works.

> Also since you're always using the public network, does distributing the
> DHCP agents/servers sound interesting?

Yes, because it spreads out the failure domain. As you point out, with 
multi-host nova-network mode, each compute node has a DHCP server that 
services only the VMs on that particular compute node. This means the 
failure of a single DHCP server doesn't bring down IP assignment 
services across a large swath of the deployment, which is a huge plus 
(and what DVR is aiming for, IIRC).
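To make the operator's "I can't waste that many IPs" objection concrete, here is a rough back-of-the-envelope sketch, assuming illustrative numbers (50 subnets, 200 compute nodes) that are not from the thread. It compares the centralized model (one DHCP port per subnet per agent) with a per-compute-node scheme in the worst case where every subnet is present on every node:

```python
# Hedged sketch: IP consumption of centralized vs. per-host DHCP.
# All deployment sizes below are assumptions for illustration only.

def dhcp_ips_centralized(num_subnets, agents_per_subnet=1):
    """One DHCP port (and IP) per subnet per DHCP agent."""
    return num_subnets * agents_per_subnet

def dhcp_ips_per_host(num_subnets, num_compute_nodes):
    """One DHCP port per subnet on every compute node, worst case:
    every subnet has VMs scheduled on every node (multi-host style)."""
    return num_subnets * num_compute_nodes

# Example deployment: 50 subnets, 200 compute nodes
print(dhcp_ips_centralized(50))     # 50 IPs consumed
print(dhcp_ips_per_host(50, 200))   # 10000 IPs consumed
```

On private isolated networks the per-host cost may be tolerable, but when every address is a routable public IP, as in the operator's deployment, the difference is decisive.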
