[openstack-dev] [Quantum] DHCP agent and LBaaS

Gary Kotton gkotton at redhat.com
Mon Nov 26 20:38:04 UTC 2012


On 11/26/2012 09:20 PM, Mark McClain wrote:
> Sorry I realized that my reply did not go back to the list.
>
> The DHCP protocol is designed for active/active setups, so we don't 
> need to front it with a load balancer.  The protocol specifies how 
> clients should behave when servers go offline and lease renewals 
> cannot be completed.  You can get HA right now by starting more than 
> one DHCP agent instance on other hosts.
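The client-side failover behavior referred to above comes from the DHCP lease timers: at T1 the client unicasts a renewal to the server that granted the lease, and if that server is gone, at T2 it broadcasts a rebind that any server on the network may answer. A minimal illustration of the timer values (per RFC 2131; this is a sketch, not Quantum code):

```python
# Sketch of the DHCP client-side failover behavior described above.
# Timer fractions are per RFC 2131; illustration only, not Quantum code.

def renewal_timers(lease_seconds):
    """Return the T1 (renew) and T2 (rebind) timers for a lease.

    At T1 the client unicasts a renewal to the server that granted the
    lease; if that server is down, at T2 it broadcasts a rebind that ANY
    DHCP server on the network may answer - which is why several active
    agents give HA even without a load balancer in front.
    """
    t1 = 0.5 * lease_seconds      # renew: unicast to the original server
    t2 = 0.875 * lease_seconds    # rebind: broadcast, any server answers
    return t1, t2

print(renewal_timers(86400))  # one-day lease -> (43200.0, 75600.0)
```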

If I understand correctly, the IP address of the DHCP server is passed by 
Nova to the VM. Which IP address will this be? If a load balancer is 
used the address can be the same - that is, a virtual IP.


>
> mark
>
> On Nov 26, 2012, at 2:21 AM, Gary Kotton <gkotton at redhat.com 
> <mailto:gkotton at redhat.com>> wrote:
>
>> On 11/26/2012 06:45 AM, Vinay Bannai wrote:
>>> I would agree that having an active/standby setup for the DHCP 
>>> agent makes a lot of sense. We might want to leverage the VRRP 
>>> infrastructure for that.
>>> I am not sure I understand clearly the need to have the DHCP agents 
>>> sit behind the load balancers. What are we trying to load balance 
>>> here? DHCP traffic is intermittent and transient, to say the least, 
>>> with a heavy bias towards more traffic at the time of a VM booting up.
>>
>> At the moment there are a number of problems with the DHCP agents:
>>     - single point of failure
>>     - it does not scale
>>
>> A simple way to address the above is to make use of a standard 
>> load balancer (as depicted in the diagram below). This enables us to 
>> scale and to have HA for the DHCP agents. I really like the solution: 
>> it addresses a number of problems and concerns about the DHCP agents.
>>
>>>
>>> If we were to truly load balance we would need to keep the state of 
>>> the DHCP servers in sync (dynamically) as they would be allocating 
>>> from a common pool of resources. That might not be a problem that we 
>>> would want to inherit.
>>
>> Yes, a load balancer maintaining a persistent entry will ensure that 
>> the leasing works correctly. In the event that a DHCP agent 
>> terminates (maintenance, network issues, exception, etc.) the load 
>> balancer will select another active DHCP agent. The advantage here is 
>> that in the current implementation all of the DHCP agents have the 
>> relevant host information - i.e. the routes, IP address and MAC address.
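The persistence and failover behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not the LBaaS implementation; the class and the choice of client MAC as the persistence key are assumptions:

```python
# Hypothetical sketch of load-balancer persistence for DHCP agents.
# Names and the persistence key (client MAC) are illustrative only;
# this is not the Quantum LBaaS code.

class DhcpBalancer:
    def __init__(self, agents):
        self.agents = list(agents)   # healthy DHCP agent members
        self.sticky = {}             # client MAC -> pinned agent

    def select(self, client_mac):
        agent = self.sticky.get(client_mac)
        if agent not in self.agents:  # first request, or pinned agent failed
            agent = self.agents[hash(client_mac) % len(self.agents)]
            self.sticky[client_mac] = agent
        return agent

    def agent_down(self, agent):
        # Health-check failure: drop the member; clients pinned to it
        # are transparently re-balanced on their next request.
        self.agents.remove(agent)

lb = DhcpBalancer(["dhcps1", "dhcps2"])
first = lb.select("fa:16:3e:00:00:01")
lb.agent_down(first)                    # simulate agent termination
second = lb.select("fa:16:3e:00:00:01") # re-pinned to the surviving agent
assert second != first
```

Because all agents hold the same host information (routes, IP and MAC mappings), any surviving member can take over a client without state synchronization.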
>>
>>>
>>> On the other hand, your suggestion to use VRRP would be a great idea 
>>> for those use cases where the L3 agent and the DHCP agent would be 
>>> co-located. The problem of keeping the state in sync would still 
>>> have to be dealt with but is not as severe as the load balancing case.
>>
>> VRRP is a way of providing high availability. All off-the-shelf 
>> load balancers today support it. Some may have their own 
>> proprietary ways of performing HA. This will ensure that the load 
>> balancer is not a single point of failure. Originally I was in favor 
>> of implementing VRRP on the L3 agents, but now that LBaaS is 
>> starting to crystallize this is a far better solution for the 
>> infrastructure and OpenStack as a whole.
>>
>>>
>>> Just my thoughts.
>>> Vinay
>>>
>>> On Sun, Nov 25, 2012 at 5:56 AM, Gary Kotton <gkotton at redhat.com 
>>> <mailto:gkotton at redhat.com>> wrote:
>>>
>>>     Hi,
>>>     There were two ideas discussed at the summit the first is the
>>>     LBaaS and the second was improvements for the DHCP agent
>>>     (multinode). I think that we can leverage the LBaaS to support a
>>>     highly available and robust Quantum DHCP service.
>>>
>>>     This can be achieved as follows:
>>>
>>>     1. For each network that supports a DHCP service there will be a
>>>     VIP for the DHCP address (this will also have the relevant
>>>     health checks etc.)
>>>     2. Each DHCP running agent will be registered as a member (I
>>>     hope that I have the terminology correct here). Basically vip =
>>>     {dhcps1, dhcps2, ...}
>>>     3. All of the DHCP requests and lease updates will be sent via
>>>     the VIP for the DHCP. The load balancer will select a DHCP server
>>>     if this is the first time a request from the client has been
>>>     made, or it will forward to an existing server entry.
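The three steps quoted above could be sketched with LBaaS-style resources, one set per network. The exact Quantum LBaaS API is still taking shape, so the dicts and field names below are assumptions standing in for real API objects:

```python
# Rough sketch of steps 1-3 as LBaaS-style resources, one set per
# tenant network.  Plain dicts stand in for API objects; the field
# names are assumptions, not the final Quantum LBaaS API.

def dhcp_lb_resources(network_id, vip_address, agent_addresses):
    pool = {
        "network_id": network_id,
        "protocol": "UDP",                     # DHCP runs over UDP/67
        "members": [{"address": a, "port": 67} for a in agent_addresses],
        "health_monitor": {"type": "PING", "interval": 5},  # step 1
    }
    vip = {
        "address": vip_address,                # the address Nova hands to VMs
        "pool": pool,                          # step 2: vip = {dhcps1, dhcps2, ...}
        "session_persistence": "SOURCE_ADDR",  # step 3: sticky per client
    }
    return vip

vip = dhcp_lb_resources("net-1", "10.0.0.2", ["10.0.0.3", "10.0.0.4"])
print(len(vip["pool"]["members"]))  # -> 2
```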
>>>
>>>     Please see the diagram below. This will enable a cluster of
>>>     hosts on the same network tenant to get a highly available DHCP
>>>     service - the DHCP server IP is the virtual IP (it is ideal to
>>>     have an active/backup load-balancing pair to ensure HA - this
>>>     could either be via VRRP or some proprietary method that any of
>>>     the vendors support). My thinking is that if we can use this for
>>>     the first LBaaS integration example then we are certainly moving
>>>     in the right direction and we have killed two birds with one stone.
>>>
>>>     In the example below there will be 2 DHCP agents. The traffic
>>>     will be load balanced by the active load balancer (in an
>>>     active/backup configuration the persistent sessions will be
>>>     maintained :)).
>>>
>>>     A few minor changes may be required when Nova receives the DHCP
>>>     address - we should return the VIP address.
>>>
>>>     <Mail Attachment.png>
>>>
>>>     Ideas, comments or thoughts?
>>>
>>>     Thanks
>>>     Gary
>>>
>>>
>>>     _______________________________________________
>>>     OpenStack-dev mailing list
>>>     OpenStack-dev at lists.openstack.org
>>>     <mailto:OpenStack-dev at lists.openstack.org>
>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> -- 
>>> Vinay Bannai
>>> Email: vbannai at gmail.com <mailto:vbannai at gmail.com>
>>> Google Voice: 415 938 7576
>>>
>>
>

