[openstack-dev] [neutron] HA of dhcp agents?

Armando M. armamig at gmail.com
Wed Oct 22 14:00:24 UTC 2014


Hi Noel,

On 22 October 2014 01:57, Noel Burton-Krahn <noel at pistoncloud.com> wrote:

> Hi Armando,
>
> Sort of... but what happens when the second one dies?
>

You mean, you lost both (all) agents? In this case, yes you'd need to
resurrect the agents or move the networks to another available agent.
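
For reference, moving a network between agents by hand looks roughly like
this (the agent IDs are placeholders; pick a live agent from agent-list):

  neutron agent-list
  neutron dhcp-agent-network-remove <dead-agent-id> <network>
  neutron dhcp-agent-network-add <live-agent-id> <network>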


> If one DHCP agent dies, I need to be able to start a new DHCP agent on
> another host and take over from it.  As far as I can tell right now, when
> one DHCP agent dies, another doesn't take up the slack.
>

I am not sure I fully understand the failure mode you are trying to
address. The DHCP agents can work in an active-active configuration, so if
you have N agents assigned per network, all of them should be able to
serve DHCP traffic. If this is not your experience, i.e. one agent dies
and DHCP is no longer served on the network by any other agent, then there
might be some other problem going on.
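
As a sanity check, you can confirm that more than one live agent is
actually hosting the network, e.g.:

  neutron agent-list
  neutron dhcp-agent-list-hosting-net <network>

All the agents hosting the network should show up as alive; if only one
does, the scheduler never bound the others in the first place.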


>
>
> I have the same problem with L3 agents, by the way; that's next on my list.
>
> --
> Noel
>
>
> On Tue, Oct 21, 2014 at 12:52 PM, Armando M. <armamig at gmail.com> wrote:
>
>> As far as I can tell when you specify:
>>
>> dhcp_agents_per_network = X > 1
>>
>> The server binds the network to all the agents (up to X), which means
>> that you have multiple instances of dnsmasq serving DHCP requests at the
>> same time. If one agent dies, there is no fail-over needed per se, as the
>> other agents will continue to serve DHCP requests unaffected.
>>
>> For instance, in my env I have dhcp_agents_per_network=2, so if I create
>> a network and list the agents serving it, I will see the following:
>>
>> neutron dhcp-agent-list-hosting-net test
>>
>> +--------------------------------------+--------+----------------+-------+
>> | id                                   | host   | admin_state_up | alive |
>> +--------------------------------------+--------+----------------+-------+
>> | 6dd09649-5e24-403b-9654-7aa0f69f04fb | host1  | True           | :-)   |
>> | 7d47721a-2725-45f8-b7c4-2731cfabdb48 | host2  | True           | :-)   |
>> +--------------------------------------+--------+----------------+-------+
>>
>> Isn't that what you're after?
>>
>> Cheers,
>> Armando
>>
>> On 21 October 2014 22:26, Noel Burton-Krahn <noel at pistoncloud.com> wrote:
>>
>>> We currently have a mechanism for restarting the DHCP agent on another
>>> node, but we'd like the new agent to take over all the networks of the
>>> failed DHCP instance.  Right now it doesn't: DHCP agents are
>>> distinguished by host, the host has to match that of the OVS agent, and
>>> the OVS agent's host has to be unique per node, so the new DHCP agent is
>>> registered as a completely new agent and doesn't take over the failed
>>> agent's networks.  I'm looking for a way to give the new agent the same
>>> role as the previous one.
>>>
>>> --
>>> Noel
>>>
>>>
>>> On Tue, Oct 21, 2014 at 12:12 AM, Kevin Benton <blak111 at gmail.com>
>>> wrote:
>>>
>>>> No, unfortunately when the DHCP agent dies there isn't automatic
>>>> rescheduling at the moment.
>>>>
>>>> On Mon, Oct 20, 2014 at 11:56 PM, Noel Burton-Krahn <
>>>> noel at pistoncloud.com> wrote:
>>>>
>>>>> Thanks for the pointer!
>>>>>
>>>>> I like how the first google hit for this is:
>>>>>
>>>>> Add details on dhcp_agents_per_network option for DHCP agent HA
>>>>> https://bugs.launchpad.net/openstack-manuals/+bug/1370934
>>>>>
>>>>> :) Seems reasonable to set dhcp_agents_per_network > 1.  What happens
>>>>> when a DHCP agent dies?  Does the scheduler automatically bind another
>>>>> agent to that network?
>>>>>
>>>>> Cheers,
>>>>> --
>>>>> Noel
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Oct 20, 2014 at 9:03 PM, Jian Wen <wenjianhn at gmail.com> wrote:
>>>>>
>>>>>> See dhcp_agents_per_network in neutron.conf.
>>>>>>
>>>>>> https://bugs.launchpad.net/neutron/+bug/1174132
>>>>>>
>>>>>> 2014-10-21 6:47 GMT+08:00 Noel Burton-Krahn <noel at pistoncloud.com>:
>>>>>>
>>>>>>> I've been working on failover for dhcp and L3 agents.  I see that in
>>>>>>> [1], multiple dhcp agents can host the same network.  However, it looks
>>>>>>> like I have to manually assign networks to multiple dhcp agents, which
>>>>>>> won't work.  Shouldn't multiple dhcp agents automatically fail over?
>>>>>>>
>>>>>>> [1]
>>>>>>> http://docs.openstack.org/trunk/config-reference/content/multi_agent_demo_configuration.html
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> OpenStack-dev mailing list
>>>>>>> OpenStack-dev at lists.openstack.org
>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best,
>>>>>>
>>>>>> Jian
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Kevin Benton
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>