<html><head><meta http-equiv="content-type" content="text/html; charset=us-ascii"><style>body { line-height: 1.5; }body { font-size: 10.5pt; color: rgb(0, 0, 0); line-height: 1.5; }</style></head><body>
<div><span></span>Hi:</div><div>The high-availability guide (<a id="yui_3_10_3_1_1400063442661_1817" rel="nofollow" href="http://docs.openstack.org/high-availability-guide/content/ch-network.html" style="font-size: 10.5pt; line-height: 1.5; background-color: window;">http://docs.openstack.org/high-availability-guide/content/ch-network.html</a>) says that both nodes should have the same hostname, since the Networking scheduler will be aware of only one node, for example a virtual router attached to a single L3 node.</div><div><br></div><div><span style="font-size: 10.5pt; line-height: 1.5; background-color: window;">But when I tested this on two servers with the same hostname, after installing the corosync and pacemaker services on them (with no resources configured), the crm_mon output went into an endless loop, and the corosync log filled with messages like: "May 09 22:25:40 [2149] TEST crmd: warning: crm_get_peer: Node 'TEST' and 'TEST' share the same cluster nodeid: 1678901258". After this I set a different nodeid in the /etc/corosync/corosync.conf of each test node, but it didn't help.</span></div>
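<div>To be concrete, the nodeid change I tried lives in the nodelist section of corosync.conf; a minimal fragment looks like this (the ring0_addr values are example addresses, not my real ones):</div><pre>
nodelist {
    node {
        ring0_addr: 192.168.0.1   # first test node (example address)
        nodeid: 1                 # distinct nodeid per node
    }
    node {
        ring0_addr: 192.168.0.2   # second test node (example address)
        nodeid: 2
    }
}
</pre><div>Even with distinct nodeid values like these, the duplicate-hostname warnings continued.</div>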
So I set a different hostname for each server, and then configured
pacemaker exactly as the manual describes except for the hostname. The
neutron-dhcp-agent and neutron-metadata-agent work well, but the
neutron-l3-agent does not (VM instances can't access the external
network; furthermore, the gateway of the VM instances can't be reached
either).<br>
After two days of checking, I finally found that we can use "neutron l3-agent-router-remove
network1_l3_agentid external-routeid" and "neutron l3-agent-router-add
network2_l3_agentid external-routeid" to let the backup l3-agent take
over when the former network node is down (assume the two nodes' names
are network1 and network2). Alternatively, we can update the MySQL table routerl3agentbindings in the neutron database directly. If it makes sense, I think we could change the script neutron-agent-l3: in its neutron_l3_agent_start() function, only a few lines are needed to make this work.<div><br></div><hr style="width: 210px; height: 1px;" color="#b5c4df" size="1" align="left">
<div><span>Walter Xu</span></div>
</body></html>