<div dir="ltr">Hi Assaf,<div><br></div><div>another update, if I ping the floating ip from my instance it works. If I ping from outside/provider network, from my pc, it doesn't. </div><div><br></div><div>Thanks</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Dec 30, 2014 at 11:35 AM, Pedro Sousa <span dir="ltr"><<a href="mailto:pgsousa@gmail.com" target="_blank">pgsousa@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Assaf,<div><br></div><div>According your instructions I can confirm that I have l2pop disabled. </div><div><br></div><div>Meanwhile, I've made another test, yesterday when I left the office this wasn't working, but when I arrived this morning it was pinging again, and I didn't changed or touched anything. So my interpretation that this has some sort of timeout issue.</div><div><br></div><div>Thanks</div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Dec 30, 2014 at 11:27 AM, Assaf Muller <span dir="ltr"><<a href="mailto:amuller@redhat.com" target="_blank">amuller@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Sorry I can't open zip files on this email. You need l2pop to not exist<br>
in the ML2 mechanism drivers list in neutron.conf where the Neutron server
is, and you need l2population = False in each OVS agent.
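
For example, something like this (a sketch; depending on your distro the
server-side setting may live in ml2_conf.ini rather than neutron.conf):

    # Neutron server, [ml2] section: no l2population in the list
    mechanism_drivers = openvswitch

    # Each node running the OVS agent, [agent] section
    l2population = False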

----- Original Message -----
>
> [Text File:warning1.txt]
>
> Hi Assaf,
>
> I think I disabled it, but maybe you can check my conf files? I've attached
> the zip.
>
> Thanks
>
> On Tue, Dec 30, 2014 at 8:27 AM, Assaf Muller <amuller@redhat.com> wrote:
>
> ----- Original Message -----
> > Hi Britt,
> >
> > Some update on this after running tcpdump:
> >
> > I have the keepalived master running on controller01. If I reboot this
> > server, it fails over to controller02, which now becomes the keepalived
> > master, and then I see ping packets arriving at controller02; this is
> > good.
> >
> > However, when controller01 comes back online, I see that ping requests
> > stop being forwarded to controller02 and start being sent to
> > controller01, which is now in backup state, so it stops working.
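> >
> > (For reference, a sketch of one way to watch the VRRP adverts inside the
> > router namespace; the router ID and HA interface name are placeholders:)
> >
> >     ip netns exec qrouter-<router-id> tcpdump -n -i ha-<port-id> ip proto 112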
> >
>
> If traffic is being forwarded to a backup node, that sounds like L2pop is
> on. Is that true by chance?
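>
> (A quick way to double-check on each node; the config paths are a guess
> for a typical RDO/Juno layout:)
>
>     grep mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini
>     grep l2population /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini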
>
> > Any hint for this?
> >
> > Thanks
> >
> > On Mon, Dec 29, 2014 at 11:06 AM, Pedro Sousa <pgsousa@gmail.com> wrote:
> >
> > Yes,
> >
> > I was using l2pop; I disabled it, but the issue remains.
> >
> > I also stopped the "bogus VRRP" messages by configuring a user/password
> > for keepalived, but when I reboot the servers, I see the keepalived
> > process running on them but I cannot ping the virtual router IP address
> > anymore.
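> >
> > (Presumably this was done via the l3 agent's VRRP options; a sketch of
> > the relevant l3_agent.ini settings, with a made-up password:)
> >
> >     ha_vrrp_auth_type = PASS
> >     ha_vrrp_auth_password = secret123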
> >
> > So I rebooted the node that was running keepalived as master, and it
> > started pinging again, but when that node came back online, everything
> > stopped working. Has anyone experienced this?
> >
> > Thanks
> >
> > On Tue, Dec 23, 2014 at 5:03 PM, David Martin <dmartls1@gmail.com> wrote:
> >
> > Are you using l2pop? Until https://bugs.launchpad.net/neutron/+bug/1365476
> > is fixed, it's pretty broken.
> >
> > On Tue, Dec 23, 2014 at 10:48 AM, Britt Houser (bhouser)
> > <bhouser@cisco.com> wrote:
> >
> > Unfortunately I've not had a chance yet to play with Neutron router HA,
> > so no hints from me. =( Can you give a few more details about "it stops
> > working"? I.e., do you see packets dropped while controller01 is down?
> > Do packets begin flowing before controller01 comes back online? Does
> > controller01 come back online successfully? Do packets begin to flow
> > after controller01 comes back online? Perhaps that will help.
> >
> > Thx,
> > britt
> >
> > From: Pedro Sousa <pgsousa@gmail.com>
> > Date: Tuesday, December 23, 2014 at 11:14 AM
> > To: Britt Houser <bhouser@cisco.com>
> > Cc: "OpenStack-operators@lists.openstack.org"
> > <OpenStack-operators@lists.openstack.org>
> > Subject: Re: [Openstack-operators] Neutron DVR HA
> >
> > I understand, Britt, thanks.
> >
> > So I disabled DVR and tried to test L3 HA, but it's not working
> > properly; it seems to be a keepalived issue. I see that it's running on
> > 3 nodes:
> >
> > [root@controller01 keepalived]# neutron l3-agent-list-hosting-router harouter
> > +--------------------------------------+--------------+----------------+-------+
> > | id                                   | host         | admin_state_up | alive |
> > +--------------------------------------+--------------+----------------+-------+
> > | 09cfad44-2bb2-4683-a803-ed70f3a46a6a | controller01 | True           | :-)   |
> > | 58ff7c42-7e71-4750-9f05-61ad5fbc5776 | compute03    | True           | :-)   |
> > | 8d778c6a-94df-40b7-a2d6-120668e699ca | compute02    | True           | :-)   |
> > +--------------------------------------+--------------+----------------+-------+
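> >
> > (As a sanity check: the VRRP master should be the only node where the
> > router's real IPs appear in the qrouter namespace; <router-id> is a
> > placeholder:)
> >
> >     ip netns exec qrouter-<router-id> ip addr show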
> >
> > However, if I reboot one of the l3-agent nodes it stops working. I see
> > this in the logs:
> >
> > Dec 23 16:12:28 Compute02 Keepalived_vrrp[18928]: ip address associated with VRID not present in received packet : 172.16.28.20
> > Dec 23 16:12:28 Compute02 Keepalived_vrrp[18928]: one or more VIP associated with VRID mismatch actual MASTER advert
> > Dec 23 16:12:28 Compute02 Keepalived_vrrp[18928]: bogus VRRP packet received on ha-a509de81-1c !!!
> > Dec 23 16:12:28 Compute02 Keepalived_vrrp[18928]: VRRP_Instance(VR_1) ignoring received advertisment...
> >
> > Dec 23 16:13:10 Compute03 Keepalived_vrrp[12501]: VRRP_Instance(VR_1) ignoring received advertisment...
> > Dec 23 16:13:12 Compute03 Keepalived_vrrp[12501]: ip address associated with VRID not present in received packet : 172.16.28.20
> > Dec 23 16:13:12 Compute03 Keepalived_vrrp[12501]: one or more VIP associated with VRID mismatch actual MASTER advert
> > Dec 23 16:13:12 Compute03 Keepalived_vrrp[12501]: bogus VRRP packet received on ha-d5718741-ef !!!
> > Dec 23 16:13:12 Compute03 Keepalived_vrrp[12501]: VRRP_Instance(VR_1) ignoring received advertisment...
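> >
> > (To compare the VRID and VIP configuration across the nodes, the
> > agent-generated keepalived config can be inspected on each one; the path
> > assumes the default neutron state directory:)
> >
> >     cat /var/lib/neutron/ha_confs/<router-id>/keepalived.conf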
> >
> > Any hint?
> >
> > Thanks
> >
> > On Tue, Dec 23, 2014 at 3:17 PM, Britt Houser (bhouser)
> > <bhouser@cisco.com> wrote:
> >
> > Currently, HA and DVR are mutually exclusive features.
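> >
> > (So a router has to be created as one or the other, e.g. with the
> > Juno-era CLI; both flags are admin-only and the router names are
> > examples:)
> >
> >     neutron router-create ha-router --ha True
> >     neutron router-create dvr-router --distributed True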
> >
> > From: Pedro Sousa <pgsousa@gmail.com>
> > Date: Tuesday, December 23, 2014 at 9:42 AM
> > To: "OpenStack-operators@lists.openstack.org"
> > <OpenStack-operators@lists.openstack.org>
> > Subject: [Openstack-operators] Neutron DVR HA
> >
> > Hi all,
> >
> > I've been trying Neutron DVR with 2 controllers + 2 computes. When I
> > create a router I can see that it is running on all the servers:
> >
> > [root@controller01 ~]# neutron l3-agent-list-hosting-router router
> > +--------------------------------------+--------------+----------------+-------+
> > | id                                   | host         | admin_state_up | alive |
> > +--------------------------------------+--------------+----------------+-------+
> > | 09cfad44-2bb2-4683-a803-ed70f3a46a6a | controller01 | True           | :-)   |
> > | 0ca01d56-b6dd-483d-9c49-cc7209da2a5a | controller02 | True           | :-)   |
> > | 52379f0f-9046-4b73-9d87-bab7f96be5e7 | compute01    | True           | :-)   |
> > | 8d778c6a-94df-40b7-a2d6-120668e699ca | compute02    | True           | :-)   |
> > +--------------------------------------+--------------+----------------+-------+
> >
> > However, if the controller01 server dies I cannot ping the external
> > gateway IP anymore. Is this the expected behavior? Shouldn't it fail
> > over to the other controller node?
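> >
> > (With DVR, north-south SNAT traffic is centralized in a snat- namespace
> > on a single node, so losing that node matters; a way to see which node
> > hosts it, assuming default namespace naming:)
> >
> >     ip netns | grep snat-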
> >
> > Thanks
> >