[openstack-dev] [Neutron][neutron-lbaas][octavia] Not be able to ping loadbalancer ip

Wanjing Xu (waxu) waxu at cisco.com
Thu Nov 3 18:29:53 UTC 2016


Going through the logs, I saw the following error in o-hm:

2016-11-03 03:31:06.441 19560 ERROR octavia.controller.worker.controller_worker     request_ids=request_ids)
2016-11-03 03:31:06.441 19560 ERROR octavia.controller.worker.controller_worker BadRequest: Unrecognized attribute(s) 'dns_name'
2016-11-03 03:31:06.441 19560 ERROR octavia.controller.worker.controller_worker Neutron server returns request_ids: ['req-1daed46e-ce79-471c-a0af-6d86d191eeb2']

It seems that I need to upgrade my Neutron client. While I plan to do that, could somebody please point me to documentation on how this VIP is supposed to be plugged into the amphora VM, and on how failover works?
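For anyone else comparing notes: my understanding (which may be wrong) is that the `dns_name` port attribute is only accepted when the Neutron server loads its DNS integration extension, so it is worth checking both the client version and what the server advertises. A quick sketch, where the `dns-integration` alias is my assumption:

```shell
# Which python-neutronclient version is Octavia actually using?
pip show python-neutronclient | grep -i version

# Does the Neutron server advertise DNS integration?
# (the "dns-integration" extension alias is an assumption on my part;
# inspect the full list if this grep comes back empty)
neutron ext-list -c alias | grep -i dns
```

If the extension is missing server-side, the BadRequest above would make sense regardless of the client version.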

Thanks!
Wanjing


From: Cisco Employee <waxu at cisco.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Wednesday, November 2, 2016 at 7:04 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Neutron][neutron-lbaas][octavia] Not be able to ping loadbalancer ip

So I brought up Octavia using devstack (stable/mitaka). I created a load balancer and a listener (no members yet) and started looking at how things are connected to each other. I can ssh into the amphora VM, and I do see an haproxy running with a frontend bound to my listener. I tried to ping the load balancer IP from the DHCP namespace, but the ping could not get through. I am wondering how a packet is supposed to reach this amphora VM. I can see that the VM is launched on both networks (lb-mgmt-net and my vipnet), but I don't see any NIC associated with my vipnet:

ubuntu at amphora-dad2f14e-76b4-4bd8-9051-b7a5627c6699:~$ ifconfig -a
eth0      Link encap:Ethernet  HWaddr fa:16:3e:b4:b2:45
          inet addr:192.168.0.4  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:feb4:b245/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2496 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2626 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:307518 (307.5 KB)  TX bytes:304447 (304.4 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:212 errors:0 dropped:0 overruns:0 frame:0
          TX packets:212 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:18248 (18.2 KB)  TX bytes:18248 (18.2 KB)
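(A note for anyone hitting the same confusion: I believe the amphora agent plugs the VIP interface into a separate network namespace inside the amphora rather than the default one, which would explain why `ifconfig -a` shows only eth0. A sketch of how to look for it, where the namespace name `amphora-haproxy` is what I would expect from the Octavia defaults:)

```shell
# List network namespaces inside the amphora; the VIP NIC should live
# in one of them, not in the default namespace shown by "ifconfig -a".
sudo ip netns list

# Show interfaces and addresses inside the haproxy namespace
# ("amphora-haproxy" is assumed from Octavia's defaults)
sudo ip netns exec amphora-haproxy ip addr show
```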

localadmin at dmz-eth2-ucs1:~/devstack$ nova list
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks                                      |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------+
| 557a3de3-a32e-419d-bdf5-41d92dd2333b | amphora-dad2f14e-76b4-4bd8-9051-b7a5627c6699 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.4; vipnet=100.100.100.4 |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------+

It also seems that the amphora created a port on the vipnet for its vrrp_ip, but I am not sure how it is used or how it helps packets reach the load balancer IP.
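(My working guess, which may be wrong, is that the vrrp_ip is the address actually configured on the amphora's vipnet interface, while the VIP itself rides on that port as an allowed address pair that haproxy binds to. The ports can be inspected from the controller side like this; `<vrrp-port-id>` below is a placeholder:)

```shell
# Find the amphora's port on the VIP network
# (grepping for my vipnet's 100.100.100.x range)
neutron port-list | grep 100.100.100

# The VIP address should appear under allowed_address_pairs
# on the vrrp port ("<vrrp-port-id>" is a placeholder)
neutron port-show <vrrp-port-id> -c fixed_ips -c allowed_address_pairs
```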

It would be great if somebody could help with this, especially on the networking side.

Thanks
Wanjing
