[openstack-dev] [neutron][lbaas] Trying to set up LBaaS V2 on Juno with DVR
Al Miller
ajmiller at ajmiller.net
Fri Jan 23 18:01:30 UTC 2015
I have been trying to set up LBaaS v2 in a Juno-based environment.
I have successfully done this in devstack by setting it up based on stable/juno, then grabbing https://review.openstack.org/#/c/123491/ and the client from https://review.openstack.org/#/c/111475/, and then editing neutron.conf to add the neutron.services.loadbalancer.plugin.LoadBalancerPluginv2 service plugin and the provider service_provider=LOADBALANCERV2:Haproxy:neutron.services.loadbalancer.drivers.haproxy.synchronous_namespace_driver.HaproxyNSDriver:default. I have also enabled DVR.
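For reference, the neutron.conf edits look roughly like this (the service_plugins list here is just an example; keep whatever plugins your setup already loads and append the v2 plugin):

    [DEFAULT]
    # keep the plugins you already load and append the LBaaS v2 plugin
    service_plugins = router,neutron.services.loadbalancer.plugin.LoadBalancerPluginv2

    [service_providers]
    service_provider = LOADBALANCERV2:Haproxy:neutron.services.loadbalancer.drivers.haproxy.synchronous_namespace_driver.HaproxyNSDriver:default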
With this setup in devstack, I can use the LBaaS V2 CLI commands to set up a working V2 loadbalancer.
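The sequence I use is roughly the following (names, the subnet, and the member address are placeholders, and the exact flags come from the in-review client, so they may differ slightly):

    neutron lbaas-loadbalancer-create --name lb1 private-subnet
    neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP --protocol-port 80 --name listener1
    neutron lbaas-pool-create --listener listener1 --protocol HTTP --lb-algorithm ROUND_ROBIN --name pool1
    neutron lbaas-member-create --subnet private-subnet --address 10.0.0.5 --protocol-port 80 pool1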
The problem comes when I try to do the same thing in a full OpenStack installation. I have set up a three-node installation based on Ubuntu 14.04 following the procedure in http://docs.openstack.org/juno/install-guide/install/apt/openstack-install-guide-apt-juno.pdf. I have a controller node for the API services, a network node, and a compute node. I can boot instances and create V1 loadbalancers.
Bringing the LBaaS V2 code into this environment is more complex. I need to add it to the neutron API server on the controller, but also to the compute node (the goal here is to test it with DVR). So on the compute node I install the neutron-lbaas-agent package, bring in the 123491 patch, and make the neutron.conf edits (sketched after the trace below). In this configuration, the lbaas agent fails with an RPC timeout:
2015-01-22 16:10:52.712 14795 ERROR neutron.services.loadbalancer.agent.agent_manager [-] Unable to retrieve ready devices
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager Traceback (most recent call last):
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/agent/agent_manager.py", line 148, in sync_state
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager ready_instances = set(self.plugin_rpc.get_ready_devices())
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/agent/agent_api.py", line 38, in get_ready_devices
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager self.make_msg('get_ready_devices', host=self.host)
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/neutron/common/log.py", line 36, in wrapper
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager return method(*args, **kwargs)
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 175, in call
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager context, msg, rpc_method='call', **kwargs)
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 201, in __call_rpc_method
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager return func(context, msg['method'], **msg['args'])
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 389, in call
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager return self.prepare().call(ctxt, method, **kwargs)
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 152, in call
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager retry=self.retry)
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in _send
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager timeout=timeout, retry=retry)
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 408, in send
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager retry=retry)
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 397, in _send
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager result = self._waiter.wait(msg_id, timeout)
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 285, in wait
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager reply, ending = self._poll_connection(msg_id, timeout)
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 235, in _poll_connection
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager % msg_id)
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager MessagingTimeout: Timed out waiting for a reply to message ID e928ae87e89e442790b84f053a75f58f
2015-01-22 16:10:52.712 14795 TRACE neutron.services.loadbalancer.agent.agent_manager
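For reference, the compute-node setup is roughly this (a sketch, with package and service names as on my Ubuntu 14.04 / Juno install):

    apt-get install neutron-lbaas-agent
    # apply https://review.openstack.org/#/c/123491/ on top of
    # /usr/lib/python2.7/dist-packages/neutron
    # add the same service_plugins / service_provider lines as above to neutron.conf
    service neutron-lbaas-agent restart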
Digging into this, I see that it is running the v1 namespace_driver (not synchronous_namespace_driver), so I edited the lbaas-agent.conf to load synchronous_namespace_driver instead. When I do that, the driver fails to load because its __init__() takes two arguments but is called with three.
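Concretely, the agent config change was roughly this (file and section names as on my install; the commented-out line is what I believe the stock v1 setting to be):

    [DEFAULT]
    # original v1 driver:
    # device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
    # what I switched it to:
    device_driver = neutron.services.loadbalancer.drivers.haproxy.synchronous_namespace_driver.HaproxyNSDriver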
I'm clearly missing something here. Does anyone have any suggestions? I would appreciate any advice, and can provide more details as needed.
Thanks,
Al