[openstack-dev] [NOVA][Neutron][ML2][Tunnel] Error in nova-agent when launching VM in compute for Tunnel cases
Padmanabhan Krishnan
kprad1 at yahoo.com
Sat Apr 12 00:59:07 UTC 2014
Hello,
I have two OpenStack nodes (one combined controller+compute, and one compute-only). VMs launch fine on the node that also acts as the controller, but VMs scheduled on the compute-only node go to the error state. I am running the Icehouse master version, and my ML2 type driver is GRE (VXLAN shows the same error). I used devstack for my installation. If I switch from tunnel mode to VLAN, I don't see this error and VMs launch fine on the compute node as well.
My devstack configuration on the controller/compute node is:
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=gre
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGE=32000:33000
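On the compute-only node, the localrc additionally has to point at the controller's services. A rough sketch of what that looks like is below; the IP addresses and the exact ENABLED_SERVICES list are illustrative placeholders, not my actual values:

# Hypothetical compute-node localrc for a multi-node devstack setup.
# 192.168.1.10 and 192.168.1.11 are example addresses only.
HOST_IP=192.168.1.11                 # this compute node's management IP
SERVICE_HOST=192.168.1.10            # the controller's IP
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST                 # neutron-server runs on the controller
ENABLED_SERVICES=n-cpu,q-agt         # only nova-compute and the OVS agent here
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=gre
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGE=32000:33000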
The error I see in the nova screen on the compute node is:
2014-04-11 09:38:57.042 ERROR nova.compute.manager [-] [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] An error occurred while refreshing the network cache.
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] Traceback (most recent call last):
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/nova/nova/compute/manager.py", line 4871, in _heal_instance_info_cache
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] self._get_instance_nw_info(context, instance, use_slave=True)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/nova/nova/compute/manager.py", line 1129, in _get_instance_nw_info
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] instance)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/nova/nova/network/api.py", line 48, in wrapper
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] res = f(self, context, *args, **kwargs)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/nova/nova/network/neutronv2/api.py", line 465, in get_instance_nw_info
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] port_ids)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/nova/nova/network/neutronv2/api.py", line 474, in _get_instance_nw_info
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] port_ids)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/nova/nova/network/neutronv2/api.py", line 1106, in _build_network_info_model
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] data = client.list_ports(**search_opts)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 108, in with_params
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] ret = self.function(instance, *args, **kwargs)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 310, in list_ports
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] **_params)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 1302, in list
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] for r in self._pagination(collection, path, **params):
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 1315, in _pagination
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] res = self.get(path, params=params)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 1288, in get
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] headers=headers, params=params)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] File "/opt/stack/python-neutronclient/neutronclient/v2_0/client.py", line 1280, in retry_request
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] raise exceptions.ConnectionFailed(reason=_("Maximum attempts reached"))
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516] ConnectionFailed: Connection to neutron failed: Maximum attempts reached
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: b12490cc-a237-47e1-b2df-fe69d4a5e516]
I did check that the URL seems to be right: the IP address of the controller is specified correctly, and the compute node can ping the controller.
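In case it helps, this is roughly the kind of check I mean (9696 is neutron-server's default port; the controller IP below is just a placeholder):

# Example connectivity check from the compute node.
# 192.168.1.10 stands in for the actual controller IP.
curl -v http://192.168.1.10:9696/
# Also confirm the endpoint nova is actually configured to use
# (neutron_url is the Icehouse-era option name in nova.conf):
grep neutron_url /etc/nova/nova.conf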
Any tips/suggestions on what could be wrong?
Thanks,
Paddu