[Openstack] [Neutron] Openvswitch agent not alive

Vikram Choudhary vikschw at gmail.com
Fri May 6 06:30:55 UTC 2016


Widening audience!

On Thu, May 5, 2016 at 9:03 PM, Silvia Fichera <fichera.sil at gmail.com>
wrote:

> There is an error in q-meta:
>
> 2016-05-05 17:12:52.594 ERROR neutron.agent.metadata.agent [-] Failed
> reporting state!
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent Traceback (most
> recent call last):
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent   File
> "/opt/stack/neutron/neutron/agent/metadata/agent.py", line 314, in
> _report_state
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent
> use_call=self.agent_state.get('start_flag'))
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent   File
> "/opt/stack/neutron/neutron/agent/rpc.py", line 86, in report_state
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent     return
> method(context, 'report_state', **kwargs)
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent   File
> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line
> 158, in call
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent
> retry=self.retry)
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent   File
> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line
> 90, in _send
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent
> timeout=timeout, retry=retry)
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent   File
> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> line 431, in send
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent     retry=retry)
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent   File
> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> line 420, in _send
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent     result =
> self._waiter.wait(msg_id, timeout)
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent   File
> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> line 318, in wait
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent     message =
> self.waiters.get(msg_id, timeout=timeout)
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent   File
> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> line 223, in get
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent     'to message
> ID %s' % msg_id)
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent
> MessagingTimeout: Timed out waiting for a reply to message ID
> b71f29ac5bca496c9dd02c8979bbe556
> 2016-05-05 17:12:52.594 TRACE neutron.agent.metadata.agent
> 2016-05-05 17:12:52.622 WARNING oslo.service.loopingcall [-] Function
> 'neutron.agent.metadata.agent.UnixDomainMetadataProxy._report_state' run
> outlasted interval by 30.19 sec
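>
> That MessagingTimeout means the agent never got a reply over the RPC
> bus, so it is worth confirming that RabbitMQ is up and that
> neutron-server is consuming. A possible quick check, assuming
> DevStack's default RabbitMQ setup:
>
>     # is the broker running and reachable?
>     sudo rabbitmqctl status
>     # are the neutron RPC queues backing up?
>     sudo rabbitmqctl list_queues name messages | grep q-plugin
>
> If the broker is down or the queues are piling up, every service will
> time out reporting state, which would also explain the c-vol errors
> below.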
>
> and in c-vol:
>
> 2016-05-05 17:31:02.799 ERROR cinder.service [-] Manager for service
> cinder-volume devstack1 at lvmdriver-1 is reporting problems, not sending
> heartbeat. Service will appear "down".
> 2016-05-05 17:31:12.807 ERROR cinder.service [-] Manager for service
> cinder-volume devstack1 at lvmdriver-1 is reporting problems, not sending
> heartbeat. Service will appear "down".
> 2016-05-05 17:31:22.815 ERROR cinder.service [-] Manager for service
> cinder-volume devstack1 at lvmdriver-1 is reporting problems, not sending
> heartbeat. Service will appear "down".
> 2016-05-05 17:31:33.041 ERROR cinder.service [-] Manager for service
> cinder-volume devstack1 at lvmdriver-1 is reporting problems, not sending
> heartbeat. Service will appear "down".
> 2016-05-05 17:31:43.043 ERROR cinder.service [-] Manager for service
> cinder-volume devstack1 at lvmdriver-1 is reporting problems, not sending
> heartbeat. Service will appear "down".
> 2016-05-05 17:31:44.028 DEBUG oslo_service.periodic_task
> [req-b829e2a7-41cf-4bc6-a86b-ebdec66f9882 None None] Running periodic task
> VolumeManager._publish_service_capabilities from (pid=4038)
> run_periodic_tasks
> /usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:213
> 2016-05-05 17:31:44.028 DEBUG oslo_service.periodic_task
> [req-b829e2a7-41cf-4bc6-a86b-ebdec66f9882 None None] Running periodic task
> VolumeManager._report_driver_status from (pid=4038) run_periodic_tasks
> /usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:213
> 2016-05-05 17:31:44.029 WARNING cinder.volume.manager
> [req-b829e2a7-41cf-4bc6-a86b-ebdec66f9882 None None] Update driver status
> failed: (config name lvmdriver-1) is uninitialized.
> 2016-05-05 17:31:53.047 ERROR cinder.service [-] Manager for service
> cinder-volume devstack1 at lvmdriver-1 is reporting problems, not sending
> heartbeat. Service will appear "down".
> 2016-05-05 17:32:03.050 ERROR cinder.service [-] Manager for service
> cinder-volume devstack1 at lvmdriver-1 is reporting problems, not sending
> heartbeat. Service will appear "down".
> 2016-05-05 17:32:13.060 ERROR cinder.service [-] Manager for service
> cinder-volume devstack1 at lvmdriver-1 is reporting problems, not sending
> heartbeat. Service will appear "down".
> 2016-05-05 17:32:23.383 ERROR cinder.service [-] Manager for service
> cinder-volume devstack1 at lvmdriver-1 is reporting problems, not sending
> heartbeat. Service will appear "down".
> 2016-05-05 17:32:33.386 ERROR cinder.service [-] Manager for service
> cinder-volume devstack1 at lvmdriver-1 is reporting problems, not sending
> heartbeat. Service will appear "down".
> 2016-05-05 17:32:43.391 ERROR cinder.service [-] Manager for service
> cinder-volume devstack1 at lvmdriver-1 is reporting problems, not sending
> heartbeat. Service will appear "down".
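>
> The "lvmdriver-1 is uninitialized" warning suggests the LVM backend
> never came up, often because the loop device backing the volume group
> disappeared after the reboot. A possible check, assuming DevStack's
> default backing-file and volume-group names:
>
>     # does the volume group DevStack created still exist?
>     sudo vgs stack-volumes-lvmdriver-1
>     # is its loop device still attached?
>     sudo losetup -a | grep stack-volumes
>
> If the loop device is gone, re-attaching the backing file with losetup
> and restarting c-vol should bring the backend back.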
>
>
> 2016-05-05 17:17 GMT+02:00 Silvia Fichera <fichera.sil at gmail.com>:
>
>> I have rebooted the machine and the agent is now up.
>> But I still can't ping from the floating IP that I have assigned.
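>>
>> A possible way to narrow that down, assuming the L3 agent uses the
>> usual qrouter-<router-id> namespace naming (placeholders, substitute
>> your own IDs):
>>
>>     # list the namespaces the agents created
>>     ip netns
>>     # ping the instance's fixed IP from inside the router namespace
>>     sudo ip netns exec qrouter-<router-id> ping <instance-fixed-ip>
>>
>> If the fixed IP answers but the floating IP does not, the problem is
>> in the NAT rules or the external bridge rather than in the instance.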
>>
>> 2016-05-05 14:06 GMT+02:00 Vikram Choudhary <vikschw at gmail.com>:
>>
>>> Can you please check your disk space and memory? I can see a memory
>>> allocation failure:
>>>
>>> 2016-05-05 09:58:19.263 TRACE neutron OSError: [Errno 12] Cannot
>>> allocate memory
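>>>
>>> If the box simply has no swap, a quick way to confirm and work
>>> around it (a sketch, assuming you can spare 2G on the root disk):
>>>
>>>     # how much memory and swap is actually free?
>>>     free -m
>>>     # add a temporary 2G swap file
>>>     sudo fallocate -l 2G /swapfile
>>>     sudo chmod 600 /swapfile
>>>     sudo mkswap /swapfile
>>>     sudo swapon /swapfile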
>>>
>>> On Thu, May 5, 2016 at 4:55 PM, Silvia Fichera <fichera.sil at gmail.com>
>>> wrote:
>>>
>>>> From the screen there are a few errors and then q-agt fails:
>>>>
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
>>>> 2016-05-05 09:58:19.263 TRACE neutron     self.gen.throw(type, value,
>>>> traceback)
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/opt/stack/neutron/neutron/agent/linux/polling.py", line 38, in
>>>> get_polling_manager
>>>> 2016-05-05 09:58:19.263 TRACE neutron     pm.stop()
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/opt/stack/neutron/neutron/agent/linux/polling.py", line 56, in stop
>>>> 2016-05-05 09:58:19.263 TRACE neutron     self._monitor.stop()
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/opt/stack/neutron/neutron/agent/linux/async_process.py", line 128, in stop
>>>> 2016-05-05 09:58:19.263 TRACE neutron     self._kill(kill_signal)
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/opt/stack/neutron/neutron/agent/linux/ovsdb_monitor.py", line 115, in
>>>> _kill
>>>> 2016-05-05 09:58:19.263 TRACE neutron     super(SimpleInterfaceMonitor,
>>>> self)._kill(*args, **kwargs)
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/opt/stack/neutron/neutron/agent/linux/async_process.py", line 161, in
>>>> _kill
>>>> 2016-05-05 09:58:19.263 TRACE neutron     pid = self.pid
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/opt/stack/neutron/neutron/agent/linux/async_process.py", line 157, in pid
>>>> 2016-05-05 09:58:19.263 TRACE neutron     run_as_root=self.run_as_root)
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/opt/stack/neutron/neutron/agent/linux/utils.py", line 277, in
>>>> get_root_helper_child_pid
>>>> 2016-05-05 09:58:19.263 TRACE neutron     pid = find_child_pids(pid)[0]
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/opt/stack/neutron/neutron/agent/linux/utils.py", line 203, in
>>>> find_child_pids
>>>> 2016-05-05 09:58:19.263 TRACE neutron     log_fail_as_error=False)
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/opt/stack/neutron/neutron/agent/linux/utils.py", line 120, in execute
>>>> 2016-05-05 09:58:19.263 TRACE neutron     addl_env=addl_env)
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/opt/stack/neutron/neutron/agent/linux/utils.py", line 89, in
>>>> create_process
>>>> 2016-05-05 09:58:19.263 TRACE neutron     stderr=subprocess.PIPE)
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/opt/stack/neutron/neutron/common/utils.py", line 199, in subprocess_popen
>>>> 2016-05-05 09:58:19.263 TRACE neutron     close_fds=close_fds, env=env)
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/usr/local/lib/python2.7/dist-packages/eventlet/green/subprocess.py", line
>>>> 53, in __init__
>>>> 2016-05-05 09:58:19.263 TRACE neutron
>>>> subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds)
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/usr/lib/python2.7/subprocess.py", line 710, in __init__
>>>> 2016-05-05 09:58:19.263 TRACE neutron     errread, errwrite)
>>>> 2016-05-05 09:58:19.263 TRACE neutron   File
>>>> "/usr/lib/python2.7/subprocess.py", line 1223, in _execute_child
>>>> 2016-05-05 09:58:19.263 TRACE neutron     self.pid = os.fork()
>>>> 2016-05-05 09:58:19.263 TRACE neutron OSError: [Errno 12] Cannot
>>>> allocate memory
>>>> 2016-05-05 09:58:19.263 TRACE neutron
>>>> 2016-05-05 09:58:20.310 INFO oslo_rootwrap.client
>>>> [req-a61d57b4-aa7d-4b1a-ad73-d4eb36c37a4a None None] Stopping rootwrap
>>>> daemon process with pid=18261
>>>> q-agt failed to start
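>>>>
>>>> Since os.fork() failed with ENOMEM, the host was out of memory at
>>>> that moment. A quick way to see what was eating it, assuming the
>>>> standard procps tools are installed:
>>>>
>>>>     # top memory consumers, then overall memory/swap state
>>>>     ps aux --sort=-%mem | head -15
>>>>     free -m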
>>>>
>>>> 2016-05-05 12:31 GMT+02:00 Vikram Choudhary <vikschw at gmail.com>:
>>>>
>>>>> Can you navigate to the q-agt devstack screen to check what's going on?
>>>>>
>>>>> Execute screen -x and then go to the q-agt screen by using CTRL+A N
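>>>>>
>>>>> If the window order is unclear, these screen shortcuts may help,
>>>>> assuming the default DevStack session name "stack":
>>>>>
>>>>>     screen -x stack   # attach to the running session
>>>>>     # CTRL+A "        show the window list and pick q-agt
>>>>>     # CTRL+A N        cycle to the next window
>>>>>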
>>>>> On May 5, 2016 3:53 PM, "Silvia Fichera" <fichera.sil at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> There is no error in q-svc and when I try to open q-agt the system
>>>>>> crashes.
>>>>>> On 5 May 2016 at 12:03, "Vikram Choudhary" <vikschw at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> Can you check q-agt and q-svc logs for abnormalities?
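>>>>>>
>>>>>> A reasonable first check is whether Open vSwitch itself is
>>>>>> healthy, assuming the usual Ubuntu packaging:
>>>>>>
>>>>>>     # bridges and ports as OVS sees them
>>>>>>     sudo ovs-vsctl show
>>>>>>     # is the switch daemon running?
>>>>>>     sudo service openvswitch-switch status
>>>>>>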
>>>>>> On May 5, 2016 3:29 PM, "Silvia Fichera" <fichera.sil at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> I have installed OpenStack (Liberty) via DevStack. I created the
>>>>>>> external network, the private network, and a router connecting them,
>>>>>>> launched instances, and assigned a floating IP.
>>>>>>>
>>>>>>> BUT when I check connectivity I can't ping the router interface,
>>>>>>> and from the instances I can't ping any external address.
>>>>>>>
>>>>>>> I have checked neutron agent-list:
>>>>>>>
>>>>>>> +--------------------------------------+--------------------+-----------+-------+----------------+---------------------------+
>>>>>>> | id                                   | agent_type         | host      | alive | admin_state_up | binary                    |
>>>>>>> +--------------------------------------+--------------------+-----------+-------+----------------+---------------------------+
>>>>>>> | 0b1a6478-4f1d-4338-a45d-112fda0c46af | DHCP agent         | devstack1 | :-)   | True           | neutron-dhcp-agent        |
>>>>>>> | 53a726ca-d874-41da-a06e-9e30624e6fce | Open vSwitch agent | devstack1 | xxx   | True           | neutron-openvswitch-agent |
>>>>>>> | a8938910-8ef2-4894-a31c-5047d536156d | L3 agent           | devstack1 | :-)   | True           | neutron-l3-agent          |
>>>>>>> | ede1be74-12fc-435c-9f4b-ef44ae088a2d | Metadata agent     | devstack1 | :-)   | True           | neutron-metadata-agent    |
>>>>>>> +--------------------------------------+--------------------+-----------+-------+----------------+---------------------------+
>>>>>>>
>>>>>>> The Open vSwitch agent is not alive.
>>>>>>>
>>>>>>> How can I debug it?
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Silvia Fichera
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Mailing list:
>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>>>> Post to     : openstack at lists.openstack.org
>>>>>>> Unsubscribe :
>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>>>>
>>>>>>>
>>>>
>>>>
>>>> --
>>>> Silvia Fichera
>>>>
>>>
>>>
>>
>>
>> --
>> Silvia Fichera
>>
>
>
>
> --
> Silvia Fichera
>

