Hi. I'm setting up an OpenStack system on the servers of my laboratory. When I try to create an instance, instance creation fails, and it looks like libvirt fails to attach the vif to the instance. When I create a virtual machine manually with the virsh tool (libvirt), there is no problem. I have added the logs below:

1. controller node
"/var/log/nova/nova-conductor.log" 2018-11-28 21:18:13.033 2657 ERROR nova.scheduler.utils [req-291fdb2d-fa94-461c-9f5f-68d340791c77 3367829d9c004653bdc9102443bd4736 47270e4fb58045dc88b6f0f736286ffc - default default] [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] Error from last host: node1 (node node1): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in _do_build_and_run_instance\n filter_properties, request_spec)\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2120, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u"RescheduledException: Build of instance 9c2d08f3-0680-4709-a64d-ae1729a11304 was re-scheduled: internal error: libxenlight failed to create new domain 'instance-00000008'\n"] 2018-11-28 21:18:13.033 2657 WARNING nova.scheduler.utils [req-291fdb2d-fa94-461c-9f5f-68d340791c77 3367829d9c004653bdc9102443bd4736 47270e4fb58045dc88b6f0f736286ffc - default default] Failed to compute_task_build_instances: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 9c2d08f3-0680-4709-a64d-ae1729a11304. Last exception: internal error: libxenlight failed to create new domain 'instance-00000008': MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 9c2d08f3-0680-4709-a64d-ae1729a11304. Last exception: internal error: libxenlight failed to create new domain 'instance-00000008' 2018-11-28 21:18:13.034 2657 WARNING nova.scheduler.utils [req-291fdb2d-fa94-461c-9f5f-68d340791c77 3367829d9c004653bdc9102443bd4736 47270e4fb58045dc88b6f0f736286ffc - default default] [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] Setting instance to ERROR state.: MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 9c2d08f3-0680-4709-a64d-ae1729a11304. Last exception: internal error: libxenlight failed to create new domain 'instance-00000008' 2018-11-28 21:18:13.067 2657 WARNING oslo_config.cfg [req-291fdb2d-fa94-461c-9f5f-68d340791c77 3367829d9c004653bdc9102443bd4736 47270e4fb58045dc88b6f0f736286ffc - default default] Option "url" from group "neutron" is deprecated for removal (Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, "url" will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the endpoint_override option instead.). Its value may be silently ignored in the future.
"/var/log/neutron/neutron-linuxbridge-agent.log" 2018-11-28 17:41:45.593 2476 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Interface mappings: {'provider': 'enp1s0f1'} 2018-11-28 17:41:45.593 2476 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Bridge mappings: {} 2018-11-28 17:41:45.624 2476 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Agent initialized successfully, now running... 2018-11-28 17:41:45.901 2476 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] RPC agent_id: lba0369fa2714a 2018-11-28 17:41:45.907 2476 INFO neutron.agent.agent_extensions_manager [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Loaded agent extensions: [] 2018-11-28 17:41:46.121 2476 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Linux bridge agent Agent RPC Daemon Started! 2018-11-28 17:41:46.122 2476 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Linux bridge agent Agent out of sync with plugin! 2018-11-28 17:41:46.512 2476 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Clearing orphaned ARP spoofing entries for devices [] 2018-11-28 17:41:47.020 2476 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Clearing orphaned ARP spoofing entries for devices [] 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent [-] Failed reporting state!: MessagingTimeout: Timed out waiting for a reply to message ID 20ef587240864120b878559ab821adbf 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent Traceback (most recent call last): 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 128, in _report_state 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent True) 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent File "/usr/lib/python2.7/dist-packages/neutron/agent/rpc.py", line 93, in report_state 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent return method(context, 'report_state', **kwargs) 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 174, in call 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent retry=self.retry) 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent File "/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 131, in _send 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent timeout=timeout, retry=retry) 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 559, in send 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent retry=retry) 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 548, in _send 2018-11-28 17:42:45.981 2476 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent result = self._waiter.wait(msg_id, timeout) 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 440, in wait 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent message = self.waiters.get(msg_id, timeout=timeout) 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 328, in get 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent 'to message ID %s' % msg_id) 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent MessagingTimeout: Timed out waiting for a reply to message ID 20ef587240864120b878559ab821adbf 2018-11-28 17:42:45.981 2476 ERROR neutron.plugins.ml2.drivers.agent._common_agent 2018-11-28 17:42:45.986 2476 WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.agent._common_agent.CommonAgentLoop._report_state' run outlasted interval by 30.07 sec 2018-11-28 17:42:46.055 2476 INFO neutron.plugins.ml2.drivers.agent._common_agent [-] Linux bridge agent Agent has just been revived. Doing a full sync. 2018-11-28 17:42:46.156 2476 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Linux bridge agent Agent out of sync with plugin! 2018-11-28 17:43:40.189 2476 INFO neutron.agent.securitygroups_rpc [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Preparing filters for devices set(['tap4a09374a-7f']) 2018-11-28 17:43:40.935 2476 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Port tap4a09374a-7f updated. Details: {u'profile': {}, u'network_qos_policy_id': None, u'qos_policy_id': None, u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': u'87fa0d9c-5ed3-4332-8782-0d4139eed7f3', u'segmentation_id': None, u'mtu': 1500, u'device_owner': u'network:dhcp', u'physical_network': u'provider', u'mac_address': u'fa:16:3e:ab:0e:84', u'device': u'tap4a09374a-7f', u'port_security_enabled': False, u'port_id': u'4a09374a-7fa5-42c2-9430-67a0cd65336c', u'fixed_ips': [{u'subnet_id': u'e95946a8-070c-42c4-877e-279e6e7acc7e', u'ip_address': u'192.0.10.4'}], u'network_type': u'flat'} 2018-11-28 17:43:41.124 2476 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Skipping ARP spoofing rules for port 'tap4a09374a-7f' because it has port security disabled
"/var/log/neutron/neutron-server.log"
2018-11-28 17:30:02.130 15995 INFO neutron.pecan_wsgi.hooks.translation [req-e7554d70-c84d-46d5-8ff6-a536fb35664c 29a3a16fd2484ee9bed834a3835545af 5ebe3484974848b182a381127cb35a22 - default default] GET failed (client error): The resource could not be found.
2018-11-28 17:30:02.130 15995 INFO neutron.wsgi [req-e7554d70-c84d-46d5-8ff6-a536fb35664c 29a3a16fd2484ee9bed834a3835545af 5ebe3484974848b182a381127cb35a22 - default default] 10.150.21.183 "GET /v2.0/floatingips?fixed_ip_address=192.0.10.10&port_id=8ab0d544-ec5b-4e69-95f4-1f06f7b53bb4 HTTP/1.1" status: 404 len: 309 time: 0.0072770
2018-11-28 17:30:02.167 15995 INFO neutron.wsgi [req-a2b5f53b-8992-4178-b0b6-55be9d1a0f32 29a3a16fd2484ee9bed834a3835545af 5ebe3484974848b182a381127cb35a22 - default default] 10.150.21.183 "GET /v2.0/subnets?id=e95946a8-070c-42c4-877e-279e6e7acc7e HTTP/1.1" status: 200 len: 822 time: 0.0341990
2018-11-28 17:30:02.199 15995 INFO neutron.wsgi [req-9e1a51e9-9ccc-4226-a78f-0420ff95c147 29a3a16fd2484ee9bed834a3835545af 5ebe3484974848b182a381127cb35a22 - default default] 10.150.21.183 "GET /v2.0/ports?network_id=87fa0d9c-5ed3-4332-8782-0d4139eed7f3&device_owner=network%3Adhcp HTTP/1.1" status: 200 len: 1080 time: 0.0300300
2018-11-28 17:30:02.584 15995 INFO neutron.notifiers.nova [-] Nova event response: {u'status': u'completed', u'tag': u'8ab0d544-ec5b-4e69-95f4-1f06f7b53bb4', u'name': u'network-changed', u'server_uuid': u'a9afc2d4-f4c9-429b-9773-4de8a3eaefa5', u'code': 200}
2018-11-28 17:30:02.628 15995 INFO neutron.wsgi [req-73265cf5-5f0d-4217-b716-caa2fb906abf 29a3a16fd2484ee9bed834a3835545af 5ebe3484974848b182a381127cb35a22 - default default] 10.150.21.183 "GET /v2.0/ports?tenant_id=47270e4fb58045dc88b6f0f736286ffc&device_id=a9afc2d4-f4c9-429b-9773-4de8a3eaefa5 HTTP/1.1" status: 200 len: 1062 time: 0.0316660
2018-11-28 17:30:02.696 15995 INFO neutron.wsgi [req-ed53b92c-3033-4b4a-ade4-fdc5a3463e8c 29a3a16fd2484ee9bed834a3835545af 5ebe3484974848b182a381127cb35a22 - default default] 10.150.21.183 "GET /v2.0/networks?id=87fa0d9c-5ed3-4332-8782-0d4139eed7f3 HTTP/1.1" status: 200 len: 872 time: 0.0655539
2018-11-28 17:30:02.702 15995 WARNING neutron.pecan_wsgi.controllers.root [req-ccd8c9b8-d2cf-40f7-b53b-5936bd0c9a6d 29a3a16fd2484ee9bed834a3835545af 5ebe3484974848b182a381127cb35a22 - default default] No controller found for: floatingips - returning response code 404: PecanNotFound
2. compute node
"/var/log/libvirt/libxl/libxl-driver.log" 2018-11-28 08:40:31.920+0000: libxl: libxl_event.c:681:libxl__ev_xswatch_deregister: remove watch for path @releaseDomain: Bad file descriptor 2018-11-28 09:57:01.707+0000: libxl: libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge online [2536] exited with error status 1 2018-11-28 09:57:01.708+0000: libxl: libxl_device.c:1286:device_hotplug_child_death_cb: script: ip link set vif1.0 name tape5a239a8-6e failed 2018-11-28 09:57:01.708+0000: libxl: libxl_create.c:1522:domcreate_attach_devices: Domain 1:unable to add vif devices
"/var/log/xen/xen-hotplug.log"
RTNETLINK answers: Device or resource busy
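
If I read the libxl-driver.log and xen-hotplug.log entries above correctly, the Xen hotplug path fails while renaming the backend interface vif1.0 to the tap name that Nova/Neutron expect (tape5a239a8-6e), and the kernel answers "Device or resource busy". As far as I know, that is the error you get when you try to rename a link that is still UP. Here is a minimal sketch of that failure mode, using a throwaway dummy interface instead of a real vif, so the names are only illustrative (needs root):

#!/usr/bin/env python
# Sketch only: reproduce "RTNETLINK answers: Device or resource busy" by
# renaming a link that is UP, then show the rename succeeding once it is DOWN.
# The interface names are made up; no real Xen vif is touched.
import subprocess

def sh(cmd):
    print('$ ' + cmd)
    subprocess.call(cmd, shell=True)

sh('ip link add xvif-test type dummy')
sh('ip link set xvif-test up')
sh('ip link set xvif-test name tap-test')   # fails: Device or resource busy (link is up)
sh('ip link set xvif-test down')
sh('ip link set xvif-test name tap-test')   # succeeds once the link is down
sh('ip link del tap-test')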
"/var/log/nova/nova-compute.log"
: libvirtError: internal error: libxenlight failed to create new domain 'instance-00000008'
2018-11-28 21:18:11.350 2384 ERROR nova.virt.libvirt.driver [req-b3e761b7-00fa-4930-bd9e-4330f8440c03 48086750fa13420888601964bb6a9d0d 5ebe3484974848b182a381127cb35a22 - default default] [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] Failed to start libvirt guest: libvirtError: internal error: libxenlight failed to create new domain 'instance-00000008'
2018-11-28 21:18:11.352 2384 INFO os_vif [req-b3e761b7-00fa-4930-bd9e-4330f8440c03 48086750fa13420888601964bb6a9d0d 5ebe3484974848b182a381127cb35a22 - default default] Successfully unplugged vif VIFBridge(active=False,address=fa:16:3e:6b:e4:b7,bridge_name='brq87fa0d9c-5e',has_traffic_filtering=True,id=484807ca-8c7c-4509-a5f5-ed7e5fd2078f,network=Network(87fa0d9c-5ed3-4332-8782-0d4139eed7f3),plugin='linux_bridge',port_profile=<?>,preserve_on_delete=False,vif_name='tap484807ca-8c')
2018-11-28 21:18:11.554 2384 INFO nova.virt.libvirt.driver [req-b3e761b7-00fa-4930-bd9e-4330f8440c03 48086750fa13420888601964bb6a9d0d 5ebe3484974848b182a381127cb35a22 - default default] [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] Deleting instance files /var/lib/nova/instances/9c2d08f3-0680-4709-a64d-ae1729a11304_del
2018-11-28 21:18:11.556 2384 INFO nova.virt.libvirt.driver [req-b3e761b7-00fa-4930-bd9e-4330f8440c03 48086750fa13420888601964bb6a9d0d 5ebe3484974848b182a381127cb35a22 - default default] [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] Deletion of /var/lib/nova/instances/9c2d08f3-0680-4709-a64d-ae1729a11304_del complete
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [req-b3e761b7-00fa-4930-bd9e-4330f8440c03 48086750fa13420888601964bb6a9d0d 5ebe3484974848b182a381127cb35a22 - default default] [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] Instance failed to spawn: libvirtError: internal error: libxenlight failed to create new domain 'instance-00000008'
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] Traceback (most recent call last):
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2251, in _build_resources
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] yield resources
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2031, in _build_and_run_instance
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] block_device_info=block_device_info)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3089, in spawn
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] destroy_disks_on_failure=True)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5614, in _create_domain_and_network
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] destroy_disks_on_failure)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] self.force_reraise()
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] six.reraise(self.type_, self.value, self.tb)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5583, in _create_domain_and_network
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] post_xml_callback=post_xml_callback)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5502, in _create_domain
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] guest.launch(pause=pause)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 144, in launch
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] self._encoded_xml, errors='ignore')
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] self.force_reraise()
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] six.reraise(self.type_, self.value, self.tb)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 139, in launch
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] return self._domain.createWithFlags(flags)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] result = proxy_call(self._autowrap, f, *args, **kwargs)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] rv = execute(f, *args, **kwargs)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] six.reraise(c, e, tb)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] rv = meth(*args, **kwargs)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1092, in createWithFlags
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: 9c2d08f3-0680-4709-a64d-ae1729a11304] libvirtError: internal error: libxenlight failed to create new domain 'instance-00000008'
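
To take Nova out of the picture on the compute node, I think the same createWithFlags() path that this traceback ends in can be exercised directly from the libvirt Python bindings, using the domain XML that Nova generated for the failing instance. A minimal sketch, assuming the XML has been dumped to a file first (the path below is made up, I use createXML() instead of Nova's define-then-createWithFlags sequence for simplicity, and the connection URI is what I would expect for the libxl driver, so adjust it if your connection_uri differs):

# Sketch only: start the Nova-generated domain XML through the same libvirt
# Python API that nova-compute uses, to see whether libxenlight/vif-bridge
# fails the same way without Nova involved.
import libvirt

XML_PATH = '/tmp/instance-00000008.xml'   # hypothetical dump of the instance XML

conn = libvirt.open('xen:///system')      # assumed libxl connection URI
with open(XML_PATH) as f:
    xml = f.read()
try:
    dom = conn.createXML(xml, 0)          # transient domain, analogous to Nova's launch
    print('domain started: ' + dom.name())
except libvirt.libvirtError as e:
    print('libvirt error: ' + str(e))     # expect the same libxenlight error if the
                                          # problem is in the Xen vif hotplug script
finally:
    conn.close()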
Can anyone help me, please?