[Openstack] Icehouse : vm spawn failed

Felix Lee zaknafein.lee at gmail.com
Wed Jul 9 09:51:00 UTC 2014


Hi, Malleshi,
I am not sure whether you also use Neutron with the Open vSwitch 
plugin, but we once hit a similar problem on our testbed. In our case 
it was because our Neutron network node was also a compute node: when 
a virtual machine was launched on the network node and its VIF was 
injected into the Open vSwitch configuration, it somehow made both the 
hypervisor and VM networks unstable. (I didn't look into the Neutron 
code; presumably there is a conflict in the Open vSwitch datapath 
configuration between the VM and the hypervisor.) I am not sure 
whether that is also your case, but if you run the Neutron network 
node and nova-compute on the same host, you might want to separate 
them. A quick way to inspect the bridge layout on the combined node is 
sketched below.
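
This is just a sketch, assuming the standard ovs-vsctl tool is 
available on the node (the bridge names mentioned are the usual 
Icehouse/Open vSwitch defaults, not necessarily yours):

    # show every OVS bridge with its ports and interfaces
    ovs-vsctl show
    # list just the bridge names (typically br-int, br-ex, br-tun)
    ovs-vsctl list-br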

BTW, if you configure notify_nova_on_port_status_changes and 
notify_nova_on_port_data_changes properly in neutron.conf (so that 
Neutron notifies Nova once the port is ready), you don't need to 
disable vif_plugging_is_fatal.



Best regards,
Felix Lee ~

On 2014-07-09 09:34, m.channappa.negalur at accenture.com wrote:
> Hello Felix,
>
> 1. Which network service do you use?
> Ans: Neutron
>
> 2. do you have separate network interfaces for VM instances and hypervisor?
>   Ans: Yes, I have 3 interfaces on both the compute node and the network node:
> 1. eth0: management network
> 2. eth1: external network
> 3. eth2: VM communication
>
>
> The hypervisor used in nova-compute is KVM.
>
> Regards,
> Malleshi CN
>
> -----Original Message-----
> From: Felix Lee [mailto:zaknafein.lee at gmail.com]
> Sent: Wednesday, July 09, 2014 12:56 PM
> To: Channappa Negalur, M.; openstack at lists.openstack.org
> Subject: Re: [Openstack] Icehouse : vm spawn failed
>
> Hi, Malleshi,
> Which network service do you use?
> Neutron or Nova network?
> And, do you have separate network interfaces for VM instances and hypervisor?
>
>
> Best regards,
> Felix Lee ~
>
> On 2014-07-07 13:48, m.channappa.negalur at accenture.com wrote:
>> Hello all,
>>
>> I am installing an Icehouse setup on 3 nodes. My entire setup is in a
>> virtual environment.
>>
>> ====== nova-compute.log ======
>>
>> WARNING nova.virt.libvirt.driver [-] Periodic task is updating the host
>> stat, it is trying to get disk instance-0000000b, but disk file was
>> removed by concurrent operations such as resize
>>
>>
>> WARNING nova.virt.disk.vfs.guestfs
>> [req-727a676a-ca15-4cb5-8fc4-73dbf307a14f
>> 4f783fbf23304d1682e820740b99f954 7e2d68b079be44048bedd223b3683f19]
>> Failed to close augeas aug_close: call launch before using this function
>>
>> (in guestfish, don't forget to use the 'run' command)
>>
>> 2014-07-07 17:02:23.852 2916 WARNING nova.virt.libvirt.driver
>> [req-df5b6fb1-5304-4f65-bfcf-fbff0ec7298f
>> 4f783fbf23304d1682e820740b99f954 7e2d68b079be44048bedd223b3683f19]
>> Timeout waiting for vif plugging callback for instance
>> 75515a86-ba63-4c95-8065-1add9da1f314
>>
>> 2014-07-07 17:02:24.693 2916 INFO nova.virt.libvirt.driver
>> [req-df5b6fb1-5304-4f65-bfcf-fbff0ec7298f
>> 4f783fbf23304d1682e820740b99f954 7e2d68b079be44048bedd223b3683f19]
>> [instance: 75515a86-ba63-4c95-8065-1add9da1f314] Deleting instance files
>> /var/lib/nova/instances/75515a86-ba63-4c95-8065-1add9da1f314
>>
>> 2014-07-07 17:02:24.693 2916 INFO nova.virt.libvirt.driver
>> [req-df5b6fb1-5304-4f65-bfcf-fbff0ec7298f
>> 4f783fbf23304d1682e820740b99f954 7e2d68b079be44048bedd223b3683f19]
>> [instance: 75515a86-ba63-4c95-8065-1add9da1f314] Deletion of
>> /var/lib/nova/instances/75515a86-ba63-4c95-8065-1add9da1f314 complete
>>
>> 2014-07-07 17:02:24.771 2916 ERROR nova.compute.manager
>> [req-df5b6fb1-5304-4f65-bfcf-fbff0ec7298f
>> 4f783fbf23304d1682e820740b99f954 7e2d68b079be44048bedd223b3683f19]
>> [instance: 75515a86-ba63-4c95-8065-1add9da1f314] Instance failed to spawn
>>
>> 2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance:
>> 75515a86-ba63-4c95-8065-1add9da1f314] Traceback (most recent call last):
>>
>> 2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance:
>> 75515a86-ba63-4c95-8065-1add9da1f314]   File
>> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1720,
>> in _spawn
>>
>> 2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance:
>> 75515a86-ba63-4c95-8065-1add9da1f314]     block_device_info)
>>
>> 2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance:
>> 75515a86-ba63-4c95-8065-1add9da1f314]   File
>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line
>> 2253, in spawn
>>
>> 2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance:
>> 75515a86-ba63-4c95-8065-1add9da1f314]     block_device_info)
>>
>> 2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance:
>> 75515a86-ba63-4c95-8065-1add9da1f314]   File
>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line
>> 3663, in _create_domain_and_network
>>
>> 2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance:
>> 75515a86-ba63-4c95-8065-1add9da1f314]     raise
>> exception.VirtualInterfaceCreateException()
>>
>> 2014-07-07 17:02:24.771 2916 TRACE nova.compute.manager [instance:
>> 75515a86-ba63-4c95-8065-1add9da1f314] VirtualInterfaceCreateException:
>> Virtual Interface creation failed
>>
>> I got the below error from Horizon:
>>
>> "Virtual interface creation failed".
>>
>> As per the link
>> https://ask.openstack.org/en/question/26985/icehouse-virtual-interface-creation-failed/
>>
>> If I add the below entries to my nova.conf, I am able to launch an
>> instance, but then I get disconnected from my host machine and cannot
>> connect via SSH either; I have to reboot my host machine to reconnect.
>>
>> vif_plugging_is_fatal = False
>>
>> vif_plugging_timeout = 0
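>>
>> (If I understand these options correctly, the first makes Nova proceed
>> even when the Neutron VIF-plugging callback never arrives, and the
>> second, set to 0, disables the wait entirely, so this masks the
>> failure rather than fixing it.)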
>>
>> Is there a method to fix this? Please let me know.
>>
>> Regards,
>>
>> Malleshi CN
>>
>>
>>
>
>


-- 
Felix H.T Lee                           Academia Sinica Grid & Cloud.
Tel: +886-2-27898308
Office: Room P111, Institute of Physics, 128 Academia Road, Section 2, 
Nankang, Taipei 115, Taiwan



