[Openstack] [devstack][Nova] Unable to spawn VMs
Deepak Shetty
dpkshetty at gmail.com
Tue Jun 10 10:51:55 UTC 2014
Sergey,
Do you have any other suggestions for me here? :)
On Mon, Jun 9, 2014 at 6:55 PM, Deepak Shetty <dpkshetty at gmail.com> wrote:
> No errors in instance-00000001.log.. just that it says qemu terminating on
> signal 15, which is fine and expected, I feel.
> I had virt_type = qemu all the while; maybe I will try with virt_type
> = kvm now.
>
> Am I the only one seeing this issue? If yes, then I am wondering if there
> is something really silly in my setup! :)
>
>
> On Mon, Jun 9, 2014 at 4:53 PM, Sergey Kolekonov <skolekonov at mirantis.com>
> wrote:
>
>> Are there any errors in instance-00000001.log or the other log files?
>> Try changing virt_type = kvm to virt_type = qemu in /etc/nova/nova.conf
>> to rule out kvm problems.
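(Sergey's suggestion above is a one-line config edit plus a nova-compute restart. A minimal sketch; it operates on a scratch copy of the config so it is safe to run anywhere, since the real target path /etc/nova/nova.conf and the devstack screen-window name below are host-specific assumptions:)

```shell
# Switch virt_type from kvm to qemu, demonstrated on a scratch copy of
# a minimal [libvirt] section; on a real host the file would be
# /etc/nova/nova.conf and the edit would need sudo.
conf=$(mktemp)
printf '[libvirt]\nvirt_type = kvm\n' > "$conf"
sed -i 's/^virt_type *= *kvm$/virt_type = qemu/' "$conf"
grep '^virt_type' "$conf"
rm -f "$conf"
# In devstack, nova-compute typically runs in the "n-cpu" screen window;
# restart it by attaching (screen -S stack -p n-cpu), hitting Ctrl-C,
# and re-running the command from shell history.
```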
>>
>>
>> On Mon, Jun 9, 2014 at 2:28 PM, Deepak Shetty <dpkshetty at gmail.com>
>> wrote:
>>
>>> Hi Sergey,
>>> I don't see the .log in qemu/instance-XXX at all.
>>>
>>> But there is more...
>>>
>>> [stack at devstack-large-vm ~]$ [ready] nova show bc598be6-999b-4ab9-8a0a-1c4d29dd811a
>>> +--------------------------------------+----------------------------------------------------------------+
>>> | Property                             | Value                                                          |
>>> +--------------------------------------+----------------------------------------------------------------+
>>> | OS-DCF:diskConfig                    | MANUAL                                                         |
>>> | OS-EXT-AZ:availability_zone          | nova                                                           |
>>> | OS-EXT-SRV-ATTR:host                 | devstack-large-vm.localdomain                                  |
>>> | OS-EXT-SRV-ATTR:hypervisor_hostname  | devstack-large-vm.localdomain                                  |
>>> | OS-EXT-SRV-ATTR:instance_name        | instance-0000000a                                              |
>>> | OS-EXT-STS:power_state               | 0                                                              |
>>> | OS-EXT-STS:task_state                | spawning                                                       |
>>> | OS-EXT-STS:vm_state                  | building                                                       |
>>> | OS-SRV-USG:launched_at               | -                                                              |
>>> | OS-SRV-USG:terminated_at             | -                                                              |
>>> | accessIPv4                           |                                                                |
>>> | accessIPv6                           |                                                                |
>>> | config_drive                         |                                                                |
>>> | created                              | 2014-06-09T10:18:53Z                                           |
>>> | flavor                               | m1.tiny (1)                                                    |
>>> | hostId                               | 393d36ca7cc7c30957286e775b5808c8103b4b7be1af4f2163cc2fe4       |
>>> | id                                   | bc598be6-999b-4ab9-8a0a-1c4d29dd811a                           |
>>> | image                                | cirros-0.3.2-x86_64-uec (99471623-273c-46ed-b1b0-5a58605aab76) |
>>> | key_name                             | -                                                              |
>>> | metadata                             | {}                                                             |
>>> | name                                 | mynewvm                                                        |
>>> | os-extended-volumes:volumes_attached | []                                                             |
>>> | private network                      | 10.0.0.2                                                       |
>>> | progress                             | 0                                                              |
>>> | security_groups                      | default                                                        |
>>> | status                               | BUILD                                                          |
>>> | tenant_id                            | 1d0dedb6e6f344258a1ab44df3fcd4ee                               |
>>> | updated                              | 2014-06-09T10:18:53Z                                           |
>>> | user_id                              | 730ccd3aaf7140789f4929edd71d3d31                               |
>>> +--------------------------------------+----------------------------------------------------------------+
>>> [stack at devstack-large-vm ~]$ [ready] ps aux| grep qemu| grep instance
>>> stack 11388 99.1 0.9 1054588 36884 ? Sl 10:18 4:03
>>> /usr/bin/qemu-system-x86_64 -machine accel=kvm -name
>>> guestfs-tgx65m4jvhgmu8la -S -machine pc-i440fx-1.6,accel=kvm,usb=off
>>> -cpu host -m 500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1
>>> -uuid 6bd85030-c2a0-4576-82d2-ff30a9e44024 -nographic -no-user-config
>>> -nodefaults -chardev
>>> socket,id=charmonitor,path=/opt/stack/.config/libvirt/qemu/lib/guestfs-tgx65m4jvhgmu8la.monitor,server,nowait
>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet
>>> -no-reboot -no-acpi -kernel /var/tmp/.guestfs-1001/appliance.d/kernel
>>> -initrd /var/tmp/.guestfs-1001/appliance.d/initrd -append panic=1
>>> console=ttyS0 udevtimeout=600 no_timer_check acpi=off printk.time=1
>>> cgroup_disable=memory root=/dev/sdb selinux=0 TERM=screen -device
>>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -device
>>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive
>>> file=/opt/stack/data/nova/instances/bc598be6-999b-4ab9-8a0a-1c4d29dd811a/disk,if=none,id=drive-scsi0-0-0-0,format=qcow2,cache=writeback
>>> -device
>>> scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1
>>> -drive
>>> file=/tmp/libguestfsagFFSQ/overlay1,if=none,id=drive-scsi0-0-1-0,format=qcow2,cache=unsafe
>>> -device
>>> scsi-hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0
>>> -chardev socket,id=charserial0,path=/tmp/libguestfsagFFSQ/console.sock
>>> -device isa-serial,chardev=charserial0,id=serial0 -chardev
>>> socket,id=charchannel0,path=/tmp/libguestfsagFFSQ/guestfsd.sock -device
>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0
>>> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
>>>
>>> [root at devstack-large-vm qemu]# ls -lt
>>> total 84
>>> -rw-------. 1 root root 8279 Jun 9 09:59 instance-00000001.log
>>> -rw-------. 1 root root 2070 Jun 9 09:29 instance-00000007.log
>>> -rw-------. 1 root root 63444 Jun 6 09:20 instance-00000004.log
>>> -rw-------. 1 root root 2071 Jun 6 07:12 instance-00000002.log
>>> [root at devstack-large-vm qemu]# date
>>> Mon Jun 9 10:24:32 UTC 2014
>>>
>>> ^^ No log file for instance 0a in the listing above ^^
>>>
>>> So the qemu process above is using the disk from my instance, but its
>>> -name is not instance-XXXXX, even though
>>> OS-EXT-SRV-ATTR:instance_name | instance-0000000a
>>> shows 0a as the instance name.
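(When the qemu -name does not match the nova instance name, the -drive path is the reliable key for correlating a process with an instance. A minimal sketch, run here against a canned fragment of the ps output above so it executes anywhere; on a live host the input would be `ps -eo args` instead:)

```shell
# Extract the nova instance UUID from a qemu command line via its
# -drive path (canned sample copied from the ps output in this thread).
cmdline='file=/opt/stack/data/nova/instances/bc598be6-999b-4ab9-8a0a-1c4d29dd811a/disk,if=none,id=drive-scsi0-0-0-0'
echo "$cmdline" | sed -n 's|.*instances/\([0-9a-f-]*\)/disk.*|\1|p'
```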
>>>
>>> virsh shows it as running...
>>>
>>> [stack at devstack-large-vm ~]$ [ready] virsh list
>>> Id Name State
>>> ----------------------------------------------------
>>>  2     guestfs-tgx65m4jvhgmu8la       running
>>>
>>> thanx,
>>> deepak
>>>
>>>
>>> On Mon, Jun 9, 2014 at 3:50 PM, Sergey Kolekonov <
>>> skolekonov at mirantis.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Have you inspected the /var/log/libvirt/qemu/instance-xxxxxxxx.log
>>>> files? Such a problem can be related to your hypervisor, especially if
>>>> you use nested kvm virtualization on your host machine.
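(Sergey's nested-kvm point can be checked quickly. A sketch; the grep below runs against a canned cpuinfo flags line so it works anywhere, with the host-side commands shown as comments since they depend on the actual machine:)

```shell
# Hardware-virt flag check, demonstrated on a canned cpuinfo flags line.
flags='fpu vme de pse tsc msr pae mce vmx ssse3'
if echo "$flags" | grep -qE 'vmx|svm'; then
  echo 'hardware virt flags present'
fi
# On the actual host, the equivalent checks would be:
#   grep -cE 'vmx|svm' /proc/cpuinfo            # >0 if virt extensions exposed
#   ls -l /dev/kvm                              # device must exist for kvm
#   cat /sys/module/kvm_intel/parameters/nested # Y/1 if nested kvm enabled
```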
>>>>
>>>>
>>>> On Mon, Jun 9, 2014 at 2:06 PM, Deepak Shetty <dpkshetty at gmail.com>
>>>> wrote:
>>>>
>>>>> (Hit send by mistake.. continuing below)
>>>>>
>>>>>
>>>>> On Mon, Jun 9, 2014 at 3:32 PM, Deepak Shetty <dpkshetty at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi All,
>>>>>>   The last time I sent this issue, I didn't have the right tags in
>>>>>> the subject, so I am re-sending with the right tags in the hope that
>>>>>> the right folks might help provide some clues.
>>>>>>
>>>>>> I am using the latest devstack on F20 and I see VMs stuck in the
>>>>>> 'spawning' state.
>>>>>> No errors in n-sch or n-cpu, and I checked the other n-* screen logs
>>>>>> too; no errors whatsoever.
>>>>>>
>>>>>> There is enough memory and disk space available for the tiny and
>>>>>> small flavour VMs I am trying to run.
>>>>>>
>>>>>>
>>>>> 2014-06-09 09:57:19.935 AUDIT nova.compute.resource_tracker [-] Free
>>>>> ram (MB): 1905
>>>>> 2014-06-09 09:57:19.935 AUDIT nova.compute.resource_tracker [-] Free
>>>>> disk (GB): 46
>>>>> 2014-06-09 09:57:19.936 AUDIT nova.compute.resource_tracker [-] Free
>>>>> VCPUS: 1
>>>>> 2014-06-09 09:57:19.936 AUDIT nova.compute.resource_tracker [-] PCI
>>>>> stats: []
>>>>>
>>>>> The only thing I see other than INFO/DEBUG entries is the below:
>>>>> 2014-06-09 09:57:42.700 WARNING nova.compute.manager [-] Found 3 in
>>>>> the database and 0 on the hypervisor.
>>>>>
>>>>> What does that mean? Does it mean that Nova sees 3 VMs in its DB but
>>>>> a query to the hypervisor (via libvirt, I assume) returns none?
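(That warning comes from the compute manager comparing the instances it finds in the database against the domains the hypervisor reports. Conceptually it is a set difference; a hand-rolled sketch with canned names matching this thread, not nova's actual code:)

```shell
# Canned sketch of the check behind that warning: names Nova's DB knows
# vs. names the hypervisor reports (none, in this case).
db='instance-00000004
instance-00000007
instance-0000000a'
hyp=''   # virsh reported no nova-named domains
echo "$db" | while read -r name; do
  echo "$hyp" | grep -qxF "$name" || echo "in DB, not on hypervisor: $name"
done
```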
>>>>>
>>>>> Interestingly, I can see the qemu process for the VM that nova says is
>>>>> stuck in the spawning state, and virsh list also lists it.
>>>>> But I don't see the libvirt.xml for this stuck VM (from nova's
>>>>> perspective) in the /opt/stack/data/nova/instances/<uuid> folder.
>>>>>
>>>>> I was trying to figure out what this means. Where is nova stuck? The
>>>>> screen logs just don't point to anything useful to debug further.
>>>>>
>>>>> From the n-cpu logs I see the "Creating image" info log from
>>>>> nova.virt.libvirt, and that's the last one.
>>>>> I don't see the "Creating instance" info log from nova.virt, so it
>>>>> looks like it's stuck somewhere between creating the image and spawning
>>>>> the instance, but virsh list and the qemu process say otherwise.
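(One way to confirm where the spawn stalls is to test the lifecycle log lines for the instance: "Creating image" present, "Creating instance" absent. A sketch against a canned log excerpt; on a live devstack host the input would come from the n-cpu screen log, whose path varies by setup:)

```shell
# Canned n-cpu excerpt: 'Creating image' logged, 'Creating instance'
# never logged, i.e. the spawn stalled during image preparation.
log='2014-06-09 10:18:53 INFO nova.compute.manager [-] Starting instance...
2014-06-09 10:18:54 INFO nova.virt.libvirt.driver [-] Creating image'
if echo "$log" | grep -q 'Creating image' && \
   ! echo "$log" | grep -q 'Creating instance'; then
  echo 'spawn stalled after image creation'
fi
```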
>>>>>
>>>>> Looking for some debug hints here
>>>>>
>>>>> thanx,
>>>>> deepak
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>