<div dir="ltr"><div>Sergey,<br></div> Do you have any other suggestions for me here ? :)<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Jun 9, 2014 at 6:55 PM, Deepak Shetty <span dir="ltr"><<a href="mailto:dpkshetty@gmail.com" target="_blank">dpkshetty@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>No errors in instance-00000001.log.. just that it says qemu terminating on signal 15. which is fine and expected i feel.<br>
</div>I had virt_type = qemu all the while now, maybe i will try with virt_type = kvm now<br>
<br></div>Am i the only one seeing this issue.. if yes, then I am wondering if there is somethign really silly in my setup ! :)<br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><br><div class="gmail_quote">

On Mon, Jun 9, 2014 at 4:53 PM, Sergey Kolekonov <skolekonov@mirantis.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Are there any errors in <span style="font-family:'courier new',monospace;font-size:13px">instance-00000001.log and other log files?</span><div>
<span style="font-family:'courier new',monospace;font-size:13px">Try to change </span><font face="courier new, monospace">virt_type = kvm to virt_type = qemu in /etc/nova/nova.conf to exclude kvm problems</font></div>
</div><div><div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Jun 9, 2014 at 2:28 PM, Deepak Shetty <span dir="ltr"><<a href="mailto:dpkshetty@gmail.com" target="_blank">dpkshetty@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><span style="font-family:courier new,monospace">Hi sergey<br></span></div><span style="font-family:courier new,monospace"> Don't see the .log in qemu/instnace-XXX at all<br>
<br>But there is more...<br>
[stack@devstack-large-vm ~]$ [ready] nova show bc598be6-999b-4ab9-8a0a-1c4d29dd811a
+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                            |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                             |
| OS-EXT-SRV-ATTR:host                 | devstack-large-vm.localdomain                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | devstack-large-vm.localdomain                                    |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000a                                                |
| OS-EXT-STS:power_state               | 0                                                                |
| OS-EXT-STS:task_state                | spawning                                                         |
| OS-EXT-STS:vm_state                  | building                                                         |
| OS-SRV-USG:launched_at               | -                                                                |
| OS-SRV-USG:terminated_at             | -                                                                |
| accessIPv4                           |                                                                  |
| accessIPv6                           |                                                                  |
| config_drive                         |                                                                  |
| created                              | 2014-06-09T10:18:53Z                                             |
| flavor                               | m1.tiny (1)                                                      |
| hostId                               | 393d36ca7cc7c30957286e775b5808c8103b4b7be1af4f2163cc2fe4         |
| id                                   | bc598be6-999b-4ab9-8a0a-1c4d29dd811a                             |
| image                                | cirros-0.3.2-x86_64-uec (99471623-273c-46ed-b1b0-5a58605aab76)   |
| key_name                             | -                                                                |
| metadata                             | {}                                                               |
| name                                 | mynewvm                                                          |
| os-extended-volumes:volumes_attached | []                                                               |
| private network                      | 10.0.0.2                                                         |
| progress                             | 0                                                                |
| security_groups                      | default                                                          |
| status                               | BUILD                                                            |
| tenant_id                            | 1d0dedb6e6f344258a1ab44df3fcd4ee                                 |
| updated                              | 2014-06-09T10:18:53Z                                             |
| user_id                              | 730ccd3aaf7140789f4929edd71d3d31                                 |
+--------------------------------------+----------------------------------------------------------------+

[stack@devstack-large-vm ~]$ [ready] ps aux| grep qemu| grep instance
stack 11388 99.1 0.9 1054588 36884 ? Sl 10:18 4:03 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name guestfs-tgx65m4jvhgmu8la -S -machine pc-i440fx-1.6,accel=kvm,usb=off -cpu host -m 500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 6bd85030-c2a0-4576-82d2-ff30a9e44024 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/opt/stack/.config/libvirt/qemu/lib/guestfs-tgx65m4jvhgmu8la.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -no-acpi -kernel /var/tmp/.guestfs-1001/appliance.d/kernel -initrd /var/tmp/.guestfs-1001/appliance.d/initrd -append panic=1 console=ttyS0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 TERM=screen -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive file=/opt/stack/data/nova/instances/bc598be6-999b-4ab9-8a0a-1c4d29dd811a/disk,if=none,id=drive-scsi0-0-0-0,format=qcow2,cache=writeback -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive file=/tmp/libguestfsagFFSQ/overlay1,if=none,id=drive-scsi0-0-1-0,format=qcow2,cache=unsafe -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0 -chardev socket,id=charserial0,path=/tmp/libguestfsagFFSQ/console.sock -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/tmp/libguestfsagFFSQ/guestfsd.sock -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
[root@devstack-large-vm qemu]# ls -lt
total 84
-rw-------. 1 root root  8279 Jun  9 09:59 instance-00000001.log
-rw-------. 1 root root  2070 Jun  9 09:29 instance-00000007.log
-rw-------. 1 root root 63444 Jun  6 09:20 instance-00000004.log
-rw-------. 1 root root  2071 Jun  6 07:12 instance-00000002.log
[root@devstack-large-vm qemu]# date
Mon Jun  9 10:24:32 UTC 2014

^^ No instance with 0a name above ^^
</span><div><span style="font-family:courier new,monospace"><br></span><div class="gmail_extra"><span style="font-family:courier new,monospace">So the qemu process above is using the disk from my instance but its -name is not instance-XXXXX tho' </span><br>
<span style="font-family:courier new,monospace"><b>OS-EXT-SRV-ATTR:instance_name | instance-0000000a </b> <br></span></div><div class="gmail_extra"><span style="font-family:courier new,monospace">show 0a as instance name<br>
<br></span></div><div class="gmail_extra"><span style="font-family:courier new,monospace">virsh shows it as running...<br><br>[stack@devstack-large-vm ~]$ [ready] virsh list<br> Id Name State<br>
----------------------------------------------------<br><b> 2 guestfs-tgx65m4jvhgmu8la running</b><br><br></span></div><div class="gmail_extra"><span style="font-family:courier new,monospace">thanx,<br>deepak<br>
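
PS: one cross-check I can do is to see which libvirt domain is actually holding the instance's disk (just a sketch using the UUID from the nova show output above; virsh list --name needs a reasonably recent libvirt):

  for d in $(virsh list --all --name); do
      virsh dumpxml "$d" | grep -q bc598be6-999b-4ab9-8a0a-1c4d29dd811a && echo "$d"
  done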

On Mon, Jun 9, 2014 at 3:50 PM, Sergey Kolekonov <skolekonov@mirantis.com> wrote:
</span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><span style="font-family:courier new,monospace">Hi,</span><div><span style="font-family:courier new,monospace"><br>
</span></div><div><span style="font-family:courier new,monospace">have you inspected /var/log/libvirt/</span><span style="font-family:courier new,monospace">qemu/instance-xxxxxxxx.log files? Such problem can be connected with your hypervisor. Especially if you use nested kvm virtualization on your host machine.</span></div>
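A couple of quick checks, run as root (just a sketch; kvm_intel applies to Intel hosts, it's kvm_amd on AMD, and the nested parameter only exists once the module is loaded):

  tail -n 50 /var/log/libvirt/qemu/instance-*.log    # qemu-side errors, if any
  journalctl -u libvirtd | tail -n 50                # libvirtd-side errors, if any
  egrep -c '(vmx|svm)' /proc/cpuinfo                 # does the host expose hardware virt at all?
  cat /sys/module/kvm_intel/parameters/nested        # is nested virt enabled?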
</div><div class="gmail_extra"><span style="font-family:courier new,monospace"><br><br></span><div class="gmail_quote"><div><div><span style="font-family:courier new,monospace">On Mon, Jun 9, 2014 at 2:06 PM, Deepak Shetty <span dir="ltr"><<a href="mailto:dpkshetty@gmail.com" target="_blank">dpkshetty@gmail.com</a>></span> wrote:<br>
</span>
(Hit send by mistake... continuing below)

On Mon, Jun 9, 2014 at 3:32 PM, Deepak Shetty <dpkshetty@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><span style="font-family:courier new,monospace">Hi All,<br></span></div>
<span style="font-family:courier new,monospace"> The last time I sent this issue, I didn't had the right tags in the subject so sending with the right tags, in the hope that the right folks might help provide some clues.<br>
</span>
<span style="font-family:courier new,monospace"><br></span></div><span style="font-family:courier new,monospace">I am usign latest devstack on F20 and I see VMs stuck in 'spawning' state.<br>No errors in n-sch, n-cpu and I checked other n-* screen logs too, no errors whatsoever<br>
<br></span></div><span style="font-family:courier new,monospace">There is enuf memory and disk space available for the tiny and small flavour VMs I am trying to run<br></span>
<span style="font-family:courier new,monospace"><br></span></div>
2014-06-09 09:57:19.935 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 1905
2014-06-09 09:57:19.935 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 46
2014-06-09 09:57:19.936 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 1
2014-06-09 09:57:19.936 AUDIT nova.compute.resource_tracker [-] PCI stats: []

The only thing I see other than INFO/DEBUG is the below:
2014-06-09 09:57:42.700 WARNING nova.compute.manager [-] Found 3 in the database and 0 on the hypervisor.
<span style="font-family:courier new,monospace"><br></span></div><div class="gmail_extra"><span style="font-family:courier new,monospace">What does that mean ? Does that say that Nova sees 3 VMs in its DB but a query to the hyp (via libvirt I assume) returns none ?<br>
<br></span></div><div class="gmail_extra"><span style="font-family:courier new,monospace">Interestingly I can see the qemu process for my VM that nova says is stuck in Spawning state and virsh list also lists' it<br>
</span>
</div><div class="gmail_extra"><span style="font-family:courier new,monospace">But i don't see the libvirt.xml for this stuck VM (from nova's perspective) in the /opt/stack/data/nova/</span><span style="font-family:courier new,monospace">instances/<uuid> folder.<br>
<br>I was trying to figure what this means ? Where is nova stuck ? Screen logs just doesn't point to anything useful to debug further.<br></span>
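A few things I can still check to see how far image creation actually got (just a sketch; the paths assume devstack's default /opt/stack/data/nova layout):

  # what has nova written for this instance so far?
  ls -l /opt/stack/data/nova/instances/bc598be6-999b-4ab9-8a0a-1c4d29dd811a/

  # has the backing image been fetched into the image cache yet?
  ls -l /opt/stack/data/nova/instances/_base/

  # is the instance disk created, and does it point at a backing file?
  qemu-img info /opt/stack/data/nova/instances/bc598be6-999b-4ab9-8a0a-1c4d29dd811a/disk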
<span style="font-family:courier new,monospace"><br></span></div><div class="gmail_extra"><span style="font-family:courier new,monospace">From the n-cpu logs i see the "Creating image" info log from nova.virt.libvirt and thats the last<br>
</span></div><div class="gmail_extra"><span style="font-family:courier new,monospace">I don't see the info log "Creating isntance" from nova.virt.. so it looks like its stuck somewhere between creating image and spawning instance.. but virsh list and qemu process says otherwise<br>
Looking for some debug hints here.

thanx,
deepak
<span style="font-family:courier new,monospace"><br></span></div></div><span style="font-family:courier new,monospace">______________________________</span><span style="font-family:courier new,monospace">_________________<br>
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
Post to : <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a><br>
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br></span>
<span style="font-family:courier new,monospace"><br></span></blockquote></div><span style="font-family:courier new,monospace"><br></span></div>
</blockquote></div><span style="font-family:courier new,monospace"><br></span></div></div></div></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>