I initially did not have this setting in nova.conf. I then read the OpenStack docs, set it as below, restarted the nova services, and re-created the instance, but with the same outcome.

libvirt_cpu_mode=host-passthrough
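For reference, this is roughly how I am checking what the guest actually gets (a sketch only; instance-0000000e is the instance name from my earlier mail, so adjust it to whatever your deployment reports):

# dump the running domain definition and look at the <cpu> element
virsh dumpxml instance-0000000e | grep -A5 '<cpu'

With 2 VCPUs in the flavor I would expect the sockets/cores/threads topology shown there to line up with whatever ends up in the -smp argument of the kvm command line.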
But I didn't have this issue with Windows Server 2012; it reported 2 virtual CPUs correctly.

Regards,
Balu

On Tue, Mar 5, 2013 at 6:10 PM, Sylvain Bauza <sylvain.bauza@digimind.com> wrote:
Could you please tell us what your libvirt_cpu_model is?
Please check both nova.conf and the instance's libvirt.xml.
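For instance, something along these lines should show both (the on-disk path is from memory and may differ on your setup):

grep libvirt_cpu_ /etc/nova/nova.conf
grep -A5 '<cpu' /var/lib/nova/instances/instance-0000000e/libvirt.xml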
-Sylvain
On 05/03/2013 11:30, Balamurugan V G wrote:
Hi,

I am running Folsom (2012.2) with a KVM compute node. When I launch a Windows instance with a flavor that has 2 VCPUs and 2 GB RAM, the guest sees the RAM fine but not the 2 CPUs; it reports only one virtual processor. When I look at the command-line options with which KVM has launched the instance, I see that in the -smp argument, sockets is set to 2 and cores is set to 1. How can I get cores to be 2 so that the guest OS can see both CPUs?
107 7764 1 14 14:35 ? 00:11:27 /usr/bin/kvm -name instance-0000000e -S -M pc-1.0 -cpu
core2duo,+lahf_lm,+osxsave,+xsave,+sse4.1,+dca,+pdcm,+xtpr,+cx16,+tm2,+est,+vmx,+ds_cpl,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds
-enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -uuid
cd6dd64c-d792-4b31-92bf-ed8ad1ea46cb -nodefconfig -nodefaults
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0000000e.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew
-no-kvm-pit-reinjection -no-shutdown
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
-drive file=/var/lib/nova/instances/instance-0000000e/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=24,id=hostnet0
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:46:68:2e,bus=pci.0,addr=0x3
-chardev file,id=charserial0,path=/var/lib/nova/instances/instance-0000000e/console.log
-device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1
-device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0
-vnc 0.0.0.0:3 -k en-us -vga cirrus
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
root@openstack-kvm1:~#
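For illustration, what I am hoping to end up with is a <cpu> topology along these lines in the domain XML (just a hand-written sketch, not something nova generated for me):

<vcpu>2</vcpu>
<cpu>
  <topology sockets='1' cores='2' threads='1'/>
</cpu>

As far as I understand, libvirt would turn that into -smp 2,sockets=1,cores=2,threads=1, which is what I am after.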
Regards,
Balu
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp