[Openstack] Performance issue on single physical host
Kostyantyn Volenbovskyi
volenbovsky at yandex.ru
Mon Dec 19 09:58:18 UTC 2016
Hi,
a few thoughts:
- Analyze where exactly the time is spent, using the Nova logs (api + scheduler + conductor + compute), the Neutron logs (agent, port binding) and the libvirt/QEMU logs. Use the request ID (req-...) and the UUID of the instance to follow the 'slow' case across the logs.
Take a look at the 'create instance' flow that is described on a number of websites and draft how the time is distributed across the stages
(side note: maybe there is a tool that does exactly that?). Turn on debug logging in Nova and Neutron to narrow it down further if necessary; a rough example follows this list.
- Compare CPU/RAM consumption on the host in the normal case and in the 'slow' case.
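As a rough illustration of the first point (the log paths assume a typical Ubuntu packaged install, and the req-... value is a placeholder you take from the API log or from 'nova show'):

  # Follow one boot request across the services:
  grep 'req-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' \
      /var/log/nova/nova-api.log \
      /var/log/nova/nova-scheduler.log \
      /var/log/nova/nova-conductor.log \
      /var/log/nova/nova-compute.log \
      /var/log/neutron/neutron-server.log

  # The libvirt side is keyed by the instance UUID / domain name:
  grep '<instance-uuid>' /var/log/libvirt/libvirtd.log

  # Debug logging: in nova.conf and neutron.conf, [DEFAULT] section, set
  #   debug = True
  #   verbose = True
  # and restart the services.

Comparing the first and last timestamps per log file then gives a rough per-stage breakdown.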
> How do I bind my vCPUs all the way to the physical host?
The answer is CPU pinning. But note that pinning is about CPU placement/utilization on the host, not about the time a VM launch takes.
For a general-purpose case CPU pinning can be overkill: a general-purpose workload relies on scheduling by the host OS, and that should normally not be a problem. Most configurations also run with the default CPU oversubscription of 16 (1 physical CPU = 16 vCPUs).
'Specific purpose' cases such as NFV/Telco are where CPU pinning is used much more often. If you want to try it anyway, a sketch follows below.
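A minimal sketch for a Kilo-era setup (the flavor name and the CPU range are just placeholders, adjust them to your host):

  # nova.conf on the compute node, [DEFAULT] section:
  #   vcpu_pin_set = 2-25          # vCPUs Nova may hand out, keep a few for the host OS
  #   cpu_allocation_ratio = 16.0  # default oversubscription (1 pCPU = 16 vCPUs)

  # Request dedicated (pinned) CPUs via a flavor extra spec:
  nova flavor-key m1.pinned set hw:cpu_policy=dedicated

Restart nova-compute after changing nova.conf.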
BR,
Konstantin
> On Dec 19, 2016, at 5:27 AM, Vikram Chhibber <vikram.chhibber at gmail.com> wrote:
>
> Hi All,
>
> I am using the Kilo release 1:2015.1.4-0ubuntu2 for my lab deployment on a single physical host. The issue is that I get very low performance when I launch multiple VM instances. The first one boots up within seconds, but the second takes noticeably longer. If I instantiate a third instance while two instances are already running, it may take 30 minutes to come up. Moreover, the CPU idle % of a given instance keeps decreasing as the number of running instances increases.
> I suspect that this behavior is caused by the lack of binding of vCPUs to physical CPUs.
> Because of the single-node installation, the compute node itself is virtualized and runs my instances within it. How do I bind my vCPUs all the way to the physical host? I did not see any documentation regarding this for the Kilo release, and there is no mention of binding CPUs for the virtualized compute node in a single-node installation.
>
> This is the specification of the host:
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 36
> On-line CPU(s) list: 0-35
> Thread(s) per core: 2
> Core(s) per socket: 18
> Socket(s): 1
> NUMA node(s): 1
> Vendor ID: GenuineIntel
> CPU family: 6
> Model: 63
> Stepping: 2
> CPU MHz: 1200.000
> BogoMIPS: 4599.94
> Virtualization: VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 256K
> L3 cache: 46080K
> NUMA node0 CPU(s): 0-35
>
>
> Spec. for the compute node:
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 26
> On-line CPU(s) list: 0-25
> Thread(s) per core: 1
> Core(s) per socket: 1
> Socket(s): 26
> NUMA node(s): 1
> Vendor ID: GenuineIntel
> CPU family: 6
> Model: 6
> Stepping: 3
> CPU MHz: 2299.996
> BogoMIPS: 4599.99
> Virtualization: VT-x
> Hypervisor vendor: KVM
> Virtualization type: full
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 4096K
> NUMA node0 CPU(s): 0-25
>
>
> Spec for my instance:
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 4
> On-line CPU(s) list: 0-3
> Thread(s) per core: 1
> Core(s) per socket: 1
> Socket(s): 4
> NUMA node(s): 1
> Vendor ID: GenuineIntel
> CPU family: 15
> Model: 6
> Stepping: 1
> CPU MHz: 2299.994
> BogoMIPS: 4599.98
> Virtualization: VT-x
> Hypervisor vendor: KVM
> Virtualization type: full
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 4096K
> NUMA node0 CPU(s): 0-3
>
> Thanks
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack