[Openstack-operators] Instance memory overhead

Joe Topjian joe at topjian.net
Tue Jun 23 16:58:30 UTC 2015


In addition to what Kris said, here are two other ways to see memory usage
of qemu processes:

The first is with "nova diagnostics <uuid>". By default this is an
admin-only command.
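
For example (a sketch -- assuming admin credentials are sourced and
"compute-01" stands in for one of your compute nodes):

  # list the instances scheduled to a given compute node,
  # then pull the memory counters for one of them
  nova list --all-tenants --host compute-01
  nova diagnostics <uuid>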

The second is by running "virsh dommemstat <instance-id>" directly on the
compute node.
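
To sweep the whole node at once, a quick loop over libvirt's running
domains works (run as root on the compute node):

  # print the per-guest memory counters (values are in KiB) for every
  # running domain
  for dom in $(virsh list --name); do
      echo "== $dom =="
      virsh dommemstat "$dom"
  done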

Note that it's possible for the used memory (rss) to be greater than the
available memory. When this happens, I believe it is due to the qemu
process consuming more memory than the guest itself was allocated -- the
instance has consumed all of its available memory, plus qemu itself needs
some overhead to function properly. Someone please correct me if I'm wrong.
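
As a rough sanity check, you can compare a guest's configured memory against
what its qemu process is actually holding. A sketch (the pidfile path is
what libvirt uses on a typical Ubuntu install; the domain name is
hypothetical):

  dom=instance-000001a2                            # hypothetical domain name
  virsh dominfo "$dom" | grep -i 'max memory'      # what the flavor allows
  pid=$(cat /var/run/libvirt/qemu/"$dom".pid)
  grep VmRSS /proc/"$pid"/status                   # what qemu actually holds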

Hope that helps,
Joe

On Tue, Jun 23, 2015 at 10:12 AM, Kris G. Lindgren <klindgren at godaddy.com>
wrote:

>   Not totally sure I am following - the output of free would help a lot.
>
>  However, the number you should be caring about is free + buffers/cache.
> The reason for your discrepancy is that you are including the filesystem
> content that Linux caches in memory in order to improve performance. On
> boxes with enough RAM this can easily be 60+ GB.  When the system comes
> under memory pressure (from applications or the kernel wanting more memory)
> the kernel will evict cached filesystem items to free up memory for
> processes.  This link [1] has a pretty good description of what I am
> talking about.
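>
>  For example, on the procps that ships with 12.04 (a sketch - the exact
> row layout depends on your version of free):
>
> free -m
> # the "free" column of the "-/+ buffers/cache" row is what is actually
> # available to applications; the "free" column of the top row will look
> # alarmingly small on any box that has been up for a while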
>
>  Either way, if you want to test to make sure this is a case of
> filesystem caching you can run:
>
> echo 3 > /proc/sys/vm/drop_caches
>
>  Which will tell Linux to drop all filesystem cache from memory, and I
> bet a ton of your memory will show up.  Note: in doing so you will affect
> the performance of the box, since what used to be an in-memory lookup will
> now have to go to the filesystem.  However, over time the cache will
> re-establish itself.  You can find more examples of how caching interacts
> with other parts of the Linux memory system here: [2]
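>
>  A minimal before/after check (run as root; sync first so dirty pages are
> written out before the cache is dropped):
>
> free -m                             # note the "cached" column
> sync
> echo 3 > /proc/sys/vm/drop_caches
> free -m                             # cached shrinks, free grows by about as much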
>
>  To your question about the qemu processes: if you use ps aux, the
> columns VSZ and RSS will tell you what you are wanting.  VSZ is the virtual
> size (how much memory the process has asked the kernel for).  RSS is the
> resident set size, or the actual amount of non-swapped memory the process
> is using.
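>
>  For example, to sum RSS across every qemu process on the box (a sketch -
> this assumes the processes are named qemu-system-x86_64, as on Ubuntu):
>
> ps -C qemu-system-x86_64 -o rss= | awk '{s+=$1} END {printf "%.1f GiB\n", s/1024/1024}'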
>
>  [1] - http://www.linuxatemyram.com/
>  [2] - http://www.linuxatemyram.com/play.html
>  ____________________________________________
>
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy, LLC.
>
>
>   From: Mike Leong <leongmzlist at gmail.com>
> Date: Tuesday, June 23, 2015 at 9:44 AM
> To: "openstack-operators at lists.openstack.org" <
> openstack-operators at lists.openstack.org>
> Subject: [Openstack-operators] Instance memory overhead
>
>   My instances are using much more memory than expected.  The amount of
> free memory (free + cached) is under 3G on my servers even though the
> compute nodes are configured to reserve 32G.
>
>  Here's my setup:
> Release: Icehouse
>  Server mem: 256G
> Qemu version: 2.0.0+dfsg-2ubuntu1.1
> Networking: Contrail 1.20
> Block storage: Ceph 0.80.7
> Hypervisor OS: Ubuntu 12.04
> memory over-provisioning is disabled
> kernel version: 3.11.0-26-generic
>
>  On nova.conf
>  reserved_host_memory_mb = 32768
>
>  Info on instances:
> - root volume is file backed (qcow2) on the hypervisor local storage
> - each instance has a rbd volume mounted from Ceph
> - no swap file/partition
>
>  I've confirmed, via nova-compute.log, that nova is respecting the
> reserved_host_memory_mb directive and is not over-provisioning.  On some
> hypervisors, nova-compute says there's 4GB available for use even though
> the OS has less than 4G left (free + cached)!
>
>  I've also summed up the memory from the /etc/libvirt/qemu/*.xml files and
> the total looks good.
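>
>  (e.g. something along these lines, since libvirt stores <memory> in KiB:
>
> grep -h '<memory' /etc/libvirt/qemu/*.xml | grep -oE '[0-9]+' | \
>   awk '{s+=$1} END {printf "%.1f GiB\n", s/1024/1024}'
>  )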
>
>  Each hypervisor hosts about 45-50 instances.
>
>  Is there good way to calculate the actual usage of each QEMU process?
>
>  PS: I've tried free, summing up RSS, and smem, but none of them can tell
> me where the missing memory is.
>
>  thx
> mike
>