[ops] [nova] Wrong reported memory hypervisor usage
Massimo Sgaravatto
massimo.sgaravatto at gmail.com
Wed Feb 27 11:16:01 UTC 2019
Thanks for the prompt feedback.
This [*] is the output of "openstack hypervisor show cld-blu-01.cloud.pd.infn.it".
Let me also add that if I restart openstack-nova-compute on the hypervisor,
then "nova host-describe" shows the right information for a few seconds:
[root@cld-ctrl-01 ~]# nova host-describe cld-blu-01.cloud.pd.infn.it
+-----------------------------+----------------------------------+-----+-----------+---------+
| HOST                        | PROJECT                          | cpu | memory_mb | disk_gb |
+-----------------------------+----------------------------------+-----+-----------+---------+
| cld-blu-01.cloud.pd.infn.it | (total)                          | 8   | 32722     | 241     |
| cld-blu-01.cloud.pd.infn.it | (used_now)                       | 19  | 39424     | 95      |
| cld-blu-01.cloud.pd.infn.it | (used_max)                       | 19  | 38912     | 95      |
| cld-blu-01.cloud.pd.infn.it | b08eede75d5e4be4b0fe21e68fa9c688 | 1   | 2048      | 20      |
| cld-blu-01.cloud.pd.infn.it | 7890b3e262264529a19f9743cf2f14bc | 16  | 32768     | 50      |
| cld-blu-01.cloud.pd.infn.it | 4d8187cffa6a4085ad4a357b8a5afc03 | 2   | 4096      | 25      |
+-----------------------------+----------------------------------+-----+-----------+---------+
but after a while it goes back to publishing 16256 MB as used_now.
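For what it's worth, the numbers reported right after the restart are what I would
expect: summing the per-project memory_mb rows above gives 2048 + 32768 + 4096 = 38912
(the used_max value), and adding nova's reserved_host_memory_mb (512 MB by default,
assuming we have not overridden it in nova.conf) gives 39424 (the used_now value).
A minimal sketch of that arithmetic, assuming the default reservation (this is not
nova code, just the expected accounting):

    # Expected memory accounting for cld-blu-01 (sketch).
    per_project_ram_mb = {
        "b08eede75d5e4be4b0fe21e68fa9c688": 2048,
        "7890b3e262264529a19f9743cf2f14bc": 32768,
        "4d8187cffa6a4085ad4a357b8a5afc03": 4096,
    }
    reserved_host_memory_mb = 512  # assumption: nova.conf default

    expected_used_max = sum(per_project_ram_mb.values())             # 38912
    expected_used_now = expected_used_max + reserved_host_memory_mb  # 39424
    print(expected_used_max, expected_used_now)

Note also that free_ram_mb in the "openstack hypervisor show" output below [*] is
-6702, which is exactly 32722 - 39424, so the expected figure seems to be tracked
somewhere even while memory_mb_used is reported as 16255.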
Thanks, Massimo
[*]
# openstack hypervisor show cld-blu-01.cloud.pd.infn.it
+----------------------+--------------------------------------------------------------------------------+
| Field                | Value                                                                          |
+----------------------+--------------------------------------------------------------------------------+
| aggregates           | [u'Unipd-AdminTesting-Unipd', u'Unipd-HPC-Physics', u'Unipd-model-glasses---DISC', u'Unipd-DECAMP', u'Unipd-Biodiversity-Macro-biomes', u'Unipd-CMB4PrimordialNonGaussianity', u'Unipd-Links-in-Channel', u'Unipd-TC4SAP', u'Unipd-Diabetes_Risk', u'Unipd-LabTrasporti', u'Unipd-Hydrological_DA', u'Unipd-Smart_Enterprise_in_Cloud', u'Unipd-DEM_Sim_ICEA', u'Unipd-Plasmonics', u'Unipd-QuBiMM', u'Unipd-MedComp', u'Unipd-BigDataComputingCourse', u'Unipd-DSB---Sci.-Biomed', u'Unipd-Notion', u'Unipd-The_role_of_trade-offs_in_competitive_ecosystems', u'Unipd-MMS_Cloud', u'Unipd-SID2016', u'Unipd-QuantumFuture', u'Unipd-Link_Translocation-DFA', u'Unipd-Hopping_Transport_in_Lithium_Niobate', u'Unipd-Negapedia', u'Unipd-SIGNET-ns3', u'Unipd-SIGNET-MATLAB', u'Unipd-QST', u'Unipd-AbinitioTransport', u'Unipd-CalcStat', u'Unipd-PhysicsOfData-students', u'Unipd-DiSePaM', u'Unipd-Few-mode-optical-fibers', u'Unipd-cleva'] |
| cpu_info             | {"vendor": "Intel", "model": "SandyBridge-IBRS", "arch": "x86_64", "features": ["pge", "avx", "xsaveopt", "clflush", "sep", "syscall", "tsc-deadline", "dtes64", "stibp", "msr", "xsave", "vmx", "xtpr", "cmov", "ssse3", "est", "pat", "monitor", "smx", "pbe", "lm", "tsc", "nx", "fxsr", "tm", "sse4.1", "pae", "sse4.2", "pclmuldq", "cx16", "pcid", "vme", "mmx", "osxsave", "cx8", "mce", "de", "rdtscp", "ht", "dca", "lahf_lm", "pdcm", "mca", "pdpe1gb", "apic", "sse", "pse", "ds", "invtsc", "pni", "tm2", "aes", "sse2", "ss", "ds_cpl", "arat", "acpi", "spec-ctrl", "fpu", "ssbd", "pse36", "mtrr", "popcnt", "x2apic"], "topology": {"cores": 4, "cells": 2, "threads": 1, "sockets": 1}} |
| current_workload     | 0 |
| disk_available_least | 124 |
| free_disk_gb         | 146 |
| free_ram_mb          | -6702 |
| host_ip              | 192.168.60.150 |
| host_time            | 11:56:28 |
| hypervisor_hostname  | cld-blu-01.cloud.pd.infn.it |
| hypervisor_type      | QEMU |
| hypervisor_version   | 2010000 |
| id                   | 132 |
| load_average         | 0.11, 0.19, 0.21 |
| local_gb             | 241 |
| local_gb_used        | 23 |
| memory_mb            | 32722 |
| memory_mb_used       | 16255 |
| running_vms          | 4 |
| service_host         | cld-blu-01.cloud.pd.infn.it |
| service_id           | 171 |
| state                | up |
| status               | enabled |
| uptime               | 85 days, 34 min |
| users                | 1 |
| vcpus                | 8 |
| vcpus_used           | 19 |
+----------------------+--------------------------------------------------------------------------------+
[root@cld-ctrl-01 ~]#
On Wed, Feb 27, 2019 at 11:47 AM Sean Mooney <smooney at redhat.com> wrote:
> On Wed, 2019-02-27 at 09:33 +0100, Massimo Sgaravatto wrote:
> > In the dashboard of my OpenStack Ocata cloud I see that the reported memory usage is wrong for the hypervisors.
> > As far as I understand, that information should correspond to the "used_now" field of the "nova host-describe"
> nova host-describe has been removed in later versions of nova.
> > command. And indeed, considering a hypervisor:
> >
> >
> > # nova host-describe cld-blu-01.cloud.pd.infn.it
> >
> > +-----------------------------+----------------------------------+-----+-----------+---------+
> > | HOST                        | PROJECT                          | cpu | memory_mb | disk_gb |
> > +-----------------------------+----------------------------------+-----+-----------+---------+
> > | cld-blu-01.cloud.pd.infn.it | (total)                          | 8   | 32722     | 241     |
> > | cld-blu-01.cloud.pd.infn.it | (used_now)                       | 19  | 16259     | 23      |
> > | cld-blu-01.cloud.pd.infn.it | (used_max)                       | 19  | 38912     | 95      |
> > | cld-blu-01.cloud.pd.infn.it | b08eede75d5e4be4b0fe21e68fa9c688 | 1   | 2048      | 20      |
> > | cld-blu-01.cloud.pd.infn.it | 7890b3e262264529a19f9743cf2f14bc | 16  | 32768     | 50      |
> > | cld-blu-01.cloud.pd.infn.it | 4d8187cffa6a4085ad4a357b8a5afc03 | 2   | 4096      | 25      |
> > +-----------------------------+----------------------------------+-----+-----------+---------+
> >
> >
> > So for the memory it reports 16259 for used_now, while as far as I understand it should be 38912 + the memory used by the hypervisor.
> >
> > Am I missing something?
> The hypervisor API's memory_mb_used is (reserved memory + the total of all memory associated with instances on the host).
> I would have expected used_now to reflect that.
> What does "openstack hypervisor show cld-blu-01.cloud.pd.infn.it" print?
>
> >
> > Thanks, Massimo
> >
>
>