[openstack-dev] [openstack] [nova] Instance snapshot creation locally and Negative values returned by Resource Tracker

Vilobh Meshram vilobhmeshram.openstack at gmail.com
Wed Nov 4 02:45:20 UTC 2015


Hi All,

I see negative values being returned by the resource tracker, which is
surprising, since enough capacity is available on the Hypervisor (as seen
in the df -ha output [0]). In my setup I have configured nova.conf to
create instance snapshots locally and I *don't have* the DiskFilter enabled.

By local instance snapshot I mean that snapshot creation (and the
RAW => QCOW2 conversion) happens on the Hypervisor where the instance was
created. After the conversion the snapshot is uploaded to Glance and
deleted from the Hypervisor.
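
For concreteness, here is a minimal sketch of that flow (my own
illustration, not Nova's code); the paths, the free_gb() helper, and the
scratch-space check are assumptions, and the Glance upload step is elided:

import os
import subprocess

def free_gb(path):
    # Free space on the filesystem containing `path`, in GB.
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize / float(1024 ** 3)

def snapshot_locally(raw_path, qcow2_path):
    # During conversion the RAW source and the QCOW2 target coexist on
    # the Hypervisor, so roughly one extra image of scratch space is
    # needed until the upload finishes.
    needed_gb = os.path.getsize(raw_path) / float(1024 ** 3)
    if free_gb(os.path.dirname(qcow2_path)) < needed_gb:
        raise RuntimeError("not enough scratch space for conversion")
    subprocess.check_call(["qemu-img", "convert", "-f", "raw",
                           "-O", "qcow2", raw_path, qcow2_path])
    # ... upload qcow2_path to Glance here ...
    os.unlink(qcow2_path)  # delete the snapshot from the Hypervisor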

My questions are:

1. compute_nodes['free_disk_gb'] is not in sync with the actual free disk
capacity of that partition as seen by df -ha [0] (see /home).

This is because the resource tracker is returning negative values for
free_disk_gb [1], which happens because the value of
resources['local_gb_used'] is greater than resources['local_gb']. The
value of resources['local_gb_used'] should ideally be the local gigabytes
actually used on the Hypervisor (787G [0]) but is in fact the local
gigabytes allocated on the Hypervisor (3525G [0]). Allocated is the sum of
the used capacity on the Hypervisor plus the space consumed by the
instances spawned on it (their size depends on the flavor each VM was
spawned with). Because of [2], the used space on the Hypervisor is
discarded and only the space consumed by the instances on the HV is taken
into consideration.
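
To make the arithmetic concrete, here is a toy calculation (not Nova
code) using the numbers from [0]:

local_gb = 787         # physical size of the instances partition (787G in [0])
local_gb_used = 3525   # the "allocated" figure from [0], per [2]
free_disk_gb = local_gb - local_gb_used
print(free_disk_gb)    # -2738: negative, even though df still shows free space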

Was there a specific reason to do this, i.e. to reset the value of
resources['local_gb_used'] in [2]?

2. Is seeing negative values for compute_nodes['free_disk_gb'] and
compute_nodes['disk_available_least'] a normal pattern? When can we expect
to see them?

3. Let's say I plan to enable the DiskFilter in the future, so that the
scheduler will avoid picking this Hypervisor once it nears its capacity
(considering it needs enough space for snapshot creation and, later,
scratch space for the RAW => QCOW2 conversion). Will that help keep the
resource tracker from returning negative values? Is there a recommended
disk overcommit ratio for scenarios where you create/convert snapshots
locally before uploading to Glance?
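
For reference, here is my paraphrase of the Liberty DiskFilter logic
(nova/scheduler/filters/disk_filter.py); the signature is simplified and
not verbatim, and requested_disk_mb stands for the instance's root +
ephemeral disk plus swap, in MB:

def host_passes(host_state, requested_disk_mb, disk_allocation_ratio=1.0):
    free_disk_mb = host_state.free_disk_mb
    total_usable_mb = host_state.total_usable_disk_gb * 1024
    # The overcommit ratio scales total disk, not free disk.
    limit_mb = total_usable_mb * disk_allocation_ratio
    used_mb = total_usable_mb - free_disk_mb
    usable_mb = limit_mb - used_mb
    # Once free_disk_mb goes negative, used_mb exceeds the limit and the
    # host is filtered out, so the filter would at least stop new
    # instances from landing on an over-allocated Hypervisor.
    return usable_mb >= requested_disk_mb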

4. How will multiple snapshot requests for instances on the same
Hypervisor be handled? Until a request reaches the compute node it has no
clear picture of the free capacity on the HV, which can leave instances
unusable. Would something along the lines of [3] help? How do people using
local snapshots handle this right now?
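
For what it's worth, one stopgap I can imagine (purely a sketch of mine,
not what [3] proposes) is serializing conversions per Hypervisor with an
external lock, bounding the worst-case scratch-space demand to one image
at a time; the lock name and the helper are illustrative:

import subprocess

from oslo_concurrency import lockutils

@lockutils.synchronized('local-snapshot-convert', external=True)
def convert_snapshot(raw_path, qcow2_path):
    # With the file lock held, at most one RAW => QCOW2 conversion runs
    # on this host at a time.
    subprocess.check_call(["qemu-img", "convert", "-f", "raw",
                           "-O", "qcow2", raw_path, qcow2_path])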

-Vilobh

[0] http://paste.openstack.org/show/477926/
[1]
https://github.com/openstack/nova/blob/stable/liberty/nova/compute/resource_tracker.py#L576
[2]
https://github.com/openstack/nova/blob/stable/liberty/nova/compute/resource_tracker.py#L853
[3] https://review.openstack.org/#/c/208078/