[openstack-dev] [nova] Plans to fix numa_topology related issues with migration/resize/evacuate
Nikola Đipanov
ndipanov at redhat.com
Wed Mar 4 17:20:08 UTC 2015
On 03/04/2015 03:17 PM, Wensley, Barton wrote:
> Hi,
>
> I have been exercising the numa topology related features in kilo (cpu
> pinning, numa topology, huge pages) and have seen that there are issues
> when an operation moves an instance between compute nodes. In summary,
> the numa_topology is not recalculated for the destination node, which
> results in the instance running with the wrong topology (or even
> failing to run if the topology isn't supported on the destination).
> This impacts live migration, cold migration, resize and evacuate.
>
> I have spent some time over the last couple weeks and have a working
> fix for these issues that I would like to push upstream. The fix for
> cold migration and resize is the most straightforward, so I plan to
> start there.
>
First of all thanks for all the hard work on this. Some comments on the
proposed changes below - but as usual it's best to see the code :)
> At a high level, here is what I have done to fix cold migrate and
> resize:
> - Add the source_numa_topology and dest_numa_topology to the migration
> object and migrations table.
The Migration object has access to the instance, and thus to its current
topology. It also seems that we always load the instance anyway when we
query for migrations in the resource tracker.
Also - it might be better to have something akin to the 'new_' flavor
for the new topology, so we can store both in the instance_extra table,
which would be slightly more consistent.
Again - best to see the code first.
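Until the patches are up, here is a toy sketch of the kind of thing I
mean (not actual Nova code - 'new_numa_topology' is just a made-up name
mirroring old_flavor/new_flavor, and the real field would live on the
Instance object backed by instance_extra):

    # Toy sketch only - mirrors the old_flavor/new_flavor pattern for NUMA
    # topology, so both the current and the claimed topology stay with the
    # instance (instance_extra) rather than on the migration record.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FakeInstance:
        numa_topology: Optional[str] = None      # what the instance runs with now
        new_numa_topology: Optional[str] = None  # made-up field, claimed on the dest

    def resize_claim(instance, claimed_topology):
        # Destination node: remember the claimed topology, keep the old one intact.
        instance.new_numa_topology = claimed_topology

    def finish_resize(instance):
        # Destination node: the claimed topology becomes the real one.
        instance.numa_topology = instance.new_numa_topology
        instance.new_numa_topology = None

    def finish_revert_resize(instance):
        # Source node: the original topology was never touched, just drop the claim.
        instance.new_numa_topology = None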
> - When a resize_claim is done, store the claimed numa topology in the
> dest_numa_topology in the migration record. Also store the current
> numa topology as the source_numa_topology in the migration record.
> - Use the source_numa_topology and dest_numa_topology from the
> migration record in the resource accounting when referencing
> migration claims as appropriate. This is done for claims, dropped
> claims and the resource audit.
> - Set the numa_topology in the instance to the dest_numa_topology from
> the migration object once the cold migration/resize is finished - done
> in the finish_resize RPC on the destination compute to match where the
> rest of the resources for the instance are updated (there is a call to
> _set_instance_info here that sets the memory, vcpus, disk space, etc.
> for the migrated instance).
> - Set the numa_topology in the instance to the source_numa_topology
> from the migration object if the cold migration/resize is reverted -
> done in the finish_revert_resize RPC on the source compute.
>
> I would appreciate any comments on my approach. I plan to start
> submitting the code for this against bug 1417667 - I will split it
> into several chunks to make it easier to review.
>
All of the above sounds relatively reasonable overall.
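If I am reading the proposal right, the cold migrate/resize flow would
end up looking roughly like the toy sketch below (field names taken from
your mail, everything else made up for illustration - this is not real
Nova code):

    # Toy model of the proposed flow: the source/dest topologies live on the
    # migration record and the instance is only touched once the move finishes.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FakeMigration:
        source_numa_topology: Optional[str] = None
        dest_numa_topology: Optional[str] = None

    @dataclass
    class FakeInstance:
        numa_topology: Optional[str] = None

    def resize_claim(instance, migration, claimed_topology):
        # Destination node: record both topologies on the migration so the
        # resource accounting can use them while the migration is in flight.
        migration.source_numa_topology = instance.numa_topology
        migration.dest_numa_topology = claimed_topology

    def topology_for_audit(migration, is_source_node):
        # Resource audit: each node accounts for its own side of the migration.
        return (migration.source_numa_topology if is_source_node
                else migration.dest_numa_topology)

    def finish_resize(instance, migration):
        # finish_resize RPC on the destination: instance picks up the new topology.
        instance.numa_topology = migration.dest_numa_topology

    def finish_revert_resize(instance, migration):
        # finish_revert_resize RPC on the source: original topology restored.
        instance.numa_topology = migration.source_numa_topology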
I'd like to hear from Jay, Sylvain and other scheduler devs on how they
see this impacting some of the planned blueprints like the RequestSpec
one [1].
Also note that this will require completely fixing the NUMA filter as
well - I've proposed a way to do it here [2].
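Just to spell out why the filter matters for moves: the destination has
to be able to fit a topology recalculated for that host, not the one
already pinned on the source. In very simplified form (an illustration
of the general idea only, not the change in [2]):

    # Simplified illustration only - the real logic lives in the scheduler's
    # NUMA topology filter and the fitting code in nova.virt.hardware.
    def host_passes(host_topology, requested_topology, fit):
        # 'fit' stands in for the real fitting function; it returns a fitted
        # topology for this host, or None if the request cannot be satisfied.
        if requested_topology is None:
            return True
        return fit(host_topology, requested_topology) is not None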
N.
[1] https://blueprints.launchpad.net/nova/+spec/request-spec-object
[2] https://review.openstack.org/160484
> Fixing live migration was significantly more effort - I'll start a
> different thread on that once I have feedback on the above approach.
>
> Thanks,
>
> Bart Wensley, Member of Technical Staff, Wind River