[openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

Sylvain Bauza sbauza at redhat.com
Thu May 31 19:19:49 UTC 2018


On Thu, May 31, 2018 at 5:00 PM, Jay Pipes <jaypipes at gmail.com> wrote:

> On 05/31/2018 05:10 AM, Sylvain Bauza wrote:
>
>> After considering the whole approach and discussing with a couple of folks
>> over IRC, here is what I feel is the best approach for a seamless upgrade:
>>   - VGPU inventory will be kept on root RP (for the first type) in Queens
>> so that a compute service upgrade won't impact the DB
>>   - during Queens, operators can run a DB online migration script (like
>> the ones we currently have in
>> https://github.com/openstack/nova/blob/c2f42b0/nova/cmd/manage.py#L375)
>> that will create a new resource provider for the first type and move the
>> inventory and allocations to it.
>>   - it's the responsibility of the virt driver code to check whether a
>> child RP named after the first type already exists, in order to know
>> whether to update the inventory against the root RP or the child RP.
>>
>> Does it work for folks?
>>
>
> No, sorry, that doesn't work for me. It seems overly complex and fragile,
> especially considering that VGPUs are not moveable anyway (no support for
> live migrating them). Same goes for CPU pinning, NUMA topologies, PCI
> passthrough devices, SR-IOV PF/VFs and all the other "must have" features
> that have been added to the virt driver over the last 5 years.
>
> My feeling is that we should not attempt to "migrate" any allocations or
> inventories between root or child providers within a compute node, period.
>
>
I don't understand why you're talking about *moving* an instance. My concern
was about upgrading a compute node to Rocky when some instances using vGPUs
are already running on it.
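
To make the second bullet of the quoted proposal concrete, here is a rough
sketch of what such an online migration could look like, modeled on the
existing migrations in nova/cmd/manage.py. All the helper names here
(_compute_node_providers, _ensure_child_provider, _move_allocations,
_vgpu_type_name) are hypothetical, not actual nova code:

def migrate_vgpu_inventory(ctxt, max_count):
    """Move VGPU inventory and allocations from root RPs to child RPs.

    Returns (found, migrated) like the other online data migrations, so
    it can be batched with --max-count.
    """
    found = migrated = 0
    for root in _compute_node_providers(ctxt):      # hypothetical helper
        vgpu_inv = root.inventory.get('VGPU')
        if vgpu_inv is None:
            continue                                # node exposes no vGPUs
        found += 1
        if migrated >= max_count:
            continue
        # Child RP named after the first enabled vGPU type, as proposed.
        child = _ensure_child_provider(ctxt, root, _vgpu_type_name(root))
        child.set_inventory('VGPU', vgpu_inv)
        _move_allocations(ctxt, root, child, 'VGPU')  # repoint allocations
        root.delete_inventory('VGPU')
        migrated += 1
    return found, migrated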


> The virt drivers should simply error out of update_provider_tree() if
> there are ANY existing VMs on the host AND the virt driver wishes to begin
> tracking resources with nested providers.
>
> The upgrade operation should look like this:
>
> 1) Upgrade placement
> 2) Upgrade nova-scheduler
> 3) start loop on compute nodes. for each compute node:
>  3a) disable nova-compute service on node (to take it out of scheduling)
>  3b) evacuate all existing VMs off of node
>  3c) upgrade compute node (on restart, the compute node will see no
>      VMs running on the node and will construct the provider tree inside
>      update_provider_tree() with an appropriate set of child providers
>      and inventories on those child providers)
>  3d) enable nova-compute service on node
>
> Which is virtually identical to the "normal" upgrade process whenever
> there are significant changes to the compute node -- such as upgrading
> libvirt or the kernel. Nested resource tracking is another such significant
> change and should be dealt with in a similar way, IMHO.
>
>
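For reference, step 3 of that loop amounts to scripting something like the
following against each compute node. All four helpers are hypothetical
wrappers around the service and migration APIs, not real client calls:

def rolling_upgrade(compute_hosts):
    # Steps 1 and 2 (placement, then nova-scheduler) are assumed done.
    for host in compute_hosts:
        disable_compute_service(host)      # 3a) take node out of scheduling
        for server in servers_on(host):    # 3b) evacuate VMs off the node
            live_migrate(server)
        upgrade_and_restart(host)          # 3c) on restart the node is empty,
                                           #     so update_provider_tree() can
                                           #     build the nested providers
        enable_compute_service(host)       # 3d) return node to scheduling
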
Upgrading to Rocky for vGPUs doesn't require also upgrading libvirt or the
kernel. So why should operators need to "evacuate" (I understood that as
"migrate") instances if they don't need to upgrade their host OS?

> Best,
> -jay