[Openstack-operators] [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild
Arvind N
arvindn05 at gmail.com
Tue May 1 22:26:58 UTC 2018
Reminder for operators: please provide feedback either way.
When rebuilding an instance with a different image, where the image traits
have changed between the original launch and the rebuild, is it reasonable
to ask users to simply launch a new instance with the new image instead?
The argument for this approach is that, since the requirements have
changed, we want the scheduler to pick and allocate an appropriate host
for the instance.
This approach also gives consistent results, whereas with the other
approaches the rebuild may or may not succeed depending on how the
original allocation of resources went.
For example (from Alex Xu): suppose you launched an instance on a host
which has two SRIOV NICs. One is a normal SRIOV NIC (A); the other has
some kind of offload feature (B).
So the original request is: resources=SRIOV_VF:1, and the instance gets a
VF from the normal SRIOV NIC (A).

But with the new image, the new request is: resources=SRIOV_VF:1
traits=HW_NIC_OFFLOAD_XX
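In placement API terms, the two requests would look something like the
following (a rough sketch; the exact parameter name and syntax depend on
the placement microversion, required= in recent ones):

    # original image: any VF will do
    GET /allocation_candidates?resources=SRIOV_VF:1

    # new image: only a VF from a provider with the offload trait
    GET /allocation_candidates?resources=SRIOV_VF:1&required=HW_NIC_OFFLOAD_XX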
With all the solutions discussed in the thread, a rebuild request like the
above may or may not succeed depending on whether NIC A or NIC B was
allocated during the initial launch.
Remember that in a rebuild no new allocations happen; we have to reuse the
existing allocations.
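Concretely, the stored allocation a rebuild has to work with looks roughly
like this (illustrative UUIDs; response bodies abbreviated):

    GET /allocations/{instance_uuid}
    {"allocations": {"<rp_uuid_of_nic_A>": {"resources": {"SRIOV_VF": 1}}}}

    GET /resource_providers/<rp_uuid_of_nic_A>/traits
    {"traits": []}                      <- no offload trait: the check fails

    GET /resource_providers/<rp_uuid_of_nic_B>/traits
    {"traits": ["HW_NIC_OFFLOAD_XX"]}   <- would match, but we hold no allocation here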
Given the above background, there seem to be two competing options.
1. Fail in the API, saying you can't rebuild with a new image that has new
required traits.
2. Look at the instance's current allocations and try to match the new
requirements from the image against those allocations.
With #1, we get consistent results with regard to how rebuilds are treated
when the image traits change.
With #2, the rebuild may or may not succeed, depending on how well the
original allocations match up with the new requirements (a sketch of such
a check is below).
#2 will also need to account for preferred traits or granular resource
traits if we decide to implement them for images at some
point...
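For concreteness, here is a minimal sketch of what the option #2 check
could look like, talking to the placement REST API directly. The endpoint,
token and function name are illustrative, not actual Nova code, and this
simple version checks the required traits against the union of traits
across all providers the instance is allocated against:

    import requests

    PLACEMENT = 'http://placement.example.com'   # illustrative endpoint
    HEADERS = {
        'X-Auth-Token': '<token>',               # illustrative auth
        'OpenStack-API-Version': 'placement 1.17',
    }

    def rebuild_traits_satisfied(instance_uuid, required_traits):
        # A rebuild reuses the existing allocations, so fetch those first.
        url = '%s/allocations/%s' % (PLACEMENT, instance_uuid)
        allocations = requests.get(url, headers=HEADERS).json()['allocations']

        # Collect the traits of every provider the instance is allocated
        # against (and only those providers, not their siblings).
        allocated_traits = set()
        for rp_uuid in allocations:
            url = '%s/resource_providers/%s/traits' % (PLACEMENT, rp_uuid)
            allocated_traits |= set(
                requests.get(url, headers=HEADERS).json()['traits'])

        # Proceed with the rebuild only if every new required trait is
        # already satisfied by the existing allocations.
        return set(required_traits) <= allocated_traits

In the SRIOV example this is exactly where the inconsistency shows up: the
check passes if the original allocation happened to land on NIC B, and
fails if it landed on NIC A.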
[1]
https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/glance-image-traits.html
[2] https://review.openstack.org/#/c/560718/
On Tue, Apr 24, 2018 at 6:26 AM, Sylvain Bauza <sbauza at redhat.com> wrote:
> Sorry folks for the late reply, I'll try to also weigh in on the Gerrit
> change.
>
> On Tue, Apr 24, 2018 at 2:55 PM, Jay Pipes <jaypipes at gmail.com> wrote:
>
>> On 04/23/2018 05:51 PM, Arvind N wrote:
>>
>>> Thanks for the detailed options, Matt/Eric/Jay.
>>>
>>> Just few of my thoughts,
>>>
>>> For #1, we can make the explanation very clear that we rejected the
>>> request because the original traits specified in the original image and the
>>> new traits specified in the new image do not match, and hence rebuild is not
>>> supported.
>>>
>>
>> I believe I had suggested that on the spec amendment patch. Matt had
>> concerns about an error message being a poor user experience (I don't
>> necessarily disagree with that) and I had suggested a clearer error message
>> to try and make that user experience slightly less sucky.
>>
>> For #3,
>>>
>>> Even though it handles the nested provider, there is a potential issue.
>>>
>>> Let's say a host has two SRIOV NICs. One is a normal SRIOV NIC (VF1);
>>> the other has some kind of offload feature (VF2). (Described by Alex.)
>>>
>>> The initial instance launch happens with VF1 allocated; the rebuild
>>> launches with a modified request with traits=HW_NIC_OFFLOAD_X, so basically
>>> we want the instance to be allocated VF2.
>>>
>>> But the original allocation happened against VF1, and since in a rebuild
>>> the original allocations are not changed, we end up with the wrong
>>> allocations.
>>>
>>
>> Yep, that is certainly an issue. The only solution to this that I can see
>> would be to have the conductor ask the compute node to do the pre-flight
>> check. The compute node already has the entire tree of providers, their
>> inventories and traits, along with information about providers that share
>> resources with the compute node. It has this information in the
>> ProviderTree object in the reportclient that is contained in the compute
>> node resource tracker.
>>
>> The pre-flight check, if run on the compute node, would be able to grab
>> the allocation records for the instance and determine if the required
>> traits for the new image are present on the actual resource providers
>> allocated against for the instance (and not including any child providers
>> not allocated against).
>>
>>
> Yup, that. We also have pre-flight checks for move operations like live
> and cold migrations, and I'd really like to keep all the conditionals in
> the conductor, because it knows better than the scheduler which operation
> is being requested.
> I'm not really happy with adding more in the scheduler about "yeah, it's a
> rebuild, so please do something exceptional", and I'm also not happy with
> having a filter (that can be disabled) calling the Placement API.
>
>
>> Or... we chalk this up as a "too bad" situation and just either go with
>> option #1 or simply don't care about it.
>
>
> Also, that too. Maybe just providing an error should be enough, no?
> Operators, what do you think? (cross-posting to openstack-operators@)
>
> -Sylvain
>
>
>>
>> Best,
>> -jay
>>
--
Arvind N