[nova][ironic] need help understanding 'cpu_arch' in nova ironic driver
melanie witt
melwittt at gmail.com
Wed Dec 18 04:43:05 UTC 2019
On 12/13/19 06:18, Julia Kreger wrote:
> On Fri, Dec 13, 2019 at 4:34 AM Dmitry Tantsur <dtantsur at redhat.com> wrote:
>>
>> Thank you for starting this!
Thank you both for your input on this issue!
>> On Fri, Dec 13, 2019 at 2:48 AM melanie witt <melwittt at gmail.com> wrote:
>>>
>>> Hey all,
>>>
>>> Recently I'm trying to figure out how to fulfill a use case in the nova
>>> ironic driver around treating an ironic node's 'cpu_arch' as optional.
>>>
>>> This is coming up as a result of a downstream issue [1] and recent
>>> change on the ironic side [2] to make cpu_arch optional for iscsi
>>> deployments. The change makes it so that ironic will _not_ include a
>>> 'cpu_arch' attribute as part of a node's properties for iscsi deployments.
>>>
>>> On the nova side, we have a filter scheduler ImagePropertiesFilter which
>>> will only match a node if the architecture, hypervisor_type, and vm_mode
>>> in the glance image properties match the cpu_arch of the node. We have
>>> always pulled the cpu_arch from the ironic node properties since
>>> the original ironic driver code was added to nova.
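The matching behavior described above can be sketched roughly like this. This is a simplified illustration of the idea, not nova's actual ImagePropertiesFilter code; the function and dict shapes are invented for the sketch:

```python
# Simplified sketch of architecture matching as described in the thread:
# an image that requests an 'architecture' only matches a node whose
# cpu_arch equals it, and a node with no cpu_arch matches nothing that
# requests one (hence NoValidHost). Illustrative only, not nova code.

def host_passes(image_props, node_props):
    """Return True if the image's requested architecture (if any)
    matches the node's cpu_arch (if known)."""
    wanted_arch = image_props.get('architecture')
    if wanted_arch is None:
        # Image does not request a specific architecture: any node is fine.
        return True
    node_arch = node_props.get('cpu_arch')
    if node_arch is None:
        # Node has no cpu_arch (e.g. optional for iscsi deployments):
        # the filter cannot match, which surfaces as NoValidHost.
        return False
    return wanted_arch == node_arch
```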
>>>
>>> Now, the use case I'm trying to solve today [3] is where an iscsi ironic
>>> deployment has no cpu_arch specified on the ironic side and the deployer
>>> wants to use glance images with architecture specified in their image
>>> properties. Today (and historically always) the request to create an
>>> instance in this situation will fail with NoValidHost because the nova
>>> ironic driver could not determine a cpu_arch from the ironic node.
>>
>>
>> I tend to think that if the image requires an explicit architecture, we should keep failing. However, hypervisor_type==baremetal should probably be a valid case (and I don't really know what vm_mode is, so cannot comment here).
This makes sense to me and I agree.
The more I thought about it, the more I was thinking that using an image
with a specific architecture property is directly in opposition to the
desire to have cpu_arch optional in the environment. If everything is
single-arch and the deployer will not specify cpu_arch in ironic, it
would be consistent for them not to specify an arch in the glance image either.
>>> My questions are:
>>>
>>> * Is this a valid thing to want to do?
>>>
>>> * If it is valid, what is the correct way that we should handle missing
>>> cpu_arch in the nova ironic driver? Should we guess at a default
>>> cpu_arch? If so, how? Or should we add a new config option where a
>>> default cpu_arch can be specified? Or, other?
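One of the options raised above (a new config option supplying a default cpu_arch) could look roughly like this. This is purely hypothetical: no such `default_cpu_arch` option exists in nova today, and the names here are invented for illustration:

```python
# Hypothetical sketch of the "config option" idea from the questions above:
# fall back to a deployer-supplied default when the ironic node carries no
# cpu_arch in its properties. 'default_cpu_arch' is an invented option name;
# nova has no such option at the time of this thread.

DEFAULT_CPU_ARCH = None  # would come from a new (hypothetical) config option


def node_cpu_arch(node_properties, default=DEFAULT_CPU_ARCH):
    """Return the node's cpu_arch, falling back to a configured default."""
    arch = node_properties.get('cpu_arch')
    if arch is None:
        # May still be None if the deployer configured no default either.
        arch = default
    return arch
```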
>>
>>
>> Worst case, we can document cpu_arch as being required for nova. A bit inconvenient, but certainly not the end of the world.
To be clear, it's only required for nova in the case of a glance image
with a specific architecture included in the image properties.
It's sounding to me like the proper solution in this use case is to
_not_ set the architecture glance image property if the deployment
prefers to treat the cpu_arch as optional.
> Now that I understand where this entire discussion is coming from...
> there seems to be a strong desire among some of the larger operators
> (and kind of has been for a while) to have a single ironic, or as few
> ironics as possible, to manage their hardware fleet. So I'm not sure
> how well it would work if they were always required to set the
> cpu_arch field. Then again, most of these deployments use inspector to
> also do wiring and system configuration validation prior to
> deployment, so they would have cpu_arch then.... Hmmm, this is a conundrum.
Yeah, I was thinking the same thing. If they want to avoid setting
cpu_arch in ironic, but want to use glance images with architecture
specified, they end up having to specify the cpu_arch in nova by way of
a config option (or something), which defeats the point of trying to
avoid setting cpu_arch.
Yet another reason it's sounding like for this use case, they should not
specify 'architecture' in the glance image properties.
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1653788
>>> [2] https://review.opendev.org/620634
>>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1688838
>>>
>