[openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova
Nisha Agarwal
agarwalnisha1980 at gmail.com
Tue Apr 11 16:25:07 UTC 2017
Hi Jay, Dmitry,
>I strongly challenge the assertion made here that inspection is only
useful in scheduling contexts.
OK, I agree that scheduling is not the only purpose of inspection, but it is
one of the main aspects of inspection.
>There are users who simply want to know about their hardware, and read the
results as posted to swift.
This is true only for ironic-inspector. If we say all the features of
ironic-inspector are "OK" for ironic, then why is OOB inspection not allowed
to discover or do the same things that ironic-inspector already does?
ironic-inspector already discovers the pci-device data in the format nova
supports. Why do the features supported by ironic-inspector not have to go
through ironic review for capabilities etc.? ironic-inspector does have its
own review process, but it doesn't centralize its approach (at least the
field/attribute names) in ironic, which is, and should be, a common thing
between in-band inspection and out-of-band inspection.
All of the above is said just to emphasize that ironic-inspector is not the
only way of doing inspection in ironic.
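To illustrate the point about pci-device data: whichever tool does the
inspection (in-band or OOB), the data it reports is essentially a list of
vendor/product ID pairs that a nova-style PCI alias can match against. A
minimal sketch (the field names and the matching helper are my assumptions
for illustration, not code from either project):

```python
# Illustrative sketch only: the kind of pci-device data inspection
# produces, and how a nova-style alias spec could be matched against it.
# Exact field names in ironic-inspector/nova may differ.

# What an inspection tool might report for one node (assumed format).
pci_devices = [
    {"vendor_id": "8086", "product_id": "1572"},  # e.g. a 10Gb NIC
    {"vendor_id": "10de", "product_id": "1b38"},  # e.g. a GPU
]

# A nova-style PCI alias restricts on vendor_id/product_id.
alias = {"vendor_id": "10de", "product_id": "1b38"}

def count_matching(devices, spec):
    """Count devices whose fields all match the given spec."""
    return sum(
        1 for dev in devices
        if all(dev.get(k) == v for k, v in spec.items())
    )

print(count_matching(pci_devices, alias))  # -> 1
```

The scheduling question is then just whether this count is tracked per node
and exposed to the scheduler, regardless of which component discovered it.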
> Inspection also handles discovery of new nodes when given basic
information about them.
This applies only to ironic-inspector.
> Also ironic-inspector is useful for automatically defining resource
classes on nodes, so I'm not sure about this purpose being defeated as well.
I wasn't aware that the creation of custom resource classes is already
automated by ironic-inspector. If it is already there, it should be done
by ironic instead of ironic-inspector, because that is required even by OOB
inspection. If the solution is in ironic, OOB inspection can also use it
for scheduling.
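For reference, the automation Dmitry mentions can be expressed as an
introspection rule that patches the node's resource_class when a condition
on the introspection data holds. A rough sketch of such a rule body follows;
the "set-attribute" action and "data://" field syntax are from the
ironic-inspector rules API as I understand it, while the specific field
path and values are assumptions for illustration:

```python
import json

# Rough sketch of an ironic-inspector introspection rule that tags any
# node reporting a particular PCI vendor ID with a custom resource class.
# Treat the exact condition path and values as assumptions to verify
# against the inspector documentation.
rule = {
    "description": "Assign a GPU resource class automatically",
    "conditions": [
        # Assumed path into the introspection data for PCI devices.
        {"op": "eq",
         "field": "data://pci_devices[0].vendor_id",
         "value": "10de"},
    ],
    "actions": [
        # set-attribute patches the node; here it sets resource_class.
        {"action": "set-attribute",
         "path": "/resource_class",
         "value": "baremetal.gpu"},
    ],
}

# This is the JSON body one would POST to the inspector rules endpoint.
print(json.dumps(rule, indent=2))
```

An equivalent mechanism in ironic itself would let OOB inspection drivers
reuse the same rule format instead of duplicating it.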
Regards
Nisha
On Tue, Apr 11, 2017 at 9:34 PM, Dmitry Tantsur <dtantsur at redhat.com> wrote:
> On 04/11/2017 05:28 PM, Jay Faulkner wrote:
>
>>
>> On Apr 11, 2017, at 12:54 AM, Nisha Agarwal <agarwalnisha1980 at gmail.com>
>>> wrote:
>>>
>>> Hi John,
>>>
>>> With ironic I thought everything is "passed through" by default,
>>>> because there is no virtualization in the way. (I am possibly
>>>> incorrectly assuming no BIOS tricks to turn off or re-assign PCI
>>>> devices dynamically.)
>>>>
>>>
>>> Yes with ironic everything is passed through by default.
>>>
>>> So I am assuming this is purely a scheduling concern. If so, why are
>>>> the new custom resource classes not good enough? "ironic_blue" could
>>>> mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
>>>> and one 1Gb nic, etc.
>>>> Or is there something else that needs addressing here? Trying to
>>>> describe what you get with each flavor to end users?
>>>>
>>> Yes this is purely from scheduling perspective.
>>> Currently how ironic works is we discover server attributes and populate
>>> them into node object. These attributes are then used for further
>>> scheduling of the node from nova scheduler using ComputeCapabilities
>>> filter. So this is something automated on ironic side, like we do
>>> inspection of the node properties/attributes and user need to create the
>>> flavor of their choice and the node which meets the user need is scheduled
>>> for ironic deploy.
>>> With resource class name in place in ironic, we ask user to do a manual
>>> step i.e. create a resource class name based on the hardware attributes and
>>> this need to be done on per node basis. For this user need to know the
>>> server hardware properties in advance before assigning the resource class
>>> name to the node(s) and then assign the resource class name manually to the
>>> node.
>>> In a broad way if i say, if we want to support scheduling based on
>>> quantity for ironic nodes there is no way we can do it through current
>>> resource class structure(actually just a tag) in ironic. A user may want
>>> to schedule ironic nodes on different resources and each resource should be
>>> a different resource class (IMO).
>>>
>>> Are you needing to aggregating similar hardware in a different way to
>>>> the above
>>>> resource class approach?
>>>>
>>> i guess no but the above resource class approach takes away the
>>> automation on the ironic side and the whole purpose of inspection is
>>> defeated.
>>>
>>>
>> I strongly challenge the assertion made here that inspection is only
>> useful in scheduling contexts. There are users who simply want to know
>> about their hardware, and read the results as posted to swift. Inspection
>> also handles discovery of new nodes when given basic information about them.
>>
>
> Also ironic-inspector is useful for automatically defining resource
> classes on nodes, so I'm not sure about this purpose being defeated as well.
>
> /me makes a note to provide a few examples of such approach in
> ironic-inspector docs
>
> Not sure about OOB inspection though.
>
>
>
>> -
>> Jay Faulkner
>> OSIC
>>
>> Regards
>>> Nisha
>>>
>>>
>>> On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt <john at johngarbutt.com>
>>> wrote:
>>> On 10 April 2017 at 11:31, <sfinucan at redhat.com> wrote:
>>>
>>>> On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:
>>>>
>>>>> Hi team,
>>>>>
>>>>> Please could you pour in your suggestions on the mail?
>>>>>
>>>>> I raised a blueprint in Nova for this
>>>>> https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic
>>>>> and two RFEs at ironic side
>>>>> https://bugs.launchpad.net/ironic/+bug/1680780 and
>>>>> https://bugs.launchpad.net/ironic/+bug/1681320 for the discussion topic.
>>>>>
>>>>
>>>> If I understand you correctly, you want to be able to filter ironic
>>>> hosts by available PCI device, correct? Barring any possibility that
>>>> resource providers could do this for you yet, extending the nova ironic
>>>> driver to use the PCI passthrough filter sounds like the way to go.
>>>>
>>>
>>> With ironic I thought everything is "passed through" by default,
>>> because there is no virtualization in the way. (I am possibly
>>> incorrectly assuming no BIOS tricks to turn off or re-assign PCI
>>> devices dynamically.)
>>>
>>> So I am assuming this is purely a scheduling concern. If so, why are
>>> the new custom resource classes not good enough? "ironic_blue" could
>>> mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
>>> and one 1Gb nic, etc.
>>>
>>> Or is there something else that needs addressing here? Trying to
>>> describe what you get with each flavor to end users? Are you needing
>>> to aggregating similar hardware in a different way to the above
>>> resource class approach?
>>>
>>> Thanks,
>>> johnthetubaguy
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> --
>>> The Secret Of Success is learning how to use pain and pleasure, instead
>>> of having pain and pleasure use you. If You do that you are in control
>>> of your life. If you don't life controls you.
>>>
>>
>>
>>
>>
>
>