[openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

Nisha Agarwal agarwalnisha1980 at gmail.com
Tue Apr 11 07:54:54 UTC 2017


Hi John,

>With ironic I thought everything is "passed through" by default,
>because there is no virtualization in the way. (I am possibly
>incorrectly assuming no BIOS tricks to turn off or re-assign PCI
>devices dynamically.)

Yes, with ironic everything is passed through by default.

>So I am assuming this is purely a scheduling concern. If so, why are
>the new custom resource classes not good enough? "ironic_blue" could
>mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
>and one 1Gb nic, etc.
>Or is there something else that needs addressing here? Trying to
>describe what you get with each flavor to end users?
Yes, this is purely from a scheduling perspective.
Currently ironic works like this: we discover server attributes and
populate them into the node object. The nova scheduler then uses these
attributes, via the ComputeCapabilitiesFilter, to schedule the node. So
this is automated on the ironic side: we inspect the node's
properties/attributes, the user creates a flavor of their choice, and a
node that meets the user's needs is scheduled for the ironic deploy.
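
For illustration, the existing flow looks roughly like this from the
nova side (a sketch only: the flavor name, the sizes and the has_gpu
capability are made up, and I assume an already-built keystoneauth1
session in sess):

    # Sketch of the current capabilities-based flow (names illustrative).
    # Inspection has already populated node.properties['capabilities'],
    # e.g. "boot_mode:uefi,has_gpu:true", on the ironic side.
    from novaclient import client as nova_client

    nova = nova_client.Client('2', session=sess)

    # Key a bare-metal flavor to the inspected capabilities; nova's
    # ComputeCapabilitiesFilter matches these extra specs against
    # node.properties['capabilities'] at scheduling time.
    flavor = nova.flavors.create('bm.gpu', ram=131072, vcpus=16, disk=900)
    flavor.set_keys({'capabilities:has_gpu': 'true'})
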
With resource class names in place in ironic, we ask the user to do a
manual step, i.e. create a resource class name based on the hardware
attributes, and this needs to be done on a per-node basis. For this the
user needs to know the server's hardware properties in advance, and then
assign the resource class name to the node(s) manually.
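
Concretely, the manual step is something like this (again just a
sketch; the node UUID variable, the class name and the flavor are
illustrative):

    # Sketch of the manual per-node tagging step (names illustrative).
    from ironicclient import client as ironic_client

    ironic = ironic_client.get_client('1', session=sess)

    # The operator must already know this node carries a GPU before
    # tagging it; node_uuid is assumed to hold the node's UUID.
    ironic.node.update(node_uuid, [
        {'op': 'add', 'path': '/resource_class', 'value': 'baremetal.gpu'},
    ])

    # The matching flavor (reusing 'flavor' from the previous sketch)
    # then asks placement for exactly one unit of that class; the name
    # is normalized to CUSTOM_BAREMETAL_GPU.
    flavor.set_keys({'resources:CUSTOM_BAREMETAL_GPU': '1'})
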
Put broadly: if we want to support quantity-based scheduling for ironic
nodes, there is no way to do it through the current resource class
structure (actually just a tag) in ironic. A user may want to schedule
ironic nodes on different resources, and each resource should be a
different resource class (IMO).
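
To make the quantitative gap concrete, this is the kind of request a
tag-style resource class cannot express today (purely hypothetical
resource class names; nothing reports such per-device inventory yet):

    # Hypothetical only: a flavor requesting *quantities* of discovered
    # PCI resources. One tag-like resource class per node cannot express
    # this; it would need per-device inventory in placement.
    flavor.set_keys({
        'resources:CUSTOM_PCI_GPU': '2',      # two GPUs
        'resources:CUSTOM_PCI_NIC_10G': '2',  # two 10Gb NICs
    })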

>Are you needing to aggregate similar hardware in a different way to the
>above resource class approach?
I guess no, but the above resource class approach takes away the
automation on the ironic side, and the whole purpose of inspection is
defeated.

Regards
Nisha


On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt <john at johngarbutt.com> wrote:

> On 10 April 2017 at 11:31,  <sfinucan at redhat.com> wrote:
> > On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:
> >> Hi team,
> >>
> >> Please could you pour in your suggestions on the mail?
> >>
> >> I raised a blueprint in Nova for this:
> >> https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic
> >> and two RFEs on the ironic side:
> >> https://bugs.launchpad.net/ironic/+bug/1680780 and
> >> https://bugs.launchpad.net/ironic/+bug/1681320
> >> for the discussion topic.
> >
> > If I understand you correctly, you want to be able to filter ironic
> > hosts by available PCI device, correct? Barring any possibility that
> > resource providers could do this for you yet, extending the nova ironic
> > driver to use the PCI passthrough filter sounds like the way to go.
>
> With ironic I thought everything is "passed through" by default,
> because there is no virtualization in the way. (I am possibly
> incorrectly assuming no BIOS tricks to turn off or re-assign PCI
> devices dynamically.)
>
> So I am assuming this is purely a scheduling concern. If so, why are
> the new custom resource classes not good enough? "ironic_blue" could
> mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
> and one 1Gb nic, etc.
>
> Or is there something else that needs addressing here? Trying to
> describe what you get with each flavor to end users? Are you needing
> to aggregate similar hardware in a different way to the above
> resource class approach?
>
> Thanks,
> johnthetubaguy
>



-- 
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If you do that, you are in control
of your life. If you don't, life controls you.