[Ironic] Teaching virtualbmc how to talk to Ironic?

Donny Davis donny at fortnebula.com
Mon Oct 19 11:33:37 UTC 2020


On Mon, Oct 19, 2020 at 7:31 AM Donny Davis <donny at fortnebula.com> wrote:

>
>
> On Sun, Oct 18, 2020 at 11:57 PM Dan Sneddon <dsneddon at redhat.com> wrote:
>
>> Steve, Lars,
>>
>> I just wanted to throw an idea out there:
>>
>> Redfish requires user accounts. Is there a way to stretch this idea into
>> a more general Redfish proxy? Ironic would have to create a temporary user
>> account for Redfish in the BMC, and then provide the tenant with the
>> IP:port of a TCP proxy to connect to.
>>
>> The TCP proxy would be deactivated when the node was deleted. When the
>> node was cleaned, the user account would be removed. When nodes were
>> scheduled, the Ironic-generated user accounts would be removed and
>> recreated as needed based on the scheduling request. Ironic settings could
>> be provided to control the access rights/group of the user accounts.
>>
>> A variant of this would create and manage the user account but not
>> provide a TCP proxy; instead, perhaps the BMC location could be returned.
>> This would not meet the criterion of not providing direct access to the
>> hardware. We would have to remove accounts from the BMC when the node was
>> deleted or cleaned, and even then it could still be less secure.
>>
>> A downside of the role-based user scheduling approach is that Ironic then
>> requires credentials for the BMC that include user account management,
>> which could run counter to security policy.
>>
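For illustration, a minimal sketch of the account-lifecycle calls this idea
implies, assuming a BMC that exposes the standard DMTF Redfish
AccountService and using python-requests; the endpoint, role and credential
values are placeholders, not anything Ironic does today:

    # Sketch: create/remove a short-lived Redfish account for a tenant.
    import secrets
    import requests

    BMC = "https://192.0.2.10"                      # placeholder BMC address
    ADMIN_AUTH = ("ironic-admin", "admin-password") # Ironic's own BMC credentials

    def create_tenant_account(username):
        """Create a temporary Redfish account; return its URI and password."""
        password = secrets.token_urlsafe(16)
        resp = requests.post(
            f"{BMC}/redfish/v1/AccountService/Accounts",
            json={"UserName": username, "Password": password,
                  "RoleId": "Operator", "Enabled": True},
            auth=ADMIN_AUTH, verify=False)  # BMCs commonly use self-signed certs
        resp.raise_for_status()
        # The Location header points at the new account, e.g. .../Accounts/4
        return resp.headers["Location"], password

    def delete_tenant_account(account_uri):
        """Remove the account on node deletion/cleaning so tenant credentials
        don't outlive the assignment."""
        resp = requests.delete(f"{BMC}{account_uri}", auth=ADMIN_AUTH, verify=False)
        resp.raise_for_status()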
>> On Sun, Oct 18, 2020 at 6:08 PM Steve Baker <sbaker at redhat.com> wrote:
>>
>>>
>>> On 17/10/20 7:29 am, Lars Kellogg-Stedman wrote:
>>>
>>> > In the work that we're doing with the Mass Open Cloud [1], we're
>>> > looking at using Ironic (and the multi-tenant support we contributed)
>>> > to manage access to a shared pool of hardware while still permitting
>>> > people to use their own provisioning tools.
>>> >
>>> > We don't want to expose the hardware BMC directly to consumers; we
>>> > want Ironic to act as the access control mechanism for all activities
>>> > involving the hardware.
>>> >
>>> > The missing part of this scenario is that at the moment this would
>>> > require provisioning tools to know how to talk to the Ironic API if
>>> > they want to perform BMC actions on the host, such as controlling
>>> > power.
>>> >
>>> > While talking with Mainn the other day, it occurred to me that maybe
>>> > we could teach virtualbmc [2] how to talk to Ironic, so that we could
>>> > provide a virtual IPMI interface to provisioning tools. There are some
>>> > obvious questions here around credentials (I think we'd probably
>>> > generate them randomly when assigning control of a piece of hardware
>>> > to someone, but that's more of an implementation detail).
>>> >
>>> > I wanted to sanity check this idea: does this seem reasonable? Are
>>> > there alternatives you would suggest?
>>>
>>> As far as I'm aware, an IPMI host:port endpoint will manage exactly one
>>> baremetal host, with no obvious mechanism to specify which host to
>>> control when you have multiple hosts behind a single endpoint. These
>>> days with the rise of Redfish I think IPMI is considered a legacy
>>> interface now.
>>>
>>> I suspect a BMC interface is not the right abstraction for a
>>> multi-tenant baremetal API, that's why Ironic was started in the first
>>> place ;)
>>>
>>> If there are provisioning tools frequently used by the target audience
>>> of Mass Open Cloud which have poor Ironic API support then we'd like to
>>> know what those tools are so we can improve that support.
>>>
>>> > Thanks!
>>> >
>>> > [1] https://github.com/CCI-MOC/esi
>>> > [2] https://github.com/openstack/virtualbmc
>>>
>> --
>> Dan Sneddon         |  Senior Principal Software Engineer
>> dsneddon at redhat.com |  redhat.com/cloud
>> dsneddon:irc        |  @dxs:twitter
>>
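For reference, a rough sketch of what Lars's virtualbmc idea could look
like: an IPMI endpoint built on pyghmi's Bmc class (which virtualbmc
already uses) that proxies power commands to the Ironic API via
openstacksdk instead of to a real BMC. One endpoint serves exactly one
node, per Steve's point above. The class name, credentials and node UUID
are illustrative only:

    # Sketch: a per-node IPMI endpoint that forwards power actions to Ironic.
    import openstack
    from pyghmi.ipmi import bmc

    class IronicBmc(bmc.Bmc):
        def __init__(self, authdata, port, node_ident, cloud='ironic'):
            super().__init__(authdata, port)
            self.conn = openstack.connect(cloud=cloud)
            self.node = node_ident  # Ironic node name or UUID

        def get_power_state(self):
            node = self.conn.baremetal.get_node(self.node)
            return 'on' if node.power_state == 'power on' else 'off'

        def power_on(self):
            self.conn.baremetal.set_node_power_state(self.node, 'power on')

        def power_off(self):
            self.conn.baremetal.set_node_power_state(self.node, 'power off')

        def power_shutdown(self):
            # A graceful shutdown maps to Ironic's soft power off
            self.conn.baremetal.set_node_power_state(self.node, 'soft power off')

        def power_reset(self):
            self.conn.baremetal.set_node_power_state(self.node, 'rebooting')

    if __name__ == '__main__':
        # The credentials here would be the randomly generated ones Lars
        # mentions, handed out when a node is assigned to a tenant.
        IronicBmc({'tenant-user': 'random-password'}, port=6230,
                  node_ident='11111111-2222-3333-4444-555555555555').listen()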

/donnyd shakes fist at gmail

Lars,
Please forgive my ignorance if I misunderstood your question, but why not
just use Nova with the ironic driver? Nova hides the bare metal abstraction
from the user, and it's pretty easy to get set up and running with Ironic.
That way users can request a predefined bare metal flavor and the rest is
handled for them, keeping consumers and admins separate and providing
multi-tenant functionality without exposing any critical details of the
underlying infra.
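A minimal sketch of what that looks like from the tenant side, assuming
openstacksdk and a clouds.yaml entry; the flavor, image and network names
are placeholders:

    # Sketch: a tenant boots bare metal the same way as any other server.
    import openstack

    conn = openstack.connect(cloud='moc')
    server = conn.compute.create_server(
        name='my-baremetal-node',
        flavor_id=conn.compute.find_flavor('bm.large').id,   # bare metal flavor
        image_id=conn.compute.find_image('centos8').id,
        networks=[{'uuid': conn.network.find_network('tenant-net').id}],
        key_name='my-key')
    conn.compute.wait_for_server(server)
    # The BMC, node UUID and scheduling details stay hidden behind Nova/Ironic.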


~/DonnyD
C: 805 814 6800
"No mission too difficult. No sacrifice too great. Duty First"