On Tue, 20 Oct 2020 at 00:37, Julia Kreger <juliaashleykreger@gmail.com> wrote:
Greetings everyone!
Dan raises a good idea in somehow proxying through the requests, and I suspect that is what Lars is actually kind of thinking, but I also suspect Lars is trying to find a way to provide compatibility with entire software ecosystems outside the OpenStack realm, likely including those that don't even have a concept of Redfish. In essence, I think the discussion and thoughts are heading in the same basic direction but with very different details. That is not a bad thing though!
I guess I also have concerns about trying to do remote account management in BMCs. What if the BMC doesn't speak full Redfish, doesn't comprehend account management at all, or is itself just IPMI? I think that may in part be beyond some of the other ecosystems that are attempting to be supported.
But something Steve said resonated with me as I pondered this email over the last day or so. Perhaps *some* of the answer is to enumerate areas where we should work on improving the consumption of ironic such that more traditional baremetal provisioning tools can leverage it, whilst also providing some pass-through access.
I'm not opposed to maybe having some sort of integrated thing that interacts on some level with the conductor to offer up specific IPMI endpoints, although to do it I suspect some retooling or refactoring of virtualbmc may be necessary.
Maybe it could be an "ipmi" or "pass-through" deploy interface where maybe we offer up an IPMI endpoint for a particular amount of time with specific credentials? I guess in the end some of this is a marriage of pyghmi's behavior and baremetal user behavior.
* How does someone claim/allocate machines into this pool to be used?
* How are we starting a pass-through service?
* How is that pass-through service running? Is it a separate api-like service with magic? Is it a single endpoint doing dual bridge bus parsing and per-node credential checking and then issuing a message over the RPC bus for the conductor to do a thing? (Something like this would align with the existing service delineation security model, fwiw.)
* Hey, what is managing authentication details?
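To make the time-limited credential part of that concrete, a rough sketch might look like the following. To be clear, every name here is made up; nothing like this exists in ironic or virtualbmc today, and a real version would live behind the conductor rather than in a standalone class:

```python
# Hypothetical sketch of short-lived, per-node IPMI credentials for a
# pass-through service. All names are invented for illustration.
import secrets
import time


class PassThroughGrants:
    """Issue and verify short-lived per-node IPMI credentials."""

    def __init__(self, ttl_seconds=3600, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock  # injectable for testing
        self._grants = {}    # node_uuid -> (username, password, expiry)

    def issue(self, node_uuid):
        """Generate random credentials valid for ttl_seconds."""
        username = 'ipmi-' + secrets.token_hex(4)
        password = secrets.token_urlsafe(16)
        self._grants[node_uuid] = (username, password,
                                   self._clock() + self._ttl)
        return username, password

    def check(self, node_uuid, username, password):
        """The per-node credential check the pass-through service would do."""
        grant = self._grants.get(node_uuid)
        if grant is None:
            return False
        g_user, g_pass, expiry = grant
        if self._clock() >= expiry:
            del self._grants[node_uuid]  # expired: revoke the grant
            return False
        return (secrets.compare_digest(g_user, username) and
                secrets.compare_digest(g_pass, password))
```

On a successful check, the service would then send a message over the RPC bus for the conductor to act on, per the service delineation model mentioned above.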
With regard to Donny's question, I suspect this is the same conundrum as why even set up some sort of pass-through service. Tooling exists that doesn't comprehend intermediate tooling. Should it? It might help. Would it be difficult? It might be very difficult. That being said, I don't think this is about abstraction to metal, but about semi-privileged users somehow having access to something like what they are used to, so they might be able to effect the same change they desire.
I guess to sum up my thoughts.
* Yes, there is value here.
* This is likely reasonable from a 10,000 foot view, without many technical details.
* Would the community accept such a thing? I have no idea at present, and I think we need to better understand the details. I also think it may help us to see where we want to try and improve support concurrently. Or maybe create a list of "things we wish had ironic support".
Hope this helps, and my email makes sense.
If this is a new thing, it should be called 'sincere', since it is essentially the opposite of Ironic.
-Julia
On Sun, Oct 18, 2020 at 9:00 PM Dan Sneddon <dsneddon@redhat.com> wrote:
Steve, Lars,
I just wanted to throw an idea out there:
Redfish requires user accounts. Is there a way to stretch this idea into a more general Redfish proxy? Ironic would have to create a temporary user account for Redfish in the BMC, and then provide the tenant an IP:port of a TCP proxy for the tenant to connect to.
The TCP proxy would be deactivated when the node was deleted. When the node was cleaned, the user account would be removed. When nodes were scheduled, the Ironic-generated user accounts would be removed and recreated as needed based on the scheduling request. Ironic settings could be provided to control the access rights/group of the user accounts.
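Roughly, the account side of this could lean on the standard Redfish AccountService collection. As a sketch (only building the requests Ironic would issue; the HTTP plumbing, the proxy itself, and the role mapping are all left out, and the helper names are invented):

```python
# Sketch of the BMC-side requests for the temporary-account idea, using
# the standard Redfish AccountService paths. Helper names are hypothetical.
import json
import secrets

ACCOUNTS_PATH = '/redfish/v1/AccountService/Accounts'


def make_temp_account(role_id='Operator'):
    """Build the (method, path, body) for creating a throwaway account."""
    body = {
        'UserName': 'ironic-' + secrets.token_hex(4),
        'Password': secrets.token_urlsafe(16),
        'RoleId': role_id,  # access rights could come from Ironic settings
        'Enabled': True,
    }
    return 'POST', ACCOUNTS_PATH, json.dumps(body)


def delete_account(account_id):
    """Build the request to remove the account on node deletion/cleaning."""
    return 'DELETE', '%s/%s' % (ACCOUNTS_PATH, account_id), None
```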
A variant of this would create and manage the user account but not provide a TCP proxy. Instead, perhaps the BMC location could be returned. This would not meet the criterion of not providing direct access to the hardware, though. We would have to remove accounts from the BMC when the node was deleted, or require cleaning, but it could still potentially be less secure.
A downside of the role-based user scheduling approach is that Ironic then requires credentials for the BMC that include user account management, which could run counter to security policy.
On Sun, Oct 18, 2020 at 6:08 PM Steve Baker <sbaker@redhat.com> wrote:
On 17/10/20 7:29 am, Lars Kellogg-Stedman wrote:
In the work that we're doing with the Mass Open Cloud [1], we're
looking at using Ironic (and the multi-tenant support we contributed)
to manage access to a shared pool of hardware while still permitting
people to use their own provisioning tools.
We don't want to expose the hardware BMC directly to consumers; we
want Ironic to act as the access control mechanism for all activities
involving the hardware.
The missing part of this scenario is that at the moment this would
require provisioning tools to know how to talk to the Ironic API if
they want to perform BMC actions on the host, such as controlling
power.
While talking with Mainn the other day, it occurred to me that maybe
we could teach virtualbmc [2] how to talk to Ironic, so that we could
provide a virtual IPMI interface to provisioning tools. There are some
obvious questions here around credentials (I think we'd probably
generate them randomly when assigning control of a piece of hardware
to someone, but that's more of an implementation detail).
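As a strawman, the virtualbmc side might look something like this stand-in. In a real version the class would subclass pyghmi.ipmi.bmc.Bmc and the injected client would be an actual Ironic API client; both are faked here purely to show the shape:

```python
# Illustrative stand-in only: serve IPMI power commands for a single node
# by delegating to Ironic. The real thing would subclass
# pyghmi.ipmi.bmc.Bmc and call the Ironic baremetal API.


class IronicBmc:
    """Answer IPMI chassis power commands for one Ironic node."""

    # Ironic power state -> IPMI chassis power state
    _TO_IPMI = {'power on': 'on', 'power off': 'off'}

    def __init__(self, node_uuid, client):
        self._node = node_uuid
        self._client = client  # any object with get/set_power_state()

    def get_power_state(self):
        state = self._client.get_power_state(self._node)
        return self._TO_IPMI.get(state, 'off')

    def power_on(self):
        self._client.set_power_state(self._node, 'power on')

    def power_off(self):
        self._client.set_power_state(self._node, 'power off')
```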
I wanted to sanity check this idea: does this seem reasonable? Are
there alternatives you would suggest?
As far as I'm aware, an IPMI host:port endpoint will manage exactly one
baremetal host, with no obvious mechanism to specify which host to
control when you have multiple hosts behind a single endpoint. These
days, with the rise of Redfish, I think IPMI is considered a legacy
interface.
I suspect a BMC interface is not the right abstraction for a
multi-tenant baremetal API, that's why Ironic was started in the first
place ;)
If there are provisioning tools frequently used by the target audience
of Mass Open Cloud which have poor Ironic API support then we'd like to
know what those tools are so we can improve that support.
Thanks!
-- Dan Sneddon | Senior Principal Software Engineer dsneddon@redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter