[Ironic] Teaching virtualbmc how to talk to Ironic?
In the work that we're doing with the Mass Open Cloud [1], we're looking at using Ironic (and the multi-tenant support we contributed) to manage access to a shared pool of hardware while still permitting people to use their own provisioning tools. We don't want to expose the hardware BMC directly to consumers; we want Ironic to act as the access control mechanism for all activities involving the hardware.

The missing piece in this scenario is that, at the moment, provisioning tools would need to know how to talk to the Ironic API in order to perform BMC actions on a host, such as controlling power.

While talking with Mainn the other day, it occurred to me that maybe we could teach virtualbmc [2] how to talk to Ironic, so that we could provide a virtual IPMI interface to provisioning tools. There are some obvious questions here around credentials (I think we'd probably generate them randomly when assigning control of a piece of hardware to someone, but that's more of an implementation detail).

I wanted to sanity check this idea: does this seem reasonable? Are there alternatives you would suggest?

Thanks!

[1] https://github.com/CCI-MOC/esi
[2] https://github.com/openstack/virtualbmc

-- 
Lars Kellogg-Stedman <lars@redhat.com> | larsks @ {irc,twitter,github}
http://blog.oddbit.com/ | N1LKS
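A rough sketch of what an Ironic-backed virtual BMC could look like. This is purely illustrative: the class, the injected client object, and its method names are assumptions, not existing virtualbmc or Ironic code; a real implementation would build on pyghmi's BMC support and call the Ironic REST API with per-tenant credentials.

```python
# Hypothetical sketch: a "BMC" that translates IPMI-style power verbs
# into Ironic node power actions. IronicBackedBmc and the injected
# client's methods are illustrative names, not real virtualbmc APIs.

class IronicBackedBmc:
    """Map IPMI chassis-power verbs onto Ironic's power states."""

    def __init__(self, node_uuid, ironic_client):
        # ironic_client is any object exposing get_power_state(node)
        # and set_power_state(node, state); in practice it would wrap
        # the Ironic REST API.
        self.node_uuid = node_uuid
        self.client = ironic_client

    def get_power_state(self):
        # IPMI reports "on"/"off"; Ironic reports "power on"/"power off"
        state = self.client.get_power_state(self.node_uuid)
        return "on" if state == "power on" else "off"

    def power_on(self):
        self.client.set_power_state(self.node_uuid, "power on")

    def power_off(self):
        self.client.set_power_state(self.node_uuid, "power off")
```

The point of the sketch is just the translation layer: the consumer speaks IPMI, and everything past that boundary goes through Ironic's access control.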
On 17/10/20 7:29 am, Lars Kellogg-Stedman wrote:
I wanted to sanity check this idea: does this seem reasonable? Are there alternatives you would suggest?
As far as I'm aware, an IPMI host:port endpoint will manage exactly one baremetal host, with no obvious mechanism to specify which host to control when you have multiple hosts behind a single endpoint. These days, with the rise of Redfish, IPMI is generally considered a legacy interface.

I suspect a BMC interface is not the right abstraction for a multi-tenant baremetal API; that's why Ironic was started in the first place ;)

If there are provisioning tools frequently used by the target audience of the Mass Open Cloud which have poor Ironic API support, then we'd like to know what those tools are so we can improve that support.
Steve, Lars,

I just wanted to throw an idea out there: Redfish requires user accounts. Is there a way to stretch this idea into a more general Redfish proxy? Ironic would have to create a temporary user account for Redfish in the BMC, and then provide the tenant an IP:port of a TCP proxy for the tenant to connect to.

The TCP proxy would be deactivated when the node was deleted. When the node was cleaned, the user account would be removed. When nodes were scheduled, the Ironic-generated user accounts would be removed and recreated as needed based on the scheduling request. Ironic settings could be provided to control the access rights/group of the user accounts.

A variant of this would create and manage the user account, but not provide a TCP proxy; instead, perhaps the BMC location could be returned. This would not meet the criterion of not providing direct access to the hardware. We would have to remove accounts from the BMC when the node was deleted or require cleaning, and it could still potentially be less secure.

A downside of the role-based user scheduling approach is that Ironic then requires credentials for the BMC that include user account management, which could run counter to security policy.

On Sun, Oct 18, 2020 at 6:08 PM Steve Baker <sbaker@redhat.com> wrote:
--
Dan Sneddon | Senior Principal Software Engineer dsneddon@redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter
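The temporary-account lifecycle Dan describes could be sketched roughly as follows. This is a minimal sketch under assumptions: the `bmc` object and its `create_account`/`delete_account` methods are hypothetical stand-ins for a real Redfish account-management client, and the role name is illustrative.

```python
import secrets

def lease_node(bmc, tenant):
    """On scheduling: create a throwaway BMC account for the tenant.

    Credentials are generated randomly, as suggested in the thread,
    so they never outlive the lease.
    """
    username = "tenant-%s" % tenant
    password = secrets.token_urlsafe(16)
    # "Operator" is an illustrative Redfish-style role; the actual
    # access rights/group would come from Ironic settings.
    bmc.create_account(username, password, role="Operator")
    return username, password

def release_node(bmc, tenant):
    """On delete/clean: remove the tenant's temporary account."""
    bmc.delete_account("tenant-%s" % tenant)
```

Pairing account creation with scheduling and account removal with deletion/cleaning is what keeps the window of direct BMC access bounded to the lease.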
Lars,

Please forgive my ignorance if I did not understand your question. Why not just use Nova with the ironic driver? Nova won't tell a user about the abstraction to metal, and it's pretty easy to get set up and running with Ironic. This way users can request a metal type that is predefined and the rest is handled for them, so you can keep consumers and admins separate and provide that multi-tenant functionality without exposing any critical details of the underlying infra.

-- 
~/DonnyD
C: 805 814 6800
"No mission too difficult. No sacrifice too great. Duty First"
/donnyd shakes fist at gmail
On Mon, Oct 19, 2020 at 07:31:39AM -0400, Donny Davis wrote:
Please forgive my ignorance if I did not understand your question. Why not just use Nova with the ironic driver?
We're particularly interested in supporting the use of Ironic as a hardware controller for provisioning tools outside of the OpenStack ecosystem. We have two sessions coming up at the summit tomorrow if you'd like to better understand what we're doing. There is some reading available at https://github.com/CCI-MOC/esi/.

-- 
Lars Kellogg-Stedman <lars@redhat.com> | larsks @ {irc,twitter,github}
http://blog.oddbit.com/ | N1LKS
Greetings everyone!

Dan raises a good idea in proxying the requests through, and I suspect that is close to what Lars is actually thinking, but I also suspect Lars is trying to find a way to provide compatibility with entire software ecosystems outside the OpenStack realm, likely including ones that don't even have a concept of Redfish. In essence, I think the discussion and thoughts are heading in the same basic direction but with very different details. That is not a bad thing, though!

I also have concerns on the subject of trying to do remote account management in BMCs. What if the BMC doesn't speak full Redfish, doesn't comprehend account management, or is itself IPMI-only? That may in part be beyond some of the other ecosystems we're attempting to support.

But something Steve said resonated with me as I pondered this email over the last day or so. Perhaps *some* of the answer is to enumerate areas where we should work on improving the consumption of Ironic, so that more traditional baremetal provisioning tools can leverage it, while also providing some pass-through access.

I'm not opposed to having some sort of integrated thing that interacts on some level with the conductor to offer up specific IPMI endpoints, although to do it I suspect some retooling or refactoring of virtualbmc may be necessary.

Maybe it could be an "ipmi" or "pass-through" deploy interface where we offer up an IPMI endpoint for a particular amount of time with specific credentials? I guess in the end some of this is a marriage of pyghmi's behavior and baremetal user behavior.

* How does someone claim/allocate machines into this pool to be used?
* How are we starting a pass-through service?
* How is that pass-through service running? Is it a separate API-like service with magic? Is it a single endpoint doing dual bridge bus parsing and per-node credential checking, and then issuing a message over the RPC bus for the conductor to do a thing? (Something like this would align with the existing service delineation security model, fwiw.)
* Hey, what is managing authentication details?

With regard to Donny's question, I suspect this is the same conundrum as why even set up some sort of pass-through service: tooling exists that doesn't comprehend intermediate tooling. Should it? It might help. Would it be difficult? It might be very difficult. That being said, I don't think this is about abstraction to metal, but about semi-privileged users having access to something like what they are used to, so they can effect the same change they desire.

To sum up my thoughts:

* Yes, there is value here.
* This is likely reasonable from a 10,000-foot view without many technical details.
* Would the community accept such a thing? I have no idea at present, and I think we need to better understand the details. I also think it may help us to see where we want to improve support concurrently, or maybe create a list of "things we wish had Ironic support".

Hope this helps, and my email makes sense.

-Julia

On Sun, Oct 18, 2020 at 9:00 PM Dan Sneddon <dsneddon@redhat.com> wrote:
On Tue, 20 Oct 2020 at 00:37, Julia Kreger <juliaashleykreger@gmail.com> wrote:
Hope this helps, and my email makes sense.
If this is a new thing, it should be called 'sincere', since it is essentially the opposite of Ironic.
On Tue, Oct 20, 2020 at 1:01 AM Mark Goddard <mark@stackhpc.com> wrote:
Hope this helps, and my email makes sense. If this is a new thing, it should be called 'sincere', since it is essentially the opposite of Ironic.
I was personally hoping for 'conundrum' so I could go around all day saying "your ironic conundrum" when talking to people. ;)
On Mon, Oct 19, 2020 at 04:37:09PM -0700, Julia Kreger wrote:
I'm not opposed to maybe having some sort of integrated thing that interacts on some level with the conductor to offer up specific IPMI endpoints, although to do it I suspect some retooling or refactoring may be necessary of virtualbmc.
Absolutely: right now, virtualbmc itself is very libvirt-specific. We'd have to abstract out a driver model, with libvirt as the first target, before moving on to my idea of using the package to present an IPMI-proxy in front of Ironic.
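The driver split mentioned above might look something like the following. This is a sketch under assumptions: the class names, the registry, and the notion of selecting a backend by name are all hypothetical, not current virtualbmc code.

```python
import abc

class BmcDriver(abc.ABC):
    """Hypothetical backend interface virtualbmc could dispatch to,
    with libvirt as the first driver and Ironic as a later one."""

    @abc.abstractmethod
    def get_power_state(self):
        """Return 'on' or 'off'."""

    @abc.abstractmethod
    def power_on(self):
        """Power the managed host on."""

    @abc.abstractmethod
    def power_off(self):
        """Power the managed host off."""

# A registry would let a BMC be created against a named backend.
DRIVERS = {}

def register(name):
    def wrap(cls):
        DRIVERS[name] = cls
        return cls
    return wrap

@register("fake")
class FakeDriver(BmcDriver):
    """Stand-in backend used here just to exercise the interface."""
    def __init__(self):
        self.state = "off"
    def get_power_state(self):
        return self.state
    def power_on(self):
        self.state = "on"
    def power_off(self):
        self.state = "off"
```

With an interface like this, the IPMI front end stays identical regardless of whether the commands land on libvirt or on Ironic.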
Maybe it could be an "ipmi" or "pass-through" deploy interface where maybe we offer up an IPMI endpoint for a particular amount of time with specific credentials? I guess in the end some of this is a marriage of pyghmi's behavior and baremetal user behavior.
My thought was that we would generate random credentials whenever someone acquires a piece of hardware from the free pool, which puts some of this more in the realm of the leasing service that Mainn and others are working on, rather than in Ironic itself.
* How does someone claim/allocate machines into this pool to be used?
See above :) I think a lot of this (spawning these services, managing credentials, etc) might end up being outside of Ironic itself.
Hope this helps, and my email makes sense.
This absolutely helps! I think you highlight a number of questions that will help us plan further development along these lines.

-- 
Lars Kellogg-Stedman <lars@redhat.com> | larsks @ {irc,twitter,github}
http://blog.oddbit.com/ | N1LKS
On 21/10/20 7:49 am, Lars Kellogg-Stedman wrote:
On Mon, Oct 19, 2020 at 04:37:09PM -0700, Julia Kreger wrote:
I'm not opposed to maybe having some sort of integrated thing that interacts on some level with the conductor to offer up specific IPMI endpoints, although to do it I suspect some retooling or refactoring may be necessary of virtualbmc. Absolutely: right now, virtualbmc itself is very libvirt-specific. We'd have to abstract out a driver model, with libvirt as the first target, before moving on to my idea of using the package to present an IPMI-proxy in front of Ironic.
virtualbmc is built on pyghmi [1], and it might be more appropriate for this project to do the same. Here is an example [2] of a Nova-backed BMC service.

[1] https://opendev.org/x/pyghmi
[2] https://github.com/openstack/openstack-virtual-baremetal/blob/master/opensta...
On Sun, Oct 18, 2020 at 08:52:20PM -0700, Dan Sneddon wrote:
Redfish requires user accounts. Is there a way to stretch this idea into a more general Redfish proxy? Ironic would have to create a temporary user account for Redfish in the BMC, and then provide the tenant an IP:port of a TCP proxy for the tenant to connect to.
We really want to have more control over the actions to which someone who has acquired a piece of hardware has access (so a simple TCP proxy probably isn't what we're after, although I would have to do a little more research into what sort of access limitations we can apply to Redfish users on something like an iDRAC).

That said, we're not tied to IPMI: if we find that hardware these days typically has good support for Redfish, some sort of API proxy for Redfish would be a fine alternative to a virtual IPMI instance.

-- 
Lars Kellogg-Stedman <lars@redhat.com> | larsks @ {irc,twitter,github}
http://blog.oddbit.com/ | N1LKS
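One way to get that finer-grained control than a raw TCP proxy would be an HTTP-aware proxy with an allow-list of Redfish operations, permitting power and boot control while rejecting account or firmware management. A minimal sketch, assuming the standard Redfish resource layout; the `is_allowed` helper and the exact allow-list are invented for illustration:

```python
# Hypothetical allow-list for a filtering Redfish proxy. Paths follow
# the DMTF Redfish resource layout; the policy itself is illustrative.
ALLOWED = [
    ("GET",   "/redfish/v1/Systems/{id}"),
    ("PATCH", "/redfish/v1/Systems/{id}"),  # e.g. boot source override
    ("POST",  "/redfish/v1/Systems/{id}/Actions/ComputerSystem.Reset"),
]

def is_allowed(method, path, system_id):
    """Permit a request only if it matches an allow-list entry for
    the leased system; everything else (AccountService, UpdateService,
    other systems' resources) is rejected."""
    for allowed_method, template in ALLOWED:
        if method == allowed_method and template.format(id=system_id) == path:
            return True
    return False
```

A proxy like this would also be the natural place to map the tenant's temporary credentials onto the real BMC account, keeping the BMC itself hidden.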
On Mon, Oct 19, 2020 at 02:06:52PM +1300, Steve Baker wrote:
As far as I'm aware, an IPMI host:port endpoint will manage exactly one baremetal host
That's correct. The idea is that we would spawn one virtual IPMI instance per host to be controlled.
These days with the rise of Redfish I think IPMI is considered a legacy interface now.
It absolutely is, but it is also still commonly supported.
If there are provisioning tools frequently used by the target audience of Mass Open Cloud which have poor Ironic API support then we'd like to know what those tools are so we can improve that support.
The answer here is "everything", including other instances of Ironic, locally developed tools, things like Foreman, etc. We have a couple of summit sessions coming up tomorrow if you're interested in learning more about what we're about.

-- 
Lars Kellogg-Stedman <lars@redhat.com> | larsks @ {irc,twitter,github}
http://blog.oddbit.com/ | N1LKS
participants (6)

- Dan Sneddon
- Donny Davis
- Julia Kreger
- Lars Kellogg-Stedman
- Mark Goddard
- Steve Baker