[openstack-dev] [nova][ironic] making file injection optional / removing it

Devananda van der Veen devananda.vdv at gmail.com
Mon May 13 21:44:52 UTC 2013


On Mon, May 13, 2013 at 11:33 AM, Scott Moser <smoser at ubuntu.com> wrote:

> On Mon, 13 May 2013, Devananda van der Veen wrote:
>
> > On Mon, May 13, 2013 at 9:47 AM, Scott Moser <smoser at ubuntu.com> wrote:
> >
> > > On Fri, 10 May 2013, Clint Byrum wrote:
> > >
>
> > > I largely agree here, but we have config-drive in nova.  I think it
> makes
> > > sense to have an analog in bare metal provisioning.  In bare metal, it
> > > would actually allow the nodes to never have access to the management
> > > network while in "user" possession (ie, detach pxe/management network
> > > after system installed).
> > >
> > >
> > Config drive for baremetal seems possible for some (but not all)
> hardware.
> > Clearly, we'll need to support multiple deployment models :)
>
> Can you give an example of what hardware would not be supported?
>

Any hardware which doesn't support mounting virtual media and exposing it
to the guest -- this is, AFAICT, not part of the IPMI specification, though
most large hardware vendors have implemented it anyway.
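
Since virtual-media support must be discovered per vendor rather than
assumed from IPMI, the "multiple deployment models" point above might be
sketched roughly like this (the capability names and function are
hypothetical, for illustration only, not any Nova/Ironic API):

```python
# Hypothetical sketch: pick a config-drive strategy per node based on
# whether its BMC advertises virtual media. This capability is not part
# of base IPMI, so it would have to come from vendor-specific discovery.
def pick_configdrive_method(capabilities):
    if "virtual-media" in capabilities:
        return "configdrive-via-virtual-media"
    # No virtual media: this hardware needs a different deployment model.
    return "configdrive-unavailable"

print(pick_configdrive_method({"ipmi", "virtual-media"}))
print(pick_configdrive_method({"ipmi"}))
```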

Also, this approach would be unsuitable for high-density compute where many
SOCs share a single management board, even if that BMC supports virtual
media, since this would serialize the deployment process.

(caveat: I'm assuming that HDC systems whose BMCs support virtual media
would only support mounting a small number of virtual media devices, or
just one, at a time. I base this assumption on the knowledge that some HDC
systems limit the number of concurrent SOL sessions to considerably fewer
than the number of SOCs they contain.)
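
The serialization concern can be sketched as a queue: if a shared BMC
only permits N concurrent virtual-media attachments, deployments behind
that BMC have to wait on a slot. A minimal sketch, assuming a hypothetical
chassis model (all class and attribute names here are invented for
illustration):

```python
import threading

# Hypothetical: a management board shared by many SOCs, with an assumed
# vendor-specific limit on concurrent virtual-media attachments.
class ChassisBMC:
    def __init__(self, max_media=1):
        self._media_slots = threading.Semaphore(max_media)
        self.deployed = []

    def deploy_node(self, node_id, image):
        # Each deployment holds a virtual-media slot for its whole
        # duration, so with max_media=1 the N nodes behind this BMC
        # deploy strictly one at a time -- the serialization above.
        with self._media_slots:
            self.deployed.append((node_id, image))

bmc = ChassisBMC(max_media=1)
threads = [threading.Thread(target=bmc.deploy_node, args=(i, "image"))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(bmc.deployed))  # all 4 nodes eventually deploy
```

With many SOCs per board, this queueing is what stretches total
deployment time even when the BMC does support virtual media.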


>
> > However, detaching from the management network seems like less than great
> > security to me. The moment that the user requests any management of their
> > instance be performed, you'll have to reconnect it to the management
> > network *before* you can power it down (or do what ever else you need).
> > There is still a clear (though perhaps shorter) window where the tenant
> has
> > access to the management network.
>
> > Also, the management network is the only vector for the cloud operator to
> > monitor the health of a bare metal instance (eg., poll power state and hw
> > sensors over IPMI). Not having that visibility seems, well, like a bad
> idea
> > to me.
>
> You seem to assume there that ipmi or other power control is on the same
> network as the pxe boot or other network that the user needs to use.
> I don't think that is necessarily true. That may be a silly/broken
> limitation of IPMI (IPMI does have shortcomings for an untrusted occupant).
>
>
Perhaps we meant different things by "management network". I was including
both the out-of-band network (eg, for IPMI) and the network used for image
deployment under the broad heading of "management networks", whether these
are actually handled by one or multiple NICs, VLANs, or whatever. Instance
provisioning requires access to both; instance management requires access
to the out-of-band net; and tenants do not require access to either.
Removing tenant access from the network used for image deployment should be
straightforward, and should be fine to do once deployment is complete, but
I don't think we should be mucking with the out-of-band network.
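
The access model described above can be summarized as a small state
sketch (a minimal illustration; the network names and the `provision`
function are assumptions, not any real Nova/Ironic interface):

```python
# Hypothetical sketch: the deploy network is detached once provisioning
# completes, while the out-of-band (IPMI) network is never touched, so
# the operator can keep polling power state and sensors.
def provision(node):
    # Provisioning requires both networks.
    node["networks"] = {"oob": "attached", "deploy": "attached"}
    # ... image deployment happens over the deploy network ...
    node["networks"]["deploy"] = "detached"  # safe once deploy is done
    # "oob" stays attached: it is the operator's only health-monitoring
    # vector, and the tenant never needs access to it.
    return node

node = provision({"id": "node-1"})
print(node["networks"])
```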

Anyway, I think we agree on all that, and I probably just misinterpreted
"detach from the management network" as "detach from both IPMI and PXE
networks", which it seems is not what you meant :)

Cheers,
-Devananda