[openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

Joshua Harlow harlowja at fastmail.com
Tue Jun 7 15:46:28 UTC 2016


Clint Byrum wrote:
> Excerpts from Kris G. Lindgren's message of 2016-06-06 20:44:26 +0000:
>> Hi ironic folks,
>> As I'm trying to explore how GoDaddy can use ironic, I've created the following in an attempt to document some of my concerns, and I'm wondering if you folks could help me identify ongoing work to solve these (or alternatives?)
>
> Hi Kris. I've been using Ironic in various forms for a while, and I can
> answer a few of these things.
>
>> List of concerns with ironic:
>>
>> 1.) Nova <-> ironic interactions generally seem terrible?
>
> I don't know if I'd call it terrible, but there's friction. Things that
> are unchangeable on hardware are just software configs in VMs (like MAC
> addresses, overlays, etc), and things that make no sense in VMs are
> pretty standard on servers (trunked VLANs, bonding, etc).
>
> One way we've gotten around it is by using Ironic standalone via
> Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
> and includes playbooks to build config drives and deploy images in a
> fairly rudimentary way without Nova.
>
> I call this the "better than Cobbler" way of getting a toe into the
> Ironic waters.
>
> [1] https://github.com/openstack/bifrost
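For anyone poking at this standalone route: once Bifrost has ironic up 
in its wide-open noauth mode, you can also script against the API 
directly with python-ironicclient instead of (or alongside) the 
playbooks. A minimal sketch, assuming Bifrost's default endpoint on 
127.0.0.1:6385:

# Hedged sketch: talk to a Bifrost-deployed, noauth ironic on localhost.
# The endpoint/port are Bifrost defaults; adjust to your install.
from ironicclient import client

ironic = client.get_client(
    1,  # ironic API major version
    ironic_url='http://127.0.0.1:6385',
    os_auth_token='fake',  # noauth mode ignores the token contents
)

for node in ironic.node.list():
    print(node.uuid, node.provision_state)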

Out of curiosity, why Ansible vs turning 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py 
(or something like it) into a tiny WSGI app (pick a useful name here) 
that has its own REST API (one that looks pretty similar to the public 
functions in that driver file)?
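To make the idea concrete, I'm imagining something about this small 
(purely hypothetical sketch; the routes and behavior are invented, and 
the real thing would call into the driver's spawn/destroy/etc.):

# Purely hypothetical "tiny-wsgi-app" sketch: wrap the public functions
# of nova's ironic driver behind a small REST API.
import json
from wsgiref.simple_server import make_server


def app(environ, start_response):
    if (environ['REQUEST_METHOD'], environ['PATH_INFO']) == ('POST',
                                                             '/servers'):
        length = int(environ.get('CONTENT_LENGTH') or 0)
        body = json.loads(environ['wsgi.input'].read(length).decode('utf-8'))
        # here you'd call the moral equivalent of driver.spawn(...)
        start_response('202 Accepted',
                       [('Content-Type', 'application/json')])
        return [json.dumps({'status': 'spawning', 'request': body}).encode()]
    start_response('404 Not Found', [('Content-Type', 'application/json')])
    return [b'{}']


if __name__ == '__main__':
    make_server('127.0.0.1', 8080, app).serve_forever()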

That seems almost easier than building a bunch of Ansible scripts that 
appear (at a glance) to do similar things; and you get the benefit of 
using an actual programming language vs a 
half-programming-ansible-yaml-language...

A realization I'm having is that I'm really not a fan of using Ansible 
as a half-programming-ansible-yaml-language, which it seems like people 
start to do after a while (because at some point you need something 
like if statements, and then things like [1] get created). No offense 
to the authors, but I guess this is my personal preference (it's also 
one of the reasons taskflow is directly a Python library, because 
<meh>, people don't need to learn a new language).

[1] 
https://github.com/openstack/bifrost/blob/master/playbooks/roles/ironic-enroll-dynamic/tasks/main.yml
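To show the contrast, the same kind of enroll logic in taskflow stays 
plain Python (a minimal sketch; the task names and inputs here are made 
up for illustration, not bifrost's actual steps):

# Minimal taskflow sketch: ordinary Python classes and control flow
# instead of YAML; EnrollNode/ValidateNode are illustrative names.
from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow


class EnrollNode(task.Task):
    def execute(self, node_name, driver):
        print('enrolling %s with driver %s' % (node_name, driver))
        return 'fake-node-uuid'


class ValidateNode(task.Task):
    def execute(self, node_uuid):
        print('validating %s' % node_uuid)


flow = linear_flow.Flow('enroll-and-validate').add(
    EnrollNode(provides='node_uuid'),
    ValidateNode(),
)
engines.run(flow, store={'node_name': 'node-1',
                         'driver': 'agent_ipmitool'})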

>
>>    - How to accept RAID config and partitioning(?) from end users? There seems to be no agreed-upon method yet between nova/ironic.
>
> AFAIK accepting it from the users just isn't solved. Administrators
> do have custom ramdisks that they boot to pre-configure RAID during
> enrollment.
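Right - the closest API-side piece today is the node's target RAID 
config, which an operator (or that custom ramdisk, during cleaning) 
still has to drive; nothing flows from the end user. A rough sketch 
with python-ironicclient (reusing the ironic handle from the earlier 
snippet; the node UUID and sizes are made up):

# Hedged sketch: set the node's target_raid_config; applying it still
# happens via cleaning, driven by an admin, not by an end user at
# deploy time. Node UUID and disk sizes below are made up.
target = {
    'logical_disks': [
        {'size_gb': 100, 'raid_level': '1', 'is_root_volume': True},
        {'size_gb': 'MAX', 'raid_level': '5'},
    ],
}
ironic.node.set_target_raid_config('my-node-uuid', target)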
>
>>     - How to run multiple conductors/nova-computes? Right now, as far as I can tell, all of ironic is fronted by a single nova-compute, which I will have to manage via a cluster technology between two or more nodes. Because of this and the way host-aggregates work, I am unable to expose fault domains for ironic instances (all of ironic can only be under a single AZ - the AZ that is assigned to the nova-compute node), unless I create multiple nova-compute servers and manage multiple independent ironic setups. This makes onboarding/querying of hardware capacity painful.
>
> The nova-compute does almost nothing. It really just talks to the
> scheduler to tell it what's going on in Ironic. If it dies, deploys
> won't stop. You can run many many conductors and spread load and fault
> tolerance among them easily. I think for multiple AZs though, you're
> right, there's no way to expose that. Perhaps it can be done with cells,
> which I think Rackspace's OnMetal uses (but I'll let them refute or
> confirm that).
>
> Seems like the virt driver could be taught to be AZ-aware and some
> metadata in the server record could allow AZs to go through to Ironic.
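If that ever did happen, the driver-side piece might be as small as 
reading an AZ hint off the node and reporting it up; a purely 
hypothetical sketch (the 'availability_zone' property key is invented, 
nothing reads it today):

# Purely hypothetical: if the nova ironic virt driver surfaced an AZ
# from node properties, the lookup could be as dumb as this.
def _get_az_for_node(node):
    # 'availability_zone' is an invented property key, not a real one
    return node.properties.get('availability_zone', 'baremetal-default')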
>
>>    - Nova appears to be forcing a "we are compute, as long as compute means VMs" world view, which means that we will have a baremetal flavor explosion (i.e. the mismatch between baremetal and VMs).
>>        - This is a feeling I got from the ironic-nova cross-project meeting in Austin. A general example goes back to the RAID config above: I can configure a single piece of hardware many different ways, but to fit into nova's world view I need to have many different flavors exposed to the end user. In this way many flavors can map back to a single piece of hardware with just a slightly different configuration applied. So how am I supposed to do a single server with 6 drives as either: RAID 1 + RAID 5, RAID 5, RAID 10, RAID 6, or JBOD? Seems like I would need to pre-mark servers that were going to be a specific RAID level, which means that I need to start managing additional sub-pools of hardware just to deal with how the end user wants the RAID configured; this is pretty much a non-starter for us. I have not really heard of what's being done on this specific front.
>
> You got that right. Perhaps people are comfortable with this limitation.
> It is at least simple.
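And the simple/limited version falls out of how scheduling works: each 
RAID layout needs its own flavor whose extra_specs match a capability 
pre-set on the node. A sketch of one such flavor (assumes a novaclient 
handle named nova; the flavor/capability names are illustrative):

# Sketch of why flavors multiply: one flavor per RAID layout, matched
# against node capabilities by the scheduler. Names are illustrative.
flavor = nova.flavors.create(name='bm.6disk.raid5',
                             ram=131072, vcpus=24, disk=500)
flavor.set_keys({'capabilities:raid_level': '5'})
# ...and again for raid10, raid6, jbod, raid1+raid5, per layout.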
>
>> 2.) Inspector:
>>    - IPA service doesn't gather port/switching information
>>    - The inspection service doesn't process port/switching information, which means that it won't add it to ironic, which in turn makes managing network swinging of the host a non-starter. I would inspect the host - then modify the ironic record to add the details about what port/switch the server is connected to from a different source. At that point, why wouldn't I just onboard everything through the API?
>>    - Doesn't grab hardware disk configurations; if the server has multiple RAIDs (R1 + R5), it only reports the boot RAID's disk capacity.
>>    - Inspection is geared towards using a different network and dnsmasq infrastructure than what is in use for ironic/neutron.  Which also means that in order to not conflict with dhcp requests for servers in ironic I need to use different networks.  Which also means I now need to handle swinging server ports between different networks.
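For the port/switch piece, onboarding through the API directly looks 
roughly like this (a sketch; it assumes an ironic new enough to have 
local_link_connection from the multitenant networking work, and the 
MAC/switch values are made up):

# Sketch: create the port yourself, carrying the switch wiring details
# that inspection doesn't provide. Values below are made up.
ironic.port.create(
    node_uuid='my-node-uuid',
    address='52:54:00:12:34:56',
    local_link_connection={
        'switch_id': '0a:1b:2c:3d:4e:5f',
        'port_id': 'Ethernet1/10',
        'switch_info': 'tor-switch-01',
    },
)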
>>
>> 3.) IPA image:
>>    - Default build stuff is pinned to extremely old versions due to gate failure issues. So I cannot onboard servers without a fork, because IPMI modules aren't built for the kernel, so inspection can never match the node against ironic. It seems like current functionality here is the MVP for the gate to work and to deploy images; but if you need to do firmware, BIOS config, or any other hardware-specific features, you are pretty much going to need to roll your own IPA image and IPA modules to do standard provisioning tasks.
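For completeness, the supported hook for the firmware/BIOS-config bits 
is a custom hardware manager baked into your own IPA image; the 
skeleton is roughly this (a sketch; the flash_firmware step is invented 
for illustration):

# Sketch of a custom IPA hardware manager: subclass HardwareManager in
# your own ramdisk image and expose extra clean steps.
from ironic_python_agent import hardware


class ExampleHardwareManager(hardware.HardwareManager):
    HARDWARE_MANAGER_VERSION = '1.0'

    def evaluate_hardware_support(self):
        # claim to know this hardware better than the generic manager
        return hardware.HardwareSupport.SERVICE_PROVIDER

    def get_clean_steps(self, node, ports):
        return [{'step': 'flash_firmware', 'priority': 0,
                 'interface': 'deploy', 'reboot_requested': True,
                 'abortable': False}]

    def flash_firmware(self, node, ports):
        # invented example step: push vendor firmware here
        pass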
>>
>> 4.) Conductor:
>>    - Serial-over-LAN consoles require a unique port on the conductor server (I have seen proposals to try and fix this?); this is painful to manage with large numbers of servers.
>>    - SOL consoles aren't restarted when the conductor is restarted (I think this might be fixed in newer versions of ironic?); again, if end users aren't supposed to consume ironic APIs directly, this is painful to handle.
>>    - It's very easy to get a node to fall off the state-machine rails (reboot a server while an image is being deployed to it); the only way I have seen to be able to fix this is to update the DB directly.
>>    - As far as I can tell, shell-in-a-box SOL consoles aren't supported via nova - so how are end users supposed to consume the shell-in-a-box console?
>>    - I have BMCs that need specific configuration (some require SOL on com2, others on com1); this makes things pretty much impossible without per-box overrides against the conductor's hardcoded templates.
>>    - Additionally, it would be nice to default to having a provisioning kernel/image set as a single config option with per-server overrides, rather than on each server. If we ever change the IPA image, that means at scale we would need to update thousands of ironic nodes.
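Yeah - today the deploy kernel/ramdisk live in each node's driver_info, 
so a fleet-wide image bump is one JSON-patch call per node (sketch 
below, reusing the ironic handle from earlier; the image UUIDs are made 
up), which is exactly the at-scale pain being described:

# Sketch of the current reality: changing the IPA image means patching
# driver_info on every node. Image UUIDs are made up.
patch = [
    {'op': 'replace', 'path': '/driver_info/deploy_kernel',
     'value': 'new-deploy-kernel-uuid'},
    {'op': 'replace', 'path': '/driver_info/deploy_ramdisk',
     'value': 'new-deploy-ramdisk-uuid'},
]
for node in ironic.node.list():
    ironic.node.update(node.uuid, patch)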
>>
>> What is ironic doing to monitor the hardware for failures? I assume the answer here is nothing, and that we will need to make sure the images that we deploy are correctly configuring the tools to monitor disk/health/PSU/RAM errors, etc., etc.
>>
>> Overall, the above concerns have me wondering if I am going crazy, or is ironic really not ready to take over a datacenter's baremetal provisioning (unless you have a very limited view of the functionality that is needed in those datacenters)?
>
> You're not crazy at all. I remember having some of these discussions in
> the early days when it was just 'nova baremetal' and people wanted AZs
> and better serial console access. For whatever reason, it just hasn't
> quite happened yet.
>
> It's not a small job, but I don't think anything you say above is
> particularly troubling. I'd certainly like better console support and AZs.
>


