[openstack-dev] [Ironic] Fuel agent proposal

Roman Prykhodchenko rprikhodchenko at mirantis.com
Tue Dec 9 10:09:02 UTC 2014

It is true that IPA and FuelAgent share a lot of functionality. However, there is a major difference between them: they are intended to solve different problems.

IPA is a solution for the provision-use-destroy-use-by-a-different-user use case, and it is great for providing BM nodes to other OpenStack services or in services like Rackspace OnMetal. FuelAgent serves the provision-use-use-…-use use case that Fuel and TripleO have.

Those two use cases require concentrating on different details in the first place. For instance, for IPA proper decommissioning is more important than advanced disk management, while for FuelAgent the priorities are the opposite, for obvious reasons.

Putting all the functionality into a single driver and a single agent may cause conflicts of priorities and create a lot of mess inside both the driver and the agent. In fact, changes to IPA have previously been blocked precisely because of this conflict of priorities. Therefore, replacing FuelAgent with IPA where FuelAgent is currently used does not seem like a good option, because some people (and I’m not talking about Mirantis) might lose required features due to those different priorities.

Having two separate drivers, along with two separate agents, for these different use cases will allow two independent teams to concentrate on what’s really important for their specific use case. I don’t see any problem with overlapping functionality if it is used differently.

P. S.
I realise that people may also be confused by the fact that FuelAgent is named as it is and is currently used only in Fuel. Our goal is to make it a simple, powerful and, more importantly, generic tool for provisioning. It is not bound to Fuel or Mirantis, and if the name causes confusion in the future we will be happy to give it a different, less confusing one.

P. P. S.
Some parts of this integration do not yet look generic or nice enough. We take a pragmatic view and are trying to implement what can be implemented as a first step. This will certainly take many more steps to become better and more generic.

> On 09 Dec 2014, at 01:46, Jim Rollenhagen <jim at jimrollenhagen.com> wrote:
> On December 8, 2014 2:23:58 PM PST, Devananda van der Veen <devananda.vdv at gmail.com> wrote:
>> I'd like to raise this topic for a wider discussion outside of the hallway
>> track and code reviews, where it has thus far mostly remained.
>> In previous discussions, my understanding has been that the Fuel team
>> sought to use Ironic to manage "pets" rather than "cattle" - and doing so
>> required extending the API and the project's functionality in ways that
>> no one else on the core team agreed with. Perhaps that understanding was
>> wrong (or perhaps not), but in any case, there is now a proposal to add a
>> FuelAgent driver to Ironic. The proposal claims this would meet that
>> team's needs without requiring changes to the core of Ironic.
>> https://review.openstack.org/#/c/138115/
> I think it's clear from the review that I share the opinions expressed in this email.
> That said (and hopefully without derailing the thread too much), I'm curious how this driver could do software RAID or LVM without modifying Ironic's API or data model. How would the agent know how these should be built? How would an operator or user tell Ironic what the disk/partition/volume layout would look like?
> And before it's said - no, I don't think vendor passthru API calls are an appropriate answer here.
> // jim
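To make Jim's question concrete, here is a purely hypothetical sketch of the kind of disk/partition/volume description an operator would have to hand to Ironic. Every field name below is invented for illustration; no such structure exists in Ironic's API or data model, which is exactly the point of the question.

```python
# Purely hypothetical disk-layout request -- all field names are invented
# for illustration; Ironic's API and data model have no such structure.
layout = {
    "disks": [
        {"device": "/dev/sda", "size_gb": 500,
         "partitions": [{"label": "root", "size_gb": 50},
                        {"label": "swap", "size_gb": 8}]},
    ],
    "software_raid": [
        {"name": "md0", "level": 1,
         "members": ["/dev/sda1", "/dev/sdb1"]},
    ],
    "lvm": [
        {"vg": "vg0", "pvs": ["md0"],
         "lvs": [{"name": "data", "size_gb": 100}]},
    ],
}

def partitions_fit(disk):
    """True if the requested partitions fit on the disk."""
    return sum(p["size_gb"] for p in disk["partitions"]) <= disk["size_gb"]

# Some agent would have to receive and act on a spec like this --
# which is where the API/data-model question arises.
assert all(partitions_fit(d) for d in layout["disks"])
```

Whether such a description travels through the core API, node properties, or something else entirely is the open design question being debated here.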
>> The Problem Description section calls out four things, which have all been
>> discussed previously (some are here [0]). I would like to address each one,
>> invite discussion on whether or not these are, in fact, problems facing
>> Ironic (not whether they are problems for someone, somewhere), and then
>> ask why these necessitate a new driver be added to the project.
>> They are, for reference:
>> 1. limited partition support
>> 2. no software RAID support
>> 3. no LVM support
>> 4. no support for hardware that lacks a BMC
>> #1.
>> When deploying a partition image (eg, QCOW format), Ironic's PXE deploy
>> driver performs only the minimal partitioning necessary to fulfill its
>> mission as an OpenStack service: respect the user's request for root,
>> swap, and ephemeral partition sizes. When deploying a whole-disk image,
>> Ironic does not perform any partitioning -- such is left up to the
>> operator who created the disk image.
>> Support for arbitrarily complex partition layouts is not required by,
>> nor does it facilitate, the goal of provisioning physical servers via a
>> common cloud API. Additionally, as with #3 below, nothing prevents a user
>> from creating more partitions in unallocated disk space once they have
>> access to their instance. Therefore, I don't see how Ironic's minimal
>> support for partitioning is a problem for the project.
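The minimal behaviour Devananda describes amounts to simple arithmetic (a sketch for illustration only, not Ironic's actual implementation): the three user-requested partitions are carved out, and everything else is left unallocated for the user to manage after provisioning.

```python
def minimal_layout(disk_gb, root_gb, swap_gb, ephemeral_gb):
    """Sketch of minimal partitioning: only the user-requested root, swap
    and ephemeral partitions are created; the rest stays unallocated."""
    used = root_gb + swap_gb + ephemeral_gb
    if used > disk_gb:
        raise ValueError("requested partitions exceed disk size")
    return {"root": root_gb, "swap": swap_gb,
            "ephemeral": ephemeral_gb, "unallocated": disk_gb - used}

layout = minimal_layout(disk_gb=500, root_gb=50, swap_gb=8, ephemeral_gb=100)
print(layout["unallocated"])  # 342 GB left for the user to carve up later
```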
>> #2.
>> There is no support for defining a RAID in Ironic today, at all, whether
>> software or hardware. Several proposals were floated last cycle; one is
>> under review right now for DRAC support [1], and there are multiple
>> call-outs for RAID building in the state machine mega-spec [2]. Any such
>> support for hardware RAID will necessarily be abstract enough to support
>> multiple hardware vendors' driver implementations and both in-band
>> creation (via IPA) and out-of-band creation (via vendor tools).
>> Given the above, it may become possible to add software RAID support to
>> IPA in the future, under the same abstraction. This would closely tie the
>> deploy agent to the images it deploys (the latter image's kernel would be
>> dependent upon a software RAID built by the former), but this would
>> necessarily be true for the proposed FuelAgent as well.
>> I don't see this as a compelling reason to add a new driver to the
>> project. Instead, we should (plan to) add support for software RAID to
>> the deploy agent which is already part of the project.
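To make the abstraction argument concrete, a vendor-neutral RAID request might look something like the following sketch. The field names are invented for illustration (no such interface exists in Ironic today); the point is that the same declarative input could drive either an out-of-band vendor tool or an in-band agent building software RAID.

```python
# Hypothetical, vendor-neutral RAID description -- field names invented.
raid_config = {
    "logical_disks": [
        {"raid_level": "1", "size_gb": 100, "is_root_volume": True},
        {"raid_level": "5", "size_gb": 300},
    ],
}

SUPPORTED_LEVELS = {"0", "1", "5", "6", "10"}

def validate(config):
    """Reject requests no backend (hardware or software) could honour."""
    for ld in config["logical_disks"]:
        if ld["raid_level"] not in SUPPORTED_LEVELS:
            raise ValueError("unsupported RAID level: %s" % ld["raid_level"])
    return True

assert validate(raid_config)
```

Because the request is declarative and backend-agnostic, a software RAID implementation in IPA could sit under the same abstraction as the vendors' hardware implementations.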
>> #3.
>> LVM volumes can easily be added by a user (after provisioning) within
>> unallocated disk space for non-root partitions. I have not yet seen a
>> compelling argument for doing this within the provisioning phase.
>> #4.
>> There are already in-tree drivers [3] [4] [5] which do not require a BMC.
>> One of these uses SSH to connect and run pre-determined commands. Like
>> the spec proposal, which states at line 122, "Control via SSH access
>> feature intended only for experiments in non-production environment,"
>> the current SSHPowerDriver is only meant for testing environments. We
>> could probably extend this driver to do what the FuelAgent spec proposes,
>> as far as remote power control for cheap always-on hardware in testing
>> environments with a pre-shared key.
>> (And if anyone wonders about a use case for Ironic without external
>> power control ... I can only think of one situation where I would
>> rationally ever want to have a control-plane agent running inside a
>> user-instance: I am both the operator and the only user of the cloud.)
>> ----------------
>> In summary, as far as I can tell, all of the problem statements upon
>> which the FuelAgent proposal is based are solvable through incremental
>> changes in existing drivers, or are out of scope for the project
>> entirely. As another software-based deploy agent, FuelAgent would
>> duplicate the majority of the functionality which ironic-python-agent
>> has today.
>> Ironic's driver ecosystem benefits from a diversity of
>> hardware-enablement drivers. Today, we have two divergent software
>> deployment drivers which approach image deployment differently: "agent"
>> drivers use a local agent to prepare a system and download the image;
>> "pxe" drivers use a remote agent and copy the image over iSCSI. I don't
>> understand how a second driver which duplicates the functionality we
>> already have, and shares the same goals as the drivers we already have,
>> is beneficial to the project.
>> Doing the same thing twice just increases the burden on the team; we're
>> all working on the same problems, so let's do it together.
>> -Devananda
>> [0] https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition
>> [1] https://review.openstack.org/#/c/107981/
>> [2] https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst
>> [3] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py
>> [4] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py
>> [5] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py
>> ------------------------------------------------------------------------
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
