[openstack-dev] [Fuel] [ironic] [inspector] Rewriting nailgun agent on Python proposal
Vladimir Kozhukalov
vkozhukalov at mirantis.com
Fri Mar 18 17:59:07 UTC 2016
Sorry, typo: *cloud case does NOT assume running any kind of agent inside
user instance
Vladimir Kozhukalov
On Fri, Mar 18, 2016 at 7:26 PM, Vladimir Kozhukalov <
vkozhukalov at mirantis.com> wrote:
> >Well, there's a number of reasons. Ironic is not meant only for an
> >"undercloud" (deploying OpenStack on ironic instances). There are both
> >public and private cloud deployments of ironic in production today, that
> >make bare metal instances available to users of the cloud. Those users
> >may not want an agent running inside their instance, and more
> >importantly, the operators of those clouds may not want to expose the
> >ironic or inspector APIs to their users.
>
> >I'm not sure ironic should say "no, that isn't allowed" but at a minimum
> >it would need to be opt-in behavior.
>
> For me it's absolutely clear why the cloud case does NOT assume running any
> kind of agent inside the user instance. It is also clear why the cloud case
> does not assume exposing the API to the user instance. But cloud is not the
> only case that exists. Fuel is a deployment tool; the Fuel case is not cloud.
> It is 'cattle' (cattle vs. pets), but it is not cloud in the sense that its
> instances are 'user instances'. Fuel 'user instances' are not even 'user'
> instances: Fuel manages the content of instances throughout their whole
> life cycle.
>
> As you might remember, we talked about this about two years ago (when we
> tried to contribute the lvm and md features to IPA). I don't know why this
> case (deployment) was rejected again and again while it is still viable and
> widely used, and I don't know why it could not be implemented as 'opt-in'.
> Since then we have invented our own fuel-agent (which supports lvm and md)
> and a driver for the Ironic conductor that allows Ironic to be used with
> fuel-agent.
>
> >Is the fuel team having a summit session of some sort about integrating
> >with ironic better? I'd be happy to come to that if it can be scheduled
> >at a time that ironic doesn't have a session. Otherwise maybe we can
> >catch up on Friday or something.
>
> >I'm glad to see Fuel wanting to integrate better with Ironic.
>
> We are still quite interested in closer integration with Ironic (we need
> the power management features that Ironic provides). We'll be happy to
> schedule yet another discussion on closer integration with Ironic.
>
> BTW, about a year ago (in Grenoble) we agreed that it is not even necessary
> to merge such custom things into the Ironic tree; happily, Ironic is smart
> enough to consume drivers using stevedore. The case is the same for
> ironic-inspector: whether we run the agent inside a 'user instance' or
> inside the ramdisk does not affect ironic-inspector itself. If the Ironic
> team is open to merging "non-cloud" features (as 'opt-in', of course),
> we'll be happy to contribute.
>
> Vladimir Kozhukalov
>
> On Fri, Mar 18, 2016 at 6:03 PM, Jim Rollenhagen <jim at jimrollenhagen.com>
> wrote:
>
>> On Fri, Mar 18, 2016 at 05:26:13PM +0300, Evgeniy L wrote:
>> > On Thu, Mar 17, 2016 at 3:16 PM, Dmitry Tantsur <dtantsur at redhat.com>
>> wrote:
>> >
>> > > On 03/16/2016 01:39 PM, Evgeniy L wrote:
>> > >
>> > >> Hi Dmitry,
>> > >>
>> > >> I can try to provide you a description of what the current Nailgun
>> > >> agent is, and what potential requirements we may have for a HW
>> > >> discovery system.
>> > >>
>> > >> The Nailgun agent is a one-file Ruby script [0] which is periodically
>> > >> run under cron. It collects information about HW using ohai [1], plus
>> > >> it does custom parsing, filtering, and retrieval of HW information.
>> > >> After the information is collected, it is sent to Nailgun; that is
>> > >> how a node gets discovered in Fuel.
>> > >>
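The collect-and-report loop described above can be sketched in Python (the proposal's target language). This is a minimal illustration only: the endpoint URL, HTTP method, and payload shape are assumptions, not the real Nailgun API, and the collection step stands in for what ohai actually gathers.

```python
import json
import socket
import urllib.request

# Hypothetical endpoint; the real Nailgun API differs.
NAILGUN_URL = "http://10.20.0.2:8000/api/nodes/agent"


def collect_hw_info():
    """Gather a minimal hardware/identity snapshot (stand-in for ohai).

    A real agent would add CPUs, NICs, disks, memory, and custom-parsed
    vendor data here.
    """
    return {"hostname": socket.gethostname()}


def report(url=NAILGUN_URL):
    """Send the snapshot to Nailgun; cron would invoke this periodically."""
    body = json.dumps(collect_hw_info()).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    # On success the node appears (or is updated) in Nailgun.
    return urllib.request.urlopen(req)
```

Cron periodically runs `report()`, which is how a node first shows up in Fuel.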
>> > >
>> > > Quick clarification: does it run on user instances, or does it run on
>> > > hardware that has not yet been deployed to? The former is something
>> > > that Ironic tries not to do. There is interest in the latter.
>> >
>> >
>> > Both: on user instances (with OpenStack deployed) and on instances
>> > which are not yet deployed and are running the bootstrap image.
>> > What are the reasons Ironic tries not to do that (running HW discovery
>> > on a deployed node)?
>>
>> Well, there's a number of reasons. Ironic is not meant only for an
>> "undercloud" (deploying OpenStack on ironic instances). There are both
>> public and private cloud deployments of ironic in production today, that
>> make bare metal instances available to users of the cloud. Those users
>> may not want an agent running inside their instance, and more
>> importantly, the operators of those clouds may not want to expose the
>> ironic or inspector APIs to their users.
>>
>> I'm not sure ironic should say "no, that isn't allowed" but at a minimum
>> it would need to be opt-in behavior.
>>
>> >
>> >
>> > >
>> > >
>> > >> To summarise the entire process:
>> > >> 1. After the Fuel master node is installed, the user restarts the
>> > >> nodes and they get booted via PXE with the bootstrap image.
>> > >> 2. Inside the bootstrap image the Nailgun agent is installed and
>> > >> configured.
>> > >> 3. Cron runs the Nailgun agent.
>> > >> 4. Information is collected by the Nailgun agent.
>> > >> 5. The information is sent to Nailgun.
>> > >> 6. Nailgun creates a new node, for which the user can define the
>> > >> partitioning schema and network allocation via the UI.
>> > >> 7. After that, provisioning/deployment can be run.
>> > >>
>> > >
>> > > So it looks quite similar to ironic-inspector + IPA, except that
>> > > introspection runs once. Rerunning it would not be impossible to
>> > > implement, though it will require some changes to inspector so that
>> > > it can accept "updates" to a node after the introspection is finished.
>> > >
>> > >
>> > >> Every time the Nailgun agent sends a request, we record the time the
>> > >> last request from the agent was made; if there has been no request
>> > >> for time N, we mark the node as offline.
>> > >>
>> > >
>> > > This is similar to IPA heartbeating, I guess.
>> > >
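The timeout-based liveness rule described above ("no request for time N, mark the node offline") can be sketched as follows; the class and field names are illustrative, not Nailgun's actual implementation.

```python
import time

# Illustrative threshold: seconds without a report before "offline".
OFFLINE_AFTER = 90.0


class NodeRegistry:
    """Track the last report time per node and derive online/offline."""

    def __init__(self, timeout=OFFLINE_AFTER):
        self.timeout = timeout
        self.last_seen = {}  # node_id -> timestamp of last agent request

    def heartbeat(self, node_id, now=None):
        """Record that the agent on node_id just checked in."""
        self.last_seen[node_id] = time.time() if now is None else now

    def is_online(self, node_id, now=None):
        """A node is online iff it reported within the timeout window."""
        now = time.time() if now is None else now
        seen = self.last_seen.get(node_id)
        return seen is not None and (now - seen) <= self.timeout
```

As item 7 below points out, this check really only proves connectivity between the agent and Nailgun, not that the node itself is healthy.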
>> > >
>> > >> With the current implementation we have several problems (not all of
>> > >> them should be addressed by the HW discovery system alone):
>> > >>
>> > >> 1. A lot of things are hardcoded on the agent's side, which does
>> > >> additional filtering based on pre-hardcoded parameters [2]. The less
>> > >> hardcoded logic we have in the agent, the easier it is to do upgrades
>> > >> and deliver fixes (upgrading one service is simpler than upgrading
>> > >> hundreds of agents).
>> > >>
>> > >
>> > > Oh, I hear you. In the inspector world we are moving away from
>> > > processing things on the ramdisk side for exactly this reason: it's
>> > > too hard to change.
>> > >
>> > >
>> > >> 2. In order to get additional HW information, the user has to keep
>> > >> hardcoding it right in Ruby code. As a result, there is no way for a
>> > >> Fuel plugin [3] to get additional hardware-specific information; we
>> > >> need a data-driven mechanism to be able to describe what/how/where
>> > >> to get information from the node.
>> > >>
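One possible shape for such a data-driven mechanism, purely as an illustration (this is not an existing Fuel or inspector format): a plugin ships a declarative spec mapping attribute names to commands, and a generic runner executes it, so collecting a new attribute needs no agent code change.

```python
import subprocess

# Hypothetical declarative spec a plugin could ship instead of Ruby code.
# Keys and commands here are illustrative.
HW_SPEC = {
    "kernel": ["uname", "-r"],
    "cpu_count": ["nproc"],
}


def run_spec(spec):
    """Run each command in the spec and collect its stripped stdout.

    Unavailable or failing commands yield None rather than crashing the
    whole collection run.
    """
    result = {}
    for name, cmd in spec.items():
        try:
            proc = subprocess.run(cmd, capture_output=True, text=True)
        except FileNotFoundError:
            result[name] = None  # command not present on this system
            continue
        result[name] = proc.stdout.strip() if proc.returncode == 0 else None
    return result
```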
>> > >
>> > > Hmm, interesting. Right now we have a plugin mechanism for the
>> > > ramdisk. We also have a plugin (extra-hardware) that tries to collect
>> > > as much information as is feasible (based on
>> > > https://github.com/redhat-cip/hardware).
>> > >
>> >
>> > Could you please provide a link where I can learn more about the plugin
>> > mechanism for the ramdisk?
>>
>> When IPA does inspection, it sends the inventory as reported by the
>> hardware managers. When building a ramdisk, you can include out-of-tree
>> hardware managers, and each hardware manager is called to fetch
>> inventory.
>>
>> Docs:
>> http://docs.openstack.org/developer/ironic-python-agent/#hardware-inventory
>> Example out-of-tree hardware managers:
>>
>> https://github.com/openstack/proliantutils/tree/master/proliantutils/ipa_hw_manager
>> https://github.com/rackerlabs/onmetal-ironic-hardware-manager
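The pattern behind those hardware managers, reduced to a self-contained sketch: each manager declares how well it supports the current hardware, and the inventory is assembled by asking every registered manager, with more specific managers overriding generic ones. The class and method names below mimic IPA's HardwareManager interface but are simplified stand-ins; see the docs linked above for the real API.

```python
class HardwareSupport:
    """Support levels, loosely modelled on IPA's HardwareSupport values."""
    NONE, GENERIC, MAINLINE, SERVICE_PROVIDER = 0, 1, 2, 3


class GenericManager:
    """In-tree-style manager: works everywhere, knows nothing special."""

    def evaluate_hardware_support(self):
        return HardwareSupport.GENERIC

    def collect(self):
        return {"cpus": 4}  # stand-in for real probing


class VendorManager:
    """An 'out-of-tree' manager adding vendor-specific facts."""

    def evaluate_hardware_support(self):
        return HardwareSupport.SERVICE_PROVIDER

    def collect(self):
        return {"raid_firmware": "1.2.3"}  # stand-in


def gather_inventory(managers):
    """Merge inventory from all managers that support this hardware.

    Lower-support managers run first, so more specific managers can
    override any keys the generic ones produced.
    """
    inventory = {}
    for mgr in sorted(managers, key=lambda m: m.evaluate_hardware_support()):
        if mgr.evaluate_hardware_support() > HardwareSupport.NONE:
            inventory.update(mgr.collect())
    return inventory
```

Building a ramdisk with an extra hardware manager amounts to adding another class like `VendorManager` to the set that `gather_inventory` consults.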
>>
>> >
>> >
>> > >
>> > > On the other side, there is ongoing work to have an ansible-based
>> > > deploy ramdisk in Ironic; maybe inspector could benefit from it too.
>> > > I haven't thought about it yet; it would be interesting to discuss at
>> > > the summit.
>> >
>> >
>> > And here I would appreciate any link to get more context (I was only
>> > able to find a playbook for Ironic installation).
>> > In Fuel we had an idea to implement tasks (abstracted from any specific
>> > deployment tool) to do configuration and get information about specific
>> > hardware.
>>
>> The spec is in review, from some Mirantis folks in fact:
>> https://review.openstack.org/#/c/241946/
>>
>> >
>> >
>> > >
>> > >
>> > >
>> > >> 3. Hardware gets changed; we have cases where NICs, HDDs, and
>> > >> motherboards are replaced/removed/added. As a result, we need a tool
>> > >> which would let us see these changes and when they happened; based
>> > >> on that, we want to be able to notify the user and provide
>> > >> suggestions on how to proceed with these changes.
>> > >>
>> > >
>> > > This could probably be done with a new ironic-inspector plugin.
>> > >
>> > >
>> > >> 4. Related to the 3rd item, in real-world cases we have a problem of
>> > >> node identification: when HW gets changed and automatic matching
>> > >> does not happen (when we cannot say for sure that this is a node we
>> > >> have already seen), the user should be able to say that new node X
>> > >> is in fact offline node Y.
>> > >>
>> > >
>> > > Very good question. Right now inspector is using either the BMC IP
>> > > address or the MACs.
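That matching rule (BMC IP address first, then any overlapping MAC) can be sketched as below; the record shapes are illustrative, not inspector's actual data model.

```python
def find_existing_node(incoming, known_nodes):
    """Match an introspected node against known node records.

    Tries the BMC IP first, then any overlapping NIC MAC, mirroring the
    behaviour described above. Returns the matched record or None, in
    which case an operator would map it manually ("new node X is in fact
    offline node Y").
    """
    # Pass 1: exact BMC IP match.
    for node in known_nodes:
        if incoming.get("bmc_ip") and incoming["bmc_ip"] == node.get("bmc_ip"):
            return node
    # Pass 2: any shared MAC address (case-insensitive).
    in_macs = {m.lower() for m in incoming.get("macs", [])}
    for node in known_nodes:
        if in_macs & {m.lower() for m in node.get("macs", [])}:
            return node
    return None
```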
>> > >
>> > >
>> > >> 5. Different sources of HW information: we want a system which would
>> > >> allow us to do hardware discovery from IPMI, a CSV file, Cobbler, a
>> > >> CMDB, etc., at the same time.
>> > >>
>> > >
>> > > Not sure something like that should live within Ironic, to be honest.
>> > > Also worth discussing in detail.
>> > >
>> > >
>> > >> 6. Not only the hardware changes, but also the operating system (and
>> > >> kernel version). For example, when we used CentOS for the bootstrap
>> > >> (in the bootstrap we do provisioning/partitioning + initial
>> > >> configuration) and Ubuntu for running OpenStack, we got a wide range
>> > >> of weird problems, from NIC renaming to disk id duplication and
>> > >> deduplication. There should be a way to track these problems (see
>> > >> the 3rd item), and we should be able to add an OS-specific mechanism
>> > >> to get HW information.
>> > >>
>> > >> 7. The cron + agent based mechanism for deciding whether a node is
>> > >> offline is not the best, since it adds race conditions and in fact
>> > >> only tells us whether there is connectivity between Nailgun and the
>> > >> Nailgun agent.
>> > >>
>> > >
>> > > We are thinking about using some DLM for that. No specific plans
>> > > though; again, a topic for the summit.
>> > >
>> > >
>> > >> Will be glad to answer any questions you have, if there are any.
>>
>> Is the fuel team having a summit session of some sort about integrating
>> with ironic better? I'd be happy to come to that if it can be scheduled
>> at a time that ironic doesn't have a session. Otherwise maybe we can
>> catch up on Friday or something.
>>
>> I'm glad to see Fuel wanting to integrate better with Ironic.
>>
>> // jim
>>
>> > >>
>> > >> Thanks,
>> > >>
>> > >> [0] https://github.com/openstack/fuel-nailgun-agent/blob/master/agent
>> > >> [1] https://docs.chef.io/ohai.html
>> > >> [2] https://github.com/openstack/fuel-nailgun-agent/blob/master/agent#L46-L51
>> > >> [3] https://wiki.openstack.org/wiki/Fuel/Plugins
>> > >>
>> > >>
>> > >> On Wed, Mar 16, 2016 at 1:39 PM, Dmitry Tantsur
>> > >> <dtantsur at redhat.com> wrote:
>> > >>
>> > >> On 03/15/2016 01:53 PM, Serge Kovaleff wrote:
>> > >>
>> > >> Dear All,
>> > >>
>> > >> Let's compare the functional abilities of both solutions.
>> > >>
>> > >> Until the recent Mitaka release, ironic-inspector had only the
>> > >> introspection ability. The discovery part has been proposed and
>> > >> implemented by Anton Arefiev. We should align expectations with the
>> > >> current and future functionality.
>> > >>
>> > >> Adding tags to attract the inspector community.
>> > >>
>> > >>
>> > >> Hi!
>> > >>
>> > >> It would be great to see what we can do to fit the Nailgun use case.
>> > >> Unfortunately, I don't know much about it right now. What are you
>> > >> missing?
>> > >>
>> > >>
>> > >> Cheers,
>> > >> Serge Kovaleff
>> > >> http://www.mirantis.com
>> > >> cell: +38 (063) 83-155-70
>> > >>
>> > >> On Tue, Mar 15, 2016 at 2:07 PM, Alexander Saprykin
>> > >> <asaprykin at mirantis.com> wrote:
>> > >>
>> > >> Dear all,
>> > >>
>> > >> Thank you for the opinions about this problem.
>> > >>
>> > >> I would agree with Roman that it is always better to reuse
>> > >> existing solutions than to re-invent the wheel. We should
>> > >> investigate the possibility of using ironic-inspector and
>> > >> integrating it into Fuel.
>> > >>
>> > >> Best regards,
>> > >> Alexander Saprykin
>> > >>
>> > >> 2016-03-15 13:03 GMT+01:00 Sergii Golovatiuk
>> > >> <sgolovatiuk at mirantis.com>:
>> > >>
>> > >> My strong +1 to dropping nailgun-agent completely in
>> > >> favour of ironic-inspector, even taking into consideration
>> > >> that we'll need to extend ironic-inspector for Fuel's
>> > >> needs.
>> > >>
>> > >> --
>> > >> Best regards,
>> > >> Sergii Golovatiuk,
>> > >> Skype #golserge
>> > >> IRC #holser
>> > >>
>> > >> On Tue, Mar 15, 2016 at 11:06 AM, Roman Prykhodchenko
>> > >> <me at romcheg.me> wrote:
>> > >>
>> > >> My opinion on this is that we have too many re-invented
>> > >> wheels in Fuel, and it's better to think about replacing
>> > >> them with something we can re-use than to re-invent them
>> > >> one more time.
>> > >>
>> > >> Let's take a look at Ironic and try to figure out how we
>> > >> can use its features for the same purpose.
>> > >>
>> > >>
>> > >> - romcheg
>> > >> > On 15 Mar 2016 at 10:38, Neil Jerram
>> > >> > <Neil.Jerram at metaswitch.com> wrote:
>> > >>
>> > >> >
>> > >> > On 15/03/16 07:11, Vladimir Kozhukalov wrote:
>> > >> >> Alexander,
>> > >> >>
>> > >> >> We have many other places where we use Ruby (astute,
>> > >> >> puppet custom types, etc.). I don't think it is a good
>> > >> >> reason to rewrite something just because it is written
>> > >> >> in Ruby. You are right about tests and about plugins,
>> > >> >> but let's look around. The Ironic community has already
>> > >> >> invented a discovery component (btw, written in Python),
>> > >> >> and I can't see any reason why we should continue
>> > >> >> putting effort into the Nailgun agent rather than try
>> > >> >> to switch to ironic-inspector.
>> > >> >
>> > >> > +1 in general terms. It's strange to me that there are
>> > >> > so many OpenStack deployment systems that each do every
>> > >> > piece of the puzzle in their own way (Fuel, Foreman,
>> > >> > MAAS/Juju, etc.) - which also means that I need
>> > >> > substantial separate learning in order to use all these
>> > >> > systems. It would be great to see some consolidation.
>> > >> >
>> > >> > Regards,
>> > >> > Neil
>> > >> >
>> > >> >
>> > >> >
>> > >>
>> > >>
>> > >>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev