[openstack-dev] [ironic]Ironic operations on nodes in maintenance mode

Shraddha Pandhe spandhe.openstack at gmail.com
Tue Nov 24 19:51:52 UTC 2015


On Tue, Nov 24, 2015 at 7:39 AM, Jim Rollenhagen <jim at jimrollenhagen.com>
wrote:

> On Mon, Nov 23, 2015 at 03:35:58PM -0800, Shraddha Pandhe wrote:
> > Hi,
> >
> > I would like to know how everyone is using maintenance mode and what
> > admins are expected to do with nodes in maintenance. The reason I am
> > bringing up this topic is that most Ironic operations, including
> > manual cleaning, are not allowed for nodes in maintenance. That's a
> > problem for us.
> >
> > The way we use it is as follows:
> >
> > We allow users to put nodes in maintenance mode (indirectly) if they
> > find something wrong with the node. They also provide a maintenance
> > reason along with it, which gets stored as "user_reason" under
> > maintenance_reason, so we can tell it was a user-specified reason.
> >
> > To debug what happened to a node, our operators use manual cleaning to
> > re-image it. By doing this, they can find all the issues related to
> > re-imaging (DHCP, IPMI, image transfer, etc.). This debugging process
> > applies to all nodes that were put in maintenance, whether by a user
> > or by the system (due to a power cycle failure or a cleaning failure).
>
> Interesting; do you let the node go through cleaning between the user
> nuking the instance and doing this manual cleaning stuff?
>

Do you mean automated cleaning? If so, yes, we let that go through, since
that's allowed in maintenance mode.

>
> At Rackspace, we leverage the fact that maintenance mode will not allow
> the node to proceed through the state machine. If a user reports a
> hardware issue, we maintenance the node on their behalf, and when they
> delete it, it boots the agent for cleaning and begins heartbeating.
> Heartbeats are ignored in maintenance mode, which gives us time to
> investigate the hardware, fix things, etc. When the issue is resolved,
> we remove maintenance mode, it goes through cleaning, then back in the
> pool.


What is the provision state when maintenance mode is removed? Does it
automatically go back into the available pool? How does a user report a
hardware issue?

At our scale, we can't always guarantee that someone will take care of the
node right away, so we have some automation to make sure the user's quota
is freed.

1. If a user finds a problem with the node, the user calls our break-fix
extension (with a reason for the break-fix), which deletes the instance for
the user and frees the quota.
2. Internally, Nova deletes the instance and calls destroy on the virt
driver. This follows the normal delete flow, with automated cleaning.
3. We have an automated tool called Reparo which constantly monitors the
node list for nodes in maintenance mode.
4. If it finds any nodes in maintenance, it runs one round of manual
cleaning on them to check whether the issue was transient.
5. If cleaning fails, we need someone to take a look at the node.
6. If cleaning succeeds, we put the node back in the available pool.

This is the only way we can scale to hundreds of thousands of nodes. If
manual cleaning were not allowed in maintenance mode, our operators would
hate us :)

If the node's provision state is such that it cannot be picked up by the
scheduler, we can remove maintenance mode and run manual cleaning.
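The triage steps above can be sketched as a simple loop. This is a
hypothetical reconstruction, not Reparo's actual code (which is internal);
the node dict shape and the run_manual_cleaning callback are assumptions
standing in for the real Ironic node-listing and manual-cleaning calls:

```python
def triage_maintenance_nodes(nodes, run_manual_cleaning):
    """Classify nodes in maintenance: retry once to catch transient
    failures, escalate persistent ones to a human operator."""
    returned, escalated = [], []
    for node in nodes:
        if not node.get("maintenance"):
            continue  # only nodes in maintenance mode need triage
        # One round of manual cleaning checks whether the fault was transient.
        if run_manual_cleaning(node["uuid"]):
            returned.append(node["uuid"])   # cleaning passed: back to the pool
        else:
            escalated.append(node["uuid"])  # cleaning failed: needs an operator
    return returned, escalated

# Toy usage with stubbed-out data: only "node-a" passes cleaning.
nodes = [
    {"uuid": "node-a", "maintenance": True},
    {"uuid": "node-b", "maintenance": False},
    {"uuid": "node-c", "maintenance": True},
]
result = triage_maintenance_nodes(nodes, lambda uuid: uuid == "node-a")
```

In the real deployment the callback would drive Ironic's manual-cleaning
cycle and block until it succeeds or fails, but the decision logic is the
same.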


> We used to enroll nodes in maintenance mode, back when the API put them
> in the available state immediately, to avoid them being scheduled to
> until we knew they were good to go. The enroll state solved this for us.
>
> Last, we use maintenance mode on available nodes if we want to
> temporarily pull them from the pool for a manual process or some
> testing. This can also be solved by the manageable state.
>
> // jim
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
