[openstack-dev] [Heat] in-instance update hooks

Clint Byrum clint at fewbar.com
Tue Feb 11 17:57:16 UTC 2014


Thanks Kevin, great summary.

This is beyond the scope of in-instance notification. I think this is
more like the generic notification API that Thomas Herve suggested in
the rolling updates thread. It can definitely use the same method for
its implementation, and I think it is adjacent to, and not in front of
or behind, the in-instance case.

Excerpts from Fox, Kevin M's message of 2014-02-11 09:22:28 -0800:
> Another scaling down/update use case:
> Say I have a pool of ssh servers for users to use (compute cluster login nodes).
> Autoscaling up is easy. Just launch a new node and add it to the load balancer.
> 
> Scaling down/updating is harder. It should ideally:
>  * Set the node's admin state down on the load balancer, ensuring no new connections are sent to it.
>  * Contact the node or the balancer and wait until all outstanding connections have completed. This could take a very, very long time.
>  * Destroy or update the node.
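The drain-then-remove sequence above could be sketched roughly as follows. Note this is a minimal illustration: the load balancer class and its methods are hypothetical stand-ins, not a real Heat or Neutron LBaaS API.

```python
import time


class LoadBalancerStub:
    """Hypothetical stand-in for a load balancer API, used only to
    illustrate the drain sequence; not a real Heat/Neutron interface."""

    def __init__(self, active_connections):
        self._conns = active_connections
        self.admin_state = {}

    def set_member_admin_state(self, node, state):
        # Mark the member down so no *new* connections reach it.
        self.admin_state[node] = state

    def active_connections(self, node):
        # Existing connections drain over time; this stub simulates
        # one connection finishing per poll.
        if self._conns > 0:
            self._conns -= 1
        return self._conns


def quiesce_and_remove(lb, node, destroy, poll_interval=0.0):
    """Drain a member before destroying it, per the steps above."""
    # 1. Stop new connections from being scheduled to the node.
    lb.set_member_admin_state(node, 'DOWN')
    # 2. Wait for outstanding connections to complete. In a real
    #    deployment this wait is unbounded, which is exactly why
    #    Heat would need to hand control to a hook here rather
    #    than block internally.
    while lb.active_connections(node) > 0:
        time.sleep(poll_interval)
    # 3. Only now destroy (or update) the node.
    destroy(node)
```

The unbounded wait in step 2 is the crux: it is the point where orchestration has to pause and defer to something that knows when the node is actually idle.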
> 
> Thanks,
> Kevin
> ________________________________________
> From: Clint Byrum [clint at fewbar.com]
> Sent: Tuesday, February 11, 2014 8:13 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [Heat] in-instance update hooks
> 
> Excerpts from Steven Dake's message of 2014-02-11 07:19:19 -0800:
> > On 02/10/2014 10:22 PM, Clint Byrum wrote:
> > > Hi, so in the previous thread about rolling updates it became clear that
> > > having in-instance control over updates is a more fundamental idea than
> > > I had previously believed. During an update, Heat does things to servers
> > > that may interrupt the server's purpose, and that may cause it to fail
> > > subsequent things in the graph.
> > >
> > > Specifically, in TripleO we have compute nodes that we are managing.
> > > Before rebooting a machine, we want to have a chance to live-migrate
> > > workloads if possible, or evacuate in the simpler case, before the node
> > > is rebooted. Also in the case of a Galera DB where we may even be running
> > > degraded, we want to ensure that we have quorum before proceeding.
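For the Galera case, the in-instance check that a hook would run before acknowledging the update might look something like this. The wsrep status names are real Galera status variables, but the surrounding plumbing (how the rows are fetched, how the hook signals back) is a hypothetical sketch.

```python
# Hypothetical sketch of an in-instance pre-reboot check: inspect
# Galera cluster state before allowing Heat to proceed. The rows
# would come from e.g. SHOW STATUS LIKE 'wsrep_%'.

def parse_wsrep_status(rows):
    """rows: iterable of (variable_name, value) pairs."""
    return dict(rows)


def safe_to_reboot(rows, expected_cluster_size):
    status = parse_wsrep_status(rows)
    # 'Primary' means this node is part of the quorum component.
    if status.get('wsrep_cluster_status') != 'Primary':
        return False
    # Refuse to proceed if the cluster is already degraded: taking
    # one more node down could cost the cluster its quorum.
    return int(status.get('wsrep_cluster_size', 0)) >= expected_cluster_size
```

A hook that returns False here would simply decline to clear the update, leaving Heat paused until the cluster recovers.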
> > >
> > > I've filed a blueprint for this functionality:
> > >
> > > https://blueprints.launchpad.net/heat/+spec/update-hooks
> > >
> > > I've cobbled together a spec here, and I would very much welcome
> > > edits/comments/etc:
> > >
> > > https://etherpad.openstack.org/p/heat-update-hooks
> > Clint,
> >
> > I read through your etherpad and think there is a relationship to a use
> > case for scaling.  It would be sweet if both these cases could use the
> > same model and tooling.  At the moment in an autoscaling group, when you
> > want to scale "down" there is no way to quiesce the node before killing
> > the VM.  It is the same problem you have with Galera, except your
> > scenario involves an update.
> >
> > I'm not clear how the proposed design could be made to fit this
> > particular use case.  Do you see a way it can fill both roles so we
> > don't have two different ways to do essentially the same thing?
> >
> 
> I see scaling down as an update to a nested stack which contains all of
> the members of the scaling group.
> 
> So if we start focusing on scaling stacks, as Zane suggested in
> the rolling updates thread, then the author of said stack would add
> action_hooks to the resources that are scaled.
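To make that concrete, a template author might annotate a scaled resource along these lines. This is purely hypothetical syntax sketched from the etherpad spec: action_hooks is a proposed property name, not an implemented Heat feature.

```yaml
# Hypothetical sketch only: "action_hooks" is the property name
# proposed in the spec, not something Heat supports today.
resources:
  compute_node:
    type: OS::Nova::Server
    properties:
      image: compute-image
      flavor: baremetal
    # Before Heat acts on this resource during an update or a
    # scale-down, it would pause and signal the instance, letting
    # it live-migrate or evacuate workloads, and continue only
    # after the hook is cleared.
    action_hooks:
      - action: UPDATE
        hook: pre-update
      - action: DELETE
        hook: pre-delete
```

If scale-down is modeled as an update to the nested stack, the same pre-delete hook covers both the quiesce-before-kill case and the update case with no extra machinery.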
> 


