[openstack-dev] [Nova] Automatic evacuate

Fei Long Wang feilong at catalyst.net.nz
Mon Oct 13 21:40:44 UTC 2014


I think Adam is talking about this bp:
https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically

For now, we're using a Nagios probe/event handler to trigger the nova
evacuate command, but I think it would be possible to do this in Nova
itself if we can find a good way to define the trigger policy.
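
To give a rough idea of the kind of glue involved (this is only an
illustrative sketch, not our production handler -- the hostnames and
credential handling are placeholders, and the novaclient calls may need
adjusting for your client version):

# Illustrative only: evacuate everything off a compute host that the
# monitoring system has reported as down. Admin credentials are taken
# from the usual OS_* environment variables (e.g. a sourced openrc).
import os
import sys

from novaclient import client as nova_client


def evacuate_host(failed_host, target_host):
    nova = nova_client.Client(
        "2",
        os.environ["OS_USERNAME"],
        os.environ["OS_PASSWORD"],
        os.environ["OS_TENANT_NAME"],
        os.environ["OS_AUTH_URL"])

    # Every instance scheduled on the failed host (admin-only filter).
    servers = nova.servers.list(
        search_opts={"host": failed_host, "all_tenants": 1})

    for server in servers:
        # Shared storage, so the instance disks are reused on the
        # target host rather than rebuilt from the image.
        nova.servers.evacuate(server, target_host, on_shared_storage=True)


if __name__ == "__main__":
    # Hypothetical wiring: the Nagios event handler passes the failed
    # and target hostnames as arguments.
    evacuate_host(sys.argv[1], sys.argv[2])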


On 14/10/14 10:15, Joe Gordon wrote:
>
>
> On Mon, Oct 13, 2014 at 1:32 PM, Adam Lawson <alawson at aqorn.com> wrote:
>
>     Looks like this was proposed for Nova and rejected for some reason
>     last year. Thoughts on why, and is that reasoning (whatever it was)
>     still applicable?
>
>
> Link?
>
>  
>
>
>     Adam Lawson
>
>     AQORN, Inc.
>     427 North Tatnall Street
>     Ste. 58461
>     Wilmington, Delaware 19801-2230
>     Toll-free: (844) 4-AQORN-NOW ext. 101
>     International: +1 302-387-4660
>     Direct: +1 916-246-2072
>
>
>     On Mon, Oct 13, 2014 at 1:26 PM, Adam Lawson <alawson at aqorn.com> wrote:
>
>         [switching to openstack-dev]
>
>         Has anyone automated nova evacuate so that VMs on a failed
>         compute host using shared storage are automatically moved onto
>         a new host, or is manually entering nova evacuate <instance>
>         <host> required in all cases?
>
>         If it's manual only or requires custom Heat/Ceilometer
>         templates, how hard would it be to enable automatic evacuation
>         within Nova?
>
>         i.e. (within /etc/nova/nova.conf)
>         auto_evac = true
>
>         Or is this possible now and I've simply not run across it?
>
>         Adam Lawson
>
>         AQORN, Inc.
>         427 North Tatnall Street
>         Ste. 58461
>         Wilmington, Delaware 19801-2230
>         Toll-free: (844) 4-AQORN-NOW ext. 101
>         International: +1 302-387-4660
>         Direct: +1 916-246-2072
>
>
>         On Sat, Sep 27, 2014 at 12:32 AM, Clint Byrum <clint at fewbar.com> wrote:
>
>             So, what you're looking for is basically the same old IT,
>             but with an API. I get that. For me, the point of this
>             cloud thing is so that server operators can make
>             _reasonable_ guarantees, and application operators can make
>             use of them in an automated fashion.
>
>             If you start guaranteeing 4 and 5 nines for single VMs,
>             you're right back in the boat of spending a lot on server
>             infrastructure even if your users could live without it
>             sometimes.
>
>             Compute hosts are going to go down. Networks are going to
>             partition. It is not actually expensive to deal with that
>             at the application layer. In fact, when you know your
>             business rules, you'll do a better job of doing this
>             efficiently than some blanket "replicate all the things"
>             layer might.
>
>             I know, some clouds are just new ways to chop up these
>             fancy 40-core megaservers that everyone is shipping. I'm
>             sure OpenStack can do it, but I'm saying, I don't think
>             OpenStack _should_ do it.
>
>             Excerpts from Adam Lawson's message of 2014-09-26 20:30:29 -0700:
>             > Generally speaking that's true when you have full control
>             > over how you deploy applications as a consumer. As a
>             > provider, however, cloud resiliency is king and it's
>             > generally frowned upon to associate instances directly
>             > with the underlying physical hardware for any reason.
>             > It's good when instances can come and go as needed, but
>             > in a production context a failed compute host shouldn't
>             > take down every instance hosted on it. Otherwise there is
>             > no real abstraction going on and the cloud loses immense
>             > value.
>             > On Sep 26, 2014 4:15 PM, "Clint Byrum" <clint at fewbar.com> wrote:
>             >
>             > > Excerpts from Adam Lawson's message of 2014-09-26 14:43:40 -0700:
>             > > > Hello fellow stackers.
>             > > >
>             > > > I'm looking for discussions/plans re VM continuity,
>             > > > i.e. protection for instances using ephemeral storage
>             > > > against host failures, or auto-failover capability
>             > > > for instances on hosts where the host suffers from an
>             > > > attitude problem.
>             > > >
>             > > > I know fail-overs are supported and I'm quite certain
>             > > > auto-fail-overs are possible in the event of a host
>             > > > failure (hosting instances not using shared storage).
>             > > > I just can't find where this has been
>             > > > addressed/discussed.
>             > > >
>             > > > Someone help a brother out? ; )
>             > >
>             > > I'm sure some of that is possible, but it's a cloud,
>             > > so why not do things the cloud way?
>             > >
>             > > Spin up redundant bits in disparate availability zones.
>             > > Replicate only what must be replicated. Use volumes for
>             > > DR only when replication would be too expensive.
>             > >
>             > > Instances are cattle, not pets. Keep them alive just
>             > > long enough to make your profit.
>             > >
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-------------------------------------------------------------------------- 
