[openstack-dev] [nova] Automatic Evacuation

Jay Lau jay.lau.513 at gmail.com
Mon Mar 3 15:28:23 UTC 2014


Yes, it would be great if we could have a simple framework for future
runtime policy plugins. ;-)
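
For the sake of discussion, here is a rough sketch of what such a plugin
interface might look like. All names here are hypothetical -- nothing like
this exists in Nova or Gantt today; it only illustrates "scope the
framework, leave the policies as plugins":

# Hypothetical sketch only: no such runtime-policy plugin API exists in
# Nova or Gantt.  A framework could load implementations of this interface
# (e.g. via stevedore) and run them periodically.
import abc


class RuntimePolicyPlugin(abc.ABC):
    """Interface a runtime-policy framework could load as a plugin."""

    @abc.abstractmethod
    def evaluate(self, host_states):
        """Given a mapping of host -> state, return a list of
        (action, host) tuples for the framework to carry out
        (evacuate, live-migrate, ...)."""


class EvacuateOnHostDownPolicy(RuntimePolicyPlugin):
    """Example HA policy: request evacuation of any host reported down."""

    def evaluate(self, host_states):
        return [('evacuate_host', host)
                for host, state in host_states.items() if state == 'down']


# Usage sketch: the framework collects host states, then runs each policy.
if __name__ == '__main__':
    policy = EvacuateOnHostDownPolicy()
    print(policy.evaluate({'compute-1': 'up', 'compute-2': 'down'}))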

2014-03-03 23:12 GMT+08:00 laserjetyang <laserjetyang at gmail.com>:

> There are a lot of rules for HA (high availability) or LB (load
> balancing), so I think it might be a better idea to scope the framework
> and leave the policies as plugins.
>
>
> On Mon, Mar 3, 2014 at 10:30 PM, Andrew Laski <andrew.laski at rackspace.com> wrote:
>
>> On 03/01/14 at 07:24am, Jay Lau wrote:
>>
>>> Hey,
>>>
>>> Sorry to bring this up again. There are also some discussions here:
>>> http://markmail.org/message/5zotly4qktaf34ei
>>>
>>> You can also search for [Runtime Policy] in the mailing list archives.
>>>
>>> Not sure if we can put this into Gantt and enable Gantt to provide
>>> both initial placement and runtime policies like HA, load balancing,
>>> etc.
>>>
>>
>> I don't have an opinion at the moment as to whether or not this sort of
>> functionality belongs in Gantt, but there's still a long way to go just to
>> get the scheduling functionality we want out of Gantt and I would like to
>> see the focus stay on that.
>>
>>
>>
>>
>>
>>> Thanks,
>>>
>>> Jay
>>>
>>>
>>>
>>> 2014-02-21 21:31 GMT+08:00 Russell Bryant <rbryant at redhat.com>:
>>>
>>>> On 02/20/2014 06:04 PM, Sean Dague wrote:
>>>> > On 02/20/2014 05:32 PM, Russell Bryant wrote:
>>>> >> On 02/20/2014 05:05 PM, Costantino, Leandro I wrote:
>>>> >>> Hi,
>>>> >>>
>>>> >>> Would like to know if there's any interest in having an
>>>> >>> 'automatic evacuation' feature when a compute node goes down. I
>>>> >>> found 3 BPs related to this topic: [1] adding a periodic task
>>>> >>> and using the ServiceGroup API for compute-node status, [2] using
>>>> >>> Ceilometer to trigger the evacuate API, and [3] including some
>>>> >>> kind of H/A plugin by using a 'resource optimization service'.
>>>> >>>
>>>> >>> Most of those BPs have comments like 'this logic should not
>>>> >>> reside in nova', so that's why I am asking what the best
>>>> >>> approach would be to have something like that.
>>>> >>>
>>>> >>> Should this be ignored, and should we just rely on external
>>>> >>> monitoring tools to trigger the evacuation? There are complex
>>>> >>> scenarios that require a lot of logic that won't fit into Nova
>>>> >>> or any other OpenStack component. (For instance: sometimes it
>>>> >>> will be faster to reboot the node or restart nova-compute than
>>>> >>> to start the evacuation, but if that fails X times then trigger
>>>> >>> an evacuation, etc.)
>>>> >>>
>>>> >>> Any thoughts/comments about this?
>>>> >>>
>>>> >>> Regards Leandro
>>>> >>>
>>>> >>> [1] https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
>>>> >>> [2] https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically
>>>> >>> [3] https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
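
As a concrete illustration of the external-monitoring approach described
above (restart nova-compute first, evacuate only after repeated failures),
here is a minimal sketch assuming python-novaclient and admin credentials.
The credentials are placeholders, restart_compute_service() is a
hypothetical hook, and the exact evacuate() arguments vary across
novaclient versions:

# Rough sketch of an *external* watcher, not anything that exists in Nova.
import time

from novaclient import client

MAX_RESTART_ATTEMPTS = 3

# Example credentials/endpoint only -- replace with real ones.
nova = client.Client('2', 'admin', 'secret', 'admin',
                     'http://controller:5000/v2.0')


def restart_compute_service(host):
    """Placeholder: reboot the node or restart nova-compute out of band
    (IPMI, pacemaker, ssh, ...)."""
    pass


def hosts_down():
    """Hosts whose nova-compute service is reported down by Nova."""
    return [svc.host for svc in nova.services.list(binary='nova-compute')
            if svc.state == 'down']


failures = {}
while True:
    for host in hosts_down():
        failures[host] = failures.get(host, 0) + 1
        if failures[host] <= MAX_RESTART_ATTEMPTS:
            restart_compute_service(host)   # often cheaper than evacuating
        else:
            # Give up on the host: evacuate every instance it was running.
            # Exact evacuate() arguments differ between novaclient versions.
            for server in nova.servers.list(
                    search_opts={'host': host, 'all_tenants': 1}):
                nova.servers.evacuate(server, on_shared_storage=True)
            failures.pop(host)
    time.sleep(60)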
>>>> >>
>>>> >> My opinion is that I would like to see this logic done outside of Nova.
>>>> >
>>>> > Right now Nova is the only service that really understands the
>>>> > compute topology of hosts, though its understanding of liveness is
>>>> > really not sufficient to handle this kind of HA thing anyway.
>>>> >
>>>> > I think that's the real problem to solve: how to provide
>>>> > notifications to somewhere outside of Nova on host death. And the
>>>> > question is, should Nova be involved in just that part, keeping
>>>> > track of node liveness and signaling up for someone else to deal
>>>> > with it? Honestly, I'm more on the fence about that part, because
>>>> > putting another service in place just to handle that monitoring
>>>> > seems like overkill.
>>>> >
>>>> > I 100% agree that all of the policy, reaction, and logic for this
>>>> > should be outside of Nova, be it Heat or somewhere else.
>>>>
>>>> I think we agree.  I'm very interested in continuing to enhance Nova
>>>> to make sure that the thing outside of Nova has all of the APIs it
>>>> needs to get the job done.
>>>>
>>>> --
>>>> Russell Bryant
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>>
>>> Jay
>>>
>>
>>
>>
>
>
>


-- 
Thanks,

Jay