<div dir="ltr">Tim,<div><br></div><div>Regarding this discussion, now there is at least a plan in Heat to allow management of VMs not launched by that service:<br><a href="https://blueprints.launchpad.net/heat/+spec/adopt-stack">https://blueprints.launchpad.net/heat/+spec/adopt-stack</a><br>
</div><div><br></div><div>So hopefully in future HARestarter will allow to support medium availability for all types of instances.</div><div><br></div><div>--</div><div>Best regards,</div><div>Oleg Gelbukh</div><div>Mirantis Labs</div>
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Oct 9, 2013 at 3:28 PM, Tim Bell <span dir="ltr"><<a href="mailto:Tim.Bell@cern.ch" target="_blank">Tim.Bell@cern.ch</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Would the HARestarter approach work for VMs which were not launched by Heat?

We expect to have some applications driven by Heat but lots of others would not be (especially the more 'pet'-like traditional workloads).

Tim

From: Oleg Gelbukh [mailto:ogelbukh@mirantis.com]
Sent: 09 October 2013 13:01
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] automatically evacuate instances on compute failure

Hello,

We have much interest in this discussion (with a focus on the second scenario outlined by Tim) and are working on its design at the moment. Thanks to everyone for the valuable insights in this thread.

It looks like the external orchestration daemon problem is already partially solved by Heat with the HARestarter resource [1].

Hypervisor failure detection is also a more or less solved problem in Nova [2]. There are other candidates for that task as well, such as Ceilometer's hardware agent [3] (still WIP, to my knowledge).
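
For example, an external tool could spot compute hosts that have stopped reporting simply by polling the service list. A minimal, untested sketch with python-novaclient (credentials and endpoint are placeholders):

from novaclient.v1_1 import client

# Placeholder credentials; any admin-capable client would do.
nova = client.Client('admin', 'secret', 'admin',
                     'http://keystone.example.com:5000/v2.0',
                     service_type='compute')

# A nova-compute service is reported 'down' once it stops sending updates.
down_hosts = [svc.host
              for svc in nova.services.list(binary='nova-compute')
              if svc.state == 'down' and svc.status == 'enabled']
print('Compute hosts that look failed: %s' % down_hosts)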

[1] https://github.com/openstack/heat/blob/stable/grizzly/heat/engine/resources/instance.py#L35
[2] http://docs.openstack.org/developer/nova/api/nova.api.openstack.compute.contrib.hypervisors.html#module-nova.api.openstack.compute.contrib.hypervisors
[3] https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices
--
Best regards,
Oleg Gelbukh
Mirantis Labs

On Wed, Oct 9, 2013 at 9:26 AM, Tim Bell <Tim.Bell@cern.ch> wrote:
I have proposed a summit design session for Hong Kong (http://summit.openstack.org/cfp/details/103) to discuss exactly these sorts of points. We have the low-level Nova commands but need a service to automate the process.

I see two scenarios:

- A hardware intervention needs to be scheduled: please rebalance this workload elsewhere before it fails completely.
- A hypervisor has failed: please recover what you can using shared storage and give me a policy on what to do with the other VMs (restart, leave down till repair, etc.).

Most OpenStack production sites have some sort of script doing this sort of thing now. However, each one implements the migration logic differently, so there is no agreed best-practice approach.
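
For the first scenario, such a script often amounts to draining the host with live migration. An untested sketch with python-novaclient (host names and credentials are placeholders, and block_migration really needs to be chosen per instance depending on its storage):

from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin',
                     'http://keystone.example.com:5000/v2.0',
                     service_type='compute')

doomed_host = 'compute-42'   # host with the scheduled intervention (placeholder)
target_host = 'compute-43'   # placeholder target for the migrated instances

# Move every instance off the host before the intervention starts.
for server in nova.servers.list(search_opts={'host': doomed_host,
                                             'all_tenants': 1}):
    server.live_migrate(host=target_host,
                        block_migration=False,   # assumes shared storage
                        disk_over_commit=False)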

Tim

> -----Original Message-----
> From: Chris Friesen [mailto:chris.friesen@windriver.com]
> Sent: 09 October 2013 00:48
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] automatically evacuate instances on compute failure
>
> On 10/08/2013 03:20 PM, Alex Glikson wrote:
> > Seems that this can be broken into 3 incremental pieces. First, it would
> > be great if the ability to schedule a single 'evacuate' would be
> > finally merged
> > (https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance).
>
> Agreed.
>
> > Then, it would make sense to have the logic that evacuates an entire
> > host
> > (https://blueprints.launchpad.net/python-novaclient/+spec/find-and-evacuate-host).
> > The reasoning behind suggesting that this should not necessarily be in
> > Nova is, perhaps, that it *can* be implemented outside Nova using the
> > individual 'evacuate' API.
>
> This more or less exists already in the "nova host-evacuate" command. One major issue with this, however, is that it
> requires the caller to specify whether all the instances are on shared or local storage, so it can't handle a mix of local and shared
> storage for the instances. If any of them boot off block storage, for instance, you need to move them first and then do the remaining ones as a group.
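>
> To illustrate, driving evacuate per instance avoids forcing one flag onto everything. An untested sketch with python-novaclient, where is_on_shared_storage() is a hypothetical helper, i.e. exactly the knowledge that is missing today:
>
> from novaclient.v1_1 import client
>
> nova = client.Client('admin', 'secret', 'admin',
>                      'http://keystone.example.com:5000/v2.0',
>                      service_type='compute')
>
> failed_host = 'compute-42'   # placeholder
> target_host = 'compute-43'   # placeholder; picking a target is a separate problem
>
> for server in nova.servers.list(search_opts={'host': failed_host,
>                                              'all_tenants': 1}):
>     # Hypothetical helper -- today this knowledge lives outside Nova.
>     shared = is_on_shared_storage(server)
>     nova.servers.evacuate(server, target_host, on_shared_storage=shared)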
>
> It would be nice to embed the knowledge of whether or not an instance is on shared storage in the instance itself at creation time. I
> envision specifying this in the config file for the compute manager along with the instance storage location, and the compute manager
> could set the field in the instance at creation time.
>
> > Finally, it should be possible to close the loop and invoke the
> > evacuation automatically as a result of a failure detection (not clear
> > how exactly this would work, though). Hopefully we will have at least
> > the first part merged soon (not sure if anyone is actively working on
> > a rebase).
>
> My interpretation of the discussion so far is that the nova maintainers would prefer this to be driven by an outside orchestration daemon.
>
> Currently the only way a service is recognized to be "down" is if someone calls is_up() and it notices that the service hasn't sent an update
> in the last minute. There's nothing in nova actively scanning for compute node failures, which is where the outside daemon comes in.
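>
> Roughly, that check amounts to the following (illustrative sketch only, assuming a service record with datetime fields as in Nova's DB layer; service_down_time defaults to 60 seconds):
>
> from datetime import datetime, timedelta
>
> SERVICE_DOWN_TIME = timedelta(seconds=60)   # nova.conf service_down_time default
>
> def seems_down(service):
>     # A service is treated as 'down' once its last report is too old.
>     last_seen = service.updated_at or service.created_at
>     return datetime.utcnow() - last_seen > SERVICE_DOWN_TIME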
>
> Also, there is some complexity involved in dealing with auto-evacuate:
> What do you do if an evacuate fails? How do you recover intelligently if there is no admin involved?
>
> Chris
>