<div dir="ltr">Hi, Roman,<div><br></div><div>Evacuated has been on my radar for a while and this post has prodded me to take a look at the code. I think it's worth starting by explaining the problems in the current solution. Nova client is currently responsible for doing this evacuate. It does:</div><div><br></div><div>1. List all instances on the source host</div><div>2. Initiate evacuate for each instance</div><div><br></div><div>Evacuating a single instance does:</div><div><br></div><div>API:</div><div>1. Set instance task state to rebuilding</div><div>2. Create a migration record with source and dest if specified</div><div><br></div><div>Conductor:</div><div>3. Call the scheduler to get a destination host if not specified</div><div>4. Get the migration object from the db</div><div><br></div><div>Compute:</div><div>5. Rebuild the instance on dest</div><div>6. Update instance.host to dest</div><div><br></div><div>Examining single instance evacuation, the first obvious thing to look at is what if 2 happen simultaneously. Because step 1 is atomic, it should not be possible to initiate 2 evacuations simultaneously of a single instance. However, note that this atomic action hasn't updated the instance host, meaning the source host remains the owner of this instance. If the evacuation process fails to complete, the source host will automatically delete it if it comes back up because it will find a migration record, but it will not be rebuilt anywhere else. Evacuating it again will fail, because its task state is already rebuilding.</div><div><br></div><div>Also, let's imagine that the conductor crashes. There is not enough state for any tool, whether internal or external, to be able to know if the rebuild is ongoing somewhere or not, and therefore whether it is safe to retry even if that retry would succeed, which it wouldn't.</div><div><br></div><div>Which is to say that we can't currently robustly evacuate one instance!</div><div><br></div><div>Looking at the nova client side, there is an obvious race there: there is no guarantee in step 2 that instances returned in step one have not already been evacuated by another process. We're protected here, though because evacuating a single instance twice will fail the second time. Note that the process isn't idempotent, though, because an evacuation which falls into a hole will never be retried.</div><div><br></div><div>Moving on to what evacuated does. Evacuated uses rabbit to distribute jobs reliably. There are 2 jobs in evacuated:</div><div><br></div><div>1. Evacuate host:</div><div> 1.1 Get list of all instances on the source host from Nova</div><div> 1.2 Send an evacuate vm job for each instance</div><div>2. Evacuate vm:</div><div> 2.1 Tell Nova to start evacuating an instance</div><div><br></div><div>Because we're using rabbit as a reliable message bus, the initiator of one of the tasks knows that it will eventually run to completion at least once. Note that there's nothing to prevent the task being executed more than once per call, though. A task may crash before sending an ack, or may just be really slow. However, in both cases, for exactly the same reasons as for the implementation in nova client, running more than once should not race. It is still not idempotent, though, again for exactly the same reasons as nova client.</div><div><br></div><div>Also notice that, exactly as in the nova client implementation, we are not asserting that an instance has been evacuated. 
In other words, in terms of robustness, calling evacuated's evacuate host is identical to asserting that nova client's evacuate host ran to completion at least once, which is quite a lot simpler to do. That's still not very robust, though: we don't recover from failures, and we don't ensure that an instance is evacuated, only that we started an attempt to evacuate at least once. I'm obviously not satisfied with nova client; however, as its implementation is simpler, I would favour it over evacuated.

I believe we can solve this problem, but I think that without fixing single-instance evacuate we're just pushing the problem around (or creating new places for it to live). I would base the robustness of my implementation on a single principle:

 An instance has a single owner, which is exclusively responsible for rebuilding it.

In outline, I would redefine the evacuate process to do:

API:
1. Call the scheduler to get a destination for the evacuate if none was given.
2. Atomically update instance.host to this destination, and the task state to rebuilding.

Compute:
3. Rebuild the instance.

This would be supported by a periodic task on the compute host which looks for instances assigned to this host that are in the rebuilding task state but have no rebuild actually in progress, and kicks off a rebuild for them (a rough sketch is at the end of this mail). This would cover the compute going down during a rebuild, or the API going down before messaging the compute.

Implementing this gives us several things:

1. The list instances, evacuate all instances process becomes idempotent, because as soon as the evacuate is initiated, the instance is removed from the source host.
2. We get automatic recovery from failure of the target compute. Because we atomically moved the instance to the target compute immediately, if the target compute also has to be evacuated, our instance won't fall through the gap.
3. We don't need an additional place for the code to run, because it will run on the compute. All the work has to be done by the compute anyway. By farming the evacuates out directly and immediately to the target compute we reduce both overhead and complexity.

The coordination becomes very simple. If we've run the nova client evacuation anywhere at least once, the actual evacuations are now Somebody Else's Problem (to quote h2g2), and will complete eventually. As evacuation in any case involves a forced change of owner, it requires fencing of the source host and implies an external agent such as pacemaker. The nova client evacuation can run in pacemaker.
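For concreteness, here is roughly what that periodic recovery task could look like. This is a sketch, not a working patch: the decorator and object calls mirror nova's existing patterns, but _rebuilds_in_progress and _kick_off_rebuild are names I've invented for illustration.

    # Hypothetical sketch of the compute-side recovery task. The decorator
    # and the objects/task_states imports follow nova's existing patterns;
    # _rebuilds_in_progress and _kick_off_rebuild are invented placeholders.
    from oslo_service import periodic_task

    from nova import objects
    from nova.compute import task_states


    class ComputeManager(object):
        # In reality this would live on nova's existing ComputeManager,
        # which already has a host attribute and a periodic task runner.

        def __init__(self, host):
            self.host = host
            # uuids of instances this host is actively rebuilding right now
            self._rebuilds_in_progress = set()

        @periodic_task.periodic_task(spacing=60)
        def _recover_orphaned_rebuilds(self, context):
            # Instances atomically assigned to this host with task state
            # rebuilding, but which nothing here is working on (for
            # example, the API's message was lost, or this compute
            # restarted mid-rebuild).
            instances = objects.InstanceList.get_by_host(context, self.host)
            for instance in instances:
                if (instance.task_state == task_states.REBUILDING
                        and instance.uuid not in self._rebuilds_in_progress):
                    self._kick_off_rebuild(context, instance)

        def _kick_off_rebuild(self, context, instance):
            # Placeholder: the real thing would feed into the existing
            # rebuild path on the compute and track completion/failure.
            self._rebuilds_in_progress.add(instance.uuid)

The key point is that by the time this runs, instance.host already points at the target, so ownership is never ambiguous.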
Matt

On Fri, Oct 2, 2015 at 2:05 PM, Roman Dobosz <roman.dobosz@intel.com> wrote:

> Hi all,
>
> The case of automatic evacuation (or resurrection, currently) is a topic
> which surfaces once in a while, but it isn't yet fully supported by
> OpenStack and/or by the cluster services. There have been some attempts
> to bring the feature into OpenStack, however it turned out it could not
> be easily integrated. On the other hand, evacuation may be initiated
> from the outside using the Nova client or Nova API calls.
>
> I did some research into the ways it could be designed, using Russell
> Bryant's blog post[1] as a starting point. Beyond that, I've also taken
> high availability and reliability into consideration when designing the
> solution.
>
> Together with a coworker, I did a first PoC[2] to enable the cluster to
> perform evacuation. The idea behind that PoC was simple: provide an
> additional, small service which triggers and supervises the evacuation
> process, initiated from the outside (in this example we were using the
> Pacemaker fencing facility, but it might be anything) using RabbitMQ
> directly. Those services run on the control plane in active-active (AA)
> fashion.
>
> That worked well for us, so we started exploring other possibilities,
> like using oslo.messaging in the same manner as we did in the PoC. It
> turns out the implementation will not be as easy, because there is no
> facility in oslo.messaging for the client to send an ACK after the job
> is done (rather than as soon as it receives the message). We also looked
> at the existing OpenStack projects for a candidate which provides a
> service for managing long-running tasks.
>
> There is the Mistral project, which gives us almost all the features we
> need. The one missing feature is HA of Mistral task execution.
>
> The question is: how could such a problem (long-running tasks) be
> resolved in OpenStack?
>
> [1] http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/
> [2] https://github.com/dawiddeja/evacuationd
<span class="HOEnZb"><font color="#888888"><br>
--<br>
Cheers,<br>
Roman Dobosz<br>
<br>