<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Oct 13, 2014 at 1:32 PM, Adam Lawson <span dir="ltr"><<a href="mailto:alawson@aqorn.com" target="_blank">alawson@aqorn.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Looks like this was proposed for Nova but rejected for some reason last year. Thoughts on why, and is the reasoning (whatever it was) still applicable?</div></blockquote><div><br></div><div>Link?</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="gmail_extra"><span class=""><br clear="all"><div><div dir="ltr"><div><font><div style="font-family:arial;font-size:small"><b><i><br>Adam Lawson</i></b></div><div><font><font color="#666666" size="1"><div style="font-family:arial"><br></div><div style="font-family:arial;font-size:small">AQORN, Inc.</div><div style="font-family:arial;font-size:small">427 North Tatnall Street</div><div style="font-family:arial;font-size:small">Ste. 58461</div><div style="font-family:arial;font-size:small">Wilmington, Delaware 19801-2230</div><div style="font-family:arial;font-size:small">Toll-free: (844) 4-AQORN-NOW ext. 101</div><div style="font-family:arial;font-size:small">International: <a href="tel:%2B1%20302-387-4660" value="+13023874660" target="_blank">+1 302-387-4660</a></div></font><font color="#666666" size="1"><div style="font-family:arial;font-size:small">Direct: <a href="tel:%2B1%20916-246-2072" value="+19162462072" target="_blank">+1 916-246-2072</a></div></font></font></div></font></div><div style="font-family:arial;font-size:small"><img src="http://www.aqorn.com/images/logo.png" width="96" height="39"><br></div></div></div>
<br></span><div><div class="h5"><div class="gmail_quote">On Mon, Oct 13, 2014 at 1:26 PM, Adam Lawson <span dir="ltr"><<a href="mailto:alawson@aqorn.com" target="_blank">alawson@aqorn.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>[switching to openstack-dev]</div><div><br></div><div>Has anyone automated <font face="courier new, monospace">nova evacuate</font> so that VMs on a failed compute host using shared storage are automatically moved onto a new host, or is manually entering <i><font face="courier new, monospace">nova evacuate &lt;instance&gt; &lt;host&gt;</font></i> required in all cases?</div><div><br></div><div>If it's manual only or requires custom Heat/Ceilometer templates, how hard would it be to enable automatic evacuation within Nova?</div><div><br></div><div>i.e. (within /etc/nova/nova.conf)</div><div><font face="courier new, monospace">auto_evac = true</font></div><div><br></div><div>Or is this possible now and I've simply not run across it?</div><div><br></div><div class="gmail_extra">
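No auto_evac flag like the one proposed above exists in nova.conf; the usual pattern at the time was an external watchdog (Pacemaker, or even a cron script) that notices a down nova-compute service and issues the evacuations itself. Below is a minimal, hedged sketch of just the decision logic; the host and instance names are invented, and the novaclient calls are only referenced in comments:

```python
# Sketch of an external "auto evacuate" planner. The planning step is kept
# pure so it can be exercised without a live cloud; in practice the inputs
# would come from python-novaclient.

def plan_evacuations(services, instances_by_host):
    """Return (instance_id, failed_host) pairs for every instance sitting
    on a compute host whose service state is reported as 'down'.

    services: list of (hostname, state) tuples for nova-compute services.
    instances_by_host: dict mapping hostname -> list of instance ids.
    """
    down_hosts = [host for host, state in services if state == "down"]
    plan = []
    for host in down_hosts:
        for instance_id in instances_by_host.get(host, []):
            plan.append((instance_id, host))
    return plan


if __name__ == "__main__":
    # Invented inventory; a real watcher would build this from
    # nova.services.list(binary="nova-compute") and, per down host,
    # nova.servers.list(search_opts={"host": host, "all_tenants": 1}).
    services = [("compute1", "up"), ("compute2", "down")]
    instances = {"compute1": ["vm-a"], "compute2": ["vm-b", "vm-c"]}
    for instance_id, host in plan_evacuations(services, instances):
        # With shared storage, each pair would become roughly
        # server.evacuate(on_shared_storage=True), letting the
        # scheduler pick the target host.
        print("would evacuate %s off failed host %s" % (instance_id, host))
```

One caveat worth noting: the watcher must fence the failed host first, because evacuating a host that is merely network-partitioned (rather than dead) can leave the same instance running in two places.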
<br><div class="gmail_quote">On Sat, Sep 27, 2014 at 12:32 AM, Clint Byrum <span dir="ltr"><<a href="mailto:clint@fewbar.com" target="_blank">clint@fewbar.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">So, what you're looking for is basically the same old IT, but with an<br>
API. I get that. For me, the point of this cloud thing is so that server<br>
operators can make _reasonable_ guarantees, and application operators<br>
can make use of them in an automated fashion.<br>
<br>
If you start guaranteeing four and five nines for single VMs, you're right<br>
back in the boat of spending a lot on server infrastructure even if your<br>
users could live without it sometimes.<br>
<br>
Compute hosts are going to go down. Networks are going to partition. It<br>
is not actually expensive to deal with that at the application layer. In<br>
fact when you know your business rules, you'll do a better job at doing<br>
this efficiently than some blanket "replicate all the things" layer might.<br>
<br>
I know, some clouds are just new ways to chop up these fancy 40 core<br>
megaservers that everyone is shipping. I'm sure OpenStack can do it, but<br>
I'm saying, I don't think OpenStack _should_ do it.<br>
<br>
Excerpts from Adam Lawson's message of 2014-09-26 20:30:29 -0700:<br>
<div><div>> Generally speaking that's true when you have full control over how you<br>
> deploy applications as a consumer. As a provider, however, cloud resiliency<br>
> is king, and it's generally frowned upon to associate instances directly with<br>
> the underlying physical hardware for any reason. It's good when instances<br>
> can come and go as needed, but in a production context, a failed compute<br>
> host shouldn't take down every instance hosted on it. Otherwise there is no<br>
> real abstraction going on and the cloud loses immense value.<br>
> On Sep 26, 2014 4:15 PM, "Clint Byrum" <<a href="mailto:clint@fewbar.com" target="_blank">clint@fewbar.com</a>> wrote:<br>
><br>
> > Excerpts from Adam Lawson's message of 2014-09-26 14:43:40 -0700:<br>
> > > Hello fellow stackers.<br>
> > ><br>
> > > I'm looking for discussions/plans re VM continuity.<br>
> > ><br>
> > > I.e. Protection for instances using ephemeral storage against host<br>
> > failures<br>
> > > or auto-failover capability for instances on hosts where the host suffers<br>
> > > from an attitude problem?<br>
> > ><br>
> > > I know failovers are supported and I'm quite certain auto-failovers are<br>
> > > possible in the event of a host failure (hosting instances not using<br>
> > shared<br>
> > > storage). I just can't find where this has been addressed/discussed.<br>
> > ><br>
> > > Someone help a brother out? ; )<br>
> ><br>
> > I'm sure some of that is possible, but it's a cloud, so why not do things<br>
> > the cloud way?<br>
> ><br>
> > Spin up redundant bits in disparate availability zones. Replicate only<br>
> > what must be replicated. Use volumes for DR only when replication would<br>
> > be too expensive.<br>
> ><br>
> > Instances are cattle, not pets. Keep them alive just long enough to make<br>
> > your profit.<br>
> ><br>
> > _______________________________________________<br>
> > Mailing list:<br>
> > <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
> > Post to : <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a><br>
> > Unsubscribe :<br>
> > <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
> ><br>
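The "spin up redundant bits in disparate availability zones" advice quoted above can be sketched mechanically. A hedged illustration (zone and application names invented) of planning redundant copies across AZs, where each assignment would then map to a nova boot --availability-zone call:

```python
# Sketch of the "cloud way": instead of pinning uptime on one host,
# round-robin redundant copies of an app across availability zones.

def spread_across_zones(app, copies, zones):
    """Return (instance_name, zone) boot assignments, cycling through
    the given zones so copies land in disparate failure domains."""
    return [("%s-%d" % (app, i), zones[i % len(zones)])
            for i in range(copies)]


if __name__ == "__main__":
    # Invented names; each pair would become roughly:
    #   nova boot --availability-zone <zone> ... <name>
    for name, zone in spread_across_zones("web", 4, ["az1", "az2"]):
        print("boot %s in %s" % (name, zone))
```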
</div></div></blockquote></div><br></div></div>
</blockquote></div><br></div></div></div>
<br>_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br></blockquote></div><br></div></div>