I'm not saying it can't be rationalized; I'm saying it is frustrating to me.

My understanding is that Essex is going to be baked into both Ubuntu & Debian for the long term - 5 years plus. That's a long time to have to keep explaining why X is broken; I'd rather just fix X.
On Thu, Mar 29, 2012 at 10:22 AM, David Kranz <david.kranz@qrclab.com> wrote:
> On 3/29/2012 12:46 PM, Justin Santa Barbara wrote:
>>> Is there a good way to map back where in the code these calls are coming from?
>>
>> There's not a great way currently. I'm trying to get a patch in for Essex which will let deployments easily turn on SQL debugging (though this is proving contentious); it will have a configurable log level to allow for future improvements, and one of the things I'd like to add later is something like a stack trace on 'problematic' SQL (large row count, long query time). But that'll be in Folsom, or in G if we don't get logging into Essex.
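>>
>> To make that concrete, here's a rough sketch of the kind of hook I have in mind - not the actual patch; the thresholds, logger name and engine are placeholders - using SQLAlchemy's cursor-execute events to flag 'problematic' SQL and attach a stack trace showing where the call came from:
>>
>>     import logging
>>     import time
>>     import traceback
>>
>>     from sqlalchemy import create_engine, event
>>
>>     LOG = logging.getLogger("sql.debug")
>>     SLOW_QUERY_SECONDS = 0.5   # illustrative thresholds, not from the patch
>>     LARGE_ROW_COUNT = 1000
>>
>>     engine = create_engine("sqlite://")  # stand-in for the nova DB engine
>>
>>     @event.listens_for(engine, "before_cursor_execute")
>>     def _start_timer(conn, cursor, statement, parameters, context, executemany):
>>         # Remember when this statement started so we can time it below.
>>         context._query_start_time = time.time()
>>
>>     @event.listens_for(engine, "after_cursor_execute")
>>     def _check_query(conn, cursor, statement, parameters, context, executemany):
>>         elapsed = time.time() - context._query_start_time
>>         if elapsed > SLOW_QUERY_SECONDS or cursor.rowcount > LARGE_ROW_COUNT:
>>             # Log the statement plus a stack trace pointing at the caller.
>>             LOG.warning("problematic SQL (%.3fs, %s rows): %s\n%s",
>>                         elapsed, cursor.rowcount, statement,
>>                         "".join(traceback.format_stack()))
>>
>> The real version would be driven by configuration (log level, thresholds) rather than hard-coded values.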
>>
>> In the meantime, it's probably not too hard to follow the code and infer where the calls are coming from. In the full log, there's a bit more context, and I've probably snipped some of that out; in this case the relevant code is get_metadata in the compute API service and get_instance_nw_info in the network service.
>>
>>> Regardless, large table scans should be eliminated, especially if the table is mostly read, as the hit on an extra index on insert will be completely offset by the speedups on select.
>>
>> Agreed - some of these problems are very clear-cut!
>>
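>> For instance (illustrative schema, not nova's actual tables), an extra index turns a full table scan into an index lookup, and the database's own query plan makes the difference visible:
>>
>>     import sqlite3
>>
>>     conn = sqlite3.connect(":memory:")
>>     conn.execute("CREATE TABLE instances "
>>                  "(id INTEGER PRIMARY KEY, project_id TEXT, state TEXT)")
>>
>>     query = "SELECT * FROM instances WHERE project_id = 'demo'"
>>
>>     # Without an index, SQLite reports a full scan of the table.
>>     print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
>>
>>     conn.execute("CREATE INDEX ix_instances_project_id "
>>                  "ON instances (project_id)")
>>
>>     # With the index, the same query is answered via ix_instances_project_id.
>>     print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
>>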
>> It does frustrate me that we've done so much programming work, but then not done the simple stuff at the end to make things work well. It feels a bit like we're shipping C code which we've compiled with -O0 instead of -O3.
>
> Well, in a project with the fixed-date (short-cycle, train-model) release style that OpenStack has, I think we have to accept that there will never be time to do anything except fight critical bugs "at the end" - at least not until the project code is much more mature. In projects I have managed, we always allocated time at the *beginning* of a release cycle for fixing backlogged bugs and doing performance work; there is less pressure then, and the code is not yet churning. It is also important to have performance benchmark tests to make sure new features do not introduce performance regressions.
>
> -David
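
On the benchmark-test point, a minimal sketch of what such a regression guard could look like - the function under test, the loop count and the baseline are all invented for illustration:

    import time
    import unittest

    BASELINE_SECONDS = 0.05   # recorded from a known-good build

    def list_instances_for_project(project_id):
        # Placeholder for the real code path under test.
        return []

    class TestListInstancesPerformance(unittest.TestCase):
        def test_no_regression(self):
            start = time.time()
            for _ in range(100):
                list_instances_for_project("demo")
            elapsed = time.time() - start
            # Fail if a new feature makes this path dramatically slower.
            self.assertLess(elapsed, BASELINE_SECONDS * 2)

    if __name__ == "__main__":
        unittest.main()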