<div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Is there a good way to map back where in the code these calls are coming from?</blockquote><div>
<br></div><div>There's not a great way currently. I'm trying to get a patch into Essex which will let deployments easily turn on SQL debugging (though this is proving contentious); it will have a configurable log level to allow for future improvements, and one of the things I'd like to add later is something like a stack trace on 'problematic' SQL (large row count, long query time). But that'll be in Folsom, or in G if we don't get logging into Essex.</div>
<div><br></div><div>In the meantime, it's probably not too hard to follow the code and infer where the calls are coming from. The full log has a bit more context (though I've probably snipped some of it out); in this case the relevant code is get_metadata in the compute API service and get_instance_nw_info in the network service.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"> Regardless, large table scans should be eliminated, especially if the table is mostly read, as the hit on an extra index on insert will be completely offset by the speedups on select.</blockquote>
<div><br></div><div>Agreed - some of these problems are very clear-cut!</div><div><br></div><div>It does frustrate me that we've done so much programming work, but then not done the simple stuff at the end to make things work well. It feels a bit like we're shipping C code compiled with -O0 instead of -O3.</div>
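<div>To make the index tradeoff concrete, here's a small illustration using SQLite's EXPLAIN QUERY PLAN (the table and column names are invented, not Nova's real schema): an equality predicate on an unindexed column forces a full table scan, while adding an index turns it into an index search, at the cost of maintaining the index on each insert.</div>

```python
# Illustrative only: invented table/column names, stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE instances (id INTEGER PRIMARY KEY, uuid TEXT, host TEXT)")

QUERY = "EXPLAIN QUERY PLAN SELECT * FROM instances WHERE uuid = 'x'"

# No index on uuid yet: the planner can only scan the whole table.
plan_scan = conn.execute(QUERY).fetchall()

# The extra index costs a little on every INSERT, but turns the
# SELECT into an index search instead of a full scan.
conn.execute("CREATE INDEX ix_instances_uuid ON instances (uuid)")
plan_indexed = conn.execute(QUERY).fetchall()
```

<div>On a mostly-read table like this, the per-insert cost is easily repaid by every select that no longer scans.</div>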
</div>