<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Aug 1, 2018 at 11:15 AM, Ben Nemec <span dir="ltr"><<a href="mailto:openstack@nemebean.com" target="_blank">openstack@nemebean.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
<br>
I'm having an issue with no valid host errors when starting instances and I'm struggling to figure out why. I thought the problem was disk space, but I changed the disk_allocation_ratio and I'm still getting no valid host. The host does have plenty of disk space free, so that shouldn't be a problem.<br>
<br>
However, I'm not even sure it's disk that's causing the failures because I can't find any information in the logs about why the no valid host is happening. All I get from the scheduler is:<br>
<br>
"Got no allocation candidates from the Placement API. This may be a temporary occurrence as compute nodes start up and begin reporting inventory to the Placement service."<br>
<br>
While in placement I see:<br>
<br>
2018-08-01 15:02:22.062 20 DEBUG nova.api.openstack.placement.requestlog [req-0a830ce9-e2af-413a-86cb-b47ae129b676 fc44fe5cefef43f4b921b9123c95e694 b07e6dc2e6284b00ac7070aa3457c15e - default default] Starting request: 10.2.2.201 "GET /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" __call__ /usr/lib/python2.7/site-packages/nova/api/openstack/placement/requestlog.py:38<br>
2018-08-01 15:02:22.103 20 INFO nova.api.openstack.placement.requestlog [req-0a830ce9-e2af-413a-86cb-b47ae129b676 fc44fe5cefef43f4b921b9123c95e694 b07e6dc2e6284b00ac7070aa3457c15e - default default] 10.2.2.201 "GET /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A1" status: 200 len: 53 microversion: 1.25<br>
<br>
Basically it just seems to be logging that it got a request, but there's no information about what it did with that request.<br>
<br>
So where do I go from here? Is there somewhere else I can look to see why placement returned no candidates?<br><br></blockquote><div><br></div><div>Hi again, Ben, hope you are enjoying your well-earned time off! :)</div><div><br></div><div>I've created a patch that (hopefully) will address some of the difficulty that folks have had in diagnosing which parts of a request caused all providers to be filtered out from the return of GET /allocation_candidates:</div><div><br></div><div><a href="https://review.openstack.org/#/c/590041">https://review.openstack.org/#/c/590041</a><br></div><div><br></div><div>This patch changes two primary things:</div><div><br></div><div>1) Query-splitting</div><div><br></div><div>The patch splits the existing monster SQL query, which looked up all providers matching all requested resources, required traits, forbidden traits and required aggregate associations in a single shot, into multiple queries, one for each requested resource. While this does increase the number of database queries executed for each call to GET /allocation_candidates, the change allows better visibility into which parts of the request exhaust the set of matching providers. We've benchmarked the new patch and shown that the performance impact of doing 3 queries versus 1 (when there is a request for 3 resources -- VCPU, RAM and disk) is minimal: a few extra milliseconds of execution against a DB with 1K providers having inventory of all three resource classes.</div><div><br></div><div>2) Diagnostic logging output</div><div><br></div><div>The patch adds debug log output within each loop iteration, so there is now logging output showing how many matching providers were found for each resource class involved in the request. The output looks like this in the logs:</div><div><pre><span class="gmail-DEBUG gmail-_Aug_09_16_54_29_772149">[req-2d30faa8-4190-4490-a91e-610045530140] inside VCPU request loop. 
before applying trait and aggregate filters, found 12 matching providers
</span><span class="gmail-DEBUG gmail-_Aug_09_16_54_29_772341">[req-2d30faa8-4190-4490-a91e-610045530140] found 12 providers with capacity for the requested 1 VCPU.
</span><span class="gmail-DEBUG gmail-_Aug_09_16_54_29_779418">[req-2d30faa8-4190-4490-a91e-610045530140] inside MEMORY_MB request loop. before applying trait and aggregate filters, found 9 matching providers
</span><span class="gmail-DEBUG gmail-_Aug_09_16_54_29_779690">[req-2d30faa8-4190-4490-a91e-610045530140] found 9 providers with capacity for the requested 64 MEMORY_MB. before loop iteration we had 12 matches.
</span><span class="gmail-DEBUG gmail-_Aug_09_16_54_29_804202">[req-2d30faa8-4190-4490-a91e-610045530140] RequestGroup(use_same_provider=False, resources={MEMORY_MB:64, VCPU:1}, traits=[], aggregates=[]) (suffix '') returned 9 matches</span></pre>If a request includes required traits, forbidden traits or required aggregate associations, there are additional log messages showing how many matching providers were found after applying the trait or aggregate filtering set operation (in other words, the log output shows the impact of the trait filter or aggregate filter in much the same way that the existing FilterScheduler logging shows the "before and after" impact that a particular filter had on the request).</div><div><br></div><div>Have a look at the patch in question and please feel free to add your feedback and comments on ways this can be improved to meet your needs.</div><div><br></div><div>Best,</div><div>-jay<br></div></div><br></div></div>
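<div><br></div><div>P.S. For anyone who wants a feel for the per-resource-class loop before reading the patch, here is a rough, purely illustrative Python sketch of the idea: run one capacity lookup per resource class, log how many providers survive each step, and intersect the results. The inventory dict, function names and provider IDs below are made-up stand-ins; in the real patch each lookup is a separate SQL query against the placement database, not an in-memory dict.</div><div><pre>
```python
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("placement-sketch")

# Made-up inventory: resource class -> {provider id: free capacity}.
INVENTORY = {
    "VCPU": {1: 8, 2: 4, 3: 2},
    "MEMORY_MB": {1: 4096, 2: 1024, 3: 8192},
    "DISK_GB": {1: 100, 3: 50},
}


def providers_with_capacity(rc, amount):
    """Stand-in for one split-out query: providers that can fit `amount` of `rc`."""
    return {pid for pid, free in INVENTORY.get(rc, {}).items() if free >= amount}


def get_allocation_candidates(requested):
    """Intersect matching providers one resource class at a time, logging
    how many survive each step so an empty result is easy to diagnose."""
    matching = None
    for rc, amount in requested.items():
        LOG.debug("inside %s request loop", rc)
        found = providers_with_capacity(rc, amount)
        LOG.debug("found %d providers with capacity for the requested %d %s",
                  len(found), amount, rc)
        matching = found if matching is None else matching & found
        if not matching:
            LOG.debug("%s request exhausted all matching providers", rc)
            return set()
    return matching


print(get_allocation_candidates({"VCPU": 1, "MEMORY_MB": 2048, "DISK_GB": 20}))
```
</pre></div><div>When a request comes back empty, the last "exhausted" line names the resource class that killed it, which is exactly the diagnostic the monster single query could not give you.</div>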