[nova][placement] OpenStack only building one VM per machine in cluster, then runs out of resources
Sylvain Bauza
sylvain.bauza at gmail.com
Wed Jun 30 09:49:25 UTC 2021
On Wed, Jun 30, 2021 at 11:31, Balazs Gibizer <balazs.gibizer at est.tech>
wrote:
>
>
> On Tue, Jun 29, 2021 at 20:42, Jeffrey Mazzone <jmazzone at uchicago.edu>
> wrote:
> > Hello,
> >
> >
>
> [snip]
>
> > Trying to start another vm on that host fails with the following log
> > entries:
> >
> > scheduler.log
> >
> > "status": 409, "title": "Conflict", "detail": "There was a conflict
> > when trying to complete your request.\n\n Unable to allocate
> > inventory: Unable to create allocation for 'VCPU' on resource provider
> >
> > conductor.log
> >
> > Failed to schedule instances:
> > nova.exception_Remote.NoValidHost_Remote: No valid host was found.
> > There are not enough hosts available.
> >
> > placement.log
> >
> > Over capacity for VCPU on resource provider
> > 3f9d0deb-936c-474a-bdee-d3df049f073d. Needed: 4, Used: 8206,
> > Capacity: 1024.0
>
> At this point, if you list the resource provider usage on
> 3f9d0deb-936c-474a-bdee-d3df049f073d again, do you still see 4 VCPUs
> used, or 8206? With the "openstack resource provider show
> 3f9d0deb-936c-474a-bdee-d3df049f073d --allocations" command you can
> print the UUIDs of the consumers that are actually consuming your VCPUs
> in placement, which should help you identify where the 8206 allocation
> is coming from.
>
>
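To expand on gibi's suggestion: the "Capacity: 1024.0" in your placement
log is computed as (total - reserved) * allocation_ratio, so it likely
corresponds to 64 VCPUs at the default cpu_allocation_ratio of 16.0.
The commands below (the provider UUID is the one from your logs; the
last one is per consumer UUID reported by the first) list the
allocations and per-consumer usage on that provider. Each consumer UUID
should map to an instance or a migration; anything mapping to neither
is a candidate orphan:

openstack resource provider show 3f9d0deb-936c-474a-bdee-d3df049f073d --allocations
openstack resource provider usage show 3f9d0deb-936c-474a-bdee-d3df049f073d
openstack server show <consumer_uuid>
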
Given you also have an Ussuri deployment, you could run the nova-manage
placement audit command to see whether you have orphaned allocations:

nova-manage placement audit [--verbose] [--delete] [--resource_provider <uuid>]

See details in
https://docs.openstack.org/nova/ussuri/cli/nova-manage.html#nova-api-database
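
For example (the provider UUID is the one from your logs; run without
--delete first so you can review what the audit reports before removing
anything):

nova-manage placement audit --verbose --resource_provider 3f9d0deb-936c-474a-bdee-d3df049f073d
nova-manage placement audit --delete --resource_provider 3f9d0deb-936c-474a-bdee-d3df049f073d
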
-Sylvain
> Cheers,
> gibi
>
> >
> > As you can see, the used value is suddenly 8206 after a single 4-core
> > VM is placed on it. I don't understand what I'm missing or could be
> > doing wrong. I'm really unsure where this value is being calculated
> > from. All the entries in the database and via openstack commands show
> > the correct values except in this log entry. Has anyone experienced
> > the same or similar behavior? I would appreciate any insight as to
> > what the issue could be.
> >
> > Thanks in advance!
> >
> > -Jeff M
> >
> >
> >
> >
>
>
>
>