[Openstack] [nova] Log files on exceeding cpu allocation limit

Cody codeology.lab at gmail.com
Wed Aug 8 18:00:00 UTC 2018


Got it! Thank you, Jay!  - Cody
On Wed, Aug 8, 2018 at 11:36 AM Jay Pipes <jaypipes at gmail.com> wrote:
>
> So, that is normal operation, actually. The conductor calls the
> scheduler to find a place for your requested instances. The scheduler
> responded to the conductor that, sorry, there were no hosts that were
> able to match the request (I don't know what the details of that request
> were).
>
> And so the conductor set the status of the instance(s) in your request
> to an ERROR state, since they could not be launched.
>
> Best,
> -jay
>
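[Editor's note: the control flow Jay describes -- conductor asks the
scheduler for a destination, scheduler raises NoValidHost when no host
fits, conductor marks the instance ERROR -- can be sketched in a few
lines of Python. This is a deliberately simplified model, not nova's
actual code; the dict-based hosts and the `free_vcpus` field are
illustrative assumptions.]

```python
class NoValidHost(Exception):
    """Simplified stand-in for nova.exception.NoValidHost."""


def select_destinations(request, hosts):
    # The scheduler filters hosts; if none can satisfy the request
    # (e.g. the CPU allocation limit is exhausted everywhere), it
    # raises rather than returning a partial answer.
    matches = [h for h in hosts if h["free_vcpus"] >= request["vcpus"]]
    if not matches:
        raise NoValidHost("No valid host was found.")
    return matches[0]


def schedule_and_build(instance, request, hosts):
    # The conductor drives the build; on NoValidHost it sets the
    # instance to ERROR, which is exactly what the conductor log
    # in this thread records.
    try:
        host = select_destinations(request, hosts)
        instance["status"] = "ACTIVE"
        instance["host"] = host["name"]
    except NoValidHost:
        instance["status"] = "ERROR"
    return instance
```

This also explains why the traceback lives in nova-conductor.log: the
scheduler's exception is returned over RPC to the conductor, which is
the service that acts on it.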
> On 08/08/2018 09:58 AM, Cody wrote:
> > Hi Jay,
> >
> > Thank you for getting back. I attached the log in my previous reply,
> > but I guess Gmail hid it from you as a quoted message. Here it is
> > again:
> >
> >  From nova-conductor.log
> > ### BEGIN ###
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90
> > 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 -
> > default default] Failed to schedule instances: NoValidHost_Remote: No
> > valid host was found.
> > Traceback (most recent call last):
> >
> >    File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
> > line 226, in inner
> >      return func(*args, **kwargs)
> >
> >    File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py",
> > line 139, in select_destinations
> >      raise exception.NoValidHost(reason="")
> >
> > NoValidHost: No valid host was found.
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager Traceback
> > (most recent call last):
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line
> > 1116, in schedule_and_build_instances
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > instance_uuids, return_alternates=True)
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line
> > 716, in _schedule_instances
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > return_alternates=return_alternates)
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 726,
> > in wrapped
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager     return
> > func(*args, **kwargs)
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py",
> > line 53, in select_destinations
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > instance_uuids, return_objects, return_alternates)
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py",
> > line 37, in __run_method
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager     return
> > getattr(self.instance, __name)(*args, **kwargs)
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py",
> > line 42, in select_destinations
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > instance_uuids, return_objects, return_alternates)
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 158,
> > in select_destinations
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager     return
> > cctxt.call(ctxt, 'select_destinations', **msg_args)
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line
> > 174, in call
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager     retry=self.retry)
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line
> > 131, in _send
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > timeout=timeout, retry=retry)
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py",
> > line 559, in send
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager     retry=retry)
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py",
> > line 550, in _send
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager     raise result
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > NoValidHost_Remote: No valid host was found.
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager Traceback
> > (most recent call last):
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line
> > 226, in inner
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager     return
> > func(*args, **kwargs)
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager   File
> > "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line
> > 139, in select_destinations
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager     raise
> > exception.NoValidHost(reason="")
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager NoValidHost:
> > No valid host was found.
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
> > 2018-08-08 09:28:36.328 1648 WARNING nova.scheduler.utils
> > [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90
> > 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 -
> > default default] Failed to compute_task_build_instances: No valid host
> > was found.
> > Traceback (most recent call last):
> >
> >    File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
> > line 226, in inner
> >      return func(*args, **kwargs)
> >
> >    File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py",
> > line 139, in select_destinations
> >      raise exception.NoValidHost(reason="")
> >
> > NoValidHost: No valid host was found.
> > : NoValidHost_Remote: No valid host was found.
> > 2018-08-08 09:28:36.331 1648 WARNING nova.scheduler.utils
> > [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90
> > 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 -
> > default default] [instance: b466a974-06ba-459b-bc04-2ccb2b3ee720]
> > Setting instance to ERROR state.: NoValidHost_Remote: No valid host
> > was found.
> > ### END ###
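[Editor's note: every line in the trace above carries the same request
ID, which is the practical handle for following one failed boot across
services. A sketch, assuming the packaged-install log paths implied by
the trace (they may differ on other deployments):]

```shell
# Pull every log line for the failed request across all nova services.
# The request ID comes from the conductor trace above; adjust the log
# path for your distribution if needed.
REQ='req-ef0d8ea1-e801-483e-b913-9148a6ac5d90'
grep -h "$REQ" /var/log/nova/nova-*.log | sort
```

Since the subject line points at the CPU allocation limit, the likely
knob is `cpu_allocation_ratio` in nova.conf (effectively 16.0 unless
overridden in releases of this era); comparing vcpus against vcpus_used
per hypervisor shows how close each host is to that ceiling.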
> > On Wed, Aug 8, 2018 at 9:45 AM Jay Pipes <jaypipes at gmail.com> wrote:
> >>
> >> On 08/08/2018 09:37 AM, Cody wrote:
> >>>> On 08/08/2018 07:19 AM, Bernd Bausch wrote:
> >>>>> I would think you don't even reach the scheduling stage. Why bother
> >>>>> looking for a suitable compute node if you exceeded your quota anyway?
> >>>>>
> >>>>> The message is in the conductor log because it's the conductor that does
> >>>>> most of the work. The others are just slackers (like nova-api) or wait
> >>>>> for instructions from the conductor.
> >>>>>
> >>>>> The above is my guess, of course, but IMHO a very educated one.
> >>>>>
> >>>>> Bernd.
> >>>
> >>> Thank you, Bernd. I didn't know the inner workflow in this case.
> >>> Initially, I thought it was the scheduler's job to discover that no
> >>> more resources were available, so I expected to see something in the
> >>> scheduler log. My understanding now is that the quota gets checked
> >>> against the database before deployment. That would explain why the
> >>> clue was in nova-conductor.log rather than nova-scheduler.log.
> >>
> >> Quota is checked in the nova-api node, not the nova-conductor.
> >>
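[Editor's note: Jay's point can be illustrated with hypothetical
numbers -- the API-side quota check is simple arithmetic against the
database, done before the request ever reaches the conductor or
scheduler. The limit/usage figures below are invented for the sketch:]

```shell
# Hypothetical quota check, as nova-api performs it on boot requests.
# A failure here is rejected synchronously (HTTP 403), so nothing
# about it appears in the conductor or scheduler logs.
LIMIT=20; USED=18; REQUESTED=4
if [ $((USED + REQUESTED)) -gt "$LIMIT" ]; then
    echo "403 Quota exceeded for cores"
fi
# To see the real numbers on a deployment:
#   openstack limits show --absolute | grep -i cores
```

A NoValidHost failure, by contrast, passes this check and only surfaces
later as the instance flipping to ERROR.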
> >> As I said in my previous message, unless you paste the logs you are
> >> referring to, it's not possible to know what you mean.
> >>
> >> Best,
> >> -jay


