[Openstack] nova-scheduler not using all nova-compute nodes
Balamurugan V G
balamuruganvg at gmail.com
Wed Oct 23 16:21:42 UTC 2013
Hi Nick,
Check your scheduler configuration in /etc/nova/nova.conf on the
controller node. Some of the scheduler settings I use are below:
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,RetryFilter
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_max_attempts=3
cpu_allocation_ratio=1.5
ram_allocation_ratio=1.0
The allocation ratios and the set of filters may vary depending on your needs.
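
If it helps, here is a rough sketch in plain Python (my own
illustration, not the actual nova filter code) of how RamFilter and
CoreFilter apply those two ratios when deciding whether a host can
take an instance:

    # Illustrative sketch of the RamFilter / CoreFilter math, not nova's code.

    def ram_filter_passes(total_ram_mb, free_ram_mb, requested_ram_mb,
                          ram_allocation_ratio=1.0):
        # A host passes only if the request fits under
        # total RAM * ram_allocation_ratio, counting RAM already in use.
        limit_mb = total_ram_mb * ram_allocation_ratio
        used_mb = total_ram_mb - free_ram_mb
        return limit_mb - used_mb >= requested_ram_mb

    def core_filter_passes(total_vcpus, used_vcpus, requested_vcpus,
                           cpu_allocation_ratio=1.5):
        # vCPUs may be oversubscribed up to total * cpu_allocation_ratio.
        return total_vcpus * cpu_allocation_ratio - used_vcpus >= requested_vcpus

    # An 8GB host already running 6GB of guests cannot take a 4GB flavor
    # with ram_allocation_ratio=1.0 ...
    print(ram_filter_passes(8192, 2048, 4096, ram_allocation_ratio=1.0))  # False
    # ... but it could if the ratio were raised to 1.5 (limit 12288 MB).
    print(ram_filter_passes(8192, 2048, 4096, ram_allocation_ratio=1.5))  # True

With ram_allocation_ratio=1.0 your 8GB nodes fill up after only a few
guests, so a large parallel burst will mostly land on the 64GB boxes.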
Regards,
Balu
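
PS: the "Exceeded max scheduling attempts 3" warning in your log is
the scheduler_max_attempts setting above working together with the
RetryFilter. When a compute node fails during the build (in your
traceback, nova17 dies in _allocate_network while talking to quantum),
the request is rescheduled, and the RetryFilter excludes hosts that
have already failed for that request; after 3 attempts the instance is
set to ERROR. Roughly (again my sketch, not nova's code):

    def retry_filter_passes(host, filter_properties):
        # Exclude hosts that already failed for this scheduling request.
        retry = filter_properties.get('retry')
        if not retry:
            return True  # first attempt, or retries disabled
        return host not in retry.get('hosts', [])

So "No valid host was found" here means the retry budget ran out, not
that you are out of capacity. I would check whether quantum-server
keeps up when you fire 50 builds at once.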
On Wed, Oct 23, 2013 at 9:08 PM, Nick Maslov <azpekt at gmail.com> wrote:
>
> Hi guys,
>
> I'm using Grizzly on Ubuntu 13.04.
>
> For compute hosts I have a few 64GB RAM servers and about a dozen 8GB RAM
> ones.
>
> "nova-manage service list", "quantum agent-list", "nova hypervisor-stats"
> show that all nodes and agents for all nodes are there and happy :-)
>
> I'm spawning VMs in huge batches - 50 or even more simultaneously. But
> most of the VMs end up in ERROR state.
>
> In the scheduler logs I can see this:
>
> Oct 23 15:03:52 ctrl01-001 2013-10-23 15:03:52.376 ERROR nova.scheduler.filter_scheduler [req-61327e43-5f34-43ca-84ef-dfd8aefddec4 b30e83bf390e434993e8f484f74d4a17 81448b94090c4da68bae2eca678e7bd2] [instance: bc423509-37a4-4b4f-a480-cddd80d33b84] Error from last host: nova17 (node nova17):
> [u'Traceback (most recent call last):\n',
> u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 847, in _run_instance\n requested_networks, macs, security_groups)\n',
> u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1091, in _allocate_network\n instance=instance)\n',
> u' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n',
> u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1087, in _allocate_network\n security_groups=security_groups)\n',
> u' File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 47, in wrapper\n res = f(self, context, *args, **kwargs)\n',
> u' File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 292, in allocate_for_instance\n nw_info = self._get_instance_nw_info(context, instance, networks=nets)\n',
> u' File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 374, in _get_instance_nw_info\n nw_info = self._build_network_info_model(context, instance, networks)\n',
> u' File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 832, in _build_network_info_model\n subnets = self._get_subnets_from_port(context, port)\n',
> u' File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 902, in _get_subnets_from_port\n data = quantumv2.get_client(context).list_ports(**search_opts)\n',
> u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 107, in with_params\n ret = self.function(instance, *args, **kwargs)\n',
> u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 255, in list_ports\n **_params)\n',
> u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 99
> Oct 23 15:03:52 ctrl01-001 2013-10-23 15:03:52.378 WARNING nova.scheduler.manager [req-61327e43-5f34-43ca-84ef-dfd8aefddec4 b30e83bf390e434993e8f484f74d4a17 81448b94090c4da68bae2eca678e7bd2] Failed to schedule_run_instance: No valid host was found. Exceeded max scheduling attempts 3 for instance bc423509-37a4-4b4f-a480-cddd80d33b84
> Oct 23 15:03:52 ctrl01-001 2013-10-23 15:03:52.378 WARNING nova.scheduler.manager [req-61327e43-5f34-43ca-84ef-dfd8aefddec4 b30e83bf390e434993e8f484f74d4a17 81448b94090c4da68bae2eca678e7bd2] [instance: bc423509-37a4-4b4f-a480-cddd80d33b84] Setting instance to ERROR state.
>
>
> Anything I should be looking at in particular?
>
> I now have 474 virtual servers in my environment, and spawning more of
> them creates more errors...
>
> Thanks,
> NM