[Openstack] [Nova][virt-driver-numa-placement] How to enable an instance with NUMA?
Li, Chen
chen.li at intel.com
Mon Feb 2 07:08:16 UTC 2015
Hi all,
I'm trying to enable NUMA for my instance based on https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement.
I'm working on Ubuntu 14.10 with libvirt 1.2.8:
virsh -v
1.2.8
My compute node has only 1 NUMA node:
dmesg |grep numa
[ 0.000000] mempolicy: Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl
numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
node 0 size: 15983 MB
node 0 free: 8085 MB
node distances:
node 0
0: 10
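For reference, this is the check I plan to run to see the topology that libvirt itself reports to Nova (not part of the setup above, and I believe the per-cell <pages> elements only show up on reasonably recent libvirt such as 1.2.8):
virsh capabilities | sed -n '/<topology>/,/<\/topology>/p'
The <pages> entries in that section should show which page sizes (e.g. 4 KiB and 2048 KiB) the host exposes for guest memory.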
I installed a fresh devstack and updated the flavor m1.medium with:
nova flavor-key m1.medium set hw:numa_nodes=1
nova flavor-key m1.medium set hw:mem_page_size=large
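To double-check the flavor, and whether the host actually has huge pages reserved (my understanding is that hw:mem_page_size=large wants 2 MB or 1 GB huge pages pre-allocated on the host; these are just my own verification commands, not part of the original setup):
nova flavor-show m1.medium | grep extra_specs
grep -i huge /proc/meminfo
If HugePages_Total comes back as 0, I assume there is simply no large-page memory for the instance to claim.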
Then I try to start an instance with the command:
nova boot --flavor m1.medium --image cirros-0.3.2-x86_64-uec --nic net-id=9b2afc82-b9d0-49ce-be21-732f3af506eb test1
The instance failed to start with the following fault:
| fault | {"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": " File \"/opt/stack/nova/nova/conductor/manager.py\", line 651, in build_instances |
| | request_spec, filter_properties) |
| | File \"/opt/stack/nova/nova/scheduler/utils.py\", line 333, in wrapped |
| | return func(*args, **kwargs) |
| | File \"/opt/stack/nova/nova/scheduler/client/__init__.py\", line 52, in select_destinations |
| | context, request_spec, filter_properties) |
| | File \"/opt/stack/nova/nova/scheduler/client/__init__.py\", line 37, in __run_method |
| | return getattr(self.instance, __name)(*args, **kwargs) |
| | File \"/opt/stack/nova/nova/scheduler/client/query.py\", line 34, in select_destinations |
| | context, request_spec, filter_properties) |
| | File \"/opt/stack/nova/nova/scheduler/rpcapi.py\", line 114, in select_destinations |
| | request_spec=request_spec, filter_properties=filter_properties) |
| | File \"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py\", line 156, in call |
| | retry=self.retry) |
| | File \"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py\", line 90, in _send |
| | timeout=timeout, retry=retry) |
| | File \"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py\", line 417, in send |
| | retry=retry) |
| | File \"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py\", line 408, in _send |
| | raise result |
| | ", "created": "2015-02-02T06:23:41Z"} |
The corresponding entries in compute.log:
2015-02-02 14:59:47.736 AUDIT nova.compute.manager [req-7e1b0ad2-0349-4afc-961b-322b3334f67a admin demo] [instance: 1a555c06-b1fd-405d-bc1a-d20a8ac1b7fb] Starting instance...
2015-02-02 14:59:47.738 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is c323d0b819414dc38fe0188baef2cc98 from (pid=22569) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:378
2015-02-02 14:59:47.738 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID is 2ce2555b27e6438db8c8d9043ab2a2f7. from (pid=22569) _add_unique_id /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:224
2015-02-02 14:59:47.900 DEBUG oslo_concurrency.lockutils [-] Lock "compute_resources" acquired by "instance_claim" :: waited 0.000s from (pid=22569) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:430
2015-02-02 14:59:47.900 DEBUG nova.compute.resource_tracker [-] Memory overhead for 4096 MB instance; 0 MB from (pid=22569) instance_claim /opt/stack/nova/nova/compute/resource_tracker.py:130
2015-02-02 14:59:47.902 AUDIT nova.compute.claims [-] [instance: 1a555c06-b1fd-405d-bc1a-d20a8ac1b7fb] Attempting claim: memory 4096 MB, disk 40 GB
2015-02-02 14:59:47.902 AUDIT nova.compute.claims [-] [instance: 1a555c06-b1fd-405d-bc1a-d20a8ac1b7fb] Total memory: 15983 MB, used: 512.00 MB
2015-02-02 14:59:47.902 AUDIT nova.compute.claims [-] [instance: 1a555c06-b1fd-405d-bc1a-d20a8ac1b7fb] memory limit: 23974.50 MB, free: 23462.50 MB
2015-02-02 14:59:47.902 AUDIT nova.compute.claims [-] [instance: 1a555c06-b1fd-405d-bc1a-d20a8ac1b7fb] Total disk: 915 GB, used: 0.00 GB
2015-02-02 14:59:47.902 AUDIT nova.compute.claims [-] [instance: 1a555c06-b1fd-405d-bc1a-d20a8ac1b7fb] disk limit not specified, defaulting to unlimited
2015-02-02 14:59:47.903 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is 8010409e00644a43a4672f705e83a6cd from (pid=22569) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:378
2015-02-02 14:59:47.903 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID is 2c8e3b9dc627498d878fc7a8db7e92eb. from (pid=22569) _add_unique_id /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:224
2015-02-02 14:59:47.913 DEBUG nova.compute.resources.vcpu [-] Total CPUs: 24 VCPUs, used: 0.00 VCPUs from (pid=22569) test /opt/stack/nova/nova/compute/resources/vcpu.py:51
2015-02-02 14:59:47.913 DEBUG nova.compute.resources.vcpu [-] CPUs limit not specified, defaulting to unlimited from (pid=22569) test /opt/stack/nova/nova/compute/resources/vcpu.py:55
2015-02-02 14:59:47.913 DEBUG oslo_concurrency.lockutils [-] Lock "compute_resources" released by "instance_claim" :: held 0.013s from (pid=22569) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:442
2015-02-02 14:59:47.913 DEBUG nova.compute.manager [-] [instance: 1a555c06-b1fd-405d-bc1a-d20a8ac1b7fb] Insufficient compute resources: Requested instance NUMA topology cannot fit the given host NUMA topology. from (pid=22569) _build_and_run_instance /opt/stack/nova/nova/compute/manager.py:2183
2015-02-02 14:59:47.914 DEBUG nova.compute.utils [-] [instance: 1a555c06-b1fd-405d-bc1a-d20a8ac1b7fb] Insufficient compute resources: Requested instance NUMA topology cannot fit the given host NUMA topology. from (pid=22569) notify_about_instance_usage /opt/stack/nova/nova/compute/utils.py:324
2015-02-02 14:59:47.914 DEBUG nova.compute.manager [-] [instance: 1a555c06-b1fd-405d-bc1a-d20a8ac1b7fb] Build of instance 1a555c06-b1fd-405d-bc1a-d20a8ac1b7fb was re-scheduled: Insufficient compute resources: Requested instance NUMA topology cannot fit the given host NUMA topology. from (pid=22569) _do_build_and_run_instance /opt/stack/nova/nova/compute/manager.py:2080
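In case it helps with diagnosis, I believe the NUMA topology the resource tracker recorded for this host can be read back from the compute_nodes table (a sketch assuming the default devstack MySQL database named nova; credentials will differ per setup):
mysql -u root -p nova -e "select hypervisor_hostname, numa_topology from compute_nodes;"
The numa_topology JSON should show the cells and memory Nova thinks this host can offer.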
Does anyone know why this happened?
Also, if the compute node can't support the instance's NUMA topology, shouldn't the request have failed in the scheduler?
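(My guess, which I have not verified, is that scheduler-side rejection depends on the NUMATopologyFilter being enabled. A sketch of the nova.conf setting I mean, with the filter list abbreviated since the exact defaults vary by release:
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,NUMATopologyFilter
If that filter is not in the list, the request would only fail later at the compute claim, as in the log above.)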
Thanks.
-chen