[openstack-dev] [nova][neutron] numa aware vswitch

Stephen Finucane sfinucan at redhat.com
Fri Aug 24 13:58:48 UTC 2018


On Fri, 2018-08-24 at 07:55 +0000, Guo, Ruijing wrote:
> Hi, All,
>  
> I am verifying the NUMA-aware vSwitch feature (https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/numa-aware-vswitches.html), but the result is not what I expect.
>  
> What am I missing?
>  
>  
> Nova configuration:
>  
> [filter_scheduler]
> track_instance_changes = False
> enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter,NUMATopologyFilter
>  
> [neutron]
> physnets = physnet0,physnet1
>  
> [neutron_physnet_physnet0]
> numa_nodes = 0
>  
> [neutron_physnet_physnet1]
> numa_nodes = 1
>  
>  
> ml2 configuration:
>  
> [ml2_type_vlan]
> network_vlan_ranges = physnet0,physnet1
> [ovs]
> vhostuser_socket_dir = /var/lib/libvirt/qemu
> bridge_mappings = physnet0:br-physnet0,physnet1:br-physnet1
>  
>  
> command list:
>  
> openstack network create net0 --external --provider-network-type=vlan --provider-physical-network=physnet0 --provider-segment=100
> openstack network create net1 --external --provider-network-type=vlan --provider-physical-network=physnet1 --provider-segment=200
> openstack subnet create --network=net0 --subnet-range=192.168.1.0/24 --allocation-pool start=192.168.1.200,end=192.168.1.250 --gateway 192.168.1.1 subnet0
> openstack subnet create --network=net1 --subnet-range=192.168.2.0/24 --allocation-pool start=192.168.2.200,end=192.168.2.250 --gateway 192.168.2.1 subnet1
> openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic net-id=net0 vm0
> openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic net-id=net1 vm1
>  
> vm0 and vm1 are created, but NUMA is not enabled:
>   <vcpu placement='static'>1</vcpu>
>   <cputune>
>     <shares>1024</shares>
>   </cputune>
 
Using this on its own won't add a NUMA topology - it'll just control how
any topology that is present will be mapped to the guest. You need to
enable dedicated CPUs or explicitly request a NUMA topology for this to
work.

openstack flavor set --property hw:numa_nodes=1 1

<or>

openstack flavor set --property hw:cpu_policy=dedicated 1
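
Once one of those is set (and the instances recreated or resized so the
flavor change takes effect), the guest XML should gain an explicit NUMA
cell - roughly along these lines; the exact vCPU and memory values below
are illustrative and depend on the flavor:

  <cpu>
    <numa>
      <!-- illustrative values; cpus/memory come from the flavor -->
      <cell id='0' cpus='0' memory='524288' unit='KiB'/>
    </numa>
  </cpu>

The instances should then be placed on the host NUMA node associated
with their physnet - node 0 for vm0 (physnet0) and node 1 for vm1
(physnet1), per the [neutron_physnet_*] options above.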


Requiring an explicit NUMA topology request is perhaps something that we
could change in the future, though I haven't given it much thought yet.

Regards,
Stephen



