[openstack-dev] [Openstack] Need help in validating CPU Pinning feature

Srinivasa Rao Ragolu sragolu at mvista.com
Mon Dec 29 07:44:32 UTC 2014


Hi Steve,

Thank you so much for your reply and the detailed steps to move forward.

I am using a devstack setup with Nova master. As I could not find a
CPUPinningFilter implementation in the source, I have used
NUMATopologyFilter.

But the same problem exists: I cannot see any <vcpupin> element in the guest
XML. Please see the relevant section of the XML below.

<metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1"/>
      <nova:name>test_pinning</nova:name>
      <nova:creationTime>2014-12-29 07:30:04</nova:creationTime>
      <nova:flavor name="pinned.medium">
        <nova:memory>2048</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>2</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="d72f55401b924e36ac88efd223717c75">admin</nova:user>
        <nova:project uuid="4904cdf59c254546981f577351b818de">admin</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="fe017c19-6b4e-4625-93b1-2618dc5ce323"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static' cpuset='0-3'>2</vcpu>
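
In case it helps, these are the checks I am using to confirm the
configuration on the devstack node (python-novaclient CLI; the aggregate
name is the one from your example, output trimmed):

    $ nova flavor-show pinned.medium | grep extra_specs
    $ nova aggregate-details cpu_pinning
    $ grep scheduler_default_filters /etc/nova/nova.conf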

Kindly suggest which branch of Nova I need to use to validate the pinning
feature. Also, please let me know whether CPUPinningFilter is required to
validate it.

Thanks a lot,
Srinivas.


On Sat, Dec 27, 2014 at 4:37 AM, Steve Gordon <sgordon at redhat.com> wrote:

> ----- Original Message -----
> > From: "Srinivasa Rao Ragolu" <sragolu at mvista.com>
> > To: "joejiang" <ifzing at 126.com>
> >
> > Hi Joejiang,
> >
> > Thanks for the quick reply. The above XML is generated fine if I set
> > "vcpu_pin_set=1-12" in /etc/nova/nova.conf.
> >
> > But how do I pin each vCPU to a specific pCPU, something like below?
> >
> > <cputune>
> >    <vcpupin vcpu='0' cpuset='1-5,12-17'/>
> >    <vcpupin vcpu='1' cpuset='2-3,12-17'/>
> > </cputune>
> >
> >
> > One more question: are NUMA nodes compulsory for pinning each vCPU to a
> > pCPU?
>
> The specification for the CPU pinning functionality recently implemented
> in Nova is here:
>
>
> http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/virt-driver-cpu-pinning.html
>
> Note that exact vCPU to pCPU pinning is not exposed to the user as this
> would require them to have direct knowledge of the host pCPU layout.
> Instead they request that the instance receive "dedicated" CPU resourcing
> and Nova handles allocation of pCPUs and pinning of vCPUs to them.
>
> Example usage:
>
> * Create a host aggregate and set metadata on it to indicate it is to be
> used for pinning. 'pinned' is used as the key for this example, but any key
> can be used; the same key must be used in the later steps though::
>
>     $ nova aggregate-create cpu_pinning
>     $ nova aggregate-set-metadata 1 pinned=true
>
>   NB: For aggregates/flavors that won't be dedicated, set pinned=false.
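>
>   For example, assuming a second aggregate (ID 2 here, purely for
>   illustration) holds the non-dedicated hosts::
>
>     $ nova aggregate-set-metadata 2 pinned=false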
>
> * Set all existing flavors to avoid this aggregate::
>
>     $ for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep -o "[0-9]*"`; do
>           nova flavor-key ${FLAVOR} set "aggregate_instance_extra_specs:pinned"="false"
>       done
>
> * Create a flavor that has the extra spec "hw:cpu_policy" set to
> "dedicated". In this example it is created with an ID of 6, 2048 MB of RAM,
> a 20 GB disk, and 2 vCPUs::
>
>     $ nova flavor-create pinned.medium 6 2048 20 2
>     $ nova flavor-key 6 set "hw:cpu_policy"="dedicated"
>
> * Set the flavor to require the aggregate set aside for dedicated pinning
> of guests::
>
>     $ nova flavor-key 6 set "aggregate_instance_extra_specs:pinned"="true"
>
> * Add a compute host to the created aggregate (see nova host-list to get
> the host name(s))::
>
>     $ nova aggregate-add-host 1 my_packstack_host_name
>
> * Add the AggregateInstanceExtraSpecsFilter and CPUPinningFilter filters
> to the scheduler_default_filters in /etc/nova/nova.conf::
>
>     scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AggregateInstanceExtraSpecsFilter,CPUPinningFilter
>
>   NB: On the Kilo code base I believe the filter is NUMATopologyFilter;
>   see the example below.
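>
>   For instance, on Kilo/master the line above would simply substitute
>   NUMATopologyFilter for CPUPinningFilter (a sketch of the resulting value)::
>
>     scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AggregateInstanceExtraSpecsFilter,NUMATopologyFilter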
>
> * Restart the scheduler::
>
>     # systemctl restart openstack-nova-scheduler
>
> * After the above, as a normal (non-admin) user, try to boot an instance
> with the newly created flavor::
>
>     $ nova boot --image fedora --flavor 6 test_pinning
>
> * Confirm the instance has successfully booted and that each of its vCPUs
> is pinned to _a single_ host CPU by observing
>   the <cputune> element of the generated domain XML::
>
>     # virsh list
>      Id    Name                           State
>     ----------------------------------------------------
>      2     instance-00000001              running
>     # virsh dumpxml instance-00000001
>     ...
>     <vcpu placement='static' cpuset='0-3'>2</vcpu>
>     <cputune>
>       <vcpupin vcpu='0' cpuset='0'/>
>       <vcpupin vcpu='1' cpuset='1'/>
>     </cputune>
>
>
> -Steve
>
>