[Openstack] Scheduler Filters Ignored

Georgios Dimitrakakis giorgis at acmac.uoc.gr
Thu Nov 27 20:09:48 UTC 2014


 The 8 cores are with HT enabled!

 My CPU is an Intel(R) Xeon(R) CPU E3-1230 V2, which has 4 physical
 cores, which means 8 threads with HT.

 And there is just one CPU installed.

 Is it enough if I just do the changes on the compute node and not at 
 the controller?

 Do you think it has anything to do with the fact that it's the admin
 user, or that it's in the admin tenant?
 Maybe it's ignored because I specify the availability zone directly 
 instead of letting it choose....
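
(On the compute-node-vs-controller question: in this era of Nova, CoreFilter and RamFilter run inside the nova-scheduler service, so they read cpu_allocation_ratio and ram_allocation_ratio from the scheduler's own nova.conf, normally on the controller, not from the compute node's copy. A sketch of what would need to be present on the controller, assuming a stock packaged install:)

```ini
# /etc/nova/nova.conf on the node running nova-scheduler (usually the
# controller); the scheduler filters read these values locally:
cpu_allocation_ratio=1.0
ram_allocation_ratio=1.0
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
```

followed by a restart of the nova-scheduler service there. (This is a sketch under the assumption above; paths and service names vary by distribution.)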


 Any suggestions are most welcome!


 G.
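
(A quick sanity check of the arithmetic in the reply quoted below: this host reports 8 vCPUs, 4 cores with HT, so with cpu_allocation_ratio=1.0 the CoreFilter should refuse any request that pushes the total past 8. A minimal sketch of that check, simplified from what nova.scheduler.filters.core_filter does; the function name here is illustrative, not Nova's actual code:)

```python
def core_filter_passes(vcpus_total, vcpus_used, requested,
                       cpu_allocation_ratio=1.0):
    """Simplified sketch of the CoreFilter check: a host passes only
    while the request still fits under the ratio-adjusted vCPU limit."""
    limit = vcpus_total * cpu_allocation_ratio
    return vcpus_used + requested <= limit

# An E3-1230 V2 (4 cores, HT enabled) is reported as 8 vCPUs.
# With cpu_allocation_ratio=1.0:
print(core_filter_passes(8, 0, 8))   # first n1.large  -> True (exactly fills the node)
print(core_filter_passes(8, 8, 1))   # m1.tiny         -> False (should be rejected)
print(core_filter_passes(8, 9, 8))   # second n1.large -> False (should be rejected)
```

So even the m1.tiny boot should have been refused once the first n1.large consumed all 8 vCPUs, which would suggest the filter never ran at all rather than miscounted.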




> HT enabled will make it look like your hypervisor has 16 CPUs, but 
> you
> were still able to allocate at least 17 vCPUs (2 large + at least one
> tiny), so this is a bit weird.
>
> On Thu, Nov 27, 2014 at 2:29 PM, Georgios Dimitrakakis  wrote:
>
>> Hi all!
>>
>> I have a node with 8Cores (HT enabled) and 32GB of RAM.
>>
>> I am trying to limit the VMs that will run on it using scheduler
>> filters.
>>
>> I have set the following at the nova.conf file:
>>
>> cpu_allocation_ratio=1.0
>>
>> ram_allocation_ratio=1.0
>>
>> reserved_host_memory_mb=1024
>>
>> scheduler_available_filters=nova.scheduler.filters.all_filters
>>
>>
>> scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
>>
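
(One way to see whether these filters are actually consulted is to turn on debug logging for nova-scheduler and watch the filter chain while an instance is being scheduled; a sketch, assuming debug=True is set and logs live in the default location on the controller, and noting that the exact wording of the log lines varies by release:)

```shell
# on the controller, with debug=True in the scheduler's nova.conf:
tail -f /var/log/nova/scheduler.log | grep -i filter
# each filter should report how many hosts it returned, e.g. a line
# like "Filter CoreFilter returned 0 hosts" when the node is full
```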
>> I then boot a CirrOS VM with a flavor that has 8 vCPUs:
>>
>> # nova flavor-list
>>
>> +----+----------------+-----------+------+-----------+------+-------+-------------+-----------+
>> | ID | Name           | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
>> +----+----------------+-----------+------+-----------+------+-------+-------------+-----------+
>> | 1  | m1.tiny        | 512       | 1    | 0         |      | 1     | 1.0         | True      |
>> | 12 | n1.large       | 8192      | 80   | 0         |      | 8     | 1.0         | True      |
>> +----+----------------+-----------+------+-----------+------+-------+-------------+-----------+
>>
>> # nova boot --flavor n1.large --image cirros-0.3.3 --security-group
>> default --key-name aaa-key --availability-zone nova:node02 cirrOS-K2
>>
>> and it builds successfully
>>
>> # nova list
>>
>> +--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
>> | ID                                   | Name        | Status | Task State | Power State | Networks                        |
>> +--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
>> | a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2                  |
>> +--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
>>
>> Next I try to boot a second one with the m1.tiny flavor, and although
>> I was expecting it to produce an error and fail to build, this one is
>> also built successfully!!!
>>
>> # nova boot --flavor m1.tiny --image cirros-0.3.3 --security-group
>> default --key-name aaa-key --availability-zone nova:node02
>> cirrOS-K2-2
>>
>> # nova list
>>
>> +--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
>> | ID                                   | Name        | Status | Task State | Power State | Networks                        |
>> +--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
>> | a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2                  |
>> | 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | cirrOS-K2-2 | ACTIVE | -          | Running     | vmnet=10.0.0.3                  |
>> +--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
>>
>> I can even boot a third LARGE one
>>
>> # nova boot --flavor n1.large --image cirros-0.3.3 --security-group
>> default --key-name aaa-key --availability-zone nova:node02
>> cirrOS-K2-3
>>
>> # nova list
>>
>> +--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
>> | ID                                   | Name        | Status | Task State | Power State | Networks                        |
>> +--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
>> | a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2                  |
>> | 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | cirrOS-K2-2 | ACTIVE | -          | Running     | vmnet=10.0.0.3                  |
>> | 6210f7c7-f16a-4343-a181-88ede5ee0132 | cirrOS-K2-3 | ACTIVE | -          | Running     | vmnet=10.0.0.4                  |
>> +--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
>>
>> All these are running on NODE02, the hypervisor on which I have set
>> the aforementioned cpu allocation ratio etc.
>>
>> # nova hypervisor-servers node02
>>
>> +--------------------------------------+-------------------+---------------+----------------------+
>> | ID                                   | Name              | Hypervisor ID | Hypervisor Hostname  |
>> +--------------------------------------+-------------------+---------------+----------------------+
>> | a0beb084-73c0-428a-bb59-0604588450be | instance-00000041 | 2             | node02               |
>> | 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | instance-00000042 | 2             | node02               |
>> | 6210f7c7-f16a-4343-a181-88ede5ee0132 | instance-00000043 | 2             | node02               |
>> +--------------------------------------+-------------------+---------------+----------------------+
>>
>> Any ideas why it is ignored???
>>
>> I have restarted all services at the hypervisor node!
>>
>> Should I restart any service at the controller node as well or are
>> they picked up automatically?
>>
>> Does it have anything to do with the fact that I am specifically
>> requesting that node through the availability zone parameter?
>>
>> Best regards,
>>
>> George
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>




