<div dir="ltr"><div><div><div>Hi George,<br><br></div>The following parameters should be configured on the nova controller node, where the nova-scheduler service is running. After that, please restart the nova-scheduler service:<br><br><br>
cpu_allocation_ratio=1.0<br>
<br>
ram_allocation_ratio=1.0<br>
<br>
reserved_host_memory_mb=1024<br>
<br>
scheduler_available_filters=nova.scheduler.filters.all_filters<br>
<br>
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter<br></div>Thanks,<br><br></div>Yong<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Nov 27, 2014 at 2:29 PM, Georgios Dimitrakakis <span dir="ltr"><<a href="mailto:giorgis@acmac.uoc.gr" target="_blank">giorgis@acmac.uoc.gr</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi all!<br>
<br>
I have a node with 8Cores (HT enabled) and 32GB of RAM.<br>
<br>
I am trying to limit the VMs that will run on it using scheduler filters.<br>
<br>
I have set the following at the nova.conf file:<br>
<br>
cpu_allocation_ratio=1.0<br>
<br>
ram_allocation_ratio=1.0<br>
<br>
reserved_host_memory_mb=1024<br>
<br>
scheduler_available_filters=nova.scheduler.filters.all_filters<br>
<br>
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter<br>
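As a sanity check, here is a rough Python sketch of the arithmetic I expect CoreFilter and RamFilter to apply under these settings. This is illustrative only, not actual nova code, and it assumes libvirt reports 8 cores x 2 HT threads = 16 vCPUs on the host:<br>

```python
# Rough sketch of the CoreFilter / RamFilter checks (illustrative only,
# not actual nova code). Assumes the host reports 16 vCPUs (8 cores, HT).

def core_filter_passes(host_vcpus, vcpus_used, requested, ratio=1.0):
    # CoreFilter: accept the host if used + requested vCPUs stay
    # within host_vcpus * cpu_allocation_ratio
    return vcpus_used + requested <= host_vcpus * ratio

def ram_filter_passes(total_mb, used_mb, requested_mb,
                      ratio=1.0, reserved_mb=1024):
    # RamFilter: accept if RAM demand stays within
    # total * ram_allocation_ratio minus reserved_host_memory_mb
    return used_mb + requested_mb <= total_mb * ratio - reserved_mb

# An empty 16-vCPU host should take one 8-vCPU n1.large...
print(core_filter_passes(16, 0, 8))   # True
# ...but not another n1.large once 8 + 1 vCPUs are already in use.
print(core_filter_passes(16, 9, 8))   # False
```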
<br>
<br>
I then boot a CirrOS VM with a flavor that has 8 vCPUs:<br>
<br>
# nova flavor-list<br>
+----+----------------+-----------+------+-----------+------+-------+-------------+-----------+<br>
| ID | Name           | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |<br>
+----+----------------+-----------+------+-----------+------+-------+-------------+-----------+<br>
| 1  | m1.tiny        | 512       | 1    | 0         |      | 1     | 1.0         | True      |<br>
| 12 | n1.large       | 8192      | 80   | 0         |      | 8     | 1.0         | True      |<br>
+----+----------------+-----------+------+-----------+------+-------+-------------+-----------+<br>
<br>
<br>
# nova boot --flavor n1.large --image cirros-0.3.3 --security-group default --key-name aaa-key --availability-zone nova:node02 cirrOS-K2<br>
<br>
and it builds successfully.<br>
<br>
<br>
# nova list<br>
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+<br>
| ID                                   | Name        | Status | Task State | Power State | Networks                        |<br>
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+<br>
| a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2                  |<br>
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+<br>
<br>
<br>
<br>
Next I try to boot a second instance with the m1.tiny flavor. Although I was expecting the scheduler to raise an error and refuse to build it, this one also builds successfully!!!<br>
<br>
# nova boot --flavor m1.tiny --image cirros-0.3.3 --security-group default --key-name aaa-key --availability-zone nova:node02 cirrOS-K2-2<br>
<br>
<br>
# nova list<br>
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+<br>
| ID                                   | Name        | Status | Task State | Power State | Networks                        |<br>
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+<br>
| a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2                  |<br>
| 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | cirrOS-K2-2 | ACTIVE | -          | Running     | vmnet=10.0.0.3                  |<br>
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+<br>
<br>
<br>
<br>
<br>
I can even boot a third LARGE one:<br>
<br>
<br>
<br>
# nova boot --flavor n1.large --image cirros-0.3.3 --security-group default --key-name aaa-key --availability-zone nova:node02 cirrOS-K2-3<br>
<br>
<br>
# nova list<br>
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+<br>
| ID                                   | Name        | Status | Task State | Power State | Networks                        |<br>
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+<br>
| a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2                  |<br>
| 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | cirrOS-K2-2 | ACTIVE | -          | Running     | vmnet=10.0.0.3                  |<br>
| 6210f7c7-f16a-4343-a181-88ede5ee0132 | cirrOS-K2-3 | ACTIVE | -          | Running     | vmnet=10.0.0.4                  |<br>
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+<br>
<br>
<br>
<br>
<br>
<br>
All of these are running on NODE02, the hypervisor on which I have set the aforementioned cpu_allocation_ratio etc.<br>
<br>
<br>
<br>
# nova hypervisor-servers node02<br>
+--------------------------------------+-------------------+---------------+----------------------+<br>
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname  |<br>
+--------------------------------------+-------------------+---------------+----------------------+<br>
| a0beb084-73c0-428a-bb59-0604588450be | instance-00000041 | 2             | node02               |<br>
| 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | instance-00000042 | 2             | node02               |<br>
| 6210f7c7-f16a-4343-a181-88ede5ee0132 | instance-00000043 | 2             | node02               |<br>
+--------------------------------------+-------------------+---------------+----------------------+<br>
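Back-of-the-envelope, assuming the host reports 16 vCPUs with HT enabled: the three instances together request 17 vCPUs, so with cpu_allocation_ratio=1.0 I expected the CoreFilter to reject the third boot (RAM alone would not have blocked it):<br>

```python
# Cumulative demand from the three instances above vs. the configured
# limits (assumes 16 vCPUs reported with HT and 32768 MB of RAM).
vcpus  = [8, 1, 8]              # n1.large, m1.tiny, n1.large
ram_mb = [8192, 512, 8192]

cpu_limit = 16 * 1.0            # cpu_allocation_ratio = 1.0
ram_limit = 32768 * 1.0 - 1024  # ram_allocation_ratio, reserved_host_memory_mb

print(sum(vcpus), cpu_limit)    # 17 > 16.0: the third boot should have failed
print(sum(ram_mb), ram_limit)   # 16896 <= 31744.0: RAM would not block it
```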
<br>
<br>
<br>
<br>
Any ideas why it is being ignored???<br>
<br>
<br>
I have restarted all services at the hypervisor node!<br>
<br>
<br>
Should I restart any service at the controller node as well or are they picked up automatically?<br>
<br>
<br>
<br>
Does it have anything to do with the fact that I am specifically requesting that node through the availability-zone parameter?<br>
<br>
<br>
Best regards,<br>
<br>
<br>
George<br>
<br>
<br>
<br>
_______________________________________________<br>
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
Post to     : <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a><br>
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
</blockquote></div><br></div>