[Openstack-operators] Instance Scheduling on Hosts
Ahmad Ahmadi
ahmadidamha at yahoo.com
Wed Jul 3 22:58:57 UTC 2013
I'm using Folsom and getting a scheduler error.
My commands in order:
# nova aggregate-create smlAgg nova
+----+--------+-------------------+-------+----------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+--------+-------------------+-------+----------+
| 10 | smlAgg | nova | | |
+----+--------+-------------------+-------+----------+
# nova aggregate-set-metadata 10 mem=true
Aggregate 10 has been successfully updated.
+----+--------+-------------------+-------+-------------------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+--------+-------------------+-------+-------------------+
| 10 | smlAgg | nova | [] | {u'mem': u'true'} |
+----+--------+-------------------+-------+-------------------+
# nova aggregate-add-host 10 n005
Aggregate 10 has been successfully updated.
+----+--------+-------------------+-----------+-------------------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+--------+-------------------+-----------+-------------------+
| 10 | smlAgg | nova | [u'n005'] | {u'mem': u'true'} |
+----+--------+-------------------+-----------+-------------------+
# nova aggregate-add-host 10 n006
Aggregate 10 has been successfully updated.
+----+--------+-------------------+--------------------+-------------------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+--------+-------------------+--------------------+-------------------+
| 10 | smlAgg | nova | [u'n005', u'n006'] | {u'mem': u'true'} |
+----+--------+-------------------+--------------------+-------------------+
# nova aggregate-add-host 10 n007
Aggregate 10 has been successfully updated.
+----+--------+-------------------+-----------------------------+-------------------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+--------+-------------------+-----------------------------+-------------------+
| 10 | smlAgg | nova | [u'n005', u'n006', u'n007'] | {u'mem': u'true'} |
+----+--------+-------------------+-----------------------------+-------------------+
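At this point the aggregate can be double-checked (assuming the aggregate-details subcommand exists in this novaclient version):
# nova aggregate-details 10
which should show the three hosts and the mem=true metadata.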
# nova-manage instance_type set_key --name=m1.tiny --key=mem --value=true
Key mem set to true on instance type m1.tiny
# nova flavor-show m1.tiny
+----------------------------+-------------------+
| Property | Value |
+----------------------------+-------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 0 |
| extra_specs | {u'mem': u'true'} |
| id | 1 |
| name | m1.tiny |
| os-flavor-access:is_public | True |
| ram | 512 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+-------------------+
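As an aside, if the installed novaclient supports the flavor-key subcommand, the same extra spec could be set without nova-manage:
# nova flavor-key m1.tiny set mem=true
Either way, the flavor-show output above confirms that extra_specs now contains mem=true.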
# nova boot --image c6714433-1653-463c-906a-9511e3e6b21b --flavor m1.tiny test
+-------------------------------------+---------------------------------------------------------------------------------+
| Property | Value |
+-------------------------------------+---------------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance-00000250 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | error |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | ************ |
| config_drive | |
| created | 2013-07-03T21:23:22Z |
| fault | {u'message': u'NoValidHost', u'code': 500, u'created': u'2013-07-03T21:23:23Z'} |
| flavor | m1.tiny |
| hostId | |
| id | 53e42fa5-8d72-4f80-8586-eb7f20cd14d0 |
| image | CentOS_6 |
| key_name | None |
| metadata | {} |
| name | test |
| security_groups | [{u'name': u'default'}] |
| status | ERROR |
| tenant_id | 4c2340f60b7c48a0a2504eeefe2ff74a |
| updated | 2013-07-03T21:23:23Z |
| user_id | 3800b26a95c44e7abd1c080f7695560a |
+-------------------------------------+---------------------------------------------------------------------------------+
nova.conf:
scheduler_driver =nova.scheduler.multi.MultiScheduler
compute_scheduler_driver =nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters =nova.scheduler.filters.all_filters
scheduler_default_filters =AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
least_cost_functions =nova.scheduler.least_cost.compute_fill_first_cost_fn
compute_fill_first_cost_fn_weight =-1.0
scheduler_host_subset_size =1
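On my weighting question in the quoted message below: my (unverified) understanding of the Folsom least-cost scheduler is that compute_fill_first_cost_fn scores each host by its free RAM and the host with the lowest weighted cost wins, so the sign of the weight decides between spreading and stacking:
# sketch only, not a verified recommendation
least_cost_functions = nova.scheduler.least_cost.compute_fill_first_cost_fn
# negative weight: hosts with more free RAM are preferred (spreads onto big-RAM nodes)
compute_fill_first_cost_fn_weight = -1.0
# positive weight: hosts with less free RAM are preferred (fills hosts up first)
# compute_fill_first_cost_fn_weight = 1.0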
log:
==> /var/log/nova/scheduler.log <==
2013-07-03 15:23:23 WARNING nova.scheduler.driver [req-8616e45a-2826-4d2f-9870-b5b80ea81740 3800b26a95c44e7abd1c080f7695560a 4c2340f60b7c48a0a2504eeefe2ff74a] [instance: 53e42fa5-8d72-4f80-8586-eb7f20cd14d0] Setting instance to ERROR state.
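The WARNING above doesn't say which filter dropped the hosts. To get more detail I plan to turn on debug logging on the scheduler node (assuming the standard verbose/debug options and the default log location), restart nova-scheduler, and retry the boot:
nova.conf:
verbose = True
debug = True
# grep -i filter /var/log/nova/scheduler.log | tail -n 50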
________________________________
From: Sylvain Bauza <sylvain.bauza at digimind.com>
To: Ahmad Ahmadi <ahmadidamha at yahoo.com>
Cc: "openstack-operators at lists.openstack.org" <openstack-operators at lists.openstack.org>
Sent: Wednesday, July 3, 2013 7:16 AM
Subject: Re: [Openstack-operators] Instance Scheduling on Hosts
I ended up successfully creating flavors with aggregate host metadata and the filter you mentioned on my Folsom install.
I followed the same configuration steps [1].
What kind of error are you getting? Is nova-scheduler failing
or nova-compute? Check the logs to get the exact error, please :-)
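A generic way to tell which side is failing (log paths assume a default packaged install): watch both services while re-running the boot. On the controller:
# tail -f /var/log/nova/scheduler.log
and on one of the compute hosts in the aggregate:
# tail -f /var/log/nova/compute.log
A NoValidHost fault is normally raised on the scheduler side, before any compute node is involved.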
-Sylvain
[1] :
http://docs.openstack.org/folsom/openstack-compute/admin/content/host-aggregates.html#d6e10301
On 02/07/2013 22:55, Ahmad Ahmadi wrote:
>I have two groups of hosts as compute nodes:
>
>BigGroup: a few machines with very big RAM
>SmallGroup: a lot of machines with smaller RAM each.
>
>
>My goal is to assign bigger instances (from bigger flavors) to BigGroup and smaller instances to SmallGroup.
>I tried "AggregateInstanceExtraSpecsFilter" filtering by assigning different keys to the flavors and creating host aggregates (http://docs.openstack.org/folsom/openstack-compute/admin/content/host-aggregates.html), but it didn't work and all instance boots end up in an error state.
>Any idea?
>
>
>Moreover, does anyone know how I can assign a higher cost to bigger nodes for weighting?
>
>
>Thanks,