[openstack-dev] [ironic] Scheduling error with RamFilter ... on integrating ironic into our OpenStack Distribution

Waines, Greg Greg.Waines at windriver.com
Mon Oct 30 17:37:17 UTC 2017


Thanks Jay ... I’ll try this out and let you know.

BTW ... I should have mentioned that I am currently on Newton ... and will eventually move to Pike.
               Does that change anything you suggested below?

Greg.



From: Jay Pipes <jaypipes at gmail.com>
Reply-To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Date: Monday, October 30, 2017 at 1:23 PM
To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [ironic] Scheduling error with RamFilter ... on integrating ironic into our OpenStack Distribution

You need to set the node's resource_class attribute to the custom
resource class you will use for that chassis/hardware type.
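On the node side, that might look like the sketch below (the node name is taken from your output; CUSTOM_METALLICA is just an example class name, and the `--resource-class` option assumes a recent enough ironic API — the field may not be settable on older releases such as Newton):

```shell
# Tag the enrolled node with a custom resource class (example name).
# Assumes an ironic API version that exposes the resource_class field --
# note it is empty in the node-show output below.
openstack baremetal node set metallica --resource-class CUSTOM_METALLICA

# Confirm it took effect:
openstack baremetal node show metallica -f value -c resource_class
```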

Then you need to add a specific extra_specs key/value to a flavor to
indicate that that flavor is requesting that specific hardware type:

openstack flavor set $flavorname --property resources:$RESOURCE_CLASS=1

For instance, if you set your node's resource class to
CUSTOM_METALLICA, you would do this to the flavor you are using to grab
one of those Ironic resources:

openstack flavor set $flavorname --property resources:CUSTOM_METALLICA=1

Then nova boot with that flavor and you should be good to go.
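Putting the flavor side together, the whole flow might look like this sketch (flavor, image, and network names here are placeholders; the resources:VCPU/MEMORY_MB/DISK_GB overrides apply to the Placement-based scheduling in Pike, not to Newton):

```shell
# Sketch of the flavor-side setup; bm.metallica, my-deploy-image and
# my-network are placeholder names.
openstack flavor create --ram 20480 --disk 100 --vcpus 20 bm.metallica
openstack flavor set bm.metallica --property resources:CUSTOM_METALLICA=1

# On Pike, also zero out the standard resource amounts so scheduling is
# driven purely by the custom resource class:
openstack flavor set bm.metallica \
  --property resources:VCPU=0 \
  --property resources:MEMORY_MB=0 \
  --property resources:DISK_GB=0

# Then boot against that flavor:
openstack server create --flavor bm.metallica \
  --image my-deploy-image --network my-network metallica-instance
```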

-jay

On 10/30/2017 01:05 PM, Waines, Greg wrote:
Hey,
We are in the process of integrating OpenStack Ironic into our own
OpenStack Distribution.
Still pulling all the pieces together ... we have not had a successful
‘nova boot’ yet, so the issues below could be configuration or setup problems.
We have an ironic node enrolled ... and the corresponding nova hypervisor
has been created for it ... ALTHOUGH it does not seem to be populated
correctly (see below).
The ‘nova boot’ then fails with the error:
"No valid host was found. There are not enough hosts available.
66aaf6fa-3cbe-4744-8d55-c90eeae4800a: (RamFilter) Insufficient total
RAM: req:20480, avail:0 MB"
NOTE: the nova.conf that we are using for the nova-compute serving the
ironic nodes is attached.
Any ideas what could be wrong?
Greg.
[wrsroot at controller-1 ~(keystone_admin)]$ ironic node-show metallica
+------------------------+--------------------------------------------------------------------------+
| Property               | Value                                                                    |
+------------------------+--------------------------------------------------------------------------+
| chassis_uuid           |                                                                          |
| clean_step             | {}                                                                       |
| console_enabled        | False                                                                    |
| created_at             | 2017-10-27T20:37:12.241352+00:00                                         |
| driver                 | pxe_ipmitool                                                             |
| driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'128.224.64.212',        |
|                        | u'ipmi_username': u'root', u'deploy_kernel':                             |
|                        | u'2939e2d4-da3f-4917-b99a-01030fd30345', u'deploy_ramdisk':              |
|                        | u'73ad43c4-4300-45a5-87ec-f28646518430'}                                 |
| driver_internal_info   | {}                                                                       |
| extra                  | {}                                                                       |
| inspection_finished_at | None                                                                     |
| inspection_started_at  | None                                                                     |
| instance_info          | {}                                                                       |
| instance_uuid          | None                                                                     |
| last_error             | None                                                                     |
| maintenance            | False                                                                    |
| maintenance_reason     | None                                                                     |
| name                   | metallica                                                                |
| network_interface      |                                                                          |
| power_state            | power off                                                                |
| properties             | {u'memory_mb': 20480, u'cpu_arch': u'x86_64', u'local_gb': 100,          |
|                        | u'cpus': 20, u'capabilities': u'boot_option:local'}                      |
| provision_state        | manageable                                                               |
| provision_updated_at   | 2017-10-30T15:47:33.397317+00:00                                         |
| raid_config            |                                                                          |
| reservation            | None                                                                     |
| resource_class         |                                                                          |
| target_power_state     | None                                                                     |
| target_provision_state | None                                                                     |
| target_raid_config     |                                                                          |
| updated_at             | 2017-10-30T15:47:51.396471+00:00                                         |
| uuid                   | 66aaf6fa-3cbe-4744-8d55-c90eeae4800a                                     |
+------------------------+--------------------------------------------------------------------------+
[wrsroot at controller-1 ~(keystone_admin)]$ nova hypervisor-show
66aaf6fa-3cbe-4744-8d55-c90eeae4800a
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| cpu_info                | {}                                   |
| current_workload        | 0                                    |
| disk_available_least    | 0                                    |
| free_disk_gb            | 0                                    |
| free_ram_mb             | 0                                    |
| host_ip                 | 127.0.0.1                            |
| hypervisor_hostname     | 66aaf6fa-3cbe-4744-8d55-c90eeae4800a |
| hypervisor_type         | ironic                               |
| hypervisor_version      | 1                                    |
| id                      | 5                                    |
| local_gb                | 0                                    |
| local_gb_used           | 0                                    |
| memory_mb               | 0                                    |
| memory_mb_node          | None                                 |
| memory_mb_used          | 0                                    |
| memory_mb_used_node     | None                                 |
| running_vms             | 0                                    |
| service_disabled_reason | None                                 |
| service_host            | controller-1                         |
| service_id              | 28                                   |
| state                   | up                                   |
| status                  | enabled                              |
| vcpus                   | 0                                    |
| vcpus_node              | None                                 |
| vcpus_used              | 0.0                                  |
| vcpus_used_node         | None                                 |
+-------------------------+--------------------------------------+
[wrsroot at controller-1 ~(keystone_admin)]$
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


