<div dir="ltr">From what I understand of baremetal nodes, they will show up as hypervisors from the Nova perspective.<div><br></div><div>Can you try "openstack hypervisor list"</div><div><br></div><div>From the doc </div><div><br></div><div><div class="gmail-admonition gmail-note" style="box-sizing:border-box;font-size:11.9px;background:rgb(237,242,247);color:rgb(42,78,104);border-width:1px 1px 1px 4px;border-style:solid;border-color:rgb(42,78,104);padding:15px;margin:15px 0px;border-radius:4px;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif"><p style="box-sizing:border-box;margin:0px 0px 10px;font-size:inherit">Each bare metal node becomes a separate hypervisor in Nova. The hypervisor host name always matches the associated node UUID.</p></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Feb 14, 2022 at 10:03 AM Lokendra Rathour <<a href="mailto:lokendrarathour@gmail.com">lokendrarathour@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Julia,<div>Thanks once again. 
We understood your point, but we are still facing the same issue on our TripleO Train HA setup, even with the settings configured as per your recommendations.</div><div><br></div><div>The error that we are seeing is again "<b>No valid host was found</b>"</div><div><b><br></b></div><div>(overcloud) [stack@undercloud v4]$ openstack server show bm-server --fit-width<br>+-------------------------------------+----------------------------------------------------------------------------------------+<br>| Field                               | Value                                                                                  |<br>+-------------------------------------+----------------------------------------------------------------------------------------+<br>| OS-DCF:diskConfig                   | MANUAL                                                                                 |<br>| OS-EXT-AZ:availability_zone         |                                                                                        |<br>| OS-EXT-SRV-ATTR:host                | None                                                                                   |<br>| OS-EXT-SRV-ATTR:hostname            | bm-server                                                                              |<br>| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                                                   |<br>| OS-EXT-SRV-ATTR:instance_name       | instance-00000014                                                                      |<br>| OS-EXT-SRV-ATTR:kernel_id           |                                                                                        |<br>| OS-EXT-SRV-ATTR:launch_index        | 0                                                                                      |<br>| OS-EXT-SRV-ATTR:ramdisk_id          |                                                                                        |<br>| 
OS-EXT-SRV-ATTR:reservation_id      | r-npd6m9ah                                                                             |<br>| OS-EXT-SRV-ATTR:root_device_name    | None                                                                                   |<br>| OS-EXT-SRV-ATTR:user_data           | I2Nsb3VkLWNvbmZpZwpkaXNhYmxlX3Jvb3Q6IGZhbHNlCnBhc3N3b3JkOiBoc2MzMjEKc3NoX3B3YXV0aDogdH |<br>|                                     | J1ZQptYW5hZ2VfZXRjX2hvc3RzOiB0cnVlCmNocGFzc3dkOiB7ZXhwaXJlOiBmYWxzZSB9Cg==             |<br>| OS-EXT-STS:power_state              | NOSTATE                                                                                |<br>| OS-EXT-STS:task_state               | None                                                                                   |<br>| OS-EXT-STS:vm_state                 | error                                                                                  |<br>| OS-SRV-USG:launched_at              | None                                                                                   |<br>| OS-SRV-USG:terminated_at            | None                                                                                   |<br>| accessIPv4                          |                                                                                        |<br>| accessIPv6                          |                                                                                        |<br>| addresses                           |                                                                                        |<br>| config_drive                        | True                                                                                   |<br>| created                             | 2022-02-14T10:20:48Z                                                                   |<br>| description                         | None                                                                                   |<br>| fault               
                | {'code': 500, 'created': '2022-02-14T10:20:49Z', 'message': 'No valid host was found.  |<br>|                                     | There are not enough hosts available.', 'details': 'Traceback (most recent call        |<br>|                                     | last):\n  File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line      |<br>|                                     | 1379, in schedule_and_build_instances\n    instance_uuids, return_alternates=True)\n   |<br>|                                     | File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 839, in        |<br>|                                     | _schedule_instances\n    return_alternates=return_alternates)\n  File                  |<br>|                                     | "/usr/lib/python3.6/site-packages/nova/scheduler/client/query.py", line 42, in         |<br>|                                     | select_destinations\n    instance_uuids, return_objects, return_alternates)\n  File    |<br>|                                     | "/usr/lib/python3.6/site-packages/nova/scheduler/rpcapi.py", line 160, in              |<br>|                                     | select_destinations\n    return cctxt.call(ctxt, \'select_destinations\',              |<br>|                                     | **msg_args)\n  File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py",   |<br>|                                     | line 181, in call\n    transport_options=self.transport_options)\n  File               |<br>|                                     | "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 129, in _send\n   |<br>|                                     | transport_options=transport_options)\n  File "/usr/lib/python3.6/site-                 |<br>|                                     | packages/oslo_messaging/_drivers/amqpdriver.py", line 674, in send\n                   |<br>|                                     | 
transport_options=transport_options)\n  File "/usr/lib/python3.6/site-                 |<br>|                                     | packages/oslo_messaging/_drivers/amqpdriver.py", line 664, in _send\n    raise         |<br>|                                     | result\nnova.exception_Remote.NoValidHost_Remote: No valid host was found. There are   |<br>|                                     | not enough hosts available.\nTraceback (most recent call last):\n\n  File              |<br>|                                     | "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 235, in inner\n  |<br>|                                     | return func(*args, **kwargs)\n\n  File "/usr/lib/python3.6/site-                       |<br>|                                     | packages/nova/scheduler/manager.py", line 214, in select_destinations\n                |<br>|                                     | allocation_request_version, return_alternates)\n\n  File "/usr/lib/python3.6/site-     |<br>|                                     | packages/nova/scheduler/filter_scheduler.py", line 96, in select_destinations\n        |<br>|                                     | allocation_request_version, return_alternates)\n\n  File "/usr/lib/python3.6/site-     |<br>|                                     | packages/nova/scheduler/filter_scheduler.py", line 265, in _schedule\n                 |<br>|                                     | claimed_instance_uuids)\n\n  File "/usr/lib/python3.6/site-                            |<br>|                                     | packages/nova/scheduler/filter_scheduler.py", line 302, in _ensure_sufficient_hosts\n  |<br>|                                     | raise exception.NoValidHost(reason=reason)\n\nnova.exception.NoValidHost: No valid     |<br>|                                     | host was found. 
There are not enough hosts available.\n\n'}                            |<br>| flavor                              | disk='470', ephemeral='0',                                                             |<br>|                                     | extra_specs.capabilities='boot_mode:uefi,boot_option:local',                           |<br>|                                     | extra_specs.resources:CUSTOM_BAREMETAL_RESOURCE_CLASS='1',                             |<br>|                                     | extra_specs.resources:DISK_GB='0', extra_specs.resources:MEMORY_MB='0',                |<br>|                                     | extra_specs.resources:VCPU='0', original_name='bm-flavor', ram='63700', swap='0',      |<br>|                                     | vcpus='20'                                                                             |<br>| hostId                              |                                                                                        |<br>| host_status                         |                                                                                        |<br>| id                                  | 49944a1f-7758-4522-9ef1-867ede44b3fc                                                   |<br>| image                               | whole-disk-centos (80724772-c760-4136-b453-754456d7c549)                               |<br>| key_name                            | None                                                                                   |<br>| locked                              | False                                                                                  |<br>| locked_reason                       | None                                                                                   |<br>| name                                | bm-server                                                                              |<br>| project_id                          | 8dde31e24eba41bfb7212ae154d61268    
                                                   |<br>| properties                          |                                                                                        |<br>| server_groups                       | []                                                                                     |<br>| status                              | ERROR                                                                                  |<br>| tags                                | []                                                                                     |<br>| trusted_image_certificates          | None                                                                                   |<br>| updated                             | 2022-02-14T10:20:49Z                                                                   |<br>| user_id                             | f689d147221549f1a6cbd1310078127d                                                       |<br>| volumes_attached                    |                                                                                        |<br>+-------------------------------------+----------------------------------------------------------------------------------------+<br>(overcloud) [stack@undercloud v4]$<br>(overcloud) [stack@undercloud v4]$<br></div><div><br></div><div>For your reference, our updated flavor and baremetal node properties are as below:</div><div><br></div><div>(overcloud) [stack@undercloud v4]$ <b>openstack flavor show bm-flavor --fit-width</b><br>+----------------------------+-------------------------------------------------------------------------------------------------+<br>| Field                      | Value                                                                                           |<br>+----------------------------+-------------------------------------------------------------------------------------------------+<br>| OS-FLV-DISABLED:disabled   | False                          
                                                                 |<br>| OS-FLV-EXT-DATA:ephemeral  | 0                                                                                               |<br>| access_project_ids         | None                                                                                            |<br>| description                | None                                                                                            |<br>| disk                       | 470                                                                                             |<br>| extra_specs                | {'resources:CUSTOM_BAREMETAL_RESOURCE_CLASS': '1', 'resources:VCPU': '0',                       |<br>|                            | 'resources:MEMORY_MB': '0', 'resources:DISK_GB': '0', 'capabilities':                           |<br>|                            | 'boot_mode:uefi,boot_option:local'}                                                             |<br>| id                         | 021c3021-56ec-4eba-bf57-c516ee9b2ee3                                                            |<br>| name                       | bm-flavor                                                                                       |<br>| os-flavor-access:is_public | True                                                                                            |<br>| properties                 |<b> capabilities='boot_mode:uefi,boot_option:local', resources:CUSTOM_BAREMETAL_RESOURCE_CLASS='1', |</b><br>|                            | resources:DISK_GB='0', resources:MEMORY_MB='0', resources:VCPU='0'                              |<br>| ram                        | 63700                                                                                           |<br>| rxtx_factor                | 1.0                                                                                             |<br>| swap                       | 0                                           
                                                    |<br>| vcpus                      | 20                                                                                              |<br>+----------------------------+-------------------------------------------------------------------------------------------------+<br>(overcloud) [stack@undercloud v4]$<br><br>(overcloud) [stack@undercloud v4]$<br><br></div><div>(overcloud) [stack@undercloud v4]$<b> openstack baremetal node show baremetal-node --fit-width</b><br>+------------------------+-----------------------------------------------------------------------------------------------------+<br>| Field                  | Value                                                                                               |<br>+------------------------+-----------------------------------------------------------------------------------------------------+<br>| allocation_uuid        | None                                                                                                |<br>| automated_clean        | None                                                                                                |<br>| bios_interface         | no-bios                                                                                             |<br>| boot_interface         | ipxe                                                                                                |<br>| chassis_uuid           | None                                                                                                |<br>| clean_step             | {}                                                                                                  |<br>| conductor              | overcloud-controller-0.localdomain                                                                  |<br>| conductor_group        |                                                                                                     |<br>| console_enabled        | 
False                                                                                               |<br>| console_interface      | ipmitool-socat                                                                                      |<br>| created_at             | 2022-02-14T10:05:32+00:00                                                                           |<br>| deploy_interface       | iscsi                                                                                               |<br>| deploy_step            | {}                                                                                                  |<br>| description            | None                                                                                                |<br>| driver                 | ipmi                                                                                                |<br>| driver_info            | {'ipmi_port': 623, 'ipmi_username': 'hsc', 'ipmi_password': '******', 'ipmi_address': '10.0.1.183', |<br>|                        | 'deploy_kernel': '95a5b644-c04e-4a66-8f2b-e1e9806bed6e', 'deploy_ramdisk':                          |<br>|                        | '17644220-e623-4981-ae77-d789657851ba'}                                                             |<br>| driver_internal_info   | {'agent_erase_devices_iterations': 1, 'agent_erase_devices_zeroize': True,                          |<br>|                        | 'agent_continue_if_ata_erase_failed': False, 'agent_enable_ata_secure_erase': True,                 |<br>|                        | 'disk_erasure_concurrency': 1, 'last_power_state_change': '2022-02-14T10:15:05.062161',             |<br>|                        | 'agent_version': '5.0.5.dev25', 'agent_last_heartbeat': '2022-02-14T10:14:59.666025',               |<br>|                        | 'hardware_manager_version': {'generic_hardware_manager': '1.1'}, 'agent_cached_clean_steps':        |<br>|                        | {'deploy': 
[{'step': 'erase_devices', 'priority': 10, 'interface': 'deploy', 'reboot_requested':    |<br>|                        | False, 'abortable': True}, {'step': 'erase_devices_metadata', 'priority': 99, 'interface':          |<br>|                        | 'deploy', 'reboot_requested': False, 'abortable': True}], 'raid': [{'step': 'delete_configuration', |<br>|                        | 'priority': 0, 'interface': 'raid', 'reboot_requested': False, 'abortable': True}, {'step':         |<br>|                        | 'create_configuration', 'priority': 0, 'interface': 'raid', 'reboot_requested': False, 'abortable': |<br>|                        | True}]}, 'agent_cached_clean_steps_refreshed': '2022-02-14 10:14:58.093777', 'clean_steps': None}   |<br>| extra                  | {}                                                                                                  |<br>| fault                  | None                                                                                                |<br>| inspect_interface      | inspector                                                                                           |<br>| inspection_finished_at | None                                                                                                |<br>| inspection_started_at  | None                                                                                                |<br>| instance_info          | {}                                                                                                  |<br>| instance_uuid          | None                                                                                                |<br>| last_error             | None                                                                                                |<br>| maintenance            | False                                                                                               |<br>| maintenance_reason     | None                           
                                                                     |<br>| management_interface   | ipmitool                                                                                            |<br>| name                   | baremetal-node                                                                                      |<br>| network_interface      | flat                                                                                                |<br>| owner                  | None                                                                                                |<br>| power_interface        | ipmitool                                                                                            |<br>| power_state            | power off                                                                                           |<br>| <b>properties             | {'cpus': 20, 'memory_mb': 63700, 'local_gb': 470, 'cpu_arch': 'x86_64', 'capabilities':             |<br>|                        | 'boot_mode:uefi,boot_option:local', 'vendor': 'hewlett-packard'}        </b>                            |<br>| protected              | False                                                                                               |<br>| protected_reason       | None                                                                                                |<br>| provision_state        | available                                                                                           |<br>| provision_updated_at   | 2022-02-14T10:15:27+00:00                                                                           |<br>| raid_config            | {}                                                                                                  |<br>| raid_interface         | no-raid                                                                                             |<br>| rescue_interface       | agent                                       
                                                        |<br>| reservation            | None                                                                                                |<br>| resource_class         | baremetal-resource-class                                                                            |<br>| storage_interface      | noop                                                                                                |<br>| target_power_state     | None                                                                                                |<br>| target_provision_state | None                                                                                                |<br>| target_raid_config     | {}                                                                                                  |<br>| traits                 | []                                                                                                  |<br>| updated_at             | 2022-02-14T10:15:27+00:00                                                                           |<br>| uuid                   | cd021878-40eb-407c-87c5-ce6ef92d29eb                                                                |<br>| vendor_interface       | ipmitool                                                                                            |<br>+------------------------+-----------------------------------------------------------------------------------------------------+<br>(overcloud) [stack@undercloud v4]$<br></div><div><br></div><div>On further debugging, we found the following in the nova-scheduler logs:</div><div><b><br>2022-02-14 12:58:22.830 7 WARNING keystoneauth.discover [-] Failed to contact the endpoint at <a href="http://172.16.2.224:8778/placement" target="_blank">http://172.16.2.224:8778/placement</a> for discovery. 
Fallback to using that endpoint as the base url.<br>2022-02-14 12:58:23.438 7 WARNING keystoneauth.discover [req-ad5801e4-efd7-4159-a601-68e72c0d651f - - - - -] Failed to contact the endpoint at <a href="http://172.16.2.224:8778/placement" target="_blank">http://172.16.2.224:8778/placement</a> for discovery. Fallback to using that endpoint as the base url.</b><br></div><div><br></div><div>Here, 172.16.2.224 is the internal IP. </div><div><br></div><div>Going by the document:</div><div><a href="https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/baremetal_overcloud.html" target="_blank">Bare Metal Instances in Overcloud — TripleO 3.0.0 documentation (openstack.org)</a><br></div><div><br></div><div>the verification commands are given as below:</div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div>(overcloud) [root@overcloud-controller-0 ~]# endpoint=<a href="http://172.16.2.224:8778/placement" target="_blank">http://172.16.2.224:8778/placement</a></div><div>(overcloud) [root@overcloud-controller-0 ~]# token=$(openstack token issue -f value -c id)</div><div>(overcloud) [root@overcloud-controller-0 ~]# curl -sH "X-Auth-Token: $token" $endpoint/resource_providers/<node id> | jq .inventories</div><div><b>null</b></div></blockquote><div>The result is the same even if we run the curl command against the public endpoint.<br></div><div><br></div><div>Please advise. 
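One thing worth double-checking when comparing the flavor and the node output above is the resource-class naming. Nova derives the Placement custom resource class from the Ironic node's resource_class by uppercasing it, replacing every character outside [A-Z0-9_] with an underscore, and prefixing CUSTOM_. A minimal sketch of that normalization (the helper name here is illustrative, not an OpenStack API):

```python
import re

def ironic_to_placement_class(resource_class: str) -> str:
    """Sketch of the normalization applied to an Ironic node's
    resource_class before it is matched against a flavor extra spec:
    uppercase, map characters outside [A-Z0-9_] to '_', add CUSTOM_."""
    return "CUSTOM_" + re.sub(r"[^A-Z0-9_]", "_", resource_class.upper())

# The node above has resource_class 'baremetal-resource-class':
print(ironic_to_placement_class("baremetal-resource-class"))
# -> CUSTOM_BAREMETAL_RESOURCE_CLASS
```

Since the flavor already requests resources:CUSTOM_BAREMETAL_RESOURCE_CLASS='1', the naming looks consistent here; the null inventory from the curl check therefore points at the resource provider not being populated in Placement rather than a class-name mismatch.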
</div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Feb 12, 2022 at 12:45 AM Julia Kreger <<a href="mailto:juliaashleykreger@gmail.com" target="_blank">juliaashleykreger@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Feb 11, 2022 at 6:32 AM Lokendra Rathour <<a href="mailto:lokendrarathour@gmail.com" target="_blank">lokendrarathour@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">Hi Harald / OpenStack Team,<div>Thank you again for your support.</div><div><br></div><div>We have successfully provisioned the baremetal node as per the inputs you shared. The only change we made was to add an entry for the ServiceNetmap.</div><div><br></div><div>Further, while trying to launch an instance on the baremetal node, we are facing the issue mentioned below:</div><div><br></div><div><br></div></div></div></blockquote><div>[trim'ed picture because of message size] </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div></div><div><br></div><div><i>"2022-02-11 18:13:45.840 7 ERROR nova.compute.manager [req-aafdea4d-815f-4504-b7d7-4fd95d1e083e - - - - -] Error updating resources for node 9560bc2d-5f94-4ba0-9711-340cb8ad7d8a.: nova.exception.NoResourceClass: Resource class not found for Ironic node 9560bc2d-5f94-4ba0-9711-340cb8ad7d8a.</i></div><i>2022-02-11 18:13:45.840 7 ERROR nova.compute.manager Traceback (most recent call last):<br>2022-02-11 18:13:45.840 7 ERROR nova.compute.manager   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 8894, 
in _update_available_resource_for_node</i><br><div>"</div></div></div></blockquote><div><br></div><div>So this exception can only be raised if the resource_class field is just not populated for the node. It is a required field for nova/ironic integration. Also, Interestingly enough, this UUID in the error doesn't match the baremetal node below. I don't know if that is intentional?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>for your reference please refer following details:</div><div>(overcloud) [stack@undercloud v4]$ openstack baremetal node show baremetal-node --fit-width<br>+------------------------+-------------------------------------------------------------------------------------------------------------------+<br>| Field                  | Value                                                                                                             |<br>+------------------------+-------------------------------------------------------------------------------------------------------------------+<br>| allocation_uuid        | None                                                                                                              |<br>| automated_clean        | None                                                                                                              |<br>| bios_interface         | no-bios                                                                                                           |<br>| boot_interface         | ipxe                                                                                                              |<br>| chassis_uuid           | None                                                                                                              |<br>| clean_step             | {}                                                                                          
                      |<br>| conductor              | overcloud-controller-0.localdomain                                                                                |<br>| conductor_group        |                                                                                                                   |<br>| console_enabled        | False                                                                                                             |<br>| console_interface      | ipmitool-socat                                                                                                    |<br>| created_at             | 2022-02-11T13:02:40+00:00                                                                                         |<br>| deploy_interface       | iscsi                                                                                                             |<br>| deploy_step            | {}                                                                                                                |<br>| description            | None                                                                                                              |<br>| driver                 | ipmi                                                                                                              |<br>| driver_info            | {'ipmi_port': 623, 'ipmi_username': 'hsc', 'ipmi_password': '******', 'ipmi_address': '10.0.1.183',               |<br>|                        | 'deploy_kernel': 'bc62f3dc-d091-4dbd-b730-cf7b6cb48625', 'deploy_ramdisk':                                        |<br>|                        | 'd58bcc08-cb7c-4f21-8158-0a5ed4198108'}                                                                           |<br>| driver_internal_info   | {'agent_erase_devices_iterations': 1, 'agent_erase_devices_zeroize': True, 'agent_continue_if_ata_erase_failed':  |<br>|                        | False, 'agent_enable_ata_secure_erase': True, 
'disk_erasure_concurrency': 1, 'last_power_state_change':           |<br>|                        | '2022-02-11T13:14:29.581361', 'agent_version': '5.0.5.dev25', 'agent_last_heartbeat':                             |<br>|                        | '2022-02-11T13:14:24.151928', 'hardware_manager_version': {'generic_hardware_manager': '1.1'},                    |<br>|                        | 'agent_cached_clean_steps': {'deploy': [{'step': 'erase_devices', 'priority': 10, 'interface': 'deploy',          |<br>|                        | 'reboot_requested': False, 'abortable': True}, {'step': 'erase_devices_metadata', 'priority': 99, 'interface':    |<br>|                        | 'deploy', 'reboot_requested': False, 'abortable': True}], 'raid': [{'step': 'delete_configuration', 'priority':   |<br>|                        | 0, 'interface': 'raid', 'reboot_requested': False, 'abortable': True}, {'step': 'create_configuration',           |<br>|                        | 'priority': 0, 'interface': 'raid', 'reboot_requested': False, 'abortable': True}]},                              |<br>|                        | 'agent_cached_clean_steps_refreshed': '2022-02-11 13:14:22.580729', 'clean_steps': None}                          |<br>| extra                  | {}                                                                                                                |<br>| fault                  | None                                                                                                              |<br>| inspect_interface      | inspector                                                                                                         |<br>| inspection_finished_at | None                                                                                                              |<br>| inspection_started_at  | None                                                                                                              |<br>| instance_info          | 
{}                                                                                                                |<br>| instance_uuid          | None                                                                                                              |<br>| last_error             | None                                                                                                              |<br>| maintenance            | False                                                                                                             |<br>| maintenance_reason     | None                                                                                                              |<br>| management_interface   | ipmitool                                                                                                          |<br>| name                   | baremetal-node                                                                                                    |<br>| network_interface      | flat                                                                                                              |<br>| owner                  | None                                                                                                              |<br>| power_interface        | ipmitool                                                                                                          |<br>| power_state            | power off                                                                                                         |<br><b>|<span style="background-color:rgb(255,242,204)"> properties             | {'cpus': 20, 'memory_mb': 63700, 'local_gb': 470, 'cpu_arch': 'x86_64', 'capabilities':                           |<br>|                        | 'boot_option:local,boot_mode:uefi', 'vendor': 'hewlett-packard'}  </span>   </b>                                             |<br>| protected              | False                                   
                                                                          |<br>| protected_reason       | None                                                                                                              |<br>| provision_state        | available                                                                                                         |<br>| provision_updated_at   | 2022-02-11T13:14:51+00:00                                                                                         |<br>| raid_config            | {}                                                                                                                |<br>| raid_interface         | no-raid                                                                                                           |<br>| rescue_interface       | agent                                                                                                             |<br>| reservation            | None                                                                                                              |<br><b style="background-color:rgb(255,242,204)">| resource_class         | baremetal-resource-class  </b>                                                                                        |<br>| storage_interface      | noop                                                                                                              |<br>| target_power_state     | None                                                                                                              |<br>| target_provision_state | None                                                                                                              |<br>| target_raid_config     | {}                                                                                                                |<br>| traits                 | []                                                                                           
                     |<br>| updated_at             | 2022-02-11T13:14:52+00:00                                                                                         |<br>| uuid                   | e64ad28c-43d6-4b9f-aa34-f8bc58e9e8fe                                                                              |<br>| vendor_interface       | ipmitool                                                                                                          |<br>+------------------------+-------------------------------------------------------------------------------------------------------------------+<br>(overcloud) [stack@undercloud v4]$<br></div><div><br></div><div><br></div><div><br></div><div><br></div><div>(overcloud) [stack@undercloud v4]$ openstack flavor show my-baremetal-flavor --fit-width<br>+----------------------------+---------------------------------------------------------------------------------------------------------------+<br>| Field                      | Value                                                                                                         |<br>+----------------------------+---------------------------------------------------------------------------------------------------------------+<br>| OS-FLV-DISABLED:disabled   | False                                                                                                         |<br>| OS-FLV-EXT-DATA:ephemeral  | 0                                                                                                             |<br>| access_project_ids         | None                                                                                                          |<br>| description                | None                                                                                                          |<br>| disk                       | 470                                                                                                           |<br><b 
style="background-color:rgb(255,242,204)">| extra_specs                | {'resources:CUSTOM_BAREMETAL_RESOURCE_CLASS': '1', 'resources:VCPU': '0', 'resources:MEMORY_MB': '0',         |<br>|                            | 'resources:DISK_GB': '0', 'capabilities:boot_option': 'local,boot_mode:uefi'} </b>                                |<br>| id                         | 66a13404-4c47-4b67-b954-e3df42ae8103                                                                          |<br>| name                       | my-baremetal-flavor                                                                                           |<br>| os-flavor-access:is_public | True                                                                                                          |<br><b>| properties                 | capabilities:boot_option='local,boot_mode:uefi', resources:CUSTOM_BAREMETAL_RESOURCE_CLASS='1',               |<br>|                            | resources:DISK_GB='0', resources:MEMORY_MB='0', resources:VCPU='0'    </b>                                        |<br>| ram                        | 63700                                                                                                         |<br>| rxtx_factor                | 1.0                                                                                                           |<br>| swap                       | 0                                                                                                             |<br>| vcpus                      | 20                                                                                                            |<br>+----------------------------+---------------------------------------------------------------------------------------------------------------+<br></div></div></div></blockquote><div><br></div><div>However you've set your capabilities field, it is actually unable to be parsed. 
Then again, it doesn't *have* to be defined to match the baremetal node. The setting can still apply on the baremetal node if that is the operational default for the machine as defined on the machine itself.</div><div><br></div><div>I suspect, based upon whatever the precise nova settings are, this would result in an inability to schedule on to the node because it would parse it incorrectly, possibly looking for a key value of "capabilities:boot_option", instead of "capabilities".</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>(overcloud) [stack@undercloud v4]$<br></div><div><br></div><div>Can you please check and suggest if something is missing.</div><div><br></div><div>Thanks once again for your support.</div><div><br></div><div>-Lokendra</div><div><br></div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Feb 10, 2022 at 10:09 PM Lokendra Rathour <<a href="mailto:lokendrarathour@gmail.com" target="_blank">lokendrarathour@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Harald, </div><div>Thanks for the response, please find my response inline:</div><div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Feb 10, 2022 at 8:24 PM Harald Jensas <<a href="mailto:hjensas@redhat.com" target="_blank">hjensas@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 2/10/22 14:49, Lokendra Rathour wrote:<br>
> Hi Harald,<br>
> Thanks once again for your support, we tried activating the parameters:<br>
> ServiceNetMap:<br>
>      IronicApiNetwork: provisioning<br>
>      IronicNetwork: provisioning<br>
> at environments/network-environments.yaml<br>
> image.png<br>
> After changing these values the updated or even the fresh deployments <br>
> are failing.<br>
> <br>
<br>
How did deployment fail?<br></blockquote><div><br></div><div>[Loke] : it failed immediately after when the IP for ctlplane network is assigned, and ssh is established and stack creation is completed, I think at the start of ansible execution. </div></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><div>Error:</div></div><div class="gmail_quote"><div>"enabling ssh admin - COMPLETE.</div></div><div class="gmail_quote"><div>Host 10.0.1.94 not found in /home/stack/.ssh/known_hosts"</div></div><div class="gmail_quote"><div>Although this message is even seen when the deployment is successful. so I do not think this is the culprit.</div></div></blockquote><div class="gmail_quote"><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> The command that we are using to deploy the OpenStack overcloud:<br>
> /openstack overcloud deploy --templates \<br>
>      -n /home/stack/templates/network_data.yaml \<br>
>      -r /home/stack/templates/roles_data.yaml \<br>
>      -e /home/stack/templates/node-info.yaml \<br>
>      -e /home/stack/templates/environment.yaml \<br>
>      -e /home/stack/templates/environments/network-isolation.yaml \<br>
>      -e /home/stack/templates/environments/network-environment.yaml \<br>
<br>
What modifications did you do to network-isolation.yaml and</blockquote><div>[Loke]:</div><div><b>Network-isolation.yaml:</b></div></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><div># Enable the creation of Neutron networks for isolated Overcloud</div></div><div class="gmail_quote"><div># traffic and configure each role to assign ports (related</div></div><div class="gmail_quote"><div># to that role) on these networks.</div></div><div class="gmail_quote"><div>resource_registry:</div></div><div class="gmail_quote"><div>  # networks as defined in network_data.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Network::J3Mgmt: ../network/j3mgmt.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Network::Tenant: ../network/tenant.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Network::InternalApi: ../network/internal_api.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Network::External: ../network/external.yaml</div></div></blockquote><div class="gmail_quote"><div><br></div></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><div>  # Port assignments for the VIPs</div></div>  OS::TripleO::Network::Ports::J3MgmtVipPort: ../network/ports/j3mgmt.yaml<br><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><br></div></blockquote><div class="gmail_quote"><div>  OS::TripleO::Network::Ports::InternalApiVipPort: ../network/ports/internal_api.yaml</div></div>  OS::TripleO::Network::Ports::ExternalVipPort: ../network/ports/external.yaml<br><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><br></div></blockquote><div class="gmail_quote"><div>  OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/vip.yaml</div></div>  OS::TripleO::Network::Ports::OVNDBsVipPort: ../network/ports/vip.yaml<br></blockquote><blockquote style="margin:0px 0px 0px 
40px;border:none;padding:0px"><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><br></div></blockquote><div class="gmail_quote"><div><br></div></div><div class="gmail_quote"><div>  # Port assignments by role, edit role definition to assign networks to roles.</div></div><div class="gmail_quote"><div>  # Port assignments for the Controller</div></div><div class="gmail_quote"><div>  OS::TripleO::Controller::Ports::J3MgmtPort: ../network/ports/j3mgmt.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Controller::Ports::TenantPort: ../network/ports/tenant.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal_api.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Controller::Ports::ExternalPort: ../network/ports/external.yaml</div></div><div class="gmail_quote"><div>  # Port assignments for the Compute</div></div><div class="gmail_quote"><div>  OS::TripleO::Compute::Ports::J3MgmtPort: ../network/ports/j3mgmt.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Compute::Ports::TenantPort: ../network/ports/tenant.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Compute::Ports::InternalApiPort: ../network/ports/internal_api.yaml</div></div></blockquote><div class="gmail_quote"><div>~<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> <br>
network-environment.yaml?<br></blockquote><div><br></div>resource_registry:<br>  # Network Interface templates to use (these files must exist). You can<br>  # override these by including one of the net-*.yaml environment files,<br>  # such as net-bond-with-vlans.yaml, or modifying the list here.<br>  # Port assignments for the Controller<br>  OS::TripleO::Controller::Net::SoftwareConfig:<br>    ../network/config/bond-with-vlans/controller.yaml<br>  # Port assignments for the Compute<br>  OS::TripleO::Compute::Net::SoftwareConfig:<br>    ../network/config/bond-with-vlans/compute.yaml<br><div>parameter_defaults:<br> <br>  J3MgmtNetCidr: '<a href="http://80.0.1.0/24" target="_blank">80.0.1.0/24</a>'<br>  J3MgmtAllocationPools: [{'start': '80.0.1.4', 'end': '80.0.1.250'}]<br>  J3MgmtNetworkVlanID: 400<br><br>  TenantNetCidr: '<a href="http://172.16.0.0/24" target="_blank">172.16.0.0/24</a>'<br>  TenantAllocationPools: [{'start': '172.16.0.4', 'end': '172.16.0.250'}]<br>  TenantNetworkVlanID: 416<br>  TenantNetPhysnetMtu: 1500<br><br>  InternalApiNetCidr: '<a href="http://172.16.2.0/24" target="_blank">172.16.2.0/24</a>'<br>  InternalApiAllocationPools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]<br>  InternalApiNetworkVlanID: 418<br><br>  ExternalNetCidr: '<a href="http://10.0.1.0/24" target="_blank">10.0.1.0/24</a>'<br>  ExternalAllocationPools: [{'start': '10.0.1.85', 'end': '10.0.1.98'}]<br>  ExternalNetworkVlanID: 408<br><br>  DnsServers: []<br>  NeutronNetworkType: 'geneve,vlan'<br>  NeutronNetworkVLANRanges: 'datacentre:1:1000'<br>  BondInterfaceOvsOptions: "bond_mode=active-backup"<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
I typically use:<br>
-e <br>
/usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml<br>
-e <br>
/usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml<br>
-e /home/stack/templates/environments/network-overrides.yaml<br>
<br>
The network-isolation.yaml and network-environment.yaml are Jinja2 <br>
rendered based on the -n input, so to keep in sync with changes in the <br>
`-n` file, reference the files in <br>
/usr/share/openstack-tripleo-heat-templates. Then add overrides in <br>
network-overrides.yaml as needed.<br></blockquote><div><br></div><div>[Loke] : we are using it in this way only; I do not know what you pass in network-overrides.yaml, but I pass the other files per the commands below:</div><div><br></div>[stack@undercloud templates]$ cat environment.yaml<br>parameter_defaults:<br>  ControllerCount: 3<br>  TimeZone: 'Asia/Kolkata'<br>  NtpServer: ['30.30.30.3']<br>  NeutronBridgeMappings: datacentre:br-ex,baremetal:br-baremetal<br>  NeutronFlatNetworks: datacentre,baremetal<br>[stack@undercloud templates]$ cat ironic-config.yaml<br>parameter_defaults:<br>    IronicEnabledHardwareTypes:<br>        - ipmi<br>        - redfish<br>    IronicEnabledPowerInterfaces:<br>        - ipmitool<br>        - redfish<br>    IronicEnabledManagementInterfaces:<br>        - ipmitool<br>        - redfish<br>    IronicCleaningDiskErase: metadata<br>    IronicIPXEEnabled: true<br>    IronicInspectorSubnets:<br>    - ip_range: 172.23.3.100,172.23.3.150<br>    IPAImageURLs: '["<a href="http://30.30.30.1:8088/agent.kernel" target="_blank">http://30.30.30.1:8088/agent.kernel</a>", "<a href="http://30.30.30.1:8088/agent.ramdisk" target="_blank">http://30.30.30.1:8088/agent.ramdisk</a>"]'<br>    IronicInspectorInterface: 'br-baremetal'<br>[stack@undercloud templates]$<br>[stack@undercloud templates]$ cat node-info.yaml<br>parameter_defaults:<br>  OvercloudControllerFlavor: control<br>  OvercloudComputeFlavor: compute<br>  ControllerCount: 3<br>  ComputeCount: 1<br>[stack@undercloud templates]$<br><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
>      -e <br>
> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml <br>
> \<br>
>      -e <br>
> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml <br>
> \<br>
>      -e <br>
> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml <br>
> \<br>
>      -e /home/stack/templates/ironic-config.yaml \<br>
>      -e <br>
> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \<br>
>      -e <br>
> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \<br>
>      -e /home/stack/containers-prepare-parameter.yaml/<br>
> <br>
> **/home/stack/templates/ironic-config.yaml :<br>
> (overcloud) [stack@undercloud ~]$ cat <br>
> /home/stack/templates/ironic-config.yaml<br>
> parameter_defaults:<br>
>      IronicEnabledHardwareTypes:<br>
>          - ipmi<br>
>          - redfish<br>
>      IronicEnabledPowerInterfaces:<br>
>          - ipmitool<br>
>          - redfish<br>
>      IronicEnabledManagementInterfaces:<br>
>          - ipmitool<br>
>          - redfish<br>
>      IronicCleaningDiskErase: metadata<br>
>      IronicIPXEEnabled: true<br>
>      IronicInspectorSubnets:<br>
>      - ip_range: 172.23.3.100,172.23.3.150<br>
>      IPAImageURLs: '["<a href="http://30.30.30.1:8088/agent.kernel" rel="noreferrer" target="_blank">http://30.30.30.1:8088/agent.kernel</a> <br>
> <<a href="http://30.30.30.1:8088/agent.kernel" rel="noreferrer" target="_blank">http://30.30.30.1:8088/agent.kernel</a>>", <br>
> "<a href="http://30.30.30.1:8088/agent.ramdisk" rel="noreferrer" target="_blank">http://30.30.30.1:8088/agent.ramdisk</a> <br>
> <<a href="http://30.30.30.1:8088/agent.ramdisk" rel="noreferrer" target="_blank">http://30.30.30.1:8088/agent.ramdisk</a>>"]'<br>
>      IronicInspectorInterface: 'br-baremetal'<br>
> <br>
> Also the baremetal network(provisioning)(172.23.3.x)  is  routed with <br>
> ctlplane/admin network (30.30.30.x)<br>
> <br>
<br>
Unless the network you created in the overcloud is named `provisioning`, <br>
these parameters may be relevant.<br>
<br>
IronicCleaningNetwork:<br>
   default: 'provisioning'<br>
   description: Name or UUID of the *overcloud* network used for cleaning<br>
                bare metal nodes. The default value of "provisioning" can be<br>
                left during the initial deployment (when no networks are<br>
                created yet) and should be changed to an actual UUID in<br>
                a post-deployment stack update.<br>
   type: string<br>
<br>
IronicProvisioningNetwork:<br>
   default: 'provisioning'<br>
   description: Name or UUID of the *overcloud* network used for <br>
provisioning<br>
                of bare metal nodes, if IronicDefaultNetworkInterface is<br>
                set to "neutron". The default value of "provisioning" can be<br>
                left during the initial deployment (when no networks are<br>
                created yet) and should be changed to an actual UUID in<br>
                a post-deployment stack update.<br>
   type: string<br>
<br>
IronicRescuingNetwork:<br>
   default: 'provisioning'<br>
   description: Name or UUID of the *overcloud* network used for rescuing<br>
                of bare metal nodes, if IronicDefaultRescueInterface is not<br>
                set to "no-rescue". The default value of "provisioning" <br>
can be<br>
                left during the initial deployment (when no networks are<br>
                created yet) and should be changed to an actual UUID in<br>
                a post-deployment stack update.<br>
   type: string<br>
<br>
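A minimal environment-file sketch for overriding these, assuming the overcloud network is named `provisioning` (substitute the real name or UUID after deployment):

```yaml
# Hypothetical override file, e.g. ironic-networks.yaml, passed with -e.
# 'provisioning' is an assumption -- use the actual overcloud network
# name or UUID once the network exists.
parameter_defaults:
  IronicCleaningNetwork: provisioning
  IronicProvisioningNetwork: provisioning
  IronicRescuingNetwork: provisioning
```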
> *Query:*<br>
> <br>
>  1. any other location/way where we should add these so that they are<br>
>     included without error.<br>
> <br>
>         *ServiceNetMap:*<br>
> <br>
>         *    IronicApiNetwork: provisioning*<br>
> <br>
>         *    IronicNetwork: provisioning*<br>
> <br>
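For what it's worth, ServiceNetMap is an ordinary Heat parameter, so rather than editing environments/network-environment.yaml in place it should be possible to override it from a custom environment file passed with -e. A sketch (assuming your overcloud network is named `provisioning`):

```yaml
# Hypothetical custom environment file; the keys below are merged over
# ServiceNetMapDefaults by tripleo-heat-templates.
parameter_defaults:
  ServiceNetMap:
    IronicApiNetwork: provisioning
    IronicNetwork: provisioning
```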
<br>
`provisioning` network is defined in -n <br>
/home/stack/templates/network_data.yaml right? </blockquote><div>[Loke]: No, it does not have any entry for provisioning in this file; it has network entries for J3Mgmt, Tenant, InternalApi, and External. These networks are added as VLANs under the br-ext bridge. </div><div>The provisioning network I create after the overcloud is deployed and before the baremetal node is provisioned. </div><div>In the provisioning network, we give the range of the ironic network (172.23.3.x).</div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">And an entry in <br>
'networks' for the controller role in /home/stack/templates/roles_data.yaml?<br></blockquote><div>[Loke]: we also did not add a similar entry in roles_data.yaml.</div><div><br></div><div>Just to add: with these two files we have rendered the remaining templates. </div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
<br>
>       2. Also are these commands(mentioned above) configure Baremetal<br>
>     services are fine.<br>
> <br>
<br>
Yes, what you are doing makes sense.<br>
<br>
I'm actually not sure why it didn't work with your previous <br>
configuration; it got the information about the NBP file and obviously <br>
attempted to download it from 30.30.30.220. With routing in place, that <br>
should work.<br>
<br>
Changing the ServiceNetMap to move the IronicNetwork services to the <br>
172.23.3 network would avoid the routing.</blockquote><div>[Loke] : we can try this, but somehow we are not able to do so, for reasons that are not clear to us.</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
<br>
What is NeutronBridgeMappings?<br>
  br-baremetal maps to the physical network of the overcloud <br>
`provisioning` neutron network?<br></blockquote><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
[Loke] : yes, we create br-baremetal and then create the provisioning network mapping it to br-baremetal.<br>
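The commands we use for that are roughly the following (a sketch; the subnet range, gateway, and allocation pool are assumptions based on the 172.23.3.x addressing mentioned in this thread):

```shell
# Flat provider network on the 'baremetal' physnet (mapped to br-baremetal)
openstack network create \
  --provider-network-type flat \
  --provider-physical-network baremetal \
  --share provisioning

# Subnet covering the ironic provisioning range; gateway is an assumption
openstack subnet create \
  --network provisioning \
  --subnet-range 172.23.3.0/24 \
  --gateway 172.23.3.1 \
  --allocation-pool start=172.23.3.200,end=172.23.3.250 \
  provisioning-subnet
```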
<br></blockquote></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><div>Also attaching the complete rendered template folder along with the custom yaml files that I am using; referring to them may give you a clearer picture of our problem.</div></div><div class="gmail_quote"><div>Any clue would help.</div></div><div class="gmail_quote"><div>Our problem:</div></div><div class="gmail_quote"><div>we are not able to provision the baremetal node after the overcloud is deployed.</div></div><div class="gmail_quote"><div>If there are any straightforward documents using which we can test baremetal provisioning, please point us to them. </div></div><div class="gmail_quote"><div><br></div></div><div class="gmail_quote"><div>Thanks once again for reading all this. </div></div></blockquote><div class="gmail_quote"><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
--<br>
Harald<br>
<br>
</blockquote></div><br clear="all"><div><br></div>-<br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div>skype: lokendrarathour</div><div dir="ltr"><img src="https://drive.google.com/uc?id=0BynJnQEa1sUyU2dxclR4dVVWM0E&export=download&resourcekey=0-SqQLe-ZfiPFkKfdNa8WpMg" width="200" height="41"><br></div><div dir="ltr"><br></div></div></div></div></div></div>
</blockquote></div><br clear="all"><div><br></div>--<div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><img src="https://drive.google.com/uc?id=0BynJnQEa1sUyU2dxclR4dVVWM0E&export=download&resourcekey=0-SqQLe-ZfiPFkKfdNa8WpMg" width="200" height="41"><br></div><div dir="ltr"><br></div></div></div></div></div></div>
</blockquote></div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div dir="ltr">~ Lokendra</div><div dir="ltr"><a href="http://www.inertiaspeaks.com" target="_blank">www.inertiaspeaks.com</a></div><div dir="ltr"><a href="http://www.inertiagroups.com" target="_blank">www.inertiagroups.com</a></div><div>skype: lokendrarathour</div><div dir="ltr"><img src="https://drive.google.com/uc?id=0BynJnQEa1sUyU2dxclR4dVVWM0E&export=download&resourcekey=0-SqQLe-ZfiPFkKfdNa8WpMg" width="200" height="41"><br></div><div dir="ltr"><br></div></div></div></div></div>
</blockquote></div>