<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Feb 11, 2022 at 6:32 AM Lokendra Rathour <<a href="mailto:lokendrarathour@gmail.com">lokendrarathour@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">Hi Harald/ Openstack Team,<div>Thank you again for your support.</div><div><br></div><div>we have successfully provisioned the baremetal node as per the inputs shared by you. The only change that we did was to add an entry for the ServiceNetmap.</div><div><br></div><div>Further, we were trying to launch the baremetal node instance  in which we are facing ISSUE as mentioned below:</div><div><br></div><div><br></div></div></div></blockquote><div>[trim'ed picture because of message size] </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div></div><div><br></div><div><i>"2022-02-11 18:13:45.840 7 ERROR nova.compute.manager [req-aafdea4d-815f-4504-b7d7-4fd95d1e083e - - - - -] Error updating resources for node 9560bc2d-5f94-4ba0-9711-340cb8ad7d8a.: nova.exception.NoResourceClass: Resource class not found for Ironic node 9560bc2d-5f94-4ba0-9711-340cb8ad7d8a.</i></div><i>2022-02-11 18:13:45.840 7 ERROR nova.compute.manager Traceback (most recent call last):<br>2022-02-11 18:13:45.840 7 ERROR nova.compute.manager   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 8894, in _update_available_resource_for_node</i><br><div>"</div></div></div></blockquote><div><br></div><div>So this exception can only be raised if the resource_class field is just not populated for the node. It is a required field for nova/ironic integration. Also, Interestingly enough, this UUID in the error doesn't match the baremetal node below. 
I don't know if that is intentional?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>for your reference please refer following details:</div><div>(overcloud) [stack@undercloud v4]$ openstack baremetal node show baremetal-node --fit-width<br>+------------------------+-------------------------------------------------------------------------------------------------------------------+<br>| Field                  | Value                                                                                                             |<br>+------------------------+-------------------------------------------------------------------------------------------------------------------+<br>| allocation_uuid        | None                                                                                                              |<br>| automated_clean        | None                                                                                                              |<br>| bios_interface         | no-bios                                                                                                           |<br>| boot_interface         | ipxe                                                                                                              |<br>| chassis_uuid           | None                                                                                                              |<br>| clean_step             | {}                                                                                                                |<br>| conductor              | overcloud-controller-0.localdomain                                                                                |<br>| conductor_group        |                                                                                                                   |<br>| console_enabled        | False                                                                                                             |<br>| console_interface      | ipmitool-socat                                                                                                    |<br>| created_at             | 2022-02-11T13:02:40+00:00                                                                                         |<br>| deploy_interface       | iscsi                                                                                                             |<br>| deploy_step            | {}                                                                                                                |<br>| description            | None                                                                                                              |<br>| driver                 | ipmi                                                                                                              |<br>| driver_info            | {'ipmi_port': 623, 'ipmi_username': 'hsc', 'ipmi_password': '******', 'ipmi_address': '10.0.1.183',               |<br>|                        | 'deploy_kernel': 'bc62f3dc-d091-4dbd-b730-cf7b6cb48625', 'deploy_ramdisk':                                        |<br>|                        | 'd58bcc08-cb7c-4f21-8158-0a5ed4198108'}                                                                           |<br>| driver_internal_info   | {'agent_erase_devices_iterations': 1, 'agent_erase_devices_zeroize': True, 
'agent_continue_if_ata_erase_failed':  |<br>|                        | False, 'agent_enable_ata_secure_erase': True, 'disk_erasure_concurrency': 1, 'last_power_state_change':           |<br>|                        | '2022-02-11T13:14:29.581361', 'agent_version': '5.0.5.dev25', 'agent_last_heartbeat':                             |<br>|                        | '2022-02-11T13:14:24.151928', 'hardware_manager_version': {'generic_hardware_manager': '1.1'},                    |<br>|                        | 'agent_cached_clean_steps': {'deploy': [{'step': 'erase_devices', 'priority': 10, 'interface': 'deploy',          |<br>|                        | 'reboot_requested': False, 'abortable': True}, {'step': 'erase_devices_metadata', 'priority': 99, 'interface':    |<br>|                        | 'deploy', 'reboot_requested': False, 'abortable': True}], 'raid': [{'step': 'delete_configuration', 'priority':   |<br>|                        | 0, 'interface': 'raid', 'reboot_requested': False, 'abortable': True}, {'step': 'create_configuration',           |<br>|                        | 'priority': 0, 'interface': 'raid', 'reboot_requested': False, 'abortable': True}]},                              |<br>|                        | 'agent_cached_clean_steps_refreshed': '2022-02-11 13:14:22.580729', 'clean_steps': None}                          |<br>| extra                  | {}                                                                                                                |<br>| fault                  | None                                                                                                              |<br>| inspect_interface      | inspector                                                                                                         |<br>| inspection_finished_at | None                                                                                                              |<br>| inspection_started_at  | None                                                                                                              |<br>| instance_info          | {}                                                                                                                |<br>| instance_uuid          | None                                                                                                              |<br>| last_error             | None                                                                                                              |<br>| maintenance            | False                                                                                                             |<br>| maintenance_reason     | None                                                                                                              |<br>| management_interface   | ipmitool                                                                                                          |<br>| name                   | baremetal-node                                                                                                    |<br>| network_interface      | flat                                                                                                              |<br>| owner                  | None                                                                                                              |<br>| power_interface        | ipmitool                                                                                                          |<br>| 
power_state            | power off                                                                                                         |<br><b>|<span style="background-color:rgb(255,242,204)"> properties             | {'cpus': 20, 'memory_mb': 63700, 'local_gb': 470, 'cpu_arch': 'x86_64', 'capabilities':                           |<br>|                        | 'boot_option:local,boot_mode:uefi', 'vendor': 'hewlett-packard'}  </span>   </b>                                             |<br>| protected              | False                                                                                                             |<br>| protected_reason       | None                                                                                                              |<br>| provision_state        | available                                                                                                         |<br>| provision_updated_at   | 2022-02-11T13:14:51+00:00                                                                                         |<br>| raid_config            | {}                                                                                                                |<br>| raid_interface         | no-raid                                                                                                           |<br>| rescue_interface       | agent                                                                                                             |<br>| reservation            | None                                                                                                              |<br><b style="background-color:rgb(255,242,204)">| resource_class         | baremetal-resource-class  </b>                                                                                        |<br>| storage_interface      | noop                                                                                                              |<br>| target_power_state     | None                                                                                                              |<br>| target_provision_state | None                                                                                                              |<br>| target_raid_config     | {}                                                                                                                |<br>| traits                 | []                                                                                                                |<br>| updated_at             | 2022-02-11T13:14:52+00:00                                                                                         |<br>| uuid                   | e64ad28c-43d6-4b9f-aa34-f8bc58e9e8fe                                                                              |<br>| vendor_interface       | ipmitool                                                                                                          |<br>+------------------------+-------------------------------------------------------------------------------------------------------------------+<br>(overcloud) [stack@undercloud v4]$<br></div><div><br></div><div><br></div><div><br></div><div><br></div><div>(overcloud) [stack@undercloud v4]$ openstack flavor show my-baremetal-flavor --fit-width<br>+----------------------------+---------------------------------------------------------------------------------------------------------------+<br>| Field                      | 
Value                                                                                                         |<br>+----------------------------+---------------------------------------------------------------------------------------------------------------+<br>| OS-FLV-DISABLED:disabled   | False                                                                                                         |<br>| OS-FLV-EXT-DATA:ephemeral  | 0                                                                                                             |<br>| access_project_ids         | None                                                                                                          |<br>| description                | None                                                                                                          |<br>| disk                       | 470                                                                                                           |<br><b style="background-color:rgb(255,242,204)">| extra_specs                | {'resources:CUSTOM_BAREMETAL_RESOURCE_CLASS': '1', 'resources:VCPU': '0', 'resources:MEMORY_MB': '0',         |<br>|                            | 'resources:DISK_GB': '0', 'capabilities:boot_option': 'local,boot_mode:uefi'} </b>                                |<br>| id                         | 66a13404-4c47-4b67-b954-e3df42ae8103                                                                          |<br>| name                       | my-baremetal-flavor                                                                                           |<br>| os-flavor-access:is_public | True                                                                                                          |<br><b>| properties                 | capabilities:boot_option='local,boot_mode:uefi', resources:CUSTOM_BAREMETAL_RESOURCE_CLASS='1',               |<br>|                            | resources:DISK_GB='0', resources:MEMORY_MB='0', resources:VCPU='0'    </b>                                        |<br>| ram                        | 63700                                                                                                         |<br>| rxtx_factor                | 1.0                                                                                                           |<br>| swap                       | 0                                                                                                             |<br>| vcpus                      | 20                                                                                                            |<br>+----------------------------+---------------------------------------------------------------------------------------------------------------+<br></div></div></div></blockquote><div><br></div><div>However you've set your capabilities field, it is actually unable to be parsed. Then again, it doesn't *have* to be defined to match the baremetal node. 
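<br><br>As an aside, a rough sketch of how those capabilities are usually expressed as separate extra specs on a flavor when they are wanted for scheduling (the flavor name is taken from your output; this is only illustrative):<br><br>openstack flavor set my-baremetal-flavor \<br>  --property capabilities:boot_option='local' \<br>  --property capabilities:boot_mode='uefi'<br><br>And if the node referenced in the error really is a different node with no resource class populated, something along these lines would set it:<br><br>openstack baremetal node set 9560bc2d-5f94-4ba0-9711-340cb8ad7d8a \<br>  --resource-class baremetal-resource-class<br><br>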
The setting can still apply on the baremetal node if that is the operational default for the machine as defined on the machine itself.</div><div><br></div><div>I suspect, based upon whatever the precise nova settings are, this would result in an inability to schedule on to the node because it would parse it incorrectly, possibly looking for a key value of "capabilities:boot_option", instead of "capabilities".</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>(overcloud) [stack@undercloud v4]$<br></div><div><br></div><div>Can you please check and suggest if something is missing.</div><div><br></div><div>Thanks once again for your support.</div><div><br></div><div>-Lokendra</div><div><br></div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Feb 10, 2022 at 10:09 PM Lokendra Rathour <<a href="mailto:lokendrarathour@gmail.com" target="_blank">lokendrarathour@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Harald, </div><div>Thanks for the response, please find my response inline:</div><div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Feb 10, 2022 at 8:24 PM Harald Jensas <<a href="mailto:hjensas@redhat.com" target="_blank">hjensas@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 2/10/22 14:49, Lokendra Rathour wrote:<br>
> Hi Harald,<br>
> Thanks once again for your support, we tried activating the parameters:<br>
> ServiceNetMap:<br>
>      IronicApiNetwork: provisioning<br>
>      IronicNetwork: provisioning<br>
> at environments/network-environments.yaml<br>
> After changing these values the updated or even the fresh deployments <br>
> are failing.<br>
> <br>
<br>
How did deployment fail?<br></blockquote><div><br></div><div>[Loke] : it failed immediately after when the IP for ctlplane network is assigned, and ssh is established and stack creation is completed, I think at the start of ansible execution. </div></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><div>Error:</div></div><div class="gmail_quote"><div>"enabling ssh admin - COMPLETE.</div></div><div class="gmail_quote"><div>Host 10.0.1.94 not found in /home/stack/.ssh/known_hosts"</div></div><div class="gmail_quote"><div>Although this message is even seen when the deployment is successful. so I do not think this is the culprit.</div></div></blockquote><div class="gmail_quote"><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
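<div>(Side note: the known_hosts message is usually harmless. To see the real failure, it often helps to look at the failed stack resources and the config-download ansible log; a rough sketch, assuming the stack is named overcloud and a Mistral-based workflow, so the log path may differ on your release:<br><br>openstack stack failures list --long overcloud<br>less /var/lib/mistral/overcloud/ansible.log<br>)</div><div><br></div>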
<br>
> The command that we are using to deploy the OpenStack overcloud:<br>
> openstack overcloud deploy --templates \<br>
>      -n /home/stack/templates/network_data.yaml \<br>
>      -r /home/stack/templates/roles_data.yaml \<br>
>      -e /home/stack/templates/node-info.yaml \<br>
>      -e /home/stack/templates/environment.yaml \<br>
>      -e /home/stack/templates/environments/network-isolation.yaml \<br>
>      -e /home/stack/templates/environments/network-environment.yaml \<br>
<br>
What modifications did you do to network-isolation.yaml and</blockquote><div>[Loke]:</div><div><b>Network-isolation.yaml:</b></div></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><div># Enable the creation of Neutron networks for isolated Overcloud</div></div><div class="gmail_quote"><div># traffic and configure each role to assign ports (related</div></div><div class="gmail_quote"><div># to that role) on these networks.</div></div><div class="gmail_quote"><div>resource_registry:</div></div><div class="gmail_quote"><div>  # networks as defined in network_data.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Network::J3Mgmt: ../network/j3mgmt.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Network::Tenant: ../network/tenant.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Network::InternalApi: ../network/internal_api.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Network::External: ../network/external.yaml</div></div></blockquote><div class="gmail_quote"><div><br></div></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><div>  # Port assignments for the VIPs</div></div>  OS::TripleO::Network::Ports::J3MgmtVipPort: ../network/ports/j3mgmt.yaml<br><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><br></div></blockquote><div class="gmail_quote"><div>  OS::TripleO::Network::Ports::InternalApiVipPort: ../network/ports/internal_api.yaml</div></div>  OS::TripleO::Network::Ports::ExternalVipPort: ../network/ports/external.yaml<br><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><br></div></blockquote><div class="gmail_quote"><div>  OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/vip.yaml</div></div>  OS::TripleO::Network::Ports::OVNDBsVipPort: ../network/ports/vip.yaml<br></blockquote><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><br></div></blockquote><div class="gmail_quote"><div><br></div></div><div class="gmail_quote"><div>  # Port assignments by role, edit role definition to assign networks to roles.</div></div><div class="gmail_quote"><div>  # Port assignments for the Controller</div></div><div class="gmail_quote"><div>  OS::TripleO::Controller::Ports::J3MgmtPort: ../network/ports/j3mgmt.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Controller::Ports::TenantPort: ../network/ports/tenant.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal_api.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Controller::Ports::ExternalPort: ../network/ports/external.yaml</div></div><div class="gmail_quote"><div>  # Port assignments for the Compute</div></div><div class="gmail_quote"><div>  OS::TripleO::Compute::Ports::J3MgmtPort: ../network/ports/j3mgmt.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Compute::Ports::TenantPort: ../network/ports/tenant.yaml</div></div><div class="gmail_quote"><div>  OS::TripleO::Compute::Ports::InternalApiPort: ../network/ports/internal_api.yaml</div></div></blockquote><div class="gmail_quote"><div>~<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> <br>
network-environment.yaml?<br></blockquote><div><br></div>resource_registry:<br>  # Network Interface templates to use (these files must exist). You can<br>  # override these by including one of the net-*.yaml environment files,<br>  # such as net-bond-with-vlans.yaml, or modifying the list here.<br>  # Port assignments for the Controller<br>  OS::TripleO::Controller::Net::SoftwareConfig:<br>    ../network/config/bond-with-vlans/controller.yaml<br>  # Port assignments for the Compute<br>  OS::TripleO::Compute::Net::SoftwareConfig:<br>    ../network/config/bond-with-vlans/compute.yaml<br><div>parameter_defaults:<br> <br>  J3MgmtNetCidr: '<a href="http://80.0.1.0/24" target="_blank">80.0.1.0/24</a>'<br>  J3MgmtAllocationPools: [{'start': '80.0.1.4', 'end': '80.0.1.250'}]<br>  J3MgmtNetworkVlanID: 400<br><br>  TenantNetCidr: '<a href="http://172.16.0.0/24" target="_blank">172.16.0.0/24</a>'<br>  TenantAllocationPools: [{'start': '172.16.0.4', 'end': '172.16.0.250'}]<br>  TenantNetworkVlanID: 416<br>  TenantNetPhysnetMtu: 1500<br><br>  InternalApiNetCidr: '<a href="http://172.16.2.0/24" target="_blank">172.16.2.0/24</a>'<br>  InternalApiAllocationPools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]<br>  InternalApiNetworkVlanID: 418<br><br>  ExternalNetCidr: '<a href="http://10.0.1.0/24" target="_blank">10.0.1.0/24</a>'<br>  ExternalAllocationPools: [{'start': '10.0.1.85', 'end': '10.0.1.98'}]<br>  ExternalNetworkVlanID: 408<br><br>  DnsServers: []<br>  NeutronNetworkType: 'geneve,vlan'<br>  NeutronNetworkVLANRanges: 'datacentre:1:1000'<br>  BondInterfaceOvsOptions: "bond_mode=active-backup"<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
I typically use:<br>
-e <br>
/usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml<br>
-e <br>
/usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml<br>
-e /home/stack/templates/environments/network-overrides.yaml<br>
<br>
The network-isolation.yaml and network-environment.yaml are Jinja2 <br>
rendered based on the -n input, so to keep in sync with changes in the <br>
`-n` file, reference the files in <br>
/usr/share/openstack-tripleo-heat-templates. Then add overrides in <br>
network-overrides.yaml as needed.<br></blockquote><div><br></div><div>[Loke] : we are already using it like this; I do not know what you pass in network-overrides.yaml, but I pass the other files shown in the commands below:</div><div><br></div>[stack@undercloud templates]$ cat environment.yaml<br>parameter_defaults:<br>  ControllerCount: 3<br>  TimeZone: 'Asia/Kolkata'<br>  NtpServer: ['30.30.30.3']<br>  NeutronBridgeMappings: datacentre:br-ex,baremetal:br-baremetal<br>  NeutronFlatNetworks: datacentre,baremetal<br>[stack@undercloud templates]$ cat ironic-config.yaml<br>parameter_defaults:<br>    IronicEnabledHardwareTypes:<br>        - ipmi<br>        - redfish<br>    IronicEnabledPowerInterfaces:<br>        - ipmitool<br>        - redfish<br>    IronicEnabledManagementInterfaces:<br>        - ipmitool<br>        - redfish<br>    IronicCleaningDiskErase: metadata<br>    IronicIPXEEnabled: true<br>    IronicInspectorSubnets:<br>    - ip_range: 172.23.3.100,172.23.3.150<br>    IPAImageURLs: '["<a href="http://30.30.30.1:8088/agent.kernel" target="_blank">http://30.30.30.1:8088/agent.kernel</a>", "<a href="http://30.30.30.1:8088/agent.ramdisk" target="_blank">http://30.30.30.1:8088/agent.ramdisk</a>"]'<br>    IronicInspectorInterface: 'br-baremetal'<br>[stack@undercloud templates]$<br>[stack@undercloud templates]$ cat node-info.yaml<br>parameter_defaults:<br>  OvercloudControllerFlavor: control<br>  OvercloudComputeFlavor: compute<br>  ControllerCount: 3<br>  ComputeCount: 1<br>[stack@undercloud templates]$<br><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
>      -e <br>
> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml <br>
> \<br>
>      -e <br>
> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml <br>
> \<br>
>      -e <br>
> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml <br>
> \<br>
>      -e /home/stack/templates/ironic-config.yaml \<br>
>      -e <br>
> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \<br>
>      -e <br>
> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \<br>
>      -e /home/stack/containers-prepare-parameter.yaml<br>
> <br>
> **/home/stack/templates/ironic-config.yaml :<br>
> (overcloud) [stack@undercloud ~]$ cat <br>
> /home/stack/templates/ironic-config.yaml<br>
> parameter_defaults:<br>
>      IronicEnabledHardwareTypes:<br>
>          - ipmi<br>
>          - redfish<br>
>      IronicEnabledPowerInterfaces:<br>
>          - ipmitool<br>
>          - redfish<br>
>      IronicEnabledManagementInterfaces:<br>
>          - ipmitool<br>
>          - redfish<br>
>      IronicCleaningDiskErase: metadata<br>
>      IronicIPXEEnabled: true<br>
>      IronicInspectorSubnets:<br>
>      - ip_range: 172.23.3.100,172.23.3.150<br>
>      IPAImageURLs: '["<a href="http://30.30.30.1:8088/agent.kernel" rel="noreferrer" target="_blank">http://30.30.30.1:8088/agent.kernel</a> <br>
> <<a href="http://30.30.30.1:8088/agent.kernel" rel="noreferrer" target="_blank">http://30.30.30.1:8088/agent.kernel</a>>", <br>
> "<a href="http://30.30.30.1:8088/agent.ramdisk" rel="noreferrer" target="_blank">http://30.30.30.1:8088/agent.ramdisk</a> <br>
> <<a href="http://30.30.30.1:8088/agent.ramdisk" rel="noreferrer" target="_blank">http://30.30.30.1:8088/agent.ramdisk</a>>"] >      IronicInspectorInterface: 'br-baremetal'<br>
> <br>
> Also the baremetal network (provisioning) (172.23.3.x) is routed with <br>
> the ctlplane/admin network (30.30.30.x).<br>
> <br>
<br>
Unless the network you created in the overcloud is named `provisioning`, <br>
these parameters may be relevant.<br>
<br>
IronicCleaningNetwork:<br>
   default: 'provisioning'<br>
   description: Name or UUID of the *overcloud* network used for cleaning<br>
                bare metal nodes. The default value of "provisioning" can be<br>
                left during the initial deployment (when no networks are<br>
                created yet) and should be changed to an actual UUID in<br>
                a post-deployment stack update.<br>
   type: string<br>
<br>
IronicProvisioningNetwork:<br>
   default: 'provisioning'<br>
   description: Name or UUID of the *overcloud* network used for <br>
provisioning<br>
                of bare metal nodes, if IronicDefaultNetworkInterface is<br>
                set to "neutron". The default value of "provisioning" can be<br>
                left during the initial deployment (when no networks are<br>
                created yet) and should be changed to an actual UUID in<br>
                a post-deployment stack update.<br>
   type: string<br>
<br>
IronicRescuingNetwork:<br>
   default: 'provisioning'<br>
   description: Name or UUID of the *overcloud* network used for rescuing<br>
                of bare metal nodes, if IronicDefaultRescueInterface is not<br>
                set to "no-rescue". The default value of "provisioning" <br>
can be<br>
                left during the initial deployment (when no networks are<br>
                created yet) and should be changed to an actual UUID in<br>
                a post-deployment stack update.<br>
   type: string<br>
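<br>
For example, if the overcloud network ends up with a different name, or <br>
you want to pin these to a UUID once the network exists, a small <br>
environment file along these lines can be passed with -e (a sketch <br>
only; the values must match your cloud):<br>
<br>
parameter_defaults:<br>
  IronicCleaningNetwork: provisioning<br>
  IronicProvisioningNetwork: provisioning<br>
  IronicRescuingNetwork: provisioning<br>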
<br>
> *Query:*<br>
> <br>
>  1. any other location/way where we should add these so that they are<br>
>     included without error.<br>
> <br>
>         *ServiceNetMap:*<br>
> <br>
>         *    IronicApiNetwork: provisioning*<br>
> <br>
>         *    IronicNetwork: provisioning*<br>
> <br>
<br>
`provisioning` network is defined in -n <br>
/home/stack/templates/network_data.yaml right? </blockquote><div>[Loke]: No, it does not have any entry for provisioning in this file; it only has network entries for J3Mgmt, Tenant, InternalApi, and External. These networks are added as VLAN-based networks under the br-ext bridge. </div><div>The provisioning network I am creating after the overcloud is deployed and before the baremetal node is provisioned. </div><div>In the provisioning network, we are giving the range of the ironic network (172.23.3.x).</div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">And an entry in <br>
'networks' for the controller role in /home/stack/templates/roles_data.yaml?<br></blockquote><div>[Loke]: we did not add a similar entry in roles_data.yaml either.</div><div><br></div><div>Just to add: with these two files we have rendered the remaining templates. </div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
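<div>(For completeness, if you later want the provisioning network managed as a composable network instead of creating it by hand afterwards, the network_data.yaml entry looks roughly like the sketch below; the name, flags and ranges are purely illustrative and the exact schema depends on the release:<br><br>- name: OcProvisioning<br>  name_lower: provisioning<br>  vip: false<br>  ip_subnet: '172.23.3.0/24'<br>  allocation_pools: [{'start': '172.23.3.10', 'end': '172.23.3.90'}]<br><br>plus adding the same network to the 'networks' list of the Controller role in roles_data.yaml and re-rendering the templates.)</div><div><br></div>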
<br>
<br>
>       2. Also, are these commands (mentioned above) to configure Baremetal<br>
>     services fine?<br>
> <br>
<br>
Yes, what you are doing makes sense.<br>
<br>
I'm actually not sure why it didn't work with your previous <br>
configuration; it got the information about the NBP file and obviously <br>
attempted to download it from 30.30.30.220. With routing in place, that <br>
should work.<br>
<br>
Changing the ServiceNetMap to move the IronicNetwork services onto the <br>
172.23.3 network would avoid the routing.<br></blockquote><div>[Loke] : we can try this, but so far we have somehow not been able to get it working.  </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
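<div>(For what it's worth, the override itself is small; a sketch of a network-overrides.yaml style environment file, passed with an extra -e at the end of the deploy command, assuming the overcloud network is named provisioning:<br><br>parameter_defaults:<br>  ServiceNetMap:<br>    IronicApiNetwork: provisioning<br>    IronicNetwork: provisioning<br>)</div><div><br></div>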
<br>
<br>
What is NeutronBridgeMappings?<br>
  br-baremetal maps to the physical network of the overcloud <br>
`provisioning` neutron network?<br></blockquote><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
[Loke] : yes, we create br-baremetal and then we create the provisioning network, mapping it to br-baremetal.<br>
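<br>(For reference, a minimal sketch of that post-deployment step with illustrative addresses; the allocation pool should not overlap the inspection range 172.23.3.100-172.23.3.150:<br><br>openstack network create provisioning --share \<br>  --provider-network-type flat --provider-physical-network baremetal<br>openstack subnet create provisioning-subnet --network provisioning \<br>  --subnet-range 172.23.3.0/24 --gateway 172.23.3.1 \<br>  --allocation-pool start=172.23.3.200,end=172.23.3.250<br>)<br>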
<br></blockquote></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_quote"><div>Also attaching the complete rendered template folder along with custom yaml files that I am using, maybe referring that you might have a more clear picture of our problem.</div></div><div class="gmail_quote"><div>Any clue would help.</div></div><div class="gmail_quote"><div>Our problem,</div></div><div class="gmail_quote"><div>we are not able to provision the baremetal node after the overcloud is deployed.</div></div><div class="gmail_quote"><div>Do we have any straight-forward documents using which we can test the baremetal provision, please provide that. </div></div><div class="gmail_quote"><div><br></div></div><div class="gmail_quote"><div>Thanks once again for reading all these. </div></div></blockquote><div class="gmail_quote"><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
--<br>
Harald<br>
<br>
</blockquote></div><br clear="all"><div><br></div>-<br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div>skype: lokendrarathour</div><div dir="ltr"><img src="https://drive.google.com/uc?id=0BynJnQEa1sUyU2dxclR4dVVWM0E&export=download&resourcekey=0-SqQLe-ZfiPFkKfdNa8WpMg" width="200" height="41"><br></div><div dir="ltr"><br></div></div></div></div></div></div>
</blockquote></div><br clear="all"><div><br></div>--<div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><img src="https://drive.google.com/uc?id=0BynJnQEa1sUyU2dxclR4dVVWM0E&export=download&resourcekey=0-SqQLe-ZfiPFkKfdNa8WpMg" width="200" height="41"><br></div><div dir="ltr"><br></div></div></div></div></div></div>
</blockquote></div></div>