[Tripleo] Issue in Baremetal Provisioning from Overcloud

Lokendra Rathour lokendrarathour at gmail.com
Thu Feb 10 16:39:18 UTC 2022


Hi Harald,
Thanks for the response; please find my replies inline:


On Thu, Feb 10, 2022 at 8:24 PM Harald Jensas <hjensas at redhat.com> wrote:

> On 2/10/22 14:49, Lokendra Rathour wrote:
> > Hi Harald,
> > Thanks once again for your support, we tried activating the parameters:
> > ServiceNetMap:
> >      IronicApiNetwork: provisioning
> >      IronicNetwork: provisioning
> > at environments/network-environments.yaml
> > [inline screenshot of the ServiceNetMap change omitted]
> > After changing these values the updated or even the fresh deployments
> > are failing.
> >
>
> How did deployment fail?
>

[Loke]: it failed immediately after the IP for the ctlplane network was
assigned, SSH was established and stack creation completed, i.e. right at
the start of the Ansible execution.

Error:
"enabling ssh admin - COMPLETE.
Host 10.0.1.94 not found in /home/stack/.ssh/known_hosts"
Although this message also appears when the deployment is successful, so I
do not think it is the culprit.




> > The command that we are using to deploy the OpenStack overcloud:
> > openstack overcloud deploy --templates \
> >      -n /home/stack/templates/network_data.yaml \
> >      -r /home/stack/templates/roles_data.yaml \
> >      -e /home/stack/templates/node-info.yaml \
> >      -e /home/stack/templates/environment.yaml \
> >      -e /home/stack/templates/environments/network-isolation.yaml \
> >      -e /home/stack/templates/environments/network-environment.yaml \
>
> What modifications did you do to network-isolation.yaml and

[Loke]:
*Network-isolation.yaml:*

# Enable the creation of Neutron networks for isolated Overcloud
# traffic and configure each role to assign ports (related
# to that role) on these networks.
resource_registry:
  # networks as defined in network_data.yaml
  OS::TripleO::Network::J3Mgmt: ../network/j3mgmt.yaml
  OS::TripleO::Network::Tenant: ../network/tenant.yaml
  OS::TripleO::Network::InternalApi: ../network/internal_api.yaml
  OS::TripleO::Network::External: ../network/external.yaml

  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::J3MgmtVipPort: ../network/ports/j3mgmt.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: ../network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::ExternalVipPort: ../network/ports/external.yaml
  OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/vip.yaml
  OS::TripleO::Network::Ports::OVNDBsVipPort: ../network/ports/vip.yaml

  # Port assignments by role, edit role definition to assign networks to roles.
  # Port assignments for the Controller
  OS::TripleO::Controller::Ports::J3MgmtPort: ../network/ports/j3mgmt.yaml
  OS::TripleO::Controller::Ports::TenantPort: ../network/ports/tenant.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::ExternalPort: ../network/ports/external.yaml
  # Port assignments for the Compute
  OS::TripleO::Compute::Ports::J3MgmtPort: ../network/ports/j3mgmt.yaml
  OS::TripleO::Compute::Ports::TenantPort: ../network/ports/tenant.yaml
  OS::TripleO::Compute::Ports::InternalApiPort: ../network/ports/internal_api.yaml


>
> network-environment.yaml?
>

resource_registry:
  # Network Interface templates to use (these files must exist). You can
  # override these by including one of the net-*.yaml environment files,
  # such as net-bond-with-vlans.yaml, or modifying the list here.
  # Port assignments for the Controller
  OS::TripleO::Controller::Net::SoftwareConfig:
    ../network/config/bond-with-vlans/controller.yaml
  # Port assignments for the Compute
  OS::TripleO::Compute::Net::SoftwareConfig:
    ../network/config/bond-with-vlans/compute.yaml
parameter_defaults:

  J3MgmtNetCidr: '80.0.1.0/24'
  J3MgmtAllocationPools: [{'start': '80.0.1.4', 'end': '80.0.1.250'}]
  J3MgmtNetworkVlanID: 400

  TenantNetCidr: '172.16.0.0/24'
  TenantAllocationPools: [{'start': '172.16.0.4', 'end': '172.16.0.250'}]
  TenantNetworkVlanID: 416
  TenantNetPhysnetMtu: 1500

  InternalApiNetCidr: '172.16.2.0/24'
  InternalApiAllocationPools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
  InternalApiNetworkVlanID: 418

  ExternalNetCidr: '10.0.1.0/24'
  ExternalAllocationPools: [{'start': '10.0.1.85', 'end': '10.0.1.98'}]
  ExternalNetworkVlanID: 408

  DnsServers: []
  NeutronNetworkType: 'geneve,vlan'
  NeutronNetworkVLANRanges: 'datacentre:1:1000'
  BondInterfaceOvsOptions: "bond_mode=active-backup"


>
> I typically use:
> -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml
> -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml
> -e /home/stack/templates/environments/network-overrides.yaml
>
> The network-isolation.yaml and network-environment.yaml are Jinja2
> rendered based on the -n input, so to keep in sync with changes in the
> `-n` file, reference the files in
> /usr/share/openstack-tripleo-heat-templates. Then add overrides in
> network-overrides.yaml as needed.
>

[Loke]: we are using it in exactly that way. I am not sure what you pass in
network-overrides.yaml, but the other files I pass on the command line are
shown below:

[stack at undercloud templates]$ cat environment.yaml
parameter_defaults:
  ControllerCount: 3
  TimeZone: 'Asia/Kolkata'
  NtpServer: ['30.30.30.3']
  NeutronBridgeMappings: datacentre:br-ex,baremetal:br-baremetal
  NeutronFlatNetworks: datacentre,baremetal
[stack at undercloud templates]$ cat ironic-config.yaml
parameter_defaults:
    IronicEnabledHardwareTypes:
        - ipmi
        - redfish
    IronicEnabledPowerInterfaces:
        - ipmitool
        - redfish
    IronicEnabledManagementInterfaces:
        - ipmitool
        - redfish
    IronicCleaningDiskErase: metadata
    IronicIPXEEnabled: true
    IronicInspectorSubnets:
    - ip_range: 172.23.3.100,172.23.3.150
    IPAImageURLs: '["http://30.30.30.1:8088/agent.kernel", "http://30.30.30.1:8088/agent.ramdisk"]'
    IronicInspectorInterface: 'br-baremetal'
[stack at undercloud templates]$
[stack at undercloud templates]$ cat node-info.yaml
parameter_defaults:
  OvercloudControllerFlavor: control
  OvercloudComputeFlavor: compute
  ControllerCount: 3
  ComputeCount: 1
[stack at undercloud templates]$



>
> >      -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \
> >      -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
> >      -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \
> >      -e /home/stack/templates/ironic-config.yaml \
> >      -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
> >      -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
> >      -e /home/stack/containers-prepare-parameter.yaml
> >
> > */home/stack/templates/ironic-config.yaml*:
> > (overcloud) [stack at undercloud ~]$ cat /home/stack/templates/ironic-config.yaml
> > parameter_defaults:
> >      IronicEnabledHardwareTypes:
> >          - ipmi
> >          - redfish
> >      IronicEnabledPowerInterfaces:
> >          - ipmitool
> >          - redfish
> >      IronicEnabledManagementInterfaces:
> >          - ipmitool
> >          - redfish
> >      IronicCleaningDiskErase: metadata
> >      IronicIPXEEnabled: true
> >      IronicInspectorSubnets:
> >      - ip_range: 172.23.3.100,172.23.3.150
> >      IPAImageURLs: '["http://30.30.30.1:8088/agent.kernel", "http://30.30.30.1:8088/agent.ramdisk"]'
> >      IronicInspectorInterface: 'br-baremetal'
> >
> > Also the baremetal network(provisioning)(172.23.3.x)  is  routed with
> > ctlplane/admin network (30.30.30.x)
> >
>
> Unless the network you created in the overcloud is named `provisioning`,
> these parameters may be relevant.
>
> IronicCleaningNetwork:
>    default: 'provisioning'
>    description: Name or UUID of the *overcloud* network used for cleaning
>                 bare metal nodes. The default value of "provisioning" can be
>                 left during the initial deployment (when no networks are
>                 created yet) and should be changed to an actual UUID in
>                 a post-deployment stack update.
>    type: string
>
> IronicProvisioningNetwork:
>    default: 'provisioning'
>    description: Name or UUID of the *overcloud* network used for provisioning
>                 of bare metal nodes, if IronicDefaultNetworkInterface is
>                 set to "neutron". The default value of "provisioning" can be
>                 left during the initial deployment (when no networks are
>                 created yet) and should be changed to an actual UUID in
>                 a post-deployment stack update.
>    type: string
>
> IronicRescuingNetwork:
>    default: 'provisioning'
>    description: Name or UUID of the *overcloud* network used for rescuing
>                 of bare metal nodes, if IronicDefaultRescueInterface is not
>                 set to "no-rescue". The default value of "provisioning" can be
>                 left during the initial deployment (when no networks are
>                 created yet) and should be changed to an actual UUID in
>                 a post-deployment stack update.
>    type: string
>
> > *Query:*
> >
> >  1. any other location/way where we should add these so that they are
> >     included without error.
> >
> >         *ServiceNetMap:*
> >
> >         *    IronicApiNetwork: provisioning*
> >
> >         *    IronicNetwork: provisioning*
> >
>
> `provisioning` network is defined in -n
> /home/stack/templates/network_data.yaml right?

[Loke]: No, there is no entry for provisioning in this file; it only has
network entries for J3Mgmt, Tenant, InternalApi, and External. These
networks are added as VLANs under the br-ext bridge. The provisioning
network I create after the overcloud is deployed and before the baremetal
node is provisioned; in the provisioning network we give the range of the
ironic network (172.23.3.x).
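
For reference, we create it roughly like this after the deployment (a
sketch; the gateway and allocation pool shown here are illustrative, with
the pool kept outside the 172.23.3.100-150 range given to ironic-inspector):

# flat provider network on the 'baremetal' physnet from NeutronBridgeMappings
openstack network create --share \
  --provider-network-type flat \
  --provider-physical-network baremetal \
  provisioning
openstack subnet create \
  --network provisioning \
  --subnet-range 172.23.3.0/24 \
  --gateway 172.23.3.1 \
  --allocation-pool start=172.23.3.200,end=172.23.3.240 \
  provisioning-subnet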




> And an entry in
> 'networks' for the controller role in
> /home/stack/templates/roles_data.yaml?
>
[Loke]: we also did not add a similar entry in roles_data.yaml.

Just to add: with these two files we have rendered the remaining templates.
As mentioned, see the sketch below for what I assume such entries would look like.

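If those entries are what is missing, I assume they would look roughly like
the following (a sketch only; the allocation pool is a placeholder kept
outside the inspector range, and the exact keys would need to mirror the
existing entries in our rendered files):

network_data.yaml:

- name: Provisioning
  name_lower: provisioning
  vip: false
  ip_subnet: '172.23.3.0/24'
  allocation_pools: [{'start': '172.23.3.10', 'end': '172.23.3.90'}]

roles_data.yaml (Controller role):

  networks:
    Provisioning:
      subnet: provisioning_subnet
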



>
>
> >       2. Also, are these commands (mentioned above) to configure the
> >     Baremetal services fine?
> >
>
> Yes, what you are doing makes sense.
>
> I'm actually not sure why it didn't work with your previous
> configuration, it got the information about the NBP file and obviously
> attempted to download it from 30.30.30.220. With routing in place, that
> should work.
>
> Changing the ServiceNetMap to move the IronicNetwork services to the
> 172.23.3 network would avoid the routing.
>
[Loke]: we can try this, but so far we have not managed to get it working,
for reasons that are not yet clear to us.
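
To show how we are attempting it: the override goes under parameter_defaults
in one of the -e environment files, roughly like this (a sketch only; my
understanding is that the keys listed here get merged into the
ServiceNetMapDefaults, and that 'provisioning' has to match the name_lower
of a network the controller role actually has a port on):

parameter_defaults:
  ServiceNetMap:
    IronicApiNetwork: provisioning
    IronicNetwork: provisioning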

>
>
> What is NeutronBridgeMappings?
>   br-baremetal maps to the physical network of the overcloud
> `provisioning` neutron network?
>


> [Loke] : yes , we create br-barmetal and then we create provisioning
> network mapping it to br-baremetal.
>
> Also attaching the complete rendered template folder along with custom
yaml files that I am using, maybe referring that you might have a more
clear picture of our problem.
Any clue would help.
Our problem: we are not able to provision the baremetal node after the
overcloud is deployed.
If there is any straightforward document we can follow to test baremetal
provisioning, please point us to it.
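
In case it helps to see what we are attempting, the rough sequence we follow
after the overcloud is up is something like the below (a sketch based on the
ironic/openstack client docs; the BMC details, image and flavor names are
placeholders):

# enroll a node in the overcloud ironic
openstack baremetal node create --driver ipmi \
  --driver-info ipmi_address=<bmc-address> \
  --driver-info ipmi_username=<bmc-user> \
  --driver-info ipmi_password=<bmc-password> \
  --name test-bm-node
openstack baremetal port create <nic-mac-address> --node <node-uuid>

# move it to 'available' (cleaning runs on the provisioning network)
openstack baremetal node manage test-bm-node
openstack baremetal node provide test-bm-node

# deploy an instance on it over the 'provisioning' network
openstack server create --image <whole-disk-image> \
  --flavor <baremetal-flavor> --network provisioning test-bm-instance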

Thanks once again for reading through all of this.




> --
> Harald
>
>

-
skype: lokendrarathour

