[ironic][tripleo][ussuri] Problem with bare metal provisioning and old RAID controllers

Marco Marino marino.mrc@gmail.com
Tue Aug 4 10:57:13 UTC 2020


Here is what I did:
# /usr/lib/dracut/skipcpio /home/stack/images/ironic-python-agent.initramfs \
    | zcat | cpio -ivd | pax -r
# mount dd-megaraid_sas-07.710.50.00-1.el8_2.elrepo.iso /mnt/
# rpm2cpio /mnt/rpms/x86_64/kmod-megaraid_sas-07.710.50.00-1.el8_2.elrepo.x86_64.rpm \
    | pax -r
# find . 2>/dev/null | cpio --quiet -c -o | gzip -8 \
    > /home/stack/images/ironic-python-agent.initramfs
# chown stack: /home/stack/images/ironic-python-agent.initramfs
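(As a side note, my own addition to the documented steps: the repacked archive
can be sanity-checked before uploading, e.g.

# gzip -t /home/stack/images/ironic-python-agent.initramfs
# zcat /home/stack/images/ironic-python-agent.initramfs | cpio -it | head

both of which should complete without gzip/cpio errors.)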
(undercloud) [stack@undercloud ~]$ openstack overcloud image upload \
    --update-existing --image-path /home/stack/images/

At this point I checked that agent.ramdisk in /var/lib/ironic/httpboot has
an updated timestamp.
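For reference, the check was along these lines (the mtime should match the
repack above):

(undercloud) [stack@undercloud ~]$ ls -l /var/lib/ironic/httpboot/agent.ramdisk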

Then:
(undercloud) [stack@undercloud ~]$ openstack overcloud node introspect \
    --provide controller2
/usr/lib64/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't
resolve package from __spec__ or __package__, falling back on __name__ and
__path__
  return f(*args, **kwds)

PLAY [Baremetal Introspection for multiple Ironic Nodes]
***********************
2020-08-04 12:32:26.684368 | ecf4bbd2-e605-20dd-3da9-000000000008 |
TASK | Check for required inputs
2020-08-04 12:32:26.739797 | ecf4bbd2-e605-20dd-3da9-000000000008 |
 SKIPPED | Check for required inputs | localhost | item=node_uuids
2020-08-04 12:32:26.746684 | ecf4bbd2-e605-20dd-3da9-00000000000a |
TASK | Set node_uuids_intro fact
[WARNING]: Failure using method (v2_playbook_on_task_start) in callback
plugin
(<ansible.plugins.callback.tripleo_profile_tasks.CallbackModule object at
0x7f1b0f9bce80>): maximum recursion depth exceeded while calling a Python
object
2020-08-04 12:32:26.828985 | ecf4bbd2-e605-20dd-3da9-00000000000a |
OK | Set node_uuids_intro fact | localhost
2020-08-04 12:32:26.834281 | ecf4bbd2-e605-20dd-3da9-00000000000c |
TASK | Notice
2020-08-04 12:32:26.911106 | ecf4bbd2-e605-20dd-3da9-00000000000c |
 SKIPPED | Notice | localhost
2020-08-04 12:32:26.916344 | ecf4bbd2-e605-20dd-3da9-00000000000e |
TASK | Set concurrency fact
2020-08-04 12:32:26.994087 | ecf4bbd2-e605-20dd-3da9-00000000000e |
OK | Set concurrency fact | localhost
2020-08-04 12:32:27.005932 | ecf4bbd2-e605-20dd-3da9-000000000010 |
TASK | Check if validation enabled
2020-08-04 12:32:27.116425 | ecf4bbd2-e605-20dd-3da9-000000000010 |
 SKIPPED | Check if validation enabled | localhost
2020-08-04 12:32:27.129120 | ecf4bbd2-e605-20dd-3da9-000000000011 |
TASK | Run Validations
2020-08-04 12:32:27.239850 | ecf4bbd2-e605-20dd-3da9-000000000011 |
 SKIPPED | Run Validations | localhost
2020-08-04 12:32:27.251796 | ecf4bbd2-e605-20dd-3da9-000000000012 |
TASK | Fail if validations are disabled
2020-08-04 12:32:27.362050 | ecf4bbd2-e605-20dd-3da9-000000000012 |
 SKIPPED | Fail if validations are disabled | localhost
2020-08-04 12:32:27.373947 | ecf4bbd2-e605-20dd-3da9-000000000014 |
TASK | Start baremetal introspection


2020-08-04 12:48:19.944028 | ecf4bbd2-e605-20dd-3da9-000000000014 |
 CHANGED | Start baremetal introspection | localhost
2020-08-04 12:48:19.966517 | ecf4bbd2-e605-20dd-3da9-000000000015 |
TASK | Nodes that passed introspection
2020-08-04 12:48:20.130913 | ecf4bbd2-e605-20dd-3da9-000000000015 |
OK | Nodes that passed introspection | localhost | result={
    "changed": false,
    "msg": " 00c5e81b-1e5d-442b-b64f-597a604051f7"
}
2020-08-04 12:48:20.142919 | ecf4bbd2-e605-20dd-3da9-000000000016 |
TASK | Nodes that failed introspection
2020-08-04 12:48:20.305004 | ecf4bbd2-e605-20dd-3da9-000000000016 |
OK | Nodes that failed introspection | localhost | result={
    "changed": false,
    "failed_when_result": false,
    "msg": " All nodes completed introspection successfully!"
}
2020-08-04 12:48:20.316860 | ecf4bbd2-e605-20dd-3da9-000000000017 |
TASK | Node introspection failed and no results are provided
2020-08-04 12:48:20.427675 | ecf4bbd2-e605-20dd-3da9-000000000017 |
 SKIPPED | Node introspection failed and no results are provided | localhost

PLAY RECAP
*********************************************************************
localhost                  : ok=5    changed=1    unreachable=0    failed=0
   skipped=6    rescued=0    ignored=0
[WARNING]: Failure using method (v2_playbook_on_stats) in callback plugin
(<ansible.plugins.callback.tripleo_profile_tasks.CallbackModule object at
0x7f1b0f9bce80>): _output() missing 1 required positional argument: 'color'
Successfully introspected nodes: ['controller2']
Exception occured while running the command
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ansible_runner/runner_config.py",
line 340, in prepare_command
    cmdline_args = self.loader.load_file('args', string_types,
encoding=None)
  File "/usr/lib/python3.6/site-packages/ansible_runner/loader.py", line
164, in load_file
    contents = parsed_data = self.get_contents(path)
  File "/usr/lib/python3.6/site-packages/ansible_runner/loader.py", line
98, in get_contents
    raise ConfigurationError('specified path does not exist %s' % path)
ansible_runner.exceptions.ConfigurationError: specified path does not exist
/tmp/tripleop89yr8i8/args

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line
34, in run
    super(Command, self).run(parsed_args)
  File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line
41, in run
    return super(Command, self).run(parsed_args)
  File "/usr/lib/python3.6/site-packages/cliff/command.py", line 187, in run
    return_code = self.take_action(parsed_args) or 0
  File
"/usr/lib/python3.6/site-packages/tripleoclient/v2/overcloud_node.py", line
210, in take_action
    node_uuids=parsed_args.node_uuids,
  File
"/usr/lib/python3.6/site-packages/tripleoclient/workflows/baremetal.py",
line 134, in provide
    'node_uuids': node_uuids
  File "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 659,
in run_ansible_playbook
    runner_config.prepare()
  File "/usr/lib/python3.6/site-packages/ansible_runner/runner_config.py",
line 174, in prepare
    self.prepare_command()
  File "/usr/lib/python3.6/site-packages/ansible_runner/runner_config.py",
line 346, in prepare_command
    self.command = self.generate_ansible_command()
  File "/usr/lib/python3.6/site-packages/ansible_runner/runner_config.py",
line 415, in generate_ansible_command
    v = 'v' * self.verbosity
TypeError: can't multiply sequence by non-int of type 'ClientManager'
can't multiply sequence by non-int of type 'ClientManager'
(undercloud) [stack@undercloud ~]$


and:
(undercloud) [stack@undercloud ~]$ openstack baremetal node show controller2
....
| properties             | {'local_gb': '0', 'cpus': '24', 'cpu_arch': 'x86_64', 'memory_mb': '32768', 'capabilities': 'cpu_vt:true,cpu_aes:true,cpu_hugepages:true,cpu_hugepages_1g:true,cpu_txt:true'} |


It seems that the megaraid driver is correctly included in the ramdisk:
# lsinitrd /var/lib/ironic/httpboot/agent.ramdisk | grep  megaraid
/bin/lsinitrd: line 276: warning: command substitution: ignored null byte in input
-rw-r--r--   1 root     root           50 Apr 28 21:55 etc/depmod.d/kmod-megaraid_sas.conf
drwxr-xr-x   2 root     root            0 Aug  4 12:13 usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/kernel/drivers/scsi/megaraid
-rw-r--r--   1 root     root        68240 Aug  4 12:13 usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/kernel/drivers/scsi/megaraid/megaraid_sas.ko.xz
drwxr-xr-x   2 root     root            0 Apr 28 21:55 usr/lib/modules/4.18.0-193.el8.x86_64/extra/megaraid_sas
-rw-r--r--   1 root     root       309505 Apr 28 21:55 usr/lib/modules/4.18.0-193.el8.x86_64/extra/megaraid_sas/megaraid_sas.ko
drwxr-xr-x   2 root     root            0 Apr 28 21:55 usr/share/doc/kmod-megaraid_sas-07.710.50.00
-rw-r--r--   1 root     root        18092 Apr 28 21:55 usr/share/doc/kmod-megaraid_sas-07.710.50.00/GPL-v2.0.txt
-rw-r--r--   1 root     root         1152 Apr 28 21:55 usr/share/doc/kmod-megaraid_sas-07.710.50.00/greylist.txt
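One thing I have not verified is whether the module dependency data inside the
ramdisk was refreshed for the running kernel (rpm2cpio only extracts the
payload, it does not run depmod). Something along these lines should show
whether the new module is referenced (a sketch; lsinitrd -f prints a single
file from the image):

# lsinitrd -f usr/lib/modules/4.18.0-193.6.3.el8_2.x86_64/modules.dep \
    /var/lib/ironic/httpboot/agent.ramdisk | grep megaraid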

If the solution is to use a CentOS 7 ramdisk, could you please give me some
hints? I have no idea how to build a new ramdisk from scratch.
Thank you

On Tue, Aug 4, 2020 at 12:33 PM Dmitry Tantsur <dtantsur@redhat.com> wrote:

> Hi,
>
> On Tue, Aug 4, 2020 at 11:58 AM Marco Marino <marino.mrc@gmail.com> wrote:
>
>> Hi, I'm trying to install OpenStack Ussuri on CentOS 8 hardware using
>> TripleO. I'm using relatively old hardware (a Dell PowerEdge R620) with old
>> RAID controllers that are deprecated in RHEL 8/CentOS 8. Here is some basic
>> information:
>> # lspci | grep -i raid
>> 00:1f.2 RAID bus controller: Intel Corporation C600/X79 series chipset
>> SATA RAID Controller (rev 05)
>> 02:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS 2008 [Falcon]
>> (rev 03)
>>
>> I'm able to install CentOS 8 manually using the DUD driver from here ->
>> https://elrepo.org/linux/dud/el8/x86_64/dd-megaraid_sas-07.710.50.00-1.el8_2.elrepo.iso
>> (basically I add inst.dd to the kernel command line and use a USB pen
>> drive with the ISO).
>> Is there a way to do bare metal provisioning with OpenStack on this kind
>> of server? At the moment, when I launch "openstack overcloud node
>> introspect --provide controller1", it doesn't recognize the disks
>> (local_gb = 0 in properties), and in the inspector logs I see:
>> Jun 22 11:12:42 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:42.261 1543 DEBUG root [-] Still waiting for the root
>> device to appear, attempt 1 of 10 wait_for_disks
>> /usr/lib/python3.6/site-packages/ironic_python_agent/hardware.py:652
>> Jun 22 11:12:45 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:45.299 1543 DEBUG oslo_concurrency.processutils [-]
>> Running cmd (subprocess): udevadm settle execute
>> /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
>> Jun 22 11:12:45 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:45.357 1543 DEBUG oslo_concurrency.processutils [-] CMD
>> "udevadm settle" returned: 0 in 0.058s execute
>> /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
>> Jun 22 11:12:45 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:45.392 1543 DEBUG ironic_lib.utils [-] Execution
>> completed, command line is "udevadm settle" execute
>> /usr/lib/python3.6/site-packages/ironic_lib/utils.py:101
>> Jun 22 11:12:45 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:45.426 1543 DEBUG ironic_lib.utils [-] Command stdout is:
>> "" execute /usr/lib/python3.6/site-packages/ironic_lib/utils.py:103
>> Jun 22 11:12:45 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:45.460 1543 DEBUG ironic_lib.utils [-] Command stderr is:
>> "" execute /usr/lib/python3.6/site-packages/ironic_lib/utils.py:104
>> Jun 22 11:12:45 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:45.496 1543 WARNING root [-] Path /dev/disk/by-path is
>> inaccessible, /dev/disk/by-path/* version of block device name is
>> unavailable Cause: [Errno 2] No such file or directory:
>> '/dev/disk/by-path': FileNotFoundError: [Errno 2] No such file or
>> directory: '/dev/disk/by-path'
>> Jun 22 11:12:45 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:45.549 1543 DEBUG oslo_concurrency.processutils [-]
>> Running cmd (subprocess): lsblk -Pbia -oKNAME,MODEL,SIZE,ROTA,TYPE execute
>> /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
>> Jun 22 11:12:45 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:45.647 1543 DEBUG oslo_concurrency.processutils [-] CMD
>> "lsblk -Pbia -oKNAME,MODEL,SIZE,ROTA,TYPE" returned: 0 in 0.097s execute
>> /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
>> Jun 22 11:12:45 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:45.683 1543 DEBUG ironic_lib.utils [-] Execution
>> completed, command line is "lsblk -Pbia -oKNAME,MODEL,SIZE,ROTA,TYPE"
>> execute /usr/lib/python3.6/site-packages/ironic_lib/utils.py:101
>> Jun 22 11:12:45 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:45.719 1543 DEBUG ironic_lib.utils [-] Command stdout is:
>> "" execute /usr/lib/python3.6/site-packages/ironic_lib/utils.py:103
>> Jun 22 11:12:45 localhost.localdomain ironic-python-agent[1543]:
>> 2018-06-22 11:12:45.755 1543 DEBUG ironic_lib.utils [-] Command stderr is:
>> "" execute /usr/lib/python3.6/site-packages/ironic_lib/utils.py:104
>>
>> Is there a way to solve the issue? For example, can I modify the ramdisk
>> and include the DUD driver? I tried this guide:
>> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/partner_integration/overcloud_images#initrd_modifying_the_initial_ramdisks
>> but I don't know how to include an ISO instead of an RPM package as
>> described in the example.
>>
>
> Indeed, I don't think you can use the ISO as it is; you'll need to figure
> out what is inside. If it's an RPM (as I assume), you'll need to extract it
> and install it into the ramdisk.
>
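> Something along these lines should get at the RPM (a sketch; the
> rpms/x86_64 layout is what the elrepo DUD images use, adjust as needed):
>
>     mount -o loop dd-megaraid_sas-07.710.50.00-1.el8_2.elrepo.iso /mnt
>     rpm2cpio /mnt/rpms/x86_64/kmod-megaraid_sas-*.rpm | pax -r
>     umount /mnt
>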
> If nothing helps, you can try building a ramdisk with CentOS 7; the (very)
> recent versions of ironic-python-agent-builder allow using Python 3 on
> CentOS 7.
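>
> For example, something along these lines (an untested sketch: -o sets the
> output file base name, -r the release, and if I remember correctly
> DIB_PYTHON_VERSION=3 is what requests Python 3 there):
>
>     DIB_PYTHON_VERSION=3 ironic-python-agent-builder -o ipa-centos7 -r 7 centos
>
> which should leave ipa-centos7.kernel and ipa-centos7.initramfs to upload
> in place of the stock images.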
>
> Dmitry
>
>
>> Thank you,
>> Marco
>>
>>