[tripleo][ceph-ansible][ussuri][rdo][centos8] fails on ceph-ansible execution.
Hi all,

I am using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS 8 from the @centos-openstack-ussuri repo, but I get the error in [1]. I thought it was caused by a missing python executable, but I saw later, after adding verbosity 4, that it is able to find python3. The output on paste.openstack.org looks truncated and I am not sure what the issue was, so here is a second place [2] with the same full output.

[1] http://paste.openstack.org/show/i2XpSBiSVjuL69Ahm1sl/
[2] https://proxy.qwq.lt/ceph-ansible.html

-- Ruslanas Gžibovskis +370 6030 7030
In [2] I see: Error: Could not stat device /dev/vdb - No such file or directory.

/dev/vdb is the default and, per the logs, it doesn't exist on your HCI node. For your HCI node you need a block device (usually a dedicated disk) which can be configured as an OSD, and you need to pass its path as described in the following section of the docs (a minimal sketch of such an override follows the quoted message below):

https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features...

Also, ensure your disk is factory clean or the Ceph tools won't initialize it as an OSD. The easiest way to do this is to configure Ironic's automatic node cleaning.

John

On Mon, Sep 21, 2020 at 9:45 AM Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
Hi all,
using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS8 from @centos-openstack-ussuri repo.
but I get the error in [1]. I thought it was caused by a missing python executable, but I saw later, after adding verbosity 4, that it is able to find python3.
It looks like the output on paste.openstack.org is truncated; I am not sure what the issue was, but here is a second place [2] with the same full output
[1] http://paste.openstack.org/show/i2XpSBiSVjuL69Ahm1sl/ [2] https://proxy.qwq.lt/ceph-ansible.html
-- Ruslanas Gžibovskis +370 6030 7030
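To make the advice above concrete, here is a minimal sketch of the kind of override environment file being described, assuming the sdc/sde/sdd data disks mentioned later in the thread; the file name osd-disks.yaml is only a placeholder, not a file from the original deployment:

  # osd-disks.yaml (hypothetical name) -- pass it with -e AFTER
  # environments/ceph-ansible/ceph-ansible.yaml so the override wins.
  parameter_defaults:
    CephAnsibleDisksConfig:
      devices:
        - /dev/sdc
        - /dev/sde
        - /dev/sdd
      osd_scenario: lvm
      osd_objectstore: bluestore

journal_size is intentionally omitted in this sketch; as discussed further down the thread, it is deprecated and not meaningful for bluestore.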
Yes, I do not have vdb... I have sda sdb sdc sde sdd... and I believe it might have come from journal_size: 16384? Here is a part of the conf file:

  CephAnsibleDiskConfig:
    devices:
      - /dev/sdc
      - /dev/sde
      - /dev/sdd
    osd_scenario: lvm
    osd_objectstore: bluestore
    journal_size: 16384  # commented this out now

Yes, undercloud node cleaning is the first option I enable/configure in undercloud.conf ;) after that I configure IP addresses/subnets :)

On Mon, 21 Sep 2020 at 16:55, John Fulton <johfulto@redhat.com> wrote:
In [2] I see Error: Could not stat device /dev/vdb - No such file or directory.
/dev/vdb is the default and as per the logs it doesn't exist on your HCI node. For your HCI node you need to have a block device (usually a dedicated disk) which can be configured as an OSD and you need to pass the path to it as described in the following section of the doc.
https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features...
Also, ensure your disk is factory clean or the ceph tools won't initialize it as an OSD. The easiest way to do this is to configure ironic's automatic node cleaning.
John
On Mon, Sep 21, 2020 at 9:45 AM Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
Hi all,
using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS8 from
@centos-openstack-ussuri repo.
but I get the error in [1]. I thought it was caused by a missing python executable, but I saw later, after adding verbosity 4, that it is able to find python3.
It looks like the output on paste.openstack.org is truncated; I am not sure what the issue was, but here is a second place [2] with the same full output
[1] http://paste.openstack.org/show/i2XpSBiSVjuL69Ahm1sl/ [2] https://proxy.qwq.lt/ceph-ansible.html
-- Ruslanas Gžibovskis +370 6030 7030
-- Ruslanas Gžibovskis +370 6030 7030
On Mon, Sep 21, 2020 at 11:11 AM Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
Yes, I do not have vdb... I have sda sdb sdc sde sdd... and I believe it might have come from journal_size: 16384? Here is a part of the conf file:

  CephAnsibleDiskConfig:
    devices:
      - /dev/sdc
      - /dev/sde
      - /dev/sdd
    osd_scenario: lvm
    osd_objectstore: bluestore
    journal_size: 16384  # commented this out now
If you used the above, perhaps in foo.yaml, but got the error message you shared, then I suspect you are deploying with your parameters in the wrong order. You should use the following order:

  openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
    -e foo.yaml

If the order of arguments is such that foo.yaml precedes /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml, then the CephAnsibleDisksConfig from that default environment will override what was set in foo.yaml, and the deployment will instead use the default, which uses a disk you don't have.

Also, please do not use journal_size; it is deprecated and that parameter doesn't make sense for bluestore. As linked from the documentation, ceph-volume batch mode (https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/) should do the right thing if you modify the above (and just drop journal_size).

John
Yes, undercloud node cleaning is the first option I enable/configure in undercloud.conf ;) after that I configure IP addresses/subnets :)
On Mon, 21 Sep 2020 at 16:55, John Fulton <johfulto@redhat.com> wrote:
In [2] I see Error: Could not stat device /dev/vdb - No such file or directory.
/dev/vdb is the default and as per the logs it doesn't exist on your HCI node. For your HCI node you need to have a block device (usually a dedicated disk) which can be configured as an OSD and you need to pass the path to it as described in the following section of the doc.
https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features...
Also, ensure your disk is factory clean or the ceph tools won't initialize it as an OSD. The easiest way to do this is to configure ironic's automatic node cleaning.
John
On Mon, Sep 21, 2020 at 9:45 AM Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
Hi all,
using tripleo 13.4.0-1.el8 and ceph-ansible 4.0.25-1.el8 on CentOS8 from @centos-openstack-ussuri repo.
but I get the error in [1]. I thought it was caused by a missing python executable, but I saw later, after adding verbosity 4, that it is able to find python3.
It looks like the output on paste.openstack.org is truncated; I am not sure what the issue was, but here is a second place [2] with the same full output
[1] http://paste.openstack.org/show/i2XpSBiSVjuL69Ahm1sl/ [2] https://proxy.qwq.lt/ceph-ansible.html
-- Ruslanas Gžibovskis +370 6030 7030
-- Ruslanas Gžibovskis +370 6030 7030
Hmm, that looks like a good point; I even thought I might have forgotten to sort it. BUT I double-checked now, and my node-info.yaml is the last one... only network_data and roles_data come before the default configs:

  _THT="/usr/share/openstack-tripleo-heat-templates"
  _LTHT="$(pwd)"
  time openstack --verbose overcloud deploy \
    --force-postconfig --templates \
    --stack v3 \
    -r ${_LTHT}/roles_data.yaml \
    -n ${_LTHT}/network_data.yaml \
    -e ${_LTHT}/containers-prepare-parameter.yaml \
    -e ${_LTHT}/overcloud_images.yaml \
    -e ${_THT}/environments/disable-telemetry.yaml \
    -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \
    -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \
    -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \
    -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \
    -e ${_LTHT}/node-info.yaml \
    --ntp-server 8.8.8.8

All the config can be found here [1]. Meanwhile I will comment out my journal option.

[1] https://github.com/qw3r3wq/OSP-ussuri/blob/master/v3/node-info.yaml
Your config-download directory, per stack, will have a ceph-ansible subdirectory; check the devices list there. It will contain the result of your overrides. What devices are listed? If /dev/vdb is still listed, then something is breaking the expected override pattern. (A sketch of the expected content follows the quoted message below.)

John

On Mon, Sep 21, 2020 at 11:49 AM Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
Hmm,
looks like a good point, I even thought I forgot to sort it. BUT, I double checked now, and my node-info.yaml is the last one... only network_data and roles_data are above default configs:
_THT="/usr/share/openstack-tripleo-heat-templates"
_LTHT="$(pwd)"
time openstack --verbose overcloud deploy \
  --force-postconfig --templates \
  --stack v3 \
  -r ${_LTHT}/roles_data.yaml \
  -n ${_LTHT}/network_data.yaml \
  -e ${_LTHT}/containers-prepare-parameter.yaml \
  -e ${_LTHT}/overcloud_images.yaml \
  -e ${_THT}/environments/disable-telemetry.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \
  -e ${_LTHT}/node-info.yaml \
  --ntp-server 8.8.8.8
All the config can be found here [1].
meanwhile I will comment out my journal option.
[1] https://github.com/qw3r3wq/OSP-ussuri/blob/master/v3/node-info.yaml
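For comparison, if the CephAnsibleDisksConfig override had taken effect, the rendered group_vars file John refers to would be expected to look roughly like the sketch below (assuming the sdc/sde/sdd devices from the earlier config; the exact rendering may differ):

  # <config-download dir>/ceph-ansible/group_vars/osds.yml -- expected content (sketch)
  devices:
  - /dev/sdc
  - /dev/sde
  - /dev/sdd
  osd_objectstore: bluestore
  osd_scenario: lvm

The actual file shown in the next message still lists /dev/vdb, which is what John suspected.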
It's in ./external_deploy_steps_tasks.yaml, and:

  (undercloud) [stack@undercloudv3 v3]$ cat ./ceph-ansible/group_vars/osds.yml
  devices:
  - /dev/vdb
  osd_objectstore: bluestore
  osd_scenario: lvm
  (undercloud) [stack@undercloudv3 v3]$

And you ARE right. Thank you for helping me notice it: my list of devices (sdc, sde, sdd) is not there. I am clearing out and redeploying my OpenStack now, but node-info is always the last one. Maybe I should add it before and after, two times, just for fun? (Added; I will see how it goes.)

By the way, just a small note, but I believe it should not be a problem that I have a stack named v3, not overcloud... I believe it is OK, yes?
I have one thought.

  [stack@undercloudv3 v3]$ cat /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
  resource_registry:
    OS::TripleO::Services::CephMgr: ../../deployment/ceph-ansible/ceph-mgr.yaml
    OS::TripleO::Services::CephMon: ../../deployment/ceph-ansible/ceph-mon.yaml
    OS::TripleO::Services::CephOSD: ../../deployment/ceph-ansible/ceph-osd.yaml
    OS::TripleO::Services::CephClient: ../../deployment/ceph-ansible/ceph-client.yaml

  parameter_defaults:
    # Ensure that if user overrides CephAnsiblePlaybook via some env
    # file, we go back to default when they stop passing their env file.
    CephAnsiblePlaybook: ['default']

    CinderEnableIscsiBackend: false
    CinderEnableRbdBackend: true
    CinderBackupBackend: ceph
    NovaEnableRbdBackend: true
    GlanceBackend: rbd
    ## Uncomment below if enabling legacy telemetry
    # GnocchiBackend: rbd
  [stack@undercloudv3 v3]$

And my deploy has:

  -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \

These are generally the same files, BUT they are specified by the user, so it "might feel like" the user overrode the default settings?

Also, I am thinking about the things you helped me to find, John. And I recalled what I found strange: the NFS part. It was trying to configure CephNfs... should it, even though I do not have it specified? From the output [1], here is a small part of it:

  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml",
  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml",
  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml",

[1] https://proxy.qwq.lt/ceph-ansible.html
Also, another thing: ./ceph-ansible/group_vars/osds.yml looks like it has not been modified over the last re-deployments. I am deleting it again and removing config-download and everything from Swift... I do not like that it does not override everything, especially when launching a deployment when there is no stack (I mean on the undercloud host, as the overcloud nodes should be cleaned up by the undercloud).

Thank you, I will keep you updated.

On Mon, 21 Sep 2020 at 19:33, Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
I have one thought.
  [stack@undercloudv3 v3]$ cat /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
  resource_registry:
    OS::TripleO::Services::CephMgr: ../../deployment/ceph-ansible/ceph-mgr.yaml
    OS::TripleO::Services::CephMon: ../../deployment/ceph-ansible/ceph-mon.yaml
    OS::TripleO::Services::CephOSD: ../../deployment/ceph-ansible/ceph-osd.yaml
    OS::TripleO::Services::CephClient: ../../deployment/ceph-ansible/ceph-client.yaml

  parameter_defaults:
    # Ensure that if user overrides CephAnsiblePlaybook via some env
    # file, we go back to default when they stop passing their env file.
    CephAnsiblePlaybook: ['default']

    CinderEnableIscsiBackend: false
    CinderEnableRbdBackend: true
    CinderBackupBackend: ceph
    NovaEnableRbdBackend: true
    GlanceBackend: rbd
    ## Uncomment below if enabling legacy telemetry
    # GnocchiBackend: rbd
  [stack@undercloudv3 v3]$

And my deploy has:

  -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \

These are generally the same files, BUT they are specified by the user, so it "might feel like" the user overrode the default settings?

Also, I am thinking about the things you helped me to find, John. And I recalled what I found strange: the NFS part. It was trying to configure CephNfs... should it, even though I do not have it specified? From the output [1], here is a small part of it:

  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml",
  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml",
  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml",
-- Ruslanas Gžibovskis +370 6030 7030
On Mon, Sep 21, 2020 at 1:05 PM Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
Also, another thing: ./ceph-ansible/group_vars/osds.yml looks like it has not been modified over the last re-deployments. I am deleting it again and removing config-download and everything from Swift...
The tripleo-ansible role tripleo_ceph_work_dir will manage that directory for you (recreate it when needed to reflect what is in Heat). It is run when config-download is run. https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansi...
I do not like that it does not override everything... especially when launching a deployment when there is no stack (I mean on the undercloud host, as the overcloud nodes should be cleaned up by the undercloud).
If there is no stack, the stack will be created when you deploy and config-download's directory of playbooks will also be recreated. You shouldn't need to worry about cleaning up the existing config-download directory. You can, but you don't have to. https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployme... John
Thank you, will keep updated.
On Mon, 21 Sep 2020 at 19:33, Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
I have one thought.
  [stack@undercloudv3 v3]$ cat /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
  resource_registry:
    OS::TripleO::Services::CephMgr: ../../deployment/ceph-ansible/ceph-mgr.yaml
    OS::TripleO::Services::CephMon: ../../deployment/ceph-ansible/ceph-mon.yaml
    OS::TripleO::Services::CephOSD: ../../deployment/ceph-ansible/ceph-osd.yaml
    OS::TripleO::Services::CephClient: ../../deployment/ceph-ansible/ceph-client.yaml

  parameter_defaults:
    # Ensure that if user overrides CephAnsiblePlaybook via some env
    # file, we go back to default when they stop passing their env file.
    CephAnsiblePlaybook: ['default']

    CinderEnableIscsiBackend: false
    CinderEnableRbdBackend: true
    CinderBackupBackend: ceph
    NovaEnableRbdBackend: true
    GlanceBackend: rbd
    ## Uncomment below if enabling legacy telemetry
    # GnocchiBackend: rbd
  [stack@undercloudv3 v3]$

And my deploy has:

  -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \

These are generally the same files, BUT they are specified by the user, so it "might feel like" the user overrode the default settings?

Also, I am thinking about the things you helped me to find, John. And I recalled what I found strange: the NFS part. It was trying to configure CephNfs... should it, even though I do not have it specified? From the output [1], here is a small part of it:

  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml",
  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml",
  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml",
-- Ruslanas Gžibovskis +370 6030 7030
Just wanted to share a few observations from your https://github.com/qw3r3wq/OSP-ussuri/blob/master/v3/node-info.yaml

1. Your mon_max_pg_per_osd should be closer to 100 or 200. You have it set at 4k:

  CephConfigOverrides:
    global:
      mon_max_pg_per_osd: 4096

Maybe you set this to work around https://ceph.com/community/new-luminous-pg-overdose-protection/ but this is not a good way to do it for any production data. This check was added to avoid setting this value too high, so working around it increases the chances that you hit the problems the check was made to avoid. I assume this is just a test cluster (1 mon), but I wanted to let you know.

2. Replicas: if you only have one OSD node you need to set "CephPoolDefaultSize: 1" (that should help you with the PG overdose issue too).

3. Metrics pool: if you're deploying with telemetry disabled then you don't need a metrics pool.

4. Backend overrides: you shouldn't need GlanceBackend: rbd, GnocchiBackend: rbd, or NovaEnableRbdBackend: true, as those get set by default by the ceph-ansible env file we've been talking about.

5. DistributedComputeHCICount role: this role is meant to be used with distributed compute nodes which don't run in the same stack as the controller node; they are meant to be used as described in [1]. I think the ComputeHCI role would be a better one to deploy in the same stack as the Controller. I'm not saying you can't do this, but it doesn't look like you're using the role for what it was designed for, so I at least wanted to point that out. (A sketch reflecting points 1, 2 and 4 follows the quoted exchange below.)

[1] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features...

John

On Mon, Sep 21, 2020 at 1:29 PM John Fulton <johfulto@redhat.com> wrote:
On Mon, Sep 21, 2020 at 1:05 PM Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
Also, another thing: ./ceph-ansible/group_vars/osds.yml looks like it has not been modified over the last re-deployments. I am deleting it again and removing config-download and everything from Swift...
The tripleo-ansible role tripleo_ceph_work_dir will manage that directory for you (recreate it when needed to reflect what is in Heat). It is run when config-download is run.
https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansi...
I do not like that it does not override everything... especially when launching a deployment when there is no stack (I mean on the undercloud host, as the overcloud nodes should be cleaned up by the undercloud).
If there is no stack, the stack will be created when you deploy and config-download's directory of playbooks will also be recreated. You shouldn't need to worry about cleaning up the existing config-download directory. You can, but you don't have to.
https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployme...
John
Thank you, will keep updated.
On Mon, 21 Sep 2020 at 19:33, Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
I have one thought.
  [stack@undercloudv3 v3]$ cat /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
  resource_registry:
    OS::TripleO::Services::CephMgr: ../../deployment/ceph-ansible/ceph-mgr.yaml
    OS::TripleO::Services::CephMon: ../../deployment/ceph-ansible/ceph-mon.yaml
    OS::TripleO::Services::CephOSD: ../../deployment/ceph-ansible/ceph-osd.yaml
    OS::TripleO::Services::CephClient: ../../deployment/ceph-ansible/ceph-client.yaml

  parameter_defaults:
    # Ensure that if user overrides CephAnsiblePlaybook via some env
    # file, we go back to default when they stop passing their env file.
    CephAnsiblePlaybook: ['default']

    CinderEnableIscsiBackend: false
    CinderEnableRbdBackend: true
    CinderBackupBackend: ceph
    NovaEnableRbdBackend: true
    GlanceBackend: rbd
    ## Uncomment below if enabling legacy telemetry
    # GnocchiBackend: rbd
  [stack@undercloudv3 v3]$

And my deploy has:

  -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \

These are generally the same files, BUT they are specified by the user, so it "might feel like" the user overrode the default settings?

Also, I am thinking about the things you helped me to find, John. And I recalled what I found strange: the NFS part. It was trying to configure CephNfs... should it, even though I do not have it specified? From the output [1], here is a small part of it:

  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml",
  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml",
  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml",
-- Ruslanas Gžibovskis +370 6030 7030
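To make points 1, 2 and 4 above concrete, here is a rough sketch of how the Ceph-related parameter_defaults in node-info.yaml might look after trimming; the values are illustrative for a single-OSD-node test cluster, not production guidance:

  parameter_defaults:
    # Point 2: with only one OSD node, a replica count of 1 is required.
    CephPoolDefaultSize: 1
    # Point 1: keep mon_max_pg_per_osd near 100-200 instead of 4096, and size
    # pools so the PG-overdose check passes rather than being bypassed.
    CephConfigOverrides:
      global:
        mon_max_pg_per_osd: 200
    # Point 4: GlanceBackend, NovaEnableRbdBackend and GnocchiBackend are
    # omitted here because the ceph-ansible environment file already sets them.
    # Point 5: when deploying HCI nodes in the same stack as the Controller,
    # the ComputeHCI role (ComputeHCICount) may be a better fit than
    # DistributedComputeHCICount.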
On Mon, Sep 21, 2020 at 12:34 PM Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
I have one thought.
  [stack@undercloudv3 v3]$ cat /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
  resource_registry:
    OS::TripleO::Services::CephMgr: ../../deployment/ceph-ansible/ceph-mgr.yaml
    OS::TripleO::Services::CephMon: ../../deployment/ceph-ansible/ceph-mon.yaml
    OS::TripleO::Services::CephOSD: ../../deployment/ceph-ansible/ceph-osd.yaml
    OS::TripleO::Services::CephClient: ../../deployment/ceph-ansible/ceph-client.yaml

  parameter_defaults:
    # Ensure that if user overrides CephAnsiblePlaybook via some env
    # file, we go back to default when they stop passing their env file.
    CephAnsiblePlaybook: ['default']

    CinderEnableIscsiBackend: false
    CinderEnableRbdBackend: true
    CinderBackupBackend: ceph
    NovaEnableRbdBackend: true
    GlanceBackend: rbd
    ## Uncomment below if enabling legacy telemetry
    # GnocchiBackend: rbd
  [stack@undercloudv3 v3]$

And my deploy has:

  -e ${_THT}/environments/ceph-ansible/ceph-ansible.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-rgw.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-mds.yaml \
  -e ${_THT}/environments/ceph-ansible/ceph-dashboard.yaml \
The above is normal. Looks like you're using it as expected.
These are generally the same files, BUT they are specified by the user, so it "might feel like" the user overrode the default settings?
I assume that ${_THT} refers to /usr/share/openstack-tripleo-heat-templates. I don't recommend editing the THT shipped with TripleO. If it has been modified then I recommend restoring it to the original from the RPM.
Also, I am thinking about the things you helped me to find, John. And I recalled what I found strange: the NFS part. It was trying to configure CephNfs... should it, even though I do not have it specified? From the output [1], here is a small part of it:

  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml",
  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml",
  "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml",

Those roles will be used if you're also trying to configure manila:
https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features...

It will try to get the OSDs running first, however, and that is failing due to the vdb issue in your log below ([1]).

John
On Mon, Sep 21, 2020 at 12:12 PM Ruslanas Gžibovskis <ruslanas@lpic.lt> wrote:
It's in ./external_deploy_steps_tasks.yaml, and:

  (undercloud) [stack@undercloudv3 v3]$ cat ./ceph-ansible/group_vars/osds.yml
  devices:
  - /dev/vdb
  osd_objectstore: bluestore
  osd_scenario: lvm
  (undercloud) [stack@undercloudv3 v3]$
And you ARE right. Thank you for helping me notice it: my list of devices (sdc, sde, sdd) is not there.
I am clearing out and redeploying my OpenStack now, but node-info is always the last one. Maybe I should add it before and after, two times, just for fun? (Added; I will see how it goes.)
Then, for whatever reason, in the series of overrides the default CephAnsibleDisksConfig devices list is getting used and not your overrides. I'm very confident the override order works correctly if the templates are in the right order. I recommend simplifying by removing templates and then adding in only what you need in iterative layers. Your node overrides look complex.
By the way, just a small note: I believe it should not be a problem that I have a stack named v3, not overcloud... I believe it is OK, yes?
Yes, you can call the stack whatever you like by using the --stack option. John
Participants (2):
- John Fulton
- Ruslanas Gžibovskis