[Cinder][NFS][Openstack-Ansible] Cinder-Volume Mess - Containers and Metal by Accident

Dave Hall kdhall@binghamton.edu
Sun Aug 7 14:11:27 UTC 2022


Hello,

Please pardon the repost - I noticed this morning that I didn't finish the
subject line.

Problem summary: I have a bunch of lingering, non-functional Cinder service
definitions, and I'm looking for guidance on how to clean them up.

Thanks.

-Dave

--
Dave Hall
Binghamton University
kdhall@binghamton.edu



On Sat, Aug 6, 2022 at 2:52 PM Dave Hall <kdhall@binghamton.edu> wrote:

> Hello,
>
> I seem to have gotten myself into a bit of a mess trying to set up Cinder
> with an NFS back-end.  After working with Glance and NFS, I started on
> Cinder.  I noticed immediately that there weren't any NFS mounts in the
> Cinder-API containers like there were in the Glance-API containers, and
> also that no NFS packages were installed in the Cinder-API containers.
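>
> For what it's worth, here is roughly how I checked, from one of the infra
> hosts (the container-name placeholder is mine):
>
>   lxc-ls -f | grep cinder
>   lxc-attach -n <cinder-api-container> -- mount -t nfs,nfs4
>   lxc-attach -n <cinder-api-container> -- dpkg -l nfs-common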
>
> In reading some Cinder documentation, I also got the impression that each
> Cinder host/container needs to have its own NFS store.
>
> Pawing through the playbooks and documentation, I saw that, unlike Glance,
> Cinder is split into two pieces: Cinder-API and Cinder-Volume.  I found
> cinder-volume.yml.example in env.d, activated it, and created Cinder-Volume
> containers on my 3 infra hosts.  I also created 3 separate NFS shares and
> changed the storage_hosts section of my openstack_user_config.yml
> accordingly.
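>
> For reference, the override I activated amounts to roughly the following
> (a sketch from memory rather than a verbatim copy of the example file):
>
>   # /etc/openstack_deploy/env.d/cinder-volume.yml
>   container_skel:
>     cinder_volumes_container:
>       properties:
>         is_metal: false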
>
> After this I found that while I was able to create volumes, the
> prep_volume part of launching an instance was failing.
>
> Digging in, I found:
>
> # openstack volume service list
>
> +------------------+-------------------------------------------------------+------+---------+-------+----------------------------+
> | Binary           | Host                                                  | Zone | Status  | State | Updated At                 |
> +------------------+-------------------------------------------------------+------+---------+-------+----------------------------+
> | cinder-volume    | C6220-9@nfs_volume                                    | nova | enabled | down  | 2022-07-23T02:46:13.000000 |
> | cinder-volume    | C6220-10@nfs_volume                                   | nova | enabled | down  | 2022-07-23T02:46:14.000000 |
> | cinder-volume    | C6220-11@nfs_volume                                   | nova | enabled | down  | 2022-07-23T02:46:14.000000 |
> | cinder-scheduler | infra36-cinder-api-container-da8e100f                 | nova | enabled | up    | 2022-08-06T13:29:10.000000 |
> | cinder-scheduler | infra38-cinder-api-container-27219f93                 | nova | enabled | up    | 2022-08-06T13:29:10.000000 |
> | cinder-scheduler | infra37-cinder-api-container-ea7f847b                 | nova | enabled | up    | 2022-08-06T13:29:10.000000 |
> | cinder-volume    | C6220-9@nfs_volume1                                   | nova | enabled | up    | 2022-08-06T13:29:10.000000 |
> | cinder-volume    | infra37-cinder-volumes-container-5b9635ad@nfs_volume  | nova | enabled | down  | 2022-08-04T18:32:53.000000 |
> | cinder-volume    | infra36-cinder-volumes-container-77190057@nfs_volume1 | nova | enabled | down  | 2022-08-06T13:03:03.000000 |
> | cinder-volume    | infra38-cinder-volumes-container-a7bcfc9b@nfs_volume  | nova | enabled | down  | 2022-08-04T18:32:53.000000 |
> | cinder-volume    | infra37-cinder-volumes-container-5b9635ad@nfs_volume2 | nova | enabled | down  | 2022-08-06T13:03:05.000000 |
> | cinder-volume    | C6220-10@nfs_volume2                                  | nova | enabled | up    | 2022-08-06T13:29:10.000000 |
> | cinder-volume    | C6220-11@nfs_volume3                                  | nova | enabled | up    | 2022-08-06T13:29:10.000000 |
> | cinder-volume    | infra38-cinder-volumes-container-a7bcfc9b@nfs_volume3 | nova | enabled | down  | 2022-08-06T13:03:03.000000 |
> +------------------+-------------------------------------------------------+------+---------+-------+----------------------------+
>
> Thinking I could save this, I used containers-lxc-destroy.yml to destroy
> my cinder-volumes containers and deactivated cinder-volume.yml.example.
> Then I ran setup-hosts.yml, which restored the cinder-volumes containers
> even though the is_metal: false override had been removed.
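>
> I suspect they keep coming back because the containers are still recorded
> in /etc/openstack_deploy/openstack_inventory.json.  If I understand the
> tooling correctly, something like the following should drop them from the
> inventory, though I haven't dared to run it yet:
>
>   # run from the openstack-ansible checkout on the deploy host
>   ./scripts/inventory-manage.py -l    # list current inventory entries
>   ./scripts/inventory-manage.py -r infra36-cinder-volumes-container-77190057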
>
> Clearly a stronger intervention will be required.  I would like to fully
> get rid of the cinder-volumes containers and go back to an is_metal: true
> scenario.  I also need to get rid of the unnumbered nfs_volume references,
> which I assume are recorded in some Cinder config file or database.
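>
> If it is actually the database, my guess is that something like the
> following, run from inside a cinder-api container, would remove the stale
> service rows - but that is only a guess:
>
>   cinder-manage service remove cinder-volume infra37-cinder-volumes-container-5b9635ad@nfs_volume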
>
> Below is a clip from my openstack_user_config.yml:
>
> storage_hosts:
>   infra36:
>     ip: 172.29.236.36
>     container_vars:
>       cinder_backends:
>         nfs_volume1:
>           volume_backend_name: NFS_VOLUME1
>           volume_driver: cinder.volume.drivers.nfs.NfsDriver
>           nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
>           nfs_shares_config: /etc/cinder/nfs_shares_volume1
>           shares:
>           - { ip: "172.29.244.27", share: "/NFS_VOLUME1" }
>   infra37:
>     ip: 172.29.236.37
>     container_vars:
>       cinder_backends:
>         nfs_volume2:
>           volume_backend_name: NFS_VOLUME2
>           volume_driver: cinder.volume.drivers.nfs.NfsDriver
>           nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
>           nfs_shares_config: /etc/cinder/nfs_shares_volume2
>           shares:
>           - { ip: "172.29.244.27", share: "/NFS_VOLUME2" }
>   infra38:
>     ip: 172.29.236.38
>     container_vars:
>       cinder_backends:
>         nfs_volume3:
>           volume_backend_name: NFS_VOLUME3
>           volume_driver: cinder.volume.drivers.nfs.NfsDriver
>           nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
>           nfs_shares_config: /etc/cinder/nfs_shares_volume3
>           shares:
>           - { ip: "172.29.244.27", share: "/NFS_VOLUME3" }
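>
> If I'm reading the os_cinder role correctly, on infra36 the above should
> render into cinder.conf roughly as follows, with the shares list written
> out to the nfs_shares_config file (my reading, not verified line by line):
>
>   [nfs_volume1]
>   volume_backend_name = NFS_VOLUME1
>   volume_driver = cinder.volume.drivers.nfs.NfsDriver
>   nfs_mount_options = rsize=65535,wsize=65535,timeo=1200,actimeo=120
>   nfs_shares_config = /etc/cinder/nfs_shares_volume1
>
>   # /etc/cinder/nfs_shares_volume1 - one export per line
>   172.29.244.27:/NFS_VOLUME1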
>
> Any advice would be greatly appreciated.
>
> Thanks.
>
> -Dave
>
> --
> Dave Hall
> Binghamton University
> kdhall@binghamton.edu
>
>