[Ussuri] [openstack-ansible] [cinder] Can't attach volumes to instances

Gorka Eguileor geguileo at redhat.com
Fri Oct 9 09:30:04 UTC 2020


On 07/10, Oliver Wenz wrote:
> Hi,
> I've deployed OpenStack successfully using openstack-ansible. I use
> cinder with LVM backend and can create volumes. However, when I attach
> them to an instance, they stay detached (though there's no Error
> Message) both using CLI and the Dashboard.
>
> Looking for a solution I read that the cinder logs might contain
> relevant information but in Ussuri they don't seem to be present under
> /var/log/cinder...
>
> Here's the part of my openstack_user_config.yml regarding Cinder:
>
> ```
> storage_hosts:
>    lvm-storage1:
>      ip: 192.168.110.202
>      container_vars:
>        cinder_backends:
>          lvm:
>            volume_backend_name: LVM_iSCSI
>            volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
>            volume_group: cinder-volumes
>            iscsi_ip_address: 10.0.3.202
>          limit_container_types: cinder_volume
> ```
>
> I've created cinder-volumes with vgcreate before the installation and
> all cinder services are up:
>
> # openstack volume service list
> +------------------+--------------------------------------+------+---------+-------+----------------------------+
> | Binary           | Host                                 | Zone | Status  | State | Updated At                 |
> +------------------+--------------------------------------+------+---------+-------+----------------------------+
> | cinder-backup    | bc1bl10                              | nova | enabled | up    | 2020-10-07T11:24:10.000000 |
> | cinder-volume    | bc1bl10@lvm                          | nova | enabled | up    | 2020-10-07T11:24:05.000000 |
> | cinder-scheduler | infra1-cinder-api-container-1dacc920 | nova | enabled | up    | 2020-10-07T11:24:05.000000 |
> +------------------+--------------------------------------+------+---------+-------+----------------------------+
>

Hi,

The iscsi_ip_address configuration option was removed from Cinder a long
time ago; the replacement is target_ip_address (I don't know whether the
playbook maps the old name to the new one or not).
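
If the role really does pass the option through verbatim, the fix may be
as simple as renaming that one key, keeping everything else from your
existing config:

```yaml
storage_hosts:
   lvm-storage1:
     ip: 192.168.110.202
     container_vars:
       cinder_backends:
         lvm:
           volume_backend_name: LVM_iSCSI
           volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
           volume_group: cinder-volumes
           # New name for the removed iscsi_ip_address option:
           target_ip_address: 10.0.3.202
         limit_container_types: cinder_volume
```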

I recommend running the attach request with the --debug flag to get the
request id; that way you can easily track the request and see where it
failed.
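
For example (hypothetical server/volume names and request id below; the
real values come from your cloud), the request id appears in the
X-Openstack-Request-Id response header that --debug prints for each API
call, and can be pulled out with a grep like this:

```shell
# With a real cloud you would run:
#   openstack --debug server add volume my-instance my-volume 2>&1 \
#     | grep -io 'req-[0-9a-f-]*'
# Here we grep a sample debug line to show what the extraction looks like:
sample='RESP HEADER: X-Openstack-Request-Id: req-3f2a9c1e-1234-4d5e-9abc-0123456789ab'
echo "$sample" | grep -io 'req-[0-9a-f-]*'
```

That req-... id can then be grepped for across the cinder and nova logs.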

Then check the logs, as Dmitriy mentioned, to see where things failed.

It can fail on:

- cinder-volume: if it cannot map the volume (unlikely)
- nova-compute: on os-brick, so you'll have a traceback

It's important that the target_ip_address is reachable from the Nova
compute node through the interface that holds the IP defined as my_ip in
nova.conf.

Assuming iscsi_ip_address is not doing anything, the LVM driver will
probably fall back to the address defined in ip (192.168.110.202).

If my_ip is not defined in nova.conf, you can see the default Nova would
use by running this on that compute node:

  python -c 'from oslo_utils import netutils; print(netutils.get_my_ipv4())'

So make sure the IP Cinder reports is actually reachable from that
interface.
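
For reference, netutils.get_my_ipv4() boils down to asking the kernel
which source address it would route outbound traffic from; a rough
stdlib-only sketch (the fallback behaviour here is my simplification,
not oslo's exact logic):

```python
import socket

def my_ipv4():
    # Connect a UDP socket "towards" a public address; no packets are
    # actually sent, but the kernel picks the source IP it would route
    # from, which getsockname() then reveals.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        # No usable route; fall back to loopback so we return *something*.
        return "127.0.0.1"
    finally:
        s.close()

print(my_ipv4())
```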

Personally, I wouldn't bother with all that; I would just set debug log
levels in cinder-volume and check the initialize_connection call in the
logs: the parameters of the entry call show what Nova is sending (the IP
it is going to be connecting from), and the return value shows the IP of
the iSCSI target.
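
Enabling debug logging is just a cinder.conf change on the volume node
(restart the cinder-volume service afterwards):

```ini
[DEFAULT]
# DEBUG-level logging makes the initialize_connection parameters and
# return values show up in the cinder-volume log.
debug = True
```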

Hope that helps.

Cheers,
Gorka.

>
> Thanks in advance!
>
> Kind regards,
> Oliver
>




More information about the openstack-discuss mailing list