[openstack][stein][cinder] iscsi multipath issues

Ignazio Cassano ignaziocassano at gmail.com
Wed Nov 18 11:26:38 UTC 2020


Hello Everyone,
I am testing Stein with the Cinder iSCSI driver for Unity.
After driving myself crazy with it on Queens, I decided to try Stein.
At the moment I have only one virtual machine with a single iSCSI volume.

It was running on podiscsivc-kvm02 and I migrated it to podiscsivc-kvm01.

Let me show what multipath -ll displays on the source node podiscsivc-kvm02
after the live migration:

36006016006e04400dce5b45f0ac77301 dm-3 DGC     ,VRAID
size=40G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| `- 17:0:0:60 sdm 8:192 failed faulty running
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 19:0:0:60 sdk 8:160 failed faulty running
  `- 21:0:0:60 sdj 8:144 failed faulty running

And now on the destination node podiscsivc-kvm01:
36006016006e04400dce5b45f0ac77301 dm-3 DGC     ,VRAID
size=40G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 17:0:0:14 sdm 8:192 active ready running
| `- 15:0:0:14 sdl 8:176 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 21:0:0:14 sdj 8:144 active ready running
  `- 19:0:0:14 sdk 8:160 active ready running
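To spot which maps have leftover paths without reading the output by eye, a
small parser over the multipath -ll text can help. This is just a sketch of
mine (not part of any OpenStack tooling); the regexes assume the exact layout
shown above:

```python
import re

def failed_paths(multipath_ll_output):
    """Return {wwid: [devnode, ...]} for paths in 'failed faulty' state.

    Sketch only: assumes the 'multipath -ll' layout shown above.
    """
    result = {}
    wwid = None
    for line in multipath_ll_output.splitlines():
        # A map header starts with the WWID, e.g. "360060160... dm-3 DGC ,VRAID"
        m = re.match(r'^([0-9a-f]{10,})\s+dm-\d+', line)
        if m:
            wwid = m.group(1)
            continue
        # Path lines look like "| `- 17:0:0:60 sdm 8:192 failed faulty running"
        pm = re.search(r'\d+:\d+:\d+:\d+\s+(\S+)\s+\d+:\d+\s+(\w+)', line)
        if pm and wwid and pm.group(2) == 'failed':
            result.setdefault(wwid, []).append(pm.group(1))
    return result
```

Run against the source-node output above, it reports sdm, sdk and sdj as
failed for that WWID.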

On the source node, /var/log/messages shows:
Nov 18 10:34:19 podiscsivc-kvm02 multipathd: 36006016006e04400dce5b45f0ac77301: failed in domap for addition of new path sdl
Nov 18 10:34:19 podiscsivc-kvm02 multipathd: 36006016006e04400dce5b45f0ac77301: uev_add_path sleep
Nov 18 10:34:20 podiscsivc-kvm02 multipathd: 36006016006e04400dce5b45f0ac77301: failed in domap for addition of new path sdl
Nov 18 10:34:20 podiscsivc-kvm02 kernel: device-mapper: table: 253:3: multipath: error getting device
Nov 18 10:34:20 podiscsivc-kvm02 kernel: device-mapper: ioctl: error adding target to table
Nov 18 10:34:20 podiscsivc-kvm02 multipathd: 36006016006e04400dce5b45f0ac77301: uev_add_path sleep

On the storage side it seems to work fine, because the host access migrated
from the source to the destination node.
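To clean up the stale devices on the source node by hand, something like the
following should work. It is only a sketch: the function builds the usual
cleanup commands (flush buffers, drop the stale SCSI device via sysfs, then
flush the multipath map) and returns them instead of executing anything, so
they can be reviewed before running as root:

```python
def cleanup_commands(wwid, failed_devs):
    """Build shell commands to drop stale paths and flush a multipath map.

    Sketch only: commands are returned, not executed; review before running.
    """
    cmds = []
    for dev in failed_devs:
        # Flush any buffered I/O for the stale device
        cmds.append('blockdev --flushbufs /dev/%s' % dev)
        # Ask the kernel to forget the stale SCSI device
        cmds.append('echo 1 > /sys/block/%s/device/delete' % dev)
    # Finally flush the now-pathless multipath map
    cmds.append('multipath -f %s' % wwid)
    return cmds
```

This does not answer why the paths leak in the first place, but it keeps the
node clean between migrations.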

My /etc/multipath.conf is the following:

blacklist {
    # Skip the files under /dev that are definitely not FC/iSCSI devices
    # Different systems may need different customizations
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"

    # Skip LUNZ device from VNX/Unity
    device {
        vendor "DGC"
        product "LUNZ"
    }
}

defaults {
    user_friendly_names no
    flush_on_last_del yes
    remove_retries 12
    skip_kpartx yes
}

devices {
    # Device attributed for EMC CLARiiON and VNX/Unity series ALUA
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker emc_clariion
        features "0"
        no_path_retry 12
        hardware_handler "1 alua"
        prio alua
        failback immediate
    }
}

Why does it leave devices in the failed faulty running state?
Is this correct behaviour?
Every time I migrate my single instance from one node to another, the number
of files under /dev/disk/by-path increases by 4 (the number of paths used for
the Unity storage) on the destination node.
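To quantify that leak I count the by-path symlinks for the LUN before and
after each migration. A sketch (the directory is a parameter only so it can be
tried outside /dev; it assumes the usual udev names ending in "-lun-<N>"):

```python
import os

def count_by_path_entries(lun, by_path_dir='/dev/disk/by-path'):
    """Count by-path entries for a given LUN number (string).

    Sketch only: assumes udev symlink names ending in '-lun-<N>'.
    """
    suffix = '-lun-' + lun
    return sum(1 for name in os.listdir(by_path_dir) if name.endswith(suffix))
```

After a healthy attach I would expect exactly 4 entries for the LUN here; each
migration adds another 4 that never go away.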


I contacted Dell support and they said it is not a problem of their Cinder
driver but that it could be related to Nova.
Please help me!
I must decide whether to acquire NFS storage or iSCSI storage.
On the Cinder driver compatibility matrix, most vendors are iSCSI.
