[Openstack] multipath - EMC vs NetApp
Takashi Natsume
natsume.takashi at lab.ntt.co.jp
Wed Nov 27 01:43:20 UTC 2013
Hello Xing,
Thank you for your reply.
You wrote:
> In the logs, you got a login failure. I'm wondering if the
> iscsi_ip_address in cinder.conf has the correct value. It should be the
> iSCSI IP address from SPA or SPB.
We set iscsi_ip_address to the IP address of one of the SP ports,
specifically the A-4v0 port's IP address.
> The code in cinder.brick.initiator.connector should work for VNX. It uses
> one iSCSI target IP address to do iscsiadm discovery and that should
> return multiple IP addresses. If not, we need to look at your configuration.
'iscsiadm discovery' does return multiple IP addresses,
so cinder-volume tried to log in to each of them.
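
For reference, a sendtargets discovery against that SP port returns multiple
portals, each paired with the IQN of its own SP port, roughly as in the
sketch below (the addresses and per-port IQN suffixes here are illustrative
only, masked in the same way as the rest of this mail):

  # iscsiadm -m discovery -t sendtargets -p XX.XX.XX.11
  XX.XX.XX.11:3260,1 iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a4
  XX.XX.XX.12:3260,2 iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a5
  XX.XX.XX.15:3260,5 iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.b4
  XX.XX.XX.16:3260,6 iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.b5

Because only the single IQN from the connection information is reused for
every portal, the login succeeds only on the portal that actually serves
that IQN and fails on all the others, as in the log quoted below.
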
> Let me contact the account manager for NTT to set up a conference call
> with you to get this resolved quickly.
We will get in touch with EMC Japan's account manager for our company.
Regards,
Takashi Natsume
NTT Software Innovation Center
Tel: +81-422-59-4399
E-mail: natsume.takashi at lab.ntt.co.jp
> -----Original Message-----
> From: yang, xing [mailto:xing.yang at emc.com]
> Sent: Wednesday, November 27, 2013 2:07 AM
> To: Takashi Natsume; openstack at lists.openstack.org
> Subject: RE: [Openstack] multipath - EMC vs NetApp
>
> Hi Takashi,
>
> In the logs, you got a login failure. I'm wondering if the
> iscsi_ip_address in cinder.conf has the correct value. It should be the
> iSCSI IP address from SPA or SPB.
>
> The code in cinder.brick.initiator.connector should work for VNX. It uses
> one iSCSI target IP address to do iscsiadm discovery and that should
> return multiple IP addresses. If not, we need to look at your configuration.
>
> Let me contact the account manager for NTT to set up a conference call
> with you to get this resolved quickly.
>
> Thanks,
> Xing
>
>
>
> -----Original Message-----
> From: Takashi Natsume [mailto:natsume.takashi at lab.ntt.co.jp]
> Sent: Tuesday, November 26, 2013 3:52 AM
> To: yang, xing; openstack at lists.openstack.org
> Subject: RE: [Openstack] multipath - EMC vs NetApp
>
> Hello Xing,
>
> Thank you for your reply.
>
> You wrote:
> > What OpenStack release are you running? iSCSI multipath support was
> > added very late in the Grizzly release and there were patches regarding
> > multipath added even after the Grizzly release. EMC VNX5300 should work
> > with iSCSI multipath with Grizzly and beyond.
>
> We tested with 'master' code
> (nova:1dca95a74788667e52cab664c8a1dd942222d9c8
> and cinder:9b599d092ffa168a73ab7fa98ff20cb2cb48fe0b) last month, but
> iSCSI multipath could not be configured.
>
> Our configuration was as follows.
> (IP addresses, passwords, prefixes are masked.)
>
> ----------------------------------------------------------------
> - /etc/nova/nova.conf
> volume_api_class = nova.volume.cinder.API
> libvirt_iscsi_use_multipath = True
>
> - /etc/cinder/cinder.conf
> use_multipath_for_image_xfer = true
> iscsi_target_prefix = iqn.1992-04.com.emc:cx.fcn00XXXXXXXX
> iscsi_ip_address = XX.XX.XX.11
> volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
> cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
>
> - /etc/cinder/cinder_emc_config.xml
>
> <?xml version='1.0' encoding='UTF-8'?>
> <EMC>
> <StorageType>Pool0</StorageType>
> <EcomServerIp>XX.XX.XX.XX</EcomServerIp>
> <EcomServerPort>5988</EcomServerPort>
> <EcomUserName>admin</EcomUserName>
> <EcomPassword>xxxxxxxx</EcomPassword>
> </EMC>
>
> - cinder-volume host
> The python-pywbem package (version 0.7.0-4) was installed on the
> cinder-volume host.
>
> - SMI-S Provider
> EMC SMI-S Provider V4.6.1.1 was set up on RHEL 6.3 and configured by
> following the SMI-S Provider release notes.
>
> - VNX5300
> Thin Provisioning enabler was installed.
> Thin pool 'Pool0' was created.
>
> And "Register with VNX" by Unisphere
> (See
> http://docs.openstack.org/grizzly/openstack-block-storage/admin/conten
> t/emc-
> smis-iscsi-driver.html)
> ----------------------------------------------------------------
>
> When copy_image_to_volume was executed, the following log was output.
> (Target IQN and portal IP addresses are masked.)
>
> ---
> 2013-10-31 11:27:43.322 26080 DEBUG cinder.brick.initiator.connector
> [req-18a3433e-561b-4f04-b3be-b5d5956780f9
> 59b7221e2f8041e39f643520b78ba533 964613d2565a4a069da9658e44b544ae]
> iscsiadm ('--login',): stdout=Logging in to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal:
> XX.XX.XX.13,3260]
> Login to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6,
> portal: XX.XX.XX.13,3260]: successful
> 2013-10-31 11:27:45.680 26080 DEBUG cinder.brick.initiator.connector
> [req-18a3433e-561b-4f04-b3be-b5d5956780f9
> 59b7221e2f8041e39f643520b78ba533 964613d2565a4a069da9658e44b544ae]
> iscsiadm ('--login',): stdout=Logging in to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal:
> XX.XX.XX.17,3260]
> stderr=iscsiadm: Could not login to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal: XX.XX.XX.17,3260]:
> 2013-10-31 11:27:47.945 26080 DEBUG cinder.brick.initiator.connector
> [req-18a3433e-561b-4f04-b3be-b5d5956780f9
> 59b7221e2f8041e39f643520b78ba533 964613d2565a4a069da9658e44b544ae]
> iscsiadm ('--login',): stdout=Logging in to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal:
> XX.XX.XX.12,3260]
> stderr=iscsiadm: Could not login to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal: XX.XX.XX.12,3260]:
> 2013-10-31 11:27:50.144 26080 DEBUG cinder.brick.initiator.connector
> [req-18a3433e-561b-4f04-b3be-b5d5956780f9
> 59b7221e2f8041e39f643520b78ba533 964613d2565a4a069da9658e44b544ae]
> iscsiadm ('--login',): stdout=Logging in to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal:
> XX.XX.XX.15,3260]
> stderr=iscsiadm: Could not login to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal: XX.XX.XX.15,3260]:
> 2013-10-31 11:27:52.374 26080 DEBUG cinder.brick.initiator.connector
> [req-18a3433e-561b-4f04-b3be-b5d5956780f9
> 59b7221e2f8041e39f643520b78ba533 964613d2565a4a069da9658e44b544ae]
> iscsiadm ('--login',): stdout=Logging in to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal:
> XX.XX.XX.11,3260]
> stderr=iscsiadm: Could not login to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal: XX.XX.XX.11,3260]:
> 2013-10-31 11:27:54.621 26080 DEBUG cinder.brick.initiator.connector
> [req-18a3433e-561b-4f04-b3be-b5d5956780f9
> 59b7221e2f8041e39f643520b78ba533 964613d2565a4a069da9658e44b544ae]
> iscsiadm ('--login',): stdout=Logging in to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal:
> XX.XX.XX.16,3260]
> stderr=iscsiadm: Could not login to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal: XX.XX.XX.16,3260]:
> 2013-10-31 11:27:56.914 26080 DEBUG cinder.brick.initiator.connector
> [req-18a3433e-561b-4f04-b3be-b5d5956780f9
> 59b7221e2f8041e39f643520b78ba533 964613d2565a4a069da9658e44b544ae]
> iscsiadm ('--login',): stdout=Logging in to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal:
> XX.XX.XX.18,3260]
> stderr=iscsiadm: Could not login to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal: XX.XX.XX.18,3260]:
> 2013-10-31 11:27:59.174 26080 DEBUG cinder.brick.initiator.connector
> [req-18a3433e-561b-4f04-b3be-b5d5956780f9
> 59b7221e2f8041e39f643520b78ba533 964613d2565a4a069da9658e44b544ae]
> iscsiadm ('--login',): stdout=Logging in to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal:
> XX.XX.XX.14,3260]
> stderr=iscsiadm: Could not login to [iface: default, target:
> iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6, portal: XX.XX.XX.14,3260]:
> ---
>
> In cinder.brick.initiator.connector, a single iSCSI target (IQN) is used
> to log in to multiple (different) portal IP addresses.
> The VNX5300, however, has multiple iSCSI ports, each with a different
> iSCSI target (IQN), so normally a different target IQN should be used for
> each IP address.
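>
> To make the mismatch concrete, the logins currently being issued have the
> form (masked values; the per-port IQN suffixes below are illustrative only):
>
>   iscsiadm -m node -T iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6 -p XX.XX.XX.17:3260 --login
>   iscsiadm -m node -T iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a6 -p XX.XX.XX.12:3260 --login
>
> whereas on the VNX each portal would have to be logged in to with the IQN
> of its own port, for example:
>
>   iscsiadm -m node -T iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.a7 -p XX.XX.XX.17:3260 --login
>   iscsiadm -m node -T iqn.1992-04.com.emc:cx.fcn00XXXXXXXX.b6 -p XX.XX.XX.12:3260 --login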
>
> If our settings were wrong, could you please let us know what the correct
> settings are?
>
> Regards,
> Takashi Natsume
> NTT Software Innovation Center
> Tel: +81-422-59-4399
> E-mail: natsume.takashi at lab.ntt.co.jp
>
> > -----Original Message-----
> > From: yang, xing [mailto:xing.yang at emc.com]
> > Sent: Tuesday, November 26, 2013 12:12 AM
> > To: Takashi Natsume; openstack at lists.openstack.org
> > Subject: RE: [Openstack] multipath - EMC vs NetApp
> >
> > Hi Takashi,
> >
> > What OpenStack release are you running? iSCSI multipath support was
> > added very late in the Grizzly release and there were patches regarding
> > multipath added even after the Grizzly release. EMC VNX5300 should
> > work with iSCSI multipath with Grizzly and beyond.
> >
> > Thanks,
> > Xing
> >
> >
> >
> > -----Original Message-----
> > From: Takashi Natsume [mailto:natsume.takashi at lab.ntt.co.jp]
> > Sent: Monday, November 25, 2013 6:07 AM
> > To: yang, xing; openstack at lists.openstack.org
> > Subject: RE: [Openstack] multipath - EMC vs NetApp
> >
> > Hello Xing and all,
> >
> > We tried to set up iSCSI multipath with an EMC VNX5300, but ran into
> > some problems. We analyzed the logs and source code and found that
> > nova-compute (and cinder-volume as well) uses a single iSCSI
> > target (IQN) to log in to multiple (different) portal IP addresses.
> > The VNX5300, however, has multiple iSCSI ports, each with a different
> > iSCSI target (IQN), so normally a different target IQN should be used
> > for each IP address.
> >
> > So we realized that iSCSI multipath cannot be configured with the VNX5300,
> > but we would like to use iSCSI multipath with the EMC VNX5300.
> >
> > Regards,
> > Takashi Natsume
> > NTT Software Innovation Center
> > Tel: +81-422-59-4399
> > E-mail: natsume.takashi at lab.ntt.co.jp
> >
> > From: yang, xing [mailto:xing.yang at emc.com]
> > Sent: Friday, November 22, 2013 7:25 AM
> > To: Carlos Alvarez; openstack at lists.openstack.org
> > Subject: Re: [Openstack] multipath - EMC vs NetApp
> >
> > Hi Carlos,
> >
> > We are working on this issue and will keep you informed.
> >
> > Thanks,
> > Xing
> >
> >
> > From: Carlos Alvarez [mailto:cbalvarez at gmail.com]
> > Sent: Monday, November 18, 2013 1:19 PM
> > To: openstack at lists.openstack.org
> > Subject: [Openstack] multipath - EMC vs NetApp
> >
> > Hi All.
> >
> > I recently added an EMC V-Max storage system and realized that multipath
> > is not working. The device is called /dev/mapper/XXXX, but when I look at
> > the multipath -ll output, I see just one path. It is working fine with a
> > NetApp 3250.
> >
> > Looking into the differences, I see that the output of iscsiadm discovery
> > differs:
> >
> > Netapp output:
> > root@grizzly-dev04:~# iscsiadm -m discovery -t sendtargets -p 10.33.5.10
> > 10.33.5.10:3260,1030 iqn.1992-08.com.netapp:sn.0e3c22d9f2ea11e2a2f2123478563412:vs.10
> > 10.33.5.11:3260,1031 iqn.1992-08.com.netapp:sn.0e3c22d9f2ea11e2a2f2123478563412:vs.10
> >
> > Emc output:
> > root@grizzly-dev04:~# iscsiadm -m discovery -t sendtargets -p 10.33.5.25
> > 10.33.5.25:3260,1 iqn.1992-04.com.emc:50000973f00bcd44
> >
> >
> > Looking into the code, the former is clearly what the connect_volume
> > method expects: a single IP that returns both paths. I reported it to
> > EMC, and the answer was that it works with NetApp because the 3250 has a
> > feature EMC lacks (called multipath groups).
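> >
> > As a quick sanity check on the compute node (commands only, output omitted
> > here), the iSCSI sessions and the paths that were actually set up can be
> > listed with:
> >
> > root@grizzly-dev04:~# iscsiadm -m session
> > root@grizzly-dev04:~# multipath -ll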
> >
> > Is anybody using multipath with a V-Max? Should it work, or is EMC not
> > supported?
> >
> > Thanks!
> > Carlos
> >
> >
>
>
>