[Openstack] iSCSI target IP address

Andrew Mann andrew at divvycloud.com
Wed Jul 30 16:17:04 UTC 2014


In /etc/cinder.conf on the host that serves your Cinder volumes, try
adding:

iscsi_ip_address=192.168.100.32

under the [DEFAULT] section, I think.
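
For reference, a minimal sketch of the relevant cinder.conf snippet
(assuming 192.168.100.32 is the address on your storage network; restart
the cinder-volume service after changing it):

[DEFAULT]
# IP address that newly created iSCSI targets will be advertised on
iscsi_ip_address = 192.168.100.32

New volumes should then point at the storage-network portal, which you
can check from a compute node with something like:

iscsiadm -m discovery -t sendtargets -p 192.168.100.32:3260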

/usr/lib/python2.7/dist-packages/cinder/volume/driver.py lists these
configuration options:


volume_opts = [
    cfg.IntOpt('num_shell_tries',
               default=3,
               help='number of times to attempt to run flakey shell commands'),
    cfg.IntOpt('reserved_percentage',
               default=0,
               help='The percentage of backend capacity is reserved'),
    cfg.IntOpt('iscsi_num_targets',
               default=100,
               help='The maximum number of iscsi target ids per host'),
    cfg.StrOpt('iscsi_target_prefix',
               default='iqn.2010-10.org.openstack:',
               help='prefix for iscsi volumes'),
    cfg.StrOpt('iscsi_ip_address',
               default='$my_ip',
               help='The IP address that the iSCSI daemon is listening on'),
    cfg.IntOpt('iscsi_port',
               default=3260,
               help='The port that the iSCSI daemon is listening on'),
    cfg.IntOpt('num_volume_device_scan_tries',
               deprecated_name='num_iscsi_scan_tries',
               default=3,
               help='The maximum number of times to rescan targets'
                    ' to find volume'),
    cfg.StrOpt('volume_backend_name',
               default=None,
               help='The backend name for a given driver implementation'),
    cfg.BoolOpt('use_multipath_for_image_xfer',
                default=False,
                help='Do we attach/detach volumes in cinder using multipath '
                     'for volume to image and image to volume transfers?'),
    cfg.StrOpt('volume_clear',
               default='zero',
               help='Method used to wipe old volumes (valid options are: '
                    'none, zero, shred)'),
    cfg.IntOpt('volume_clear_size',
               default=0,
               help='Size in MiB to wipe at start of old volumes. 0 => all'),
    cfg.StrOpt('volume_clear_ionice',
               default=None,
               help='The flag to pass to ionice to alter the i/o priority '
                    'of the process used to zero a volume after deletion, '
                    'for example "-c3" for idle only priority.'),
    cfg.StrOpt('iscsi_helper',
               default='tgtadm',
               help='iscsi target user-land tool to use'),
    cfg.StrOpt('volumes_dir',
               default='$state_path/volumes',
               help='Volume configuration file storage '
               'directory'),
    cfg.StrOpt('iet_conf',
               default='/etc/iet/ietd.conf',
               help='IET configuration file'),
    cfg.StrOpt('lio_initiator_iqns',
               default='',
               help=('Comma-separated list of initiator IQNs '
                     'allowed to connect to the '
                     'iSCSI target. (From Nova compute nodes.)')),
    cfg.StrOpt('iscsi_iotype',
               default='fileio',
               help=('Sets the behavior of the iSCSI target '
                     'to either perform blockio or fileio '
                     'optionally, auto can be set and Cinder '
                     'will autodetect type of backing device')),
    cfg.StrOpt('volume_dd_blocksize',
               default='1M',
               help='The default block size used when copying/clearing '
                    'volumes'),
]
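
The '$my_ip' default on iscsi_ip_address above is an oslo.config template
substitution: if you don't set iscsi_ip_address explicitly, it falls back
to whatever my_ip resolves to (typically the host's management address,
which would explain the 10.0.2.15 portal in your trace). A rough,
self-contained illustration of that mechanism (the my_ip default below is
just a placeholder; Cinder normally detects it itself):

from oslo.config import cfg  # newer oslo releases use: from oslo_config import cfg

opts = [
    cfg.StrOpt('my_ip',
               default='10.0.2.15',  # placeholder for illustration only
               help='IP address of this host'),
    cfg.StrOpt('iscsi_ip_address',
               default='$my_ip',
               help='The IP address that the iSCSI daemon is listening on'),
]

CONF = cfg.CONF
CONF.register_opts(opts)  # no group given, so these land in [DEFAULT]

# '$my_ip' is interpolated when the option is read, so this prints
# 10.0.2.15 unless iscsi_ip_address is overridden in the config file.
print(CONF.iscsi_ip_address)

So setting iscsi_ip_address in [DEFAULT] overrides that fallback, and new
volume targets will be advertised on the storage-network address instead.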





On Wed, Jul 30, 2014 at 10:26 AM, Olivier Cant <olivier.cant at exxoss.com>
wrote:

> Hi,
>
> I'm having a bit of trouble with my OpenStack setup (Icehouse) in our lab
> when I try to launch a new instance.
>
> In nova I get the following error :
>
> 2014-07-30 14:19:15.780 13528 TRACE nova.compute.manager [instance:
> 6df99b40-0171-4a52-b362-832d1556c589] Command: sudo nova-rootwrap
> /etc/nova/rootwrap.conf iscsiadm -m node -T
> iqn.2010-10.org.openstack:volume-00bc4992-dd1e-405f-884a-d38224838a86
> -p 10.0.2.15:3260 --rescan
> 2014-07-30 14:19:15.780 13528 TRACE nova.compute.manager [instance:
> 6df99b40-0171-4a52-b362-832d1556c589] Exit code: 21
> 2014-07-30 14:19:15.780 13528 TRACE nova.compute.manager [instance:
> 6df99b40-0171-4a52-b362-832d1556c589] Stdout: ''
> 2014-07-30 14:19:15.780 13528 TRACE nova.compute.manager [instance:
> 6df99b40-0171-4a52-b362-832d1556c589] Stderr: 'iscsiadm: No session
> found.\n'
>
> I tried to run iSCSI discovery to the cinder node and I can't connect to
> it.
>
> The cinder node has two interfaces (10.0.2.15 and 192.168.100.32) on two
> separate networks.  10.0.2.0/24 is our management network and hosts
> aren't allowed to talk to each other on that network (only to a management
> host), so no iSCSI session can be established over that link.
>
> The volume is correctly created on the target node, and tgtadm shows the
> correct configuration.
> If I try to establish an iSCSI session with the target at 192.168.100.31,
> it succeeds.
>
> My question is: how can I specify the IP address of the target that should
> be used? (My guess would be in nova.conf, but I'm quite new to OpenStack,
> so I might be wrong, and I haven't found any such option in nova.conf.)
> Can someone describe how OpenStack "discovers" the IP address of the iSCSI
> provider and how it behaves if the provider has several interfaces?
>
> Thank you already for your help.
>
> Olivier
>
>



-- 
Andrew Mann
DivvyCloud Inc.
www.divvycloud.com