Re: [kolla-ansible][cinder] MPIO - how to set "storage_interface" when using two NICs for storage connection
Hi again,

I found some information about MPIO here (https://docs.oracle.com/cd/E78305_01/E78304/html/setup-cinder.html):

"By default, the Cinder block storage service uses the iSCSI protocol to connect instances to volumes. The iscsid container runs on compute nodes and handles the iSCSI session using an iSCSI initiator name that is generated when you deploy Nova compute services. The Nova compute service supports iSCSI multipath for failover purposes and increased performance. Multipath is enabled by default (configured with the enable_multipathd property). When multipath is enabled, the iSCSI initiator (the Compute node) is able to obtain a list of addresses from the storage node that the initiator can use as multiple paths to the iSCSI LUN."

But still - how should I set storage_interface if I have two NICs for the storage connection (let's say eth4 and eth5)? storage_interface=eth4,eth5?

Best regards
Adam Tomas
Hi,

I'd like to use MPIO for the connection to the storage array, so I have two physical NICs in different VLANs on each node and also 4 NICs in the storage array (2 in each storage VLAN), so it should result in 4 MPIO paths from every node (2 paths in each VLAN). But how do I set this up in kolla-ansible? Can I point to more than one interface using the storage_interface variable? Or should I configure it in a different way?
Hi Adam,

The ``storage_interface`` variable is only used to set the default of ``swift_storage_interface`` and, in turn, ``swift_replication_interface``, i.e., it controls where swift will offer its services (and possibly where the replication backbone will be). [1] It is irrelevant to block (volume) storage interfaces.

Enabling multipathing (via enable_multipathd) should be sufficient in your case, as long as the targets advertise multiple paths.

Please let us know how the cinder guide could be improved. [2]

[1] https://docs.openstack.org/kolla-ansible/xena/admin/production-architecture-...
[2] https://docs.openstack.org/kolla-ansible/xena/reference/storage/cinder-guide...

Kind regards,
-yoctozepto
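For reference, the defaulting chain described above looks roughly like this in kolla-ansible's group_vars defaults (a sketch; verify the exact variable names against your release):

    storage_interface: "{{ network_interface }}"
    swift_storage_interface: "{{ storage_interface }}"
    swift_replication_interface: "{{ swift_storage_interface }}"

So setting storage_interface only moves swift's defaults around; it never reaches cinder or the iSCSI data path.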
Hi Radosław, thanks for the answer. So if I'm not going to use swift, then I shouldn't set storage_interface, right?

Ok then - if I use external iSCSI storage (like NetApp, Nexenta, etc.) I have to configure the proper cinder.volume.driver.iscsi.XXXX and specify the storage host. But what if my storage host is reachable on multiple IP addresses? Should I set

XXX_host=FIRST_IP
XXX_host=SECOND_IP
XXX_host=THIRD_IP
…

?

I assume that, because each container sees all host NICs, it's enough to have host interfaces connected to the same network as the external storage, the one specified in cinder.conf?

If I use only external iSCSI storage, then only the cinder container is responsible for connecting volumes on external storage with instances? Are the iscsid & tgtd containers used only if I have external volumes mounted to the compute host?

By enabling multipathing you mean to set enable_multipathd in globals.yml?

Best regards
Adam Tomas
On Thu, 23 Dec 2021 at 10:31, Adam Tomas <bkslash@poczta.onet.pl> wrote:
> Hi Radosław, thanks for the answer. So if I'm not going to use swift, then I shouldn't set storage_interface, right?
That's right. And we will deprecate and remove this confusing variable.
> Ok then - if I use external iSCSI storage (like NetApp, Nexenta, etc.) I have to configure the proper cinder.volume.driver.iscsi.XXXX and specify the storage host. But what if my storage host is reachable on multiple IP addresses? Should I set
>
> XXX_host=FIRST_IP
> XXX_host=SECOND_IP
> XXX_host=THIRD_IP
> …
>
> ?
No idea, I have not used it like this. Probably best to ask cinder folks specifically about this config. Kolla does not affect this.
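For illustration only - a hedged sketch of what a backend section for an external iSCSI array can look like in cinder.conf. The option names below are the ones the NetApp ONTAP driver uses, but the addresses, credentials and backend name are placeholders, and this is not a verified configuration. Note that only one address is configured: as mentioned above, it is the array that advertises its data portals (the multiple paths) to the initiator.

    [DEFAULT]
    enabled_backends = netapp-iscsi

    [netapp-iscsi]
    volume_backend_name = netapp-iscsi
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = iscsi
    # Single management address; the data portals (the multiple
    # paths) are advertised by the array during iSCSI discovery.
    netapp_server_hostname = FIRST_IP
    netapp_login = admin
    netapp_password = secret
    netapp_vserver = svm_cinder
    # Let cinder itself use multipath for image-to-volume transfers.
    use_multipath_for_image_xfer = true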
> I assume that, because each container sees all host NICs, it's enough to have host interfaces connected to the same network as the external storage, the one specified in cinder.conf?
That sounds sensible.
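As a purely hypothetical illustration of that host-side setup, using the interface names from earlier in the thread and invented subnets for the two storage VLANs (netplan syntax, as on Ubuntu):

    # /etc/netplan/60-storage.yaml
    network:
      version: 2
      ethernets:
        eth4:
          addresses: [10.0.4.21/24]   # storage VLAN A (invented subnet)
        eth5:
          addresses: [10.0.5.21/24]   # storage VLAN B (invented subnet)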
> If I use only external iSCSI storage, then only the cinder container is responsible for connecting volumes on external storage with instances? Are the iscsid & tgtd containers used only if I have external volumes mounted to the compute host?
tgtd is only used if you are on Debuntu (Debian/Ubuntu) and use the Linux LVM-backed iSCSI target; it's irrelevant for external iSCSI. iscsid, on the other hand, is used to set up the initiator (client), and thus it runs where cinder-volume and nova-compute run, to establish the connection to the target. The same goes for multipathd.
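To verify the result on a compute node, the standard open-iscsi and multipath-tools commands apply (run inside the respective containers or on the host, depending on your setup):

    # one active session per portal is expected - 4 in the
    # topology described earlier in this thread
    iscsiadm -m session

    # each volume should show up as a single dm device with
    # multiple active paths underneath
    multipath -ll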
> By enabling multipathing you mean to set enable_multipathd in globals.yml?
Yes, that's precisely what I meant. :-)

Kind regards,
-yoctozepto
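That is a single line in /etc/kolla/globals.yml (kolla-ansible booleans are conventionally quoted strings):

    enable_multipathd: "yes"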
Thanks a lot :) So - anyone from the cinder team - how should I use cinder with external MPIO iSCSI storage? If I use external iSCSI storage (like NetApp, Nexenta, etc.) I have to configure the proper cinder.volume.driver.iscsi.XXXX and specify the storage host. But what if my storage host is reachable on multiple IP addresses? Should I set

XXX_host=FIRST_IP
XXX_host=SECOND_IP
XXX_host=THIRD_IP
…

? Or is one XXX_host=IP enough to get the additional paths from the iSCSI target?

Best regards
Adam Tomas
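For what it's worth, the earlier point that "the targets advertise multiple paths" can be checked directly: with most arrays, a sendtargets discovery against a single portal returns all portals for the target, so one configured address is typically enough. An illustration with invented addresses and IQN:

    iscsiadm -m discovery -t sendtargets -p FIRST_IP:3260
    # example (invented) output - one target reachable via four
    # portals, i.e. four MPIO paths:
    #   10.0.4.10:3260,1 iqn.2000-01.com.example:storage.array1
    #   10.0.4.11:3260,1 iqn.2000-01.com.example:storage.array1
    #   10.0.5.10:3260,2 iqn.2000-01.com.example:storage.array1
    #   10.0.5.11:3260,2 iqn.2000-01.com.example:storage.array1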
participants (2)
- Adam Tomas
- Radosław Piliszek