On 12/04, Michael Knox wrote:
Hi Satish,
We support customers with "enterprise" SAN arrays. We try to keep it iSCSI, but we have a couple on FC. It's all down to the array support in Cinder,
Hi, I agree, there are customers using FC in OpenStack. FCP is not a strange storage transport protocol to use in OpenStack, nor one with little support. FC with multipathing is relatively common; what is less frequently used are the Zone Managers, but they should also work, and the last issue that had to be fixed in those was back in the Python 2 to 3 transition.
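For anyone curious, the zone manager is just extra configuration on the cinder-volume side. A rough sketch for a Brocade fabric follows (class paths as in the Cinder zone manager docs; the backend name, fabric name and credentials are placeholders you would replace):

  # cinder.conf
  [my-fc-backend]
  # ... your FC driver options ...
  zoning_mode = fabric

  [fc-zone-manager]
  zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
  fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
  zoning_policy = initiator-target
  fc_fabric_names = fabric_a

  [fabric_a]
  fc_fabric_address = <switch management IP>
  fc_fabric_user = <user>
  fc_fabric_password = <password>

  # nova.conf on the computes, if you want multipath on the data path
  [libvirt]
  volume_use_multipath = True

Cisco fabrics have an equivalent driver; the structure is the same.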
as others have pointed out. For Glance, we just use an NFS mount from the array that is mounted on the control nodes, and we are good.
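Concretely, that is just Glance's filesystem store pointed at the mount; roughly something like this (multi-store syntax, the store name and path are only examples):

  # glance-api.conf
  [DEFAULT]
  enabled_backends = fs:file

  [glance_store]
  default_backend = fs

  [fs]
  # NFS export from the array, mounted on the control nodes (e.g. via fstab)
  filesystem_store_datadir = /var/lib/glance/images

Glance itself neither knows nor cares that the directory lives on the array.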
Just be aware that some arrays have their drivers outside of Cinder, like Pure or EMC's Unity (storops), so you will need to consider that.
I believe you mean they have external dependencies, mostly the client library that communicates with their storage array, because the drivers themselves are in-tree in the Cinder repository.
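For example, the Unity driver is in-tree, but it imports the external storops client library, so that library has to be installed on the cinder-volume hosts. A backend stanza looks roughly like this (all values are placeholders):

  # cinder.conf
  [DEFAULT]
  enabled_backends = unity-fc

  [unity-fc]
  volume_backend_name = unity-fc
  volume_driver = cinder.volume.drivers.dell_emc.unity.Driver
  storage_protocol = FC
  san_ip = <Unisphere management IP>
  san_login = <user>
  san_password = <password>
  unity_storage_pool_names = <pool>
  # the driver code ships with Cinder; only the 'storops' Python client
  # is an external dependency on the cinder-volume hosts

Cheers, Gorka.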
Cheers. Michael
On Fri, Apr 12, 2024 at 5:15 PM Satish Patel <satish.txt@gmail.com> wrote:
Agreed with your suggestion to give it a try and see. What about Glance images, though: where should I store them? I don't think Glance supports FC SAN.
It would be overkill to build extra redundancy checks on FC SAN disks, because the FC SAN itself does mirroring inside the array.
Anyway, I will keep this thread alive until I find a proper way to handle this storage with a Cinder driver.
On Fri, Apr 12, 2024 at 4:32 PM <tom@tjclark.xyz> wrote:
Here's the support matrix. Each driver lists the supported transport protocols. https://docs.openstack.org/cinder/latest/reference/support-matrix.html
I'm not sure there are more moving parts than with any other implementation, putting aside the zoning aspect. There is an Ethernet-based management interface that cinder-volume directs. It receives identifier(s) for one or more hypervisor/initiator data interfaces (WWPN, IP address, host ID, etc.) and the spec for the volume itself. The initiator and target then interact over the data interface, making the volume available on the hypervisor.
This workflow is broadly the same for all protocols.
Unfortunately, there isn't a Cinder GlusterFS driver (anymore). I would suggest testing with the correct driver for your FC SAN, assuming one exists. Even if it works first time, enable debug-level logging so you can see and get a feel for what's happening underneath, and then you can assess whether it's an environment you can maintain. With any luck, you might be surprised by how straightforward it looks. In some ways FC is a lot simpler than some of the Ethernet-based options.
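Turning that on is a one-liner; the same option in nova.conf will also show you the os-brick attach side from nova-compute:

  # cinder.conf
  [DEFAULT]
  debug = True

Then watch the cinder-volume and nova-compute logs while you create and attach a volume.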
I'm not aware of any practical solution where a "store" containing volumes is exported rather than the individual volumes themselves, unless you build Ceph on top of a bunch of LUNs, but that is definitely more of a thought exercise than a recommendation.