Agreed with your suggestion to give it a try and see. How about Glance images, though? Where should I store them? I don't think Glance supports FC SAN.
Glance does support FC SAN, but in an indirect way. There is a Cinder backend for Glance where you can configure your images to be stored in Cinder volumes, which in turn live on the FC SAN as LUNs.
This driver was introduced so that you can use your Cinder storage solution for Glance as well.
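For reference, wiring it up is mostly a glance-api.conf change. A minimal sketch, assuming the older single-store style of configuration (option names shift a bit between releases, and the endpoint, credentials and volume type below are placeholders for whatever exists in your deployment):

    [glance_store]
    # Store images as Cinder volumes instead of local files
    stores = cinder,file
    default_store = cinder
    # Credentials the cinder store uses to reach Cinder via Keystone
    cinder_store_auth_address = http://controller:5000/v3
    cinder_store_user_name = glance
    cinder_store_password = <password>
    cinder_store_project_name = service
    # Optionally pin image volumes to a volume type backed by the FC SAN
    cinder_volume_type = <your-fc-volume-type>

The image then lands as an ordinary Cinder volume on whichever backend that volume type maps to.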
It would be overkill to build extra checks on top of the FC SAN disks, because the FC SAN itself already handles mirroring internally.
Anyway, I will keep this thread alive until I find a proper way to handle this storage using a Cinder driver.
Here's the support matrix. Each driver lists the supported transport protocols.
https://docs.openstack.org/cinder/latest/reference/support-matrix.html
I'm not sure there are more moving parts than with any other implementation, putting aside the zoning aspect.
There is an Ethernet-based management interface that cinder-volume directs. It receives identifier(s) for one (or more) hypervisor/initiator data interfaces (WWPN, IP address, host ID, etc.), plus the spec for the volume itself.
The initiator and target then interact over the data interface, making the volume available on the hypervisor.
This workflow is broadly the same for all protocols.
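If you want to sanity-check that workflow end to end once a backend is configured, the usual smoke test is just create-and-attach from the CLI, watching the cinder-volume and nova-compute logs while it happens (names below are made up):

    openstack volume create --size 10 fc-test
    openstack server add volume <instance-name> fc-test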
Unfortunately, there isn't a Cinder GlusterFS driver (anymore).
I would suggest testing with the correct FC SAN driver for your array, assuming one exists.
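The backend stanza in cinder.conf is short in any case. A rough sketch, with the driver class and any management credentials entirely dependent on the vendor (the class path below is a placeholder, not a real driver):

    [DEFAULT]
    enabled_backends = fc-san

    [fc-san]
    # Vendor-specific driver class; take the exact path from the support matrix entry
    volume_driver = cinder.volume.drivers.<vendor>.<FCDriverClass>
    volume_backend_name = fc-san
    # ...plus whatever management IP/credentials the vendor driver needs

You would then map a volume type to it, e.g. openstack volume type create --property volume_backend_name=fc-san fc-san.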
Even if it works the first time, enable debug-level logging so you can get a feel for what's happening underneath, and then assess whether it's an environment you can maintain. With any luck, you might be surprised by how straightforward it looks.
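Debug logging is just an oslo.log flag in cinder.conf (and the same option works in nova.conf for the compute side):

    [DEFAULT]
    debug = True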
In some ways, FC is a lot simpler than some of the Ethernet-based options.
I'm not aware of any practical solution where a "store" containing volumes is exported rather than the individual volumes themselves, unless you built Ceph on top of a bunch of LUNs, but that is definitely more of a thought exercise than a recommendation.