Here's the support matrix; it lists the supported transport protocols for each driver.
https://docs.openstack.org/cinder/latest/reference/support-matrix.html
Putting the zoning aspect aside, I'm not sure there are more moving parts than with any other implementation.
There is an Ethernet-based management interface that cinder-volume directs. It receives identifier(s) for one or more hypervisor/initiator data interfaces (WWPN, IP address, host ID, etc.) and the spec for the volume itself.
The initiator and target then interact over the data interface, making the volume available on the hypervisor.
This workflow is broadly the same for all protocols.
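To make that concrete, here's a rough sketch (all values invented) of the two structures that change hands when cinder-volume asks a driver to attach a volume. The shape follows Cinder's initialize_connection() convention for an FC backend, but treat the exact keys as something to verify against your driver's docs:

    # Illustrative only; key names follow Cinder's initialize_connection()
    # convention, and exact contents vary by driver and protocol.

    # What cinder-volume hands the driver: the hypervisor/initiator side.
    connector = {
        "host": "compute-01",                       # hypervisor hostname
        "ip": "192.0.2.10",                         # data interface IP
        "initiator": "iqn.1994-05.com.example:abc", # iSCSI IQN (iSCSI backends)
        "wwpns": ["21000024ff31fa12"],              # FC port names (FC backends)
        "multipath": True,
    }

    # What the driver returns: the target side, enough for os-brick to
    # discover and attach the LUN on the hypervisor.
    connection_info = {
        "driver_volume_type": "fibre_channel",
        "data": {
            "target_wwn": ["500a098188e04a21"],     # array target ports
            "target_lun": 7,                        # LUN presented to this host
            "target_discovered": True,
        },
    }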
Unfortunately, there isn't a Cinder GlusterFS driver (anymore).
I would suggest testing with the correct FC SAN driver for your array, assuming one exists.
Even if it works the first time, enable debug-level logging so you can get a feel for what's happening underneath; then you can assess whether it's an environment you can maintain. With any luck, you might be surprised by how straightforward it looks.
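For that kind of test run, a minimal cinder.conf sketch is below; debug = True in [DEFAULT] is the standard oslo.log switch, while the backend section name and driver path are placeholders for whatever driver actually matches your array:

    [DEFAULT]
    debug = True
    enabled_backends = myfc

    # Placeholder section; substitute the volume_driver path for your
    # array from the support matrix above.
    [myfc]
    volume_backend_name = myfc
    volume_driver = cinder.volume.drivers.example.ExampleFCDriver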
In some ways FC is a lot simpler than some of the Ethernet-based options.
I'm not aware of any practical solution where a "store" containing volumes is exported rather than the individual volumes themselves, unless you build Ceph on top of a bunch of LUNs; but that's definitely more of a thought exercise than a recommendation.