[nova][cinder] OpenStack with FC SAN storage and HA
Folks, I have a question: does OpenStack support FC SAN storage? I have deployed OpenStack with Ceph, but this is the first time I've gotten this requirement. A customer has SAN storage because they were using VMware and are now moving to OpenStack. How does OpenStack talk to an FC SAN, and what about VM migration etc.? Do I need to mount the LUN on each compute node and point nova at it? Or do I have to create some kind of shared filesystem on top of the LUN and expose that to nova/cinder? Can someone explain the entire workflow?
There's support for FC SAN storage, but the implementation is vendor specific. What you'll want to check is the Cinder driver support matrix for the specific SAN. Consider each volume an independent LUN.

Cinder will (generally) use SSH to talk to the SAN controller, managing the creation and export of new LUNs. Cinder provides the WWPN(s) of the relevant hypervisor to the SAN and exports the volumes accordingly. Cinder then prompts a rescan of the FC bus, the volume becomes available, and it is passed through to nova/libvirt. There's some simplification here: os-brick on the hypervisor takes care of some of these steps, but it can be considered under the Cinder umbrella.

There's also limited support for Cinder to dynamically configure switch zoning.

You'll want to configure multipath on the host, and then configure nova for multipath.

Having a single large LUN that contains all volumes could be achievable with the LVM driver and clustered LVM, but that would be quite a unique setup.
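To make that concrete, here is a rough sketch of the kind of configuration involved; the backend name and driver class below are placeholders and the exact options come from the vendor driver's documentation, while use_multipath_for_image_xfer and volume_use_multipath are the standard Cinder/Nova multipath knobs:

    # cinder.conf on the controller (backend name and driver class are placeholders)
    [DEFAULT]
    enabled_backends = fc-san

    [fc-san]
    volume_backend_name = fc-san
    volume_driver = cinder.volume.drivers.<vendor>.<VendorFCDriver>
    san_ip = <array management address>
    san_login = <array admin user>
    san_password = <array admin password>
    use_multipath_for_image_xfer = true

    # nova.conf on each compute node
    [libvirt]
    volume_use_multipath = true

The host-side piece is just the distro multipath daemon (device-mapper-multipath / multipath-tools) installed and running on each compute node; os-brick drives it during attach.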
Thank you for the email. I will check vendor support, but it looks very unusual to me because I'm not sure how many people are using this kind of solution with OpenStack. I am a little worried about troubleshooting because it has lots of moving parts. How about mounting the LUN on all compute nodes and creating GlusterFS across them? That would provide redundancy + HA for all compute nodes.
Here's the support matrix; each driver lists the supported transport protocols: https://docs.openstack.org/cinder/latest/reference/support-matrix.html

I'm not sure there are more moving parts than with any other implementation, putting aside the zoning aspect. There is an Ethernet-based management interface that cinder-volume directs. It receives identifier(s) for one (or more) hypervisor/initiator data interfaces (WWPN, IP address, host ID, etc.) and the spec for the volume itself. The initiator and target then interact over the data interface, making the volume available on the hypervisor. This workflow is broadly the same for all protocols.

Unfortunately, there isn't a Cinder GlusterFS driver (anymore). I would suggest testing with the correct FC SAN driver, assuming one exists. Even if it works first time, enable debug-level logging so you can see and get a feel for what's happening underneath, and then you can assess whether it's an environment you can maintain. With any luck, you might be surprised by how straightforward it looks. In some ways FC is a lot simpler than some of the Ethernet-based options.

I'm not sure of any practical solution where a "store" containing volumes is exported rather than the individual volumes themselves, unless you build Ceph up using a bunch of LUNs, but that's definitely more of a thought exercise than a recommendation.
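If it helps, once a backend is configured the workflow is easy to exercise and watch end to end; the commands below are standard openstack CLI, and the volume and server names are just examples:

    # cinder.conf: debug-level logging while you evaluate the driver
    [DEFAULT]
    debug = true

    # create a test volume and attach it to an instance
    openstack volume create --size 10 fc-test-vol
    openstack server add volume my-test-server fc-test-vol

    # on the compute node, confirm the LUN and its paths appeared
    multipath -ll
    lsblk

Watching the cinder-volume and nova-compute logs while those two commands run is the quickest way to get a feel for the moving parts.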
Agreed with your suggestion to give it a try and see. How about Glance images, where should I store them? I don't think Glance supports FC SAN. It would be overkill to build Ceph on the FC SAN disks because the FC SAN already does mirroring internally. Anyway, I will keep this thread alive until I find a proper way to handle this storage using a Cinder driver.
Hi Satish,

We support customers with "enterprise" SAN arrays. We try to keep it iSCSI, but we have a couple on FC. It's all down to the array support in Cinder, as others have pointed out.

For Glance, we just use an NFS mount from the array, mounted on the control nodes, and we are good.

Just be aware that some arrays have their drivers outside of Cinder, like Pure or EMC's Unity (storops), so you will need to consider that.

Cheers,
Michael
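For what it's worth, a minimal sketch of the Glance-on-NFS arrangement Michael describes; the export, mount point and option values are made-up examples, and newer releases express the same thing through enabled_backends rather than default_store:

    # /etc/fstab on the control nodes: NFS export served by the array
    array-nfs.example.com:/glance_images  /var/lib/glance/images  nfs  defaults,_netdev  0 0

    # glance-api.conf
    [glance_store]
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images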
On 12/04, Michael Knox wrote:
> Hi Satish,
> We support customers with "enterprise" SAN arrays. We try to keep it iSCSI, but we have a couple on FC. It's all down to the array support in Cinder,
Hi,

I agree, there are customers using FC in OpenStack. FCP is not a strange storage transport protocol to use in OpenStack or one that has little support. FC with multipathing is relatively common; what is less frequently used are the zone managers, but they should also work, and the last issue that had to be fixed in them was back during the Python 2 to 3 transition.
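For reference, enabling the zone manager is roughly this shape in cinder.conf; a sketch from memory, with the Brocade driver shown as the common example, and the per-fabric switch credentials live in their own sections whose option names are in the Cinder Fibre Channel zone manager docs:

    # cinder.conf, in the FC backend section
    [fc-san]
    zoning_mode = fabric

    # fabric zoning configuration
    [fc-zone-manager]
    zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
    fc_fabric_names = fabric-a,fabric-b
    # [fabric-a] and [fabric-b] sections then hold the switch address and
    # credentials for each fabric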
> as others have pointed out. For Glance, we just use an NFS mount from the array that is mounted on the control nodes and we are good
> Just be aware that some arrays have their drivers outside of cinder, like Pure or EMC's Unity (storops), so you will need to consider that.
I believe you mean they have external dependencies, mostly the client library that communicates with their storage array, because the drivers themselves are in-tree in the Cinder repository.

Cheers,
Gorka
Hi Satish,

On Sat, Apr 13, 2024 at 2:44 AM Satish Patel <satish.txt@gmail.com> wrote:
> Agreed with your suggestion to give it a try and see. How about Glance images, where should I store them? I don't think Glance supports FC SAN.
Glance does support FC SAN, but in an indirect way. There is a Cinder backend for Glance where you can configure your images to be stored in Cinder volumes, which in turn live on the FC SAN as LUNs. This driver was introduced with the intent of letting you reuse your Cinder storage solution for Glance as well.
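A rough sketch of what that looks like in glance-api.conf; the credentials and volume type are placeholders, and depending on the release the store is enabled via stores/default_store or enabled_backends:

    # glance-api.conf
    [glance_store]
    default_store = cinder
    cinder_store_auth_address = http://<keystone host>:5000/v3
    cinder_store_user_name = glance
    cinder_store_password = <service password>
    cinder_store_project_name = service
    # optional: a volume type that pins image volumes to the FC backend
    cinder_volume_type = fc-san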
Thank you everyone for your advice, this is very helpful information. I will keep you posted as I make progress on my FC deployment.

~S
participants (5)
- Gorka Eguileor
- Michael Knox
- Rajat Dhasmana
- Satish Patel
- tom@tjclark.xyz