OpenStack + FCP Storages

Gorka Eguileor geguileo at redhat.com
Mon May 22 15:57:16 UTC 2023


On 17/05, Jorge Visentini wrote:
> Hello.
>
> Today in our environment we only use FCP 3PAR Storages.
> Is there a "friendly" way to use FCP Storages with Openstack?
> I know and I've already tested Ceph, so I can say that it's the best
> storage integration for Openstack, but it's not my case hehe

Hi,

As a Cinder and OS-Brick developer I use an FC 3PAR system for most of my
FC testing and os-brick development, and the only requirement for it
to work is an external Python dependency (python-3parclient) installed
wherever cinder-volume is going to run.
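For reference, a minimal cinder.conf backend section for the 3PAR FC
driver looks roughly like this (hostnames, credentials, and the backend
name are placeholders; check the driver documentation for your release
for the full option list):

```ini
[DEFAULT]
enabled_backends = 3par-fc

[3par-fc]
volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
volume_backend_name = 3par-fc
# WSAPI endpoint of the 3PAR array
hpe3par_api_url = https://3par.example.com:8080/api/v1
hpe3par_username = cinderuser
hpe3par_password = secret
# SSH access, used for operations not covered by the WSAPI
san_ip = 3par.example.com
san_login = cinderuser
san_password = secret
```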

I've tried the driver with both FC zone managers, Cisco and Brocade, and
it works as expected.
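In case it helps, enabling the zone manager is also just configuration.
A rough sketch for a Brocade fabric follows (fabric name, address, and
credentials are placeholders, and exact option placement varies between
releases, so double-check against the zone manager docs):

```ini
# In each FC backend section that should trigger automatic zoning:
[3par-fc]
zoning_mode = fabric

[fc-zone-manager]
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
fc_fabric_names = fabric_a

[fabric_a]
fc_fabric_address = switch-a.example.com
fc_fabric_user = admin
fc_fabric_password = secret
```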

My only complaint would be that there are a couple of nuisances and
issues, which may be related to my 3PAR system being really, really,
old, so I end up using a custom driver that includes my own patches that
haven't been merged yet [1][2][3].

I also use a custom python-3parclient with my fix that hasn't been
merged either [4].

For me the most important of those patches is the one that allows me to
disable the online copy [2], because I find that this 3PAR feature gives
me more problems than benefits, though that may just be my setup.

If you are doing a full OpenStack deployment with multiple controller
nodes that run cinder-volume in Active-Passive and then a bunch of
compute nodes, just remember that you'll need HBAs in all the controller
nodes where cinder-volume could be running as well as in all your
compute nodes.  If you are not using the zone manager driver you'll need
to configure your switches manually to allow those hosts access to the
3PAR.
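A quick way to check whether a node actually has usable HBAs is to look
at the fc_host entries the kernel exposes in sysfs, which is the same
information os-brick relies on. A small sketch (the sysfs path is
standard on Linux; the function just returns an empty list on nodes
without FC hardware):

```python
import glob
import os


def fc_host_wwpns(sysfs_root="/sys/class/fc_host"):
    """Return the WWPN of each FC HBA port visible to the kernel.

    On hosts without FC HBAs the sysfs directory simply doesn't
    exist, so this returns an empty list there.
    """
    wwpns = []
    pattern = os.path.join(sysfs_root, "host*", "port_name")
    for port_file in sorted(glob.glob(pattern)):
        with open(port_file) as f:
            # sysfs reports e.g. "0x10008c7cff5ec280"; drop the 0x prefix
            wwpns.append(f.read().strip().removeprefix("0x"))
    return wwpns


if __name__ == "__main__":
    ports = fc_host_wwpns()
    if ports:
        print("FC HBA WWPNs:", ", ".join(ports))
    else:
        print("No FC HBAs visible in /sys/class/fc_host")
```

Those WWPNs are also what you'd use when zoning the switches manually.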

Cheers,
Gorka.

[1]: https://review.opendev.org/c/openstack/cinder/+/756709
[2]: https://review.opendev.org/c/openstack/cinder/+/756710
[3]: https://review.opendev.org/c/openstack/cinder/+/756711
[4]: https://github.com/hpe-storage/python-3parclient/pull/79

>
> All the best!
> --
> Att,
> Jorge Visentini
> +55 55 98432-9868




More information about the openstack-discuss mailing list