[Openstack-operators] Utilising FC SAN in OpenStack

Christian Wittwer wittwerch at gmail.com
Wed Mar 21 07:57:26 UTC 2012

Hi Greg,
Well, you could simply not use openstack-volume. We have a highly
available Gluster volume which we mount on every compute node
under NOVA-INST-DIR/instances/, that's it.
You can then give an instance more local storage (configured via the
flavors), and this additional disk file is stored within the instance
directory (and therefore on Gluster too).
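For illustration, a minimal sketch of what that mount could look like on a compute node. The server name "gluster1" and volume name "nova-instances" are hypothetical, and /var/lib/nova/instances is only the common default for NOVA-INST-DIR; adjust all three to your deployment.

```
# /etc/fstab on each compute node (illustrative names):
# gluster1:/nova-instances  /var/lib/nova/instances  glusterfs  defaults,_netdev  0 0

# then mount it:
mount /var/lib/nova/instances
```

The _netdev option just tells the init system to wait for networking before attempting the mount; with the volume mounted at NOVA-INST-DIR/instances/, the _base and delta files land on Gluster without any extra nova configuration.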


2012/3/21 Greg Cockburn <gergnz at gmail.com>

> Hi all,
> I am looking at building an OpenStack cluster for internal use at my
> company.
> My previous experience has been solely with virtual clusters, using either
> in-house scripts and management tools or third-party commercial offerings
> (e.g. OracleVM).
> I am trying to understand how the underlying block device layer works, and
> am struggling to understand the documentation.
> We have this NOVA-INST-DIR/instances/ where we have a _base and a delta
> file created, and this should be mounted on all compute nodes using some
> cluster FS, NFS or similar.
> Then we have openstack-volume, controller-node, LVM and iSCSI in the mix.
> I understand where OpenStack is coming from in trying to build an
> environment from commodity parts, but the block devices are at the core of
> the stack, a vital part, and require fast, low-latency storage.
> Using 1Gb iSCSI to a single host seems crazy to me (single point of
> failure, slow, high-latency).
> I really want to understand how to integrate a FC SAN that utilises
> multipathing into the environment, and utilise something like CLVM or a
> cluster filesystem (e.g. OCFS2) to manage the block devices on each
> compute node.
> Any information on how I might be able to achieve this would be great.
> Thanks,
> Greg.
> _______________________________________________
> Openstack-operators mailing list
> Openstack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
