Hello Saverio,

thank you, I will have a look at these documents.

Michael

> Saverio Proto <zioproto at gmail.com> wrote on 21 June 2016 at 09:42:
>
> Hello Michael,
>
> a very widely adopted solution is to use Ceph with rbd volumes.
>
> http://docs.openstack.org/liberty/config-reference/content/ceph-rados.html
> http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>
> You find more options here under Volume drivers:
> http://docs.openstack.org/liberty/config-reference/content/section_volume-drivers.html
>
> Saverio
>
> 2016-06-21 9:27 GMT+02:00 Michael Stang <michael.stang at dhbw-mannheim.de>:
> > Hi,
> >
> > I wonder what the recommendation is for shared storage for the compute
> > nodes. At the moment we are using an iSCSI device which is served to all
> > compute nodes with multipath; the filesystem is OCFS2. But this makes it a
> > little inflexible in my opinion, because you have to decide in advance how
> > many compute nodes you will have in the future.
> >
> > So is there any suggestion on which kind of shared storage to use for the
> > compute nodes, and which filesystem?
> >
> > Thanks,
> > Michael
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Best regards

Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim

ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim
Tel.: +49 (0)621 4105 - 1367

michael.stang at dhbw-mannheim.de
http://www.dhbw-mannheim.de
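As a concrete illustration of the Ceph RBD setup Saverio recommends, a minimal cinder.conf backend section might look like the sketch below. The backend name, pool, user, and secret UUID are placeholder assumptions, not values from this thread; the exact options for your release are in the config-reference and rbd-openstack guides linked above.

    # Sketch of a Ceph RBD volume backend for Cinder (all names are examples)
    [DEFAULT]
    # Tell cinder-volume which backend section(s) to load
    enabled_backends = ceph-rbd

    [ceph-rbd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-rbd
    # Ceph pool that will hold the volume images
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    # CephX user Cinder authenticates as
    rbd_user = cinder
    # UUID of the libvirt secret holding that user's key on the compute nodes
    rbd_secret_uuid = <uuid-of-libvirt-secret-for-cinder-key>

On the compute side, libvirt needs a matching secret defined so Nova can attach the RBD volumes to instances; the rbd-openstack document linked above walks through creating the CephX keyring and registering it with libvirt.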