[Openstack-operators] Distributed Filesystem
Matt Joyce
matt at nycresistor.com
Wed Apr 24 18:42:44 UTC 2013
Awesome. I'd still love to get the public info into a more consumable
format. =D
On Wed, Apr 24, 2013 at 11:31 AM, Tim Bell <Tim.Bell at cern.ch> wrote:
>
> BTW, the presentation of the user survey results was made at the summit
> last week. While the presentation will be made available through the summit
> web site, those who would like to see the results beforehand can go to
> http://www.slideshare.net/noggin143/havana-survey-resultsfinal-19903492 for
> the consolidated results from the user committee and foundation.
>
>
> The user survey remains open and we are still collecting useful data from
> users and deployments, so you're welcome to fill out the details at
> https://www.openstack.org/user-survey if there are deployments and user
> input which have not yet been received.
>
> We will be writing up a more complete blog post on the results since the
> conference time slot was limited.
>
>
> Note: all results are aggregated. The user committee has signed
> non-disclosure agreements so that sites which do not wish their individual
> details to be made public can still contribute to the community's
> understanding; this input, given in confidence, is highly appreciated.
>
> Tim
>
>
> From: Tim Bell
> Sent: 24 April 2013 20:10
> To: 'Razique Mahroua'; Lorin Hochstein
> Cc: JuanFra Rodriguez Cardoso; openstack-operators at lists.openstack.org
> Subject: RE: [Openstack-operators] Distributed Filesystem
>
>
> From the latest OpenStack user survey presented at the summit:
>
> [chart from the user survey results]
>
> Tim
>
>
> From: Razique Mahroua [mailto:razique.mahroua at gmail.com]
> Sent: 24 April 2013 18:21
> To: Lorin Hochstein
> Cc: JuanFra Rodriguez Cardoso; openstack-operators at lists.openstack.org
> Subject: Re: [Openstack-operators] Distributed Filesystem
>
>
> I feel you Jacob,
>
> Lorin, I had the exact same issue! Using both Argonaut and Bobtail, under
> high I/O load the mount crashed the server - well, the server wasn't
> actually crashing, but the mount went crazy and it was impossible to
> unmount the disk or kill the process, so I always ended up rebooting the
> nodes. What is interesting, though, is that the reason it is still not
> considered production-ready is the way metadata is currently implemented,
> rather than the code itself...
>
>
> Razique Mahroua - Nuage & Co
> razique.mahroua at gmail.com
> Tel : +33 9 72 37 94 15
>
>
> On 24 Apr 2013, at 17:36, Lorin Hochstein <lorin at nimbisservices.com>
> wrote:
>
>
> Razique:
>
> Out of curiosity, what kinds of problems did you see with CephFS? I've
> heard it's not ready for production yet, but I haven't heard anybody talk
> about specific experiences with it.
>
> Lorin
>
>
> On Sat, Apr 20, 2013 at 8:14 AM, Razique Mahroua
> <razique.mahroua at gmail.com> wrote:
>
> Hi Paras,
>
> that's the kind of setup I've always seen myself. After unsuccessful tests
> with CephFS, I'll move to the following strategy (a rough config sketch
> follows the list):
>
> - GlusterFS as shared storage for the instances (check the official doc,
> we wrote about its deployment for OpenStack)
>
> - Ceph cluster with the direct RBD gateway from nova to RADOS
>
> - Ceph cluster as well for the imaging service (Glance)
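>
> To make the Glance piece concrete, a rough glance-api.conf fragment for a
> Grizzly-era install might look like this (the pool and user names are
> placeholders, and exact option names vary between releases):
>
>     # glance-api.conf: store images directly in a Ceph pool via RBD
>     default_store = rbd
>     rbd_store_user = glance
>     rbd_store_pool = images
>     rbd_store_ceph_conf = /etc/ceph/ceph.conf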
>
>
> Some others use MooseFS as well for the shared storage (we wrote a
> deployment guide for it as well)
>
> Best regards,
> Razique
>
> Razique Mahroua - Nuage & Co
> razique.mahroua at gmail.com
> Tel : +33 9 72 37 94 15
>
>
> On 19 Apr 2013, at 17:05, Paras pradhan <pradhanparas at gmail.com> wrote:
>
>
> Well, I am not sure we would like to do it since it is marked as
> deprecated. So this is what I am thinking: for shared storage, I will be
> using GlusterFS, and use Cinder just for extra block disks on the instances
> (a sample mount is sketched below). Is this what OpenStack operators
> typically do?
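>
> (For what it's worth, the GlusterFS shared instance store usually boils
> down to mounting a volume at nova's instances path on every compute node -
> the server and volume names below are made up:
>
>     # mount a hypothetical GlusterFS volume as the shared instance store
>     mount -t glusterfs gluster1:/nova-instances /var/lib/nova/instances
>
> Live migration then works against that shared path.)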
>
> Thanks
>
> Paras.
>
>
> On Fri, Apr 19, 2013 at 12:10 AM, Razique Mahroua
> <razique.mahroua at gmail.com> wrote:
>
> More info here:
>
> http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html
>
> But I'm not sure about the latest updates - you can still use it at the
> moment.
>
> Razique
>
> Razique Mahroua - Nuage & Co
> razique.mahroua at gmail.com
> Tel : +33 9 72 37 94 15
>
>
> On 18 Apr 2013, at 17:13, Paras pradhan <pradhanparas at gmail.com> wrote:
>
>
> Regarding block migration, this is what confuses me. This is from the
> OpenStack Operations manual:
>
> --
> Theoretically live migration can be done with non-shared storage, using
> a feature known as KVM live block migration. However, this is a
> little-known feature in OpenStack, with limited testing when compared to
> live migration, and is slated for deprecation in KVM upstream.
> --
>
> Paras.
>
>
> On Thu, Apr 18, 2013 at 3:00 AM, Razique Mahroua
> <razique.mahroua at gmail.com> wrote:
>
> Sure :)
>
> Great feedback all around. Many technologies do pretty much everything on
> paper - but I guess in the end it's more about whether the tech does the
> job and does it well.
>
> For such a critical implementation, a reliable solution is a must-have -
> i.e. one that has proven over the years that it can be used and is stable
> enough for us to enjoy our weekends :)
>
> Razique
>
>
> On 18 Apr 2013, at 00:14, Paras pradhan <pradhanparas at gmail.com> wrote:
>
>
> Thanks for the replies Razique. We are doing a test installation and
> looking for options for live migration. Looks like both Cinder and shared
> file storage are options. Between these two, which one do you guys
> recommend, considering the Cinder backend will be typical LVM-based
> commodity hardware?
>
> Thanks
>
> Paras.
>
>
> On Wed, Apr 17, 2013 at 5:03 PM, Razique Mahroua
> <razique.mahroua at gmail.com> wrote:
>
> Definitely, use the "--block_migrate" flag along with the nova
> live-migration command so you don't need shared storage.
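>
> Something like the following, with made-up instance and host names (the
> flag spelling has varied between novaclient releases; some take
> --block-migrate with a hyphen):
>
>     # live-migrate "vm-01" to host "compute-02" without shared instance storage
>     nova live-migration --block_migrate vm-01 compute-02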
>
> You can boot from Cinder, depending on which version of OpenStack you run.
>
> Razique Mahroua - Nuage & Co
> razique.mahroua at gmail.com
> Tel : +33 9 72 37 94 15
>
>
> On 17 Apr 2013, at 23:55, Paras pradhan <pradhanparas at gmail.com> wrote:
>
>
> Can we do live migration without using shared storage like GlusterFS,
> booting the volume from Cinder instead?
>
> Sorry, a little off topic.
>
> Thanks
>
> Paras.
>
>
> On Wed, Apr 17, 2013 at 4:53 PM, Razique Mahroua
> <razique.mahroua at gmail.com> wrote:
>
> Many use either a proprietary backend or good old LVM.
>
> I'll go with Ceph for it, since there is native integration between
> cinder/nova-volume and Ceph (see the sketch below).
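>
> As a rough illustration of that integration, a Grizzly-era cinder.conf
> fragment might look like this (pool, user and secret UUID below are
> placeholders, and the driver path has moved between releases):
>
>     # cinder.conf: Ceph RBD as the volume backend
>     volume_driver = cinder.volume.drivers.rbd.RBDDriver
>     rbd_pool = volumes
>     rbd_user = cinder
>     # UUID of the libvirt secret holding the Ceph key on the compute nodes
>     rbd_secret_uuid = 00000000-0000-0000-0000-000000000000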
>
>
> Razique Mahroua - Nuage & Co
> razique.mahroua at gmail.com
> Tel : +33 9 72 37 94 15
>
>
> On 17 Apr 2013, at 23:49, Paras pradhan <pradhanparas at gmail.com> wrote:
>
>
> What do people use for Cinder?
>
> Thanks
>
> Paras.
>
>
> On Wed, Apr 17, 2013 at 4:41 PM, Razique Mahroua
> <razique.mahroua at gmail.com> wrote:
>
> I was about to use CephFS (Bobtail) but I can't resize the instances
> without CephFS crashing.
>
> I'm currently considering GlusterFS, which not only provides great
> performance but is also pretty easy to administer :)
>
>
> On 17 Apr 2013, at 22:07, JuanFra Rodriguez Cardoso
> <juanfra.rodriguez.cardoso at gmail.com> wrote:
>
>
> Glance and Nova with MooseFS.
> Reliable, good performance and easy configuration.
>
> ---
> JuanFra
>
> 2013/4/17 Jacob Godin <jacobgodin at gmail.com>
>
> Hi all,
>
> Just a quick survey for all of you running distributed file systems for
> nova-compute instance storage. What are you running? Why are you using that
> particular file system?
>
> We are currently running CephFS and chose it because we are already using
> Ceph for volume and image storage. It works great, except for snapshotting,
> where we see slow performance and high CPU load.
>
>
>
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>