[Openstack-operators] Distributed Filesystem

Razique Mahroua razique.mahroua at gmail.com
Thu Apr 18 08:00:00 UTC 2013


Sure :)
Great feedback all around. Many technologies do pretty much everything on paper, but in the end it's more about whether the tech does the job and does it well.
For such a critical implementation, a reliable solution is a must-have, i.e. one that has proven over the years that it can be used and is stable enough for us to enjoy our weekends :)

Razique

On 18 Apr 2013, at 00:14, Paras pradhan <pradhanparas at gmail.com> wrote:

> Thanks for the replies Razique. We are doing a test installation and looking for options for live migration. It looks like both Cinder and shared file storage are options. Of the two, which one do you recommend, considering that the Cinder backend will be typical LVM-based commodity hardware?
> 
> Thanks
> Paras.
> 
> 
> On Wed, Apr 17, 2013 at 5:03 PM, Razique Mahroua <razique.mahroua at gmail.com> wrote:
> Definitely, use the "--block-migrate" flag with the nova live-migration command so you don't need shared storage.
> You can also boot from Cinder, depending on which version of OpenStack you run.
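> 
> For reference, a minimal sketch of both operations with a Grizzly-era novaclient (the instance, host, and volume IDs are placeholders, and the flag spelling varied between client releases, --block-migrate vs. --block_migrate):
> 
>     # Live-migrate an instance without shared storage by copying its
>     # disk to the target host over the network
>     nova live-migration --block-migrate <instance-uuid> <target-host>
> 
>     # Boot an instance from a Cinder volume instead of a local image;
>     # vda=<volume-uuid>:::0 attaches the volume as vda and keeps it
>     # when the instance is deleted
>     nova boot --flavor m1.small \
>         --block-device-mapping vda=<volume-uuid>:::0 \
>         volume-backed-instance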
> 
> Razique Mahroua - Nuage & Co
> razique.mahroua at gmail.com
> Tel : +33 9 72 37 94 15
> 
> 
> On 17 Apr 2013, at 23:55, Paras pradhan <pradhanparas at gmail.com> wrote:
> 
>> Can we do live migration without using shared storage like GlusterFS, by booting the instance from a Cinder volume instead?
>> 
>> Sorry, a little off topic.
>> 
>> Thanks
>> Paras.
>> 
>> 
>> On Wed, Apr 17, 2013 at 4:53 PM, Razique Mahroua <razique.mahroua at gmail.com> wrote:
>> Many use either a proprietary backend or good old LVM.
>> I'd go with Ceph for it, since there is native integration between Cinder/nova-volume and Ceph.
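>> 
>> For example, pointing Cinder at an RBD pool is only a few lines in cinder.conf; the pool and user names below are illustrative, and the driver's import path moved between releases (it was cinder.volume.driver.RBDDriver in Folsom):
>> 
>>     [DEFAULT]
>>     # Use the Ceph RBD driver instead of the default LVM one
>>     volume_driver=cinder.volume.drivers.rbd.RBDDriver
>>     rbd_pool=volumes
>>     rbd_user=cinder
>>     # Libvirt secret holding the cephx key, so compute nodes can attach volumes
>>     rbd_secret_uuid=<libvirt-secret-uuid>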
>> 
>> Razique Mahroua - Nuage & Co
>> razique.mahroua at gmail.com
>> Tel : +33 9 72 37 94 15
>> 
>> 
>> On 17 Apr 2013, at 23:49, Paras pradhan <pradhanparas at gmail.com> wrote:
>> 
>>> What do people use as a backend for Cinder?
>>> 
>>> Thanks
>>> Paras.
>>> 
>>> 
>>> On Wed, Apr 17, 2013 at 4:41 PM, Razique Mahroua <razique.mahroua at gmail.com> wrote:
>>> I was about to use CephFS (Bobtail), but I can't resize instances without CephFS crashing.
>>> I'm currently considering GlusterFS, which not only provides great performance but is also pretty easy to administer :)
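>>> 
>>> For anyone curious, sharing the instances directory with GlusterFS is just a FUSE mount on each compute node; the server and volume names here are placeholders:
>>> 
>>>     # Mount a GlusterFS volume over the nova instances directory
>>>     mount -t glusterfs gluster1.example.com:/nova /var/lib/nova/instances
>>> 
>>>     # Or make it persistent via an /etc/fstab entry:
>>>     # gluster1.example.com:/nova  /var/lib/nova/instances  glusterfs  defaults,_netdev  0 0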
>>> 
>>> On 17 Apr 2013, at 22:07, JuanFra Rodriguez Cardoso <juanfra.rodriguez.cardoso at gmail.com> wrote:
>>> 
>>>> Glance and Nova with MooseFS.
>>>> Reliable, good performance and easy configuration.
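>>>> 
>>>> For reference, a typical setup just mounts the MooseFS namespace over the relevant directory on each node; the master host name and mount point are placeholders:
>>>> 
>>>>     # FUSE-mount MooseFS for nova instance storage
>>>>     mfsmount /var/lib/nova/instances -H mfsmaster.example.com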
>>>> 
>>>> ---
>>>> JuanFra
>>>> 
>>>> 
>>>> 2013/4/17 Jacob Godin <jacobgodin at gmail.com>
>>>> Hi all,
>>>> 
>>>> Just a quick survey for all of you running distributed file systems for nova-compute instance storage. What are you running? Why are you using that particular file system?
>>>> 
>>>> We are currently running CephFS and chose it because we are already using Ceph for volume and image storage. It works great, except for snapshotting, where we see slow performance and high CPU load.
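>>>> 
>>>> For context, a CephFS setup like this simply mounts the file system over the instances directory on each compute node with the kernel client; the monitor host and secret file are placeholders:
>>>> 
>>>>     # Mount CephFS on a compute node; requires a cephx key for "admin"
>>>>     mount -t ceph mon1.example.com:6789:/ /var/lib/nova/instances \
>>>>         -o name=admin,secretfile=/etc/ceph/admin.secret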
>>>> 
>>>> 
>>>> _______________________________________________
>>>> OpenStack-operators mailing list
>>>> OpenStack-operators at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>> 
>>> 
>>> 
>>> 
>>> 
>> 
>> 
> 
> 


