[Openstack-operators] Distributed Filesystem
Joe Topjian
joe.topjian at cybera.ca
Wed Apr 24 19:30:48 UTC 2013
Has anyone tried creating a block device from a Ceph pool and then
exporting that device via NFS to the compute nodes for instance storage?
I'm kicking that idea around here.
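Roughly, the idea would be something like this (untested sketch; the pool name,
image name, export path, and the "nfs-head" host are just placeholders):

    rbd create instances --size 1024000 --pool nova     # ~1 TB RBD image (size in MB)
    rbd map instances --pool nova                        # map it on the NFS head node
    mkfs.ext4 /dev/rbd/nova/instances                    # regular filesystem on top
    mount /dev/rbd/nova/instances /srv/instances
    echo "/srv/instances *(rw,no_root_squash)" >> /etc/exports && exportfs -ra

    # then on each compute node:
    mount -t nfs nfs-head:/srv/instances /var/lib/nova/instances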
On Wed, Apr 24, 2013 at 11:31 AM, Razique Mahroua <razique.mahroua at gmail.com
> wrote:
> I ended up using Gluster as shared storage for the instances, and Ceph for
> Cinder/nova-volume and admin storage as well.
> Works perfectly!
>
> Razique Mahroua - Nuage & Co
> razique.mahroua at gmail.com
> Tel : +33 9 72 37 94 15
>
>
> On Apr 24, 2013, at 19:08, Jacob Godin <jacobgodin at gmail.com> wrote:
>
> Razique, what did you end up deciding on? I would like to keep my Ceph
> RADOS setup, but need a different filesystem to put on top of it. Wondering
> if anyone else is doing that?
>
>
>
>
> On Wed, Apr 24, 2013 at 1:21 PM, Razique Mahroua <
> razique.mahroua at gmail.com> wrote:
>
>> I feel you, Jacob.
>> Lorin, I had the exact same issue! Using both Argonaut and Bobtail, under
>> high I/O load the mount would go crazy - the server itself wasn't crashing,
>> but it became impossible to unmount the disk or kill the process, so I
>> always ended up rebooting the nodes. What is interesting, though, is that
>> the reason it is still not considered production-ready is the way metadata
>> is currently implemented, rather than the code itself...
>>
>>
>> Razique Mahroua - Nuage & Co
>> razique.mahroua at gmail.com
>> Tel : +33 9 72 37 94 15
>>
>>
>> On Apr 24, 2013, at 17:36, Lorin Hochstein <lorin at nimbisservices.com> wrote:
>>
>> Razique:
>>
>> Out of curiosity, what kinds of problems did you see with CephFS? I've
>> heard it's not ready for production yet, but I haven't heard anybody talk
>> about specific experiences with it.
>>
>> Lorin
>>
>>
>> On Sat, Apr 20, 2013 at 8:14 AM, Razique Mahroua <
>> razique.mahroua at gmail.com> wrote:
>>
>>> Hi Paras,
>>> that's the kind of setup I've always seen myself. After unsuccessful
>>> tests with CephFS, I'll move to the following strategy:
>>> - GlusterFS as shared storage for the instances (check the official
>>> docs, we wrote about its deployment for OpenStack)
>>> - Ceph cluster with the direct RBD gateway from Nova to RADOS (rough
>>> config sketch below)
>>> - Ceph cluster for the imaging service (Glance) as well
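>>>
>>> Rough sketch of the relevant config bits (pool and user names are
>>> placeholders; double-check the exact option names for your release):
>>>
>>>     # cinder.conf
>>>     volume_driver=cinder.volume.drivers.rbd.RBDDriver
>>>     rbd_pool=volumes
>>>     rbd_user=cinder
>>>     rbd_secret_uuid=<libvirt secret uuid>
>>>
>>>     # glance-api.conf
>>>     default_store=rbd
>>>     rbd_store_pool=images
>>>     rbd_store_user=glance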
>>>
>>> Some others use MooseFS as the shared storage as well (we wrote a
>>> deployment guide for it too).
>>> Best regards,
>>> Razique
>>>
>>>
>>> Razique Mahroua - Nuage & Co
>>> razique.mahroua at gmail.com
>>> Tel : +33 9 72 37 94 15
>>>
>>>
>>> On Apr 19, 2013, at 17:05, Paras pradhan <pradhanparas at gmail.com> wrote:
>>>
>>> Well, I am not sure we would want to do that since it is marked
>>> as deprecated. So this is what I am thinking: for shared storage, I will
>>> be using GlusterFS, and use Cinder just for extra block disks on the
>>> instances (sketch below). Is this what OpenStack operators typically do?
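>>>
>>> For the extra disk I mean something along these lines (just a sketch; the
>>> instance and volume IDs are placeholders):
>>>
>>>     cinder create --display-name extra-disk 10          # 10 GB block volume
>>>     nova volume-attach <instance-id> <volume-id> /dev/vdb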
>>>
>>> Thanks
>>> Paras.
>>>
>>>
>>> On Fri, Apr 19, 2013 at 12:10 AM, Razique Mahroua <
>>> razique.mahroua at gmail.com> wrote:
>>>
>>>> More info here:
>>>> http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html
>>>>
>>>> But I'm not sure about the latest updates - you can still use it at the
>>>> moment.
>>>> Razique
>>>>
>>>> Razique Mahroua - Nuage & Co
>>>> razique.mahroua at gmail.com
>>>> Tel : +33 9 72 37 94 15
>>>>
>>>>
>>>> On Apr 18, 2013, at 17:13, Paras pradhan <pradhanparas at gmail.com> wrote:
>>>>
>>>> Regarding block migration, this is what confuses me. This is from the
>>>> OpenStack operations manual:
>>>>
>>>> --
>>>> Theoretically live migration can be done with non-shared storage, using
>>>> a feature known as KVM live block migration. However, this is a
>>>> little-known feature in OpenStack, with limited testing when compared to
>>>> live migration, and is slated for deprecation in KVM upstream.
>>>> --
>>>>
>>>> Paras.
>>>>
>>>>
>>>> On Thu, Apr 18, 2013 at 3:00 AM, Razique Mahroua <
>>>> razique.mahroua at gmail.com> wrote:
>>>>
>>>>> Sure :)
>>>>> Great feedback all around. Many technologies do pretty much everything
>>>>> on paper - but I guess in the end it's more about whether the tech does
>>>>> the job, and does it well.
>>>>> For such a critical implementation, a reliable solution is a must-have -
>>>>> i.e. one that has proven over the years that it can be used and is
>>>>> stable enough for us to enjoy our weekends :)
>>>>>
>>>>> Razique
>>>>>
>>>>> On Apr 18, 2013, at 00:14, Paras pradhan <pradhanparas at gmail.com> wrote:
>>>>>
>>>>> Thanks for the replies, Razique. We are doing a test installation and
>>>>> looking at options for live migration. It looks like both Cinder and
>>>>> shared file storage are options. Between these two, which one do you
>>>>> recommend, considering the Cinder backend will be typical LVM-based
>>>>> commodity hardware?
>>>>>
>>>>> Thanks
>>>>> Paras.
>>>>>
>>>>>
>>>>> On Wed, Apr 17, 2013 at 5:03 PM, Razique Mahroua <
>>>>> razique.mahroua at gmail.com> wrote:
>>>>>
>>>>>> Definitely - use the "--block_migrate" flag with the nova live-migration
>>>>>> command so you don't need shared storage (example below).
>>>>>> You can also boot from Cinder, depending on which version of OpenStack you run.
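>>>>>>
>>>>>> Something like this (sketch; the instance and host names are
>>>>>> placeholders, and the flag is spelled --block_migrate or --block-migrate
>>>>>> depending on your novaclient version):
>>>>>>
>>>>>>     nova live-migration --block-migrate <instance-uuid> <destination-host>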
>>>>>>
>>>>>> Razique Mahroua - Nuage & Co
>>>>>> razique.mahroua at gmail.com
>>>>>> Tel : +33 9 72 37 94 15
>>>>>>
>>>>>>
>>>>>> On Apr 17, 2013, at 23:55, Paras pradhan <pradhanparas at gmail.com> wrote:
>>>>>>
>>>>>> Can we do live migration without using shared storage like GlusterFS,
>>>>>> booting from a Cinder volume instead?
>>>>>>
>>>>>> Sorry, a little off topic.
>>>>>>
>>>>>> Thanks
>>>>>> Paras.
>>>>>>
>>>>>>
>>>>>> On Wed, Apr 17, 2013 at 4:53 PM, Razique Mahroua <
>>>>>> razique.mahroua at gmail.com> wrote:
>>>>>>
>>>>>>> Many use either a proprietary backend or good old LVM (sketch below).
>>>>>>> I'll go with Ceph for it, since there is native integration between
>>>>>>> Cinder/nova-volume and Ceph.
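>>>>>>>
>>>>>>> For the plain LVM backend the classic setup is roughly (sketch; the
>>>>>>> device name is a placeholder):
>>>>>>>
>>>>>>>     pvcreate /dev/sdb
>>>>>>>     vgcreate cinder-volumes /dev/sdb
>>>>>>>     # cinder.conf
>>>>>>>     volume_group=cinder-volumes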
>>>>>>>
>>>>>>> Razique Mahroua - Nuage & Co
>>>>>>> razique.mahroua at gmail.com
>>>>>>> Tel : +33 9 72 37 94 15
>>>>>>>
>>>>>>>
>>>>>>> On Apr 17, 2013, at 23:49, Paras pradhan <pradhanparas at gmail.com> wrote:
>>>>>>>
>>>>>>> What do people use for cinder?
>>>>>>>
>>>>>>> Thanks
>>>>>>> Paras.
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Apr 17, 2013 at 4:41 PM, Razique Mahroua <
>>>>>>> razique.mahroua at gmail.com> wrote:
>>>>>>>
>>>>>>>> I was about to use CephFS (Bobtail), but I can't resize instances
>>>>>>>> without CephFS crashing.
>>>>>>>> I'm currently considering GlusterFS, which not only provides great
>>>>>>>> performance but is also pretty easy to administer :)
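>>>>>>>>
>>>>>>>> For instance storage it's basically just one mount per compute node,
>>>>>>>> e.g. (sketch; the hostname and volume name are placeholders):
>>>>>>>>
>>>>>>>>     mount -t glusterfs gluster01:/nova-instances /var/lib/nova/instances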
>>>>>>>>
>>>>>>>> On Apr 17, 2013, at 22:07, JuanFra Rodriguez Cardoso
>>>>>>>> <juanfra.rodriguez.cardoso at gmail.com> wrote:
>>>>>>>>
>>>>>>>> Glance and Nova with MooseFS.
>>>>>>>> Reliable, good performance and easy configuration.
>>>>>>>>
>>>>>>>> ---
>>>>>>>> JuanFra
>>>>>>>>
>>>>>>>>
>>>>>>>> 2013/4/17 Jacob Godin <jacobgodin at gmail.com>
>>>>>>>>
>>>>>>>>> Hi all,
>>>>>>>>>
>>>>>>>>> Just a quick survey for all of you running distributed file
>>>>>>>>> systems for nova-compute instance storage. What are you running? Why are
>>>>>>>>> you using that particular file system?
>>>>>>>>>
>>>>>>>>> We are currently running CephFS and chose it because we are
>>>>>>>>> already using Ceph for volume and image storage. It works great, except for
>>>>>>>>> snapshotting, where we see slow performance and high CPU load.
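>>>>>>>>>
>>>>>>>>> We mount it on the compute nodes with something like (sketch; the
>>>>>>>>> monitor address and keyring path are placeholders):
>>>>>>>>>
>>>>>>>>>     mount -t ceph mon01:6789:/ /var/lib/nova/instances -o name=admin,secretfile=/etc/ceph/admin.secret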
>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Lorin Hochstein
>> Lead Architect - Cloud Services
>> Nimbis Services, Inc.
>> www.nimbisservices.com
>>
>>
>>
>>
>>
>
>
>
>
--
Joe Topjian
Systems Administrator
Cybera Inc.
www.cybera.ca
Cybera is a not-for-profit organization that works to spur and support
innovation, for the economic benefit of Alberta, through the use
of cyberinfrastructure.