[Openstack-operators] [Openstack] [Cinder] Re: Multiple machines hosting cinder-volumes with Folsom ?
Sylvain Bauza
sylvain.bauza at digimind.com
Mon Jun 3 07:47:29 UTC 2013
Thanks Jerome for the clarification.
I just published a blog post about adding a second volume to Cinder in
Folsom [3]. Maybe it could be merged into the official Folsom Ubuntu
Cinder documentation? Only the H/A aspects are mentioned there at the moment.
Is someone from the documentation team reading? Could you please point me to
some materials for contributing a new doc on that?
Thanks,
-Sylvain
[3] :
http://sbauza.wordpress.com/2013/06/03/adding-a-second-cinder-volume-with-folsom/
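For reference, the process described in [3] boils down to something like the following on the additional host (a hedged sketch for Ubuntu/Folsom; the controller address, database password, and disk device are placeholders, not taken from this thread):

```shell
# On the additional host: install only the volume service
apt-get install -y cinder-volume

# Create the LVM volume group the driver expects
# (the disk device /dev/sdb is an assumption)
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

# Point the service at the existing controller's message queue and database
# (CONTROLLER_IP and CINDER_DBPASS are placeholders)
cat >> /etc/cinder/cinder.conf <<'EOF'
sql_connection = mysql://cinder:CINDER_DBPASS@CONTROLLER_IP/cinder
rabbit_host = CONTROLLER_IP
volume_group = cinder-volumes
EOF

service cinder-volume restart
```

Once the new service reports in, `cinder-manage host list` on the controller should show both volume hosts, and the scheduler will place new volumes on whichever host has capacity.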
On 31/05/2013 at 14:26, Jérôme Gallard wrote:
> Hi Sylvain,
>
> Great to hear that you found a way to solve your issue.
>
> Thanks for reporting that you found the Grizzly doc confusing.
> In fact, the Grizzly release introduced the multi-backend feature.
> This feature allows more than one backend on the same host
> (i.e., several cinder-volume backends running on the same
> host). This feature is not available in Folsom: you can only run
> one cinder-volume per host (in that case, if you want to manage
> several backends, you need several hosts).
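For contrast, the Grizzly multi-backend feature described above is enabled in cinder.conf roughly like this (a sketch based on the Grizzly documentation; the backend section names and backend names are illustrative):

```ini
# cinder.conf (Grizzly): two LVM backends served by one cinder-volume host
enabled_backends = lvm-1,lvm-2

[lvm-1]
volume_group = cinder-volumes-1
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI

[lvm-2]
volume_group = cinder-volumes-2
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI_2
```

Volumes are then routed to a backend via a volume type whose `volume_backend_name` extra spec matches one of the names above.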
>
> Thanks a lot for your remarks,
> Jérôme
>
>
> On Fri, May 31, 2013 at 1:55 PM, Sylvain Bauza
> <sylvain.bauza at digimind.com> wrote:
>
> Thanks, but it didn't match my needs. I already know how to deploy
> Cinder on a single host; my point was more about deploying a
> second cinder-volume instance and, if possible, how to do it.
>
> Never mind, I succeeded in deploying a second cinder-volume just
> by looking at the packages and the config files. It's pretty
> straightforward, so I'm not surprised it wasn't documented.
> Nevertheless, I think the Grizzly doc I mentioned [1] is
> confusing: reading it, I thought Cinder was unable to
> have two distinct volume hosts with the Folsom release. Maybe updating
> the Folsom branch of the Cinder documentation to state that it *is*
> possible is worth a try?
>
> Anyway, I'm documenting the process on my own (new) blog. Stay
> tuned, I'll post the URL here.
>
> -Sylvain
>
>
>
> On 31/05/2013 at 11:39, Jérôme Gallard wrote:
>> Hi Sylvain,
>>
>> Maybe the Folsom documentation for Cinder will help you:
>> http://docs.openstack.org/folsom/openstack-compute/install/apt/content/osfolubuntu-cinder.html
>>
>>
>> Regards,
>> Jérôme
>>
>>
>> On Fri, May 31, 2013 at 9:21 AM, Sylvain Bauza
>> <sylvain.bauza at digimind.com> wrote:
>>
>> Putting openstack-ops@ in the loop :-)
>>
>> On 30/05/2013 at 17:26, Sylvain Bauza wrote:
>>
>> On 30/05/2013 at 15:25, Sylvain Bauza wrote:
>>
>> Hi,
>>
>> It is quite unclear to me whether it is possible
>> *in Folsom* to have two distinct Cinder hosts, each
>> with one LVM backend called cinder-volumes.
>>
>> As per the doc [1], I would say the answer is no, but
>> could you please confirm?
>>
>> If so, do you have any idea how to work around a nearly
>> full cinder-volumes LVM VG? (I can hardly add a
>> new disk to create a second PV.)
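On the nearly-full VG question: without a second host, the usual LVM answer is to extend the existing volume group with any spare block device, or, as a stopgap only, with a file-backed loop device. A sketch; the device paths and file size are assumptions, and the loop variant does not survive a reboot without extra setup:

```shell
# Check remaining free space in the volume group
vgs cinder-volumes

# If any spare block device is available, add it as a second PV
pvcreate /dev/sdc
vgextend cinder-volumes /dev/sdc

# Stopgap only: back an extra PV with a sparse file via a loop device
truncate -s 100G /var/lib/cinder/extra-pv.img
losetup /dev/loop0 /var/lib/cinder/extra-pv.img
pvcreate /dev/loop0
vgextend cinder-volumes /dev/loop0
```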
>>
>> Thanks,
>> -Sylvain
>>
>> [1] :
>> http://docs.openstack.org/grizzly/openstack-block-storage/admin/content/multi_backend.html
>>
>>
>> Replying to myself: as per [2], it seems a
>> multiple cinder-volume setup in Folsom is achievable.
>> Could someone from Cinder confirm that this setup is OK?
>>
>> [2] : https://lists.launchpad.net/openstack/msg21825.html
>>
>>
>>
>> _______________________________________________
>> Mailing list: https://launchpad.net/~openstack
>> Post to     : openstack at lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
>