[Openstack] DRBD storage for Openstack installations

郭耀謙 tonytkdk at gmail.com
Thu May 26 17:26:06 UTC 2011


Hello Diego,
How is your new release version coming along? Does it work well? I haven't tested it yet; maybe I'll play with the new release next week.
By the way, my own test cloud, built with Stackops, is still running well.
Just saying hi and giving a quick report.

Cheers
HugoKuo

2011/5/26 Diego Parrilla Santamaría <diego.parrilla.santamaria at gmail.com>

> Hi Oleg,
>
> thank you very much for your post, it's really instructive. We are taking a
> different approach to HA at the storage level, but I have previously worked with
> DRBD and I think it's a very good choice.
>
> I'm curious about how you have deployed the nova-volume nodes in your
> architecture. You don't specify whether the two nodes of the DRBD cluster run one
> or two instances of nova-volume. If you run a single instance, you have probably
> implemented some kind of fault-tolerant active-passive failover in case the
> nova-volume process fails on the active node, but I would like to know whether
> you can run two nova-volume instances active-active on two different
> physical nodes on top of the shared DRBD resource.
>
> Regards
> Diego
>
> --
> Diego Parrilla
> CEO
> www.stackops.com | diego.parrilla at stackops.com | +34 649 94 43 29 | skype:diegoparrilla
> <http://www.stackops.com>
>
>
>
> On Thu, May 26, 2011 at 1:29 PM, Oleg Gelbukh <ogelbukh at mirantis.com> wrote:
>
>> Hi,
>> We have been researching Openstack for our private cloud, and we want to share
>> our experience and get tips from the community as we go.
>>
>> We have settled on DRBD as the shared storage platform for our installation.
>> LVM is used on top of the DRBD device to manage logical volumes. An OCFS2 file
>> system is created on one of the volumes, mounted, and set as *image_path* and
>> *instance_path* in *nova.conf*; the remaining space is reserved for storage
>> volumes (managed by nova-volume).
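>>
>> To illustrate the layout, the relevant part of nova.conf ends up looking roughly
>> like this (purely illustrative; the flag names are from memory and may differ in
>> your nova release, and the paths are made up, not taken from our actual config):
>>
>>   --instances_path=/mnt/ocfs2/instances
>>   --images_path=/mnt/ocfs2/images
>>   --volume_group=nova-volumes
>>
>> The point is simply that both image and instance data end up on the OCFS2 volume
>> sitting on top of DRBD, while nova-volume carves its volumes out of the rest of
>> the space.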
>>
>> As a result, we have shared storage suitable for features such as live
>> migration and snapshots. We also get some level of fault tolerance from
>> DRBD's I/O error handling, which automatically redirects I/O requests to the
>> peer node over the network in case of a primary node failure. We created a
>> script <https://github.com/Mirantis/openstack-utils/blob/master/recovery_instance_by_id.py>
>> for bootstrapping lost VMs in two crash scenarios:
>> * dom0 host restart/domU failure: restore VMs on the same host
>> * dom0 host failure: restore VMs on the peer node
>> We consider such a pair of servers with shared storage to be a basic building
>> block for the cloud structure.
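>>
>> To give an idea of what such a recovery looks like, here is a simplified sketch
>> using the libvirt Python bindings. This is an illustration only, not the actual
>> recovery_instance_by_id.py; the function, names and paths are assumptions:
>>
>>   import libvirt
>>
>>   def recover_instance(name, xml_path, uri='xen:///'):
>>       # Connect to the hypervisor on the node where the VM should come back up.
>>       conn = libvirt.open(uri)
>>       try:
>>           dom = conn.lookupByName(name)
>>           if dom.isActive():
>>               return dom  # already running, nothing to do
>>       except libvirt.libvirtError:
>>           pass  # domain is not defined on this host (e.g. after a dom0 failure)
>>       # The domain XML and disks live on the shared OCFS2 volume, so the domU
>>       # can be re-defined and started on either node of the DRBD pair.
>>       with open(xml_path) as f:
>>           dom = conn.defineXML(f.read())
>>       dom.create()
>>       return dom
>>
>> In this sketch the same call covers both scenarios above; the only difference is
>> which node you run it on.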
>>
>> For those who may be interested, the details of the DRBD installation are here
>> <http://mirantis.blogspot.com/2011/05/shared-storage-for-openstack-based-on.html>.
>> I'll be glad to answer any questions and would highly appreciate feedback on this.
>>
>> Oleg S. Gelbukh,
>> Mirantis Inc.
>> www.mirantis.com
>>
>> _______________________________________________
>> Mailing list: https://launchpad.net/~openstack
>> Post to     : openstack at lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>

