[Openstack] Live Migration with Gluster Storage

Sylvain Bauza sylvain.bauza at bull.net
Tue Aug 20 08:49:38 UTC 2013


Please note that there is a huge performance improvement if you 
choose to cherry-pick the libgfapi driver which has recently been 
implemented in Nova [1].

That assumes you use Cinder bootable volumes instead of classical 
QCOW2-backed instances, but the improvement is worth it.
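
As a rough sketch, enabling that path means pointing Cinder at the
gluster volume and letting qemu talk to it directly via libgfapi. The
option names below are the ones used by the GlusterFS Cinder driver and
the matching Nova flag around this release; double-check them against
your version:

    # /etc/cinder/cinder.conf -- back Cinder volumes with GlusterFS
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares

    # /etc/cinder/glusterfs_shares -- one share per line
    <IP address of gluster server>:/openstack

    # /etc/nova/nova.conf -- allow qemu to attach volumes via libgfapi
    qemu_allowed_storage_drivers = gluster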

-Sylvain


   On 20/08/2013 09:29, Marco CONSONNI wrote:
> Hello Guilherme and all,
>
> I was able to deploy live migration with Gluster: I originally tried 
> NFS like you did, but I ran into problems.
> Gluster, on the contrary, works perfectly and is quite easy to install 
> and configure.
>
> This is what you need to do for a basic installation, assuming that you 
> have one node working as a gluster server, with 2 disks, and a set of 
> compute nodes, working as gluster clients, that use the gluster shared 
> directory for storing the running instances.
>
> -- On the gluster server --
>
> 1) Prepare the volumes
>
> Assuming that you have two disks (/dev/sdb and /dev/sdc), create a 
> primary partition on both of them using the fdisk command.
> Format the partitions with: sudo mkfs.xfs -i size=512 /dev/sdb1 
> and sudo mkfs.xfs -i size=512 /dev/sdc1
> Prepare two directories for mounting the two volumes: sudo mkdir -p 
> /export/brick1 and sudo mkdir -p /export/brick2
> Configure /etc/fstab to mount the volumes by adding the following 
> lines:
>
> /dev/sdb1  /export/brick1  xfs  defaults  0  2
>
> /dev/sdc1  /export/brick2  xfs  defaults  0  2
>
> Mount the two volumes with the command sudo mount -a
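>
> Step 1 as a single shell session, for reference (same devices and 
> mount points as above):
>
>     # create one primary partition on each disk
>     sudo fdisk /dev/sdb        # n, p, 1, accept defaults, then w
>     sudo fdisk /dev/sdc
>
>     # format with XFS; 512-byte inodes leave room for gluster xattrs
>     sudo mkfs.xfs -i size=512 /dev/sdb1
>     sudo mkfs.xfs -i size=512 /dev/sdc1
>
>     # create the mount points, then mount everything in /etc/fstab
>     sudo mkdir -p /export/brick1 /export/brick2
>     sudo mount -a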
>
> 2) Install and configure the gluster server
>
> sudo apt-get install glusterfs-server
>
> sudo gluster volume create openstack stripe 2 <IP address of the 
> server>:/export/brick1 <IP address of the server>:/export/brick2
>
> sudo gluster volume start openstack
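>
> You can check that the volume was created and started with:
>
>     sudo gluster volume info openstack    # should report Status: Started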
>
> -- On the gluster clients / compute nodes --
>
> 1) Install the gluster client with the command sudo apt-get install 
> glusterfs-client
>
> 2) In /etc/fstab, configure a gluster filesystem mounted at 
> /var/lib/nova/instances by adding the following line:
>
> <IP address of gluster server>:/openstack /var/lib/nova/instances 
> glusterfs defaults,_netdev 0 0
>
> Note that if you already have a /var/lib/nova/instances directory 
> on the compute node, this fstab entry simply 'hides' it, but the 
> contents are still there.
> This configuration is needed to force the compute node to store 
> instances on the gluster shared directory.
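>
> To bring the mount up and have nova use it (the service name below is 
> the Ubuntu one; adjust to your distribution):
>
>     sudo mount /var/lib/nova/instances
>     df -hT /var/lib/nova/instances    # should report type fuse.glusterfs
>     sudo service nova-compute restart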
>
> Hope it helps,
> Marco.
>
>
>
> 2013/8/7 Guilherme Russi <luisguilherme.cr at gmail.com 
> <mailto:luisguilherme.cr at gmail.com>>
>
>     Hello guys,
>
>     I've been trying to deploy live migration on my cloud using NFS,
>     but without success. I'd like to know if somebody has tried live
>     migration with Gluster storage: does it work? Did you run into
>     any problems installing it? Is it easy to install by following
>     the documentation on its website?
>
>     The only thing left for my cloud to work 100% is live
>     migration.
>
>     Thank you all.
>
>     Guilherme.
>
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
