[Openstack] using Glusterfs for instance storage

Vishvananda Ishaya vishvananda at gmail.com
Thu Apr 11 18:14:37 UTC 2013


You should check your syslog for AppArmor denial messages. It is possible
AppArmor is getting in the way here.
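That check might look like the following on an Ubuntu compute node; the log locations are the distro defaults and the sample line is illustrative, not taken from the thread:

```shell
# A sample AppArmor denial in the format syslog records it
# (profile name and path are illustrative placeholders):
sample='kernel: audit: type=1400 apparmor="DENIED" operation="open" profile="libvirt-xxxx" name="/exports/instances/..."'

# On a real compute node you would run:
#   grep -i 'apparmor="DENIED"' /var/log/syslog
#   dmesg | grep -i apparmor

# Show that the grep pattern matches a denial line:
echo "$sample" | grep -c 'apparmor="DENIED"'   # prints 1
```

If denials do show up, putting the libvirt profile into complain mode with `aa-complain` is a quick way to confirm AppArmor is the culprit before writing a proper profile override.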

Vish

On Apr 11, 2013, at 8:35 AM, John Paul Walters <jwalters at isi.edu> wrote:

> Hi Sylvain,
> 
> I agree, though I've confirmed that the UID and GID are consistent across both the compute nodes and my Glusterfs nodes. 
> 
> JP
> 
> 
> On Apr 11, 2013, at 11:22 AM, Sylvain Bauza <sylvain.bauza at digimind.com> wrote:
> 
>> Agree.
>> As with other shared filesystems, it is *highly* important to make sure the Nova UID and GID are consistent across all compute nodes. 
>> If they are not, you have to run usermod and fix the ownership of all instance files...
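A quick way to run Sylvain's check is to compare the numeric IDs on every host; the loop below and the placeholder ID are a sketch, not values from the thread:

```shell
# Print the numeric UID/GID of the nova service account on this host;
# run the same on every compute node and Gluster server and compare.
for svc_user in nova; do
    uid=$(id -u "$svc_user" 2>/dev/null || echo "missing")
    gid=$(id -g "$svc_user" 2>/dev/null || echo "missing")
    echo "$svc_user uid=$uid gid=$gid"
done

# If the IDs differ between hosts, align them and re-own the files
# (108 is a placeholder ID, not one taken from the thread):
#   sudo usermod -u 108 nova && sudo groupmod -g 108 nova
#   sudo chown -R nova:nova /var/lib/nova/instances
```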
>> 
>> -Sylvain
>> 
>>> On 11/04/2013 at 16:49, Razique Mahroua wrote:
>>> Hi JP,
>>> my bet is that this is a write-permissions issue. Does nova have the right to write within the mounted directory?
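One way to answer that question directly is a write test as the nova user; the mount point below is the Nova default and may differ on JP's nodes:

```shell
# Try to create and remove a file as the nova user in the shared
# instances directory (adjust the path to the actual Gluster mount).
dir=/var/lib/nova/instances
if sudo -u nova touch "$dir/.write_test" 2>/dev/null; then
    sudo -u nova rm -f "$dir/.write_test"
    echo "nova can write to $dir"
else
    echo "nova cannot write to $dir - check ownership and mode"
fi
```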
>>> 
>>> Razique Mahroua - Nuage & Co
>>> razique.mahroua at gmail.com
>>> Tel : +33 9 72 37 94 15
>>> 
>>> 
>>> 
>>> On Apr 11, 2013, at 16:36, John Paul Walters <jwalters at isi.edu> wrote:
>>> 
>>>> Hi,
>>>> 
>>>> We've started implementing a Glusterfs-based solution for instance storage in order to support live migration.  I've run into a strange problem when using a multi-node Gluster setup, and I hope someone has a suggestion for resolving it.
>>>> 
>>>> I have a 12-node distributed/replicated Gluster cluster.  I can mount it on my client machines, and it seems to be working fine.  When I launch instances, the nova-compute logs on the client machines give me two error messages:
>>>> 
>>>> The first is a qemu-kvm error: could not open disk image /exports/instances/instances/instance-00000242/disk: Invalid argument
>>>> (full output at http://pastebin.com/i8vzWegJ)
>>>> 
>>>> The second error message comes a short time later, ending with nova.openstack.common.rpc.amqp Invalid: Instance has already been created
>>>> (full output at http://pastebin.com/6Ta4kkBN)
>>>> 
>>>> This happens reliably with the multi-Gluster-node setup.  Oddly, after creating a test Gluster volume composed of a single brick and single node, everything works fine.
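For what it's worth, one known cause of "Invalid argument" (EINVAL) when qemu-kvm opens a disk image on a FUSE-mounted Gluster volume is O_DIRECT: with cache=none, qemu opens the image with O_DIRECT, which the FUSE mount rejects unless direct I/O is enabled. Whether that applies to JP's setup is an assumption, not something confirmed in the thread, but it is cheap to test:

```shell
# Attempt an O_DIRECT write on the Gluster mount; EINVAL here would
# match the qemu error. (Mount point is assumed - substitute the real one.)
mnt=/var/lib/nova/instances
dd if=/dev/zero of="$mnt/odirect_test" bs=4096 count=1 oflag=direct \
    && echo "O_DIRECT write OK"
rm -f "$mnt/odirect_test"

# If the dd fails with "Invalid argument", remount with direct I/O on
# (gluster-host:/volname is a placeholder):
#   mount -t glusterfs -o direct-io-mode=enable gluster-host:/volname "$mnt"
```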
>>>> 
>>>> Does anyone have any suggestions?
>>>> 
>>>> thanks,
>>>> JP
>>>> 
>>>> 
>>>> _______________________________________________
>>>> Mailing list: https://launchpad.net/~openstack
>>>> Post to     : openstack at lists.launchpad.net
>>>> Unsubscribe : https://launchpad.net/~openstack
>>>> More help   : https://help.launchpad.net/ListHelp
>>> 
>>> 
>>> 
>> 
> 
