[Openstack] two or more NFS / gluster mounts

Diego Parrilla Santamaría diego.parrilla.santamaria at gmail.com
Thu Dec 20 17:54:37 UTC 2012


Hi John,

Yes, that's a really good solution. It is exactly what the StackOps
Enterprise Edition offers out of the box. It's a simpler alternative,
assuming you are big enough to have several clusters of compute nodes, each
cluster with a different preassigned quality of service. And it works...
if the scheduler function works.

My proposal about a hierarchy of folders for shared storage comes from the
requirements of some customers who want to be able to control I/O on a
per-tenant basis and want to use very cheap, scalable shared storage.

Let's say that StackOps EE currently follows a static approach, and we would
like to have a dynamic one ;-)


Cheers
Diego
--
Diego Parrilla
CEO
www.stackops.com | diego.parrilla at stackops.com | +34 649 94 43 29 | skype:diegoparrilla
<http://www.stackops.com/>




On Thu, Dec 20, 2012 at 6:37 PM, John Griffith
<john.griffith at solidfire.com> wrote:

>
>
> On Thu, Dec 20, 2012 at 9:37 AM, JuanFra Rodriguez Cardoso <
> juanfra.rodriguez.cardoso at gmail.com> wrote:
>
>> Yes, I really agree with Diego.
>> It would be a good idea to submit a blueprint for this tenant-based
>> storage feature.
>>
>> According to the current quota controls, they limit the following (a
>> nova.conf sketch follows below):
>>
>>    - Number of volumes which may be created
>>    - Total size of all volumes within a project, as measured in GB
>>    - Number of instances which may be launched
>>    - Number of processor cores which may be allocated
>>    - Publicly accessible IP addresses
>>
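>> (A minimal nova.conf sketch of the corresponding quota flags, with
>> illustrative values; exact option names may vary by release:)
>>
>> quota_instances=10
>> quota_cores=20
>> quota_gigabytes=1000
>> quota_volumes=10
>> quota_floating_ips=10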
>>
>> Another new feature related to shared storage we had thought about is
>> to include an option for choosing whether an instance has to be replicated
>> or not, i.e. in a MooseFS scenario, to indicate the goal (number of
>> replicas). It's useful, for example, in testing or demo projects where HA
>> is not required.
>>
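>> (As an illustration, assuming MooseFS's standard client tools and a
>> hypothetical instance directory, a goal of 1, i.e. a single copy with no
>> extra replicas, could be set with something like:)
>>
>> mfssetgoal -r 1 /var/lib/nova/instances/instance-0000X
>>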
>> Regards,
>>
>> JuanFra.
>>
>> 2012/12/20 Diego Parrilla Santamaría <diego.parrilla.santamaria at gmail.com
>> >
>>
>>> mmm... not sure if the concept of oVirt's multiple storage domains is
>>> something that can be implemented in Nova as it is, but I would like to
>>> share my thoughts because it's something that, from my point of view,
>>> matters.
>>>
>>> If you want to change the folder where the nova instances are stored, you
>>> have to modify the 'instances_path' option in nova-compute.conf:
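>>> (a minimal example, assuming the default location:)
>>>
>>> instances_path=/var/lib/nova/instances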
>>>
>>> If you look at that folder (/var/lib/nova/instances/ by default) you
>>> will see a structure like this:
>>>
>>> drwxrwxr-x   2 nova nova   73 Dec  4 12:16 _base
>>> drwxrwxr-x   2 nova nova    5 Oct 16 13:34 instance-00000002
>>> ...
>>> drwxrwxr-x   2 nova nova    5 Nov 26 17:38 instance-0000005c
>>> drwxrwxr-x   2 nova nova    6 Dec 11 15:38 instance-00000065
>>>
>>> If you have shared storage for that folder, then your fstab entry
>>> looks like this one:
>>> 10.15.100.3:/volumes/vol1/zone1/instances /var/lib/nova/instances nfs
>>> defaults 0 0
>>>
>>> So, I think that it could be possible to implement something like
>>> 'storage domains', but tenant/project oriented. Instead of having multiple
>>> generic mountpoints, each tenant would have a private mountpoint for
>>> his/her instances. So the /var/lib/nova/instances could look like this
>>> sample:
>>>
>>> /instances
>>> +/tenantID1
>>> ++/instance-0000X
>>> ++/instance-0000Y
>>> ++/instance-0000Z
>>> +/tenantID2
>>> ++/instance-0000A
>>> ++/instance-0000B
>>> ++/instance-0000C
>>> ...
>>> +/tenantIDN
>>> ++/instance-0000A
>>> ++/instance-0000B
>>> ++/instance-0000C
>>>
>>> And in the /etc/fstab something like this sample too:
>>> 10.15.100.3:/volumes/vol1/zone1/instances/tenantID1
>>> /var/lib/nova/instances/tenantID1 nfs defaults 0 0
>>> 10.15.100.3:/volumes/vol1/zone1/instances/tenantID2
>>> /var/lib/nova/instances/tenantID2 nfs defaults 0 0
>>> ...
>>> 10.15.100.3:/volumes/vol1/zone1/instances/tenantIDN
>>> /var/lib/nova/instances/tenantIDN nfs defaults 0 0
>>>
>>> With this approach, we could have something like per-tenant QoS on
>>> shared storage to resell different storage capabilities on a tenant basis.
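>>>
>>> (Purely as a hypothetical illustration of that idea: each per-tenant export
>>> could be backed by a different tier, or mounted with different options, e.g.
>>>
>>> 10.15.100.3:/volumes/ssd/instances/tenantID1 /var/lib/nova/instances/tenantID1 nfs defaults 0 0
>>> 10.15.100.4:/volumes/sata/instances/tenantID2 /var/lib/nova/instances/tenantID2 nfs defaults 0 0
>>>
>>> so each tenant's mountpoint inherits the IO profile of its backing volume.)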
>>>
>>> I would love to hear feedback, drawbacks, improvements...
>>>
>>> Cheers
>>> Diego
>>>
>>>  --
>>> Diego Parrilla
>>> CEO
>>> www.stackops.com | diego.parrilla at stackops.com | +34 649 94 43 29 | skype:diegoparrilla
>>> <http://www.stackops.com/>
>>>
>>>
>>>
>>>
>>> On Thu, Dec 20, 2012 at 4:32 PM, Andrew Holway <a.holway at syseleven.de> wrote:
>>>
>>>> Good plan.
>>>>
>>>>
>>>> https://blueprints.launchpad.net/openstack-ci/+spec/multiple-storage-domains
>>>>
>>>>
>>>> On Dec 20, 2012, at 4:25 PM, David Busby wrote:
>>>>
>>>> > I may of course be entirely wrong :) which would be cool if this is
>>>> achievable / on the roadmap.
>>>> >
>>>> > At the very least if this is not already in discussion I'd raise it
>>>> on launchpad as a potential feature.
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > On Thu, Dec 20, 2012 at 3:19 PM, Andrew Holway <a.holway at syseleven.de>
>>>> wrote:
>>>> > Ah shame. You can specify different storage domains in oVirt.
>>>> >
>>>> > On Dec 20, 2012, at 4:16 PM, David Busby wrote:
>>>> >
>>>> > > Hi Andrew,
>>>> > >
>>>> > > An interesting idea, but I am unaware of nova supporting storage
>>>> > > affinity in any way; it does support host affinity IIRC. As a kludge you
>>>> > > could have, say, some nova compute nodes using your "slow mount" and reserve
>>>> > > the "fast mount" nodes as required, perhaps even defining separate zones
>>>> > > for deployment?
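>>>> > >
>>>> > > (For what it's worth, a minimal sketch of the "separate zones" idea,
>>>> > > assuming the node_availability_zone flag of that era and made-up zone names:
>>>> > >
>>>> > > # nova.conf on compute nodes with the fast mount
>>>> > > node_availability_zone=fast-storage
>>>> > >
>>>> > > # nova.conf on compute nodes with the slow mount
>>>> > > node_availability_zone=slow-storage
>>>> > >
>>>> > > and then request the desired zone when booting the instance.)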
>>>> > >
>>>> > > Cheers
>>>> > >
>>>> > > David
>>>> > >
>>>> > >
>>>> > >
>>>> > >
>>>> > >
>>>> > > On Thu, Dec 20, 2012 at 2:53 PM, Andrew Holway <
>>>> a.holway at syseleven.de> wrote:
>>>> > > Hi David,
>>>> > >
>>>> > > It is for nova.
>>>> > >
>>>> > > I'm not sure I understand. I want to be able to say to openstack:
>>>> > > "openstack, please install this instance (A) on this mountpoint and please
>>>> > > install this instance (B) on this other mountpoint." I am planning on
>>>> > > having two NFS / Gluster based stores, a fast one and a slow one.
>>>> > >
>>>> > > I probably will not want to say please every time :)
>>>> > >
>>>> > > Thanks,
>>>> > >
>>>> > > Andrew
>>>> > >
>>>> > > On Dec 20, 2012, at 3:42 PM, David Busby wrote:
>>>> > >
>>>> > > > Hi Andrew,
>>>> > > >
>>>> > > > Is this for glance or nova ?
>>>> > > >
>>>> > > > For nova change:
>>>> > > >
>>>> > > > state_path = /var/lib/nova
>>>> > > > lock_path = /var/lib/nova/tmp
>>>> > > >
>>>> > > > in your nova.conf
>>>> > > >
>>>> > > > For glance I'm unsure; it may be easier to just mount gluster right
>>>> > > > onto /var/lib/glance (similarly, you could do the same for /var/lib/nova).
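>>>> > > >
>>>> > > > (For example, a minimal fstab entry for that, assuming a gluster volume
>>>> > > > named "glance-vol" served from a host called "gluster1":)
>>>> > > >
>>>> > > > gluster1:/glance-vol /var/lib/glance glusterfs defaults,_netdev 0 0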
>>>> > > >
>>>> > > > And just my £0.02: I've had no end of problems getting gluster to
>>>> > > > "play nice" on small POC clusters (3-5 nodes; I've tried NFS, tried
>>>> > > > glusterfs, tried 2-replica N-distribute setups with many a random glusterfs
>>>> > > > death), and as such I have opted for using ceph.
>>>> > > >
>>>> > > > Ceph's RADOS can also be used with cinder, from the brief reading
>>>> > > > I've been doing into it.
>>>> > > >
>>>> > > >
>>>> > > > Cheers
>>>> > > >
>>>> > > > David
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > > > On Thu, Dec 20, 2012 at 1:53 PM, Andrew Holway <
>>>> a.holway at syseleven.de> wrote:
>>>> > > > Hi,
>>>> > > >
>>>> > > > If I have /nfs1mount and /nfs2mount or /nfs1mount and
>>>> /glustermount can I control where openstack puts the disk files?
>>>> > > >
>>>> > > > Thanks,
>>>> > > >
>>>> > > > Andrew
>>>> > > >
>>>> > > >
>>>> > >
>>>> > >
>>>> > >
>>>> >
>>>> >
>>>> >
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
> For the sake of simplicity, could you just use the existing features and
> add a compute node with instances_path set to the *other* back-ends? Then
> utilize the existing compute scheduler parameters to determine which
> compute node, and thus which storage back-end, is used for the instance?
>
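> (A minimal sketch of that approach, with made-up exports and hostnames: each
> compute node mounts a different back-end at the default instances_path, e.g.
>
> # /etc/fstab on the "fast" compute nodes
> 10.15.100.3:/volumes/vol1/fast /var/lib/nova/instances nfs defaults 0 0
>
> # /etc/fstab on the "slow" compute nodes
> 10.15.100.4:/volumes/vol1/slow /var/lib/nova/instances nfs defaults 0 0
>
> and the compute scheduler then decides which node, and therefore which storage
> back-end, each instance lands on.)
>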
> John
>

