[openstack-dev] [TripleO] Improving Swift deployments with TripleO

Christian Schwede cschwede at redhat.com
Mon Aug 22 13:40:23 UTC 2016


On 04.08.16 15:39, Giulio Fidente wrote:
> On 08/04/2016 01:26 PM, Christian Schwede wrote:
>> On 04.08.16 10:27, Giulio Fidente wrote:
>>> On 08/02/2016 09:36 PM, Christian Schwede wrote:
>>>> Hello everyone,
>>>
>>> thanks Christian,
>>>
>>>> I'd like to improve the Swift deployments done by TripleO. There are a
>>>> few problems today when deployed with the current defaults:
>>>>
>>>> 1. Adding new nodes (or replacing existing nodes) is not possible,
>>>> because the rings are built locally on each host and a new node doesn't
>>>> know about the "history" of the rings. Therefore rings might become
>>>> different on the nodes, and that results in an unusable state
>>>> eventually.
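
Such divergence is easy to spot, by the way: "swift-recon --md5" reports
the ring checksums across the cluster. A minimal local equivalent,
assuming the default ring locations under /etc/swift, would be:

    import hashlib

    # If these checksums differ between nodes, the rings have diverged
    # as described above (default ring locations under /etc/swift).
    for ring in ('account', 'container', 'object'):
        path = '/etc/swift/%s.ring.gz' % ring
        with open(path, 'rb') as f:
            print(path, hashlib.md5(f.read()).hexdigest())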
>>>
>>> one of the ideas for this was to use a tempurl in the undercloud Swift
>>> to which we could upload the rings built by a single overcloud node,
>>> not by the undercloud
>>>
>>> so I proposed a new heat resource which would permit us to create a
>>> swift tempurl in the undercloud during the deployment
>>>
>>> https://review.openstack.org/#/c/350707/
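
For context, a Swift tempurl is just an HMAC-SHA1 signature over the
method, the expiry time and the object path, so generating the upload URL
for the rings could look roughly like this; the key and path below are
placeholders, not values from a real deployment:

    import hmac
    import time
    from hashlib import sha1

    # Placeholders: the real values come from the undercloud Swift account
    key = 'MYTEMPURLKEY'   # X-Account-Meta-Temp-URL-Key
    path = '/v1/AUTH_undercloud/overcloud-rings/rings.tar.gz'
    expires = int(time.time()) + 3600

    # Standard Swift tempurl signature: HMAC-SHA1 over "METHOD\nEXPIRES\nPATH"
    body = 'PUT\n%d\n%s' % (expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    print('%s?temp_url_sig=%s&temp_url_expires=%d' % (path, sig, expires))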
>>>
>>> if we build the rings on the undercloud we can ignore this and use a
>>> mistral action instead, as pointed out by Steven
>>>
>>> the good thing about building the rings in the overcloud is that it
>>> doesn't force us to have a static node mapping for each and every
>>> deployment, but it makes it hard to cope with heterogeneous environments
>>
>> That's true. However - we still need to collect the device data from all
>> the nodes from the undercloud, push it to at least one overcloud node,
>> build/update the rings there, push it to the undercloud Swift and use
>> that on all overcloud nodes. Or not?
> 
> sure, let's build on the undercloud; when automated with mistral it
> shouldn't make a big difference for the user
> 
>> I was also thinking more about the static node mapping and how to avoid
>> it. Could we add a host alias using the node UUIDs? Those would never
>> change; they are available from the introspection data and could
>> therefore be used in the rings.
>>
>> http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_specific_hieradata.html#collecting-the-node-uuid
>>
> 
> right, this is the mechanism I wanted to use to provide per-node disk
> maps; it's how it works for the ceph disks as well

I looked into this further and proposed a patch upstream:

https://review.openstack.org/358643

This worked fine in my tests; an example /etc/hosts looks like this:

http://paste.openstack.org/show/562206/

Based on that patch we could build the Swift rings even if the nodes are
down or have never been deployed, because the system uuid is unique and
will never change. I updated my tripleo-swift-ring-tool and just ran a
successful deployment with the patch (also using the merged patches
from Giulio).
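
For illustration, building the rings against those uuid-based aliases
could look roughly like this (a sketch using Swift's RingBuilder API;
the alias names, port, weights and device names are all made up):

    from swift.common.ring import RingBuilder

    # Example parameters only: part power 10, 3 replicas, 1h min_part_hours
    builder = RingBuilder(10, 3, 1)

    # One entry per disk, addressed via the uuid-based host aliases
    # (all names below are examples, not taken from a real deployment)
    for alias, device in [
            ('host-32e87b4c-c4a7-41be-865b-191684a6883b', 'sdb'),
            ('host-32e87b4c-c4a7-41be-865b-191684a6883b', 'sdc'),
            ('host-7f1ceb2f-4a42-4d2f-ab5e-b4d0d4f2c9a1', 'sdb')]:
        builder.add_dev({'region': 1, 'zone': 1, 'weight': 100.0,
                         'ip': alias, 'port': 6000, 'device': device})

    builder.rebalance()
    builder.save('object.builder')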

Let me know what you think about it - I think with that patch we could
integrate the tripleo-swift-ring-tool.

-- Christian

>>>> 2. The rings are only using a single device, and it seems that this is
>>>> just a directory and not a mountpoint with a real device. Therefore
>>>> data is stored on the root device - even if you have 100TB of disk
>>>> space in the background. If not fixed manually, your root device will
>>>> run out of space eventually.
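
A quick way to check whether a deployment is affected is to verify that
the configured devices are real mountpoints; a minimal sketch, assuming
the default /srv/node layout:

    import os

    # With the default layout every entry below /srv/node should be a
    # mounted filesystem; a plain directory means the data ends up on
    # the root device as described above.
    srv = '/srv/node'
    for dev in sorted(os.listdir(srv)):
        path = os.path.join(srv, dev)
        print(path, 'mounted' if os.path.ismount(path) else 'NOT mounted')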
>>>
>>> for the disks instead I am thinking of adding a create_resources
>>> wrapper in puppet-swift:
>>>
>>> https://review.openstack.org/#/c/350790
>>> https://review.openstack.org/#/c/350840/
>>>
>>> so that we can pass via hieradata per-node swift::storage::disks maps
>>>
>>> we have a mechanism to push per-node hieradata based on the system
>>> uuid; we could extend the tool to capture the nodes' (system) uuids
>>> and generate per-node maps
>>
>> Awesome, thanks Giulio!
>>
>> I will test that today. So the tool could generate the mapping
>> automatically, and we don't need to filter puppet facts on the nodes
>> themselves. Nice!
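
Just to illustrate, per-node hieradata generated by such a tool might
look roughly like this; the uuid, disk names and file name are made up,
and the actual per-disk parameters depend on the puppet-swift resources:

    import yaml

    # Example only: map the introspected disks of one node to
    # swift::storage::disks, written to a hieradata file named after
    # the node's system uuid (uuid and disks below are made up).
    node_uuid = '32e87b4c-c4a7-41be-865b-191684a6883b'
    hieradata = {
        'swift::storage::disks': {
            'sdb': {},
            'sdc': {},
        },
    }
    with open('%s.yaml' % node_uuid, 'w') as f:
        yaml.safe_dump(hieradata, f, default_flow_style=False)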
> 
> and we could re-use the same tool to generate the ceph::osds disk maps
> as well :)
> 
