[openstack-dev] [TripleO] Improving Swift deployments with TripleO

Christian Schwede cschwede at redhat.com
Thu Aug 4 11:26:04 UTC 2016


On 04.08.16 10:27, Giulio Fidente wrote:
> On 08/02/2016 09:36 PM, Christian Schwede wrote:
>> Hello everyone,
> 
> thanks Christian,
> 
>> I'd like to improve the Swift deployments done by TripleO. There are a
>> few problems today when deployed with the current defaults:
>>
>> 1. Adding new nodes (or replacing existing nodes) is not possible,
>> because the rings are built locally on each host and a new node doesn't
>> know about the "history" of the rings. Therefore rings might become
>> different on the nodes, and that results in an unusable state eventually.
> 
> one of the ideas for this was to use a tempurl in the undercloud swift
> to which the rings built by a single overcloud node, not by the
> undercloud, would be uploaded
> 
> so I proposed a new heat resource which would permit us to create a
> swift tempurl in the undercloud during the deployment
> 
> https://review.openstack.org/#/c/350707/
> 
> if we build the rings on the undercloud we can ignore this and use a
> mistral action instead, as pointed out by Steven
> 
> the good thing about building rings in the overcloud is that it doesn't
> force us to have a static node mapping for each and every deployment, but
> it makes it hard to cope with heterogeneous environments

That's true. However - we still need to collect the device data from all
the nodes on the undercloud, push it to at least one overcloud node,
build/update the rings there, push them to the undercloud Swift and use
that on all overcloud nodes. Or not?
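
(Just to illustrate the "push it to the undercloud Swift" part: the nodes
could upload/fetch a rings tarball through a standard Swift tempurl,
computed roughly like below - account, container and key are made up:)

    import hmac
    from hashlib import sha1
    from time import time

    key = 'SECRET_TEMPURL_KEY'   # X-Account-Meta-Temp-URL-Key on the undercloud Swift
    method = 'GET'               # or PUT for the node uploading the rings
    expires = int(time()) + 3600
    path = '/v1/AUTH_overcloud/overcloud-rings/rings.tar.gz'

    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
    url = 'http://undercloud:8080%s?temp_url_sig=%s&temp_url_expires=%s' % (
        path, sig, expires)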

That leaves some room for new inconsistencies IMO: how do we ensure that
the overcloud node uses the latest rings to start with? Also, ring building
has to be limited to a single overcloud node, otherwise we might end up
with multiple ring-building nodes. And how can an operator manually modify
the rings?

The tool to build the rings on the undercloud could be further improved
later; for example, I'd like to be able to move data to new nodes slowly
over time, and also to query existing storage servers about the progress.
Therefore we need more functionality than is currently available in the
ring-building part of puppet-swift IMO.
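
Just to sketch the "move data slowly" idea using Swift's RingBuilder API
(device details are made up, and the real tool would of course have to
respect min_part_hours between rebalances):

    from swift.common.ring import RingBuilder

    builder = RingBuilder.load('/etc/swift/object.builder')

    # add the new device with a small weight first ...
    dev_id = builder.add_dev({'region': 1, 'zone': 1,
                              'ip': '192.168.1.200', 'port': 6000,
                              'device': 'sdb', 'weight': 10.0,
                              'meta': ''})
    builder.rebalance()
    builder.save('/etc/swift/object.builder')

    # ... and raise it step by step in later runs, once replication caught up
    builder.set_dev_weight(dev_id, 100.0)
    builder.rebalance()
    builder.save('/etc/swift/object.builder')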

I think if we move this step to the undercloud we could solve a lot of
these challenges in a consistent way. WDYT?

I was also thinking more about the static node mapping and how to avoid
it. Could we add a host alias using the node UUIDs? That would never
change; it's available from the introspection data and could therefore be
used in the rings.

http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_specific_hieradata.html#collecting-the-node-uuid
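
For example (purely hypothetical values), the deployment could write
/etc/hosts aliases per node and then use the UUID as the stable "hostname"
in the ring devices:

    # sketch: map node UUIDs (from introspection) to their current storage IPs
    nodes = {
        '32e87b4c-c4a7-41be-865b-191684a6883b': '192.168.1.200',
        '8abcdef0-1234-5678-9abc-def012345678': '192.168.1.201',
    }

    with open('/etc/hosts', 'a') as f:
        for uuid, ip in nodes.items():
            f.write('%s %s\n' % (ip, uuid))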

>> 2. The rings are only using a single device, and it seems that this is
>> just a directory and not a mountpoint with a real device. Therefore data
>> is stored on the root device - even if you have 100TB disk space in the
>> background. If not fixed manually your root device will run out of space
>> eventually.
> 
> for the disks instead I am thinking of adding a create_resources wrapper
> in puppet-swift:
> 
> https://review.openstack.org/#/c/350790
> https://review.openstack.org/#/c/350840/
>
> so that we can pass via hieradata per-node swift::storage::disks maps
> 
> we have a mechanism to push per-node hieradata based on the system uuid;
> we could extend the tool to capture the nodes' (system) uuids and generate
> per-node maps

Awesome, thanks Giulio!

I will test that today. So the tool could generate the mapping
automatically, and we don't need to filter puppet facts on the nodes
themselves. Nice!
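
Rough idea of what the generated per-node map could look like (the exact
swift::storage::disks format is just a guess until the puppet-swift reviews
settle, and the introspection data below is made up):

    import json

    # system uuid -> block devices, as collected during introspection
    introspected = {
        '32E87B4C-C4A7-41BE-865B-191684A6883B': ['sdb', 'sdc'],
        '8ABCDEF0-1234-5678-9ABC-DEF012345678': ['sdb'],
    }

    node_hieradata = {}
    for system_uuid, disks in introspected.items():
        node_hieradata[system_uuid] = {
            # per-device options would go into the inner hashes
            'swift::storage::disks': dict((dev, {}) for dev in disks),
        }

    print(json.dumps(node_hieradata, indent=2))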

> then, with the above puppet changes and having the per-node map and the
> rings download url, we could feed them to the templates, replace the
> ring-building implementation with an environment file and deploy without
> further customizations
> 
> what do you think?

Yes, that sounds like a good plan to me.

I'll continue working on the ringbuilder tool for now and see how I
integrate this into the Mistral actions.
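
In case it helps to picture the integration, a bare skeleton of what such a
custom action might look like (assuming the plain mistral.actions.base.Action
interface that the existing tripleo-common actions use; class and parameter
names are made up):

    from mistral.actions import base


    class BuildSwiftRingsAction(base.Action):
        """Build/update the Swift rings on the undercloud (skeleton only)."""

        def __init__(self, devices):
            self.devices = devices

        def run(self):
            # build account/container/object rings from self.devices,
            # upload the tarball to the undercloud Swift, return the tempurl
            raise NotImplementedError()

        def test(self):
            return None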

-- Christian


