[Openstack-operators] [puppet] [swift] Deploying multiple Swift clusters within the same Puppet environment
Vinsh, Adam
adam.vinsh at twcable.com
Fri Jan 8 04:57:13 UTC 2016
Hello Pieter,
I saw your ping in IRC and wanted to follow up with some help.
In short, the module won't support multiple clusters in the same environment.
The puppet master would in fact group together all of those exported resources and combine them into one ring.
About the exported resources part: I see what you see. What exists in the module is a plain resource collector, not a collector that realizes exported resources.
I believe this dates from the early days, when an open source (not Enterprise) Puppet user might not have had access to PuppetDB or whatever else was needed to use exported resources.
I do agree we should update the documentation around this. I'll add that to my todo list.
You could change those collectors and test out this use case if you wanted to.
The way the module uses the ring builder currently suits small, single-node test setups and the CI jobs (it only uses the plain resource collector).
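For example, if you wanted to experiment (untested, and it means patching the module itself), switching the chain in swift::ringbuilder over to exported-resource collectors would look roughly like this. Scoping the collector by a tag is one way each cluster could pick up only its own devices; the $cluster_name variable and hiera key here are just illustrative:

  # Untested sketch of a modified swift::ringbuilder ordering: collect
  # *exported* ring_account_device resources, scoped by a cluster tag.
  $cluster_name = hiera('swift_cluster_name')   # hypothetical hiera key

  Swift::Ringbuilder::Create['account'] -> Ring_account_device <<| tag == $cluster_name |>> ~> Swift::Ringbuilder::Rebalance['account']

The container and object collectors would need the same change, and the exporting nodes would have to set a matching tag on their @@ring_account_device resources.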
My suggestion is that you build both rings outside of puppet to start with, using swift-ring-builder.
It's a fairly easy tool included with swift and can be easily scripted.
You could then store those rings on your puppet master or some other node.
You could use swift::ringserver and swift::ringsync from the module to distribute those rings.
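Roughly like this (from memory, so double-check the parameter names against the module version you're running; swift::ringserver serves the rings in /etc/swift over rsync and swift::ringsync pulls them down):

  # On whichever node holds the built rings:
  class { '::swift::ringserver':
    local_net_ip => $ring_builder_ip,   # hypothetical variable
  }

  # On the proxy and storage nodes of that cluster:
  swift::ringsync { ['account', 'container', 'object']:
    ring_server => $ring_builder_ip,
  }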
OR
In production I store the rings on a redundant file server and use the wget module to grab them. I also version the ring path and pass that data in via hiera, which helps with coordinating ring updates. The define looks like this:
define swift::sync_ring (
  $ring_server,
  $path,
) {
  # Fetch the named ring file from the ring server into /etc/swift,
  # running the download as the swift user.
  wget::fetch { $name:
    source      => "http://${ring_server}/${path}/${name}",
    execuser    => 'swift',
    destination => "/etc/swift/${name}",
    timeout     => 30,
    cache_dir   => '/var/cache/swift',
    verbose     => false,
  }
}
On an object node profile:
  ::swift::sync_ring { 'account.ring.gz':
    ring_server => $swift_ring_server,
    path        => "swift/swift_rings/<clustername>/${ring_version}",
    require     => File['/etc/swift/'],
  }
With this method, you could then use hiera to key a specific ring to each 1+3 proxy/object set.
As long as proxy A has ring A, it won't be talking to object node B, for example.
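For instance, a per-cluster profile might look roughly like this (the hiera keys and the cluster_a path segment are only illustrative placeholders):

  $swift_ring_server = hiera('profile::swift::ring_server')    # hypothetical key
  $ring_version      = hiera('profile::swift::ring_version')   # hypothetical key

  # One resource per ring file; the define above drops each one
  # into /etc/swift from the versioned path.
  ::swift::sync_ring { ['account.ring.gz', 'container.ring.gz', 'object.ring.gz']:
    ring_server => $swift_ring_server,
    path        => "swift/swift_rings/cluster_a/${ring_version}",
    require     => File['/etc/swift/'],
  }

Each cluster then gets its own hiera values, so its proxies and object nodes only ever see that cluster's rings.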
Also, if you have time, read up on "storage policies" in Swift. I can think of some ways you could get creative there and run both clusters off of a single proxy.
-Adam
From: "Wijngaarden, Pieter van" <pieter.van.wijngaarden at philips.com<mailto:pieter.van.wijngaarden at philips.com>>
Date: Tuesday, December 29, 2015 at 4:50 AM
To: "openstack-operators at lists.openstack.org<mailto:openstack-operators at lists.openstack.org>" <openstack-operators at lists.openstack.org<mailto:openstack-operators at lists.openstack.org>>
Subject: [Openstack-operators] [puppet] [swift] Deploying multiple Swift clusters within the same Puppet environment
Hi,
Hopefully this is the right place to ask for guidance! I'm deploying OpenStack Swift in a development environment and intend to create/deploy multiple Swift clusters with different hardware configurations underneath, and then do performance benchmarking. For example, I'm now building two clusters of 1+3 nodes each (1 proxy + 3 storage nodes): one cluster with NL-SAS disks and another with SAS disks. I also have a single-node 'cluster' up & running already. I'm setting up the configuration of all these clusters in Puppet.
I plan to do the ring creation as follows (please correct me if I'm saying weird or dumb things, I'm fairly new to Puppet):
In the Puppet declaration for each storage node, I think I should create the ring_account_device resources and export them, something like this:
  hiera('swift_devices').each |String $device| {
    @@ring_account_device { "${backbone_ip}:6002/${device}":
      zone   => hiera('swift_zone'),
      weight => hiera('swift_device_weight'),
    }
  }
Then on the proxy node (once for each cluster), I include swift::ringbuilder, which collects these resources and creates the ring. At least for the single-node cluster this works.
However, if I would create multiple clusters, how does Puppet know which exported resources are intended for Cluster 1 (with NL-SAS disks) and which are for Cluster 2 (with SAS disks)? Is this at all possible using Puppet? I fear that if I deploy this, I will have 2 proxies which build a ring, each using all servers/drives on each storage node.
Also, I don't fully understand why (in modules/swift/ringbuilder.pp), I see the following:
Swift::Ringbuilder::Create['account'] -> Ring_account_device <| |> ~> Swift::Ringbuilder::Rebalance['account']
Here, it looks to me like the ring is built with a plain resource collector (Ring_account_device <| |>), while the syntax to realize exported resources should use double angle brackets (<<| |>>), if I understand correctly? What am I missing/overlooking here? Should I run the ring builder on the storage nodes instead, and are things then synced in some other way? (That makes little sense to me...)
Can I make this work, and if so, how? Any help is greatly appreciated!
Happy holidays & kind regards,
Pieter van Wijngaarden