[openstack-dev] [puppet] [ceph] Managing a ceph cluster's lifecycle with puppet

Stillwell, Bryan bryan.stillwell at twcable.com
Fri May 29 21:30:35 UTC 2015


Hey guys,

One of my top tasks this quarter is to puppetize our ceph environments.
I've already spent some time going over the module, but wanted to start a
conversation around some of the tasks I would like the module to do.
Currently I'm planning on doing the work in the following phases:

Phase 1 - Switch the installation of ceph packages, management of ceph.conf,
          and management of cephx keys from ceph-deploy to puppet-ceph.

Phase 2 - Switch the installation and management of mon nodes to be handled
          by puppet-ceph.

Phase 3 - Switch the installation and management of osd nodes to be handled
          by puppet-ceph.
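
To give a feel for phase 1, here's roughly the shape of what I've been
putting together (a minimal sketch; the fsid, addresses, and keys are
placeholders, not our real values):

class { 'ceph':
  fsid                => '4b5c8c0a-ff60-454b-a1b4-9747aa737d19',  # placeholder
  mon_host            => '10.0.0.1,10.0.0.2,10.0.0.3',            # placeholder
  authentication_type => 'cephx',
}

ceph::key { 'client.admin':
  secret  => 'AQBMGHJTkC8HKhAAJ7NH255wYypgm1oVuV41MA==',  # placeholder key
  cap_mon => 'allow *',
  cap_osd => 'allow *',
  cap_mds => 'allow',
}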

I'm mostly done with phase 1, but wanted to ask: what's the best way to
handle additional options in the config file?  I want to be able to manage
options like the following:

[osd]
osd_journal_size = 16384
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_op_priority = 1
osd_recovery_max_single_start = 1
osd_op_threads = 12
osd_crush_initial_weight = 0

[client]
rbd cache = true
rbd cache writethrough until flush = true
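
Digging through the module, the ceph_config type looks like it can set
arbitrary section/option pairs in ceph.conf, so I'm assuming something like
this is the intended approach (a sketch covering a few of the options
above):

# one ceph_config resource per section/option pair
ceph_config {
  'osd/osd_journal_size':                      value => '16384';
  'osd/osd_max_backfills':                     value => '1';
  'osd/osd_recovery_op_priority':              value => '1';
  'client/rbd_cache':                          value => 'true';
  'client/rbd_cache_writethrough_until_flush': value => 'true';
}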


Phase 2 seems pretty straightforward at this point, but if you guys know of
any gotchas I should watch out for, I wouldn't mind hearing about them.
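
For reference, my reading of the module is that a mon boils down to roughly
this (sketch; the id, address, and key are made up):

ceph::mon { 'mon01':
  public_addr => '10.0.0.1',
  key         => 'AQATGHJTUAMqHhAAoGEHNAqxKvsxwhxdAcVAFQ==',  # placeholder
}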

Phase 3 is where I have the most questions:

How well does puppet-ceph handle pool configuration?
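
From what I can tell, ceph::pool covers at least the basics (replica size
and pg counts; the values here are just examples):

ceph::pool { 'volumes':
  ensure  => present,
  pg_num  => 512,
  pgp_num => 512,
  size    => 3,
}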

Can it handle setting the room/row/rack/node in the CRUSH map when adding
new nodes?
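
If the module doesn't handle this directly, my assumption is that we could
fall back on ceph_config and let each OSD place itself at startup, with the
per-host location string coming from hiera (sketch; I haven't verified
these options against our release):

ceph_config {
  'osd/osd_crush_update_on_start': value => 'true';
  'osd/osd_crush_location':        value => 'root=default row=2 rack=r2c3 host=osd01';
}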

What's the best process for replacing a failed HDD?  A journal drive?

How could we best utilize puppet-ceph to augment an existing cluster
without causing performance problems?

Is there a good way to decommission hardware?
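
It looks like ceph::osd accepts ensure => absent, so maybe something like
this (sketch, with /dev/sdb standing in for the disk being retired):

ceph::osd { '/dev/sdb':
  ensure => absent,
}

Though I'd still expect to drain the OSD by hand (reweight it to 0 and wait
for backfill) before letting puppet remove it.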

Any ideas around managing ceph rules?  (default, ssd-only pools, primary
affinity, cache tiering, erasure coding)
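
I haven't spotted anything in the module for rules, so my fallback idea is
a guarded exec around the ceph CLI (sketch; assumes a crush bucket named
'ssd' already exists):

# create a simple replicated rule rooted at the 'ssd' bucket, only once
exec { 'crush-rule-ssd':
  command  => 'ceph osd crush rule create-simple ssd-rule ssd host',
  unless   => 'ceph osd crush rule ls | grep -qx ssd-rule',
  path     => ['/usr/bin', '/bin'],
  provider => shell,
}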

What's it look like to upgrade a ceph cluster with puppet-ceph?


Thanks,

Bryan Stillwell

