[openstack-dev] [puppet] [ceph] Managing a ceph cluster's lifecycle with puppet
bryan.stillwell at twcable.com
Tue Jun 2 14:57:48 UTC 2015
Thanks David, this helped out quite a bit for my phase 1 work.
How do you handle the journal partition when replacing a failed drive?
On 5/31/15, 3:06 PM, "David Moreau Simard" <dmsimard at iweb.com> wrote:
>The configuration is done through a provider - you can use it in your
>composition layer, your site.pp or wherever else you want - a bit like
>this:
>
> ceph_config {
>   'osd/osd_journal_size': value => '16384';
>   'osd/osd_max_backfills': value => '1';
>   # ...
>   'client/rbd_cache': value => 'true';
>   'client/rbd_cache_writethrough_until_flush': value => 'true';
> }
>Off the top of my head, the major things that the module does not yet
>support are:
>- Advanced pool configuration (Erasure coded pools, Cache tiering)
>- CRUSH Map management
>In general, puppet-ceph is still early in the development process
>compared to mature and feature-full modules like puppet-nova,
>puppet-neutron or puppet-cinder.
>Contributions and feedback are definitely welcome!
>Personally I use puppet-ceph to bootstrap the installation of the
>nodes/OSDs and manage the cluster's critical configuration manually.
>For upgrading the cluster, I'm biased towards orchestrating this with
>another tool like Ansible since you usually want to control which server
>you upgrade first.
>David Moreau Simard
>On 2015-05-29 05:32 PM, Stillwell, Bryan wrote:
>> Hey guys,
>> One of my top tasks this quarter is to puppetize our ceph environments.
>> I've already spent some time going over the module, but wanted to start
>> a conversation around some of the tasks I would like the module to do.
>> Currently I'm planning on doing the work in the following phases:
>> Phase 1 - Switch the installation of ceph packages, management of
>> ceph.conf, and management of cephx keys from ceph-deploy to puppet-ceph.
>> Phase 2 - Switch the installation and management of mon nodes to be
>> handled by puppet-ceph.
>> Phase 3 - Switch the installation and management of osd nodes to be
>> handled by puppet-ceph.
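For reference, the three phases above might look roughly like the following in puppet-ceph. This is a sketch only: the class and defined-type names (`ceph`, `ceph::key`, `ceph::mon`, `ceph::osd`) follow the module's documented interface, but every value below (fsid, hostnames, secrets, device paths) is an illustrative placeholder.

```puppet
# Phase 1: packages, ceph.conf, and cephx keys (placeholder values).
class { 'ceph':
  fsid                => 'd5252e7d-75bc-4083-85ed-fe2368a8d78f',
  mon_initial_members => 'mon01,mon02,mon03',
  mon_host            => '10.0.0.1,10.0.0.2,10.0.0.3',
}

ceph::key { 'client.admin':
  secret  => 'AQD...placeholder...==',  # placeholder cephx secret
  cap_mon => 'allow *',
  cap_osd => 'allow *',
}

# Phase 2: a monitor on this node.
ceph::mon { 'mon01':
  key => 'AQD...placeholder...==',  # placeholder mon key
}

# Phase 3: an OSD on /dev/sdb with its journal on /dev/sdf.
ceph::osd { '/dev/sdb':
  journal => '/dev/sdf',
}
```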
>> I'm mostly done with phase 1, but wanted to ask what the best way to
>> manage additional options in the config file is? I want to be able to
>> manage options like the following:
>> osd_journal_size = 16384
>> osd_max_backfills = 1
>> osd_recovery_max_active = 1
>> osd_recovery_op_priority = 1
>> osd_recovery_max_single_start = 1
>> osd_op_threads = 12
>> osd_crush_initial_weight = 0
>> rbd cache = true
>> rbd cache writethrough until flush = true
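Assuming the `ceph_config` provider works as described elsewhere in this thread, the option list above could be expressed as something like the block below. The `section/key` spellings are assumptions mapped from the ceph.conf names; verify them against your config layout before relying on this.

```puppet
# Sketch: the osd_* options live in the [osd] section of ceph.conf
# and the rbd cache options in [client] - an assumption, not verified
# against any particular cluster.
ceph_config {
  'osd/osd_journal_size':                      value => '16384';
  'osd/osd_max_backfills':                     value => '1';
  'osd/osd_recovery_max_active':               value => '1';
  'osd/osd_recovery_op_priority':              value => '1';
  'osd/osd_recovery_max_single_start':         value => '1';
  'osd/osd_op_threads':                        value => '12';
  'osd/osd_crush_initial_weight':              value => '0';
  'client/rbd_cache':                          value => 'true';
  'client/rbd_cache_writethrough_until_flush': value => 'true';
}
```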
>> Phase 2 seems pretty straight-forward at this point, but if you guys
>> know of any things I should watch out for I wouldn't mind hearing them.
>> Phase 3 is where I have the most questions:
>> How well does puppet-ceph handle pool configuration?
>> Can it handle setting the room/row/rack/node in the CRUSH map when
>> adding new nodes?
>> What's the best process for replacing a failed HDD? A failed journal
>> drive?
>> How could we best utilize puppet-ceph for augmenting an existing
>> cluster without causing performance problems?
>> Is there a good way to decommission hardware?
>> Any ideas around managing ceph rules? (default, ssd-only pools, primary
>> affinity, cache tiering, erasure coding)
>> What's it look like to upgrade a ceph cluster with puppet-ceph?
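On the CRUSH placement question, one possible approach is sketched below. It relies on Ceph's `osd crush location` ceph.conf option (which the init scripts consult when registering an OSD) rather than anything puppet-ceph-specific, and the `$datacenter_room`/`$datacenter_rack` variables are hypothetical facts or Hiera values, not part of the module.

```puppet
# Sketch: derive each OSD's CRUSH position from per-node data.
# $datacenter_room and $datacenter_rack are hypothetical custom
# facts/Hiera values; only $::hostname is a standard fact.
ceph_config {
  'osd/osd_crush_location':
    value => "root=default room=${datacenter_room} rack=${datacenter_rack} host=${::hostname}";
}
```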
>> Bryan Stillwell
>> This E-mail and any of its attachments may contain Time Warner Cable
>>proprietary information, which is privileged, confidential, or subject
>>to copyright belonging to Time Warner Cable. This E-mail is intended
>>solely for the use of the individual or entity to which it is addressed.
>>If you are not the intended recipient of this E-mail, you are hereby
>>notified that any dissemination, distribution, copying, or action taken
>>in relation to the contents of and attachments to this E-mail is
>>strictly prohibited and may be unlawful. If you have received this
>>E-mail in error, please notify the sender immediately and permanently
>>delete the original and any copy of this E-mail and any printout.
>> OpenStack Development Mailing List (not for usage questions)
>>OpenStack-dev-request at lists.openstack.org?subject:unsubscribe