Thank you very much!

cephadm certainly seems to have a lot of advantages, especially regarding upgrades. By the way, does it support "online" upgrades, or is cluster downtime needed?

Best regards

-- 
Francesco Di Nucci
System Administrator 
Compute & Networking Service, INFN Naples

Email: francesco.dinucci@na.infn.it
On 20/05/25 12:58, Michel Jouvin wrote:

Hi,

Management tools are always subject to personal preferences, but honestly, I cannot really think of cons for cephadm. It is really a great tool, improving a lot from version to version, and I'd rate as a major advantage the fact that it is a management layer on top of (or alongside, as you prefer) the cluster: it means you can even repair a somewhat broken cluster with cephadm (as long as you have a mgr running, since cephadm is a mgr module).
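
For illustration, since the orchestrator lives in the mgr, getting it (back) in charge of a cluster comes down to a couple of commands (a minimal sketch, assuming a reachable mgr):

    # cephadm ships as a mgr module: enable it and select it as the orchestrator backend
    ceph mgr module enable cephadm
    ceph orch set backend cephadm
    # check that the orchestrator answers
    ceph orch status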

If you have a cluster that you manage with something else, you can discuss whether you want to move to cephadm and when... but if you are starting a new cluster, I think there is no real discussion! Just, as mentioned by others, you still need a server provisioning tool to deploy the OS, Podman or Docker, and the cephadm SSH keys on the servers that are/will be part of the cluster.
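
As a rough sketch of that last step (hostnames here are just placeholders), getting the cephadm SSH key onto a freshly provisioned server and handing it over to the orchestrator looks something like:

    # export the cluster's public SSH key and install it on the new host
    ceph cephadm get-pub-key > ~/ceph.pub
    ssh-copy-id -f -i ~/ceph.pub root@new-host
    # then add the host to the cluster
    ceph orch host add new-host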

Michel

On 20/05/2025 at 11:25, Utkarsh Bhatt wrote:
Hey!
Deploying and operating a Ceph cluster is a complex endeavour, and much of what can be
done (in a holistic way, from an operator's perspective) depends on the tooling.

1. Cephadm:

cephadm IS the go-to upstream tooling for Ceph orchestration and has both pros and cons.
It deploys Ceph in a containerised fashion (Docker/Podman) and manages the containers on the
operator's behalf. It is definitely worth checking out. It is also worth mentioning that cephadm
is a tool abstracted from the Ceph cluster itself (i.e. the containers constituting the cluster),
which allows you to choose the Ceph container image at deploy time (or later on, via a systematic
upgrade). The choice is basically between the upstream images, built IIRC using CentOS (worth
verifying), and downstream images like the Ceph rock, built using Ceph packages from Ubuntu LTS.
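
To make that concrete, a minimal sketch (image names and versions below are only examples, pick whatever registry/tag you actually want to run):

    # choose the container image at bootstrap time
    cephadm --image quay.io/ceph/ceph:v18 bootstrap --mon-ip 192.0.2.10
    # ...or move the whole cluster to another image later with a rolling upgrade
    ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.4
    ceph orch upgrade status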

2. MicroCeph:

MicroCeph is a machine orchestrator for Ceph which spawns isolated Ceph services on the host
(not containers). Again, like any such tool, it has its pros and cons. The idea behind MicroCeph
is to make Ceph orchestration/operation easy, and over the past cycles it has grown more featureful.
It uses trust tokens, generated by an existing cluster member, to scale the cluster horizontally,
and the cluster itself is isolated from the underlying host (no SSH foo either). Since it is
the same Ceph underneath, native integrations (with OpenStack services like Keystone, Cinder and
Glance, or with K8s via the Ceph CSI) work as is. It is worth checking out to see if it suits your use cases.
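
If you want to get a feel for it, the basic flow is roughly the following (from memory, so double-check against the MicroCeph docs; node names and devices are placeholders):

    # on the first node
    sudo snap install microceph
    sudo microceph cluster bootstrap
    sudo microceph cluster add node2        # prints a join token
    # on node2
    sudo snap install microceph
    sudo microceph cluster join <token>
    # add a disk as an OSD on each node
    sudo microceph disk add /dev/sdb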

3. Rook

Rook is an interesting way of operating a Ceph cluster, more suited to K8s-native clouds. An
interesting read, but for your use case it is less relevant.

I would love to hear more about your Ceph adventures, please keep us posted!

Utkarsh

On Tue, May 20, 2025 at 1:02 PM Francesco Di Nucci <francesco.dinucci@na.infn.it> wrote:
Thank you,

this might be a solution too (using other tools to set up the OS and then
switching to cephadm).

It's not only about familiarity; I was also thinking about feasibility
in the long run, with upgrades, new nodes, etc., so it's nice to know that Ceph
and OpenStack management can be decoupled.

Best regards

--
Francesco Di Nucci
System Administrator
Compute & Networking Service, INFN Naples

Email: francesco.dinucci@na.infn.it

On 20/05/25 09:12, Eugen Block wrote:
> Hi,
>
> I would say this is opinion based and depends on your experience and
> infrastructure.
> Even if you decide to use cephadm as the Ceph deployment tool, you
> still need to have some installation and configuration management in
> place, at least if you have more than a few hosts, because with the
> Ceph orchestrator (cephadm) you can only add hosts that have already been
> configured to your needs (SSH keys, Podman/Docker, chrony, etc.). If
> you use Puppet for that, it might be the right choice for you.
> We are using a combination of cobbler and Salt (Uyuni project) to
> perform automatic OS installation via PXE boot and configuration via
> Salt. Once the systems are ready to join the Ceph cluster, we just add
> them via orchestrator (ceph orch host add ...) and then the rest is
> managed by cephadm. So in our case, Ceph is decoupled from OpenStack
> management, although the OpenStack hosts are also installed and
> configured via Salt.
>
> I'd say choose the method you're most familiar with.
>
> Regards,
> Eugen
>
> Quoting Francesco Di Nucci <francesco.dinucci@na.infn.it>:
>
>> Thank you,
>>
>> I'd read it, but as there are also other methods, such as ceph-ansible
>> and puppet-ceph, I am trying to get feedback from other operators
>> about their experiences, as in this case I'm particularly interested
>> in the integration of Ceph with OpenStack
>>
>> Best regards
>>
>> --
>> Francesco Di Nucci
>> System Administrator
>> Compute & Networking Service, INFN Naples
>>
>> Email: francesco.dinucci@na.infn.it
>>
>> On 19/05/25 16:29, Maksim Malchuk wrote:
>>> Hi Francesco,
>>>
>>> The Ceph community recommends using cephadm as the primary tool for
>>> deploying Ceph:
>>> https://docs.ceph.com/en/latest/install/#recommended-methods
>>>
>>>
>>> On Mon, May 19, 2025 at 4:19 PM Francesco Di Nucci
>>> <francesco.dinucci@na.infn.it> wrote:
>>>
>>>    Hi all,
>>>
>>>    we're planning (finally) to set up a Ceph cluster to be used as the
>>>    OpenStack backend.
>>>
>>>    The cloud is currently set up with Foreman+Puppet; to set up Ceph,
>>>    what would you advise?
>>>
>>>    cephadm, as it's the preferred method in the Ceph docs, or Puppet
>>>    with the puppet-ceph module, as it's part of OpenStack?
>>>
>>>    Thanks in advance
>>>
>>>    --
>>>    Francesco Di Nucci
>>>    System Administrator
>>>    Compute & Networking Service, INFN Naples
>>>
>>>    Email: francesco.dinucci@na.infn.it
>>>
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Maksim Malchuk
>>>