Thank you, I had not understood correctly, apologies (and a live upgrade is great!)

--
Francesco Di Nucci
System Administrator
Compute & Networking Service, INFN Naples

Email: francesco.dinucci@na.infn.it

On 20/05/25 14:07, Michel Jouvin wrote:
Francesco,
Cephadm implements a "rolling upgrade": it restarts the daemons one by one so that the cluster always remains healthy, with no impact on the services (except for RGW or MDS, which may be single-instance, where the impact is just the time of a restart). And it will pause if at any point moving forward could put the cluster at risk.
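Roughly, an upgrade looks like this (the version tag below is only an example, pick whatever release you are targeting):

    # start a rolling upgrade to a given release
    ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.4
    # watch the progress
    ceph orch upgrade status
    # pause or resume manually if you want to intervene
    ceph orch upgrade pause
    ceph orch upgrade resume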
Michel
Sent from my mobile
On 20 May 2025 14:04:07, Francesco Di Nucci <francesco.dinucci@na.infn.it> wrote:
Thanks a lot, this discussion is proving very helpful!
--
Francesco Di Nucci
System Administrator
Compute & Networking Service, INFN Naples

Email: francesco.dinucci@na.infn.it

On 20/05/25 10:57, Michel Jouvin wrote:
Hi Francesco,
We used to manage our Ceph cluster with a Puppet-like tool and decided to switch to cephadm as the cluster manager in Octopus, using our system management tool only to provision the OS and Podman and to configure the SSH key required by cephadm. It has been a huge improvement in terms of management efficiency, in particular in the time required by any Ceph upgrade operation, including OS upgrades/reinstallation, where cephadm redeploys everything required automatically.
We will never move back to the previous way of managing our cluster!
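For context, the conversion is roughly the documented "adoption" procedure; the daemon names below (host1, osd.0) are just placeholders:

    # on each host, convert the legacy daemons to cephadm-managed containers
    cephadm adopt --style legacy --name mon.host1
    cephadm adopt --style legacy --name mgr.host1
    # enable the cephadm orchestrator backend
    ceph mgr module enable cephadm
    ceph orch set backend cephadm
    # then adopt the OSDs as well
    cephadm adopt --style legacy --name osd.0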
Michel
Sent from my mobile
On 20 May 2025 09:33:07, Francesco Di Nucci <francesco.dinucci@na.infn.it> wrote:
Thank you,
this might be a solution too (using other tools to set up the OS and then switching to cephadm).
It's not only about familiarity; I was also thinking about feasibility in the long run, with upgrades, new nodes, etc., so it's nice to know that Ceph and OpenStack management can be decoupled.
Best regards
--
Francesco Di Nucci
System Administrator
Compute & Networking Service, INFN Naples

Email: francesco.dinucci@na.infn.it
On 20/05/25 09:12, Eugen Block wrote:
Hi,
I would say this is opinion-based and depends on your experience and infrastructure. Even if you decide to use cephadm as the Ceph deployment tool, you still need some installation and configuration management in place, at least if you have more than a few hosts, because with the Ceph orchestrator (cephadm) you can only add hosts that have already been configured to your needs (SSH keys, podman/docker, chrony, etc.). If you use Puppet for that, it might be the right choice for you.

We use a combination of Cobbler and Salt (Uyuni project) to perform automatic OS installation via PXE boot and configuration via Salt. Once the systems are ready to join the Ceph cluster, we just add them via the orchestrator (ceph orch host add ...) and the rest is managed by cephadm. So in our case, Ceph is decoupled from OpenStack management, although the OpenStack hosts are also installed and configured via Salt.
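In practice, adding a prepared host is just a couple of commands; the hostname and IP below are placeholders:

    # distribute the cluster's public SSH key to the new host
    ceph cephadm get-pub-key > ~/ceph.pub
    ssh-copy-id -f -i ~/ceph.pub root@ceph-node-05
    # hand the host over to the orchestrator
    ceph orch host add ceph-node-05 192.168.1.25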
I'd say choose the method you're most familiar with.
Regards, Eugen
Quoting Francesco Di Nucci <francesco.dinucci@na.infn.it>:
Thank you,
I'd read it, but as there are also other methods such as ceph-ansible and puppet-ceph, I am trying to get feedback from other operators about their experiences; in this case I'm particularly interested in the integration of Ceph with OpenStack.
Best regards
--
Francesco Di Nucci
System Administrator
Compute & Networking Service, INFN Naples

Email: francesco.dinucci@na.infn.it
On 19/05/25 16:29, Maksim Malchuk wrote:
> Hi Francesco,
>
> The CEPH community recommends using CEPHADM as the primary tool for deploying CEPH:
> https://docs.ceph.com/en/latest/install/#recommended-methods
>
> On Mon, May 19, 2025 at 4:19 PM Francesco Di Nucci <francesco.dinucci@na.infn.it> wrote:
>
>     Hi all,
>
>     we're planning (finally) to set up a CEPH cluster to be used as OpenStack backend.
>
>     The cloud is currently set up with Foreman+Puppet; to set up CEPH, what would you advise?
>
>     cephadm, as it's the preferred method in the CEPH docs, or Puppet with the puppet-ceph module, as it's part of OpenStack?
>
>     Thanks in advance
>
>     --
>     Francesco Di Nucci
>     System Administrator
>     Compute & Networking Service, INFN Naples
>
>     Email: francesco.dinucci@na.infn.it
>
> --
> Regards,
> Maksim Malchuk