[manila][cinder][glance][nova] Pop-up team for design and development of a Cephadm DevStack plugin
mark at stackhpc.com
Wed Jan 19 14:40:52 UTC 2022
On Wed, 19 Jan 2022 at 11:08, Victoria Martínez de la Cruz
<victoria at vmartinezdelacruz.com> wrote:
> Hi all,
> I'm reaching out to you to let you know that we will start the design and development of a Cephadm DevStack plugin.
> Some of the reasons why we want to take this approach:
> - devstack-plugin-ceph has worked for us for many years, but its development relies on several hacks to adapt to the different Ceph versions we use and the different distros we support. This has led to a monolithic script that is sometimes hard to debug and that breaks our development environments and our CI
> - cephadm is the deployment tool developed and maintained by the Ceph community; it allows users to get specific Ceph versions very easily and enforces good practices for Ceph clusters. From their docs: "Cephadm manages the full lifecycle of a Ceph cluster. It starts by bootstrapping a tiny Ceph cluster on a single node (one monitor and one manager) and then uses the orchestration interface ("day 2" commands) to expand the cluster to include all hosts and to provision all Ceph daemons and services."
> - OpenStack deployment tools are starting to use cephadm as their way to deploy Ceph, so it would be nice to include cephadm in our development process to stay closer to what is being done in the field
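> As a rough illustration of the lifecycle described in the Ceph docs above, the cephadm workflow looks something like this (hostnames and the monitor IP are placeholders, not taken from any real deployment):

```shell
# Bootstrap a tiny one-node cluster (one monitor and one manager).
# The IP below is a placeholder for the first node's address.
cephadm bootstrap --mon-ip 192.168.1.10

# "Day 2" orchestration: expand the cluster to additional hosts
# (node2/192.168.1.11 is a hypothetical second host).
ceph orch host add node2 192.168.1.11

# Provision OSD daemons on all available devices across the cluster.
ceph orch apply osd --all-available-devices
```

> This is exactly the kind of sequence a DevStack plugin could drive instead of the current distro-specific scripting.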
> I started the development of this (repository linked below), but it might be better to change devstack-plugin-ceph to do this instead of creating a new plugin. This is something I would love to discuss in a first meeting.
> Having said this, I propose using the channel #openstack-cephadm on the OFTC network to talk about this and to set up a first meeting with people interested in contributing to this effort.
>  https://docs.ceph.com/en/pacific/cephadm/
>  https://github.com/vkmc/devstack-plugin-cephadm
In case it's useful as a reference, we built an Ansible collection
that drives cephadm:
Feedback welcome, of course!