Dear Openstack-helmers,

Here is a short summary of our discussion during the 2024.2 PTG. If you would like to discuss some of these topics further or have any other questions, you are welcome to continue the discussion either in this mailing thread or to start a new one.

## Migration from legacy Ceph OSH charts to Rook
During 2024.1 we developed a migration procedure which seems to work well, at least for some configurations. At the moment we have a doc [1] that describes the procedure, and there is also a script [2] which can be used as an example. The procedure is obviously quite risky, so we decided to put some effort into creating a test job for it. In any case, at this point it is recommended to use Rook for managing Ceph clusters. Once we have the migration test job and are sure it works well enough, we will officially deprecate the legacy Ceph charts with a deprecation period of one or two release cycles.
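
As an illustration of the kind of sanity check such a test job would automate, here is a minimal sketch that only verifies cluster health before and after the migration; the namespaces, label selectors and toolbox deployment name are assumptions and may differ in your environment:

```bash
# Hedged sketch: verify Ceph health around the migration; names below are assumptions.

# Before: query the cluster through a legacy ceph-mon pod (namespace "ceph" assumed).
kubectl -n ceph exec "$(kubectl -n ceph get pods -l application=ceph,component=mon \
    -o jsonpath='{.items[0].metadata.name}')" -- ceph status

# After: query the same cluster through the Rook toolbox
# (deployment "rook-ceph-tools" in namespace "rook-ceph" assumed).
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```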

## MariaDB operator
At the moment the OSH MariaDB chart [3] is the recommended default for managing MariaDB clusters. We are not yet ready to switch our test jobs to the MariaDB-operator [4] (and the mariadb-cluster OSH chart [5]) by default. We will continue supporting the MariaDB-operator test job and will try to work more closely with the operator community. Ideally, we would like the MariaDB-operator project to provide their own Helm chart which we could utilize.
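
For clarity, here is a rough sketch of how the two options differ from a deployer's point of view, assuming a local clone of openstack-helm-infra and an existing `openstack` namespace (values overrides omitted):

```bash
# Option 1 (current default): the OSH mariadb chart [3] manages the Galera cluster itself.
helm upgrade --install mariadb ./mariadb --namespace=openstack

# Option 2 (experimental): the OSH mariadb-cluster chart [5] delegates cluster management
# to mariadb-operator [4], so the operator and its CRDs must be installed beforehand.
helm upgrade --install mariadb-cluster ./mariadb-cluster --namespace=openstack
```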

## OVN
Our OVN test deployment jobs seem to work well and we can recommend OVN as the default Openstack networking backend. However, migrating from OVS to OVN is not trivial and requires addressing at least the following two points:
- Currently we do not have a migration procedure in the context of OSH like, for example, [6], [7]. It would be great to have a volunteer contribute this.
- Some of our users have OVS customizations which do not seem to have 1:1 feature parity with OVN. So even once we have a migration strategy, feature alignment is going to take some time.

We agreed to continue supporting OVS and OVN jobs 50/50 in this release cycle. It also seems worth initiating a wider discussion about the migration itself and about the timeline.
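
For those who want to check which backend their deployment currently runs, a minimal hedged sketch based on the agent types Neutron reports (with ML2/OVS you typically see "Open vSwitch agent", "L3 agent" and "DHCP agent" entries, while with OVN you see "OVN Controller agent" entries instead):

```bash
# Show the Neutron agents; the agent types reveal whether OVS or OVN is in use.
openstack network agent list
```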

## Tacker documentation and testing
During 2023.2 we added the Tacker chart [8], but it is not mentioned in the user deployment documentation. We agreed that the Tacker team will contribute the Tacker and Barbican sections to the OSH deployment documentation (Barbican is needed for Tacker).

There is a deployment test job for Tacker [9], although it is not currently part of the check pipeline. We agreed to improve it so that it not only deploys Tacker but also runs some test scenarios against it.
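
As a hypothetical illustration of what such test scenarios could look like (assuming python-tackerclient is installed and credentials are already configured by the job):

```bash
# Hypothetical smoke test: exercise the Tacker API instead of only checking that pods are up.
openstack vim list
openstack vnf package list
```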

## Third party RabbitMQ chart
Our long-term strategy is to focus more on Openstack charts and to use third-party solutions for managing backend infrastructure wherever possible. So it looks worthwhile to research possible usage of the RabbitMQ operator [10].

We agreed to put some effort during the 2024.2 release cycle into giving it a try and probably creating a test job that utilizes this operator.
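
As a starting point, here is a minimal sketch following the upstream installation doc [10]; the cluster name and namespace below are placeholders:

```bash
# Install the RabbitMQ cluster operator (see [10]).
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"

# Create a small RabbitmqCluster and let the operator reconcile it
# (name and namespace are placeholders; the namespace must already exist).
kubectl apply -f - <<EOF
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: openstack-rabbitmq
  namespace: openstack
spec:
  replicas: 1
EOF
```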

## Improve end user experience
Most of our users consume the OSH charts as part of their CI/CD infrastructure and consider OSH an Openstack management framework rather than an end-to-end solution. However, for new users who would like to try Openstack running on top of K8s with minimal effort, it is important to have an experience similar to what they usually get with other Helm charts (see the sketch after the list below). This assumes the following:
- a user has a K8s cluster (possibly with some prerequisites) and the K8s API endpoint
- a user has the kubectl/helm command line tools configured (to connect to the K8s API endpoint) and ready to use (probably from a remote host)
- a user is not supposed to clone any git repositories to deploy Openstack
- a user is able to consume Helm charts published on a Helm repo
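
In other words, the target flow should look roughly like the sketch below. The repo URL is where our charts are published to the best of my knowledge (please check the project documentation), and the release name, namespace and missing values overrides are placeholders:

```bash
# Hedged sketch of the target end-user flow using only pre-published charts.
helm repo add openstack-helm https://tarballs.opendev.org/openstack/openstack-helm
helm repo update

# Release name, namespace and values overrides are placeholders/omitted.
helm upgrade --install keystone openstack-helm/keystone \
    --namespace=openstack --create-namespace
```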

During 2024.1 we have been working on improving the deploy-env Ansible role to support single/multi-node environments, to make it possible to run the kubectl/helm CLI tools from a remote node, to expose Openstack services via MetalLB, etc.

At this point we are ready to have a test job that imitates end-user behavior, with the only exception that our test jobs heavily rely on two scripts, `get-values-overrides.sh` [11] and `wait-for-pods.sh` [12]. They are not absolutely necessary, but users can still benefit from them, and it looks like we can easily move this functionality to a Helm plugin. So, eventually, the deployment can be performed with minimal effort using pre-published charts.
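
For reference, the `wait-for-pods.sh` part can already be roughly approximated with plain kubectl, which is the kind of logic a Helm plugin could wrap; the namespace and timeout below are placeholders, and the real script is smarter (e.g. about completed job pods, which never become Ready):

```bash
# Rough, hedged approximation of wait-for-pods.sh [12] with plain kubectl.
kubectl wait --namespace=openstack --for=condition=Ready pods --all --timeout=900s
kubectl wait --namespace=openstack --for=condition=Complete jobs --all --timeout=900s
```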

## Widen the range of K8s configurations used for Openstack deployment
Currently all the test jobs run with the same K8s configuration (Calico as the CNI, Ingress-Nginx as the ingress controller), and we try to depend on K8s implementation details as little as possible. At the same time, we could probably create test jobs that exercise some other K8s configurations, for example:
- Cilium/Kube-Router as a CNI implementation
- HAProxy as an ingress controller

This seems to be a minor item, but volunteers are welcome to contribute.

[1] https://docs.openstack.org/openstack-helm/latest/troubleshooting/migrate-ceph-to-rook.html
[2] https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/ceph/migrate-to-rook-ceph.sh
[3] https://opendev.org/openstack/openstack-helm-infra/src/branch/master/mariadb
[4] https://github.com/mariadb-operator/mariadb-operator
[5] https://opendev.org/openstack/openstack-helm-infra/src/branch/master/mariadb-cluster
[6] https://docs.openstack.org/neutron/latest/ovn/migration.html
[7] https://opendev.org/openstack/neutron/src/branch/master/tools/ovn_migration
[8] https://opendev.org/openstack/openstack-helm/src/branch/master/tacker
[9] https://opendev.org/openstack/openstack-helm/src/branch/master/zuul.d/base.yaml#L346-L367
[10] https://www.rabbitmq.com/kubernetes/operator/install-operator
[11] https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/common/get-values-overrides.sh
[12] https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/common/wait-for-pods.sh


--
Best regards,
Kozhukalov Vladimir