Hi,

Thanks Michael,

I did forget one step in my previous post.

I have to upgrade ceph from Pacific to Quincy before upgrading Openstack from Yoga to Zed, since the Zed version supports only ceph Quincy, at least that is my understanding.
So I have to stop my Openstack before upgrading to ceph Quincy.

So the migration will look like this:
  1. Upgrade to Rocky 9, starting with the controllers, then the compute/storage nodes.
    1. Undeploy 1 controller at a time, upgrade to Rocky 9, re-deploy the controller.
    2. Undeploy 3 compute/storage nodes at a time, upgrade to Rocky 9, redeploy the nodes.
  2. Convert the deployed ceph Pacific from ceph-ansible to cephadm (I am not sure if I can do this while Openstack is running).
  3. Stop the Openstack platform
  4. Upgrade ceph Pacific to ceph Quincy.
  5. Upgrade from Yoga to Zed
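
The cephadm conversion (step 2) and the Quincy upgrade (step 4) can be sketched roughly as below. This is only a sketch, not a tested procedure: hostnames, daemon names, and the container image tag are placeholders, and it assumes the cephadm binary is already installed on each host. Check cluster health between every step.

```shell
# Step 2: adopt each ceph-ansible-deployed daemon into cephadm management,
# one daemon at a time (run on the host owning the daemon; names are placeholders).
cephadm adopt --style legacy --name mon.controller1
cephadm adopt --style legacy --name mgr.controller1
cephadm adopt --style legacy --name osd.0

# Step 4: once all daemons are cephadm-managed, upgrade Pacific -> Quincy
# (the image tag below is a placeholder; pick the Quincy release you target).
ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.7
ceph orch upgrade status   # watch progress
ceph -s                    # confirm HEALTH_OK before moving to the Zed upgrade
```

On the question in step 2: the cephadm adoption itself converts daemons in place one at a time, so in principle the cluster stays up during it, but whether to keep Openstack running through it is a judgment call.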
Regards.


On Sat, Dec 9, 2023 at 00:53, Michael Knox <michael@knox.net.nz> wrote:
Hi Wodel, 

I can't comment on the Ceph portion yet (for better or worse we have EMC or Pure storage mostly, just 1 lonely Ceph cluster). 

We have undertaken a move from ubuntu 18.04 to rocky linux 9. We focused on upgrading non-hypervisor nodes first, so control, monitoring, util nodes, then Live Migrated VMs around, so we could remove hypervisors out of the cloud, rebuild, pop them back. 
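
That hypervisor-draining workflow (disable scheduling, live-migrate the VMs away, rebuild, put the host back) can be sketched with the Openstack CLI. A sketch only: the hostname is a placeholder, and flag names may vary with your openstackclient version.

```shell
# Stop the scheduler placing new VMs on the host being rebuilt (hostname is a placeholder)
openstack compute service set --disable --disable-reason "Rocky 9 rebuild" compute01 nova-compute

# List the VMs still running there, then live-migrate each one away
openstack server list --all-projects --host compute01
openstack server migrate --live-migration <server-id>

# After the host is rebuilt and re-deployed, re-enable it
openstack compute service set --enable compute01 nova-compute
```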

No downtime for us, just took a bit longer. For Y - Z, not a lot of considerations, we had a swag of Openstack updates in the middle of this too. 

I have that ceph cluster coming, it's ubuntu 18.04 and some out-of-date version of Ceph. I would be interested in your results with your ceph approach. I haven't looked yet, but I am hoping I can do it with no downtime 

Cheers
Michael 

On Thu, Dec 7, 2023 at 11:08 AM wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,

I need to upgrade my Openstack deployment from Yoga (Rocky Linux 8) to Zed (Rocky Linux 9).
My deployment is an HCI deployment with ceph pacific deployed using ceph-ansible.
It contains:
- 03 controllers that also host the ceph mgr/mon/rgw/cephfs containers
- 27 compute/storage nodes

I am planning to follow these steps :
  1. Upgrade to Rocky 9, starting with the controllers, then the compute/storage nodes.
    1. Undeploy 1 controller at a time, upgrade to Rocky 9, re-deploy the controller.
    2. Undeploy 3 compute/storage nodes at a time, upgrade to Rocky 9, redeploy the nodes.
  2. Stop the Openstack platform
  3. Convert the deployed ceph pacific from ceph-ansible to cephadm.
  4. Upgrade from Yoga to Zed
If you can share some insights/thoughts on things to do or to avoid so that the upgrade moves smoothly, that would be great.

Regards.