[Kolla-ansible] Upgrading from Yoga to Zed HCI deployment
Hi,

I need to upgrade my OpenStack deployment from Yoga (Rocky Linux 8) to Zed (Rocky Linux 9). It is an HCI deployment with Ceph Pacific deployed using ceph-ansible. It contains:
- 3 controllers, which also host the Ceph mgr/mon/rgw/cephfs containers
- 27 compute/storage nodes

I am planning to follow these steps:
1. Upgrade to Rocky 9, starting with the controllers, then the compute/storage nodes:
   a. Undeploy one controller at a time, upgrade it to Rocky 9, re-deploy the controller.
   b. Undeploy three compute/storage nodes at a time, upgrade them to Rocky 9, redeploy the nodes.
2. Stop the OpenStack platform.
3. Convert the deployed Ceph Pacific cluster from ceph-ansible to cephadm.
4. Upgrade from Yoga to Zed.

If you can share some insights/thoughts on things to do or to avoid so that the upgrade goes smoothly, that would be great.

Regards.
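For the per-node redeploy I expect to use kolla-ansible with --limit, roughly like this (a sketch only; the inventory file name, host names and exact flags are assumptions on my side, not tested):

  # Stop the OpenStack containers on the controller about to be rebuilt (hypothetical host name)
  $ kolla-ansible -i multinode stop --limit controller01 --yes-i-really-really-mean-it
  # ... reinstall the node with Rocky 9, then prepare and redeploy it ...
  $ kolla-ansible -i multinode bootstrap-servers --limit controller01
  $ kolla-ansible -i multinode deploy --limit controller01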
Hi Wodel,

I can't comment on the Ceph portion yet (for better or worse we mostly have EMC or Pure storage, just one lonely Ceph cluster).

We have undertaken a move from Ubuntu 18.04 to Rocky Linux 9. We focused on upgrading the non-hypervisor nodes first (control, monitoring, util nodes), then live-migrated VMs around so we could take hypervisors out of the cloud, rebuild them, and pop them back in. No downtime for us, it just took a bit longer. For Yoga to Zed there were not a lot of considerations; we had a swag of OpenStack updates in the middle of this too.

I have that Ceph cluster coming up next; it's Ubuntu 18.04 and an out-of-date Ceph version. I would be interested in your results with your Ceph approach. I haven't looked yet, but I am hoping I can do it with no downtime.

Cheers,
Michael
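Roughly, draining one hypervisor before a rebuild looks something like this (host name is a placeholder and the exact openstackclient flags can differ between client versions, so treat it as a sketch):

  # Stop scheduling new instances onto the host being rebuilt
  $ openstack compute service set --disable --disable-reason "rocky9 rebuild" compute01 nova-compute
  # See what is still running there
  $ openstack server list --all-projects --host compute01
  # Live-migrate each instance away (repeat per server, or script it)
  $ openstack server migrate --live-migration <server-id>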
Hi,

Thanks Michael. I did forget one step in my previous post: I have to upgrade Ceph from Pacific to Quincy before upgrading OpenStack from Yoga to Zed, since Zed supports only Ceph Quincy, at least that is my understanding. So I have to stop OpenStack before upgrading Ceph to Quincy.

So the migration will look like this:
1. Upgrade to Rocky 9, starting with the controllers, then the compute/storage nodes:
   a. Undeploy one controller at a time, upgrade it to Rocky 9, re-deploy the controller.
   b. Undeploy three compute/storage nodes at a time, upgrade them to Rocky 9, redeploy the nodes.
2. Convert the deployed Ceph Pacific cluster from ceph-ansible to cephadm (I am not sure if I can do this while OpenStack is running).
3. Stop the OpenStack platform.
4. Upgrade Ceph Pacific to Ceph Quincy.
5. Upgrade from Yoga to Zed.

Regards.
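For reference, a quick sanity check before and after the Pacific to Quincy jump could use the standard Ceph status commands:

  # Which releases the daemons are currently running
  $ ceph versions
  # Overall cluster health before touching anything
  $ ceph -s
  # After the upgrade, every daemon should report a 17.2.x (Quincy) version
  $ ceph versions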
Hi, just a few comments:
> 2. Convert the deployed Ceph Pacific cluster from ceph-ansible to cephadm (I am not sure if I can do this while OpenStack is running).
That's how Ceph is designed: if configured properly, you should be able to convert to cephadm without downtime or interruptions. You would adopt one daemon at a time, starting with the MONs, then the MGRs, then the OSDs (depending on the actual configuration you could adopt an entire host at a time). If you have gateways in use (MDS, RGW, etc.) you'll have to recreate those services via the orchestrator. As soon as the cephadm-deployed daemons are up, they take over the active role, making the old daemon a standby in the MDS case. The RGWs should keep running until you decommission them.
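Recreating the gateways through the orchestrator would be roughly like this (the RGW service name, CephFS volume name and placement hosts are placeholders, adjust them to your three controllers):

  # Re-deploy MDS daemons for an existing CephFS volume (here called "cephfs")
  $ ceph orch apply mds cephfs --placement="3 controller01 controller02 controller03"
  # Re-deploy RGW daemons on the controllers
  $ ceph orch apply rgw main --placement="3 controller01 controller02 controller03"
  # Remove the legacy gateway daemons once the new ones are active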
> 4. Upgrade Ceph Pacific to Ceph Quincy.
If you plan to upgrade Ceph anyway and you stop your OpenStack platform, you could upgrade the cluster during the adoption. For example, upgrading/converting an Octopus MON daemon would look like this:

  $ cephadm --image quay.io/ceph/ceph:v17.2.7 adopt --style legacy --name mon.ceph01

I recommend testing this procedure in a test environment first, but it usually works for me. And make sure to read the docs carefully so you don't miss any steps required for an upgrade.

Regards,
Eugen
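If you instead adopt the cluster on Pacific first and only upgrade afterwards, the orchestrator can drive the rolling upgrade once all daemons are managed by cephadm, for example (image tag shown is just an example):

  # Start a rolling upgrade of the adopted cluster to Quincy
  $ ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.7
  # Follow progress and confirm completion
  $ ceph orch upgrade status
  $ ceph versions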
Hey Wodel,

Converting an existing Ceph cluster from ceph-ansible to cephadm is a feasible process, and it can be done without causing downtime or interruptions. Here are the steps I would follow:

Preparation:
- Ensure that the cephadm command-line tool is available on each host in the existing cluster.
- Choose a version of Ceph for the conversion. This procedure works with any release of Ceph from Octopus (15.2.z) onwards. Pass the desired Ceph image to cephadm:
  cephadm --image $IMAGE <rest of command goes here>
- Check the daemon inventory with cephadm ls. Before starting the conversion, all existing daemons should have a style of "legacy" in the cephadm ls output.

Adoption process:
- Ensure that the Ceph configuration has been migrated to the cluster config database. If the /etc/ceph/ceph.conf file is identical on each host, you can run the following command on one host and it will take effect for all hosts:
  ceph config assimilate-conf -i /etc/ceph/ceph.conf
  If there are configuration variations between hosts, repeat this command on each host.
- Adopt each monitor (MON):
  cephadm adopt --style legacy --name mon.<hostname>
  Each legacy monitor will stop, quickly restart as a cephadm container, and rejoin the quorum.
- Adopt each manager (MGR) similarly:
  cephadm adopt --style legacy --name mgr.<hostname>
- Enable cephadm:
  ceph mgr module enable cephadm
  ceph orch set backend cephadm
- Generate an SSH key:
  ceph cephadm generate-key
  ceph cephadm get-pub-key > ~/ceph.pub
- Install the cluster SSH key on each host (you can also import an existing SSH key if needed):
  ssh-copy-id -f -i ~/ceph.pub root@<host>

Upgrade to Ceph Quincy:
If you plan to upgrade Ceph anyway and you stop your OpenStack platform, you can upgrade the cluster during adoption. For example, upgrading or converting an Octopus MON daemon would look like this:
  $ cephadm --image quay.io/ceph/ceph:v17.2.7 adopt --style legacy --name mon.ceph01

Best,
Kerem ÇELİKER
Head of Cloud Architecture
linkedin.com/in/keremceliker/
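To verify the adoption afterwards, roughly:

  # All daemons should now show style "cephadm" instead of "legacy"
  $ cephadm ls
  # The orchestrator should list every adopted daemon, and the cluster should be healthy
  $ ceph orch ps
  $ ceph -s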