[ops] Migration from CentOS streams to Ubuntu and fast forward updates

Sean Mooney smooney at redhat.com
Fri Dec 9 14:21:38 UTC 2022


On Fri, 2022-12-09 at 11:52 +0000, Eugen Block wrote:
> Hi,
> 
> we're considering a similar scenario. I don't mean to hijack this  
> thread, but would it be possible to migrate to Ubuntu without  
> downtime? There are several options I can think of, one that worked  
> for us in the past: a reinstallation of the control nodes but keeping  
> the database (upgraded in a VM). But this means downtime until at  
> least one control node is up and if possible I would like to avoid  
> that, so I'm thinking of this:
> - Adding compute nodes and install them with Ubuntu Victoria (our  
> current version).
> - Move the workload away from the other compute nodes and reinstall  
> them one by one.

Yes, as has been mentioned in the thread before,
OpenStack does not really care about the underlying OS, within reason.

Your installer tool might, but there is nothing that would prevent you from adding 3 new controllers, going from 3->6,
and then stopping the old controllers after the DBs have replicated and the new controllers are working.

Similarly, you can add new computes and cold migrate one by one until your cloud is entirely running on the other OS.
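As a rough sketch of draining one compute at a time (host names are hypothetical; assumes admin credentials are loaded and a live cloud, so this is illustrative rather than something to run verbatim):

```shell
# Stop the scheduler from placing new VMs on the node being retired
openstack compute service set --disable \
    --disable-reason "OS migration" oldnode1 nova-compute

# Cold migrate every instance off the node, then confirm each one
for vm in $(openstack server list --all-projects --host oldnode1 -f value -c ID); do
    openstack server migrate "$vm"
    # a real script should poll "openstack server show" and wait for
    # the VERIFY_RESIZE status before confirming
    openstack server resize confirm "$vm"
done
```

Once the node is empty it can be reinstalled with the new OS and re-enabled.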

If you decouple changing the OpenStack version from changing the OS,
i.e. do one and then the other if you intend to do both,
that should allow you to keep the cloud operational during this transition.

The main issues you are likely to encounter are version mismatches between the rabbit/mariadb packages shipped by each distro
(and OVN, if you use that) and similar constraints.

For any given release of OpenStack we support both distributions, so both should have packages compatible with the OpenStack
code, but an infrastructure component like the DB may not be compatible with the corresponding package from the other distro.

If control-plane uptime is not required at all times, a simple workaround for the DB is just to do a backup and restore.
That would still allow the workload to remain active.
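A minimal sketch of that backup/restore, assuming a MariaDB/Galera control plane with root DB credentials configured on both sides (host name is hypothetical):

```shell
# On the old (CentOS) controller: dump all OpenStack databases.
# --single-transaction gives a consistent snapshot of InnoDB tables
# without locking, but the API services should still be stopped around
# the cutover so no writes land after the dump is taken.
mysqldump --all-databases --single-transaction --routines --events \
    > openstack-dbs.sql

# Copy the dump to the new (Ubuntu) controller and load it
scp openstack-dbs.sql ubuntu-ctl1:
ssh ubuntu-ctl1 'mysql < openstack-dbs.sql'
```

The running VMs are untouched during this window; only the API/control plane is briefly unavailable.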

When I first started working on OpenStack many years ago, my default dev environment had a controller with Ubuntu and kernel OVS,
a CentOS compute with Linux bridge, and an Ubuntu compute with OVS-DPDK.

Using devstack, all 3 nodes could consume the rabbitmq and DB from the Ubuntu controller, and it was possible to cold migrate VMs
between all 3 nodes (assuming you used VLAN networking).

I would not recommend that in production for any protracted period of time, but it was definitely possible in the past.

> 
> That should work for computes (with cold or live migration). But what  
> about the control nodes? We have them in HA setup, would it be  
> possible to
> - Stop one control node, reinstall it with Ubuntu (Victoria), make  
> sure all UIDs/GIDs are the same as before (we use ceph as shared  
> storage) so the mounted CephFS still works between the nodes (same for  
> the nova live-migration).
Live migration does not work because the qemu emulator paths differ between Ubuntu and CentOS.
You may also have issues with the CentOS VM XMLs referencing SELinux, whereas AppArmor is used as the security
context on Ubuntu.

Those issues should not matter for cold migration, as the XML is regenerated on the destination host rather than on the
source host as it is with live migration.
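For example, the emulator path baked into each guest's XML is the distro-specific bit that breaks live migration; you can see it directly on a compute host (paths shown are the usual defaults, so check your own hosts):

```shell
# On a CentOS/RHEL compute the domain XML typically contains:
#   <emulator>/usr/libexec/qemu-kvm</emulator>
# while on an Ubuntu compute it is usually:
#   <emulator>/usr/bin/qemu-system-x86_64</emulator>
virsh dumpxml instance-00000001 | grep '<emulator>'
```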

So if you can spawn a VM on the Ubuntu node from a CentOS control plane, you should be able to cold migrate, provided the SSH keys are exchanged.
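The key exchange mentioned above is just SSH trust between the compute hosts for the service user that performs the resize/cold-migrate file copy. A sketch, assuming the nova user with its home in /var/lib/nova (user name and paths depend on your install):

```shell
# On each compute, as the nova user: generate a key if one is missing
sudo -u nova ssh-keygen -t ed25519 -N '' -f /var/lib/nova/.ssh/id_ed25519

# Distribute every compute's public key into every other compute's
# /var/lib/nova/.ssh/authorized_keys, then pre-seed known_hosts so the
# copy between hosts does not prompt interactively
ssh-keyscan oldnode1 newnode1 | sudo -u nova tee -a /var/lib/nova/.ssh/known_hosts
```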

By the way, while it probably does work, nova has never officially supported using CephFS or any other shared block/file system for the instance
state path other than NFS.

I know that some operators have got CephFS and GlusterFS to work, but that has never been officially supported or tested by the nova team.

> - Repeat for other control nodes.
> 
> Would those mixed control nodes work together? I will try it anyway in  
> a test environment, but I wanted to ask if anybody has tried this  
> approach. I'd appreciate any insights.
As I said, yes, in principle it should work, provided rabbit/mariadb/galera are happy.

At the OpenStack level, while untested, we would expect OpenStack to work provided it is the same version on both sides.

Using something like kolla-ansible, where you can use the same container image on both OSes, would help, as that way you are
sure there is no mismatch. With that said, that is not a topology they test/support, but it is one they can deploy,
so you would need to test this.
> 
> Thanks,
> Eugen
> 
> 
> Zitat von Massimo Sgaravatto <massimo.sgaravatto at gmail.com>:
> 
> > Ok, thanks a lot
> > If cold migration is supposed to work between hosts with different
> > operating systems, we are fine
> > Cheers, Massimo
> > 
> > On Tue, Dec 6, 2022 at 10:48 AM Sean Mooney <smooney at redhat.com> wrote:
> > 
> > > On Tue, 2022-12-06 at 10:34 +0100, Dmitriy Rabotyagov wrote:
> > > > Hi Massimo,
> > > > 
> > > > Assuming you have manual installation (not using any deployment
> > > > projects), I have several comments on your plan.
> > > > 
> > > > 1. I've missed when you're going to upgrade Nova/Neutron on computes.
> > > > As you should not create a gap in OpenStack versions between
> > > > controllers and computes since nova-scheduler has a requirement on RPC
> > > > version computes will be using. Or, you must define the rpc version
> > > > explicitly in config to have older computes (but it's not really a
> > > > suggested way).
> > > > 2. Also once you do db sync, your second controller might misbehave
> > > > (as some fields could be renamed or new tables must be used), so you
> > > > will need to disable it from accepting requests until syncing
> > > > openstack version as well. If you're not going to upgrade it until
> > > > getting first one to Yoga - it should be disabled all the time until
> > > > you get Y services running on it.
> > > > 3. It's totally fine to run multi-distro setup. For computes the only
> > > > thing that can go wrong is live migrations, and that depends on
> > > > libvirt/qemu versions. I'm not sure if CentOS 8 Stream have compatible
> > > > version with Ubuntu 22.04 for live migrations to work though, but if
> > > > you care about them (I guess you do if you want to migrate workloads
> > > > semalessly) - you'd better check. But my guess would be that CentOS 8
> > > > Stream should have compatible versions with Ubuntu 20.04 - still needs
> > > > deeper checking.
> > > the live migration issue is a known limitation:
> > > basically it won't work across distros today because the qemu emulator path
> > > is distro specific and we do not pass that back from the destination to the
> > > source, so libvirt will try to boot the VM referencing a binary that does
> > > not exist.
> > > I'm sure you could probably solve that with a symlink or similar.
> > > If you did, the next issue you would hit is that we don't normally allow
> > > live migration from a newer qemu/libvirt version to an older one.
> > > 
> > > With all that said, cold migration should work fine, and within any one
> > > host OS live migration will work. You could probably use host aggregates
> > > or similar to enforce that if needed, but cold migration is the best way
> > > to move the workloads from hypervisor hosts with different distros.
> > > 
> > > > 
> > > > вт, 6 дек. 2022 г. в 09:40, Massimo Sgaravatto <
> > > massimo.sgaravatto at gmail.com>:
> > > > > 
> > > > > Any comments on these questions ?
> > > > > Thanks, Massimo
> > > > > 
> > > > > On Fri, Dec 2, 2022 at 5:02 PM Massimo Sgaravatto <
> > > massimo.sgaravatto at gmail.com> wrote:
> > > > > > 
> > > > > > Dear all
> > > > > > 
> > > > > > We are now running an OpenStack deployment: Yoga on CentOS8Stream.
> > > > > > 
> > > > > > We are now thinking about a possible migration to Ubuntu for several
> > > reasons in particular:
> > > > > > 
> > > > > > a- 5 years support for both the Operating System and OpenStack
> > > (considering LTS releases)
> > > > > > b- Possibility do do a clean update between two Ubuntu LTS releases
> > > > > > c- Easier procedure (also because of b) for fast forward updates
> > > (this is what we use to do)
> > > > > > 
> > > > > > Considering the latter item, my understanding is that an update from
> > > Ubuntu 20.04 Ussuri to Ubuntu 22.04 Yoga could be done in the following
> > > > > > way (we have two controller nodes and n compute nodes):
> > > > > > 
> > > > > > - Update of first controller node from Ubuntu 20.04 Ussuri to Ubuntu
> > > 20.04 Victoria (update OpenStack packages + dbsync)
> > > > > > - Update of first controller node from Ubuntu 20.04 Victoria to
> > > Ubuntu 20.04 Wallaby (update OpenStack packages + dbsync)
> > > > > > - Update of first controller node from Ubuntu 20.04 Wallaby to
> > > Ubuntu 20.04 Xena (update OpenStack packages + dbsync)
> > > > > > - Update of first controller node from Ubuntu 20.04 Xena to Ubuntu
> > > 20.04 Yoga (update OpenStack packages + dbsync)
> > > > > > - Update of first controller node from Ubuntu 20.04 Yoga to Ubuntu
> > > 22.04 Yoga (update Ubuntu  packages)
> > > > > > - Update of second controller node from Ubuntu 20.04 Ussuri to
> > > Ubuntu 22.04 Yoga (update OpenStack and Ubuntu packages)
> > > > > > - Update of the compute nodes from Ubuntu 20.04 Ussuri to Ubuntu
> > > 22.04 Yoga (update OpenStack and Ubuntu packages)
> > > > > > 
> > > > > > 
> > > > > > We would do the same when migrating from Ubuntu 22.04 Yoga to Ubuntu
> > > 24.04 and the OpenStack xyz release (where xyz
> > > > > > is the LTS release used in Ubuntu 24.04)
> > > > > > 
> > > > > > Is this supposed to work or am I missing something ?
> > > > > > 
> > > > > > If we decide to migrate to Ubuntu, the first step would be the
> > > reinstallation with Ubuntu 22.04/Yoga of each node
> > > > > > currently running CentOS8 stream/Yoga.
> > > > > > I suppose there are no problems having in the same OpenStack
> > > installation nodes running the same
> > > > > > Openstack version but different operating systems, or am I wrong ?
> > > > > > 
> > > > > > Thanks, Massimo
> > > > > > 
> > > > 
> > > 
> > > 
> 
> 
> 
> 



