<div dir="auto">Many thanks<div dir="auto">Ignazio</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, 2 Oct 2019 at 09:44, Pierre Riteau <<a href="mailto:pierre@stackhpc.com">pierre@stackhpc.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi everyone,<br>
<br>
I hope you don't mind me reviving this thread, to let you know I wrote<br>
an article after we successfully completed the migration of a running<br>
OpenStack deployment to Kolla:<br>
<a href="http://www.stackhpc.com/migrating-to-kolla.html" rel="noreferrer noreferrer" target="_blank">http://www.stackhpc.com/migrating-to-kolla.html</a><br>
<br>
Don't hesitate to contact me if you have more questions about how this<br>
type of migration can be performed.<br>
<br>
Pierre<br>
<br>
On Mon, 1 Jul 2019 at 14:02, Ignazio Cassano <<a href="mailto:ignaziocassano@gmail.com" target="_blank" rel="noreferrer">ignaziocassano@gmail.com</a>> wrote:<br>
><br>
> I checked them and modified them to fit the new installation.<br>
> Thanks<br>
> Ignazio<br>
><br>
> On Mon, 1 Jul 2019 at 13:36, Mohammed Naser <<a href="mailto:mnaser@vexxhost.com" target="_blank" rel="noreferrer">mnaser@vexxhost.com</a>> wrote:<br>
>><br>
>> You should check your cell mapping records inside Nova. They're probably not right if you moved your database and RabbitMQ.<br>
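>> For example, the mappings can be inspected and repointed with nova-manage (a sketch; the URLs are illustrative):<br>
>><br>
>> nova-manage cell_v2 list_cells --verbose<br>
>> nova-manage cell_v2 update_cell --cell_uuid CELL_UUID \<br>
>>     --transport-url rabbit://openstack:PASSWORD@NEW_VIP:5672/ \<br>
>>     --database_connection mysql+pymysql://nova:PASSWORD@NEW_VIP/nova<br>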
>><br>
>> Sorry for top posting, this is from a phone.<br>
>><br>
>> On Mon., Jul. 1, 2019, 5:46 a.m. Ignazio Cassano, <<a href="mailto:ignaziocassano@gmail.com" target="_blank" rel="noreferrer">ignaziocassano@gmail.com</a>> wrote:<br>
>>><br>
>>> PS<br>
>>> I presume the problem is Neutron, because instances on the new KVM nodes remain in building state and do not acquire an address.<br>
>>> Probably the Neutron DB imported from the old OpenStack installation has some differences... I must check for differences between the old and new Neutron service configuration files.<br>
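>>> (Some checks that can narrow this down, assuming the standard openstack CLI and default Kolla container names:<br>
>>> openstack network agent list                  # are the DHCP/OVS agents on the new nodes alive?<br>
>>> openstack port list --server SERVER_UUID      # was a port created for the stuck instance?<br>
>>> docker logs neutron_openvswitch_agent | tail  # any errors binding the port?)<br>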
>>> Ignazio<br>
>>><br>
>>> On Mon, 1 Jul 2019 at 10:10, Mark Goddard <<a href="mailto:mark@stackhpc.com" target="_blank" rel="noreferrer">mark@stackhpc.com</a>> wrote:<br>
>>>><br>
>>>> It sounds like you got quite close to having this working. I'd suggest<br>
>>>> debugging this instance build failure. One difference with Kolla is<br>
>>>> that we run libvirt inside a container. Have you stopped libvirt from<br>
>>>> running on the host?<br>
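>>>> For example, on each compute host (a sketch; nova_libvirt is the default Kolla container name):<br>
>>>><br>
>>>> systemctl stop libvirtd && systemctl disable libvirtd   # free the socket for the containerised libvirt<br>
>>>> docker ps | grep nova_libvirt                           # confirm the containerised libvirtd is up<br>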
>>>> Mark<br>
>>>><br>
>>>> On Sun, 30 Jun 2019 at 09:55, Ignazio Cassano <<a href="mailto:ignaziocassano@gmail.com" target="_blank" rel="noreferrer">ignaziocassano@gmail.com</a>> wrote:<br>
>>>> ><br>
>>>> > Hi Mark,<br>
>>>> > let me explain what I am trying to do.<br>
>>>> > I have a Queens installation based on CentOS and Pacemaker, with some instances and Heat stacks.<br>
>>>> > I would like to have another installation with the same instances, projects, stacks... I'd like to have the same UUIDs for all objects (users, projects, instances and so on), because everything is controlled by a cloud management platform we wrote.<br>
>>>> ><br>
>>>> > I stopped the controllers on the old Queens installation, backing up the OpenStack database.<br>
>>>> > I installed the new Kolla OpenStack Queens on three new controllers with the same addresses as the old installation, VIP included.<br>
>>>> > One of the three controllers is also a KVM node on Queens.<br>
>>>> > I stopped all containers except rabbitmq, keepalived, haproxy and mariadb.<br>
>>>> > I deleted all the OpenStack databases in the mariadb container and imported the old tables, changing the rabbit address to point to the new RabbitMQ cluster.<br>
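>>>> > (For reference, the dump and import were along these lines; database names and credentials are illustrative:<br>
>>>> > mysqldump -u root -p --databases keystone glance nova neutron cinder heat > openstack.sql   # on an old controller<br>
>>>> > docker exec -i mariadb mysql -u root -pPASSWORD < openstack.sql   # on a new controller)<br>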
>>>> > I restarted the containers.<br>
>>>> > After changing the rabbit address on the old KVM nodes, I can see the old virtual machines and open a console on them.<br>
>>>> > I can see all networks (tenant and provider) of the old installation, but when I try to create a new instance on the new KVM node, it remains in building state.<br>
>>>> > It seems it cannot acquire an address.<br>
>>>> > Storage between the old and new installations is shared on NetApp NFS, so I can see the Cinder volumes.<br>
>>>> > I suppose the DB structure is different between a Kolla installation and a manual installation!?<br>
>>>> > What is wrong?<br>
>>>> > Thanks<br>
>>>> > Ignazio<br>
>>>> ><br>
>>>> ><br>
>>>> ><br>
>>>> ><br>
>>>> > On Thu, 27 Jun 2019 at 16:44, Mark Goddard <<a href="mailto:mark@stackhpc.com" target="_blank" rel="noreferrer">mark@stackhpc.com</a>> wrote:<br>
>>>> >><br>
>>>> >> On Thu, 27 Jun 2019 at 14:46, Ignazio Cassano <<a href="mailto:ignaziocassano@gmail.com" target="_blank" rel="noreferrer">ignaziocassano@gmail.com</a>> wrote:<br>
>>>> >> ><br>
>>>> >> > Sorry for my question.<br>
>>>> >> > Nothing needs to change, because the endpoints refer to the HAProxy VIPs.<br>
>>>> >> > So if your new Glance works fine, you just change the HAProxy backends for Glance.<br>
>>>> >> > Regards<br>
>>>> >> > Ignazio<br>
>>>> >><br>
>>>> >> That's correct - only the HAProxy backend needs to be updated.<br>
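>>>> >> As a sketch, that switch is just swapping the server lines in the Glance backend (names and addresses are illustrative):<br>
>>>> >><br>
>>>> >> backend glance_api_back<br>
>>>> >>     mode http<br>
>>>> >>     server newctrl1 10.0.0.11:9292 check inter 2000 rise 2 fall 5<br>
>>>> >>     server newctrl2 10.0.0.12:9292 check inter 2000 rise 2 fall 5<br>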
>>>> >><br>
>>>> >> ><br>
>>>> >> ><br>
>>>> >> > On Thu, 27 Jun 2019 at 15:21, Ignazio Cassano <<a href="mailto:ignaziocassano@gmail.com" target="_blank" rel="noreferrer">ignaziocassano@gmail.com</a>> wrote:<br>
>>>> >> >><br>
>>>> >> >> Hello Mark,<br>
>>>> >> >> let me verify that I understood your method.<br>
>>>> >> >><br>
>>>> >> >> You have old controllers, haproxy, mariadb and Nova computes.<br>
>>>> >> >> You installed three new controllers, but the kolla-ansible inventory contains the old mariadb and rabbit servers.<br>
>>>> >> >> You are deploying a single service at a time on the new controllers, starting with Glance.<br>
>>>> >> >> When you deploy Glance on the new controllers, does it change the Glance endpoint in the old mariadb DB?<br>
>>>> >> >> Regards<br>
>>>> >> >> Ignazio<br>
>>>> >> >><br>
>>>> >> >> On Thu, 27 Jun 2019 at 10:52, Mark Goddard <<a href="mailto:mark@stackhpc.com" target="_blank" rel="noreferrer">mark@stackhpc.com</a>> wrote:<br>
>>>> >> >>><br>
>>>> >> >>> On Wed, 26 Jun 2019 at 19:34, Ignazio Cassano <<a href="mailto:ignaziocassano@gmail.com" target="_blank" rel="noreferrer">ignaziocassano@gmail.com</a>> wrote:<br>
>>>> >> >>> ><br>
>>>> >> >>> > Hello,<br>
>>>> >> >>> > Has anyone tried to migrate an existing OpenStack installation to Kolla containers?<br>
>>>> >> >>><br>
>>>> >> >>> Hi,<br>
>>>> >> >>><br>
>>>> >> >>> I'm aware of two people currently working on that: Gregory Orange and<br>
>>>> >> >>> one of my colleagues, Pierre Riteau. Pierre is away currently, so I<br>
>>>> >> >>> hope he doesn't mind me quoting him from an email to Gregory.<br>
>>>> >> >>><br>
>>>> >> >>> Mark<br>
>>>> >> >>><br>
>>>> >> >>> "I am indeed working on a similar migration using Kolla Ansible with<br>
>>>> >> >>> Kayobe, starting from a non-containerised OpenStack deployment based<br>
>>>> >> >>> on CentOS RPMs.<br>
>>>> >> >>> Existing OpenStack services are deployed across several controller<br>
>>>> >> >>> nodes and all sit behind HAProxy, including for internal endpoints.<br>
>>>> >> >>> We have additional controller nodes that we use to deploy<br>
>>>> >> >>> containerised services. If you don't have the luxury of additional<br>
>>>> >> >>> nodes, it will be more difficult as you will need to avoid processes<br>
>>>> >> >>> clashing when listening on the same port.<br>
>>>> >> >>><br>
>>>> >> >>> The method I am using resembles your second suggestion, however I am<br>
>>>> >> >>> deploying only one containerised service at a time, in order to<br>
>>>> >> >>> validate each of them independently.<br>
>>>> >> >>> I use the --tags option of kolla-ansible to restrict Ansible to<br>
>>>> >> >>> specific roles, and when I am happy with the resulting configuration I<br>
>>>> >> >>> update HAProxy to point to the new controllers.<br>
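>>>> >> >>> For example (a sketch; the inventory name is illustrative):<br>
>>>> >> >>><br>
>>>> >> >>> kolla-ansible -i multinode deploy --tags glance<br>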
>>>> >> >>><br>
>>>> >> >>> As long as the configuration matches, this should be completely<br>
>>>> >> >>> transparent for purely HTTP-based services like Glance. You need to be<br>
>>>> >> >>> more careful with services that include components listening for RPC,<br>
>>>> >> >>> such as Nova: if the new nova.conf is incorrect and you've deployed a<br>
>>>> >> >>> nova-conductor that uses it, you could get failed instance launches.<br>
>>>> >> >>> Some roles depend on others: if you are deploying the<br>
>>>> >> >>> neutron-openvswitch-agent, you need to run the openvswitch role as<br>
>>>> >> >>> well.<br>
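>>>> >> >>> For instance, a sketch of deploying the agent together with its dependency:<br>
>>>> >> >>><br>
>>>> >> >>> kolla-ansible -i multinode deploy --tags openvswitch,neutron<br>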
>>>> >> >>><br>
>>>> >> >>> I suggest starting with migrating Glance as it doesn't have any<br>
>>>> >> >>> internal services and is easy to validate. Note that properly<br>
>>>> >> >>> migrating Keystone requires keeping existing Fernet keys around, so<br>
>>>> >> >>> any token stays valid until the time it is expected to stop working<br>
>>>> >> >>> (which is fairly complex, see<br>
>>>> >> >>> <a href="https://bugs.launchpad.net/kolla-ansible/+bug/1809469" rel="noreferrer noreferrer" target="_blank">https://bugs.launchpad.net/kolla-ansible/+bug/1809469</a>).<br>
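>>>> >> >>> (A rough sketch, assuming default key paths and the keystone_fernet_tokens<br>
>>>> >> >>> volume name used by Kolla Ansible; see the bug above for rotation details:<br>
>>>> >> >>> docker volume inspect keystone_fernet_tokens   # find the mountpoint<br>
>>>> >> >>> cp -a /etc/keystone/fernet-keys/. \<br>
>>>> >> >>>     /var/lib/docker/volumes/keystone_fernet_tokens/_data/ )<br>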
>>>> >> >>><br>
>>>> >> >>> While initially I was using an approach similar to your first<br>
>>>> >> >>> suggestion, it can have side effects since Kolla Ansible uses these<br>
>>>> >> >>> variables when templating configuration. As an example, most services<br>
>>>> >> >>> will only have notifications enabled if enable_ceilometer is true.<br>
>>>> >> >>><br>
>>>> >> >>> I've added existing control plane nodes to the Kolla Ansible inventory<br>
>>>> >> >>> as separate groups, which allows me to use the existing database and<br>
>>>> >> >>> RabbitMQ for the containerised services.<br>
>>>> >> >>> For example, instead of:<br>
>>>> >> >>><br>
>>>> >> >>> [mariadb:children]<br>
>>>> >> >>> control<br>
>>>> >> >>><br>
>>>> >> >>> you may have:<br>
>>>> >> >>><br>
>>>> >> >>> [mariadb:children]<br>
>>>> >> >>> oldcontrol_db<br>
>>>> >> >>><br>
>>>> >> >>> I still have to perform the migration of these underlying services to<br>
>>>> >> >>> the new control plane; I will let you know if there are any hurdles.<br>
>>>> >> >>><br>
>>>> >> >>> A few random things to note:<br>
>>>> >> >>><br>
>>>> >> >>> - if run on existing control plane hosts, the baremetal role removes<br>
>>>> >> >>> some packages listed in `redhat_pkg_removals` which can trigger the<br>
>>>> >> >>> removal of OpenStack dependencies using them! I've changed this<br>
>>>> >> >>> variable to an empty list.<br>
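>>>> >> >>> A minimal sketch of that override, e.g. in /etc/kolla/globals.yml:<br>
>>>> >> >>><br>
>>>> >> >>> redhat_pkg_removals: []<br>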
>>>> >> >>> - compare your existing deployment with a Kolla Ansible one to check<br>
>>>> >> >>> for differences in endpoints, configuration files, database users,<br>
>>>> >> >>> service users, etc. For Heat, Kolla uses the domain heat_user_domain,<br>
>>>> >> >>> while your existing deployment may use another one (and this is<br>
>>>> >> >>> hardcoded in the Kolla Heat image). Kolla Ansible uses the "service"<br>
>>>> >> >>> project while a couple of deployments I worked with were using<br>
>>>> >> >>> "services". This shouldn't matter, except there was a bug in Kolla<br>
>>>> >> >>> which prevented it from setting the roles correctly:<br>
>>>> >> >>> <a href="https://bugs.launchpad.net/kolla/+bug/1791896" rel="noreferrer noreferrer" target="_blank">https://bugs.launchpad.net/kolla/+bug/1791896</a> (now fixed in latest<br>
>>>> >> >>> Rocky and Queens images)<br>
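>>>> >> >>> A couple of illustrative checks for these differences:<br>
>>>> >> >>><br>
>>>> >> >>> openstack domain list                                       # is heat_user_domain present?<br>
>>>> >> >>> openstack role assignment list --project service --names    # or "services"?<br>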
>>>> >> >>> - the ml2_conf.ini generated for Neutron uses physical network<br>
>>>> >> >>> names like physnet1, physnet2… you may want to override<br>
>>>> >> >>> bridge_mappings completely.<br>
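>>>> >> >>> For example, a sketch of such an override via the usual config<br>
>>>> >> >>> directory (the bridge name is illustrative):<br>
>>>> >> >>><br>
>>>> >> >>> # /etc/kolla/config/neutron/ml2_conf.ini<br>
>>>> >> >>> [ovs]<br>
>>>> >> >>> bridge_mappings = physnet1:br-ex<br>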
>>>> >> >>> - sometimes it could be easier to change your existing<br>
>>>> >> >>> deployment to match Kolla Ansible settings, rather than configure<br>
>>>> >> >>> Kolla Ansible to match your deployment."<br>
>>>> >> >>><br>
>>>> >> >>> > Thanks<br>
>>>> >> >>> > Ignazio<br>
>>>> >> >>> ><br>
<br>
</blockquote></div>