[kolla-ansible] migration

Mohammed Naser mnaser at vexxhost.com
Mon Jul 1 11:36:43 UTC 2019


You should check your cell mapping records inside Nova. They're probably
not right if you moved your database and RabbitMQ.
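
For reference, checking and fixing the mappings might look something like
this (a sketch: the connection strings are placeholders, and under kolla
you would run nova-manage inside the nova_api container):

nova-manage cell_v2 list_cells --verbose
nova-manage cell_v2 update_cell --cell_uuid <cell-uuid> \
    --database_connection "mysql+pymysql://nova:PASSWORD@NEW_VIP/nova" \
    --transport-url "rabbit://openstack:PASSWORD@NEW_VIP:5672/"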

Sorry for top posting; this is from a phone.

On Mon., Jul. 1, 2019, 5:46 a.m. Ignazio Cassano, <ignaziocassano at gmail.com>
wrote:

> PS
> I presume the problem is neutron, because instances on new kvm nodes
> remain in building state and do not acquire an address.
> Probably the neutron db imported from the old openstack installation has
> some differences... probably I must check the differences between the old
> and new neutron services' configuration files.
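>
> For example (just a sketch), I could check whether the agents are alive
> and whether the instance's port is actually created:
>
> openstack network agent list
> openstack port list --server <instance-uuid>
>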
> Ignazio
>
> On Mon, 1 Jul 2019 at 10:10, Mark Goddard <mark at stackhpc.com>
> wrote:
>
>> It sounds like you got quite close to having this working. I'd suggest
>> debugging this instance build failure. One difference with kolla is
>> that we run libvirt inside a container. Have you stopped libvirt from
>> running on the host?
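>>
>> For example, something like this on each hypervisor (a sketch):
>>
>> systemctl stop libvirtd
>> systemctl disable libvirtd
>>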
>> Mark
>>
>> On Sun, 30 Jun 2019 at 09:55, Ignazio Cassano <ignaziocassano at gmail.com>
>> wrote:
>> >
>> > Hi Mark,
>> > let me explain what I am trying.
>> > I have a queens installation based on centos and pacemaker, with some
>> > instances and heat stacks.
>> > I would like to have another installation with the same instances,
>> > projects, stacks... I'd like to have the same uuid for all objects
>> > (users, projects, instances and so on), because it is controlled by a
>> > cloud management platform we wrote.
>> >
>> > I stopped the controllers on the old queens installation, backing up
>> > the openstack database.
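>> >
>> > For example (a sketch of the kind of backup; flags may vary):
>> >
>> > mysqldump -u root -p --all-databases --single-transaction \
>> >     --routines --events > openstack-db-backup.sql
>> >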
>> > I installed the new kolla openstack queens on three new controllers
>> > with the same addresses as the old installation, VIP as well.
>> > One of the three controllers is also a kvm node on queens.
>> > I stopped all containers except rabbitmq, keepalived, haproxy and
>> > mariadb.
>> > I deleted all the openstack databases on the mariadb container and
>> > imported the old tables, changing the rabbit address to point to the
>> > new rabbit cluster.
>> > I restarted the containers.
>> > After changing the rabbit address on the old kvm nodes, I can see the
>> > old virtual machines and open a console on them.
>> > I can see all the networks (tenant and provider) of the old
>> > installation, but when I try to create a new instance on the new kvm
>> > node, it remains in building state.
>> > It seems it cannot acquire an address.
>> > Storage between the old and new installations is shared on NetApp NFS,
>> > so I can see the cinder volumes.
>> > I suppose the db structure is different between a kolla installation
>> > and a manual installation!?
>> > What is wrong ?
>> > Thanks
>> > Ignazio
>> >
>> >
>> >
>> >
>> > On Thu, 27 Jun 2019 at 16:44, Mark Goddard <
>> > mark at stackhpc.com> wrote:
>> >>
>> >> On Thu, 27 Jun 2019 at 14:46, Ignazio Cassano <
>> ignaziocassano at gmail.com> wrote:
>> >> >
>> >> > Sorry for my question.
>> >> > There is no need to change anything, because the endpoints refer to
>> >> > the haproxy VIPs.
>> >> > So if your new glance works fine, you just change the haproxy
>> >> > backends for glance.
>> >> > Regards
>> >> > Ignazio
>> >>
>> >> That's correct - only the haproxy backend needs to be updated.
>> >>
>> >> >
>> >> >
>> >> > On Thu, 27 Jun 2019 at 15:21, Ignazio Cassano <
>> >> > ignaziocassano at gmail.com> wrote:
>> >> >>
>> >> >> Hello Mark,
>> >> >> let me verify that I understood your method.
>> >> >>
>> >> >> You have old controllers, haproxy, mariadb and nova computes.
>> >> >> You installed three new controllers, but the kolla-ansible
>> >> >> inventory contains the old mariadb and rabbit servers.
>> >> >> You are deploying a single service at a time on the new
>> >> >> controllers, starting with glance.
>> >> >> When you deploy glance on the new controllers, does it change the
>> >> >> glance endpoint in the old mariadb db?
>> >> >> Regards
>> >> >> Ignazio
>> >> >>
>> >> >> On Thu, 27 Jun 2019 at 10:52, Mark Goddard <
>> >> >> mark at stackhpc.com> wrote:
>> >> >>>
>> >> >>> On Wed, 26 Jun 2019 at 19:34, Ignazio Cassano <
>> ignaziocassano at gmail.com> wrote:
>> >> >>> >
>> >> >>> > Hello,
>> >> >>> > Has anyone tried to migrate an existing openstack installation
>> >> >>> > to kolla containers?
>> >> >>>
>> >> >>> Hi,
>> >> >>>
>> >> >>> I'm aware of two people currently working on that: Gregory Orange
>> >> >>> and one of my colleagues, Pierre Riteau. Pierre is away currently,
>> >> >>> so I hope he doesn't mind me quoting him from an email to Gregory.
>> >> >>>
>> >> >>> Mark
>> >> >>>
>> >> >>> "I am indeed working on a similar migration using Kolla Ansible
>> with
>> >> >>> Kayobe, starting from a non-containerised OpenStack deployment
>> based
>> >> >>> on CentOS RPMs.
>> >> >>> Existing OpenStack services are deployed across several controller
>> >> >>> nodes and all sit behind HAProxy, including for internal endpoints.
>> >> >>> We have additional controller nodes that we use to deploy
>> >> >>> containerised services. If you don't have the luxury of additional
>> >> >>> nodes, it will be more difficult as you will need to avoid
>> processes
>> >> >>> clashing when listening on the same port.
>> >> >>>
>> >> >>> The method I am using resembles your second suggestion; however,
>> >> >>> I am deploying only one containerised service at a time, in order
>> >> >>> to validate each of them independently.
>> >> >>> I use the --tags option of kolla-ansible to restrict Ansible to
>> >> >>> specific roles, and when I am happy with the resulting
>> >> >>> configuration I update HAProxy to point to the new controllers.
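>> >> >>>
>> >> >>> For example, something like this (a sketch; the inventory path is
>> >> >>> an assumption):
>> >> >>>
>> >> >>> kolla-ansible -i /etc/kolla/inventory deploy --tags glance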
>> >> >>>
>> >> >>> As long as the configuration matches, this should be completely
>> >> >>> transparent for purely HTTP-based services like Glance. You need
>> >> >>> to be more careful with services that include components listening
>> >> >>> for RPC, such as Nova: if the new nova.conf is incorrect and
>> >> >>> you've deployed a nova-conductor that uses it, you could get
>> >> >>> failed instance launches.
>> >> >>> Some roles depend on others: if you are deploying the
>> >> >>> neutron-openvswitch-agent, you need to run the openvswitch role as
>> >> >>> well.
>> >> >>>
>> >> >>> I suggest starting with migrating Glance as it doesn't have any
>> >> >>> internal services and is easy to validate. Note that properly
>> >> >>> migrating Keystone requires keeping existing Fernet keys around, so
>> >> >>> any token stays valid until the time it is expected to stop working
>> >> >>> (which is fairly complex, see
>> >> >>> https://bugs.launchpad.net/kolla-ansible/+bug/1809469).
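>> >> >>>
>> >> >>> A sketch of carrying the keys over (the path and container name
>> >> >>> are assumptions and may differ per release):
>> >> >>>
>> >> >>> # copy the old key repository into the running keystone container
>> >> >>> docker cp /etc/keystone/fernet-keys/. \
>> >> >>>     keystone:/etc/keystone/fernet-keys/
>> >> >>> # fix ownership inside the container
>> >> >>> docker exec -u root keystone \
>> >> >>>     chown -R keystone:keystone /etc/keystone/fernet-keys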
>> >> >>>
>> >> >>> While initially I was using an approach similar to your first
>> >> >>> suggestion, it can have side effects since Kolla Ansible uses
>> >> >>> these variables when templating configuration. As an example,
>> >> >>> most services will only have notifications enabled if
>> >> >>> enable_ceilometer is true.
>> >> >>>
>> >> >>> I've added existing control plane nodes to the Kolla Ansible
>> >> >>> inventory as separate groups, which allows me to use the existing
>> >> >>> database and RabbitMQ for the containerised services.
>> >> >>> For example, instead of:
>> >> >>>
>> >> >>> [mariadb:children]
>> >> >>> control
>> >> >>>
>> >> >>> you may have:
>> >> >>>
>> >> >>> [mariadb:children]
>> >> >>> oldcontrol_db
>> >> >>>
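>> >> >>> with the group itself pointing at the existing hosts, e.g.
>> >> >>> (hostnames are hypothetical):
>> >> >>>
>> >> >>> [oldcontrol_db]
>> >> >>> oldctl01
>> >> >>> oldctl02
>> >> >>> oldctl03
>> >> >>>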
>> >> >>> I still have to perform the migration of these underlying
>> >> >>> services to the new control plane; I will let you know if there
>> >> >>> are any hurdles.
>> >> >>>
>> >> >>> A few random things to note:
>> >> >>>
>> >> >>> - if run on existing control plane hosts, the baremetal role
>> >> >>> removes some packages listed in `redhat_pkg_removals`, which can
>> >> >>> trigger the removal of OpenStack dependencies using them! I've
>> >> >>> changed this variable to an empty list (see the first sketch
>> >> >>> after this list).
>> >> >>> - compare your existing deployment with a Kolla Ansible one to
>> >> >>> check for differences in endpoints, configuration files, database
>> >> >>> users, service users, etc. (a few comparison commands are
>> >> >>> sketched below). For Heat, Kolla uses the domain
>> >> >>> heat_user_domain, while your existing deployment may use another
>> >> >>> one (and this is hardcoded in the Kolla Heat image). Kolla
>> >> >>> Ansible uses the "service" project, while a couple of deployments
>> >> >>> I worked with were using "services". This shouldn't matter,
>> >> >>> except there was a bug in Kolla which prevented it from setting
>> >> >>> the roles correctly:
>> >> >>> https://bugs.launchpad.net/kolla/+bug/1791896 (now fixed in the
>> >> >>> latest Rocky and Queens images)
>> >> >>> - the ml2_conf.ini generated for Neutron uses physical network
>> >> >>> names like physnet1, physnet2… you may want to override
>> >> >>> bridge_mappings completely (a sketch follows below).
>> >> >>> - sometimes it could be easier to change your existing
>> >> >>> deployment to match Kolla Ansible settings, rather than to
>> >> >>> configure Kolla Ansible to match your deployment."
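>> >> >>>
>> >> >>> For the package-removal item, the override might look like this
>> >> >>> in globals.yml or group_vars (a sketch):
>> >> >>>
>> >> >>> redhat_pkg_removals: []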
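>> >> >>>
>> >> >>> For the deployment comparison, a few read-only commands can
>> >> >>> surface most of the differences (a sketch):
>> >> >>>
>> >> >>> openstack endpoint list
>> >> >>> openstack user list --domain default
>> >> >>> openstack role assignment list --project service --names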
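>> >> >>>
>> >> >>> And for the physical network names, bridge_mappings can be
>> >> >>> overridden in the agent configuration, e.g. via kolla's custom
>> >> >>> config directory (a sketch; bridge and network names are
>> >> >>> examples):
>> >> >>>
>> >> >>> [ovs]
>> >> >>> bridge_mappings = physnet1:br-ex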
>> >> >>>
>> >> >>> > Thanks
>> >> >>> > Ignazio
>> >> >>> >
>>
>