ignaziocassano at gmail.com
Sun Jun 30 08:55:02 UTC 2019
Let me explain what I am trying to do.
I have a Queens installation based on CentOS and Pacemaker, with some
instances and Heat stacks.
I would like to have another installation with the same instances, projects,
stacks... I'd like to keep the same UUIDs for all objects (users, projects,
instances and so on), because everything is controlled by a cloud management
platform we wrote.
I stopped the controllers on the old Queens installation, backing up the
OpenStack database. I installed the new Kolla OpenStack Queens on three new
controllers with the same addresses as the old installation, VIP included.
One of the three controllers is also a KVM compute node on Queens.
I stopped all containers except rabbitmq, keepalived, mariadb and haproxy,
then deleted all the OpenStack databases in the mariadb container and
imported the old tables, changing the RabbitMQ address to point to the new
RabbitMQ. I restarted the containers.
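Roughly, the database move looked like this (a sketch only; container name,
file names and password are illustrative):

# on the old installation, before stopping the controllers
mysqldump --all-databases --single-transaction > openstack_backup.sql

# on the new Kolla deployment, import into the mariadb container
docker exec -i mariadb mysql -uroot -pPASSWORD < openstack_backup.sql

# then point each service at the new RabbitMQ in its config, e.g.:
# transport_url = rabbit://openstack:PASSWORD@NEW_RABBIT_VIP:5672/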
After changing the RabbitMQ address on the old KVM nodes, I can see the old
virtual machines and I can open consoles on them.
I can see all the networks (tenant and provider) of the old installation, but
when I try to create a new instance on the new KVM node, it remains in BUILD
state. It seems it cannot acquire an address.
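To narrow this down, checks along these lines (standard OpenStack CLI; the
UUID placeholder is illustrative) should show whether a port ever gets an IP:

openstack network agent list                  # are the DHCP/L2 agents alive?
openstack server show <instance-uuid>         # any fault message?
openstack port list --server <instance-uuid>  # did the port get an IP?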
Storage is shared between the old and new installations on NetApp NFS, so I
can see the Cinder volumes.
I suppose the DB structure differs between a Kolla installation and a manual
installation!?
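If it does, a quick way to check would be diffing schema-only dumps (a
sketch; file names illustrative):

# dump the schema of one database, e.g. nova, on each side, then compare
mysqldump --no-data nova > nova_schema_kolla.sql
diff nova_schema_old.sql nova_schema_kolla.sql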
What is wrong?
On Thu, 27 Jun 2019 at 16:44, Mark Goddard <mark at stackhpc.com> wrote:
> On Thu, 27 Jun 2019 at 14:46, Ignazio Cassano <ignaziocassano at gmail.com>
> wrote:
> > Sorry for my question.
> > Nothing needs to change, because the endpoints refer to haproxy.
> > So if your new glance works fine, you change the haproxy backends for
> > glance.
> > Regards
> > Ignazio
> That's correct - only the haproxy backend needs to be updated.
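> As a sketch (backend name, hostnames and addresses illustrative), that
> means swapping the servers in the glance backend of haproxy.cfg:
>
> backend glance_api_back
>     server newctrl1 10.0.0.11:9292 check
>     server newctrl2 10.0.0.12:9292 check
>     server newctrl3 10.0.0.13:9292 check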
> > On Thu, 27 Jun 2019 at 15:21, Ignazio Cassano <
> ignaziocassano at gmail.com> wrote:
> >> Hello Mark,
> >> let me verify whether I understood your method.
> >> You have old controllers, haproxy, mariadb and nova compute nodes.
> >> You installed three new controllers, but the kolla-ansible inventory
> contains the old mariadb and rabbit servers.
> >> You are deploying a single service at a time on the new controllers,
> starting with glance.
> >> When you deploy glance on the new controllers, does it change the glance
> endpoint in the old mariadb database?
> >> Regards
> >> Ignazio
> >> On Thu, 27 Jun 2019 at 10:52, Mark Goddard <
> mark at stackhpc.com> wrote:
> >>> On Wed, 26 Jun 2019 at 19:34, Ignazio Cassano <
> ignaziocassano at gmail.com> wrote:
> >>> >
> >>> > Hello,
> >>> > Has anyone tried to migrate an existing openstack installation to
> kolla containers?
> >>> Hi,
> >>> I'm aware of two people currently working on that. Gregory Orange and
> >>> one of my colleagues, Pierre Riteau. Pierre is away currently, so I
> >>> hope he doesn't mind me quoting him from an email to Gregory.
> >>> Mark
> >>> "I am indeed working on a similar migration using Kolla Ansible with
> >>> Kayobe, starting from a non-containerised OpenStack deployment based
> >>> on CentOS RPMs.
> >>> Existing OpenStack services are deployed across several controller
> >>> nodes and all sit behind HAProxy, including for internal endpoints.
> >>> We have additional controller nodes that we use to deploy
> >>> containerised services. If you don't have the luxury of additional
> >>> nodes, it will be more difficult as you will need to avoid processes
> >>> clashing when listening on the same port.
> >>> The method I am using resembles your second suggestion, however I am
> >>> deploying only one containerised service at a time, in order to
> >>> validate each of them independently.
> >>> I use the --tags option of kolla-ansible to restrict Ansible to
> >>> specific roles, and when I am happy with the resulting configuration I
> >>> update HAProxy to point to the new controllers.
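> >>> For example, to deploy just the Glance containers (inventory path
> >>> illustrative):
> >>> kolla-ansible -i /etc/kolla/inventory deploy --tags glance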
> >>> As long as the configuration matches, this should be completely
> >>> transparent for purely HTTP-based services like Glance. You need to be
> >>> more careful with services that include components listening for RPC,
> >>> such as Nova: if the new nova.conf is incorrect and you've deployed a
> >>> nova-conductor that uses it, you could get failed instance launches.
> >>> Some roles depend on others: if you are deploying the
> >>> neutron-openvswitch-agent, you need to run the openvswitch role as
> >>> well.
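> >>> e.g., as a sketch:
> >>> kolla-ansible -i /etc/kolla/inventory deploy --tags openvswitch,neutron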
> >>> I suggest starting with migrating Glance as it doesn't have any
> >>> internal services and is easy to validate. Note that properly
> >>> migrating Keystone requires keeping existing Fernet keys around, so
> >>> any token stays valid until the time it is expected to stop working
> >>> (which is fairly complex, see
> >>> https://bugs.launchpad.net/kolla-ansible/+bug/1809469).
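> >>> A sketch of carrying the keys over (paths illustrative; the target is
> >>> wherever your new keystone containers mount the key repository):
> >>> # on an old controller
> >>> tar czf fernet-keys.tgz -C /etc/keystone fernet-keys
> >>> # on each new controller
> >>> tar xzf fernet-keys.tgz -C /path/mounted/into/the/keystone/container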
> >>> While initially I was using an approach similar to your first
> >>> suggestion, it can have side effects since Kolla Ansible uses these
> >>> variables when templating configuration. As an example, most services
> >>> will only have notifications enabled if enable_ceilometer is true.
> >>> I've added existing control plane nodes to the Kolla Ansible inventory
> >>> as separate groups, which allows me to use the existing database and
> >>> RabbitMQ for the containerised services.
> >>> For example, instead of:
> >>> [mariadb:children]
> >>> control
> >>> you may have:
> >>> [mariadb:children]
> >>> oldcontrol_db
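> >>> with a new group listing the existing hosts (hostnames illustrative):
> >>> [oldcontrol_db]
> >>> oldctrl1
> >>> oldctrl2
> >>> oldctrl3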
> >>> I still have to perform the migration of these underlying services to
> >>> the new control plane; I will let you know if there are any hurdles.
> >>> A few random things to note:
> >>> - if run on existing control plane hosts, the baremetal role removes
> >>> some packages listed in `redhat_pkg_removals` which can trigger the
> >>> removal of OpenStack dependencies using them! I've changed this
> >>> variable to an empty list.
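> >>> i.e., in globals.yml (or wherever you override variables):
> >>> redhat_pkg_removals: []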
> >>> - compare your existing deployment with a Kolla Ansible one to check
> >>> for differences in endpoints, configuration files, database users,
> >>> service users, etc. For Heat, Kolla uses the domain heat_user_domain,
> >>> while your existing deployment may use another one (and this is
> >>> hardcoded in the Kolla Heat image). Kolla Ansible uses the "service"
> >>> project while a couple of deployments I worked with were using
> >>> "services". This shouldn't matter, except there was a bug in Kolla
> >>> which prevented it from setting the roles correctly:
> >>> https://bugs.launchpad.net/kolla/+bug/1791896 (now fixed in latest
> >>> Rocky and Queens images)
> >>> - the ml2_conf.ini generated for Neutron uses physical network
> >>> names like physnet1, physnet2… you may want to override
> >>> bridge_mappings completely.
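> >>> For example, in a config override such as
> >>> /etc/kolla/config/neutron/ml2_conf.ini (bridge and physnet names
> >>> illustrative):
> >>> [ovs]
> >>> bridge_mappings = physnet-ext:br-ex,physnet-data:br-data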
> >>> - sometimes it could be easier to change your existing deployment to
> >>> match Kolla Ansible settings, rather than configuring Kolla Ansible to
> >>> match your deployment."
> >>> > Thanks
> >>> > Ignazio
> >>> >