mark at stackhpc.com
Thu Jun 27 08:51:53 UTC 2019
On Wed, 26 Jun 2019 at 19:34, Ignazio Cassano <ignaziocassano at gmail.com> wrote:
> Has anyone tried to migrate an existing OpenStack installation to Kolla containers?
I'm aware of two people currently working on that: Gregory Orange and
one of my colleagues, Pierre Riteau. Pierre is away at the moment, so I
hope he doesn't mind me quoting him from an email to Gregory.
"I am indeed working on a similar migration using Kolla Ansible with
Kayobe, starting from a non-containerised OpenStack deployment based
on CentOS RPMs.
Existing OpenStack services are deployed across several controller
nodes and all sit behind HAProxy, including for internal endpoints.
We have additional controller nodes that we use to deploy
containerised services. If you don't have the luxury of additional
nodes, it will be more difficult, as you will need to avoid clashes
between old and new processes listening on the same ports.
The method I am using resembles your second suggestion; however, I am
deploying only one containerised service at a time, in order to
validate each of them independently.
I use the --tags option of kolla-ansible to restrict Ansible to
specific roles, and when I am happy with the resulting configuration I
update HAProxy to point to the new controllers.
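For instance, a run restricted to the Glance role could look like the
following (the inventory name is illustrative):

    kolla-ansible -i multinode deploy --tags glance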
As long as the configuration matches, this should be completely
transparent for purely HTTP-based services like Glance. You need to be
more careful with services that include components listening for RPC,
such as Nova: if the new nova.conf is incorrect and you've deployed a
nova-conductor that uses it, you could get failed instance launches.
Some roles depend on others: if you are deploying the
neutron-openvswitch-agent, you need to run the openvswitch role as
well.
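Following that example, a sketch of deploying the Open vSwitch agent
would include both tags (tag names assumed to match the role names):

    kolla-ansible -i multinode deploy --tags openvswitch,neutron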
I suggest starting with migrating Glance as it doesn't have any
internal services and is easy to validate. Note that properly
migrating Keystone requires keeping existing Fernet keys around, so
any token stays valid until the time it is expected to stop working
(which is fairly complex; see …).
While initially I was using an approach similar to your first
suggestion, it can have side effects, since Kolla Ansible uses the
enable_* variables when templating configuration. As an example, most
services will only have notifications enabled if enable_ceilometer is
true.
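As a minimal illustration of this coupling, the flag lives in
globals.yml and changes the rendered configuration of other services
(the comment describes the behaviour mentioned above, not an
exhaustive list):

    # /etc/kolla/globals.yml
    # When enabled, Kolla Ansible also templates notification settings
    # into the configuration of services such as Nova and Cinder.
    enable_ceilometer: "yes"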
I've added existing control plane nodes to the Kolla Ansible inventory
as separate groups, which allows me to use the existing database and
RabbitMQ for the containerised services.
For example, instead of mapping the mariadb group to the standard
control group, you may map it to a dedicated group for the existing
hosts, as sketched below.
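A minimal sketch of what this could look like in the inventory; the
group name and hostnames are assumptions for illustration, not taken
from the original message:

    # Stock inventory: MariaDB runs on the Kolla-managed controllers.
    [mariadb:children]
    control

    # Adjusted inventory: point MariaDB at the existing control plane.
    [existing_control]
    oldctl01.example.com
    oldctl02.example.com
    oldctl03.example.com

    [mariadb:children]
    existing_control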
I still have to perform the migration of these underlying services to
the new control plane; I will let you know if there are any hurdles.
A few random things to note:
- if run on existing control plane hosts, the baremetal role removes
some packages listed in `redhat_pkg_removals`, which can trigger the
removal of OpenStack dependencies that use them! I've changed this
variable to an empty list (see the first sketch after this list).
- compare your existing deployment with a Kolla Ansible one to check
for differences in endpoints, configuration files, database users,
service users, etc. (some comparison commands are sketched after this
list). For Heat, Kolla uses the domain heat_user_domain, while your
existing deployment may use another one (and this is hardcoded in the
Kolla Heat image). Kolla Ansible uses the "service" project, while a
couple of deployments I worked with were using "services". This
shouldn't matter, except that there was a bug in Kolla which prevented
it from setting the roles correctly:
https://bugs.launchpad.net/kolla/+bug/1791896 (now fixed in the latest
Rocky and Queens images)
- the ml2_conf.ini generated for Neutron uses physical network names
like physnet1, physnet2… you may want to override them (see the
override sketch after this list)
- sometimes it can be easier to change your existing deployment to
match Kolla Ansible settings, rather than configure Kolla Ansible to
match your deployment."
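To avoid the package-removal issue above, the variable can be emptied
in your Kolla Ansible configuration; a minimal sketch, assuming
overrides are kept in globals.yml:

    # /etc/kolla/globals.yml
    # Prevent the baremetal role from removing packages on hosts that
    # still run non-containerised OpenStack services.
    redhat_pkg_removals: []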
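For the deployment comparison, the standard OpenStack client is
usually enough to spot most differences; a sketch of commands to run
against both the old and the new control plane:

    # Compare API endpoints, service users and role assignments.
    openstack endpoint list
    openstack user list --domain default
    # Use whichever project name your deployment actually has
    # ("service" or "services").
    openstack role assignment list --names --project service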
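For the physical network names, Kolla Ansible merges per-service
overrides from its custom configuration directory; a sketch, assuming
the default /etc/kolla/config path and an illustrative physnet name:

    # /etc/kolla/config/neutron/ml2_conf.ini
    [ml2_type_vlan]
    # Replace the generated physnet1, physnet2… names with your own.
    network_vlan_ranges = datacentre:100:200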