[kolla-ansible] migration
Hello, has anyone tried to migrate an existing OpenStack installation to Kolla containers? Thanks, Ignazio
On Wed, 26 Jun 2019 at 19:34, Ignazio Cassano ignaziocassano@gmail.com wrote:
Hello, has anyone tried to migrate an existing OpenStack installation to Kolla containers?
Hi,
I'm aware of two people currently working on that: Gregory Orange and one of my colleagues, Pierre Riteau. Pierre is away currently, so I hope he doesn't mind me quoting him from an email to Gregory.
Mark
"I am indeed working on a similar migration using Kolla Ansible with Kayobe, starting from a non-containerised OpenStack deployment based on CentOS RPMs. Existing OpenStack services are deployed across several controller nodes and all sit behind HAProxy, including for internal endpoints. We have additional controller nodes that we use to deploy containerised services. If you don't have the luxury of additional nodes, it will be more difficult as you will need to avoid processes clashing when listening on the same port.
The method I am using resembles your second suggestion; however, I am deploying only one containerised service at a time, in order to validate each of them independently. I use the --tags option of kolla-ansible to restrict Ansible to specific roles, and when I am happy with the resulting configuration I update HAProxy to point to the new controllers.
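For illustration, a tag-restricted run like the one described here might look roughly as follows (the inventory path and service tag are examples, not taken from the original email):

kolla-ansible -i /etc/kolla/multinode deploy --tags glance
# regenerate and push configuration for an already-deployed service
kolla-ansible -i /etc/kolla/multinode reconfigure --tags glance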
As long as the configuration matches, this should be completely transparent for purely HTTP-based services like Glance. You need to be more careful with services that include components listening for RPC, such as Nova: if the new nova.conf is incorrect and you've deployed a nova-conductor that uses it, you could get failed instance launches. Some roles depend on others: if you are deploying the neutron-openvswitch-agent, you need to run the openvswitch role as well.
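As a sanity check before relying on a new RPC service, the generated configuration can be compared with the existing one. A rough sketch (the hostname is a placeholder; /etc/kolla/<service>/ is where kolla-ansible writes the generated files on the target host):

# copy the old nova.conf from an existing controller, then compare with the generated one
scp oldctrl01:/etc/nova/nova.conf /tmp/nova.conf.old
diff -u /tmp/nova.conf.old /etc/kolla/nova-conductor/nova.conf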
I suggest starting with migrating Glance as it doesn't have any internal services and is easy to validate. Note that properly migrating Keystone requires keeping existing Fernet keys around, so any token stays valid until the time it is expected to stop working (which is fairly complex, see https://bugs.launchpad.net/kolla-ansible/+bug/1809469).
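One rough sketch of carrying the Fernet keys over (the container names, paths and ownership below are the usual kolla defaults and are assumptions to verify against your images and release):

# fetch the existing key repository from an old controller, copy it into the new keystone container
scp -r oldctrl01:/etc/keystone/fernet-keys /tmp/fernet-keys
docker cp /tmp/fernet-keys/. keystone:/etc/keystone/fernet-keys/
docker exec -u root keystone chown -R keystone:keystone /etc/keystone/fernet-keys
docker restart keystone keystone_fernet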
While initially I was using an approach similar to your first suggestion, it can have side effects since Kolla Ansible uses these variables when templating configuration. As an example, most services will only have notifications enabled if enable_ceilometer is true.
I've added existing control plane nodes to the Kolla Ansible inventory as separate groups, which allows me to use the existing database and RabbitMQ for the containerised services. For example, instead of:
[mariadb:children]
control
you may have:
[mariadb:children]
oldcontrol_db
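(The oldcontrol_db group itself would be defined elsewhere in the inventory; the hostnames below are placeholders for illustration, not taken from the thread.)

[oldcontrol_db]
oldctrl01
oldctrl02
oldctrl03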
I still have to perform the migration of these underlying services to the new control plane; I will let you know if there are any hurdles.
A few random things to note:
- if run on existing control plane hosts, the baremetal role removes some packages listed in `redhat_pkg_removals`, which can trigger the removal of OpenStack dependencies using them! I've changed this variable to an empty list (see the example after this list).
- compare your existing deployment with a Kolla Ansible one to check for differences in endpoints, configuration files, database users, service users, etc. For Heat, Kolla uses the domain heat_user_domain, while your existing deployment may use another one (and this is hardcoded in the Kolla Heat image). Kolla Ansible uses the "service" project, while a couple of deployments I worked with were using "services". This shouldn't matter, except that there was a bug in Kolla which prevented it from setting the roles correctly: https://bugs.launchpad.net/kolla/+bug/1791896 (now fixed in the latest Rocky and Queens images).
- the ml2_conf.ini generated for Neutron uses physical network names like physnet1, physnet2… you may want to override bridge_mappings completely.
- sometimes it could be easier to change your existing deployment to match Kolla Ansible settings, rather than configure Kolla Ansible to match your deployment."
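To illustrate the first and third notes above, the overrides could look roughly like this (placing the variable in globals.yml and the br-ex bridge name are assumptions to verify against your release; /etc/kolla/config is the default node_custom_config directory):

# /etc/kolla/globals.yml: keep the baremetal role from removing packages
redhat_pkg_removals: []

# /etc/kolla/config/neutron/ml2_conf.ini: override the generated bridge mappings
[ovs]
bridge_mappings = physnet1:br-ex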
Many thanks. I hope you'll write some docs at the end. Ignazio
Hello Mark, let me verify that I understood your method.
You have old controllers, haproxy, mariadb and nova computes. You installed three new controllers, but the kolla-ansible inventory contains the old mariadb and rabbit servers. You are deploying a single service at a time on the new controllers, starting with Glance. When you deploy Glance on the new controllers, does it change the Glance endpoint in the old mariadb database? Regards, Ignazio
Sorry for my question: nothing needs to change, because the endpoints refer to the haproxy VIPs. So if the new Glance works fine, you just switch the haproxy backends for Glance. Regards, Ignazio
On Thu, 27 Jun 2019 at 14:46, Ignazio Cassano ignaziocassano@gmail.com wrote:
Sorry for my question: nothing needs to change, because the endpoints refer to the haproxy VIPs. So if the new Glance works fine, you just switch the haproxy backends for Glance. Regards, Ignazio
That's correct - only the haproxy backend needs to be updated.
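As a hypothetical illustration of that switch, the Glance backend in the existing HAProxy configuration would end up pointing at the new controllers (the backend name, server names and addresses are placeholders):

backend glance_api
    balance roundrobin
    # old non-containerised controllers, removed once the new service is validated
    # server oldctrl01 10.0.0.11:9292 check
    # new kolla-ansible controllers
    server newctrl01 10.0.0.21:9292 check
    server newctrl02 10.0.0.22:9292 check
    server newctrl03 10.0.0.23:9292 check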
Hi Mark, let me explain what I am trying. I have a Queens installation based on CentOS and Pacemaker, with some instances and Heat stacks. I would like to have another installation with the same instances, projects, stacks... I'd like to keep the same UUIDs for all objects (users, projects, instances and so on), because it is controlled by a cloud management platform we wrote.
I stopped the controllers on the old Queens installation after backing up the OpenStack database. I installed the new kolla OpenStack Queens on three new controllers with the same addresses as the old installation, VIP included. One of the three controllers is also a KVM node on Queens. I stopped all containers except rabbitmq, keepalived, haproxy and mariadb. I deleted all the OpenStack databases in the mariadb container and imported the old tables, changing the rabbit address to point to the new rabbit cluster. I restarted the containers. After changing the rabbit address on the old KVM nodes, I can see the old virtual machines and open their consoles. I can see all the networks (tenant and provider) of the old installation, but when I try to create a new instance on the new KVM node, it remains in the building state. It seems it cannot acquire an address. Storage is shared between the old and new installations on NetApp NFS, so I can see the cinder volumes. I suppose the DB structure is different between a kolla installation and a manual installation!? What is wrong? Thanks, Ignazio
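For reference, a minimal sketch of the dump/import step described above, assuming the kolla MariaDB container is named mariadb and the root password is the database_password entry in /etc/kolla/passwords.yml (the database list depends on which services are deployed):

# on an old controller
mysqldump --databases keystone glance nova nova_api neutron cinder heat --single-transaction -u root -p > openstack-backup.sql

# on a new kolla controller
docker exec -i mariadb mysql -u root -p"$(awk '/^database_password/ {print $2}' /etc/kolla/passwords.yml)" < openstack-backup.sql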
Hi Ignazio,
it is hard to tell without logs. Please attach (pastebin) the relevant ones (probably the nova logs, maybe neutron and cinder too). Also, did you keep the old configs and try comparing them with the new ones?
Kind regards, Radek
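For reference, kolla bind-mounts service logs onto the host under /var/log/kolla/, so something along these lines (service names chosen as examples) usually shows where an instance build is getting stuck:

tail -f /var/log/kolla/nova/nova-conductor.log /var/log/kolla/nova/nova-scheduler.log
tail -f /var/log/kolla/neutron/neutron-server.log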
Hello Radosław, unfortunately I went back to the old installation, restarting the old controllers, and I installed a new kolla Queens with new addresses, so now I have two different OpenStack installations. I compared the configurations: the difference is that the old one does not use any memcache secret, while the kolla installation uses them... but I do not think those are stored in the DB. I am looking for a method for migrating from non-kolla to kolla, but I did not find any example searching Google. If there isn't any documented method, I'll try again and send the logs.
An alternative could be migrating instances from volume backups, but that is a very slow way, and my cloud management platform would have to re-import all the UUIDs.
Installing kolla Queens works: I got an OpenStack that works fine, but I need a procedure to migrate. Thanks and regards, Ignazio
PS: I saw a video about kolla-migrator shown during an OpenStack summit... but its documentation is very poor. Regards, Ignazio
It sounds like you got quite close to having this working. I'd suggest debugging this instance build failure. One difference with kolla is that we run libvirt inside a container. Have you stopped libvirt from running on the host? Mark
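A quick way to check for that clash (generic commands, not taken from the thread):

# the host-level libvirt must not run alongside the containerised one
systemctl status libvirtd
systemctl disable --now libvirtd
# confirm the kolla libvirt container is the one running
docker ps --filter name=nova_libvirt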
Hi Mark, the kolla environment has a new KVM host located on a controller node. The kolla OpenStack can see the instances on the old KVM nodes and can access them using the VNC console provided by the dashboard, but it cannot run new instances on the new KVM host :-( Regards, Ignazio
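If it helps narrow this down, the "stuck in building, no address" symptom can usually be triaged with the standard OpenStack CLI (the instance UUID is a placeholder):

# are the services and agents on the new KVM host up and reporting?
openstack compute service list
openstack network agent list
# what scheduling or port-binding fault was recorded for the stuck instance?
openstack server show <instance-uuid> -c status -c fault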
On Mon, 1 Jul 2019 at 10:10, Mark Goddard mark@stackhpc.com wrote:
It sounds like you got quite close to having this working. I'd suggest debugging this instance build failure. One difference with kolla is that we run libvirt inside a container. Have you stopped libvirt from running on the host? Mark
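For instance, a quick way to confirm that no host-level libvirt is competing with the containerised one might look like this (a rough sketch; the libvirtd service name and the nova_libvirt container name are the usual CentOS/Kolla defaults and may differ in your environment):

  # Check whether libvirt is still running on the host; with Kolla it should not be,
  # because libvirt runs inside the nova_libvirt container
  systemctl status libvirtd
  # If it is active, stop and disable it so it cannot clash with the container
  systemctl stop libvirtd && systemctl disable libvirtd
  # Confirm the containerised libvirt is up and can list domains
  docker ps --filter name=nova_libvirt
  docker exec nova_libvirt virsh list --all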
On Sun, 30 Jun 2019 at 09:55, Ignazio Cassano ignaziocassano@gmail.com wrote:
Hi Mark, let me explain what I am trying. I have a Queens installation based on CentOS and Pacemaker, with some instances and Heat stacks.
I would like to have another installation with the same instances, projects and stacks. I'd like to keep the same UUID for all objects (users, projects, instances and so on), because it is controlled by a cloud management platform we wrote.
I stopped the controllers on the old Queens installation, backing up the OpenStack database.
I installed the new Kolla OpenStack Queens on three new controllers with the same addresses as the old installation, VIP included. One of the three controllers is also a KVM node on Queens.
I stopped all containers except RabbitMQ, keepalived, HAProxy and MariaDB.
I deleted all the OpenStack databases on the MariaDB container and imported the old tables, changing the RabbitMQ address to point to the new RabbitMQ cluster.
I restarted the containers. After changing the RabbitMQ address on the old KVM nodes, I can see the old virtual machines and I can open a console on them.
I can see all networks (tenant and provider) of the old installation, but when I try to create a new instance on the new KVM node, it remains in building state. It seems it cannot acquire an address.
Storage between the old and new installations is shared on NFS (NetApp), so I can see the Cinder volumes.
I suppose the DB structure is different between a Kolla installation and a manual installation!?
What is wrong? Thanks Ignazio
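As a rough sanity check after this kind of database and RabbitMQ move, one might verify that the containerised services registered against the imported database and are talking to the new RabbitMQ cluster, for example (paths and container names are the usual Kolla defaults, adjust as needed):

  # Services recorded in the imported database should report as up
  openstack compute service list
  openstack network agent list
  # The generated configs should point at the new RabbitMQ cluster
  grep transport_url /etc/kolla/nova-api/nova.conf /etc/kolla/neutron-server/neutron.conf
  # RabbitMQ should show connections from both the old computes and the new containers
  docker exec rabbitmq rabbitmqctl list_connections user peer_host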
On Thu, 27 Jun 2019 at 16:44, Mark Goddard mark@stackhpc.com wrote:
On Thu, 27 Jun 2019 at 14:46, Ignazio Cassano ignaziocassano@gmail.com wrote:
Sorry for my question. There is no need to change anything because the endpoints refer to the HAProxy VIPs.
So if your new Glance works fine, you just change the HAProxy backends for Glance.
Regards Ignazio
That's correct - only the haproxy backend needs to be updated.
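A minimal sketch of that per-service workflow for Glance, assuming a standard kolla-ansible inventory file and the Kolla defaults, could be:

  # Deploy only the Glance containers on the new controllers
  kolla-ansible -i inventory deploy --tags glance
  # The Keystone endpoints keep pointing at the HAProxy VIP, so nothing needs to change there
  openstack endpoint list --service image
  # Once the new glance-api answers correctly, switch the HAProxy backend
  # to the new controllers and re-test through the VIP
  openstack image list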
On Thu, 27 Jun 2019 at 15:21, Ignazio Cassano <ignaziocassano@gmail.com> wrote:
Hello Mark, let me verify that I understood your method.
You have the old controllers, HAProxy, MariaDB and Nova computes. You installed three new controllers, but the kolla-ansible inventory contains the old MariaDB and RabbitMQ servers.
You are deploying one service at a time on the new controllers, starting with Glance.
When you deploy Glance on the new controllers, does it change the Glance endpoint in the old MariaDB database?
Regards Ignazio
PS: I presume the problem is Neutron, because instances on the new KVM nodes remain in building state and do not acquire an address. Probably the Neutron DB imported from the old OpenStack installation has some differences... probably I must check the differences between the old and new Neutron service configuration files. Ignazio
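If the suspicion falls on Neutron, a few quick checks along these lines might help narrow it down (container names are the Kolla defaults and <instance-uuid> is a placeholder):

  # Are the DHCP and openvswitch agents alive and registered on the expected hosts?
  openstack network agent list
  # Does the port of the stuck instance ever get bound, or does it stay DOWN?
  openstack port list --server <instance-uuid>
  # Look for RPC or port binding errors on the new control plane
  docker logs --tail 100 neutron_server
  docker logs --tail 100 neutron_openvswitch_agent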
You should check your cell mapping records inside Nova. They're probably not right if you moved your database and RabbitMQ.
Sorry for top posting, this is from a phone.
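For reference, checking and fixing the cell mappings from inside the Kolla nova_api container might look roughly like this (option names should be verified against your Nova release; the UUIDs, passwords and hosts are placeholders):

  # List the cells in the imported API database; --verbose also shows the
  # transport URL and database connection recorded for each cell
  docker exec nova_api nova-manage cell_v2 list_cells --verbose
  # If cell1 still points at the old RabbitMQ or old database, update it
  docker exec nova_api nova-manage cell_v2 update_cell \
    --cell_uuid <cell1-uuid> \
    --transport-url rabbit://openstack:<password>@<new-rabbit-host>:5672/ \
    --database_connection mysql+pymysql://nova:<password>@<db-vip>:3306/nova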
I checked them and modified them to fit the new installation. Thanks Ignazio
Hi everyone,
I hope you don't mind me reviving this thread, to let you know I wrote an article after we successfully completed the migration of a running OpenStack deployment to Kolla: http://www.stackhpc.com/migrating-to-kolla.html
Don't hesitate to contact me if you have more questions about how this type of migration can be performed.
Pierre
Many thanks Ignazio
participants (5)
- Ignazio Cassano
- Mark Goddard
- Mohammed Naser
- Pierre Riteau
- Radosław Piliszek