Fwd: [Kolla] [Kolla-Ansible] Existing Workloads getting Impacted on Upgrade
Hi Team,

We have deployed the OpenStack Victoria release using a multinode Kolla Ansible deployment. There are two nodes: one for compute, the other for controller + network.

Following the link below, we also upgraded the setup from Victoria to Wallaby:
https://docs.openstack.org/kolla-ansible/latest/user/operating-kolla.html

During the upgrade we found that a VM which had been spawned before the upgrade went to SHUTOFF in the middle of the upgrade process, and after the upgrade completed we had to start the VM again manually.

Ideally we were expecting that existing workloads would remain unimpacted by this process.

Can someone please confirm whether our observation is expected behaviour, or suggest any steps that need to be taken to avoid this?
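For reference, the steps we followed from that guide looked roughly like the sketch below (the package version pin, inventory file name and globals.yml path reflect a standard Kolla Ansible install and may differ in other setups):

    # Sketch of the Victoria -> Wallaby upgrade flow from the linked guide.
    # Assumptions: a "multinode" inventory in the current directory and a
    # pip-installed kolla-ansible; adjust paths and pins for your own setup.

    # 1. Upgrade kolla-ansible itself to the Wallaby series (12.x).
    pip install --upgrade 'kolla-ansible>=12,<13'

    # 2. If openstack_release is pinned in globals.yml, point it at wallaby.
    sudo sed -i 's/^openstack_release:.*/openstack_release: "wallaby"/' /etc/kolla/globals.yml

    # 3. Pre-pull the new images, then run the rolling upgrade.
    kolla-ansible -i ./multinode pull
    kolla-ansible -i ./multinode upgrade

Regards
Anirudh Gupta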
For the record, in the other mailing thread (on stx) I suggested this should not be happening and that it was likely some resource pressure.
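One quick way to check for that kind of pressure on the compute node is to look for OOM-killer activity and memory headroom around the time the instance stopped; a rough diagnostic sketch follows (the log path assumes a default Kolla Ansible layout):

    # Did the kernel OOM-kill anything (e.g. a qemu-kvm process)?
    dmesg -T | grep -iE 'out of memory|oom-kill' | tail -n 20

    # Current memory/swap headroom and per-container usage.
    free -h
    docker stats --no-stream

    # What does nova-compute say happened to the instance?
    grep -iE 'lifecycle|shutdown|stopped' /var/log/kolla/nova/nova-compute.log | tail -n 50

-yoctozepto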
Hi Radosław,

Thanks for your reply. Is there any configuration change or workaround to avoid this?

Regards
Anirudh Gupta
Hi,

I am very interested in your VPNaaS and Wallaby testing, because for me VPNaaS worked without problems on Victoria (I use CentOS and kolla-ansible to install OpenStack), but I have not been able to make VPNaaS work on Wallaby. I don't understand why, and it is hard to get help. If it works for you, please let me know.

FYI, using VPNaaS from Horizon is really easy and very well done.
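For reference, enabling VPNaaS with kolla-ansible is essentially just the standard flag; a sketch follows (the globals.yml path, inventory name and tags reflect a default install and may differ):

    # Enable the Neutron VPNaaS plugin in kolla-ansible, then reconfigure.
    # Assumes the default /etc/kolla/globals.yml and a ./multinode inventory.
    grep -q '^enable_neutron_vpnaas' /etc/kolla/globals.yml \
      || echo 'enable_neutron_vpnaas: "yes"' | sudo tee -a /etc/kolla/globals.yml

    # Re-render configs and (re)start the affected services.
    kolla-ansible -i ./multinode reconfigure --tags neutron,horizon

Franck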
Hello,

I got the same behaviour during a deploy run that was adding new components: I found all VMs stopped. I do not know if the following is the cause, but I have Masakari with hacluster. Before deploying again I set the compute nodes into maintenance mode under Masakari, and after that deploy the instances stayed up. Keep in mind that I do not know how Masakari works in detail.
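In case it helps, putting a compute host into maintenance under Masakari looks roughly like the sketch below with the openstack CLI plugin; I am not certain of the exact option spelling in every client release, so please check the command help first:

    # Rough sketch: mark a compute host as under maintenance in Masakari
    # before a kolla-ansible deploy/upgrade, then clear the flag afterwards.
    # Requires the python-masakariclient OSC plugin; verify the exact
    # arguments with: openstack segment host update --help

    # Find the failover segment and the host entry for the compute node.
    openstack segment list
    openstack segment host list <segment-uuid>

    # Before the maintenance work on that compute node:
    openstack segment host update --on_maintenance True <segment-uuid> <host-uuid>

    # ... run kolla-ansible deploy / upgrade ...

    # Once the node is healthy again:
    openstack segment host update --on_maintenance False <segment-uuid> <host-uuid>

Ignazio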
participants (4)
- Anirudh Gupta
- Franck VEDEL
- Ignazio Cassano
- Radosław Piliszek