<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:#0563C1;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:#954F72;
text-decoration:underline;}
p.msonormal0, li.msonormal0, div.msonormal0
{mso-style-name:msonormal;
mso-margin-top-alt:auto;
margin-right:0in;
mso-margin-bottom-alt:auto;
margin-left:0in;
font-size:12.0pt;
font-family:"Times New Roman",serif;}
span.EmailStyle18
{mso-style-type:personal-compose;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;
font-family:"Calibri",sans-serif;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="#0563C1" vlink="#954F72">
<div class="WordSection1">
<p class="MsoNormal">I created a Heat stack and installed OpenStack Train to test the CentOS 7 to CentOS 8 upgrade, following the document here:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><a href="https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8">https://docs.openstack.org/kolla-ansible/train/user/centos8.html#migrating-from-centos-7-to-centos-8</a><o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">I used the instructions here to successfully remove and replace control0 with a CentOS 8 box:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><a href="https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers">https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html#removing-existing-controllers</a><o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
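<p class="MsoNormal">In case it matters, the sequence I used for the replacement was roughly the following (a sketch from memory of the linked doc; the host and inventory names are from my test environment and may differ in yours):<o:p></o:p></p>

```shell
# Sketch of the controller-replacement procedure (names are examples
# from my environment, not a verbatim copy of the doc).

# 1. Stop all services on the controller being replaced
kolla-ansible -i multinode stop --yes-i-really-really-mean-it --limit control0

# 2. Remove the old node from the RabbitMQ cluster, run inside the
#    rabbitmq container on a surviving controller
rabbitmqctl forget_cluster_node rabbit@chrnc-void-testupgrade-control-0

# 3. Swap the host in the inventory for the new CentOS 8 box
#    (control0-replace), then bootstrap and deploy it
kolla-ansible -i multinode bootstrap-servers --limit control0-replace
kolla-ansible -i multinode deploy --limit control0-replace
```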
<p class="MsoNormal">After this my RMQ admin page shows all 3 nodes up, including the new control0. The name of the cluster is rabbit@chrnc-void-testupgrade-control-0.dev.chtrse.com<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">(rabbitmq)[root@chrnc-void-testupgrade-control-2 /]# rabbitmqctl cluster_status<o:p></o:p></p>
<p class="MsoNormal">Cluster status of node rabbit@chrnc-void-testupgrade-control-2 ...<o:p></o:p></p>
<p class="MsoNormal">[{nodes,[{disc,['rabbit@chrnc-void-testupgrade-control-0',<o:p></o:p></p>
<p class="MsoNormal"> 'rabbit@chrnc-void-testupgrade-control-0-replace',<o:p></o:p></p>
<p class="MsoNormal"> 'rabbit@chrnc-void-testupgrade-control-1',<o:p></o:p></p>
<p class="MsoNormal"> 'rabbit@chrnc-void-testupgrade-control-2']}]},<o:p></o:p></p>
<p class="MsoNormal">{running_nodes,['rabbit@chrnc-void-testupgrade-control-0-replace',<o:p></o:p></p>
<p class="MsoNormal"> 'rabbit@chrnc-void-testupgrade-control-1',<o:p></o:p></p>
<p class="MsoNormal"> 'rabbit@chrnc-void-testupgrade-control-2']},<o:p></o:p></p>
<p class="MsoNormal">{cluster_name,<<"rabbit@chrnc-void-testupgrade-control-0.dev.chtrse.com">>},<o:p></o:p></p>
<p class="MsoNormal">{partitions,[]},<o:p></o:p></p>
<p class="MsoNormal">{alarms,[{'rabbit@chrnc-void-testupgrade-control-0-replace',[]},<o:p></o:p></p>
<p class="MsoNormal"> {'rabbit@chrnc-void-testupgrade-control-1',[]},<o:p></o:p></p>
<p class="MsoNormal"> {'rabbit@chrnc-void-testupgrade-control-2',[]}]}]<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">After that I created a new VM to verify that the cluster was still working, and then performed the same procedure on control1. When I shut down services on control1, the ansible playbook finished successfully:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">kolla-ansible -i ../multinode stop --yes-i-really-really-mean-it --limit control1<o:p></o:p></p>
<p class="MsoNormal">…<o:p></o:p></p>
<p class="MsoNormal">control1 : ok=45 changed=22 unreachable=0 failed=0 skipped=105 rescued=0 ignored=0<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">After this my RMQ admin page stops responding. When I check RMQ on the new control0 and the existing control2, the containers are still up but the rabbit app is not running:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">(rabbitmq)[root@chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status<o:p></o:p></p>
<p class="MsoNormal">Error: this command requires the 'rabbit' app to be running on the target node. Start it with 'rabbitmqctl start_app'.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">If I start the app on control0 and control2, the admin page starts responding again and the cluster status looks normal:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">(rabbitmq)[root@chrnc-void-testupgrade-control-0-replace /]# rabbitmqctl cluster_status<o:p></o:p></p>
<p class="MsoNormal">Cluster status of node rabbit@chrnc-void-testupgrade-control-0-replace ...<o:p></o:p></p>
<p class="MsoNormal">[{nodes,[{disc,['rabbit@chrnc-void-testupgrade-control-0',<o:p></o:p></p>
<p class="MsoNormal"> 'rabbit@chrnc-void-testupgrade-control-0-replace',<o:p></o:p></p>
<p class="MsoNormal"> 'rabbit@chrnc-void-testupgrade-control-1',<o:p></o:p></p>
<p class="MsoNormal"> 'rabbit@chrnc-void-testupgrade-control-2']}]},<o:p></o:p></p>
<p class="MsoNormal">{running_nodes,['rabbit@chrnc-void-testupgrade-control-2',<o:p></o:p></p>
<p class="MsoNormal"> 'rabbit@chrnc-void-testupgrade-control-0-replace']},<o:p></o:p></p>
<p class="MsoNormal">{cluster_name,<<"rabbit@chrnc-void-testupgrade-control-0.dev.chtrse.com">>},<o:p></o:p></p>
<p class="MsoNormal">{partitions,[]},<o:p></o:p></p>
<p class="MsoNormal">{alarms,[{'rabbit@chrnc-void-testupgrade-control-2',[]},<o:p></o:p></p>
<p class="MsoNormal"> {'rabbit@chrnc-void-testupgrade-control-0-replace',[]}]}]<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
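<p class="MsoNormal">For completeness, by "start the app" I mean running start_app inside the kolla rabbitmq container on each affected controller (container name "rabbitmq" is the kolla-ansible default; adjust if yours differs):<o:p></o:p></p>

```shell
# Run on each controller whose rabbit app is stopped
docker exec rabbitmq rabbitmqctl start_app
# Then verify the cluster came back
docker exec rabbitmq rabbitmqctl cluster_status
```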
<p class="MsoNormal">But my hypervisors are down (ohll is a local alias for a long-form openstack hypervisor list):<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">(openstack) [root@chrnc-void-testupgrade-build kolla-ansible]# ohll<o:p></o:p></p>
<p class="MsoNormal">+----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+<o:p></o:p></p>
<p class="MsoNormal">| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | vCPUs Used | vCPUs | Memory MB Used | Memory MB |<o:p></o:p></p>
<p class="MsoNormal">+----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+<o:p></o:p></p>
<p class="MsoNormal">| 3 | chrnc-void-testupgrade-compute-2.dev.chtrse.com | QEMU | 172.16.2.106 | down | 5 | 8 | 2560 | 30719 |<o:p></o:p></p>
<p class="MsoNormal">| 6 | chrnc-void-testupgrade-compute-0.dev.chtrse.com | QEMU | 172.16.2.31 | down | 5 | 8 | 2560 | 30719 |<o:p></o:p></p>
<p class="MsoNormal">| 9 | chrnc-void-testupgrade-compute-1.dev.chtrse.com | QEMU | 172.16.0.30 | down | 5 | 8 | 2560 | 30719 |<o:p></o:p></p>
<p class="MsoNormal">+----+-------------------------------------------------+-----------------+--------------+-------+------------+-------+----------------+-----------+<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">When I look at the nova-compute.log on a compute node, I see RMQ failures every 10 seconds:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">172.16.2.31 compute0<o:p></o:p></p>
<p class="MsoNormal">2021-03-30 03:07:54.893 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in
1 seconds.: timeout: timed out<o:p></o:p></p>
<p class="MsoNormal">2021-03-30 03:07:55.905 7 INFO oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] Reconnected to AMQP server on 172.16.1.132:5672 via [amqp] client with port 56422.<o:p></o:p></p>
<p class="MsoNormal">2021-03-30 03:08:05.915 7 ERROR oslo.messaging._drivers.impl_rabbit [req-70d69b45-c3a7-4fbc-b709-4d7d757e09e7 - - - - -] [aeb317a8-873f-49be-a2a0-c6d6e0891a3e] AMQP server on 172.16.1.132:5672 is unreachable: timed out. Trying again in
1 seconds.: timeout: timed out<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">In the RMQ logs I see this every 10 seconds:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">172.16.1.132 control2<o:p></o:p></p>
<p class="MsoNormal">[root@chrnc-void-testupgrade-control-2 ~]# tail -f /var/log/kolla/rabbitmq/rabbit\@chrnc-void-testupgrade-control-2.log |grep 172.16.2.31<o:p></o:p></p>
<p class="MsoNormal">2021-03-30 03:07:54.895 [warning] <0.13247.35> closing AMQP connection <0.13247.35> (172.16.2.31:56420 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'):<o:p></o:p></p>
<p class="MsoNormal">client unexpectedly closed TCP connection<o:p></o:p></p>
<p class="MsoNormal">2021-03-30 03:07:55.901 [info] <0.15288.35> accepting AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672)<o:p></o:p></p>
<p class="MsoNormal">2021-03-30 03:07:55.903 [info] <0.15288.35> Connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672) has a client-provided name: nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e<o:p></o:p></p>
<p class="MsoNormal">2021-03-30 03:07:55.904 [info] <0.15288.35> connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e): user 'openstack' authenticated and granted access to vhost '/'<o:p></o:p></p>
<p class="MsoNormal">2021-03-30 03:08:05.916 [warning] <0.15288.35> closing AMQP connection <0.15288.35> (172.16.2.31:56422 -> 172.16.1.132:5672 - nova-compute:7:aeb317a8-873f-49be-a2a0-c6d6e0891a3e, vhost: '/', user: 'openstack'):<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Why does RMQ fail when I shut down the 2<sup>nd</sup> controller after successfully replacing the first one?<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
</body>
</html>