<div dir="ltr"><div>Hi all!</div><div><br></div><div>I was able to analyze the attached log files and I hope that the results may help you understand what's going wrong with instance creation.</div><div>You can find <b><i>Log_Tool's unique exported Error blocks</i></b> here: <a href="http://paste.openstack.org/show/795356/" target="_blank">http://paste.openstack.org/show/795356/</a></div><div><br></div><div><b><u>Some statistics and problematic messages:</u></b></div><div><span style="color:rgb(0,0,255)">##### Statistics - Number of Errors/Warnings per Standard OSP log since: 2020-06-30 12:30:00 #####</span><br></div><div><font color="#0000ff">Total_Number_Of_Errors --> 9<br>/home/ashtempl/Ruslanas/controller/neutron/server.log --> 1<br>/home/ashtempl/Ruslanas/compute/stdouts/ovn_controller.log --> 1<br>/home/ashtempl/Ruslanas/compute/nova/nova-compute.log --> 7</font><br><br></div><div><b>nova-compute.log</b><br></div><div><font color="#ff0000"><b>default default] Error launching a defined domain with XML: <domain type='kvm'></b><br></font></div><div><font color="#ff0000">368-2020-06-30 12:30:10.815 7 <b>ERROR</b> nova.compute.manager [req-87bef18f-ad3d-4147-a1b3-196b5b64b688 7bdb8c3bf8004f98aae1b16d938ac09b 69134106b56941698e58c61...<br></font></div><div><font color="#ff0000">70dc50f] Instance <b>failed</b> to spawn: <b>libvirt.libvirtError</b>: internal <b>error</b>: qemu unexpectedly closed the monitor: 2020-06-30T10:30:10.182675Z qemu-kvm: <b>error</b>: failed to set MSR 0...<br>he monitor: 2020-06-30T10:30:10.182675Z <b>qemu-kvm: error: failed to set MSR 0x48e to 0xfff9fffe04006172</b><br>_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' <b>failed</b>.<br> [instance: 128f372c-cb2e-47d9-b1bf-ce17270dc50f] <b>Traceback</b> (most recent call last):<br>375-2020-06-30 12:30:10.815 7<b> ERROR</b> nova.compute.manager [instance: 128f372c-cb2e-47d9-b1bf-ce17270dc50f] File 
"/usr/lib/python3.6/site-packages/nova/vir...</font><br></div><div><br></div><div><b>server.log </b><br></div><div><font color="#ff0000">5821c815-d213-498d-9394-fe25c6849918', 'status': 'failed', <b>'code': 422} returned with failed status</b></font><br></div><div><br></div><div><b>ovn_controller.log</b><br></div><div><font color="#ff0000">272-2020-06-30T12:30:10.126079625+02:00 stderr F 2020-06-30T10:30:10Z|00247|patch|WARN|<b>Bridge 'br-ex' not found for network 'datacentre'</b></font><br></div><div><br></div><div>Thanks!</div><div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Jun 30, 2020 at 2:13 PM Ruslanas Gžibovskis <<a href="mailto:ruslanas@lpic.lt">ruslanas@lpic.lt</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Attaching the logs here; let's see if it works.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 30 Jun 2020 at 12:55, Ruslanas Gžibovskis <<a href="mailto:ruslanas@lpic.lt" target="_blank">ruslanas@lpic.lt</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi all,<div><br></div><div>I am back; I had some issues with MTU.</div><div>Now it looks good, at least the deployment part.</div><div><br></div><div>So I have reinstalled what I had before, and it still fails at the same point as in my first message.</div><div><br></div><div>I have tried to use LogTool, but I am not sure how to use it. I launched it, but it always fails; see [0] for the detailed output:</div><div> File "./PyTool.py", line 596, in <module><br> random_node=random.choice(overcloud_nodes)<br></div><div><br></div><div>I do not get how to make it work; should it get the node list from stackrc? 
As I see in the code:</div><div> overcloud_nodes = []<br> all_nodes = exec_command_line_command('source ' + source_rc_file_path + 'stackrc;openstack server list -f json')[<br></div><div><br></div><div>[0] <a href="http://paste.openstack.org/show/795345/" target="_blank">http://paste.openstack.org/show/795345/</a></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, 24 Jun 2020 at 20:02, Arkady Shtempler <<a href="mailto:ashtempl@redhat.com" target="_blank">ashtempl@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Ruslanas!</div><div><br></div><div>Is it possible to get all the logs under /var/log/containers somehow?</div><div><br></div><div>Thanks!<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jun 24, 2020 at 2:18 AM Ruslanas Gžibovskis <<a href="mailto:ruslanas@lpic.lt" target="_blank">ruslanas@lpic.lt</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>Hi Alfredo,</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Are the compute nodes bare metal or virtualized? I've seen similar bug reports when using nested virtualization on other OSes.</div></blockquote></div></div></blockquote><div>Bare metal, a Dell R630 to be VERY precise. 
</div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div></div><div><br></div></div></blockquote><div>When using podman, the recommended way to restart containers is using systemd:<br></div><div><br></div><div><a href="https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/tips_tricks.html" target="_blank">https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/tips_tricks.html</a></div></div></div></blockquote><div> </div><div>Thank you, I will try that. I also modified a file, and it looked like the podman container was relaunched once its config changed. Either way, if I understand the config correctly, the default value for user and group is root when the lines are commented out:</div><div>#user = "root"<br>#group = "root"<br></div><div><br></div><div>Also, in some logs I saw that it detected that it is not an AMD CPU :) and it really is not an AMD CPU.</div><div><br></div><div><br></div><div>Just in case it might be important, here is how my node info looks:<br>ComputeS01Parameters:<br>&nbsp;&nbsp;NovaReservedHostMemory: 16384<br>&nbsp;&nbsp;KernelArgs: "crashkernel=no rhgb"<br>ComputeS01ExtraConfig:<br>&nbsp;&nbsp;nova::cpu_allocation_ratio: 4.0<br>&nbsp;&nbsp;nova::compute::libvirt::rx_queue_size: 1024<br>&nbsp;&nbsp;nova::compute::libvirt::tx_queue_size: 1024<br>&nbsp;&nbsp;nova::compute::resume_guests_state_on_host_boot: true<br></div></div></div>
_______________________________________________<br>
users mailing list<br>
<a href="mailto:users@lists.rdoproject.org" target="_blank">users@lists.rdoproject.org</a><br>
<a href="http://lists.rdoproject.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.rdoproject.org/mailman/listinfo/users</a><br>
<br>
To unsubscribe: <a href="mailto:users-unsubscribe@lists.rdoproject.org" target="_blank">users-unsubscribe@lists.rdoproject.org</a><br>
</blockquote></div>
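A note on the LogTool traceback above: `random.choice` raises `IndexError` on an empty sequence, so the crash at `random_node=random.choice(overcloud_nodes)` is consistent with `openstack server list` returning no nodes, for example when stackrc is not sourced or points at the wrong cloud. A minimal sketch of that logic (a hypothetical helper for illustration, not LogTool's actual code):

```python
import json
import random

def pick_random_node(server_list_json):
    """Mimic LogTool's node selection: parse the JSON emitted by
    `openstack server list -f json` and pick a random overcloud node."""
    overcloud_nodes = json.loads(server_list_json)
    if not overcloud_nodes:
        # random.choice([]) raises IndexError -- the same failure mode
        # as in the pasted traceback, just with a clearer message.
        raise RuntimeError("no overcloud nodes found; is stackrc sourced?")
    return random.choice(overcloud_nodes)

print(pick_random_node('[{"Name": "compute-0"}]'))
```

So a quick sanity check before running LogTool would be to source stackrc and confirm that `openstack server list -f json` actually returns a non-empty list.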
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div>Ruslanas Gžibovskis<br>+370 6030 7030<br></div></div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div>Ruslanas Gžibovskis<br>+370 6030 7030<br></div></div></div>
</blockquote></div>
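One more note on the qemu-kvm error analyzed at the top of this thread: MSR 0x48e is one of the VMX capability MSRs, and the "failed to set MSR" assertion is typically KVM rejecting an MSR write it does not handle for the requested CPU model. A commonly cited workaround (an assumption to verify against your exact kernel/qemu combination, not a confirmed fix for this case) is to let KVM ignore unhandled MSR accesses on the compute node:

```
# /etc/modprobe.d/kvm-ignore-msrs.conf (hypothetical file name)
# Reload the kvm module (or reboot the compute node) after adding this.
options kvm ignore_msrs=1
```

Alternatively, selecting a different CPU model/mode for Nova guests may avoid the offending MSR write altogether.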