<div dir="ltr">Update:<div>In the hypervisor list, the compute node state is showing as down.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 15, 2023 at 11:11 PM Swogat Pradhan <<a href="mailto:swogatpradhan22@gmail.com">swogatpradhan22@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Brendan,<div>I have now deployed another site where I have used a two-Linux-bond network template for both the 3 compute nodes and the 3 Ceph nodes.</div><div>The bonding option is set to mode=802.3ad (lacp=active).</div><div>I used a CirrOS image to launch an instance, but the launch timed out, so I waited for the volume to be created.</div><div>Once the volume was created I tried launching the instance from the volume, and the instance is still stuck in the spawning state.</div><div><br></div><div>Here is the nova-compute log:</div><div><br></div>2023-03-15 17:35:47.739 185437 INFO oslo.privsep.daemon [-] privsep daemon starting<br>2023-03-15 17:35:47.744 185437 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0<br>2023-03-15 17:35:47.749 185437 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none<br>2023-03-15 17:35:47.749 185437 INFO oslo.privsep.daemon [-] privsep daemon running as pid 185437<br>2023-03-15 17:35:47.974 8 WARNING os_brick.initiator.connectors.nvmeof [req-dbb11a9b-317e-4957-b141-f9e0bdf6a266 b240e3e89d99489284cd731e75f2a5db 4160ce999a31485fa643aed0936dfef0 - default default] Process execution error in _get_host_uuid: Unexpected error while running command.<br>Command: blkid overlay -s UUID -o value<br>Exit code: 2<br>Stdout: ''<br>Stderr: '': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.<br>2023-03-15 17:35:51.616 8 INFO nova.virt.libvirt.driver [req-dbb11a9b-317e-4957-b141-f9e0bdf6a266 b240e3e89d99489284cd731e75f2a5db 4160ce999a31485fa643aed0936dfef0 - default default] [instance: 450b749c-a10a-4308-80a9-3b8020fee758] Creating image<div><br></div><div>It is stuck at "Creating image". Do I need to run the template mentioned here?: <a href="https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/post_deployment/pre_cache_images.html" target="_blank">https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/post_deployment/pre_cache_images.html</a></div><div><br></div><div>The volume is already created, and I do not understand why the instance is stuck in the spawning state.</div><div><br></div><div>With regards,</div><div>Swogat Pradhan<br><div> </div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Mar 5, 2023 at 4:02 PM Brendan Shephard <<a href="mailto:bshephar@redhat.com" target="_blank">bshephar@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>Does your environment use different network interfaces for each of the networks? Or does it have a bond with everything on it?<div><br></div><div>One issue I have seen before is that when launching instances, there is a lot of network traffic between nodes as the hypervisor needs to download the image from Glance. Along with various other services sending normal network traffic, it can be enough to cause issues if everything is running over a single 1GbE interface.</div><div><br></div><div>In fact, I have seen the same situation when using a single active/backup bond on 1GbE NICs. It’s worth checking the network traffic while you try to spawn the instance to see if you’re dropping packets. In the situation I described, there were dropped packets which resulted in a loss of communication between nova_compute and RMQ, so the node appeared offline. You should also confirm that nova_compute is being disconnected by tailing the nova_compute logs on the hypervisor while spawning the instance.</div>
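<div><br></div><div>For example, something like this on the compute node while the instance spawns (a rough sketch; bond0 and the log path are just the usual defaults, adjust for your environment):</div><div><br></div><div>ip -s link show bond0          # watch the RX/TX "dropped" counters<br>cat /proc/net/bonding/bond0    # 802.3ad state and per-slave link status<br>tail -f /var/log/containers/nova/nova-compute.log | grep -iE 'amqp|rabbit|error'</div>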
<div><br></div><div>In my case, changing from active/backup to LACP helped. So, based on that experience, from my perspective, it certainly sounds like some kind of network issue.</div>
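<div><br></div><div>From the central site you can also watch whether the compute service flaps while the instance spawns, for example (the --service filter may depend on your client version):</div><div><br></div><div>watch -n 5 "openstack compute service list --service nova-compute"</div><div><br></div><div>Regards,</div><div><br><div>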
<div>Brendan Shephard</div><div>Senior Software Engineer</div><div>Red Hat Australia</div><div><br></div><br>
</div>
<div><br><blockquote type="cite"><div>On 5 Mar 2023, at 6:47 am, Eugen Block <<a href="mailto:eblock@nde.ag" target="_blank">eblock@nde.ag</a>> wrote:</div><br><div><div>Hi,<br><br>I tried to help someone with a similar issue some time ago in this thread:<br><a href="https://serverfault.com/questions/1116771/openstack-oslo-messaging-exception-in-nova-conductor" target="_blank">https://serverfault.com/questions/1116771/openstack-oslo-messaging-exception-in-nova-conductor</a><br><br>But apparently a neutron reinstallation fixed it for that user; I'm not sure whether that applies here. Is it possible that your nova and neutron versions differ between the central and edge sites? Have you restarted the nova and neutron services on the compute nodes after installation? Do you have debug logs of nova-conductor and maybe nova-compute? Maybe they can help narrow down the issue.<br>If there isn't any additional information in the debug logs, I would probably start "tearing down" rabbitmq. I haven't had to do that on a production system yet, so be careful. I can think of two routes (see the sketch below):<br><br>- Either remove queues, exchanges, etc. while rabbit is running; this will most likely impact client IO depending on your load. Check out the rabbitmqctl commands.<br>- Or stop the rabbitmq cluster, remove the mnesia tables from all nodes, and restart rabbitmq so the exchanges, queues, etc. are rebuilt.<br><br>I can imagine that the failed reply "survives" while being replicated across the rabbit nodes. But I don't really know the rabbit internals too well, so maybe someone else can chime in here and give better advice.<br><br>
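A very rough sketch of both routes (untested; the reply queue name is copied from your nova-conductor log, USER:PASS is a placeholder, and the pcs resource name and mnesia path are the TripleO defaults, so double-check everything first):<br><br># Route 1: find the stale reply queue while rabbit is running<br>rabbitmqctl list_queues -p / name messages consumers | grep ^reply_<br># and delete it over the management HTTP API (port 15672; %2F is the encoded "/" vhost)<br>curl -u USER:PASS -X DELETE http://localhost:15672/api/queues/%2F/reply_276049ec36a84486a8a406911d9802f4<br><br># Route 2: stop the pacemaker-managed cluster, wipe mnesia, let it rebuild<br>pcs resource disable rabbitmq-bundle<br>rm -rf /var/lib/rabbitmq/mnesia/   # on every controller; verify the path in your containers<br>pcs resource enable rabbitmq-bundle<br><br>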
Regards,<br>Eugen<br><br>Zitat von Swogat Pradhan <<a href="mailto:swogatpradhan22@gmail.com" target="_blank">swogatpradhan22@gmail.com</a>>:<br><br><blockquote type="cite">Hi,<br>Can someone please help me out with this issue?<br><br>With regards,<br>Swogat Pradhan<br><br>On Thu, Mar 2, 2023 at 1:24 PM Swogat Pradhan <<a href="mailto:swogatpradhan22@gmail.com" target="_blank">swogatpradhan22@gmail.com</a>><br>wrote:<br><br><blockquote type="cite">Hi,<br>I don't see any major packet loss.<br>It seems the problem may be somewhere in RabbitMQ, but not due to packet<br>loss.<br><br>With regards,<br>Swogat Pradhan<br><br>On Wed, Mar 1, 2023 at 3:34 PM Swogat Pradhan <<a href="mailto:swogatpradhan22@gmail.com" target="_blank">swogatpradhan22@gmail.com</a>><br>wrote:<br><br><blockquote type="cite">Hi,<br>Yes, the MTU is the same as the default, '1500'.<br>Generally I haven't seen any packet loss, but I never checked while<br>launching an instance.<br>I will check that and come back.<br>But every time I launch an instance, the instance gets stuck in the spawning<br>state and the hypervisor goes down, so I am not sure if packet loss<br>causes this.<br><br>With regards,<br>Swogat Pradhan<br><br>On Wed, Mar 1, 2023 at 3:30 PM Eugen Block <<a href="mailto:eblock@nde.ag" target="_blank">eblock@nde.ag</a>> wrote:<br><br><blockquote type="cite">One more thing that comes to mind is the MTU size. Is it identical between<br>the central and edge sites? Do you see packet loss through the tunnel?<br><br>Zitat von Swogat Pradhan <<a href="mailto:swogatpradhan22@gmail.com" target="_blank">swogatpradhan22@gmail.com</a>>:<br><br>> Hi Eugen,<br>> Please add my email either on 'to' or 'cc', as I am not<br>> getting emails from you.<br>> Coming to the issue:<br>><br>> [root@overcloud-controller-no-ceph-3 /]# rabbitmqctl list_policies -p /<br>> Listing policies for vhost "/" ...<br>> vhost name pattern apply-to definition priority<br>> / ha-all ^(?!amq\.).* queues<br>> {"ha-mode":"exactly","ha-params":2,"ha-promote-on-shutdown":"always"} 0<br>><br>> I have the edge site compute nodes up; a node only goes down when I am<br>> trying to launch an instance, and the instance reaches the spawning<br>> state and then gets stuck.<br>><br>> I have a tunnel set up between the central and the edge sites.<br>><br>> With regards,<br>> Swogat Pradhan<br>><br>> On Tue, Feb 28, 2023 at 9:11 PM Swogat Pradhan <<br><a href="mailto:swogatpradhan22@gmail.com" target="_blank">swogatpradhan22@gmail.com</a>><br>> wrote:<br>><br>>> Hi Eugen,<br>>> For some reason I am not getting your emails directly; I am checking<br>>> the email digest and am able to find your reply there.<br>>> Here is the log for download: <a href="https://we.tl/t-L8FEkGZFSq" target="_blank">https://we.tl/t-L8FEkGZFSq</a><br>>> Yes, these logs are from the time when the issue occurred.<br>>><br>>> *Note: I am able to create VMs and perform other activities in the<br>>> central site; I am only facing this issue in the edge site.*<br>>><br>>> With regards,<br>>> Swogat Pradhan<br>>><br>>> On Mon, Feb 27, 2023 at 5:12 PM Swogat Pradhan <<br><a href="mailto:swogatpradhan22@gmail.com" target="_blank">swogatpradhan22@gmail.com</a>><br>>> wrote:<br>>><br>>>> Hi Eugen,<br>>>> Thanks for your response.<br>>>> I actually have a 4-controller setup, so here are the details:<br>>>><br>>>> *PCS Status:*<br>>>> * Container bundle set: rabbitmq-bundle [<br>>>> <a href="http://172.25.201.68:8787/tripleomaster/openstack-rabbitmq:pcmklatest" target="_blank">172.25.201.68:8787/tripleomaster/openstack-rabbitmq:pcmklatest</a>]:<br>>>> * rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Started<br>>>> overcloud-controller-no-ceph-3<br>>>> * rabbitmq-bundle-1 (ocf::heartbeat:rabbitmq-cluster): Started<br>>>> overcloud-controller-2<br>>>> * rabbitmq-bundle-2 (ocf::heartbeat:rabbitmq-cluster): Started<br>>>> overcloud-controller-1<br>>>> * rabbitmq-bundle-3 (ocf::heartbeat:rabbitmq-cluster): Started<br>>>> overcloud-controller-0<br>>>><br>>>> I have tried restarting the bundle multiple times, but the issue is<br>>>> still present.<br>>>><br>>>> *Cluster status:*<br>>>> [root@overcloud-controller-0 /]# rabbitmqctl cluster_status<br>>>> Cluster status of node<br>>>> <a href="mailto:rabbit@overcloud-controller-0.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-0.internalapi.bdxworld.com</a> ...<br>>>> Basics<br>>>><br>>>> Cluster name: <a href="mailto:rabbit@overcloud-controller-no-ceph-3.bdxworld.com" target="_blank">rabbit@overcloud-controller-no-ceph-3.bdxworld.com</a><br>>>><br>>>> Disk Nodes<br>>>><br>>>> <a href="mailto:rabbit@overcloud-controller-0.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-0.internalapi.bdxworld.com</a><br>>>> <a href="mailto:rabbit@overcloud-controller-1.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-1.internalapi.bdxworld.com</a><br>>>> <a 
href="mailto:rabbit@overcloud-controller-2.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-2.internalapi.bdxworld.com</a><br>>>> <a href="mailto:rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com</a><br>>>><br>>>> Running Nodes<br>>>><br>>>> <a href="mailto:rabbit@overcloud-controller-0.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-0.internalapi.bdxworld.com</a><br>>>> <a href="mailto:rabbit@overcloud-controller-1.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-1.internalapi.bdxworld.com</a><br>>>> <a href="mailto:rabbit@overcloud-controller-2.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-2.internalapi.bdxworld.com</a><br>>>> <a href="mailto:rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com</a><br>>>><br>>>> Versions<br>>>><br>>>> <a href="mailto:rabbit@overcloud-controller-0.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-0.internalapi.bdxworld.com</a>: RabbitMQ<br>3.8.3<br>>>> on Erlang 22.3.4.1<br>>>> <a href="mailto:rabbit@overcloud-controller-1.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-1.internalapi.bdxworld.com</a>: RabbitMQ<br>3.8.3<br>>>> on Erlang 22.3.4.1<br>>>> <a href="mailto:rabbit@overcloud-controller-2.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-2.internalapi.bdxworld.com</a>: RabbitMQ<br>3.8.3<br>>>> on Erlang 22.3.4.1<br>>>> <a href="mailto:rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com</a>:<br>RabbitMQ<br>>>> 3.8.3 on Erlang 22.3.4.1<br>>>><br>>>> Alarms<br>>>><br>>>> (none)<br>>>><br>>>> Network Partitions<br>>>><br>>>> (none)<br>>>><br>>>> Listeners<br>>>><br>>>> Node: <a href="mailto:rabbit@overcloud-controller-0.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-0.internalapi.bdxworld.com</a>,<br>interface:<br>>>> [::], port: 25672, protocol: clustering, purpose: inter-node and CLI<br>tool<br>>>> communication<br>>>> Node: <a href="mailto:rabbit@overcloud-controller-0.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-0.internalapi.bdxworld.com</a>,<br>interface:<br>>>> 172.25.201.212, port: 5672, protocol: amqp, purpose: AMQP 0-9-1<br>>>> and AMQP 1.0<br>>>> Node: <a href="mailto:rabbit@overcloud-controller-0.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-0.internalapi.bdxworld.com</a>,<br>interface:<br>>>> [::], port: 15672, protocol: http, purpose: HTTP API<br>>>> Node: <a href="mailto:rabbit@overcloud-controller-1.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-1.internalapi.bdxworld.com</a>,<br>interface:<br>>>> [::], port: 25672, protocol: clustering, purpose: inter-node and CLI<br>tool<br>>>> communication<br>>>> Node: <a href="mailto:rabbit@overcloud-controller-1.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-1.internalapi.bdxworld.com</a>,<br>interface:<br>>>> 172.25.201.205, port: 5672, protocol: amqp, purpose: AMQP 0-9-1<br>>>> and AMQP 1.0<br>>>> Node: <a href="mailto:rabbit@overcloud-controller-1.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-1.internalapi.bdxworld.com</a>,<br>interface:<br>>>> [::], port: 15672, protocol: http, purpose: HTTP API<br>>>> Node: <a 
href="mailto:rabbit@overcloud-controller-2.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-2.internalapi.bdxworld.com</a>,<br>>>> interface: [::], port: 25672, protocol: clustering, purpose: inter-node<br>>>> and CLI tool communication<br>>>> Node: <a href="mailto:rabbit@overcloud-controller-2.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-2.internalapi.bdxworld.com</a>,<br>>>> interface: 172.25.201.201, port: 5672, protocol: amqp, purpose: AMQP 0-9-1<br>>>> and AMQP 1.0<br>>>> Node: <a href="mailto:rabbit@overcloud-controller-2.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-2.internalapi.bdxworld.com</a>,<br>>>> interface: [::], port: 15672, protocol: http, purpose: HTTP API<br>>>> Node: <a href="mailto:rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com</a>,<br>>>> interface: [::], port: 25672, protocol: clustering, purpose: inter-node<br>>>> and CLI tool communication<br>>>> Node: <a href="mailto:rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com</a>,<br>>>> interface: 172.25.201.209, port: 5672, protocol: amqp, purpose: AMQP 0-9-1<br>>>> and AMQP 1.0<br>>>> Node: <a href="mailto:rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com" target="_blank">rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com</a>,<br>>>> interface: [::], port: 15672, protocol: http, purpose: HTTP API<br>>>><br>>>> Feature flags<br>>>><br>>>> Flag: drop_unroutable_metric, state: enabled<br>>>> Flag: empty_basic_get_metric, state: enabled<br>>>> Flag: implicit_default_bindings, state: enabled<br>>>> Flag: quorum_queue, state: enabled<br>>>> Flag: virtual_host_metadata, state: enabled<br>>>><br>>>> *Logs:*<br>>>> *(Attached)*<br>>>><br>>>> With regards,<br>>>> Swogat Pradhan<br>>>><br>>>> On Sun, Feb 26, 2023 at 2:34 PM Swogat Pradhan <<br><a href="mailto:swogatpradhan22@gmail.com" target="_blank">swogatpradhan22@gmail.com</a>><br>>>> wrote:<br>>>><br>>>>> Hi,<br>>>>> Please find the nova-conductor as well as the nova-api log.<br>>>>><br>>>>> nova-conductor:<br>>>>><br>>>>> 2023-02-26 08:45:01.108 31 WARNING oslo_messaging._drivers.amqpdriver<br>>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -]<br>>>>> reply_349bcb075f8c49329435a0f884b33066 doesn't exist, drop reply to<br>>>>> 16152921c1eb45c2b1f562087140168b<br>>>>> 2023-02-26 08:45:02.144 26 WARNING oslo_messaging._drivers.amqpdriver<br>>>>> [req-7b43c4e5-0475-4598-92c0-fcacb51d9813 - - - - -]<br>>>>> reply_276049ec36a84486a8a406911d9802f4 doesn't exist, drop reply to<br>>>>> 83dbe5f567a940b698acfe986f6194fa<br>>>>> 2023-02-26 08:45:02.314 32 WARNING oslo_messaging._drivers.amqpdriver<br>>>>> [req-7b43c4e5-0475-4598-92c0-fcacb51d9813 - - - - -]<br>>>>> reply_276049ec36a84486a8a406911d9802f4 doesn't exist, drop reply to<br>>>>> f3bfd7f65bd542b18d84cea3033abb43:<br>>>>> oslo_messaging.exceptions.MessageUndeliverable<br>>>>> 2023-02-26 08:45:02.316 32 ERROR oslo_messaging._drivers.amqpdriver<br>>>>> [req-7b43c4e5-0475-4598-92c0-fcacb51d9813 - - - - -] The reply<br>>>>> f3bfd7f65bd542b18d84cea3033abb43 failed to send after 60 seconds due to a<br>>>>> missing queue (reply_276049ec36a84486a8a406911d9802f4). Abandoning...:<br>>>>> oslo_messaging.exceptions.MessageUndeliverable<br>>>>> 2023-02-26 08:48:01.282 35 
WARNING oslo_messaging._drivers.amqpdriver<br>>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -]<br>>>>> reply_349bcb075f8c49329435a0f884b33066 doesn't exist, drop reply to<br>>>>> d4b9180f91a94f9a82c3c9c4b7595566:<br>>>>> oslo_messaging.exceptions.MessageUndeliverable<br>>>>> 2023-02-26 08:48:01.284 35 ERROR oslo_messaging._drivers.amqpdriver<br>>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -] The reply<br>>>>> d4b9180f91a94f9a82c3c9c4b7595566 failed to send after 60 seconds due to a<br>>>>> missing queue (reply_349bcb075f8c49329435a0f884b33066). Abandoning...:<br>>>>> oslo_messaging.exceptions.MessageUndeliverable<br>>>>> 2023-02-26 08:49:01.303 33 WARNING oslo_messaging._drivers.amqpdriver<br>>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -]<br>>>>> reply_349bcb075f8c49329435a0f884b33066 doesn't exist, drop reply to<br>>>>> 897911a234a445d8a0d8af02ece40f6f:<br>>>>> oslo_messaging.exceptions.MessageUndeliverable<br>>>>> 2023-02-26 08:49:01.304 33 ERROR oslo_messaging._drivers.amqpdriver<br>>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -] The reply<br>>>>> 897911a234a445d8a0d8af02ece40f6f failed to send after 60 seconds due to a<br>>>>> missing queue (reply_349bcb075f8c49329435a0f884b33066). Abandoning...:<br>>>>> oslo_messaging.exceptions.MessageUndeliverable<br>>>>> 2023-02-26 08:49:52.254 31 WARNING nova.cache_utils<br>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45 b240e3e89d99489284cd731e75f2a5db<br>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] Cache enabled with<br>>>>> backend dogpile.cache.null.<br>>>>> 2023-02-26 08:50:01.264 27 WARNING oslo_messaging._drivers.amqpdriver<br>>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -]<br>>>>> reply_349bcb075f8c49329435a0f884b33066 doesn't exist, drop reply to<br>>>>> 8f723ceb10c3472db9a9f324861df2bb:<br>>>>> oslo_messaging.exceptions.MessageUndeliverable<br>>>>> 2023-02-26 08:50:01.266 27 ERROR oslo_messaging._drivers.amqpdriver<br>>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -] The reply<br>>>>> 8f723ceb10c3472db9a9f324861df2bb failed to send after 60 seconds due to a<br>>>>> missing queue (reply_349bcb075f8c49329435a0f884b33066). Abandoning...:<br>>>>> oslo_messaging.exceptions.MessageUndeliverable<br>>>>><br>>>>> With regards,<br>>>>> Swogat Pradhan<br>>>>><br>>>>> On Sun, Feb 26, 2023 at 2:26 PM Swogat Pradhan <<br>>>>> <a href="mailto:swogatpradhan22@gmail.com" target="_blank">swogatpradhan22@gmail.com</a>> wrote:<br>>>>><br>>>>>> Hi,<br>>>>>> I currently have 3 compute nodes on edge site1 where I am trying to<br>>>>>> launch VMs.<br>>>>>> When the VM is in the spawning state, the node goes down (openstack<br>>>>>> compute service list); the node comes back up when I restart the<br>>>>>> nova-compute service, but then the launch of the VM fails.<br>>>>>><br>>>>>> nova-compute.log<br>>>>>><br>>>>>> 2023-02-26 08:15:51.808 7 INFO nova.compute.manager<br>>>>>> [req-bc0f5f2e-53fc-4dae-b1da-82f1f972d617 - - - - -] Running<br>>>>>> instance usage<br>>>>>> audit for host <a href="http://dcn01-hci-0.bdxworld.com" target="_blank">dcn01-hci-0.bdxworld.com</a> from 2023-02-26 07:00:00 to<br>>>>>> 2023-02-26 08:00:00. 
0 instances.<br>>>>>> 2023-02-26 08:49:52.813 7 INFO nova.compute.claims<br>>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45<br>>>>>> b240e3e89d99489284cd731e75f2a5db<br>>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] [instance:<br>>>>>> 0c62c1ef-9010-417d-a05f-4db77e901600] Claim successful on node<br>>>>>> <a href="http://dcn01-hci-0.bdxworld.com" target="_blank">dcn01-hci-0.bdxworld.com</a><br>>>>>> 2023-02-26 08:49:54.225 7 INFO nova.virt.libvirt.driver<br>>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45<br>>>>>> b240e3e89d99489284cd731e75f2a5db<br>>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] [instance:<br>>>>>> 0c62c1ef-9010-417d-a05f-4db77e901600] Ignoring supplied device<br>name:<br>>>>>> /dev/vda. Libvirt can't honour user-supplied dev names<br>>>>>> 2023-02-26 08:49:54.398 7 INFO nova.virt.block_device<br>>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45<br>>>>>> b240e3e89d99489284cd731e75f2a5db<br>>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] [instance:<br>>>>>> 0c62c1ef-9010-417d-a05f-4db77e901600] Booting with volume<br>>>>>> c4bd7885-5973-4860-bbe6-7a2f726baeee at /dev/vda<br>>>>>> 2023-02-26 08:49:55.216 7 WARNING nova.cache_utils<br>>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45<br>>>>>> b240e3e89d99489284cd731e75f2a5db<br>>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] Cache enabled<br>with<br>>>>>> backend dogpile.cache.null.<br>>>>>> 2023-02-26 08:49:55.283 7 INFO oslo.privsep.daemon<br>>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45<br>>>>>> b240e3e89d99489284cd731e75f2a5db<br>>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] Running<br>>>>>> privsep helper:<br>>>>>> ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf',<br>'privsep-helper',<br>>>>>> '--config-file', '/etc/nova/nova.conf', '--config-file',<br>>>>>> '/etc/nova/nova-compute.conf', '--privsep_context',<br>>>>>> 'os_brick.privileged.default', '--privsep_sock_path',<br>>>>>> '/tmp/tmpin40tah6/privsep.sock']<br>>>>>> 2023-02-26 08:49:55.791 7 INFO oslo.privsep.daemon<br>>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45<br>>>>>> b240e3e89d99489284cd731e75f2a5db<br>>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] Spawned new<br>privsep<br>>>>>> daemon via rootwrap<br>>>>>> 2023-02-26 08:49:55.717 2647 INFO oslo.privsep.daemon [-] privsep<br>>>>>> daemon starting<br>>>>>> 2023-02-26 08:49:55.722 2647 INFO oslo.privsep.daemon [-] privsep<br>>>>>> process running with uid/gid: 0/0<br>>>>>> 2023-02-26 08:49:55.726 2647 INFO oslo.privsep.daemon [-] privsep<br>>>>>> process running with capabilities (eff/prm/inh):<br>>>>>> CAP_SYS_ADMIN/CAP_SYS_ADMIN/none<br>>>>>> 2023-02-26 08:49:55.726 2647 INFO oslo.privsep.daemon [-] privsep<br>>>>>> daemon running as pid 2647<br>>>>>> 2023-02-26 08:49:55.956 7 WARNING<br>os_brick.initiator.connectors.nvmeof<br>>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45<br>>>>>> b240e3e89d99489284cd731e75f2a5db<br>>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] Process<br>>>>>> execution error<br>>>>>> in _get_host_uuid: Unexpected error while running command.<br>>>>>> Command: blkid overlay -s UUID -o value<br>>>>>> Exit code: 2<br>>>>>> Stdout: ''<br>>>>>> Stderr: '': oslo_concurrency.processutils.ProcessExecutionError:<br>>>>>> Unexpected error while running command.<br>>>>>> 2023-02-26 08:49:58.247 7 INFO nova.virt.libvirt.driver<br>>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45<br>>>>>> b240e3e89d99489284cd731e75f2a5db<br>>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] [instance:<br>>>>>> 
0c62c1ef-9010-417d-a05f-4db77e901600] Creating image<br>>>>>><br>>>>>> Is there a way to solve this issue?<br>>>>>><br>>>>>><br>>>>>> With regards,<br>>>>>><br>>>>>> Swogat Pradhan<br>>>>>><br>>>>><br><br><br><br><br></blockquote></blockquote></blockquote></blockquote><br><br><br><br></div></div></blockquote></div><br></div></div></blockquote></div>
</blockquote></div>