<div dir="ltr">Hi,<div>Yes the MTU is the same as the default '1500'.</div><div>Generally I haven't seen any packet loss, but never checked when launching the instance.</div><div>I will check that and come back.</div><div>But everytime i launch an instance the instance gets stuck at spawning state and there the hypervisor becomes down, so not sure if packet loss causes this.</div><div><br></div><div>With regards,</div><div>Swogat pradhan</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 1, 2023 at 3:30 PM Eugen Block <<a href="mailto:eblock@nde.ag">eblock@nde.ag</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">One more thing coming to mind is MTU size. Are they identical between <br>
central and edge site? Do you see packet loss through the tunnel?<br>
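A quick way to verify the path MTU is a don't-fragment ping with a
full-size payload from an edge compute node towards one of the controllers
(the address is only an example taken from the listener output further
down; 1472 = 1500 minus 28 bytes of ICMP/IP headers):

ip link show | grep mtu
ping -M do -s 1472 -c 20 172.25.201.212

If that fails or shows loss while a smaller payload (e.g. -s 1200) gets
through, the tunnel MTU or packet loss is the likely culprit.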

Zitat von Swogat Pradhan <swogatpradhan22@gmail.com>:

> Hi Eugen,
> Please add my email address to either 'To' or 'Cc', as I am not
> receiving your emails directly.
> Coming to the issue:
>
> [root@overcloud-controller-no-ceph-3 /]# rabbitmqctl list_policies -p /
> Listing policies for vhost "/" ...
> vhost name pattern apply-to definition priority
> / ha-all ^(?!amq\.).* queues
> {"ha-mode":"exactly","ha-params":2,"ha-promote-on-shutdown":"always"} 0
>
> The edge site compute nodes are up; a node only goes down when I try to
> launch an instance, and the instance reaches the spawning state and then
> gets stuck.
>
> I have a tunnel set up between the central and the edge sites.
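> To rule out basic reachability over the tunnel, something like the
> following can be run from an edge compute node against a controller's
> internal API address (the address is just an example taken from the
> listener output further down):
> nc -zv 172.25.201.212 5672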
>
> With regards,
> Swogat Pradhan
>
> On Tue, Feb 28, 2023 at 9:11 PM Swogat Pradhan <swogatpradhan22@gmail.com>
> wrote:
>
>> Hi Eugen,
>> For some reason I am not receiving your emails directly; I am checking
>> the email digest and found your reply there.
>> Here is the log for download: https://we.tl/t-L8FEkGZFSq
>> Yes, these logs are from the time when the issue occurred.
>>
>> *Note: I am able to create VMs and perform other activities in the
>> central site; I am only facing this issue in the edge site.*
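>> For reference, the edge launches target the edge availability zone,
>> roughly like this; the AZ name "dcn01" is only an assumption based on the
>> host names in the logs:
>> openstack server create --image <image> --flavor <flavor> --network <network> --availability-zone dcn01 test-vm
>> The same image and flavor work when the instance lands on the central site.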
>>
>> With regards,
>> Swogat Pradhan
>>
>> On Mon, Feb 27, 2023 at 5:12 PM Swogat Pradhan <swogatpradhan22@gmail.com>
>> wrote:
>>
>>> Hi Eugen,
>>> Thanks for your response.
>>> I actually have a 4-controller setup, so here are the details:
>>>
>>> *PCS Status:*
>>> * Container bundle set: rabbitmq-bundle [
>>> 172.25.201.68:8787/tripleomaster/openstack-rabbitmq:pcmklatest]:
>>> * rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Started
>>> overcloud-controller-no-ceph-3
>>> * rabbitmq-bundle-1 (ocf::heartbeat:rabbitmq-cluster): Started
>>> overcloud-controller-2
>>> * rabbitmq-bundle-2 (ocf::heartbeat:rabbitmq-cluster): Started
>>> overcloud-controller-1
>>> * rabbitmq-bundle-3 (ocf::heartbeat:rabbitmq-cluster): Started
>>> overcloud-controller-0
>>>
>>> I have tried restarting the bundle multiple times but the issue is still
>>> present.
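>>> Restarting the bundle was done with something along the lines of:
>>> pcs resource restart rabbitmq-bundle
>>> and then "pcs status" to confirm all four replicas come back as Started.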
>>>
>>> *Cluster status:*
>>> [root@overcloud-controller-0 /]# rabbitmqctl cluster_status
>>> Cluster status of node
>>> rabbit@overcloud-controller-0.internalapi.bdxworld.com ...
>>> Basics
>>>
>>> Cluster name: rabbit@overcloud-controller-no-ceph-3.bdxworld.com
>>>
>>> Disk Nodes
>>>
>>> rabbit@overcloud-controller-0.internalapi.bdxworld.com
>>> rabbit@overcloud-controller-1.internalapi.bdxworld.com
>>> rabbit@overcloud-controller-2.internalapi.bdxworld.com
>>> rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com
>>>
>>> Running Nodes
>>>
>>> rabbit@overcloud-controller-0.internalapi.bdxworld.com
>>> rabbit@overcloud-controller-1.internalapi.bdxworld.com
>>> rabbit@overcloud-controller-2.internalapi.bdxworld.com
>>> rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com
>>>
>>> Versions
>>>
>>> rabbit@overcloud-controller-0.internalapi.bdxworld.com: RabbitMQ 3.8.3
>>> on Erlang 22.3.4.1
>>> rabbit@overcloud-controller-1.internalapi.bdxworld.com: RabbitMQ 3.8.3
>>> on Erlang 22.3.4.1
>>> rabbit@overcloud-controller-2.internalapi.bdxworld.com: RabbitMQ 3.8.3
>>> on Erlang 22.3.4.1
>>> rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com: RabbitMQ
>>> 3.8.3 on Erlang 22.3.4.1
>>>
>>> Alarms
>>>
>>> (none)
>>>
>>> Network Partitions
>>>
>>> (none)
>>>
>>> Listeners
>>>
>>> Node: rabbit@overcloud-controller-0.internalapi.bdxworld.com, interface:
>>> [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool
>>> communication
>>> Node: rabbit@overcloud-controller-0.internalapi.bdxworld.com, interface:
>>> 172.25.201.212, port: 5672, protocol: amqp, purpose: AMQP 0-9-1
>>> and AMQP 1.0
>>> Node: rabbit@overcloud-controller-0.internalapi.bdxworld.com, interface:
>>> [::], port: 15672, protocol: http, purpose: HTTP API
>>> Node: rabbit@overcloud-controller-1.internalapi.bdxworld.com, interface:
>>> [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool
>>> communication
>>> Node: rabbit@overcloud-controller-1.internalapi.bdxworld.com, interface:
>>> 172.25.201.205, port: 5672, protocol: amqp, purpose: AMQP 0-9-1
>>> and AMQP 1.0
>>> Node: rabbit@overcloud-controller-1.internalapi.bdxworld.com, interface:
>>> [::], port: 15672, protocol: http, purpose: HTTP API
>>> Node: rabbit@overcloud-controller-2.internalapi.bdxworld.com, interface:
>>> [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool
>>> communication
>>> Node: rabbit@overcloud-controller-2.internalapi.bdxworld.com, interface:
>>> 172.25.201.201, port: 5672, protocol: amqp, purpose: AMQP 0-9-1
>>> and AMQP 1.0
>>> Node: rabbit@overcloud-controller-2.internalapi.bdxworld.com, interface:
>>> [::], port: 15672, protocol: http, purpose: HTTP API
>>> Node: rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com,
>>> interface: [::], port: 25672, protocol: clustering, purpose: inter-node and
>>> CLI tool communication
>>> Node: rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com,
>>> interface: 172.25.201.209, port: 5672, protocol: amqp, purpose: AMQP 0-9-1
>>> and AMQP 1.0
>>> Node: rabbit@overcloud-controller-no-ceph-3.internalapi.bdxworld.com,
>>> interface: [::], port: 15672, protocol: http, purpose: HTTP API
>>>
>>> Feature flags
>>>
>>> Flag: drop_unroutable_metric, state: enabled
>>> Flag: empty_basic_get_metric, state: enabled
>>> Flag: implicit_default_bindings, state: enabled
>>> Flag: quorum_queue, state: enabled
>>> Flag: virtual_host_metadata, state: enabled
>>>
>>> *Logs:*
>>> *(Attached)*
>>>
>>> With regards,
>>> Swogat Pradhan
>>>
>>> On Sun, Feb 26, 2023 at 2:34 PM Swogat Pradhan <swogatpradhan22@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>> Please find the nova conductor as well as the nova API log.
>>>>
>>>> nova-conductor:
>>>>
>>>> 2023-02-26 08:45:01.108 31 WARNING oslo_messaging._drivers.amqpdriver
>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -]
>>>> reply_349bcb075f8c49329435a0f884b33066 doesn't exist, drop reply to
>>>> 16152921c1eb45c2b1f562087140168b
>>>> 2023-02-26 08:45:02.144 26 WARNING oslo_messaging._drivers.amqpdriver
>>>> [req-7b43c4e5-0475-4598-92c0-fcacb51d9813 - - - - -]
>>>> reply_276049ec36a84486a8a406911d9802f4 doesn't exist, drop reply to
>>>> 83dbe5f567a940b698acfe986f6194fa
>>>> 2023-02-26 08:45:02.314 32 WARNING oslo_messaging._drivers.amqpdriver
>>>> [req-7b43c4e5-0475-4598-92c0-fcacb51d9813 - - - - -]
>>>> reply_276049ec36a84486a8a406911d9802f4 doesn't exist, drop reply to
>>>> f3bfd7f65bd542b18d84cea3033abb43:
>>>> oslo_messaging.exceptions.MessageUndeliverable
>>>> 2023-02-26 08:45:02.316 32 ERROR oslo_messaging._drivers.amqpdriver
>>>> [req-7b43c4e5-0475-4598-92c0-fcacb51d9813 - - - - -] The reply
>>>> f3bfd7f65bd542b18d84cea3033abb43 failed to send after 60 seconds due to a
>>>> missing queue (reply_276049ec36a84486a8a406911d9802f4). Abandoning...:
>>>> oslo_messaging.exceptions.MessageUndeliverable
>>>> 2023-02-26 08:48:01.282 35 WARNING oslo_messaging._drivers.amqpdriver
>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -]
>>>> reply_349bcb075f8c49329435a0f884b33066 doesn't exist, drop reply to
>>>> d4b9180f91a94f9a82c3c9c4b7595566:
>>>> oslo_messaging.exceptions.MessageUndeliverable
>>>> 2023-02-26 08:48:01.284 35 ERROR oslo_messaging._drivers.amqpdriver
>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -] The reply
>>>> d4b9180f91a94f9a82c3c9c4b7595566 failed to send after 60 seconds due to a
>>>> missing queue (reply_349bcb075f8c49329435a0f884b33066). Abandoning...:
>>>> oslo_messaging.exceptions.MessageUndeliverable
>>>> 2023-02-26 08:49:01.303 33 WARNING oslo_messaging._drivers.amqpdriver
>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -]
>>>> reply_349bcb075f8c49329435a0f884b33066 doesn't exist, drop reply to
>>>> 897911a234a445d8a0d8af02ece40f6f:
>>>> oslo_messaging.exceptions.MessageUndeliverable
>>>> 2023-02-26 08:49:01.304 33 ERROR oslo_messaging._drivers.amqpdriver
>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -] The reply
>>>> 897911a234a445d8a0d8af02ece40f6f failed to send after 60 seconds due to a
>>>> missing queue (reply_349bcb075f8c49329435a0f884b33066). Abandoning...:
>>>> oslo_messaging.exceptions.MessageUndeliverable
>>>> 2023-02-26 08:49:52.254 31 WARNING nova.cache_utils
>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45 b240e3e89d99489284cd731e75f2a5db
>>>> 4160ce999a31485fa643aed0936dfef0 - default default] Cache enabled with
>>>> backend dogpile.cache.null.
>>>> 2023-02-26 08:50:01.264 27 WARNING oslo_messaging._drivers.amqpdriver
>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -]
>>>> reply_349bcb075f8c49329435a0f884b33066 doesn't exist, drop reply to
>>>> 8f723ceb10c3472db9a9f324861df2bb:
>>>> oslo_messaging.exceptions.MessageUndeliverable
>>>> 2023-02-26 08:50:01.266 27 ERROR oslo_messaging._drivers.amqpdriver
>>>> [req-caefe26d-153a-4dfd-9ea6-bc5ca0d46679 - - - - -] The reply
>>>> 8f723ceb10c3472db9a9f324861df2bb failed to send after 60 seconds due to a
>>>> missing queue (reply_349bcb075f8c49329435a0f884b33066). Abandoning...:
>>>> oslo_messaging.exceptions.MessageUndeliverable
>>>>
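>>>> Whether the reply queues referenced above still exist can be checked on
>>>> any controller with something like:
>>>> rabbitmqctl list_queues -p / name consumers | grep reply_349bcb075f8c49329435a0f884b33066
>>>> which also shows whether a consumer is attached.
>>>>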
>>>> With regards,
>>>> Swogat Pradhan
>>>>
>>>> On Sun, Feb 26, 2023 at 2:26 PM Swogat Pradhan <
>>>> swogatpradhan22@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>> I currently have 3 compute nodes on edge site1 where I am trying to
>>>>> launch VMs.
>>>>> When a VM is in the spawning state the node goes down (openstack compute
>>>>> service list); the node comes back up when I restart the nova compute
>>>>> service, but then the launch of the VM fails.
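>>>>> Bringing the service back up means something like the following, assuming
>>>>> a containerized TripleO deployment (the unit name may differ):
>>>>> sudo systemctl restart tripleo_nova_compute
>>>>> followed by "openstack compute service list" to confirm the node is up again.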
>>>>>
>>>>> nova-compute.log
>>>>>
>>>>> 2023-02-26 08:15:51.808 7 INFO nova.compute.manager
>>>>> [req-bc0f5f2e-53fc-4dae-b1da-82f1f972d617 - - - - -] Running
>>>>> instance usage
>>>>> audit for host dcn01-hci-0.bdxworld.com from 2023-02-26 07:00:00 to
>>>>> 2023-02-26 08:00:00. 0 instances.
>>>>> 2023-02-26 08:49:52.813 7 INFO nova.compute.claims
>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45
>>>>> b240e3e89d99489284cd731e75f2a5db
>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] [instance:
>>>>> 0c62c1ef-9010-417d-a05f-4db77e901600] Claim successful on node
>>>>> dcn01-hci-0.bdxworld.com
>>>>> 2023-02-26 08:49:54.225 7 INFO nova.virt.libvirt.driver
>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45
>>>>> b240e3e89d99489284cd731e75f2a5db
>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] [instance:
>>>>> 0c62c1ef-9010-417d-a05f-4db77e901600] Ignoring supplied device name:
>>>>> /dev/vda. Libvirt can't honour user-supplied dev names
>>>>> 2023-02-26 08:49:54.398 7 INFO nova.virt.block_device
>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45
>>>>> b240e3e89d99489284cd731e75f2a5db
>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] [instance:
>>>>> 0c62c1ef-9010-417d-a05f-4db77e901600] Booting with volume
>>>>> c4bd7885-5973-4860-bbe6-7a2f726baeee at /dev/vda
>>>>> 2023-02-26 08:49:55.216 7 WARNING nova.cache_utils
>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45
>>>>> b240e3e89d99489284cd731e75f2a5db
>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] Cache enabled with
>>>>> backend dogpile.cache.null.
>>>>> 2023-02-26 08:49:55.283 7 INFO oslo.privsep.daemon
>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45
>>>>> b240e3e89d99489284cd731e75f2a5db
>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] Running
>>>>> privsep helper:
>>>>> ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper',
>>>>> '--config-file', '/etc/nova/nova.conf', '--config-file',
>>>>> '/etc/nova/nova-compute.conf', '--privsep_context',
>>>>> 'os_brick.privileged.default', '--privsep_sock_path',
>>>>> '/tmp/tmpin40tah6/privsep.sock']
>>>>> 2023-02-26 08:49:55.791 7 INFO oslo.privsep.daemon
>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45
>>>>> b240e3e89d99489284cd731e75f2a5db
>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] Spawned new privsep
>>>>> daemon via rootwrap
>>>>> 2023-02-26 08:49:55.717 2647 INFO oslo.privsep.daemon [-] privsep
>>>>> daemon starting
>>>>> 2023-02-26 08:49:55.722 2647 INFO oslo.privsep.daemon [-] privsep
>>>>> process running with uid/gid: 0/0
>>>>> 2023-02-26 08:49:55.726 2647 INFO oslo.privsep.daemon [-] privsep
>>>>> process running with capabilities (eff/prm/inh):
>>>>> CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
>>>>> 2023-02-26 08:49:55.726 2647 INFO oslo.privsep.daemon [-] privsep
>>>>> daemon running as pid 2647
>>>>> 2023-02-26 08:49:55.956 7 WARNING os_brick.initiator.connectors.nvmeof
>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45
>>>>> b240e3e89d99489284cd731e75f2a5db
>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] Process
>>>>> execution error
>>>>> in _get_host_uuid: Unexpected error while running command.
>>>>> Command: blkid overlay -s UUID -o value
>>>>> Exit code: 2
>>>>> Stdout: ''
>>>>> Stderr: '': oslo_concurrency.processutils.ProcessExecutionError:
>>>>> Unexpected error while running command.
>>>>> 2023-02-26 08:49:58.247 7 INFO nova.virt.libvirt.driver
>>>>> [req-3a1547ea-326f-4dd0-9127-7f4a4bdf1e45
>>>>> b240e3e89d99489284cd731e75f2a5db
>>>>> 4160ce999a31485fa643aed0936dfef0 - default default] [instance:
>>>>> 0c62c1ef-9010-417d-a05f-4db77e901600] Creating image
>>>>>
>>>>> Is there a way to solve this issue?
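>>>>> The instance fault, if any, can be pulled with something like:
>>>>> openstack server show 0c62c1ef-9010-417d-a05f-4db77e901600 -c status -c fault
>>>>> using the instance UUID from the log above.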
>>>>>
>>>>>
>>>>> With regards,
>>>>>
>>>>> Swogat Pradhan
>>>>>
>>>>