Accessing virsh in OpenStack Wallaby | TripleO

Yatin Karel ykarel@redhat.com
Fri Aug 25 06:58:49 UTC 2023


Hi Swogat,

Yes, the commands are correct; here is a snippet from a local system:
[root@compute-0 ~]# podman exec -it nova_virtqemud /bin/bash
[root@compute-0 /]# virsh list --all
 Id   Name   State
--------------------

[root@compute-0 /]# virsh uri
qemu:///system

[root@compute-0 /]# virsh -c qemu:///system list --all
 Id   Name   State
--------------------

[root@compute-0 /]# virsh -c qemu:///session list --all
 Id   Name   State
--------------------

[root@compute-0 /]#
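
If the bare "virsh list" fails for you as in your output below, passing the
connection URI explicitly with -c qemu:///system should help, since that
error usually means virsh could not determine a default URI. For a one-off
command you also do not need an interactive shell, and since your goal is
to edit the domain XML, "virsh edit" in the same container should cover it
(it opens the XML in $EDITOR and redefines the domain on save). A sketch,
where instance-00000001 is only an example name:

[root@compute-0 ~]# podman exec -it nova_virtqemud virsh -c qemu:///system list --all
[root@compute-0 ~]# podman exec -it nova_virtqemud virsh -c qemu:///system edit instance-00000001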

Thanks and Regards
Yatin Karel


On Fri, Aug 25, 2023 at 11:52 AM Swogat Pradhan <swogatpradhan22@gmail.com>
wrote:

> Hi Yatin,
> It doesn't seem to work.
> I am not sure if I am running the right command though; can you please
> confirm?
>
> [root@dcn01-hci-2 ~]# podman exec -it nova_virtqemud /bin/bash
> [root@dcn01-hci-2 /]# virsh list
> error: failed to connect to the hypervisor
> error: no connection driver available for <null>
>
> [root@dcn01-hci-2 /]#
>
>
> With regards,
> Swogat Pradhan
>
> On Fri, Aug 25, 2023 at 10:46 AM Yatin Karel <ykarel@redhat.com> wrote:
>
>> Hi Swogat
>>
>> You can use the virsh commands from the "nova_virtqemud" container in
>> stable/wallaby+.
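>>
>> For example, a one-liner (assuming the container name from your listing):
>> [root@compute-0 ~]# podman exec -it nova_virtqemud virsh list --all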
>>
>> Thanks and Regards
>> Yatin Karel
>>
>>
>> On Fri, Aug 25, 2023 at 10:38 AM Swogat Pradhan <
>> swogatpradhan22@gmail.com> wrote:
>>
>>> Hi,
>>> Can someone please tell me how I can access the virsh utility on the
>>> compute node on OpenStack Wallaby?
>>>
>>> Previously we used nova_libvirt, but in Wallaby there are multiple
>>> containers and I am unable to use the virsh utility to edit my domain.xml
>>> file.
>>>
>>> List of containers:
>>> [root@dcn01-hci-2 ~]# podman ps | grep nova
>>> 10d981158842  172.25.201.68:8787/tripleomaster/openstack-nova-libvirt:current-tripleo  kolla_start  2 months ago  Up 2 months ago  nova_virtlogd_wrapper
>>> e91667516afc  172.25.201.68:8787/tripleomaster/openstack-nova-libvirt:current-tripleo  kolla_start  2 months ago  Up 2 months ago  nova_virtsecretd
>>> a4b07fe0e833  172.25.201.68:8787/tripleomaster/openstack-nova-libvirt:current-tripleo  kolla_start  2 months ago  Up 2 months ago  nova_virtnodedevd
>>> d7e1db393e9c  172.25.201.68:8787/tripleomaster/openstack-nova-libvirt:current-tripleo  kolla_start  2 months ago  Up 2 months ago  nova_virtstoraged
>>> 5308a171793e  172.25.201.68:8787/tripleomaster/openstack-nova-libvirt:current-tripleo  kolla_start  2 months ago  Up 2 months ago  nova_virtqemud
>>> f490a6249ba1  172.25.201.68:8787/tripleomaster/openstack-nova-libvirt:current-tripleo  kolla_start  2 months ago  Up 6 weeks ago  nova_virtproxyd
>>> ab0441e56957  172.25.201.68:8787/tripleomaster/openstack-nova-compute:current-tripleo  kolla_start  2 months ago  Up 2 months ago (healthy)  nova_migration_target
>>> f35fbba5b690  172.25.201.68:8787/tripleomaster/openstack-nova-compute:current-tripleo  kolla_start  2 months ago  Up 6 weeks ago (unhealthy)  nova_compute
>>> 82bc01993106  172.25.201.68:8787/tripleomaster/openstack-nova-libvirt:current-tripleo  /usr/sbin/virtlog...  2 months ago  Up 2 months ago
>>> Can someone please suggest how I can edit an instance XML file in this
>>> setup?
>>>
>>> With regards,
>>> Swogat Pradhan
>>>
>>