Hello Klaus,

There are a couple of places to look for logs:

- kubectl logs -n openstack --all-containers nova-0   # this will get you the logs from nova-api, nova-scheduler and nova-conductor
- sudo snap logs -n all openstack-hypervisor.nova-compute   # this will get you all logs from the nova-compute service
- sudo snap logs -n all openstack-hypervisor   # this will get you all logs of all services in the snap

Can you try from the CLI to get the reason why the VM is in error?

openstack server show <id>   # might show you the reason why the VM is in error, such as "no host found"

Hope this helps you find the root cause,
Guillaume

On Wed, Aug 21, 2024 at 3:35 PM klaus.becker@webmen.de <klaus.becker@webmen.de> wrote:
Hi OpenStackers!
After a vacation I am continuing my first OpenStack work (single-node guided install).
What works:
- I managed to install OpenStack on Ubuntu 22.04.4 LTS on an older DELL server with plenty of RAM and disks
- I used the snap install
- I can use Horizon to create my VMs
Problem: After I created a few instances from the Ubuntu templates with different flavors, I am getting the status "ERROR" for each of these instances. Trying to shut down an instance, I get an error: [image: PastedGraphic-1.png]
Questions: What is the best practice for troubleshooting with my installation type (single node, snap)? Where do I find the log files? Google brings up various solutions, but the paths and files they mention do not exist in my installation.
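Is the snap tooling itself the intended way to get at them? The best I can come up with is something like the following, but I am not sure the service and unit names are right for this install:

    sudo snap services openstack-hypervisor                   # which services the snap runs, and whether they are active
    sudo snap logs -n 100 openstack-hypervisor                # last 100 log lines from all of them
    journalctl -eu snap.openstack-hypervisor.nova-compute     # the same data via systemd, if that unit name exists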
I tried a few suggestions: "Openstack instance generating status error – Fix it now" <https://bobcares.com/blog/openstack-instance-generating-status-error/> (/var/lib/nova and /var/log/libvirt/qemu do not exist)
I found: ~/snap/juju and ~/snap/openstack
~/snap/openstack$ ls -ahl
total 20K
drwxr-xr-x 5 webmen webmen 4.0K Aug 13 10:54 .
drwx------ 4 webmen webmen 4.0K Jul 24 13:59 ..
drwxr-xr-x 4 webmen webmen 4.0K Jul 19 12:53 576
drwxr-xr-x 4 webmen webmen 4.0K Jul 19 12:53 577
drwxr-xr-x 4 webmen webmen 4.0K Jul 17 12:16 common
lrwxrwxrwx 1 webmen webmen 3 Jul 24 10:04 current -> 560
The link "current —> 560 has no target here!
In „common", there are sunbeam-logs
Maybe there is a misconfiguration in the software-defined networking. I had a hard time understanding the different networks I need in order to reach the VMs from the LAN.
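If it is networking (or the compute host not being registered at all), I suppose these would be the first things to check from the CLI, assuming I can get admin credentials loaded:

    openstack compute service list   # is nova-compute on this host up and enabled?
    openstack network agent list     # are the neutron/OVN agents alive?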
Finally I found the logs using "find -name nova": /var/snap/openstack-hypervisor/common/log
webmen@ada:/var/snap/openstack-hypervisor/common/log$ ls -ahl
total 92K
drwxr-xr-x 7 root root 4.0K Jul 17 12:39 .
drwxr-xr-x 8 root root 4.0K Jul 24 12:54 ..
drwxr-xr-x 3 root root 4.0K Jul 17 12:39 libvirt
drwxr-xr-x 2 root root 4.0K Jul 17 12:39 neutron
-rw-r--r-- 1 root root 58K Aug 16 13:06 neutron.log
drwxr-xr-x 2 root root 4.0K Jul 17 12:39 nova
drwxr-xr-x 2 root root 4.0K Jul 17 12:39 openvswitch
drwxr-xr-x 2 root root 4.0K Jul 17 12:39 ovn
webmen@ada:/var/snap/openstack-hypervisor/common/log$
But most dirs are empty:
webmen@ada:/var/snap/openstack-hypervisor/common/log$ tree
.
├── libvirt
│   └── qemu
├── neutron
├── neutron.log
├── nova
├── openvswitch
│   ├── ovsdb-server.log
│   └── ovs-vswitchd.log
└── ovn
    └── ovn-controller.log
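I am not sure whether anything is supposed to end up in the empty nova/ and libvirt/qemu/ directories at all. I guess the API, scheduler and conductor run inside MicroK8s anyway, so their logs would have to come out of the pods rather than off the host (pod name and namespace below are guesses on my part):

    sudo microk8s kubectl -n openstack get pods
    sudo microk8s kubectl -n openstack logs nova-0 --all-containers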
But I did not find the logs of glance:
webmen@ada:/$ sudo find -name glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/995/fs/var/lib/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/995/fs/etc/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1192/fs/var/lib/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1192/fs/etc/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/915/fs/usr/bin/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/894/fs/usr/bin/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/891/fs/usr/bin/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/205/fs/usr/bin/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/var/log/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/var/lib/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/usr/share/doc/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/usr/bin/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/usr/lib/python3/dist-packages/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/etc/glance
./snap/openstack-hypervisor/153/usr/bin/glance
./snap/openstack/577/bin/glance
./snap/openstack/577/lib/python3.10/site-packages/muranoclient/glance
./snap/openstack/576/bin/glance
./snap/openstack/576/lib/python3.10/site-packages/muranoclient/glance
find: ‘./proc/3340174’: No such file or directory
find: ‘./proc/3361710’: No such file or directory
find: ‘./proc/3361712’: No such file or directory
find: ‘./proc/3361763’: No such file or directory
find: ‘./proc/3364581’: No such file or directory
find: ‘./proc/3364766’: No such file or directory
find: ‘./proc/3364791’: No such file or directory
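All of these hits are under the MicroK8s containerd snapshots, so I suspect glance does not run on the host at all but as one of those pods, i.e. its logs would also have to come via kubectl (pod name again guessed):

    sudo microk8s kubectl -n openstack logs glance-0 --all-containers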
htop: server load very high. The server has been running for 4 days now and still shows a load average of 4.68. The hungry processes are 5 processes of /charm/bin/pebble run --create-dirs --hold --http :38813 --verbose, plus /snap/microk8s/7038/kubelite/…
Here is a copy of top (htop did not let me copy the list):
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1272 root 20 0 11.4g 238528 59048 S 83.3 0.1 3596:35 containerd
21051 root 20 0 1638280 260508 7088 S 72.5 0.1 266:36.35 pebble
21945 root 20 0 1305356 52048 7088 S 71.8 0.0 91:01.52 pebble
20543 root 20 0 1704412 220848 7088 S 69.5 0.1 265:03.19 pebble
2994 root 20 0 1854984 606616 104600 S 37.0 0.3 2957:41 kubelite
1275 root 20 0 3586472 340048 19788 S 23.6 0.2 1822:17 k8s-dqlite
3340174 root 20 0 2135364 31952 19516 S 6.2 0.0 0:00.19 alertmanager
1749555 snap_da+ 20 0 3242712 924312 42656 S 5.9 0.5 67:37.06 mysqld
78926 snap_da+ 20 0 3490924 1.1g 42428 S 5.6 0.6 91:53.33 mysqld
1267 root 20 0 4276408 2.3g 55224 S 5.2 1.2 529:17.09 mongod
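To figure out which workloads those pebble processes belong to, I suppose I can look at their full command lines, and at per-pod CPU if the metrics add-on is enabled (PIDs copied from the table above):

    ps -o pid,ppid,args -p 21051,20543,21945      # full command lines of the busy pebble processes
    sudo microk8s kubectl top pods -A             # per-pod CPU, only works if the metrics-server add-on is enabled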
So… how do I get rid of that status ERROR in Horizon?
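Or, asked differently: where does this installation record the actual fault for an instance? From older guides I would expect something like the following to show the reason, but I am not sure it applies to the snap setup:

    openstack server list --long
    openstack server show <server-id>    # the "fault" field should contain the reason for the ERROR state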
Best regards
Klaus Becker
Webmen Internet GmbH Faulenstraße 12 28195 Bremen +49 421 24 34 94-0 info@webmen.de www.webmen.de
Managing Director: Christiane Niebuhr-Redder, Authorized Signatory: Kathleen Marx-Colonius, Bremen Local Court HRB 16685, VAT ID DE 179413676