After a vacation I am continuing to get my first OpenStack setup running (single-node guided install).
After creating a few instances from the Ubuntu templates with different flavors, every one of these instances ends up with the status "ERROR".
Where do I find the log files? Google turns up various solutions, but the paths and files mentioned there do not exist in my installation.
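Before digging through log files, I suppose I can at least ask Nova for the fault reason directly; as far as I understand, an errored instance should carry it in the "fault" field of the server details (the instance ID below is just a placeholder):

openstack server show <instance-id>
openstack server show <instance-id> -c fault -f yaml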
~/snap/openstack$ ls -ahl
total 20K
drwxr-xr-x 5 webmen webmen 4.0K Aug 13 10:54 .
drwx------ 4 webmen webmen 4.0K Jul 24 13:59 ..
drwxr-xr-x 4 webmen webmen 4.0K Jul 19 12:53 576
drwxr-xr-x 4 webmen webmen 4.0K Jul 19 12:53 577
drwxr-xr-x 4 webmen webmen 4.0K Jul 17 12:16 common
lrwxrwxrwx 1 webmen webmen 3 Jul 24 10:04 current -> 560
In „common", there are sunbeam-logs
Maybe there is a misconfiguration in the virtual networking. I had a hard time understanding the different networks I need in order to reach the VMs from the LAN.
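In case it matters, this is roughly how I look at the network setup from the CLI (the instance ID is again a placeholder; --server just narrows the port list to that one VM):

openstack network list
openstack subnet list
openstack router list
openstack port list --server <instance-id>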
Finally I found the logs using "find -name nova":
/var/snap/openstack-hypervisor/common/log
webmen@ada:/var/snap/openstack-hypervisor/common/log$ ls -ahl
total 92K
drwxr-xr-x 7 root root 4.0K Jul 17 12:39 .
drwxr-xr-x 8 root root 4.0K Jul 24 12:54 ..
drwxr-xr-x 3 root root 4.0K Jul 17 12:39 libvirt
drwxr-xr-x 2 root root 4.0K Jul 17 12:39 neutron
-rw-r--r-- 1 root root 58K Aug 16 13:06 neutron.log
drwxr-xr-x 2 root root 4.0K Jul 17 12:39 nova
drwxr-xr-x 2 root root 4.0K Jul 17 12:39 openvswitch
drwxr-xr-x 2 root root 4.0K Jul 17 12:39 ovn
webmen@ada:/var/snap/openstack-hypervisor/common/log$
But most directories are empty:
webmen@ada:/var/snap/openstack-hypervisor/common/log$ tree
.
├── libvirt
│ └── qemu
├── neutron
├── neutron.log
├── nova
├── openvswitch
│ ├── ovsdb-server.log
│ └── ovs-vswitchd.log
└── ovn
└── ovn-controller.log
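Since the nova and neutron directories are empty, I assume those services log to the systemd journal rather than to files; listing the snap services and following their journal output should confirm that (I have not verified the exact service names, this is just how I would check):

snap services openstack-hypervisor
sudo snap logs -f openstack-hypervisor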
But I did not find the Glance logs:
webmen@ada:/$ sudo find -name glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/995/fs/var/lib/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/995/fs/etc/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1192/fs/var/lib/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1192/fs/etc/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/915/fs/usr/bin/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/894/fs/usr/bin/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/891/fs/usr/bin/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/205/fs/usr/bin/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/var/log/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/var/lib/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/usr/share/doc/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/usr/bin/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/usr/lib/python3/dist-packages/glance
./var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/234/fs/etc/glance
./snap/openstack-hypervisor/153/usr/bin/glance
./snap/openstack/577/bin/glance
./snap/openstack/577/lib/python3.10/site-packages/muranoclient/glance
./snap/openstack/576/bin/glance
./snap/openstack/576/lib/python3.10/site-packages/muranoclient/glance
find: ‘./proc/3340174’: No such file or directory
find: ‘./proc/3361710’: No such file or directory
find: ‘./proc/3361712’: No such file or directory
find: ‘./proc/3361763’: No such file or directory
find: ‘./proc/3364581’: No such file or directory
find: ‘./proc/3364766’: No such file or directory
find: ‘./proc/3364791’: No such file or directory
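From this output it looks like Glance does not run on the host at all but inside microk8s, so I assume its logs have to come from the Kubernetes pod. Assuming the Sunbeam control plane lives in a namespace called "openstack" (which I have only read about, not verified), something like this should show them:

sudo microk8s.kubectl get pods -n openstack
sudo microk8s.kubectl logs -n openstack <glance-pod-name>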
htop: server load very high
The server has been running for 4 days now and still shows a load average of 4.68.
The hungry processes are:
5 processes of /charm/bin/pebble run --create-dirs --hold --http :38813 --verbose
/snap/microk8s/7038/kubelite/…
Here is a copy of the top output (htop did not let me copy the list):
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1272 root 20 0 11.4g 238528 59048 S 83.3 0.1 3596:35 containerd
21051 root 20 0 1638280 260508 7088 S 72.5 0.1 266:36.35 pebble
21945 root 20 0 1305356 52048 7088 S 71.8 0.0 91:01.52 pebble
20543 root 20 0 1704412 220848 7088 S 69.5 0.1 265:03.19 pebble
2994 root 20 0 1854984 606616 104600 S 37.0 0.3 2957:41 kubelite
1275 root 20 0 3586472 340048 19788 S 23.6 0.2 1822:17 k8s-dqlite
3340174 root 20 0 2135364 31952 19516 S 6.2 0.0 0:00.19 alertmanager
1749555 snap_da+ 20 0 3242712 924312 42656 S 5.9 0.5 67:37.06 mysqld
78926 snap_da+ 20 0 3490924 1.1g 42428 S 5.6 0.6 91:53.33 mysqld
1267 root 20 0 4276408 2.3g 55224 S 5.2 1.2 529:17.09 mongod
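To see which pods those pebble processes actually belong to, I suppose I can map the PIDs from the top output to their cgroups:

cat /proc/21051/cgroup
cat /proc/21945/cgroup
cat /proc/20543/cgroup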
So… how do I get rid of that ERROR status in Horizon?
Best regards
Klaus Becker