Franck, this should help you a lot.
Thanks Radoslaw
Ignazio

On Fri, 19 Nov 2021 at 12:03, Radosław Piliszek <radoslaw.piliszek@gmail.com> wrote:
If one sets glance_file_datadir_volume to a non-default value, then glance-api gets deployed on all hosts.
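For context, in globals.yml that looks roughly like this (a sketch; the exact default volume name is my assumption about kolla-ansible's defaults):

# default (named Docker volume) -- glance-api runs on a single host only:
# glance_file_datadir_volume: "glance"
# non-default (host path) -- glance-api gets deployed on all hosts:
glance_file_datadir_volume: "/var/lib/glance"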
-yoctozepto
On Fri, 19 Nov 2021 at 10:51, Ignazio Cassano <ignaziocassano@gmail.com> wrote:
Hello Franck, glance is not deployed on all nodes by default. I got the same problem. In my case I have 3 controllers. I created an NFS share on a storage server on which to store images. Before deploying glance, I create /var/lib/glance/images on the 3 controllers and I mount the NFS share.
This is my fstab on the 3 controllers:
10.102.189.182:/netappopenstacktst2_glance /var/lib/glance nfs rw,user=glance,soft,intr,noatime,nodiratime
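For reference, the preparation on each controller would look roughly like this (a sketch; the mount point and share come from the fstab entry above):

mkdir -p /var/lib/glance/images
mount /var/lib/glance          # picks up the fstab entry above
df -h /var/lib/glance          # verify the NFS share is mounted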
In my globals.yml I have:

glance_file_datadir_volume: "/var/lib/glance"
glance_backend_file: "yes"
This means images are on /var/lib/glance and, since it is an NFS share, all my 3 controllers can share images. Then you must deploy. To be sure the glance container is started on all controllers, since I have 3 controllers, I deployed 3 times, changing the order in the inventory.

First time:
[control]
A
B
C

Second time:
[control]
B
C
A

Third time:
[control]
C
B
A
Or you can deploy glance 3 times using -t glance and -l <controllername>
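A minimal sketch of that per-host approach (assuming the inventory file is named multinode and the controllers appear in it as A, B and C):

kolla-ansible -i multinode deploy -t glance -l A
kolla-ansible -i multinode deploy -t glance -l B
kolla-ansible -i multinode deploy -t glance -l C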
As for the instances being stopped, I hit a bug with a version of kolla: https://bugs.launchpad.net/kolla-ansible/+bug/1941706. It is now fixed, and with kolla 12.2.0 it works.
Ignazio
On Wed, 17 Nov 2021 at 23:17, Franck VEDEL <franck.vedel@univ-grenoble-alpes.fr> wrote:
Hello and thank you for the help. I was able to move forward on my problem, without finding a
satisfactory solution.
Normally, I have 2 servers with the role [glance], but I noticed that all my images were on the first server (in /var/lib/docker/volumes/glance/_data/images) before the reconfigure, none on the second. But since the reconfiguration, the images are placed on the second, and no longer on the first. I do not understand why. I haven't changed anything in the multinode file.
So, to get out of this situation quickly, as I need this OpenStack for the students, I modified the multinode file and put only one server in [glance] (I put server 1, the one that had the images before the reconfigure), did a reconfigure -t glance, and now I have my images usable for instances.
I don't understand what happened. There is something wrong.
Is it normal that after updating the certificates, all instances are turned off? Thanks again.
Franck
On 17 Nov 2021, at 21:11, Cyril Roelandt <cyril@redhat.com> wrote:
Hello,
On 2021-11-17 08:59, Franck VEDEL wrote:
Hello everyone
I have a strange problem and I haven't found the solution yet. Following a certificate update I had to do a "kolla-ansible -i multinode reconfigure". Well, after several attempts (it is not easy to use certificates with kolla-ansible, and in my opinion it is not documented enough for beginners), I have my new certificates working. Perfect... well, almost.
I am trying to create a new instance to check general operation. ERROR. Okay, I look in the logs and I see that Cinder is having problems creating volumes, with an error that I had never seen before ("TypeError: 'NoneType' object is not iterable").
We'd like to see the logs as well, especially the stacktrace.
I dig further and wonder whether it is the Glance images that cannot be used, even though they are present (openstack image list is OK).
I create an empty volume: it works. I create a volume from an image: it fails.
What commands are you running? What's the output? What's in the logs?
However, I have my list of ten images in glance.
I create a new image and create a volume with this new image: it works. I create an instance with this new image: OK.
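For reference, the tests above correspond to commands roughly like these (a sketch; image and volume names, the file, and the sizes are placeholders):

openstack volume create --size 10 empty-vol                       # empty volume: works
openstack volume create --size 10 --image old-image vol-from-old  # from a pre-reconfigure image: fails
openstack image create --disk-format qcow2 --container-format bare --file new.qcow2 new-image
openstack volume create --size 10 --image new-image vol-from-new  # from the new image: works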
What is the problem? The images present before the "reconfigure" are listed, visible in Horizon for example, but unusable.
Is there a way to fix this, or do we have to reinstall them all?
What's your configuration? What version of OpenStack are you running?
Cyril