I did not quite understand how your infrastructure is laid out.
Generally speaking, I prefer to have 3 controllers, n compute nodes and external storage.
I think that with iSCSI, images must be downloaded and converted from qcow2 to raw format, which can take a long time. In this case I used the image cache. When you create a volume from an image you will probably see a download phase; with the image cache enabled, the download happens only the first time a volume is created from that image.
Sorry for my bad English.
Take a look at
https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html
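
Roughly, enabling the cache means setting a few cinder.conf options from that page. A minimal sketch under kolla-ansible (the backend section name and the internal-tenant IDs below are placeholders), added to /etc/kolla/config/cinder.conf:

    [DEFAULT]
    cinder_internal_tenant_project_id = PROJECT_ID
    cinder_internal_tenant_user_id = USER_ID

    [my-iscsi-backend]
    image_volume_cache_enabled = True
    image_volume_cache_max_size_gb = 200
    image_volume_cache_max_count = 50

and applied with:

    kolla-ansible -i multinode reconfigure -t cinder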
Ignazio


On Thu, 18 Nov 2021 at 19:58, Franck VEDEL <franck.vedel@univ-grenoble-alpes.fr> wrote:
OK... I got it, and I think I was doing things wrong.
Okay, so I have another question.
My Cinder storage is on an iSCSI bay.
I have 3 servers: S1, S2, S3.
Compute is on S1, S2, S3.
Controller is on S1 and S2.
Storage is on S3.
I have Glance on S1. Building an instance directly from an image takes too long, so I make a volume first.
If I put the images on the iSCSI bay and mount a directory from it into S1's file system, will the images build faster? Much faster?
Is this a good idea or not?
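
For context, the volume-first flow described above looks roughly like this (image, flavor and network names are examples):

    # Create a bootable volume from a Glance image; the slow
    # "downloading" phase happens here.
    openstack volume create --image ubuntu-20.04 --size 20 vol-ubuntu

    # Wait until the volume is "available", then boot from it.
    openstack volume show vol-ubuntu -c status
    openstack server create --volume vol-ubuntu --flavor m1.small \
      --network net1 srv-ubuntu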

Thank you again for your help and your experience


Franck 

On 18 Nov 2021, at 07:23, Ignazio Cassano <ignaziocassano@gmail.com> wrote:

Hello, I solved it using the following variables in globals.yml:
glance_file_datadir_volume: "somedir"
glance_backend_file: "yes"

So if somedir is an NFS mount point, the controllers can share images. Remember that you have to deploy Glance on all controllers.
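
A minimal sketch of that setup, assuming an NFS export at nas:/export/glance (the server name and paths are placeholders):

    # On every controller that runs Glance: mount the shared directory.
    mkdir -p /var/lib/glance-images
    mount -t nfs nas:/export/glance /var/lib/glance-images
    echo 'nas:/export/glance /var/lib/glance-images nfs defaults 0 0' >> /etc/fstab

with, in /etc/kolla/globals.yml:

    glance_backend_file: "yes"
    glance_file_datadir_volume: "/var/lib/glance-images"

and a final redeploy of Glance:

    kolla-ansible -i multinode reconfigure -t glance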
Ignazio

On Wed, 17 Nov 2021 at 23:17, Franck VEDEL <franck.vedel@univ-grenoble-alpes.fr> wrote:
Hello and thank you for the help.
I was able to move forward on my problem without finding a satisfactory solution.
Normally I have two servers with the [glance] role, but I noticed that before the reconfigure all my images were on the first server (in /var/lib/docker/volumes/glance/_data/images) and none on the second. Since the reconfigure, the images are placed on the second server and no longer on the first. I do not understand why; I haven't changed anything in the multinode file.
So, to get out of this situation quickly (I need this OpenStack for the students), I modified the multinode file and put only one server in [glance] (server 1, the one that had the images before the reconfigure), ran a reconfigure -t glance, and now my images are usable for instances.
I don't understand what happened. There is something wrong.
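
One way to check which host actually holds the image files ("glance" is kolla's default volume name, and the file store names each file after its image UUID):

    # Run on each server with a glance_api container.
    docker volume inspect glance --format '{{ .Mountpoint }}'
    ls -l /var/lib/docker/volumes/glance/_data/images/

    # Compare the file names with the image IDs Glance reports.
    openstack image list -f value -c ID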

Is it normal that after updating the certificates, all instances are turned off?
thanks again

Franck

On 17 Nov 2021, at 21:11, Cyril Roelandt <cyril@redhat.com> wrote:

Hello,


On 2021-11-17 08:59, Franck VEDEL wrote:
Hello everyone

I have a strange problem and I haven't found the solution yet.
Following a certificate update, I had to do a "kolla-ansible -i multinode reconfigure".
Well, after several attempts (it is not easy to use certificates with kolla-ansible and, in my opinion, not documented enough for beginners), I have my new certificates working. Perfect... well, almost.

I am trying to create a new instance to check general operation. ERROR.
Okay, I looked in the logs and I saw that Cinder is having problems creating volumes, with an error that I never had before ("TypeError: 'NoneType' object is not iterable").

We'd like to see the logs as well, especially the stacktrace.

I dug further and wondered whether it is the Glance images that cannot be used, even though they are present (openstack image list is OK).

I create an empty volume: it works.
I create a volume from an image: it fails.

What commands are you running? What's the output? What's in the logs?
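
For anyone reproducing this under kolla-ansible defaults, the two cases and the relevant log would look roughly like this (image name and sizes are examples):

    # Reproduce both cases.
    openstack volume create --size 1 vol-empty                    # works
    openstack volume create --size 10 --image my-image vol-image  # fails

    # Kolla keeps service logs in the kolla_logs volume on the host
    # running cinder-volume.
    grep -i TypeError /var/lib/docker/volumes/kolla_logs/_data/cinder/cinder-volume.log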


However, I have my list of ten images in Glance.

I create a new image and create a volume with this new image: it works.
I create an instance with this new image: OK.

What is the problem? The images that were present before the "reconfigure" are listed and visible in Horizon, for example, but unusable.
Is there a way to fix this, or do we have to reinstall them all?

What's your configuration? What version of OpenStack are you running?



Cyril