[kolla-ansible][wallaby][glance] Problem with image list after reconfigure
Hello everyone,

I have a strange problem and I haven't found the solution yet. Following a certificate update I had to run "kolla-ansible -i multinode reconfigure". Well, after several attempts (it is not easy to use certificates with kolla-ansible and, in my opinion, it is not documented enough for beginners), I have my new certificates working. Perfect... well, almost.

I am trying to create a new instance to check general operation. ERROR. Okay, I look in the logs and I see that Cinder is having problems creating volumes, with an error I had never seen before ("TypeError: 'NoneType' object is not iterable"). I dig further and wonder whether the Glance images cannot be used, even though they are present ("openstack image list" is OK). I create an empty volume: it works. I create a volume from an image: it fails. Yet I still have my list of ten images in Glance. I create a new image and create a volume from this new image: it works. I create an instance with this new image: OK.

What is the problem? The images present before the "reconfigure" are listed, visible in Horizon for example, but unusable. Is there a way to fix this, or do I have to re-upload them all?

Thanks in advance for your help if this problem speaks to you.

Franck VEDEL
Dép. Réseaux Informatiques & Télécoms
IUT1 - Univ GRENOBLE Alpes
0476824462
Stages, Alternance, Emploi.
http://www.rtgrenoble.fr
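For reference, a sketch of the CLI steps that reproduce the symptom described above (image names, sizes and file names are illustrative, not taken from the original report):

    $ openstack image list                                              # the pre-existing images are still listed
    $ openstack volume create --size 10 empty-vol                       # empty volume: works
    $ openstack volume create --size 10 --image old-image vol-from-old  # volume from a pre-"reconfigure" image: fails
    $ openstack image create --disk-format qcow2 --container-format bare --file new.qcow2 new-image
    $ openstack volume create --size 10 --image new-image vol-from-new  # volume from the freshly uploaded image: works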
Hello, On 2021-11-17 08:59, Franck VEDEL wrote:
Hello everyone
I have a strange problem and I haven't found the solution yet. Following a certificate update I had to run "kolla-ansible -i multinode reconfigure". Well, after several attempts (it is not easy to use certificates with kolla-ansible and, in my opinion, it is not documented enough for beginners), I have my new certificates working. Perfect... well, almost.
I am trying to create a new instance to check general operation. ERROR. Okay, I look in the logs and I see that Cinder is having problems creating volumes, with an error I had never seen before ("TypeError: 'NoneType' object is not iterable").
We'd like to see the logs as well, especially the stacktrace.
I dig further and wonder whether the Glance images cannot be used, even though they are present (openstack image list is OK).
I create an empty volume: it works. I create a volume from an image: it fails.
What commands are you running? What's the output? What's in the logs?
However, I have my list of ten images in glance.
I create a new image and create a volume with this new image: it works. I create an instance with this new image: OK.
What is the problem? The images present before the "reconfigure" are listed, visible in Horizon for example, but unusable. Is there a way to fix this, or do we have to re-upload them all?
What's your configuration? What version of OpenStack are you running? Cyril
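For reference, in a kolla-ansible deployment the stack traces Cyril asks for usually end up in the kolla_logs Docker volume on each host; a sketch of where one might look (paths assume the default volume name):

    # on the host running cinder-volume / glance-api
    $ sudo tail -n 200 /var/lib/docker/volumes/kolla_logs/_data/cinder/cinder-volume.log
    $ sudo tail -n 200 /var/lib/docker/volumes/kolla_logs/_data/glance/glance-api.log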
Hello, and thank you for the help. I was able to move forward on my problem, without finding a satisfactory solution.
Normally I have 2 servers with the [glance] role, but I noticed that before the reconfigure all my images were on the first server (in /var/lib/docker/volumes/glance/_data/images) and none on the second. Since the reconfigure, images are placed on the second server and no longer on the first. I do not understand why; I haven't changed anything in the multinode file.
So, to get out of this situation quickly (I need this OpenStack for the students), I modified the multinode file to put only one server in [glance] (server 1, the one that had the images before the reconfigure), ran a reconfigure -t glance, and now my images are usable for instances again. I don't understand what happened. There is something wrong.
Is it normal that after updating the certificates all instances are turned off?
Thanks again,
Franck
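For illustration, the workaround described above amounts to something like the following (the host name is a placeholder for the actual entry in the multinode file):

    # multinode inventory: keep only the node that already holds the images
    [glance]
    server1

    # then reconfigure only the glance service
    $ kolla-ansible -i multinode reconfigure -t glance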
Hello, I solved this by setting the following variables in globals.yml: glance_file_datadir_volume: "somedir" and glance_backend_file: "yes". If somedir is an NFS mount point, the controllers can share images. Remember that you have to deploy Glance on all controllers.
Ignazio
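A minimal sketch of that setup, assuming an NFS export shared by all controllers (server address, export name and mount point are placeholders; Ignazio's actual fstab entry appears later in the thread):

    # /etc/kolla/globals.yml
    glance_backend_file: "yes"
    glance_file_datadir_volume: "/var/lib/glance"

    # on each controller, mount the shared export at that path before deploying, e.g.:
    $ sudo mkdir -p /var/lib/glance
    $ sudo mount -t nfs nfs-server:/glance_export /var/lib/glance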
OK... I got it... and I think I was doing things wrong.
So I have another question. My Cinder storage is on an iSCSI bay. I have 3 servers: S1, S2, S3. Compute is on S1, S2, S3. Controller is on S1 and S2. Storage is on S3. I have Glance on S1. Building an instance directly from an image takes too long, so you have to make a volume first. If I put the images on the iSCSI bay and mount a directory into S1's file system, will the images build faster? Much faster? Is this a good idea or not?
Thank you again for your help and your experience.
Franck
I did not understand very well how your infrastructure is laid out. Generally speaking, I prefer to have 3 controllers, n compute nodes and external storage. With iSCSI, I think the images must be downloaded and converted from qcow2 to raw format, and that can take a long time. In this case I use the image cache: when you create a volume from an image you will probably see a download phase, and with the image cache that download is executed only the first time a volume is created from that image. Sorry for my bad English. Take a look at https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html
Ignazio
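A rough sketch of how the image-volume cache described on that page could be enabled under kolla-ansible (the backend section name, tenant IDs and size limits are placeholders; kolla-ansible merges overrides placed in /etc/kolla/config/cinder.conf):

    # /etc/kolla/config/cinder.conf
    [DEFAULT]
    cinder_internal_tenant_project_id = <project id to own cached volumes>
    cinder_internal_tenant_user_id = <user id to own cached volumes>

    [my_iscsi_backend]
    image_volume_cache_enabled = True
    image_volume_cache_max_size_gb = 200
    image_volume_cache_max_count = 20

    # then apply the change
    $ kolla-ansible -i multinode reconfigure -t cinder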
Hello Ignazio, and thank you for all this information. I also think that a structure with 3 servers may not have been built properly; once again, you arrive on such a project without help (human help, that is, because you can find documents and documentation, but they have to be taken in the right order, there are many different directions, you have to choose the right OS, avoid running into a bug (VPNaaS for me), do tests, etc.). You have to make choices in order to move forward. I agree that I probably didn't do things the best way, and I regret it.
Thank you for this help on how the images work. Yes, in my case the images go through the "download" phase because they are in qcow2; I will change this, I had not understood it. It is clear that if a professional came to see my OpenStack, they would tell me what is wrong and what I need to change, but hey, in the end it still works a bit.
Thanks Ignazio, really.
Franck
Hello Franck,
Glance is not deployed on all nodes by default; I hit the same problem. In my case I have 3 controllers. I created an NFS share on a storage server to hold the images. Before deploying Glance, I create /var/lib/glance/images on the 3 controllers and I mount the NFS share. This is my fstab entry on the 3 controllers:

10.102.189.182:/netappopenstacktst2_glance /var/lib/glance nfs rw,user=glance,soft,intr,noatime,nodiratime

In my globals.yml I have:

glance_file_datadir_volume: "/var/lib/glance"
glance_backend_file: "yes"

This means the images are under /var/lib/glance, and since it is an NFS share all 3 controllers can share them. Then you must deploy. To be sure the glance container is started on all controllers, since I have 3 controllers, I deployed 3 times, changing the order in the inventory each time:

First time:
[control]
A
B
C

Second time:
[control]
B
C
A

Third time:
[control]
C
B
A

Or you can deploy Glance 3 times using -t glance and -l <controllername>.
As for the instances being stopped, I hit a bug with a version of kolla: https://bugs.launchpad.net/kolla-ansible/+bug/1941706 It is now fixed, and with kolla 12.2.0 it works.
Ignazio
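For illustration, that last alternative might look like the following (controller names are placeholders for the hosts in the [control] group):

    $ kolla-ansible -i multinode deploy -t glance -l controller1
    $ kolla-ansible -i multinode deploy -t glance -l controller2
    $ kolla-ansible -i multinode deploy -t glance -l controller3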
If one sets glance_file_datadir_volume to a non-default value, then glance-api gets deployed on all hosts.
-yoctozepto
Franck, this will help you a lot. Thanks, Radoslaw.
Ignazio
Hello, thanks a lot, you have helped me understand a lot of things, in particular that I have a lot of changes to make to get an operational OpenStack with good performance.
If my iSCSI bay is attached to S3 (I have S1, S2 and S3), I have to put Glance on S3 with a mount in S3's file system, and enable the cache. My images are in qcow2, so I am not sure yet whether I will convert them. Finally, and I don't know if this is the best solution, to make images that work well I go through VirtualBox, then convert from VDI to RAW (and then from RAW to qcow2, but that was a big mistake if I understood correctly). For example, I am having trouble with an OPNsense image if I create the instance from the ISO in Horizon; if I go through VirtualBox on another computer and then copy the files over, the image is OK. Weird...
Ah, I forgot: I hadn't realized that the order of hosts was important inside an inventory group. Really not easy to handle all of this.
Anyway, thank you for your help. I will check the docs again and try to change this next week.
Franck
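As an aside, a common way to do those conversions without the extra RAW-to-qcow2 step is qemu-img; a sketch (file names are placeholders):

    # inspect the current format
    $ qemu-img info opnsense.vdi
    # convert the VirtualBox VDI directly to raw or to qcow2
    $ qemu-img convert -f vdi -O raw opnsense.vdi opnsense.raw
    $ qemu-img convert -f vdi -O qcow2 opnsense.vdi opnsense.qcow2
    # upload the result to Glance
    $ openstack image create --disk-format qcow2 --container-format bare --file opnsense.qcow2 opnsense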
Ignazio, Radosław, thanks to you I made some modifications and my environment seems to work better (the images are now placed on the iSCSI bay on which the volumes are stored). I set up the image cache. It works, well, I think it does.
My question is: between the different formats (qcow2, raw or other), which is the most efficient if
- we create a volume and then an instance from the volume
- we create an instance from the image
- we create an instance without a volume
- we create a snapshot and then an instance from the snapshot
Franck
Franck, if the cache works fine, I think the Glance image format can stay qcow2. The volume is created in raw format, but the download is executed only the first time you create a volume from a new image. With this setup I can create 20-30 instances in one shot and it takes a few minutes to complete. I always use small general-purpose images and complete the instance configuration (package installation and so on) with Heat or Ansible.
Ignazio
Thanks again. I'll do some speed tests. In my case I need ready-to-use images (Debian, CentOS, Ubuntu, pfSense, Kali, Windows 10, Windows 2016), sometimes big ones. This is why I am trying to find out what the best solution is when using an iSCSI bay. Ah... if only I could change that and use local disks and Ceph...
Franck
participants (4)
- Cyril Roelandt
- Franck VEDEL
- Ignazio Cassano
- Radosław Piliszek