[kolla-ansible][wallaby][glance] Problem with image list after reconfigure

Franck VEDEL franck.vedel at univ-grenoble-alpes.fr
Fri Nov 19 20:56:09 UTC 2021


Hello,
Thanks a lot, you helped me understand a lot of things.

In particular, I realize I have a lot of changes to make before I have an operational OpenStack with good performance.
Since my iSCSI array is attached to S3 (I have S1, S2 and S3), I should put Glance on S3, with a mount in S3's filesystem, and enable the image cache.
My images are in qcow2, so I am not sure whether I should convert them.
Finally, and I don't know if this is the best solution: to build images that work well, I go through VirtualBox, then convert from VDI to RAW (and then from RAW to QCOW2, but that was a big mistake if I understood correctly). For example, I have trouble with an OPNsense image when I create the instance from the ISO via Horizon; if I instead build it in VirtualBox on another computer and then copy the files over, the image is fine. Weird…
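
For reference, the conversion step looks something like this (file and image names are just placeholders):

qemu-img convert -f vdi -O raw myvm.vdi myvm.raw   # VirtualBox VDI -> RAW
qemu-img info myvm.raw                             # check the resulting format and size
openstack image create --disk-format raw --container-format bare --file myvm.raw myimage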

Ah, I forgot: I also didn't realize that the order of the hosts mattered within an inventory group (for example [control]). Really not easy to handle all of this.


Anyway, thank you for your help. I will go through the docs again and try to make these changes next week.

Franck 

> On 19 Nov 2021, at 14:50, Ignazio Cassano <ignaziocassano at gmail.com> wrote:
> 
> Franck, this should help you a lot.
> Thanks Radoslaw
> Ignazio
> 
> On Fri, 19 Nov 2021 at 12:03, Radosław Piliszek <radoslaw.piliszek at gmail.com> wrote:
> If one sets glance_file_datadir_volume to non-default, then glance-api
> gets deployed on all hosts.
> 
> -yoctozepto
> 
> On Fri, 19 Nov 2021 at 10:51, Ignazio Cassano <ignaziocassano at gmail.com> wrote:
> >
> > Hello Franck, glance is not deployed on all nodes by default.
> > I got the same problem.
> > In my case I have 3 controllers.
> > I created an NFS share on a storage server to store the images.
> > Before deploying glance, I created /var/lib/glance/images on the 3 controllers and mounted the NFS share.
> > This is my fstab on the 3 controllers:
> >
> > 10.102.189.182:/netappopenstacktst2_glance /var/lib/glance nfs  rw,user=glance,soft,intr,noatime,nodiratime
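> >
> > On each controller that works out to something like (a minimal sketch, same paths as above):
> >
> > mkdir -p /var/lib/glance            # mount point
> > mount /var/lib/glance               # picks up the fstab entry above
> > mkdir -p /var/lib/glance/images     # only needed once, it lives on the share
> > df -h /var/lib/glance               # verify the NFS share is mounted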
> >
> > In my globals.yml I have:
> > glance_file_datadir_volume: "/var/lib/glance"
> > glance_backend_file: "yes"
> >
> > This means the images are under /var/lib/glance, and since it is an NFS share all 3 of my controllers can share them.
> > Then you must deploy.
> > To be sure the glance container is started on all controllers (I have 3), I deployed 3 times, changing the order in the inventory each time.
> > First time:
> > [control]
> > A
> > B
> > C
> >
> > Second time:
> > [control]
> > B
> > C
> > A
> >
> > Third time:
> > [control]
> > C
> > B
> > A
> >
> > Or you can deploy glance 3 times using -t glance and -l <controllername>
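> >
> > With the multinode inventory from this thread, each of those runs would look something like:
> >
> > kolla-ansible -i multinode deploy -t glance -l <controllername>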
> >
> > As for the instances being stopped, I hit a bug with a version of kolla.
> > https://bugs.launchpad.net/kolla-ansible/+bug/1941706
> > It is now fixed, and with kolla 12.2.0 it works.
> > Ignazio
> >
> >
> > On Wed, 17 Nov 2021 at 23:17, Franck VEDEL <franck.vedel at univ-grenoble-alpes.fr> wrote:
> >>
> >> Hello and thank you for the help.
> >> I was able to move forward on my problem, without finding a satisfactory solution.
> >> Normally I have 2 servers with the [glance] role, but I noticed that before the reconfigure all my images were on the first server (in /var/lib/docker/volumes/glance/_data/images) and none were on the second. Since the reconfigure, the images are placed on the second server and no longer on the first. I do not understand why; I haven't changed anything in the multinode file.
> >> So, to get out of this situation quickly (I need this OpenStack for the students), I modified the multinode file and put only one server in [glance] (server 1, the one that had the images before the reconfigure), ran a reconfigure -t glance, and now my images are usable for instances.
> >> I don't understand what happened. There is something wrong.
> >>
> >> Is it normal that after updating the certificates, all instances are turned off?
> >> thanks again
> >>
> >> Franck
> >>
> >> On 17 Nov 2021, at 21:11, Cyril Roelandt <cyril at redhat.com> wrote:
> >>
> >> Hello,
> >>
> >>
> >> On 2021-11-17 08:59, Franck VEDEL wrote:
> >>
> >> Hello everyone
> >>
> >> I have a strange problem and I haven't found the solution yet.
> >> Following a certificate update I had to do a "kolla-ansible -i multinode reconfigure".
> >> Well, after several attempts (it is not easy to use certificates with kolla-ansible and, in my opinion, not documented enough for beginners), I have my new certificates working. Perfect... well, almost.
> >>
> >> I am trying to create a new instance to check general operation. ERROR.
> >> Okay, I looked in the logs and I see that Cinder is having problems creating volumes, with an error I had never seen before ("TypeError: 'NoneType' object is not iterable").
> >>
> >>
> >> We'd like to see the logs as well, especially the stacktrace.
> >>
> >> I dug further and started to wonder whether it is the Glance images that cannot be used, even though they are present (openstack image list is OK).
> >>
> >> I create an empty volume: it works.
> >> I create a volume from an image: it fails.
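> >>
> >> In CLI terms, that is something like (size and names are just placeholders):
> >>
> >> openstack volume create --size 10 empty-test                       # works
> >> openstack volume create --size 10 --image <image-name> image-test  # fails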
> >>
> >>
> >> What commands are you running? What's the output? What's in the logs?
> >>
> >>
> >> However, I have my list of ten images in glance.
> >>
> >> I create a new image and create a volume with this new image: it works.
> >> I create an instance with this new image: OK.
> >>
> >> What is the problem? The images that were present before the "reconfigure" are still listed and visible in Horizon, for example, but they are unusable.
> >> Is there a way to fix this, or do I have to re-upload them all?
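> >>
> >> For example, is it worth comparing the image IDs with what is actually stored on each glance host? Something like (assuming the default kolla volume path):
> >>
> >> openstack image list -f value -c ID
> >> ls /var/lib/docker/volumes/glance/_data/images/   # on each [glance] host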
> >>
> >>
> >> What's your configuration? What version of OpenStack are you running?
> >>
> >>
> >>
> >> Cyril
> >>
> >>
