[kolla-ansible][wallaby][glance] Problem with image list after reconfigure

Franck VEDEL franck.vedel at univ-grenoble-alpes.fr
Fri Nov 19 07:41:21 UTC 2021


Hello Ignazio, and thank you for all this information.
I also think that a structure with 3 servers may not have been built properly when, once again, you arrive on such a project without help (human help, I mean; documents and documentation do exist, but they pull in many different directions: choose the right OS, avoid bugs (vpnaas in my case), run tests, and so on). You have to make choices in order to move forward. I agree that I probably didn't do things the best way, and I regret it.
Thank you for this help on how the images work. Yes, in my case the images can only be used after the "download" phase because they are in qcow2 format. I will change this; I had not understood it.
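If I understand correctly, something like this should convert an existing image to raw and upload it again (file and image names are just examples):

  qemu-img convert -f qcow2 -O raw ubuntu.qcow2 ubuntu.raw
  openstack image create --disk-format raw --container-format bare --file ubuntu.raw ubuntu-raw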
It is clear that if a professional came to look at my OpenStack, they would tell me what is wrong and what I need to change, but hey, in the end, it still works, more or less.

Thanks Ignazio, really.

Franck 

> On 18 Nov 2021, at 20:34, Ignazio Cassano <ignaziocassano at gmail.com> wrote:
> 
> I did not understand very well how your infrastructure is set up. 
> Generally speaking, I prefer to have 3 controllers, n compute nodes and external storage.
> I think that with iSCSI, images must be downloaded and converted from qcow2 to raw format, which can take a long time. In this case I used the image cache. Probably when you create a volume from an image you can see a download phase. If you use the image cache, the download is executed only the first time a volume is created from that image.
> Sorry for my bad English.
> Take a look at
> https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html
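> For example, the cache is enabled per backend in cinder.conf, something like this (the backend section name and the limits are illustrative, and the internal tenant IDs are placeholders for real project/user IDs):
> 
>   [DEFAULT]
>   # internal project/user that will own the cached volumes (placeholders)
>   cinder_internal_tenant_project_id = <project_id>
>   cinder_internal_tenant_user_id = <user_id>
> 
>   [lvmdriver-1]
>   image_volume_cache_enabled = True
>   image_volume_cache_max_size_gb = 200
>   image_volume_cache_max_count = 50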
> Ignazio
> 
> 
> On Thu 18 Nov 2021 at 19:58, Franck VEDEL <franck.vedel at univ-grenoble-alpes.fr> wrote:
> OK... I got it... and I think I was doing things wrong.
> Okay, so I have another question.
> My Cinder storage is on an iSCSI bay.
> I have 3 servers: S1, S2, S3.
> Compute is on S1, S2, S3.
> Controller is on S1 and S2.
> Storage is on S3.
> I have Glance on S1. Building an instance directly from an image takes too long, so you have to make a volume first.
> If I put the images on the iSCSI bay and mount that directory into the S1 file system, will building from the images be faster? Much faster?
> Is this a good idea or not?
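> To be clear, the workflow I use today is roughly this (names and sizes are just examples):
> 
>   openstack volume create --image ubuntu-20.04 --size 20 vol-ubuntu
>   openstack server create --volume vol-ubuntu --flavor m1.small --network net1 vm1
> 
> and it is the first step, the volume creation from the image, that is slow.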
> 
> Thank you again for your help and your experience
> 
> 
> Franck 
> 
>> On 18 Nov 2021, at 07:23, Ignazio Cassano <ignaziocassano at gmail.com> wrote:
>> 
>> Hello, I solved it using the following variables in globals.yml:
>> glance_file_datadir_volume: somedir
>> and glance_backend_file: "yes"
>> 
>> So if somedir is an NFS mount point, the controllers can share images. Remember you have to deploy Glance on all controllers.
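>> For example (server name and paths are illustrative; the directory must be the same NFS mount on every controller):
>> 
>>   # /etc/fstab on each controller
>>   nfsserver:/export/glance  /mnt/glance_nfs  nfs  defaults  0 0
>> 
>>   # globals.yml
>>   glance_backend_file: "yes"
>>   glance_file_datadir_volume: "/mnt/glance_nfs"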
>> Ignazio
>> 
>> On Wed 17 Nov 2021 at 23:17, Franck VEDEL <franck.vedel at univ-grenoble-alpes.fr> wrote:
>> Hello and thank you for the help.
>> I was able to move forward on my problem, without finding a satisfactory solution.
>> Normally, I have 2 servers with the role [glance], but I noticed that before the reconfigure all my images were on the first server (in /var/lib/docker/volumes/glance/_data/images) and none on the second. Since the reconfigure, the images are placed on the second server and no longer on the first. I do not understand why; I haven't changed anything in the multinode file.
>> So, to get out of this situation quickly, as I need this OpenStack for the students, I modified the multinode file and put only one server in [glance] (server 1, the one that had the images before the reconfigure), ran a "kolla-ansible -i multinode reconfigure -t glance", and now my images are usable for instances.
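>> For reference, the relevant part of my multinode inventory now looks like this (hostname illustrative):
>> 
>>   [glance]
>>   server1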
>> I don't understand what happened. There is something wrong.
>> 
>> Is it normal that after updating the certificates, all instances are turned off?
>> thanks again
>> 
>> Franck
>> 
>>> On 17 Nov 2021, at 21:11, Cyril Roelandt <cyril at redhat.com> wrote:
>>> 
>>> Hello,
>>> 
>>> 
>>> On 2021-11-17 08:59, Franck VEDEL wrote:
>>>> Hello everyone 
>>>> 
>>>> I have a strange problem and I haven't found the solution yet. 
>>>> Following a certificate update I had to run a "kolla-ansible -i multinode reconfigure". 
>>>> Well, after several attempts (it is not easy to use certificates with kolla-ansible, and in my opinion it is not documented enough for beginners), I have my new certificates working. Perfect... well, almost.
>>>> 
>>>> I am trying to create a new instance to check general operation. ERROR. 
>>>> Okay, I look in the logs and I see that Cinder is having problems creating volumes, with an error that I never had before ("TypeError: 'NoneType' object is not iterable").
>>> 
>>> We'd like to see the logs as well, especially the stacktrace.
>>> 
>>>> I dug deeper and wondered whether the Glance images had become unusable, even though they are present ("openstack image list" is OK). 
>>>> 
>>>> I create an empty volume: it works.
>>>> I create a volume from an image: it fails. 
>>> 
>>> What commands are you running? What's the output? What's in the logs?
>>> 
>>>> 
>>>> However, I have my list of ten images in glance. 
>>>> 
>>>> I create a new image and create a volume with this new image: it works. 
>>>> I create an instance with this new image: OK. 
>>>> 
>>>> What is the problem? The images present before the "reconfigure" are listed and visible in Horizon, for example, but unusable. 
>>>> Is there a way to fix this, or do we have to reinstall them all? 
>>> 
>>> What's your configuration? What version of OpenStack are you running?
>>> 
>>> 
>>> 
>>> Cyril
>>> 
>> 
> 
