[glance] Does glance not support using local filesystem storage in a cluster

Felix Hüttner felix.huettner at mail.schwarz
Mon Jan 10 08:11:54 UTC 2022


Hi everyone,

We are actually using Glance with filesystem storage in our clusters.
To do this we use an NFS share / k8s PVC as the backing storage for the images on all nodes.

However, we only went this route since we are using NFS for our Cinder storage as well,
so it probably only makes sense if you have a shared filesystem available in your setup anyway.
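
In case it helps, a minimal sketch of that setup (the server name, export path and datadir are placeholders, adjust them to your environment):

    # Mount the same NFS export on every controller node, e.g. via /etc/fstab:
    # nfs-server:/export/glance  /var/lib/glance/images  nfs  defaults,_netdev  0 0

    # glance-api.conf on every node then keeps the plain filesystem driver,
    # which now points at the shared mount:
    [glance_store]
    stores = file,http
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images

Since every glance-api process sees the same directory, it no longer matters which node serves a request.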

Best Regards
Felix

-----Original Message-----
From: 韩光宇 <hanguangyu2 at gmail.com>
Sent: Thursday, January 6, 2022 10:42 AM
To: openstack-discuss <openstack-discuss at lists.openstack.org>
Cc: wangleic at uniontech.com; hanguangyu at uniontech.com
Subject: Re: [glance] Does glance not support using local filesystem storage in a cluster

Hi,

Yes, you are right. I have deployed a Ceph cluster as the backend storage for Glance, Cinder and Nova, and it resolves the issue.
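
For reference, a Ceph-backed Glance configuration looks roughly like this (the pool and user names follow the common Ceph-for-OpenStack examples and may differ in other deployments):

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf

With this, every Glance node talks to the same Ceph cluster, so an image uploaded through one node is visible to all of them.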

But I wonder why it was designed this way; it doesn't fit my understanding of OpenStack.

As currently designed, Glance's local storage cannot be used in a cluster. Why not record the host on which the image resides? Then, just as with the local storage of a nova-compute node, if a Glance node broke down, only the images on that host would become inaccessible.

Sorry if this idea is unreasonable. Could anyone tell me the reason for the design, or what the problem with that approach would be?

Best wishes to you.

Thank you,
Han Guangyu

Eugen Block <eblock at nde.ag> wrote on Thu, Jan 6, 2022 at 17:03:
>
> Hi,
>
> if you really aim towards a highly available cluster you'll also need
> an HA storage solution like Ceph. Having Glance images or VMs on local
> storage can make it easier to deploy, maybe for testing and getting
> involved with OpenStack, but it's not really recommended for
> production use. You'll probably have the same issue with Cinder
> volumes, I believe. Or do you have a different backend for Cinder?
>
> Regards,
> Eugen
>
>
> Quoting 韩光宇 <hanguangyu2 at gmail.com>:
>
> > Dear all,
> >
> > Sorry if this is a stupid question, but I'm really confused by this
> > and didn't find any discussion of it in the Glance
> > documentation (https://docs.openstack.org/glance/latest/).
> >
> >  I have an OpenStack Victoria cluster with three all-in-one nodes on
> > CentOS 8. I implemented it with reference to
> > https://docs.openstack.org/ha-guide/, so this cluster uses Pacemaker,
> > HAProxy and Galera: "To implement high availability, run an instance
> > of the database on each controller node and use Galera Cluster to
> > provide replication between them."
> >
> > I found that I encounter an error if I configure the Glance backend
> > to use the local filesystem driver to store image files on the local
> > disk. If I upload an image, it is only stored on one node, but the
> > database only stores the file path of the image, such as
> > "/v2/images/aa3cbee0-717f-4699-8cca-61243302d693/file", without any
> > host information. The database contents are the same on all three nodes.
> >
> > If I upload an image on node1, the image is only stored on node1,
> > while the database on all three nodes stores the local filesystem
> > path of the image. If the create-instance task is then assigned to
> > node2, it looks for the image on node2, where it cannot be found, so
> > we get the "Image has no associated data" error.
> >
> > So I want to ask:
> > 1. Does Glance not support using local filesystem storage in
> > a cluster?
> > 2. If so, why was it designed this way instead of storing
> > information about the host on which an image is located, as Nova
> > does with instances?
> >
> > I would appreciate any kind of guidance or help.
> >
> > Thank you,
> > Han Guangyu
>
>
>
>
