[glance] Does glance not support using local filesystem storage in a cluster
Thomas Goirand
zigo at debian.org
Thu Jan 6 12:44:13 UTC 2022
On 1/6/22 03:33, 韩光宇 wrote:
> Dear all,
>
> Sorry, maybe this is a stupid question, but I'm really confused by it
> and didn't find any discussion of it in the Glance
> documentation (https://docs.openstack.org/glance/latest/).
>
> I have an OpenStack Victoria cluster with three all-in-one nodes on
> CentOS 8. I implemented it with reference to
> https://docs.openstack.org/ha-guide/, so this cluster uses Pacemaker,
> HAProxy and Galera. "To implement high availability, run an instance
> of the database on each controller node and use Galera Cluster to
> provide replication between them."
>
> I found that I encounter an error if I configure the Glance backend
> to use the local filesystem store driver to keep image files on the
> local disk. If I upload an image, the image is only stored on one
> node, but the database only stores the image's file path, such as
> "/v2/images/aa3cbee0-717f-4699-8cca-61243302d693/file", without any
> host information. The database data is the same on all three nodes.
>
> If I upload an image on node1, the image is only stored on node1,
> while the database on all three nodes stores the image's local
> filesystem path. If the create-instance task is then assigned to
> node2, it will look for the image on node2, where it cannot be
> found. So we get the "Image has no associated data" error.
>
> So I want to ask:
> 1. Does Glance not support using local filesystem storage in a cluster?
> 2. If so, why was it designed this way, instead of storing
> information about the host on which each image is located, as Nova
> does with instances?
>
> I would appreciate any kind of guidance or help.
>
> Thank you,
> Han Guangyu
>
Hi 光宇,
It is possible to set up Glance with local storage in an HA way.
The way to do this is simply to get your HAProxy to always use one
node, with the others as backups. Then have a cron job that rsyncs the
image store from the first node to the other two. A simple command like
this, run as the glance user (with the SSH host key trust sorted out;
we sign host keys, so we don't have this problem), is enough:
rsync -e ssh -avz --delete /var/lib/glance/images/ \
<dest-host>:/var/lib/glance/images/ >/dev/null 2>&1
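
On the HAProxy side, the relevant bit is the "backup" keyword: the
backup servers only receive traffic when the primary is down. A minimal
sketch (the listener name, addresses and check timings below are just
examples, adjust them to your own setup):

listen glance_api_cluster
    bind <vip>:9292
    option httpchk
    server node1 10.0.0.11:9292 check inter 2000 rise 2 fall 5
    server node2 10.0.0.12:9292 check inter 2000 rise 2 fall 5 backup
    server node3 10.0.0.13:9292 check inter 2000 rise 2 fall 5 backup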
We have some internal logic to iterate through all the backup nodes and
replace dest-host accordingly...
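If you don't have such logic yet, a trivial wrapper run from cron on
the primary node is enough; a sketch, where node2/node3 are example
hostnames and the script path is up to you:

#!/bin/sh
# Push the Glance image store from the primary to each backup node.
# Run as the glance user on the primary node.
for dest in node2.example.com node3.example.com; do
    rsync -e ssh -avz --delete /var/lib/glance/images/ \
        "${dest}:/var/lib/glance/images/" >/dev/null 2>&1
done

and then, for instance in /etc/cron.d/glance-image-sync:

*/5 * * * * glance /usr/local/sbin/glance-image-sync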
This way, if the first node fails, yes, you do have a problem, because
the primary node is no longer up: saving new Glance images becomes an
issue, as they won't be replicated to the other nodes. But existing
images will already be there, so it's OK until you repair the first node.
I hope this helps,
Cheers,
Thomas Goirand (zigo)