[Openstack-operators] Small openstack (part 2), distributed glance

Abel Lopez alopgeek at gmail.com
Thu Jan 15 22:31:24 UTC 2015


That specific bottleneck can be solved by backing glance with Ceph and
running ephemeral instances on Ceph as well. Snapshots then become a quick
backend operation. But you've built your installation on a house of cards.
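
A minimal sketch of that kind of setup, for reference (the pool and user
names here are assumptions, not a recipe): glance stores images in an RBD
pool, nova keeps ephemeral disks in RBD too, and show_image_direct_url
lets nova boot from copy-on-write clones instead of downloading images.

    # glance-api.conf
    [DEFAULT]
    show_image_direct_url = True

    [glance_store]
    default_store = rbd
    stores = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf

    # nova.conf (compute nodes)
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf

With both pools in the same cluster, image data never has to leave Ceph
for a boot, which is what makes the backend-side operations cheap.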

On Thursday, January 15, 2015, George Shuklin <george.shuklin at gmail.com>
wrote:

> Hello everyone.
>
> One more thing in the light of small openstack.
>
> I really dislike the triple network load caused by current glance snapshot
> operations. When a compute node takes a snapshot, it first works with the
> files locally, then it sends them to glance-api, and (if glance-api is
> linked to swift) glance sends them on to swift. Basically, for each 100 GB
> disk there is 300 GB of network traffic. It is especially painful for
> glance-api, which needs more CPU and network bandwidth than we want to
> spend on it.
>
> So the idea: put a glance-api, without cache, on each compute node.
>
> To steer each compute node to the proper glance, the endpoint points to an
> fqdn, and on each compute node that fqdn resolves to localhost (where the
> local glance-api lives). Plus a normal glance-api on the API/controller node
> to serve dashboard/API clients.
>
> I haven't tested it yet.
>
> Any ideas on possible problems/bottlenecks? And how many glance-registry
> instances do I need for this?
>
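
For reference, George's split-horizon endpoint trick could look like this
(the FQDN and addresses below are hypothetical):

    # Keystone image endpoint registered against a neutral name,
    # e.g. http://glance.internal.example:9292

    # /etc/hosts on every compute node: the name resolves to the
    # local glance-api
    127.0.0.1    glance.internal.example

    # /etc/hosts (or DNS view) everywhere else: the name resolves
    # to the controller's glance-api
    192.0.2.10   glance.internal.example

and the per-compute glance-api would run without the caching middleware,
i.e. the plain pipeline in glance-api.conf:

    [paste_deploy]
    flavor = keystone

One caveat worth checking before relying on this: nova has historically
talked to glance via its own glance_api_servers setting rather than the
keystone catalog, so the compute nodes may need that pointed at the same
fqdn as well.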