[Openstack-operators] Small openstack (part 2), distributed glance

Jay Pipes jaypipes at gmail.com
Mon Jan 19 03:05:27 UTC 2015

On 01/15/2015 05:20 PM, George Shuklin wrote:
> Hello everyone.
> One more thing in the light of small openstack.
> I really dislike the triple network load caused by the current glance
> snapshot operations. When a compute node takes a snapshot, it works
> with the files locally, then it sends them to glance-api, and (if the
> glance API is backed by swift) glance sends them on to swift.
> Basically, each 100 GB disk causes 300 GB of network traffic. It is
> especially painful for glance-api, which needs more CPU and network
> bandwidth than we want to spend on it.
> So the idea: put a glance-api without a cache on each compute node.
> To steer each compute node to the proper glance, the endpoint points
> to an FQDN, and on each compute node that FQDN resolves to localhost
> (where the local glance-api lives). Plus a normal glance-api on the
> API/controller node to serve dashboard/API clients.
> I haven't tested it yet.
> Any ideas on possible problems/bottlenecks? And how many
> glance-registry instances do I need for this?
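A minimal sketch of the per-node DNS override described above, assuming the image endpoint is registered in Keystone under an FQDN such as glance.internal (the FQDN, port, and region name here are hypothetical, not from the original post):

```shell
# On each compute node: resolve the Glance FQDN to the local glance-api.
# (glance.internal is a placeholder; substitute whatever FQDN the
# Keystone image endpoint actually uses.)
echo "127.0.0.1 glance.internal" >> /etc/hosts

# The Keystone endpoint itself stays FQDN-based, e.g. something like:
#   openstack endpoint create --region RegionOne image internal \
#       http://glance.internal:9292
# On the API/controller node, glance.internal resolves to the normal
# glance-api, which serves dashboard/API clients.
```

The effect is that nova-compute talks to a co-located glance-api over loopback, while everything else still reaches the central one through ordinary DNS.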

Honestly, the Glance project just needs to go away, IMO.

The glance_store library should be the focus of all new image and volume 
bit-moving functionality, and the glance_store library should replace 
all current code in Nova and Cinder that does any copying through the 
Glance API nodes.

The Glance REST API should just be an artifact repository. The 
glance_store library (which should be renamed to oslo.bitmover or 
something) should call the Glance REST API for the URIs of image 
locations (sources and targets) and handle all the bit-moving 
operations in as efficient a manner as possible.
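A rough sketch of the flow described above, where the API serves only location URIs and the bytes move directly between stores. Every function name here (get_image_locations, copy_bits, snapshot_upload) is a hypothetical stand-in for what a renamed library might expose, not the real glance_store API:

```python
# Sketch of direct store-to-store copying: the Glance REST API is
# consulted only for location URIs; the image bytes never pass through
# a glance-api node. All names below are illustrative assumptions.

def get_image_locations(image_id):
    """Stand-in for a Glance REST call returning the image's location URIs."""
    return ["swift://container/images/%s" % image_id]

def copy_bits(source_uri, target_uri):
    """Stand-in for the library's store-to-store transfer primitive."""
    # A real implementation would stream chunks between backend drivers.
    return {"source": source_uri, "target": target_uri, "copied": True}

def snapshot_upload(image_id, local_path):
    """Compute node pushes a snapshot straight to the backend store(s)."""
    targets = get_image_locations(image_id)
    return [copy_bits("file://%s" % local_path, t) for t in targets]

results = snapshot_upload("abc123", "/var/lib/nova/snap.qcow2")
```

Under this split, a 100 GB snapshot crosses the network once (compute to backend) instead of three times, which is exactly the saving the original poster is after.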

