That specific bottleneck can be solved by running glance on ceph, and running ephemeral instances on ceph as well: snapshots become a quick backend operation then (a rough config sketch is at the bottom of this mail). But then you've built your installation on a house of cards. <br><br>On Thursday, January 15, 2015, George Shuklin <<a href="mailto:george.shuklin@gmail.com">george.shuklin@gmail.com</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello everyone.<br>
<br>
One more thing in the light of small openstack.<br>
<br>
I really dislike the triple network load caused by the current glance snapshot operations. When a compute node takes a snapshot, it first works with the files locally, then it sends them to glance-api, and (if the glance API is backed by swift) glance sends them on to swift. Basically, for each 100 GB disk there is 300 GB of data movement: write locally, upload to glance-api, glance-api uploads to swift. It is especially painful for glance-api, which needs more CPU and network bandwidth than we want to spend on it.<br>
<br>
So the idea: put a glance-api on each compute node, without cache.<br>
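Something like this on every compute node (a rough sketch, untested; the swift credentials and names are just placeholders):<br>
<pre>
# /etc/glance/glance-api.conf on a compute node
[paste_deploy]
# plain "keystone" flavor = auth only, no image cache middleware
# ("keystone+caching"/"keystone+cachemanagement" would enable the cache)
flavor = keystone

[glance_store]
# images go straight to swift, nothing is kept locally
default_store = swift
stores = swift
swift_store_auth_address = http://keystone.example.com:5000/v2.0/
swift_store_user = services:glance
swift_store_key = GLANCE_SWIFT_PASSWORD
swift_store_create_container_on_put = True
</pre>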
<br>
To steer each compute node to the proper glance, the endpoint points to an FQDN, and on each compute node that FQDN resolves to localhost (where the local glance-api lives). Plus a normal glance-api on the API/controller node to serve dashboard/API clients.<br>
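Roughly like this (untested; the FQDN is just a placeholder):<br>
<pre>
# the glance endpoint in the keystone catalog uses a shared name, e.g.
#   http://glance.internal.example.com:9292

# /etc/hosts on each compute node: that name resolves to the local glance-api
127.0.0.1   glance.internal.example.com

# /etc/nova/nova.conf on each compute node: nova uses the same name
# (on older releases this option is glance_api_servers under [DEFAULT])
[glance]
api_servers = http://glance.internal.example.com:9292

# on the API/controller node the same name resolves (via DNS or /etc/hosts)
# to the controller's own glance-api, which serves dashboard/API clients
</pre>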
<br>
I haven't tested it yet.<br>
<br>
Any ideas on possible problems/bottlenecks? And how many glance-registry instances do I need for this?<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
<a>OpenStack-operators@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
</blockquote>
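<br>(By "glance on ceph and ephemeral instances on ceph" I mean roughly the following; just a sketch, the pool and user names are placeholders:)<br>
<pre>
# /etc/glance/glance-api.conf
[DEFAULT]
# expose the direct rbd location so clients can clone/COW from it
show_image_direct_url = True

[glance_store]
default_store = rbd
stores = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images

# /etc/nova/nova.conf on the compute nodes
[libvirt]
# ephemeral disks live in rbd instead of local files
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = LIBVIRT_SECRET_UUID
</pre>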