<div dir="ltr">Thanks Mike!<div><br></div><div>Let me try and clarify the difference between the two features and why I think they both have a place in Cinder. I don't necessarily mean these two implementations as they exist today need to be there, but the features they provide do have their own uses and I think we will want them. That being said I'm definitely open to suggestions for alternative approaches that can achieve our goals.</div><div><br></div><div>As-is with the implementations the Cinder store will be able to create volumes that have image data on them. Upon subsequent volume create calls that are using the image as the source we can look for a cinder:// url for the image data. Then if the image is a raw type and the volume was created on the same Cinder backend the new volume is targeted for we can clone it.</div><div><br></div><div>Now from the other side of things, the image cache will (upon request for creating a volume from image) check the cache for the image on the Cinder backend. If it is there we clone it. If it isn't then we create the volume at its minimal size using the image virtual_size, download it, and then take a clone of this minimal volume and stick it in the cache. The original requested volume gets extended to its requested size and we are ready for subsequent calls. Note that if both are available we check for cinder:// urls first, and only start doing cache things if it cannot be used (ie raw image or wrong host).</div><div><br></div><div>Lets compare some of this. The Cinder store approach can only do fast clones if the image is raw and if the volume is on the same host. If you get a 'miss' like with the cache you do not get fast clones on subsequent requests. You are effectively stuck with only getting fast clones on that single Cinder backend. This may not matter if you are using a single backend for the whole cloud, but does matter if you have several storage arrays or different tiers of storage being offered. </div><div><br></div><div>A way around this would be to duplicate the image-volume on every backend and store their urls in glance, perhaps as needed like the cache. The issue then is that to get fast image clones you would be required to give up space on every backend for images you may or may not use ever again. Maybe this is a problem, maybe not. In the case of someone with a backend which is tier 1 flash and maybe another that is tier 2 disk they might not want to have *all* of the raw glance images on the tier 1 system, but could maybe spare a few GB or LUN's here or there for the most used images (sounding familiar?). This is where the cache starts to shine (in all its middle of the road glory). A deployer could then still have Cinder storing all of their images, but not on each backend, and get improved create-from-volume operations on all backends that scales in how effective it is by how big they allow the caches to be. That model has both features happily co-existing. Maybe this isn't a real user scenario... I don't really know enough to say definitively whether it is or isn't. What I do know though is that for some Cinder backends they have limitations on how many volumes can be created. This is a very real problem, and puts the cost of additional volumes to hold images at a premium. In this kind of setup having the ability to cache only a few images instead of potentially all of them is the difference between using the feature or not.</div><div><br></div><div>One cool thing that the Cinder store has which the cache doesn't (right now) is to do essentially no-op image uploads. So when you say to create an image from volume we only need to clone the volume and set a new url in glance for it with the cinder:// url scheme. Easy-peasy. Again though this only works for raw images. We could keep a clone of the volume in its raw form in addition to uploading the format specified, but the question then is where do we track that and use it later?</div><div><br></div><div>Well, there are a couple options that I've looked into and I'm sure many more that I haven't though of (so if you see a better way please speak up!). We can register a new url in glance like cinder-raw:// or something like that, or register it as just cinder:// but add some sort of meta-data to say that it is a raw version not really suitable for others to use. I'm not a huge fan of this approach because it means that the glance image location isn't actually pointing to what the image metadata is describing. We have at that point effectively pushed Cinder implementation details into Glance and made it their problem.</div><div><br></div><div>Another option is to keep our details specific to Cinder. This means we have some sort of DB entry that tracks the image volumes and we can do lookups on this table to do efficient image clones upon being asked to create a volume from an image (sound familiar again?).</div><div><br></div><div>This leads me to think that the framework provided by the cache is the suitable place for that info and workflow, I would guess that even if we ditch the cache right now for L we would very soon be adding in most of that machinery anyway for this type of further optimization inside Cinder (and probably end up with caches again). IMO it works much nicer. No need to overload the details into existing Glance fields, add some new apis, or expose implementation details to users through metadata. We keep it all inside Cinder and as far as a user is concerned you just get nice fast response times when you create a volume from an image (in the right conditions).</div><div><br></div><div>So, this is not an exhaustive list of what could possibly be done to solve the problems or all of the stuff I've looked into. It is hopefully useful for some to understand more about how the two features are working, and how they can work together. To reiterate my stance on this, I don't see the functionality entirely duplicated in either solution, and not a very good way to get there. If we can achieve that I'm 100% on board with a unified approach.</div><div class="gmail_extra"><br></div><div class="gmail_extra">Maybe if we need more discussion we can talk about this at the mid-cycle meeting next week.</div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr">-Patrick</div></div></div>
<br><div class="gmail_quote">On Mon, Jul 27, 2015 at 6:09 PM, Mike Perez <span dir="ltr"><<a href="mailto:thingee@gmail.com" target="_blank">thingee@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 23:04 Jul 02, Tomoki Sekiyama wrote:<br>
</span><span class="">> Hi Cinder experts,<br>
><br>
> Currently Glance has cinder backend but it is broken for a long time.<br>
> I am proposing a glance-spec/patch to fix it by implementing the<br>
> uploading/downloading images to/from cinder volumes.<br>
><br>
> Glance-spec: <a href="https://review.openstack.org/#/c/183363/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/183363/</a><br>
> Glance_store patch: <a href="https://review.openstack.org/#/c/166414/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/166414/</a><br>
><br>
> This will be also useful for sharing volume data among tenants (similar<br>
> use-case with public snapshots proposal discussed at the design summit).<br>
><br>
><br>
> I need a review for them from cinder developers to make it progress.<br>
<br>
</span>So I tested out this glance_store patch, along with the Cinder patch [1] and<br>
Glance patch [2] necessary to get things working.<br>
<br>
This made us notice that Glance v2 + Cinder does not work [3].<br>
<br>
Regardless, I had Patrick East who is working on the Image Caching [4] effort<br>
in Cinder to help copying images faster explain what the difference between the<br>
glance_store and his efforts were [5].<br>
<br>
It's not exactly clear from the operator's perspective what the use cases are<br>
for using one or the other. This worries me on duplicated effort, two supported<br>
efforts we have to support once people deploy these different options,<br>
confusion on which to use, etc.<br>
<br>
I think we should decide on one approach and just use that one. Thoughts?<br>
<br>
<br>
[1] - <a href="https://review.openstack.org/#/c/201754/10" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/201754/10</a><br>
[2] - <a href="https://review.openstack.org/#/c/186201/11" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/186201/11</a><br>
[3] - <a href="https://bugs.launchpad.net/cinder/+bug/1478737" rel="noreferrer" target="_blank">https://bugs.launchpad.net/cinder/+bug/1478737</a><br>
[4] - <a href="http://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html" rel="noreferrer" target="_blank">http://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html</a><br>
[5] - <a href="http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2015-07-28.log.html#t2015-07-28T00:45:14" rel="noreferrer" target="_blank">http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2015-07-28.log.html#t2015-07-28T00:45:14</a><br>
<span class="HOEnZb"><font color="#888888"><br>
--<br>
Mike Perez<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div></div>