[openstack-dev] [heat] [glance] Heater Proposal
Vishvananda Ishaya
vishvananda at gmail.com
Fri Dec 6 18:39:29 UTC 2013
On Dec 6, 2013, at 10:07 AM, Georgy Okrokvertskhov <gokrokvertskhov at mirantis.com> wrote:
> Hi,
>
> I am really inspired by this thread. Frankly speaking, Glance was a kind of sacred entity for Murano, as it is a service with a long history in OpenStack; we did not even think in the direction of changing Glance. After spending a night with these ideas, I find myself dreaming of a unified catalog in which the full range of different entities is presented. Just imagine that we have everything as first-class citizens of the catalog, treated equally: a single VM (image), a Heat template (fixed number of VMs / autoscaling groups), a Murano application (generated Heat templates), Solum assemblies.
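>
> Purely as an illustration -- none of these entry types or fields exist in any service today, they are just a sketch of the idea in Python:
>
>     # Hypothetical unified catalog entries; every field name here is invented.
>     catalog = [
>         {"type": "image", "name": "fedora-20",
>          "properties": {"disk_format": "qcow2", "min_ram": 512}},
>         {"type": "heat-template", "name": "wordpress",
>          "properties": {"template_version": "2013-05-23",
>                         "parameters": ["db_password", "key_name"]}},
>         {"type": "murano-app", "name": "java-app-farm",
>          "properties": {"generates": "heat-template"}},
>         {"type": "solum-assembly", "name": "my-assembly",
>          "properties": {"plan": "python-webapp"}},
>     ]
>
>     # One listing call could then serve every deployable entity type.
>     templates = [e for e in catalog if e["type"] == "heat-template"]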
>
> Projects like Solum will benefit greatly from this catalog, as they can use all varieties of VM configurations while talking to a single service.
> This catalog will be able not just to list all possible deployable entities, but can also act as a registry of already deployed configurations. This is perfectly aligned with the goal of making the catalog a kind of marketplace that provides billing information too.
>
> OpenStack users will also benefit from this, as they will have a unified approach to managing deployments and deployable entities.
>
> I doubt that this could be done by a single team, but if all teams join the effort we can do it. From my perspective, this could be part of the Glance program; it is not necessary to add a new program for it. As was mentioned earlier in this thread, the idea of a marketplace for images in Glance has been around for some time. I think we can extend it to the idea of creating a marketplace for any deployable entity, regardless of the way it is deployed. Since Glance is a core project, which means it always exists in an OpenStack deployment, it makes sense to use it as the central catalog for everything.
+1
Vish
>
> Thanks
> Georgy
>
>
> On Fri, Dec 6, 2013 at 8:57 AM, Mark Washenberger <mark.washenberger at markwash.net> wrote:
>
>
>
> On Thu, Dec 5, 2013 at 9:32 PM, Jay Pipes <jaypipes at gmail.com> wrote:
> On 12/05/2013 04:25 PM, Clint Byrum wrote:
> Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
> Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
> On Dec 5, 2013, at 10:10 AM, Clint Byrum <clint at fewbar.com>
> wrote:
>
> Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
> Why not just use glance?
>
>
> I've asked that question a few times, and I think I can collate the
> responses I've received below. I think enhancing glance to do these
> things is on the table:
>
> 1. Glance is for big blobs of data not tiny templates.
> 2. Versioning of a single resource is desired.
> 3. Tagging/classifying/listing/sorting
> 4. Glance is designed to expose the uploaded blobs to nova, not users
>
> My responses:
>
> 1: Irrelevant. Smaller things will fit in it just fine.
>
> Fitting is one thing; optimizations built around particular assumptions about the size of the data and the frequency of reads/writes might be an issue, but I admit to ignorance about those details in Glance.
>
>
> Optimizations can be improved for various use cases. The design, however,
> has no assumptions that I know about that would invalidate storing blobs
> of yaml/json vs. blobs of kernel/qcow2/raw image.
>
> I think we are getting out into the weeds a little bit here. It is important to think about these APIs in terms of what they actually do before deciding whether or not to combine them.
>
> I think of HeatR as a template storage service: it provides extra data and operations on templates. HeatR should not care about how those templates are stored.
> Glance is an image storage service: it provides extra data and operations on images (not blobs), and it happens to use Swift as a backend.
>
> If HeatR and Glance were combined, the result would be taking two very different types of data (template metadata vs. image metadata) and mashing them into one service. How would adding the complexity of HeatR benefit Glance, when the two deal with conceptually very different types of data? For instance, should a template ever care about the field "minRam" that is stored with an image? Combining them adds a huge amount of development complexity for a very small operational payoff, and OpenStack is already so operationally complex that HeatR as a separate service would add negligible burden. Only clients of Heat will ever care about data and operations on templates, so I move that HeatR become its own service, or become part of Heat.
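>
> To make the "very different metadata" point concrete -- these dicts are only an illustration, with field names loosely borrowed from the Glance image schema and HOT templates, not an actual API:
>
>     # Roughly what each service cares about; the overlap is tiny.
>     image_metadata = {
>         "name": "ubuntu-12.04",
>         "disk_format": "qcow2",
>         "container_format": "bare",
>         "min_ram": 512,        # meaningless for a template
>         "min_disk": 10,
>     }
>     template_metadata = {
>         "name": "three-tier-web",
>         "heat_template_version": "2013-05-23",
>         "parameters": ["key_name", "flavor"],   # meaningless for an image
>         "description": "Load balancer, app servers and a database",
>     }
>     shared = set(image_metadata) & set(template_metadata)   # just {"name"}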
>
>
> I spoke at length via G+ with Randall and Tim about this earlier today.
> I think I understand the impetus for all of this a little better now.
>
> Basically what I'm suggesting is that Glance is only narrow in scope
> because images were the only objects that OpenStack needed a catalog for
> before now.
>
> However, the overlap between a catalog of images and a catalog of
> templates is quite comprehensive. The individual fields that matter to
> images are different from the ones that matter to templates, but that
> is a really minor detail, isn't it?
>
> I would suggest that Glance be slightly expanded in scope to be an
> object catalog. Each object type can have its own set of fields that
> matter to it.
>
> Even if this isn't a minor change to Glance, it still has many
> advantages over writing something from scratch and asking people to
> deploy another service that is 99% the same as Glance.
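>
> As a rough sketch of "each object type has its own set of fields" -- this is not Glance code, just an illustration of the idea, and every schema below is invented:
>
>     # Hypothetical per-type field registry for a generic object catalog.
>     TYPE_SCHEMAS = {
>         "image": {"disk_format", "container_format", "min_ram", "min_disk"},
>         "heat-template": {"heat_template_version", "parameters"},
>     }
>
>     def validate(entry_type, properties):
>         """Reject fields that the given catalog object type does not define."""
>         allowed = TYPE_SCHEMAS[entry_type]
>         unknown = set(properties) - allowed
>         if unknown:
>             raise ValueError("unknown fields for %s: %s"
>                              % (entry_type, sorted(unknown)))
>
>     validate("heat-template", {"heat_template_version": "2013-05-23"})
>     # validate("heat-template", {"min_ram": 512}) would raise ValueError.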
>
> My suggestion for long-term architecture would be to use Murano for catalog/metadata information (for images/templates/whatever) and move the block-streaming drivers into Cinder, and get rid of the Glance project entirely. Murano would then become the catalog/registry of objects in the OpenStack world, Cinder would be the thing that manages and streams blocks of data or block devices, and Glance could go away. Imagine it... OpenStack actually *reducing* the number of projects instead of expanding! :)
>
> I think it is good to mention the idea of shrinking the overall OpenStack code base. The fact that the best code offers a lot of features without a hugely expanded codebase often seems forgotten--perhaps because it is somewhat incompatible with our low-barrier-to-entry model of development.
>
> However, as a mild defense of Glance's place in the OpenStack ecosystem, I'm not sure yet that a general catalog/metadata service would be a proper replacement. There are two key distinctions between Glance and a catalog/metadata service. One is that Glance *owns* the reference to the underlying data--meaning Glance can control the consistency of its references. I.e. you should not be able to delete the image data out from underneath Glance while the Image entry exists, in order to avoid a terrible user experience. Two is that Glance understands and coordinates the meaning and relationships of Image metadata. Without these distinctions, I'm not sure we need any OpenStack project at all--we should probably just publish an LDAP schema for Images/Templates/what-have-you and use OpenLDAP.
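>
> To illustrate the first distinction (again just a sketch, not Glance's actual implementation): owning the reference means a delete of the backing data can be refused while a catalog record still points at it.
>
>     # Hypothetical reference-owning store; names and structure are invented.
>     class OwningCatalog(object):
>         def __init__(self):
>             self.entries = {}   # entry id -> backing data location
>             self.blobs = {}     # location -> bytes
>
>         def delete_blob(self, location):
>             if location in self.entries.values():
>                 raise RuntimeError("data is still referenced by an entry")
>             del self.blobs[location]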
>
> To clarify, I think these functions are critical to Glance's role as a gatekeeper and helper, especially in public clouds--but having this role in your deployment is probably something that should ultimately become optional. Perhaps Glance should not be in the required path for all deployments.
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Georgy Okrokvertskhov
> Technical Program Manager,
> Cloud and Infrastructure Services,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284