[openstack-dev] [nova] os-capabilities library created

Tripp, Travis S travis.tripp at hpe.com
Fri Aug 12 00:00:58 UTC 2016


    Excerpts from Jay Pipes's message of 2016-08-03 19:47:37 -0400:
    >>  Hi Novas and anyone interested in how to represent capabilities in a 
    >>  consistent fashion.
    >>
    >>  I spent an hour creating a new os-capabilities Python library this evening:
    > 
    >>  http://github.com/jaypipes/os-capabilities
    >> Anyway, lemme know your initial thoughts please.

   On 8/11/16, 3:52 PM, "Clint Byrum" <clint at fewbar.com> wrote:    

    > Did we ever resolve the similarities between this and Searchlight's
    > similar goals of providing consistent metadata to the users?
    
    > http://docs.openstack.org/developer/glance/metadefs-concepts.html
    
    > I understand your library is for operators and developers to
    > collaborate, but it seems like there should be some alignment with this
    > UI that wants to do the same thing for the end user where appropriate.
    
The metadefs catalog wasn’t written just as a UI construct; it is actually
a derivative of an effort called Graffiti [0] that was entirely about
capability and requirement matching, and about providing portability
across clouds.

The Graffiti project was proposed at the Atlanta (Juno) OpenStack summit.
Since then, many of its concepts have been adopted and are now covered
by multiple different OpenStack projects.

[0] https://wiki.openstack.org/wiki/Graffiti/Architecture

A key concept is both to support defining standard metadata that can
be attached to various resources and to provide a common service
for deployers to register their own metadata with visibility restrictions.
This can be anything from “Gold, Silver, Bronze” to “some hardware
capability”. Ultimately it is up to the deployer to activate, publish,
or discover the capabilities in their environment and enable them in the catalog.

Glance Metadata Definition Catalog (Box 1 in the workflow diagram)
* http://docs.openstack.org/developer/glance/metadefs-concepts.html
* https://github.com/openstack/glance/tree/master/etc/metadefs
* https://youtu.be/zJpHXdBOoeM

Horizon features (Box 2 in the diagram - but also CLI)
* An admin UI for managing the catalog
  * Admin -> Metadata Definitions (Kilo)
* A widget for associating metadata to different resources
  (Update Metadata action on each row item below)
  * Admin -> Images (Juno)
  * Admin -> Flavors (Kilo)
  * Admin -> Host Aggregates (Kilo)
  * Project -> Images (Liberty)
  * Project -> Instances (Mitaka)
* The ability to add metadata at launch time
  * Project -> Launch Instance (ng launch instance enabled) (Mitaka)

Searchlight (Box 3 in the workflow diagram)
* http://launchpad.net/searchlight
* https://wiki.openstack.org/wiki/Searchlight

The Searchlight project is primarily intended for high performance
searching across the cloud. Fulfilling the concepts of Graffiti is a side
effect, but those concepts did provide some of the original inspiration for
the project. We actually have a blueprint we will implement in “O” that will
provide data mapping from metadefs to the schemas in Elasticsearch [1].

* [1] https://blueprints.launchpad.net/searchlight/+spec/configurable-dynamic-properties

In addition, when this popped up in my mailbox today, a search revealed
this message from last August with a few points that I’d like to help clarify below:

On 8/10/15, 8:05 AM, "Jay Pipes" <jaypipes at gmail.com> wrote:

>  The Glance metadefs stuff is problematic in a number of ways:

> 1) It wasn't written with Nova in mind at all, but rather for UI needs.
> This means it introduces a bunch of constants that are different from
> the constants in Nova.

This is actually not the case. This work was originally co-sponsored by Intel
to help expose all the CPU capabilities in Nova. The constants in
the metadef catalog all came from combing through the Nova code, which
was a complete maze, because they were not available at the time from
Nova (or Cinder or Glance or …). See the overview here [2]:

 [2] https://wiki.openstack.org/wiki/Graffiti

> 2) It uses a custom JSON format instead of JSONSchema, so we now need to
> figure out the schema for these metadef documents and keep up to date
> with that schema as it changes.

It uses JSON schema, but surrounds it with a very lightweight envelope.
The envelope is called a namespace and is simply a container of JSON
schema, allowing us to manage it as a programmatic unit and as a way
for cloud deployers to share the capabilities across clouds very easily.

We did place one limitation on it: it cannot support nested objects. This
was primarily due to the extreme difficulty of representing that construct
to users in an easy-to-understand way:

http://docs.openstack.org/developer/glance/metadefs-concepts.html#catalog-terminology
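To make the envelope idea concrete, here is a rough sketch of what a
namespace looks like: a named, visibility-controlled container whose
properties are plain JSON-schema definitions. Field names loosely follow
the metadefs docs, but this is illustrative, not the exact wire format:

```python
import json

# Illustrative metadef namespace "envelope"; the property definitions
# inside are ordinary (non-nested) JSON-schema snippets.
namespace = {
    "namespace": "OS::Compute::ExampleCPUFeatures",  # unique identifier
    "display_name": "Example CPU Features",
    "description": "Illustrative hardware capability properties.",
    "visibility": "public",  # or "private" (visible only to the owner)
    "resource_type_associations": [
        {"name": "OS::Glance::Image"},
        {"name": "OS::Nova::Flavor"},
    ],
    "properties": {
        "cpu_feature_aes": {
            "title": "AES-NI",
            "type": "string",
            "enum": ["required", "preferred"],
        }
    },
}

# Serializing the whole namespace as one document is what lets deployers
# manage it as a unit and share it across clouds.
print(json.dumps(namespace, indent=2))
```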

> 3) It mixes qualitative things -- CPU model, features, etc -- with
> quantitative things -- amount of cores, threads, etc. These two things
> are precisely what we are trying to decouple from each other in the next
> generation of Nova's "flavors".
>
> 4) It is still missing the user-facing representation of these things.
> Users -- i.e. the people launching VMs in Nova -- do not want or need to
> know whether the underlying hardware is running Nehalem processors or
> Sandybridge processors. They only need to know the set of generic
> features that are exposed to the workloads running on the host. In other
> words, we are looking for a way to map "service levels" such as "Silver
> Compute" or "Whizbang Amazing Compute" to a set of hardware that exposes
> some features. We need that compatibility layer on top of the low-level
> hardware specifics that will allow service providers to create these
> "product categories" if you will.

It is true that the catalog can contain anything. This includes defining
a namespace such as “Service Levels” with GOLD, SILVER, and BRONZE in
addition to specific items as mentioned above. The operator could even make
the service levels public (visible via the API to all, and hence in the UI)
and the other items private (visible only to the owning project, and not
in the UI). So an operator could tag images or flavors with both service
levels and other specific properties, and decide which definitions to
expose to whom. In Horizon, we have some widgets that honor this and will
selectively display metadata based on whether the definition is visible
to that user.

It was originally envisioned to be more similar to some of the specs
that have gone through on capabilities in Nova, but due to various
limitations in OpenStack at the time, a number of design simplifications
were made to work within the way OpenStack services operate. For example,
all of the resource types (images, flavors, aggregates) have a very simple
mechanism for adding data to them. Taking advantage of host aggregate
filtering with matching flavor or image properties lets operators group
hosts into an aggregate and simply tag something like “GOLD” on both
the aggregate and the image or the flavor. As hosts are discovered
in the environment, they are simply added to the correct aggregate.
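The GOLD-tagging flow above can be sketched roughly as follows. This is an
illustration of the matching idea in the spirit of Nova's
AggregateInstanceExtraSpecsFilter, not Nova's actual filter code; the
function and data shapes are made up:

```python
# Hypothetical sketch: keep only hosts whose aggregate metadata
# satisfies every key/value pair in the flavor's extra specs.
def hosts_matching_flavor(hosts, flavor_extra_specs):
    matched = []
    for host in hosts:
        meta = host["aggregate_metadata"]
        if all(meta.get(k) == v for k, v in flavor_extra_specs.items()):
            matched.append(host["name"])
    return matched

# The operator tags both the aggregate and the flavor with "GOLD".
hosts = [
    {"name": "node1", "aggregate_metadata": {"service_level": "GOLD"}},
    {"name": "node2", "aggregate_metadata": {"service_level": "BRONZE"}},
]
flavor_extra_specs = {"service_level": "GOLD"}

print(hosts_matching_flavor(hosts, flavor_extra_specs))  # ['node1']
```

A newly discovered host only needs to be added to the right aggregate to
start receiving matching workloads; nothing about the flavor changes.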

Of course, that is a very coarse-grained example, but simple tagging or
key/value pairs could likewise be used to attach service-level information
to a flavor, and the flavors could then be filtered that way.

Anyway, all of the above is not meant in any way as a challenge to the
current work, but as a way to help explain what is already in place and
to clear up a few points so that they may be considered in the current work.

Thanks,
Travis


