[openstack-dev] Treating notifications as a contract
Sandy Walsh
sandy.walsh at RACKSPACE.COM
Wed Sep 10 19:59:29 UTC 2014
> Jay Pipes - Wednesday, September 10, 2014 3:56 PM
>On 09/03/2014 11:21 AM, Sandy Walsh wrote:
>> On 9/3/2014 11:32 AM, Chris Dent wrote:
>>> I took some notes on this a few weeks ago and extracted what seemed
>>> to be the two main threads or ideas that were revealed by the
>>> conversation that happened in this thread:
>>>
>>> * At the micro level have versioned schema for notifications such that
>>> one end can declare "I am sending version X of notification
>>> foo.bar.Y" and the other end can effectively deal.
>>
>> Yes, that's table-stakes I think. Putting structure around the payload
>> section.
>>
>> Beyond type and version we should be able to attach meta information
>> like public/private visibility and perhaps hints for external mapping
>> (this trait -> that trait in CADF, for example).
>
>CADF doesn't address the underlying problem that Chris mentions above:
>that our notification events themselves need to have a version
>associated with them.
>
>Instead of versioning the message payloads themselves, CADF focuses
>versioning on the CADF spec itself, which is less than useful,
>IMO, and a symptom of what I like to call "XML-itis".
Well, the spec is the payload, so you can't change the payload without changing
the spec. Could be semantics, but I see your point.
>Where I *do* see some value in CADF is the primitive string codes it
>defines for resource classifications, actions, and outcomes (Sections
>A.2.5, A.3.5., and A.4.5 respectively in the CADF spec). I see no value
>in the long-form XML-itis fully-qualified URI long-forms of the
>primitive string codes.
+1 to the XML-itis, but do we really get any value from the resource
classifications without them? Other than "yes, that's a good list to work
from"?
>For resource classifications, it defines things like "compute",
>"storage", "service", etc, as well as a structured hierarchy for
>sub-classifications, like "storage/volume" or "service/block". Actions
>are string codes for verbs like "create", "configure" or "authenticate".
>Outcomes are string codes for "success", "failure", etc.
>
>What I feel we need is a library that matches a (resource_type, action,
>version) tuple to a JSONSchema document that describes the payload for
>that combination of resource_type, action, and version.
The 7-W's that CADF defines are quite useful and we should try to ensure our
notification payloads address as many of them as possible:
Who, What, When, Where, Why, On-What, To-Whom, To-Where ... not all are applicable
to every notification type.
Also, we need to define standard units-of-measure for numeric fields:
MB vs. GB, bps vs. kbps, image type definitions ... ideally all of this should be
part of the standard OpenStack nomenclature. These are the things that really
belong in oslo and should be used by everything from notifications to the scheduler
to flavor definitions, etc.
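Something as small as a shared lookup table would go a long way. A hypothetical
sketch (none of these names exist in oslo today, just illustrating the idea):

```python
# Hypothetical sketch only -- illustrating "standard units-of-measure",
# not an existing oslo module. One shared table pins each numeric trait
# to a canonical unit so producers and consumers never guess MB-vs-GB.
CANONICAL_UNITS = {
    'memory': 'MB',
    'disk': 'GB',
    'bandwidth': 'bps',
}

CONVERSIONS = {
    ('GB', 'MB'): 1024,
    ('MB', 'MB'): 1,
    ('GB', 'GB'): 1,
    ('kbps', 'bps'): 1000,
    ('bps', 'bps'): 1,
}


def normalize(trait, value, unit):
    """Convert a reported value into the canonical unit for its trait."""
    target = CANONICAL_UNITS[trait]
    return value * CONVERSIONS[(unit, target)], target

# e.g. normalize('memory', 2, 'GB') -> (2048, 'MB')
```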
>If I were king for a day, I'd have a standardized notification message
>format that simply consisted of:
>
>resource_class (string) <-- From CADF, e.g. "service/block"
>occurred_on (timestamp) <-- when the event was published
>action (string) <-- From CADF, e.g. "create"
>version (int or tuple) <-- version of the (resource_class, action)
>payload (json-encoded string) <-- the message itself
>outcome (string) <-- Still on fence for this, versus just using payload
Yep, not a problem with that, so long as the payload has all the other things
we need (versioning, data types, visibility, etc.).
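For concreteness, an envelope along those lines might look like this (field
names from the list above, values made up):

```python
# Illustrative only: the envelope stays small and fixed, everything
# event-specific lives in the schema-governed payload.
notification = {
    'resource_class': 'compute.machine',    # CADF-style resource code
    'occurred_on': '2014-09-10T19:59:29Z',  # when the event was published
    'action': 'update',                     # CADF-style action code
    'version': 1,                           # version of (resource_class, action)
    'outcome': 'success',
    'payload': '{"state": "shutdown", "old_state": "running"}',
}
```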
>There would be an Oslo library that would store the codification of the
>resource classes and actions, along with the mapping of (resource_class,
>action, version) to the JSONSchema document describing the payload field.
>
>Producers of messages would consume the oslo lib like so:
>
>```python
>from oslo.notifications import resource_classes
>from oslo.notifications import actions
>from oslo.notifications import message
Not sure how this would look from a packaging perspective, but sure.
I'm not sure I like having to define every resource/action type in code
and then having an explosion of types in oslo.notifications.actions ... perhaps
that should just be part of the schema definition:
  'action_type': <string> [acceptable values: "create", "delete", "update"]
I'd rather see these schemas defined in some machine-readable
format (YAML or something) vs. code. Other languages are going to want
to consume these notifications and should be able to reuse the definitions.
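That is, ship the schemas as data, keyed by (resource_class, action, version),
and let each language load them. A rough sketch of what the Python side could
look like (hypothetical file layout, not an existing library):

```python
# Hypothetical sketch: schema definitions live in data files such as
#   schemas/compute.machine/update/1.yaml
# and any language can load them. A Python helper might be no more than:
import jsonschema  # python-jsonschema
import yaml        # PyYAML


def load_schema(resource_class, action, version):
    path = 'schemas/%s/%s/%d.yaml' % (resource_class, action, version)
    with open(path) as f:
        return yaml.safe_load(f)


def validate_payload(payload, resource_class, action, version):
    schema = load_schema(resource_class, action, version)
    jsonschema.validate(payload, schema)  # raises ValidationError on mismatch
```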
>from nova.compute import power_states
>from nova.compute import task_states
>...
>
> msg = message.Message(resource_classes.compute.machine,
>                       actions.update,
>                       version=1)
>
> # msg is now an object that is guarded by the JSONSchema document
> # that describes the version 1.0 schema of the UPDATE action
> # for the resource class representing a VM (compute.machine)
> # This means that if the producer attempts to set an
> # attribute of the msg object that is *not* in that JSONSchema
> # document, then an AttributeError would be raised. This essentially
> # codifies the message's resource_class and action attributes
> # (as constants in the oslo.notifications.resource_classes and
> # oslo.notifications.actions module) as well as codifies the
> # schema of the (resource_class, action, version) combo.
Right, that's what we want: a schema to enforce structure both within a
single payload and across payloads.
> # Assume the JSONSchema document for a
> # (resource_class, action, version) of
> # ("compute.machine", "update", 1) looks like this:
> # {
> #   "properties": {
> #     "state": {
> #       "type": "string",
> #       "description": "The new power state of VM"
> #     },
> #     "old_state": {
> #       "type": "string",
> #       "description": "The old power state of VM"
> #     },
> #     "task_state": {
> #       "type": "string",
> #       "description": "The new task state of VM"
> #     },
> #     "old_task_state": {
> #       "type": "string",
> #       "description": "The old task state of VM"
> #     }
> #   },
> #   "additionalProperties": false
> # }
>
> msg.old_state = power_states.RUNNING
> msg.state = power_states.SHUTDOWN
> msg.old_taskkk_state = None # <--- would blow up with AttributeError,
> # since taskkk is misspelled.
>
> # Send the message over the wire...
> message.send(...)
>
>```
Yep. We're saying the same thing here.
The objects/classes can load the machine-readable schema definition to
enforce the setters (vs. having to patch implementations every time).
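Roughly like this (a sketch only, reusing the loader idea above, not oslo code):

```python
# Sketch of schema-driven setter enforcement: the schema's "properties"
# decide which attributes a Message will accept, so adding a field means
# editing the schema file rather than patching the class.
class Message(object):
    def __init__(self, schema):
        # `schema` is the dict loaded from the YAML/JSON definition
        object.__setattr__(self, '_schema', schema)
        object.__setattr__(self, '_data', {})

    def __setattr__(self, name, value):
        if name not in self._schema.get('properties', {}):
            raise AttributeError("'%s' is not part of the schema" % name)
        self._data[name] = value

    def __getattr__(self, name):
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)
```

A misspelled attribute then blows up at the producer, exactly as in the
example above, without any per-notification code changes.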
>Similarly, on the consumer side, the message for a particular
>resource_class, action and version can be constructed from the
>oslo.notifications library and the JSONSchema document could be
>consulted for payload introspection:
>
>```python
>from oslo.notifications import resource_classes
>from oslo.notifications import actions
>from oslo.notifications import message
>...
>
> # This would construct a Message object by looking at the
> # resource_class, action, and version fields in the message
> # envelope...
> msg = message.from_publisher(...)
> if msg.resource_class == resource_classes.compute.machine:
So long as resource_classes.compute.machine is dynamically fabricated, yes.
>     if msg.action == actions.update:
>         # do something with the event...
>
> # Print all the fields in the message. Used as an example of
> # using the JSONSchema document associated with the event_type
> # and version in order to do introspection of the message
> schema = msg.get_schema()
> for name, field in schema.properties.iteritems():
>     value = getattr(msg, name, None)
>     if 'string' in field.types or 'number' in field.types:
>         print "Field: %s Value: %s" % (name, value)
>     elif 'object' in field.types:
>         pretty_value = jsonutils.dumps(value, indent=4)
>         if field.is_public:
>             print "Field: %s Value: %s" % (name, pretty_value)
>         else:
>             print "Field: %s Value: <<REDACTED>>" % (name, )
>```
>
>This way, we take the good stuff from CADF (the standardized string
>primitives) and add real payload versioning to the mix, along with an
>OpenStack-style Oslo library to use it all.
I think we're all in agreement on the schema stuff. What I'd like to see
(if I were king for a day) would be a transformation engine that
can take an oslo.notification and convert it to something CADF-compliant,
or scrub it so it can be exported for end-user consumption.
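By "transformation engine" I'm picturing something mapping-driven, where the
mapping itself is just data. A hypothetical sketch (the CADF field names here
are illustrative, not a real binding):

```python
# Hypothetical sketch: a per-event mapping renames the traits the target
# format cares about and drops everything else -- the same mechanism
# covers CADF export and scrubbing a notification for end-user consumption.
CADF_MAP = {
    'resource_class': 'target.typeURI',
    'action': 'action',
    'outcome': 'outcome',
    'occurred_on': 'eventTime',
}


def transform(notification, mapping):
    """Return a new dict containing only the mapped (exportable) traits."""
    return dict((dst, notification[src])
                for src, dst in mapping.items()
                if src in notification)
```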
But yes, the schema definitions and enforcement on both the producer and
consumer side are definitely table-stakes for next-gen notifications.
>Thoughts?
>-jay
>>> * At the macro level standardize a packaging or envelope of all
>>> notifications so that they can be consumed by very similar code.
>>> That is: constrain the notifications in some way so we can also
>>> constrain the consumer code.
>> That's the intention of what we have now. The top level traits are
>> standard, the payload is open. We really only require: message_id,
>> timestamp and event_type. For auditing we need to cover Who, What, When,
>> Where, Why, OnWhat, OnWhere, FromWhere.
>>
>>> These ideas serve two different purposes: One is to ensure that
>>> existing notification use cases are satisfied with robustness and
>>> provide a contract between two endpoints. The other is to allow a
>>> fecund notification environment that allows and enables many
>>> participants.
>> Good goals. When Producer and Consumer know what to expect, things are
>> good ... "I know to find the Instance ID <here>". When the consumer
>> wants to deal with a notification as a generic object, things get tricky
>> ("find the instance ID in the payload", "What is the image type?", "Is
>> this an error notification?")
>>
>> Basically, how do we define the principal artifacts for each service and
>> grant the consumer easy/consistent access to them? (like the 7-W's above)
>>
>> I'd really like to find a way to solve that problem.
>>
>>> Is that a good summary? What did I leave out or get wrong?
>>>
>>
>> Great start! Let's keep it simple and do-able.
>>
>> We should also review the oslo.messaging notification api ... I've got
>> some concerns we've lost our way there.
>>
>> -S
>>