[openstack-dev] Where should Schema files live?

Sandy Walsh sandy.walsh at RACKSPACE.COM
Thu Nov 27 20:34:29 UTC 2014

>From: Eoghan Glynn [eglynn at redhat.com] Tuesday, November 25, 2014 1:49 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] Where should Schema files live?
>> I think Doug's suggestion of keeping the schema files in-tree and pushing
>> them to a well-known tarball maker in a build step is best so far.
>> It's still a little clunky, but not as clunky as having to sync two repos.
>Yes, I tend to agree.
>So just to confirm that my understanding lines up:
>* the tarball would be used by the consumer-side for unit tests and
>  limited functional tests (where the emitter service is not running)

Yep, sounds right.

>* the tarball would be also be used by the consumer-side in DSVM-based
>  CI and in a full production deployments (where the emitter service is
>  running)

Depends. We could also expose the schema via the emitter service's REST API
(a /schema resource, for example). I think that might be preferred for syncing.
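Roughly what I have in mind, as a sketch (the /schema path and helper names
are illustrative, not an agreed API):

```python
import json
import urllib.request


def schema_path(version=None):
    """Build the request path: /schema for the latest, /schema/<version> otherwise."""
    return "/schema" if version is None else "/schema/%s" % version


def fetch_schema(service_url, version=None):
    """Pull a schema document from the emitter's REST API (hypothetical endpoint)."""
    with urllib.request.urlopen(service_url + schema_path(version)) as resp:
        return json.load(resp)
```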

>* the tarballs will be versioned, with old versions remaining accessible
>  (as per the current practice with released source on tarballs.openstack.org)

Yes, we would likely need some versioning on the tarball, though the tarball
would contain all versions of the schemas, not just the latest. The versioning
would be for identifying new editions. We could probably just use a timestamp.

>* the consumer side will know which version of each schema it expects to
>  support, and will download the appropriate tarball at runtime

I was thinking the tarball would contain all versions and the client would use
the version it best understands. The client may have to deal with older schemas
at runtime and wouldn't know that until old data arrives. 
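A sketch of that negotiation, assuming simple "major.minor" version strings
(the function names are made up):

```python
def _as_tuple(version):
    """Turn "major.minor" into a comparable tuple, e.g. "1.2" -> (1, 2)."""
    return tuple(int(part) for part in version.split("."))


def best_supported(available, newest_understood):
    """Pick the newest version shipped in the tarball that the client understands."""
    candidates = [v for v in available if _as_tuple(v) <= _as_tuple(newest_understood)]
    return max(candidates, key=_as_tuple) if candidates else None
```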

>* the emitter side will signal the schema version that's it actually using,
>  via say a well-known field in the notification body


>* the consumer will reject notification payloads with a mismatched major
>  version to what it's expecting to support

It can reject messages with higher versions than it knows about, but should 
be able to deal with older versions. 
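i.e. something like this (illustrative only; assumes a "major.minor" version
field in the payload):

```python
def accepts(payload_version, supported_major):
    """Reject payloads with a newer major version than we support; older is fine."""
    major = int(payload_version.split(".", 1)[0])
    return major <= supported_major
```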

>> >[snip]
>> >> >> d. Should we make separate distro packages? Install to a well known
>> >> >> location all the time? This would work for local dev and integration
>> >> >> testing and we could fall back on B and C for production distribution.
>> >> >> Of
>> >> >> course, this will likely require people to add a new distro repo. Is
>> >> >> that
>> >> >> a concern?
>> >>
>> >> >Quick clarification ... when you say "distro packages", do you mean
>> >> >Linux-distro-specific package formats such as .rpm or .deb?
>> >>
>> >> Yep.
>> >So that would indeed work, but just to sound a small note of caution
>> >that keeping an oft-changing package (assumption #5) up-to-date for
>> >fedora20/21 & epel6/7, or precise/trusty, would involve some work.
>> >I don't know much about the Debian/Ubuntu packaging pipeline, in
>> >particular how it could be automated.
>> >But in my small experience of Fedora/EL packaging, the process is
>> >somewhat resistant to many fine-grained updates.
>> Ah, good to know. So, if we go with the tarball approach, we should be able
>> to avoid this. And it allows the service to easily serve up the schema
>> using its existing REST API.
>I'm not clear on how serving up the schema via an existing API would
>avoid the co-ordination issue identified in the original option (b)?

I was just thinking about the "making the schemas available at runtime" problem.
This wouldn't solve the CI/gate situation.

>Would that API just be a very simple proxying in front of the well-known
>source of these tarballs?

No need. The schemas would live with the source, so the service would serve
them up as static files.

Likely /schema for the latest and /schema/<version> for older versions. The
client would request an older version after it receives an older
notification. "Hmm, I don't know about that one. Let me get the details."
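That lazy-fetch behaviour might look something like this (a rough sketch;
fetch_schema is a stand-in for the HTTP GET against /schema/<version>):

```python
class SchemaCache:
    """Cache schemas by version; fetch unknown versions on demand."""

    def __init__(self, fetch_schema):
        self._fetch = fetch_schema  # callable: version -> schema document
        self._cache = {}

    def get(self, version):
        if version not in self._cache:
            # "Hmm, I don't know about that one. Let me get the details."
            self._cache[version] = self._fetch(version)
        return self._cache[version]
```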

>For production deployments, is it likely that some shops will not want
>to require access to an external site such as tarballs.openstack.org?

Agreed. I was thinking the client would get the schema from the service and
not from tarballs.openstack.org.

>So in that case, would we require that they mirror, or just assume that
>downstream packagers will bundle the appropriate schema versions with
>the packages for the emitter and consumer services?

That is a possibility. We could package the client to include a copy of the
schemas rather than making a runtime request. That makes sense, since the
client author needs to know the schema in order to write code to deal with it.


(perhaps we need to start a spec page or a wiki page with some diagrams)
