[Openstack] OpenStack Summit Session Ideas

Joshua Harlow harlowja at yahoo-inc.com
Fri Sep 9 18:58:41 UTC 2011


Would how to handle failures/message loss in a tolerant manner be a good #6 (so that VMs aren't lost, state isn't corrupted, ...)? This seems like it could affect both architecture and code.
Maybe a #7 is how to program in a highly available manner (or how to set up for this), which might be related to #6.
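As a concrete example of what I mean by tolerant handling: something like the sketch below, where a message is only acknowledged after the work succeeds, so a crashed worker never silently drops a request. The queue name and handler are made up, and kombu is used purely for illustration.

import kombu

def handle_run_instance(body):
    # Hypothetical handler; it should be idempotent so a message
    # redelivered after a worker crash can safely be processed twice.
    pass

with kombu.Connection("amqp://guest:guest@localhost//") as conn:
    queue = conn.SimpleQueue("compute.node1")  # hypothetical queue name
    while True:
        message = queue.get(block=True)
        try:
            handle_run_instance(message.payload)
        except Exception:
            message.requeue()  # redeliver instead of losing the request
        else:
            message.ack()      # only ack once the work actually succeeded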
The rest of these seem like a great start!

-Josh

On 9/9/11 10:45 AM, "Brian Lamar" <brian.lamar at rackspace.com> wrote:

I've been thinking about submitting a brainstorm session (or two) for the conference where we can go over some 'big picture' items I've been wrestling with lately.

The attached image illustrates a couple of points, and while I'd like to discuss them in great detail, I feel this list might not be the ideal place for that discussion and that brainstorming session(s) might be more appropriate. Below is a list of potential topics:

Topic #1 - Separating API projects from OpenStack Nova. I would push for separate projects for the EC2-compatible API, the Rackspace-compatible API, and the OpenStack Official API. Nova is not the place for code dealing with cross-project public APIs; in my opinion, it needs to be more focused rather than the place where everything gets put.

Topic #2 - Standardizing on and documenting programmatic interfaces to each OpenStack project. This means the Compute API would be documented with all methods/input parameters so that it could be implemented in any other language. This creates clean coupling points for the future and lets higher-level APIs stay sufficiently stable. If Topic #1 were to become a reality, we would absolutely need to depend on the interfaces of the Compute/Volume/Network APIs. If you look at the current Compute API [1] you'll see that it could use a little cleanup for consistency and coherency.
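To make that concrete, here's a rough sketch of what a documented interface could look like. The method name and parameters below are hypothetical, not the actual Compute API:

class ComputeAPI(object):
    """Contract for the Compute service.

    Each method documents its parameters and return value precisely
    enough that the interface could be reimplemented in any language.
    """

    def create(self, context, instance_type, image_id, count=1):
        """Provision new instances.

        :param context: authorization context for this request
        :param instance_type: name of the instance flavor to boot
        :param image_id: identifier of the image to boot from
        :param count: number of instances to provision
        :returns: list of dicts describing the new instances
        """
        raise NotImplementedError()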

Topic #3 - Use the database as a *cache* and nothing more. For a while we've talked about no-db-messaging, and if you've ever followed Udi Dahan [2] or seen slides on CQRS [3] you can see we're eerily close to building a CQRS-ish system here. Maybe it's a buzz-word [4] architecture, but we can at least discuss the benefits/pitfalls of giving the API read-only access to the database and giving the Managers write access.
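Roughly, the read/write split might look like this; the class and method names are hypothetical, and the point is only the shape of the separation:

class ComputeAPI(object):
    """Query side: read-only access to the database."""

    def __init__(self, db_reader, rpc):
        self.db = db_reader  # connection with SELECT privileges only
        self.rpc = rpc

    def get(self, context, instance_id):
        # Reads come straight from the database "cache".
        return self.db.instance_get(context, instance_id)

    def reboot(self, context, instance_id):
        # No direct writes here; cast a command to the manager instead.
        self.rpc.cast(context, "compute",
                      {"method": "reboot_instance",
                       "args": {"instance_id": instance_id}})


class ComputeManager(object):
    """Command side: the only component with write access."""

    def __init__(self, db_writer):
        self.db = db_writer

    def reboot_instance(self, context, instance_id):
        # ... drive the hypervisor here, then record the new state ...
        self.db.instance_update(context, instance_id,
                                {"power_state": "rebooting"})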

Topic #4 - As far as I know, Glance doesn't follow some of the same patterns that Nova does. Nova pioneered the API/Manager paradigm, and since Glance doesn't use AMQP it never really needed a Manager process. As such there is, from what I can tell, no clear delineation between its "internal" API (Python) and its "external" API (HTTP). I feel these should be two distinct, well-defined entities, and as such we might look at making Glance follow some of the design principles found in Nova. This would also allow us to remove GlanceImageService from Nova.
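In sketch form (all names here are made up), the delineation might look like:

class ImageService(object):
    """Internal (Python) API: what other OpenStack code would import."""

    def get_image_metadata(self, context, image_id):
        """Return the metadata dict for a single image."""
        raise NotImplementedError()


class ImageController(object):
    """External (HTTP) API: a thin serialization layer on top."""

    def __init__(self, service):
        self.service = service

    def show(self, request, image_id):
        meta = self.service.get_image_metadata(request.context, image_id)
        return {"image": meta}

With a split like that, Nova could consume the internal API directly instead of wrapping the HTTP one.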

Topic #5 - Standardize and document all messages which are sent to AMQP from each project. This has been brought up on the mailing list a couple of times, and I'm in agreement that we really need to document and standardize these messages so that other projects can potentially write to our queues.
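For example, a documented, versioned message envelope might look like this; the fields below are hypothetical, not a proposed standard:

import json
import uuid

def make_message(method, args):
    """Build a queue message with a documented, versioned envelope.

    :param method: name of the remote method to invoke
    :param args: dict of keyword arguments for that method
    """
    return json.dumps({
        "version": "1.0",                 # bump on incompatible changes
        "message_id": str(uuid.uuid4()),  # for tracing/deduplication
        "method": method,
        "args": args,
    })

# e.g. what another project would put on a compute queue:
msg = make_message("run_instance", {"instance_id": "i-00000001"})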

So basically I'd love to know if anyone has interest in any or all of these topics. If you're interested, want more information, or think that my ideas are just bad, feel free to respond to me or the list.

Thanks!

Brian

[1] - http://paste.openstack.org/show/2408/ (All 'public' methods in nova.compute.api)
[2] - http://www.udidahan.com/ (Currently down at the time of writing)
[3] - http://www.slideshare.net/jonathanoliver/high-performance-distributed-systems-with-cqrs-3575257 (Not the slides I was looking for, but since Udi's site is down they'll have to do)
[4] - http://www.udidahan.com/2011/04/22/when-to-avoid-cqrs/ (Also down, but the summary is: Don't use CQRS because CQRS is new and fun! Use it only in certain situations.)
