[Openstack] Architecture for Shared Components

Jorge Williams jorge.williams at rackspace.com
Mon Aug 2 20:55:06 UTC 2010


On Aug 2, 2010, at 12:40 PM, Eric Day wrote:

> Hi Jorge, Michael,
> 
> Yeah, this is pretty much what I had in mind. Beyond having services
> that are called out to from the APIs, the implementation within the
> projects should not be backend-specific either. For example, there
> should be a generic auth API that can be used across projects, but it
> could be backed by LDAP, MySQL, or something else. This keeps things
> modular for future changes and also makes testing easy, since you can
> load in hard-coded modules that don't depend on having a full setup.

I like the idea of a modular implementation, but let's not lose track of the fact that as long as we keep the protocol between the proxies consistent,
we can easily swap one proxy for another. We need to make sure we get the protocol right. I like the idea of third-party proxy services :-).
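
To make the pluggable-backend idea a bit more concrete, here's a rough sketch of the sort of thing Eric is describing. None of these names are a proposed interface; the LDAP details are elided, and the hard-coded backend just shows how tests could avoid needing external infrastructure:

    # Sketch only: a generic auth API that projects code against, with
    # the actual backend (LDAP, MySQL, hard-coded) swapped in via
    # configuration. All names here are illustrative.

    class AuthBackend(object):
        def authenticate(self, username, credentials):
            raise NotImplementedError

    class LdapBackend(AuthBackend):
        def __init__(self, conn):
            self.conn = conn  # an already-connected LDAP client (assumed)
        def authenticate(self, username, credentials):
            return self.conn.bind(username, credentials)  # assumed method

    class FakeBackend(AuthBackend):
        """Hard-coded users for tests; no external setup needed."""
        def __init__(self, users):
            self.users = users  # e.g. {'alice': 'secret'}
        def authenticate(self, username, credentials):
            return self.users.get(username) == credentials

Swapping LDAP for MySQL, or loading FakeBackend in the test suite, is then just a configuration change; nothing above the AuthBackend interface has to know.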

> 
> On Mon, Aug 02, 2010 at 08:30:03AM -0400, Michael Gundlach wrote:
>>      Being able to purge something from an HTTP cache by simply making a
>>     "purge" call in whatever language I'm using to write my API is a win.
>>      That said, I'm not envisioning a lot of communication going upstream in
>>     this manner.  An authentication proxy service, for example, may need to
>>     communicate with an IDM system, but should require no input from the API
>>     service itself.  In fact, I would try to discourage such communication
>>     just to avoid chatter.
>> 
>>   I want to push back a little on that point -- I don't think we should
>>   optimize for low network chatter as much as for simplicity of design.  The
>>   example that started this thread was authentication having to happen at
>>   two points in the request chain.  If we tried to eliminate the deeper of
>>   the two requests to the auth server in order to reduce network chatter,
>>   the trade-off is having to bubble state up to the shallower server, making
>>   that server more complicated and making it harder to separate what each
>>   server in the chain does.  If we find that we're saturating the network
>>   with calls to a particular service, only then do I think we should start
>>   looking at alternatives like changing the request flow.
> 
> I agree we should avoid premature optimization, but we also want to
> make sure things don't break out of the gate. When it comes time
> to make decisions like this, we can usually hack together a quick
> prototype and run a few numbers to see what may be a problem.

I like the idea of prototyping as part of our decision process.
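
On Michael's two-points-in-the-chain example, the simple version of what he's arguing for might look something like this at each hop. The validate() and forward() interfaces are hypothetical, standing in for calls to the shared auth service and the next server:

    # Sketch only: every server in the chain re-validates the token
    # itself instead of trusting auth state bubbled down from a
    # shallower server. 'auth' and 'upstream' are assumed interfaces.

    def handle(request, auth, upstream):
        token = request.headers.get('X-Auth-Token')
        if not auth.validate(token):      # one extra network call per hop
            return 401                    # reject locally; no shared state
        return upstream.forward(request)  # the next hop repeats the check

The cost is an extra auth call per hop; the win is that each server stays self-contained, which is exactly the simplicity-over-chatter trade Michael describes.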

> 
>>     In cases where this can't be avoided, I would require that the proxy
>>     services expose a REST endpoint so we can take advantage of it even
>>     if a binding isn't available.
>> 
>>   I would definitely advocate for using REST for *all* our service
>>   communication unless there were a really strong reason for doing
>>   otherwise in an individual case.  It makes the system more consistent,
>>   makes it easy to interface with if you need to, makes it easy to write
>>   a unified logging module, lets us easily tcpdump the internal traffic
>>   if we ever need to, etc.  Putting a language binding on top of that is
>>   just added convenience.
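
For reference, the kind of REST endpoint I have in mind for a proxy service is nothing fancy. A toy WSGI version, where the /v1.0/validate route and its behavior are made up for illustration:

    # Toy example of a proxy service exposing a REST endpoint; any
    # language can call it, and the traffic is easy to tcpdump or log.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        path = environ.get('PATH_INFO', '')
        if path.startswith('/v1.0/validate/'):
            # A real service would check the token against the IDM
            # system behind it; this stub accepts everything.
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [b'valid\n']
        start_response('404 Not Found', [('Content-Type', 'text/plain')])
        return [b'']

    if __name__ == '__main__':
        make_server('', 8080, app).serve_forever()

A language binding on top of this is then just a thin wrapper around the HTTP call, which is where the convenience of something like a one-line "purge" comes from.
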
> 
> Agreed, mostly. I would advocate using the native API/protocol, and
> when one is not available or we are starting from scratch, fall back
> to REST. For example, there is no reason to force a queuing service
> like RabbitMQ to speak REST if AMQP is already supported. This means
> abstracting at the API layer and not the protocol layer as much
> as possible.

Agreed, there's a distinction between our internal services and what we expose to our clients. It's totally appropriate for us to use AMQP internally, but we'll need to expose something like Atom to our clients.
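
To illustrate abstracting at the API layer rather than the protocol layer, something like the following is what I picture: a Queue interface that projects code against, with each backend speaking its native protocol. The client objects and their methods here are assumed, not real bindings:

    # Sketch only: one queue API, multiple protocol backends.

    class Queue(object):
        def put(self, message):
            raise NotImplementedError
        def get(self):
            raise NotImplementedError

    class AmqpQueue(Queue):
        """Speaks AMQP natively (e.g. to RabbitMQ); 'client' is an
        assumed wrapper around whatever AMQP binding we pick."""
        def __init__(self, client, name):
            self.client, self.name = client, name
        def put(self, message):
            self.client.publish(self.name, message)   # assumed method
        def get(self):
            return self.client.get_one(self.name)     # assumed method

    class RestQueue(Queue):
        """Fallback for services with no native protocol; 'http' is an
        assumed minimal HTTP client with post()/get() methods."""
        def __init__(self, http, base_url):
            self.http, self.base_url = http, base_url
        def put(self, message):
            self.http.post(self.base_url + '/messages', message)
        def get(self):
            return self.http.get(self.base_url + '/messages/next')

The point is that swapping RabbitMQ for something else, or swapping a proxy for a third-party one, only touches the backend class, not the callers.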

-jOrGe W.




