[Openstack] OpenStack Compute API 1.1

Justin Santa Barbara justin at fathomdb.com
Fri Feb 18 17:57:12 UTC 2011


> How is the 1.1 api proposal breaking this?

Because if we launch an OpenStack API, the expectation is that this will be
the OpenStack API :-)

If we support a third-party API (CloudServers or EC2), then people will
continue to use their existing wrappers (e.g. jclouds).  Once there's an
OpenStack API, end-users will want to find a library for that, and we
don't want that to be a poor experience.  To maintain a good experience, we
either can't break the API, or we need to write and maintain a lot of
proxying code to preserve compatibility.  We know we're not ready for the
first commitment, and I don't think we gain enough to justify the second.
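
To make that second cost concrete, here's a rough sketch, in the spirit
of nova's Python, of the kind of translation shim we'd be signing up to
maintain for every request whose shape changes between versions (the
routes and field mappings below are illustrative guesses, not the spec):

    # Hypothetical shim: keep a retired 1.0-style request working by
    # rewriting its body into the newer shape before dispatch.
    def translate_v10_create_server(body_v10):
        """Map a 1.0 'create server' body onto an assumed 1.1 shape."""
        server = body_v10["server"]
        return {
            "server": {
                "name": server["name"],
                # guessing 1.1 moves from bare ids to href-style refs
                "imageRef": "/images/%s" % server["imageId"],
                "flavorRef": "/flavors/%s" % server["flavorId"],
                "metadata": server.get("metadata", {}),
            }
        }

Every endpoint we rename or reshape grows another function like this,
and each one has to be tested and carried forward indefinitely.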

> I think the proxy would make sense if you wanted to have a single api.
> Not all service providers will, but I see this as entirely optional, not
> required to use the services.

But then we have two OpenStack APIs?  Our ultimate end users don't use the
API directly; they use a wrapper library.  They want a stable library that
works and is kept up to date with recent changes, and they don't care
what's going on under the covers.  Wrapper library authors want an API that
is (1) a single API and (2) stable with reasonable evolution; otherwise
they'll abandon their wrapper or stop updating it.
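
For illustration, here's the sort of branching a wrapper author ends up
carrying once two API flavours are live at the same time (the version
ids and document shape are my assumptions, not a published contract):

    def pick_api_version(root_doc):
        """Pick a version this wrapper can speak, given the endpoint's
        parsed root document, assumed to look something like:
        {"versions": [{"id": "v1.0", "status": "CURRENT"}, ...]}
        """
        versions = root_doc.get("versions", [])
        supported = sorted((v for v in versions
                            if v.get("id") in ("v1.0", "v1.1")),
                           key=lambda v: v["id"])
        if not supported:
            raise ValueError("endpoint speaks no version we understand")
        # Prefer whatever the server marks CURRENT; else take the newest.
        current = [v for v in supported if v.get("status") == "CURRENT"]
        return (current or supported)[-1]["id"]

Multiply that by every resource whose shape differs between versions,
and it's easy to see why wrapper maintainers walk away.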

> The extensions mechanism is the biggest change, iirc.

I'm not a big fan of the extensions idea, because it feels more like a
reflection of a management goal ("OpenStack is open to extensions") than a
technical decision.  Supporting separate APIs feels like a better way to do
that.  I'm very open to being corrected here, but I think we need to see
code that wants to use the extension API and isn't better done as a
separate API.  Right now I haven't seen any patches, and that makes me
uneasy.
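
To sketch what I mean: any wrapper that uses an extension seems bound to
end up with code like the following, i.e. a per-provider branch inside
what was supposed to be one API.  (The alias, action name, document
shape, and client object here are all invented for illustration; I'd be
happy to be shown a real patch that looks better than this.)

    # Sketch of client code gated on the proposed extensions mechanism.
    def has_extension(extensions_doc, alias):
        """extensions_doc: parsed JSON from a hypothetical GET /extensions."""
        return any(ext.get("alias") == alias
                   for ext in extensions_doc.get("extensions", []))

    def pause_server(client, server_id, extensions_doc):
        # "xx-pause" is an invented extension alias, for illustration only.
        if has_extension(extensions_doc, "xx-pause"):
            return client.post("/servers/%s/action" % server_id,
                               {"xx-pause:pause": {}})
        raise NotImplementedError("provider doesn't offer this extension")

A separate, named API at least makes that branch explicit in the
wrapper's dependencies instead of hiding it behind a runtime probe.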





On Fri, Feb 18, 2011 at 9:29 AM, Paul Voccio <paul.voccio at rackspace.com> wrote:

>  The specs for 1.0 and 1.1 are pretty close. The extensions mechanism is
> the biggest change, iirc.
>
>  I think the proxy would make sense if you wanted to have a single api.
> Not all service providers will, but I see this as entirely optional, not
> required to use the services.
>
>  The push to get a completed compute api is the desire to move away from
> the ec2 api to something that we can guide, extend and vote on as a
> community. The sooner we do this, the better.
>
>  How is the 1.1 api proposal breaking this?
>
>   From: Justin Santa Barbara <justin at fathomdb.com>
> Date: Fri, 18 Feb 2011 09:10:19 -0800
> To: Paul Voccio <paul.voccio at rackspace.com>
> Cc: Jay Pipes <jaypipes at gmail.com>, "openstack at lists.launchpad.net" <
> openstack at lists.launchpad.net>
>
> Subject: Re: [Openstack] OpenStack Compute API 1.1
>
>  Jay: The AMQP->REST was the re-architecting I was referring to, which
> would not be customer-facing (other than likely introducing new bugs.)
>  Spinning off the services, if this is visible at the API level, is much
> more concerning to me.
>
>  So Paul, I think the proxy is good because it acknowledges the importance
> of keeping a consistent API.  But - if our API isn't finalized - why push it
> out at all, particularly if we're then going to have the overhead of
> maintaining another translation layer?  For Cactus, let's just support EC2
> and/or CloudServers 1.0 API compatibility (again a translation layer, but
> one we probably have to support anyway.)  Then we can design the right
> OpenStack API at our leisure and meet all of our goals: a stable Cactus and
> stable APIs.  If anyone ends up coding to a Cactus OpenStack API, we
> shouldn't have them become second-class citizens 3 months later.
>
> Justin
>
>
>
>
>
> On Fri, Feb 18, 2011 at 6:31 AM, Paul Voccio <paul.voccio at rackspace.com> wrote:
>
>> Jay,
>>
>> I understand Justin's concern: if we move /network and /images and
>> /volume to their own endpoints, then it would be a change to the customer.
>> I think this could be solved by putting a proxy in front of each endpoint
>> and routing back to the appropriate service endpoint.
>>
>> I added another image on the wiki page to describe what I'm trying to say.
>> http://wiki.openstack.org/api_transition
>>
>>  I think this might not be as bad a transition, since the compute worker
>> would receive a request for a new compute node and then proxy over to the
>> admin or public api of the network or volume node to request information.
>> It would work very similarly to how the queues work now.
>>
>> pvo
>>
>> On 2/17/11 8:33 PM, "Jay Pipes" <jaypipes at gmail.com> wrote:
>>
>> >Sorry, I don't view the proposed changes from AMQP to REST as being
>> >"customer facing API changes". Could you explain? These are internal
>> >interfaces, no?
>> >
>> >-jay
>> >
>> >On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
>> ><justin at fathomdb.com> wrote:
>> >> An API is for life, not just for Cactus.
>> >> I agree that stability is important.  I don't see how we can claim to
>> >> deliver 'stability' when the plan is then immediately to destabilize
>> >> everything with a very disruptive change soon after, including
>> >> customer-facing API changes and massive internal re-architecting.
>> >>
>> >>
>> >> On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes <jaypipes at gmail.com> wrote:
>> >>>
>> >>> On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
>> >>> <justin at fathomdb.com> wrote:
>> >>> > Pulling volumes & images out into separate services (and moving
>> >>> > from AMQP to REST) sounds like a huge breaking change, so if that
>> >>> > is indeed the plan, let's do that asap (i.e. Cactus).
>> >>>
>> >>> Sorry, I have to disagree with you here, Justin :)  The Cactus release
>> >>> is supposed to be about stability, and the only feature work going into
>> >>> Cactus should be to achieve API parity of the OpenStack Compute API
>> >>> with the Rackspace Cloud Servers API. A huge change like moving
>> >>> communication from AMQP to HTTP for volume and network would likely
>> >>> severely undermine the stability of the Cactus release.
>> >>>
>> >>> -jay
>> >>
>> >>