[openstack-tc] [Foundation Board] Spider "What is Core" Discussion Continued - Monday 7/15 1-3pm Central

Monty Taylor mordred at inaugust.com
Thu Jul 11 19:15:04 UTC 2013



On 07/11/2013 03:14 PM, Mark Collier wrote:
> I agree that "snowflakes" are undesirable, and didn't intend to imply otherwise. 
> 
> I was trying to endorse the idea you expressed recently (perhaps more eloquently) that "...we need to do better at discoverable
> capabilities, so that an end-user client would be able to discover that
> it should not attempt to use a missing feature."  
> 
> Perhaps the point of clarity is that "snowflakes" happen when two clouds have the same feature but implement it differently, to the point of impacting interop (bad news), as opposed to some clouds simply lacking a certain feature altogether, which should be discoverable by an end-user client. The latter is what I had in mind.

Totes.
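
To make that concrete: the sort of probe I'd want every end-user client
to be able to do is roughly the following. This is only a sketch against
Nova's v2 GET /extensions listing - the URL, token and alias are
placeholders, not a proposal for how discovery should finally work:

    import requests

    def has_extension(compute_url, token, alias):
        # Nova's v2 API advertises its loaded extensions at
        # GET /extensions; each entry carries an alias like
        # "os-floating-ips".
        resp = requests.get(compute_url + '/extensions',
                            headers={'X-Auth-Token': token})
        resp.raise_for_status()
        return any(ext.get('alias') == alias
                   for ext in resp.json().get('extensions', []))

    # Degrade gracefully instead of failing mid-run:
    # if not has_extension(url, token, 'os-floating-ips'):
    #     skip_floating_ip_logic()

A client that checks first can skip the missing feature instead of
half-deploying and then exploding.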

> 
> On Jul 11, 2013, at 1:02 PM, Monty Taylor <mordred at inaugust.com> wrote:
> 
>>
>>
>> On 07/11/2013 01:15 PM, Mark Collier wrote:
>>> +1 to your last sentence... pointing out that the current policy /
>>> license agreements specifically mandate that a product must pass a
>>> TC-approved interop test (a.k.a. FITS).
>>>
>>> On a practical level, I see the development of the test (leveraging
>>> Tempest?) and decisions about what are "must pass" vs. "nice to pass" as
>>> the critical next steps.
>>>
>>> I'm not sure if it makes things simpler or more complex to equate "must
>>> pass" with "core" and "nice to pass" with "non-core integrated"...
>>
>> As a quick data point - both the conversations at the board that Rob,
>> Josh and I had and the technical ones we had at the last summit started
>> with getting a scoreboard built at all. There are several related tasks
>> around getting this done, which Josh and I have both been calling
>> refstack, but which I am coming to believe are actually two completely
>> separate projects.
>>
>> The thing we discussed at the last summit was, as a next step, being
>> able to run tempest against a cloud with a standard tempest config (not
>> customized per cloud). This would then produce some number of failures
>> and some number of passes, and that's expected. The follow-on work is
>> analyzing and reporting on those passes and failures in some
>> understandable manner, so that board members can look at the status of
>> a given feature or concept and start to make decisions. That's the part
>> the OpenStack project has slated to work on, and it's pretty nicely
>> tied to other infra work anyway.
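>>
>> As a rough sketch of that reporting step - the input format below is
>> invented for illustration, and none of this is the actual tooling:
>>
>>     from collections import defaultdict
>>
>>     def scoreboard(result_lines):
>>         # result_lines: iterable of "test_id STATUS" strings, e.g.
>>         # "tempest.api.compute.servers.test_list PASS" (an assumed
>>         # format, not tempest's real output).
>>         totals = defaultdict(lambda: [0, 0])  # group -> [passed, run]
>>         for line in result_lines:
>>             test_id, status = line.rsplit(None, 1)
>>             # Roll individual tests up to e.g. "tempest.api.compute".
>>             group = '.'.join(test_id.split('.')[:3])
>>             totals[group][1] += 1
>>             if status == 'PASS':
>>                 totals[group][0] += 1
>>         for group, (passed, run) in sorted(totals.items()):
>>             print('%-35s %d/%d passing' % (group, passed, run))
>>
>> The aggregation itself is mechanical; the hard part is agreeing on the
>> standard config and on which groups of tests matter.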
>>
>> The other effort, which is what Josh has been calling refstack, is about
>> having a system for registering endpoints, requesting to be tested and
>> presenting a dashboard of results. This isn't a thing that the OpenStack
>> project itself is really involved with, and it may or may not even be a
>> thing that the foundation officially runs - but I believe that if it
>> consumes the output of the first effort, it can be a useful service. I
>> think, for what it's worth, that Josh's thing is most likely the one to
>> retain the refstack name, and the other thing is going to be named
>> something else. Like Larry. I dunno. Not important right now.
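>>
>> To show the shape I have in mind, a strawman of the records such a
>> service would keep - every field name and value here is invented:
>>
>>     # Invented record shapes; nothing here is an existing schema.
>>     registration = {
>>         'cloud_name': 'example-public-cloud',
>>         'keystone_url': 'https://identity.example.com/v2.0/',
>>         'contact': 'ops at example.com',
>>     }
>>     latest_run = {
>>         'cloud_name': 'example-public-cloud',
>>         'tempest_ref': 'stable/grizzly',  # suite version that was run
>>         'passed': 812,
>>         'failed': 47,
>>         'report_url': 'https://refstack.example.com/runs/1234',
>>     }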
>>
>> I mention this because there are efforts going on in parallel with this
>> discussion, so that we'll be in a position to actually report on and
>> respond to whatever the outcomes here are.
>>
>>> "core" and "must pass" are both, to me, about doing whatever we can to
>>> create, in the real world marketplace, lots of clouds calling themselves
>>> "openstack" that have a set of functionality and behavior that can be
>>> relied upon by end users (app developers etc).
>>>
>>> Discoverability via API of what's inside a particular cloud is certainly
>>> a desirable direction to account for the fact that deployments in the
>>> real world are quite diverse.
>>
>> I think "certainly" is a strong word, and I think that there is a
>> world view in which it's a terrible idea. I think it's a worthwhile
>> discussion to have. "To account for the fact that deployments in the
>> real world are quite diverse" is the current situation we are in because
>> we started off very focused on the needs of the service provider. It can
>> certainly be argued that allowing that divergence comes at a cost. In
>> fact, as someone who runs a massive cloud application across two public
>> OpenStack clouds, I can tell you that the user experience is that in the
>> places where they do not diverge, having multiple OpenStack clouds is
>> AMAZING and I am able to produce AMAZING applications. In the places
>> where they do diverge, I want to kill people, because it makes those
>> features completely and totally useless to me as a consumer.
>>
>> I'll call out Swift CDN as an example. CDN is an extension at both
>> Rackspace and HP because Swift core does not do CDN. That means that I
>> cannot do CDN things with python-swiftclient, which means that I cannot
>> consistently use the two Swifts I have access to - which means I use
>> neither. I'm sad about that, because Swift is great technology. Instead,
>> I have a Nova VM connected to a very large Cinder volume, and I run an
>> Apache server on it.
>>
>> So one can argue that it's important to let providers make their own
>> choices and do things differently, and that would be great if the value
>> proposition we were trying to get at here was to make Rackspace Cloud
>> or HP Cloud the best cloud in the world. But we're not. The value
>> proposition that I'm working on is making OpenStack the best meta-cloud.
>> Every way in which Rackspace's and HP's public clouds diverge is a nail
>> in our coffin, and one more step we're taking down the path of the Open
>> Group, POSIX and the death of the traditional Unixes. Every way in which
>> the constituent clouds that are part of the global OpenStack meta-cloud
>> converge is a step towards us winning. AND growing the market. AND
>> making tons of money for both Rackspace's and HP's clouds.
>>
>> But we have GOT to continue fighting the urge to think of each cloud as
>> a beautiful unique snowflake.
>>
>>>
>>> I believe the spirit of the "must enable plug-ins to be considered for
>>> core" rule (and having at least one open source reference that's usable)
>>> is philosophically about ensuring the real world of "OpenStack clouds"
>>> is flexible enough to accommodate multiple technologies to solve a
>>> particular domain's problems, while guarding against a trojan-horse
>>> vendor lock-in scenario.
>>>
>>> On Thursday, July 11, 2013 11:56am, "Monty Taylor"
>>> <mordred at inaugust.com> said:
>>>
>>>>
>>>>
>>>> On 07/11/2013 12:39 PM, Thierry Carrez wrote:
>>>>> Russell Bryant wrote:
>>>>>> On 07/10/2013 02:53 PM, Rob_Hirschfeld at Dell.com wrote:
>>>>>>> Background info:
>>>>>>> https://etherpad.openstack.org/Board-2013-SpiderDiscussion
>>>>>>
>>>>>> This is the first time I've seen this. I must admit that my initial
>>>>>> reaction is that I'm not comfortable with the direction this seems
>>>>>> to be taking.
>>>>>>
>>>>>> I understand the need to have a solid definition of what "core" means.
>>>>>> I also assume that the goal here is to eventually arrive at some set of
>>>>>> definitions and policies.
>>>>>>
>>>>>> However, some of the specific items discussed on this etherpad are
>>>>>> things that are, in my opinion, in TC (or even project-specific
>>>>>> governance) territory, and should be considered out of scope for any
>>>>>> policy coming from the board.
>>>>>
>>>>> This is new to me too, but AFAICT it's an effort to define the list of
>>>>> criteria the board intends to apply when granting the "core" label to a
>>>>> given project.
>>>>>
>>>>> We ruled that the TC was free to produce the stuff it wanted, and that
>>>>> the board was free to apply a "core" label to a subset of that. They are
>>>>> also free to define what they mean by "core" (or any other label they
>>>>> may want to create).
>>>>>
>>>>> As an example:
>>>>>
>>>>>> * In the introduction, the secondary issue identified is whether
>>>>>> projects should be pluggable. I believe this is TC territory.
>>>>>
>>>>> If they want to grant the "core" label only to pluggable projects, I'm
>>>>> not sure that would be in our territory?
>>>>
>>>> No, I believe Russell is correct, and I'm sorry I did not catch/raise
>>>> this earlier. The reason we have a board/TC split is separation of
>>>> specialty. It is not expected that people on the board have the
>>>> technical background to make technical decisions; conversely, it is
>>>> not expected that members of the TC have the business/legal background
>>>> to make decisions on issues around brand or trademark. Those of us on
>>>> the board who have technical backgrounds must be vigilant about that
>>>> and not forget the role we have been asked to play on that body. In
>>>> that regard, I believe I have failed at the moment.
>>>>
>>>> The split between integrated and core is similarly intended to let the
>>>> technical body decide about implementation issues and let the board make
>>>> decisions on the *what*, as Russell says. While the language may
>>>> theoretically allow the board to apply whatever criteria it wants to
>>>> grant the core label, I think it's very important we don't create a
>>>> shadow TC of folks making additional technical judgment calls and using
>>>> trademark to enforce them. It's not an us vs. them thing - it's quite
>>>> simply a scope-of-body-of-people thing. If both bodies have 'final' say
>>>> on a technical matter but with a different label, no one anywhere is
>>>> going to be able to figure out what the heck OpenStack is.
>>>>
>>>> Back to the matter at hand, I think Doug's suggestions move in the
>>>> direction of where the language should go.
>>>>
>>>> "The cloud must pass the automated test suite designated by the TC as
>>>> defining interoperability"
>>>>
>>>> both states an outcome the board wants to see, and lets the TC decide.
>>>> I'd even remove the word 'automated' - although I'm _certain_ that the
>>>> TC would want it to be automated and not manual. That sentence above is
>>>> actually quite similar to one that's in our current trademark policy, btw.
>>>>
>>>>
>>>> _______________________________________________
>>>> Foundation-board mailing list
>>>> Foundation-board at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation-board
>>>
> 


