[openstack-dev] [all] The future of the integrated release
harlowja at outlook.com
Wed Aug 13 13:29:44 UTC 2014
On Wed, Aug 13, 2014 at 5:37 AM, Mark McLoughlin <markmc at redhat.com> wrote:
> On Fri, 2014-08-08 at 15:36 -0700, Devananda van der Veen wrote:
>> On Tue, Aug 5, 2014 at 10:02 AM, Monty Taylor
>> <mordred at inaugust.com> wrote:
>> > Yes.
>> > Additionally, and I think we've been getting better at this in
>> the 2 cycles
>> > that we've had an all-elected TC, I think we need to learn how to
>> say no on
>> > technical merit - and we need to learn how to say "thank you for
>> > effort, but this isn't working out" Breaking up with someone is
>> hard to do,
>> > but sometimes it's best for everyone involved.
>> I agree.
>> The challenge is scaling the technical assessment of projects. We're
>> all busy, and digging deeply enough into a new project to make an
>> accurate assessment of it is time-consuming. Sometimes, there are
>> impartial subject-matter experts who can spot problems very quickly,
>> but how do we actually gauge fitness?
> Yes, it's important the TC does this and it's obvious we need to get a
> lot better at it.
> The Marconi architecture threads are an example of us trying harder
(kudos to you for taking the time), but it's a little disappointing how
> it has turned out. On the one hand there's what seems like a "this
> doesn't make any sense" gut feeling and on the other hand an earnest,
> but hardly bite-sized justification for how the API was chosen and how
> it led to the architecture. It's frustrating that this appears not to
> be resulting in either improved shared understanding or improved
> architecture. Yet everyone is trying really hard.
>> Letting the industry field-test a project and feed their experience
>> back into the community is a slow process, but that is the best
>> measure of a project's success. I seem to recall this being an
>> implicit expectation a few years ago, but haven't seen it discussed in
>> a while.
> I think I recall us discussing a "must have feedback that it's
> successfully deployed" requirement in the last cycle, but we found
> that deployers often wait until a project is integrated.
>> I'm not suggesting we make a policy of it, but if, after a
>> few cycles, a project is still not meeting the needs of users, I think
>> that's a very good reason to free up the hold on that role within the
>> stack so other projects can try and fill it (assuming that is even a
>> role we would want filled).
> I'm certainly not against discussing de-integration proposals. But I
> could imagine a case for de-integrating every single one of our
> integrated projects. None of our software is perfect. How do we make
> sure we approach this sanely, rather than run the risk of someone
> starting a witch hunt because of a particular pet peeve?
> I could imagine a really useful dashboard showing the current state of
> projects along a bunch of different lines - summary of latest
> deployments data from the user survey, links to known scalability
> issues, limitations that operators should take into account, some
> capturing of trends so we know whether things are improving. All of
> this data would be useful to the TC, but also hugely useful to operators.
This seems to be the only way to determine when a project isn't working
out for the users in the community.
With such unbiased data available, it would make a strong case for why
de-integration should happen. It would then allow the project to go back
and fix itself, or allow a replacement to be created that doesn't have
the same set of limitations/problems. This seems like a way that lets
the project that works best for users eventually be selected (survival
of the fittest). Although we also have to be careful: software isn't
static and can be reshaped and molded, so we should give a project that
has issues a chance to reshape itself (giving it the benefit of the
doubt rather than not).
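To make the dashboard idea concrete, here's a minimal sketch of what
aggregating such per-project data might look like. All field names and
the trend metric are invented for illustration; real inputs would come
from the user survey and bug trackers:

```python
# Hypothetical sketch of a project-health dashboard aggregator.
# The data model and metric below are illustrative assumptions,
# not any existing OpenStack tooling.
from dataclasses import dataclass


@dataclass
class ProjectHealth:
    name: str
    deployments: int        # e.g. from the latest user survey
    open_scaling_bugs: int  # known scalability issues
    prior_deployments: int  # same figure from the previous survey

    @property
    def adoption_trend(self) -> int:
        """Positive when adoption is growing cycle over cycle."""
        return self.deployments - self.prior_deployments


def dashboard(projects):
    """Sort projects so those losing adoption surface first."""
    return sorted(projects, key=lambda p: p.adoption_trend)


projects = [
    ProjectHealth("nova", deployments=120, open_scaling_bugs=14,
                  prior_deployments=100),
    ProjectHealth("example-svc", deployments=8, open_scaling_bugs=3,
                  prior_deployments=12),
]

for p in dashboard(projects):
    print(p.name, p.adoption_trend)
```

The point isn't the specific metric, but that any de-integration
discussion could start from the same shared, trend-aware numbers
instead of gut feeling.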