[openstack-dev] Re: OpenStack-dev Digest, Vol 28, Issue 92

Angelo angelo.matarazzo at dektech.com.au
Fri Aug 29 04:57:38 UTC 2014


Thanks for your review.

Sent from my Xperia™ smartphone

-------- Original message --------
Subject: OpenStack-dev Digest, Vol 28, Issue 92
From: openstack-dev-request at lists.openstack.org
To: openstack-dev at lists.openstack.org
CC: 

Send OpenStack-dev mailing list submissions to
	openstack-dev at lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
	http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
or, via email, send a message with subject or body 'help' to
	openstack-dev-request at lists.openstack.org

You can reach the person managing the list at
	openstack-dev-owner at lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of OpenStack-dev digest..."


Today's Topics:

   1. Re: [Neutron][LBaaS] Design sessions for Neutron LBaaS. What
      do we want/need? (Susanne Balle)
   2. Re: [all] Design Summit reloaded (Sean Dague)
   3. Re: [all] Design Summit reloaded (Anne Gentle)
   4. Re: [Octavia] Octavia VM image design (Susanne Balle)
   5. Re: [all] gate debugging (Doug Hellmann)
   6. Re: [all] gate debugging (Doug Hellmann)
   7. Re: [nova] Is the BP approval process broken? (Jay Pipes)
   8. Re: [all] Design Summit reloaded (Jay Pipes)
   9. Re: [oslo] change to deprecation policy in the incubator
      (Doug Hellmann)
  10. Re: [all] Design Summit reloaded (Anita Kuno)
  11. Re: [nova] Is the BP approval process broken? (Chris Friesen)
  12. Re: [all] Design Summit reloaded (Doug Hellmann)
  13. Re: [QA] Picking a Name for the Tempest Library (Matthew Treinish)
  14. Re: [Octavia] Using Nova Scheduling Affinity and AntiAffinity
      (Brandon Logan)
  15. Re: [nova] Is the BP approval process broken? (Jay Pipes)
  16. Re: [nova] Is the BP approval process broken? (Dugger, Donald D)
  17. Re: [Octavia] Using Nova Scheduling Affinity and AntiAffinity
      (Stephen Balukoff)
  18. Re: [oslo.messaging] Request to include AMQP 1.0 support in
      Juno-3 (Ken Giusti)
  19. Re: [nova] Is the BP approval process broken? (Chris Friesen)
  20. Re: [nova] Is the BP approval process broken? (Jay Pipes)
  21. Re: [Octavia] Using Nova Scheduling Affinity and AntiAffinity
      (Brandon Logan)
  22. Re: [nova] Is the BP approval process broken? (Chris Friesen)
  23. Re: [Octavia] Using Nova Scheduling Affinity and AntiAffinity
      (Susanne Balle)
  24. Re: [nova] Is the BP approval process broken? (Alan Kavanagh)
  25. Re: [nova] Is the BP approval process broken? (Alan Kavanagh)
  26. Re: [nova] Is the BP approval process broken? (Alan Kavanagh)
  27. Re: [nova] [neutron] Specs for K release (Alan Kavanagh)
  28. [neutron][lbaas][octavia] (Susanne Balle)
  29. Re: [nova] Is the BP approval process broken? (Joe Gordon)
  30. Re: [neutron][lbaas][octavia] (Susanne Balle)
  31. Re: [nova] Is the BP approval process broken? (Boris Pavlovic)
  32. Re: [nova] Is the BP approval process broken? (Chris Friesen)
  33. Re: [nova] Is the BP approval process broken? (Alan Kavanagh)
  34. Re: [all] [ptls] The Czar system, or how to scale PTLs
      (James Polley)


----------------------------------------------------------------------

Message: 1
Date: Thu, 28 Aug 2014 15:26:43 -0400
From: Susanne Balle <sleipnir012 at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Design sessions for
	Neutron LBaaS. What do we want/need?
Message-ID:
	<CADBYD+zu3smCBtsa4hsMEprXzDZgGzNSD7-i-ZUQEFEw67qFbg at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Let's use a different email thread to discuss if Octavia should be part of
the Neutron incubator project right away or not. I would like to keep the
two discussions separate.



Susanne


On Thu, Aug 28, 2014 at 3:20 PM, Stephen Balukoff <sbalukoff at bluebox.net>
wrote:

> Hi Susanne--
>
> Regarding the Octavia sessions:  I think we probably will have enough to
> discuss that we could use two design sessions.  However, I also think that
> we can probably come to conclusions on whether Octavia should become a part
> of Neutron Incubator right away via discussion on this mailing list.  Do we
> want to have that discussion in another thread, or should we use this one?
>
> Stephen
>
>
> On Thu, Aug 28, 2014 at 7:51 AM, Susanne Balle <sleipnir012 at gmail.com>
> wrote:
>
>> With a corrected Subject. Susanne
>>
>>
>>
>> On Thu, Aug 28, 2014 at 10:49 AM, Susanne Balle <sleipnir012 at gmail.com>
>> wrote:
>>
>>>
>>> LBaaS team,
>>>
>>> As we discussed in the Weekly LBaaS meeting this morning we should make
>>> sure we get the design sessions scheduled that we are interested in.
>>>
>>> We currently agreed on the following:
>>>
>>> * Neutron LBaaS: we want to schedule 2 sessions. I am assuming that we
>>> want to go over status and also the whole incubator thingy and how we will
>>> best move forward.
>>>
>>> * Octavia: We want to schedule 2 sessions.
>>> ---  During one of the sessions I would like to discuss the pros and
>>> cons of putting Octavia into the Neutron LBaaS incubator project right
>>> away. If it is going to be the reference implementation for LBaaS v2 then
>>> I believe Octavia belongs in the Neutron LBaaS v2 incubator.
>>>
>>> * Flavors which should be coordinated with markmcclain and enikanorov.
>>> --- https://review.openstack.org/#/c/102723/
>>>
>>> Is this too many sessions given the constraints? I am assuming that we
>>> can also meet at the pods like we did at the last summit.
>>>
>>> thoughts?
>>>
>>> Regards Susanne
>>>
>>> Thierry Carrez <thierry at openstack.org>
>>> Aug 27 (1 day ago)
>>>  to OpenStack
>>>  Hi everyone,
>>>
>>> I've been thinking about what changes we can bring to the Design Summit
>>> format to make it more productive. I've heard the feedback from the
>>> mid-cycle meetups and would like to apply some of those ideas for Paris,
>>> within the constraints we have (already booked space and time). Here is
>>> something we could do:
>>>
>>> Day 1. Cross-project sessions / incubated projects / other projects
>>>
>>> I think that worked well last time. 3 parallel rooms where we can
>>> address top cross-project questions, discuss the results of the various
>>> experiments we conducted during juno. Don't hesitate to schedule 2 slots
>>> for discussions, so that we have time to come to the bottom of those
>>> issues. Incubated projects (and maybe "other" projects, if space allows)
>>> occupy the remaining space on day 1, and could occupy "pods" on the
>>> other days.
>>>
>>> Day 2 and Day 3. Scheduled sessions for various programs
>>>
>>> That's our traditional scheduled space. We'll have 33% fewer slots
>>> available. So, rather than trying to cover all the scope, the idea would
>>> be to focus those sessions on specific issues which really require
>>> face-to-face discussion (which can't be solved on the ML or using spec
>>> discussion) *or* require a lot of user feedback. That way, appearing in
>>> the general schedule is very helpful. This will require us to be a lot
>>> stricter on what we accept there and what we don't -- we won't have
>>> space for courtesy sessions anymore, and traditional/unnecessary
>>> sessions (like my traditional "release schedule" one) should just move
>>> to the mailing-list.
>>>
>>> Day 4. Contributors meetups
>>>
>>> On the last day, we could try to split the space so that we can conduct
>>> parallel midcycle-meetup-like contributors gatherings, with no time
>>> boundaries and an open agenda. Large projects could get a full day,
>>> smaller projects would get half a day (but could continue the discussion
>>> in a local bar). Ideally that meetup would end with some alignment on
>>> release goals, but the idea is to make the best of that time together to
>>> solve the issues you have. Friday would finish with the design summit
>>> feedback session, for those who are still around.
>>>
>>>
>>> I think this proposal makes the best use of our setup: discuss clear
>>> cross-project issues, address key specific topics which need
>>> face-to-face time and broader attendance, then try to replicate the
>>> success of midcycle meetup-like open unscheduled time to discuss
>>> whatever is hot at this point.
>>>
>>> There are still details to work out (is it possible to split the space,
>>> should we use the usual design summit CFP website to organize the
>>> "scheduled" time...), but I would first like to have your feedback on
>>> this format. Also if you have alternative proposals that would make a
>>> better use of our 4 days, let me know.
>>>
>>> Cheers,
>>>
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140828/0ce64eba/attachment-0001.html>

------------------------------

Message: 2
Date: Thu, 28 Aug 2014 15:31:25 -0400
From: Sean Dague <sean at dague.net>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [all] Design Summit reloaded
Message-ID: <53FF838D.1010504 at dague.net>
Content-Type: text/plain; charset="utf-8"

On 08/28/2014 03:06 PM, Jay Pipes wrote:
> On 08/28/2014 02:21 PM, Sean Dague wrote:
>> On 08/28/2014 01:58 PM, Jay Pipes wrote:
>>> On 08/27/2014 11:34 AM, Doug Hellmann wrote:
>>>>
>>>> On Aug 27, 2014, at 8:51 AM, Thierry Carrez <thierry at openstack.org>
>>>> wrote:
>>>>
>>>>> Hi everyone,
>>>>>
>>>>> I've been thinking about what changes we can bring to the Design
>>>>> Summit format to make it more productive. I've heard the feedback
>>>>> from the mid-cycle meetups and would like to apply some of those
>>>>> ideas for Paris, within the constraints we have (already booked
>>>>> space and time). Here is something we could do:
>>>>>
>>>>> Day 1. Cross-project sessions / incubated projects / other
>>>>> projects
>>>>>
>>>>> I think that worked well last time. 3 parallel rooms where we can
>>>>> address top cross-project questions, discuss the results of the
>>>>> various experiments we conducted during juno. Don't hesitate to
>>>>> schedule 2 slots for discussions, so that we have time to come to
>>>>> the bottom of those issues. Incubated projects (and maybe "other"
>>>>> projects, if space allows) occupy the remaining space on day 1, and
>>>>> could occupy "pods" on the other days.
>>>>
>>>> If anything, I'd like to have fewer cross-project tracks running
>>>> simultaneously. Depending on which are proposed, maybe we can make
>>>> that happen. On the other hand, cross-project issues is a big theme
>>>> right now so maybe we should consider devoting more than a day to
>>>> dealing with them.
>>>
>>> I agree with Doug here. I'd almost say having a single cross-project
>>> room, with serialized content would be better than 3 separate
>>> cross-project tracks. By nature, the cross-project sessions will attract
>>> developers that work or are interested in a set of projects that looks
>>> like a big Venn diagram. By having 3 separate cross-project tracks, we
>>> would increase the likelihood that developers would once more have to
>>> choose among simultaneous sessions that they have equal interest in. For
>>> Infra and QA folks, this likelihood is even greater...
>>>
>>> I think I'd prefer a single cross-project track on the first day.
>>
>> So the fallout of that is there will be 6 or 7 cross-project slots for
>> the design summit. Maybe that's the right mix if the TC does a good job
>> picking the top 5 things we want accomplished from a cross project
>> standpoint during the cycle. But it's going to have to be a pretty
>> directed pick. I think last time we had 21 slots, and with a couple of
>> sessions doubled up that gave 19 sessions. (about 30 - 35 proposals for that
>> slot set).
> 
> I'm not sure that would be a bad thing :)
> 
> I think one of the reasons the mid-cycles have been successful is that
> they have adequately limited the scope of discussions and I think by
> doing our homework by fully vetting and voting on cross-project sessions
> and being OK with saying "No, not this time.", we will be more
> productive than if we had 20+ cross-project sessions.
> 
> Just my two cents, though..

I'm not sure it would be a bad thing either. I just wanted to be
explicit about what we are saying the cross-project sessions are for in
this case: the 5 key cross project activities the TC believes should be
worked on this next cycle.

The other question is if we did that what's running in competition to
cross project day? Is it another free form pod day for people not
working on those things?

	-Sean

> 
> -jay
> 
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Sean Dague
http://dague.net

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 482 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140828/a4ef0f5e/attachment-0001.pgp>

------------------------------

Message: 3
Date: Thu, 28 Aug 2014 14:32:51 -0500
From: Anne Gentle <anne at openstack.org>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [all] Design Summit reloaded
Message-ID:
	<CAD0KtVHuPTjkRR7xCTejL_fRV5Ds_E7n+krkhOv+hzm4KkFm1w at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Wed, Aug 27, 2014 at 7:51 AM, Thierry Carrez <thierry at openstack.org>
wrote:

> Hi everyone,
>
> I've been thinking about what changes we can bring to the Design Summit
> format to make it more productive. I've heard the feedback from the
> mid-cycle meetups and would like to apply some of those ideas for Paris,
> within the constraints we have (already booked space and time). Here is
> something we could do:
>
> Day 1. Cross-project sessions / incubated projects / other projects
>
> I think that worked well last time. 3 parallel rooms where we can
> address top cross-project questions, discuss the results of the various
> experiments we conducted during juno. Don't hesitate to schedule 2 slots
> for discussions, so that we have time to come to the bottom of those
> issues. Incubated projects (and maybe "other" projects, if space allows)
> occupy the remaining space on day 1, and could occupy "pods" on the
> other days.
>
>
Yep, I think this works in theory; the tough part will be when all the
incubating projects realize they're sending people for a single day? Maybe
it'll work out differently than I think, though. It means fitting ironic,
barbican, designate, manila, marconi into a day?

Also since QA, Infra, and Docs are cross-project AND Programs, where do
they land?


> Day 2 and Day 3. Scheduled sessions for various programs
>
> That's our traditional scheduled space. We'll have 33% fewer slots
> available. So, rather than trying to cover all the scope, the idea would
> be to focus those sessions on specific issues which really require
> face-to-face discussion (which can't be solved on the ML or using spec
> discussion) *or* require a lot of user feedback. That way, appearing in
> the general schedule is very helpful. This will require us to be a lot
> stricter on what we accept there and what we don't -- we won't have
> space for courtesy sessions anymore, and traditional/unnecessary
> sessions (like my traditional "release schedule" one) should just move
> to the mailing-list.
>

I like thinking about what we can move to the mailing lists. Nice.


>
> Day 4. Contributors meetups
>
> On the last day, we could try to split the space so that we can conduct
> parallel midcycle-meetup-like contributors gatherings, with no time
> boundaries and an open agenda. Large projects could get a full day,
> smaller projects would get half a day (but could continue the discussion
> in a local bar). Ideally that meetup would end with some alignment on
> release goals, but the idea is to make the best of that time together to
> solve the issues you have. Friday would finish with the design summit
> feedback session, for those who are still around.
>
>
Sounds good.


>
> I think this proposal makes the best use of our setup: discuss clear
> cross-project issues, address key specific topics which need
> face-to-face time and broader attendance, then try to replicate the
> success of midcycle meetup-like open unscheduled time to discuss
> whatever is hot at this point.
>
> There are still details to work out (is it possible to split the space,
> should we use the usual design summit CFP website to organize the
> "scheduled" time...), but I would first like to have your feedback on
> this format. Also if you have alternative proposals that would make a
> better use of our 4 days, let me know.
>
> Cheers,
>
> --
> Thierry Carrez (ttx)
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140828/2ab62e64/attachment-0001.html>

------------------------------

Message: 4
Date: Thu, 28 Aug 2014 15:34:23 -0400
From: Susanne Balle <sleipnir012 at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Octavia] Octavia VM image design
Message-ID:
	<CADBYD+xGG3JHY_63LvrK5XfdwVk-0ySp780nD+R_enFN9Etb9A at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

I agree with Michael. We need to use the OpenStack tooling.

Sahara is encountering some of the same issues we are as they are building
up their Hadoop VM/clusters.

See

http://docs.openstack.org/developer/sahara/userdoc/vanilla_plugin.html
http://docs.openstack.org/developer/sahara/userdoc/diskimagebuilder.html

for inspiration,

Susanne



On Wed, Aug 27, 2014 at 6:21 PM, Michael Johnson <johnsomor at gmail.com>
wrote:

> I am investigating building scripts that use diskimage-builder
> (https://github.com/openstack/diskimage-builder) to create a "purpose
> built" image.  This should allow some flexibility in the base image
> and the output image format (including a path to docker).
>
> The definition of "purpose built" is open at this point.  I will
> likely try to have a minimal Ubuntu based VM image as a starting
> point/test case and we can add/change as necessary.
>
> Michael
>
>
> On Wed, Aug 27, 2014 at 2:12 PM, Dustin Lundquist <dustin at null-ptr.net>
> wrote:
> > It seems to me there are two major approaches to the Octavia VM design:
> >
> > 1. Start with a standard Linux distribution (e.g. Ubuntu 14.04 LTS) and
> >    install HAProxy 1.5 and the Octavia control layer.
> > 2. Develop a minimal purpose-driven distribution (similar to m0n0wall) with
> >    just HAProxy, iproute2 and a Python runtime for the control layer.
> >
> > The primary difference here is additional development effort for option 2
> > versus the increased image size of option 1. Using Ubuntu and CirrOS images
> > as representatives of the two options, it looks like the image is about 20
> > times larger for a full featured distribution. If one of the HA models is
> > to spin up a replacement instance on failure, the image size could
> > significantly affect fail-over time.
> >
> > For initial work I think starting with a standard distribution would be
> > sensible, but we should target systemd (Debian adopted systemd as its new
> > default, and Ubuntu is following suit). I wanted to find out if there is
> > interest in a minimal Octavia image, and if so this may affect design
> > decisions on the instance control plane component.
> >
> >
> > -Dustin
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
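
A minimal sketch of the kind of build script Michael describes, using
diskimage-builder's disk-image-create tool. The "ubuntu" and "vm" elements
ship with diskimage-builder; the "octavia-agent" element is hypothetical
and stands in for whatever element would install HAProxy 1.5 and the
Octavia control layer:

    import subprocess

    # "ubuntu" and "vm" are standard diskimage-builder elements; the
    # "octavia-agent" element is a placeholder for whatever element
    # would install HAProxy 1.5 and the Octavia control layer.
    ELEMENTS = ["ubuntu", "vm", "octavia-agent"]

    def build_image(output="octavia-base", image_format="qcow2"):
        """Shell out to diskimage-builder's disk-image-create CLI."""
        cmd = ["disk-image-create", "-o", output, "-t", image_format]
        subprocess.check_call(cmd + ELEMENTS)

    if __name__ == "__main__":
        build_image()

Swapping "ubuntu" for a smaller base element would then be a one-line
change, which keeps the option-1-versus-option-2 decision discussed above
reversible.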
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140828/193ec254/attachment-0001.html>

------------------------------

Message: 5
Date: Thu, 28 Aug 2014 15:40:43 -0400
From: Doug Hellmann <doug at doughellmann.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [all] gate debugging
Message-ID: <0D334089-BCF3-4E36-8219-5DC6A1EB5F74 at doughellmann.com>
Content-Type: text/plain; charset=windows-1252


On Aug 28, 2014, at 2:15 PM, Sean Dague <sean at dague.net> wrote:

> On 08/28/2014 01:48 PM, Doug Hellmann wrote:
>> 
>> On Aug 28, 2014, at 1:17 PM, Sean Dague <sean at dague.net> wrote:
>> 
>>> On 08/28/2014 12:48 PM, Doug Hellmann wrote:
>>>> 
>>>> On Aug 27, 2014, at 5:56 PM, Sean Dague <sean at dague.net> wrote:
>>>> 
>>>>> On 08/27/2014 05:27 PM, Doug Hellmann wrote:
>>>>>> 
>>>>>> On Aug 27, 2014, at 2:54 PM, Sean Dague <sean at dague.net> wrote:
>>>>>> 
>>>>>>> Note: thread intentionally broken, this is really a different topic.
>>>>>>> 
>>>>>>> On 08/27/2014 02:30 PM, Doug Hellmann wrote:
>>>>>>>> On Aug 27, 2014, at 1:30 PM, Chris Dent <chdent at redhat.com> wrote:
>>>>>>>> 
>>>>>>>>> On Wed, 27 Aug 2014, Doug Hellmann wrote:
>>>>>>>>> 
>>>>>>>>>> I have found it immensely helpful, for example, to have a written set
>>>>>>>>>> of the steps involved in creating a new library, from importing the
>>>>>>>>>> git repo all the way through to making it available to other projects.
>>>>>>>>>> Without those instructions, it would have been much harder to split up
>>>>>>>>>> the work. The team would have had to train each other by word of
>>>>>>>>>> mouth, and we would have had constant issues with inconsistent
>>>>>>>>>> approaches triggering different failures. The time we spent building
>>>>>>>>>> and verifying the instructions has paid off to the extent that we even
>>>>>>>>>> had one developer not on the core team handle a graduation for us.
>>>>>>>>> 
>>>>>>>>> +many more for the relatively simple act of just writing stuff down
>>>>>>>> 
>>>>>>>> "Write it down.? is my theme for Kilo.
>>>>>>> 
>>>>>>> I definitely get the sentiment. "Write it down" is also hard when you
>>>>>>> are talking about things that do change around quite a bit. OpenStack as
>>>>>>> a whole sees 250 - 500 changes a week, so the interaction pattern moves
>>>>>>> around enough that it's really easy to have *very* stale information
>>>>>>> written down. Stale information is even more dangerous than no
>>>>>>> information sometimes, as it takes people down very wrong paths.
>>>>>>> 
>>>>>>> I think we break down on communication when we get into a conversation
>>>>>>> of "I want to learn gate debugging" because I don't quite know what that
>>>>>>> means, or where the starting point of understanding is. So those
>>>>>>> intentions are well meaning, but tend to stall. The reality was there
>>>>>>> was no road map for those of us that dive in, it's just understanding
>>>>>>> how OpenStack holds together as a whole and where some of the high risk
>>>>>>> parts are. And a lot of that comes with days staring at code and logs
>>>>>>> until patterns emerge.
>>>>>>> 
>>>>>>> Maybe if we can get smaller more targeted questions, we can help folks
>>>>>>> better? I'm personally a big fan of answering the targeted questions
>>>>>>> because then I also know that the time spent exposing that information
>>>>>>> was directly useful.
>>>>>>> 
>>>>>>> I'm more than happy to mentor folks. But I just end up finding the "I
>>>>>>> want to learn" at the generic level something that's hard to grasp onto
>>>>>>> or figure out how we turn it into action. I'd love to hear more ideas
>>>>>>> from folks about ways we might do that better.
>>>>>> 
>>>>>> You and a few others have developed an expertise in this important skill. I am so far away from that level of expertise that I don't know the questions to ask. More often than not I start with the console log, find something that looks significant, spend an hour or so tracking it down, and then have someone tell me that it is a red herring and the issue is really some other thing that they figured out very quickly by looking at a file I never got to.
>>>>>> 
>>>>>> I guess what I'm looking for is some help with the patterns. What made you think to look in one log file versus another? Some of these jobs save a zillion little files, which ones are actually useful? What tools are you using to correlate log entries across all of those files? Are you doing it by hand? Is logstash useful for that, or is that more useful for finding multiple occurrences of the same issue?
>>>>>> 
>>>>>> I realize there's not a way to write a how-to that will live forever. Maybe one way to deal with that is to write up the research done on bugs soon after they are solved, and publish that to the mailing list. Even the retrospective view is useful because we can all learn from it without having to live through it. The mailing list is a fairly ephemeral medium, and something very old in the archives is understood to have a good chance of being out of date so we don't have to keep adding disclaimers.
>>>>> 
>>>>> Sure. Matt's actually working up a blog post describing the thing he
>>>>> nailed earlier in the week.
>>>> 
>>>> Yes, I appreciate that both of you are responding to my questions. :-)
>>>> 
>>>> I have some more specific questions/comments below. Please take all of this in the spirit of trying to make this process easier by pointing out where I've found it hard, and not just me complaining. I'd like to work on fixing any of these things that can be fixed, by writing or reviewing patches for early in kilo.
>>>> 
>>>>> 
>>>>> Here is my off the cuff set of guidelines:
>>>>> 
>>>>> #1 - is it a test failure or a setup failure
>>>>> 
>>>>> This should be pretty easy to figure out. Test failures come at the end
>>>>> of console log and say that tests failed (after you see a bunch of
>>>>> passing tempest tests).
>>>>> 
>>>>> Always start at *the end* of files and work backwards.
>>>> 
>>>> That's interesting because in my case I saw a lot of failures after the initial "real" problem. So I usually read the logs like C compiler output: Assume the first error is real, and the others might have been caused by that one. Do you work from the bottom up to a point where you don't see any more errors instead of reading top down?
>>> 
>>> Bottom up to get to problems, then figure out if it's in a subprocess so
>>> the problems could exist for a while. That being said, not all tools do
>>> useful things like actually error when they fail (I'm looking at you
>>> yum....) so there are always edge cases here.
>>> 
>>>>> 
>>>>> #2 - if it's a test failure, what API call was unsuccessful.
>>>>> 
>>>>> Start with looking at the API logs for the service at the top level, and
>>>>> see if there is a simple traceback at the right timestamp. If not,
>>>>> figure out what that API call was calling out to, again look at the
>>>>> simple cases assuming failures will create ERRORS or TRACES (though they
>>>>> often don't).
>>>> 
>>>> In my case, a neutron call failed. Most of the other services seem to have a *-api.log file, but neutron doesn't. It took a little while to find the API-related messages in screen-q-svc.txt (I'm glad I've been around long enough to know it used to be called "quantum"). I get that screen-n-*.txt would collide with nova. Is it necessary to abbreviate those filenames at all?
>>> 
>>> Yeh... service naming could definitely be better, especially with
>>> neutron. There are implications for long names in screen, but maybe we
>>> just get over it as we already have too many tabs to be in one page in
>>> the console anymore anyway.
>>> 
>>>>> Hints on the service log order you should go after are in the footer
>>>>> of every log page -
>>>>> http://logs.openstack.org/76/79776/15/gate/gate-tempest-dsvm-full/700ee7e/logs/
>>>>> (it's included as an Apache footer) for some services. It's been there
>>>>> for about 18 months, I think people are fully blind to it at this point.
>>>> 
>>>> Where would I go to edit that footer to add information about the neutron log files? Is that Apache footer defined in an infra repo?
>>> 
>>> Note the following at the end of the footer output:
>>> 
>>> About this Help
>>> 
>>> This help file is part of the openstack-infra/config project, and can be
>>> found at modules/openstack_project/files/logs/help/tempest_logs.html .
>>> The file can be updated via the standard OpenStack Gerrit Review process.
>> 
>> /me smacks forehead
> 
> :)
> 
> Also note an early version of this base email is at the top level for
> all runs - (i.e. -
> http://logs.openstack.org/76/79776/15/gate/gate-tempest-dsvm-full/700ee7e/)
> 
> It's been there about 18 months. People look right past it. Which is
> part of where my skepticism on just writing things down being the
> solution. Because a bunch of it has been written down. But until people
> are in a mode of pulling the information in, pushing it out doesn't help.

Fair enough.

> 
>>>> Another specific issue I've seen is a message that says something to the effect "the setup for this job failed, check the appropriate log". I found 2 files with "setup" in the name, but the failure was actually logged in a different file (devstacklog.txt). Is the job definition too far "removed" from the scripts to know what the real filename is? Is it running scripts that log to multiple files during the setup phase, and so it doesn't know which to refer me to? Or maybe I overlooked a message about when logging to a specific file started.
>>> 
>>> Part of the issue here is that devstack-gate runs a lot of different
>>> gate_hooks. So that's about as specific as we can get unless you can
>>> figure out how to introspect that info in bash... which I couldn't.
>> 
>> Are all of the hooks logging to the same file? If not, why not? Would it make sense to change that so the error messages could be more specific?
> 
> They are not, output direction is actually typically a function of the
> hook script and not devstack gate.
> 
> Some of this is because the tools when run locally need to be able to
> natively support logging. Some of this is because processing logs into
> elastic search requires that we know we understand the log format (a
> generic gate_hook log wouldn't work well there). Some of it is historical.

OK, that makes sense.

> 
> I did spend a bunch of time cleaning up the grenade summary log so in
> the console you get some basic idea of what's going on, and what part
> you failed in. Definitely could be better. Taking some of those summary
> lessons into devstack wouldn't hurt either.

I don't think I've hit a grenade issue, so I haven't seen that.

> 
> So patches here are definitely accepted. Which is very much not a blow
> off, but in cleaning d-g up over the last 6 months ?the setup for this

Yep, I?m asking if I?m even thinking in the right directions, and that sounds like a ?yes? rather than a blow off.

> job failed, check the appropriate log? was about as good as we could
> figure out. Previously the script just died and people usually blamed an
> error message about uploading artifacts in the jenkins output for the
> failure. So if you can figure out a better UX given the constraints
> we're in, definitely appreciated.

I'll look at the job definitions and see if I can come up with a way to parameterize them or automate the step of figuring out which file is meant for each phase.

> 
>>>>> If nothing jumps out at ERROR or TRACE, go back to DEBUG level and
>>>>> figure out what's happening at the time of failure, especially keeping
>>>>> an eye out of areas where other workers are doing interesting things at
>>>>> the same time, possibly indicating state corruption in OpenStack as a race.
>>>>> 
>>>>> #3 - if it's a console failure, start at the end and work backwards
>>>>> 
>>>>> devstack and grenade run under set -o errexit so that they will
>>>>> critically exit if a command fails. They will typically dump some debug
>>>>> when they do that. So the failing command won't be the last line in the
>>>>> file, but it will be close. The words 'error' typically aren't useful at
>>>>> all in shell because lots of things say error when they aren't, we mask
>>>>> their exit codes if their failure is generally irrelevant.
>>>>> 
>>>>> #4 - general principle the closer to root cause the better
>>>>> 
>>>>> If we think of exposure of bugs as layers we probably end up
>>>>> with something like this
>>>>> 
>>>>> - Console log
>>>>> - Test Name + Failure
>>>>> - Failure inside an API service
>>>>> - Failure inside a worker process
>>>>> - Actual failure figured out in OpenStack code path
>>>>> - Failure in something below OpenStack (kernel, libvirt)
>>>>> 
>>>>> This is why signatures that are just test names aren't all that useful
>>>>> much of the time (and why we try not to add those to ER), as that's
>>>>> going to be hitting an API, but the why of things is very much still
>>>>> undiscovered.
>>>>> 
>>>>> #5 - if it's an infrastructure level setup bug (failing to download or
>>>>> install something) figure out if there are other likewise events at the
>>>>> same time (i.e. it's a network issue, which we can't fix) vs. a
>>>>> structural issue.
>>>>> 
>>>>> 
>>>>> I find Elastic Search good for step 5, but realistically for all other
>>>>> steps it's manual log sifting. I open lots of tabs in Chrome, and search
>>>>> by timestamp.
>>>> 
>>>> This feels like something we could improve on. If we had a tool to download the logs and interleave the messages using their timestamps, would that make it easier? We could probably make the job log everything to a single file, but I can see where sometimes only having part of the data to look at would be more useful.
>>> 
>>> Maybe, I find the ability to change the filtering level dynamically to
>>> be pretty important. We actually did some of this once when we used
>>> syslog. Personally I found it a ton harder to get to the bottom of things.
>>> 
>>> A gate run has 25+ services running, it's a rare issue that combines
>>> interactions between > 4 of them to get to a solution. So I expect you'd
>>> exchange context jumping for tons of irrelevancy. That's a personal
>>> opinion based on personal workflow, and why I never spent time on it.
>>> Instead I built os-loganalyze that does the filtering and coloring
>>> dynamically on the server side, as it was a zero install solution that
>>> provided additional benefits of being able to link to a timestamp in a
>>> log for sharing purposes.
>> 
>> Sure, that makes sense.
>> 
>>> 
>>>> 
>>>>> 
>>>>> 
>>>>> A big part of the experience also just comes from a manual bayesian
>>>>> filter. Certain scary looking things in the console log aren't, but you
>>>>> don't know that unless you look at setup logs enough (either in gate or
>>>>> in your own devstacks) to realize that. Sanitizing the output of that
>>>>> part of the process is pretty intractable... because shell (though I've
>>>>> put some serious effort into it over the last 6 months).
>>>> 
>>>> Maybe our scripts can emit messages to explain the scary stuff? "This is going to report a problem, but you can ignore it unless X happens."
>>> 
>>> Maybe, like I said it's a lot better than it used to be. But very few
>>> people are putting in effort here, and I'm not convinced it's really
>>> solvable in bash.
>> 
>> OK, well, if the answers to these questions are "yes" then I should have time to help, which is why I'm exploring options.
> 
> Yeh, the issue is you'd need a couple hundred different messages like
> that, and realistically I think they'd lead to more confusion rather
> than less.
> 
> Honestly, I did a huge amount of selective filtering out of xtrace logs
> in the last six months and was able to drop the size of the devstack
> logs by over 50% getting rid of some of the more confusing trace bits.
> But it's something that you make progress on 1% at a time.
> 
> At some point we do need to say "you have to understand OpenStack and
> the Test run process ^this much^ to be able to ride", because cleaning
> up every small thing isn't really possible.
> 
> Now, providing a better flow explaining the parts here might be good. We
> do it during Infra bootcamps, and people find it helpful. But again,
> that's a mostly pull model because the people showing up did so
> specifically to learn, so are much more receptive to gaining the
> information at hand.
> 
>>>>> Sanitizing the OpenStack logs to be crisp about actual things going
>>>>> wrong, vs. not, shouldn't be intractable, but it feels like it
>>>>> sometimes. Which is why all operators run at DEBUG level. The thing that
>>>>> makes it hard for developers to see the issues here is the same thing
>>>>> that makes it *really* hard for operators to figure out failures. It's
>>>>> also why I tried (though executed poorly on, sorry about that) getting
>>>>> log cleanups rolling this cycle.
>>>> 
>>>> I would like to have the TC back an official cross-project effort to clean up the logs for Kilo, and get all of the integrated projects to commit to working on it as a priority.
>>>> 
>>>> Doug
>>>> 
>>>>> 
>>>>> 	-Sean
>>>>> 
>>>>> -- 
>>>>> Sean Dague
>>>>> http://dague.net
>>>>> 
>>>>> _______________________________________________
>>>>> OpenStack-dev mailing list
>>>>> OpenStack-dev at lists.openstack.org
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> 
>>>> 
>>>> _______________________________________________
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> 
>>> 
>>> 
>>> -- 
>>> Sean Dague
>>> http://dague.net
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> -- 
> Sean Dague
> http://dague.net
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
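
A minimal sketch of the timestamp-interleaving tool Doug asks about above,
assuming each service log line starts with a "YYYY-MM-DD HH:MM:SS.mmm"
timestamp the way the devstack screen logs generally do (lines without a
timestamp are skipped):

    import heapq
    import re
    import sys

    # Match a leading "2014-08-28 15:40:43.123"-style timestamp.
    TS = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)")

    def entries(path):
        """Yield (timestamp, source file, line) for timestamped lines."""
        with open(path, errors="replace") as f:
            for line in f:
                m = TS.match(line)
                if m:
                    yield (m.group(1), path, line.rstrip("\n"))

    # Each log file is already chronological, so heapq.merge produces
    # one stream sorted across all of the files.
    for ts, src, line in heapq.merge(*(entries(p) for p in sys.argv[1:])):
        print("%s: %s" % (src, line))

Run against a handful of screen-*.txt files this produces one merged
timeline; level filtering could be layered on top, though as Sean notes
above, merging everything trades context jumping for a lot of irrelevancy.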




------------------------------

Message: 6
Date: Thu, 28 Aug 2014 15:41:06 -0400
From: Doug Hellmann <doug at doughellmann.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [all] gate debugging
Message-ID: <78513F50-DF6A-43C2-B002-5726B0E18952 at doughellmann.com>
Content-Type: text/plain; charset=windows-1252


On Aug 28, 2014, at 2:16 PM, Sean Dague <sean at dague.net> wrote:

> On 08/28/2014 02:07 PM, Joe Gordon wrote:
>> 
>> 
>> 
>> On Thu, Aug 28, 2014 at 10:17 AM, Sean Dague <sean at dague.net
>> <mailto:sean at dague.net>> wrote:
>> 
>>    On 08/28/2014 12:48 PM, Doug Hellmann wrote:
>>> 
>>> On Aug 27, 2014, at 5:56 PM, Sean Dague <sean at dague.net
>>    <mailto:sean at dague.net>> wrote:
>>> 
>>>> On 08/27/2014 05:27 PM, Doug Hellmann wrote:
>>>>> 
>>>>> On Aug 27, 2014, at 2:54 PM, Sean Dague <sean at dague.net
>>    <mailto:sean at dague.net>> wrote:
>>>>> 
>>>>>> Note: thread intentionally broken, this is really a different
>>    topic.
>>>>>> 
>>>>>> On 08/27/2014 02:30 PM, Doug Hellmann wrote:
>>>>>>> On Aug 27, 2014, at 1:30 PM, Chris Dent <chdent at redhat.com
>>    <mailto:chdent at redhat.com>> wrote:
>>>>>>> 
>>>>>>>> On Wed, 27 Aug 2014, Doug Hellmann wrote:
>>>>>>>> 
>>>>>>>>> I have found it immensely helpful, for example, to have a
>>    written set
>>>>>>>>> of the steps involved in creating a new library, from
>>    importing the
>>>>>>>>> git repo all the way through to making it available to other
>>    projects.
>>>>>>>>> Without those instructions, it would have been much harder
>>    to split up
>>>>>>>>> the work. The team would have had to train each other by word of
>>>>>>>>> mouth, and we would have had constant issues with inconsistent
>>>>>>>>> approaches triggering different failures. The time we spent
>>    building
>>>>>>>>> and verifying the instructions has paid off to the extent
>>    that we even
>>>>>>>>> had one developer not on the core team handle a graduation
>>    for us.
>>>>>>>> 
>>>>>>>> +many more for the relatively simple act of just writing
>>    stuff down
>>>>>>> 
>>>>>>> "Write it down.? is my theme for Kilo.
>>>>>> 
>>>>>> I definitely get the sentiment. "Write it down" is also hard
>>    when you
>>>>>> are talking about things that do change around quite a bit.
>>    OpenStack as
>>>>>> a whole sees 250 - 500 changes a week, so the interaction
>>    pattern moves
>>>>>> around enough that it's really easy to have *very* stale
>>    information
>>>>>> written down. Stale information is even more dangerous than no
>>>>>> information sometimes, as it takes people down very wrong paths.
>>>>>> 
>>>>>> I think we break down on communication when we get into a
>>    conversation
>>>>>> of "I want to learn gate debugging" because I don't quite know
>>    what that
>>>>>> means, or where the starting point of understanding is. So those
>>>>>> intentions are well meaning, but tend to stall. The reality was
>>    there
>>>>>> was no road map for those of us that dive in, it's just
>>    understanding
>>>>>> how OpenStack holds together as a whole and where some of the
>>    high risk
>>>>>> parts are. And a lot of that comes with days staring at code
>>    and logs
>>>>>> until patterns emerge.
>>>>>> 
>>>>>> Maybe if we can get smaller more targeted questions, we can
>>    help folks
>>>>>> better? I'm personally a big fan of answering the targeted
>>    questions
>>>>>> because then I also know that the time spent exposing that
>>    information
>>>>>> was directly useful.
>>>>>> 
>>>>>> I'm more than happy to mentor folks. But I just end up finding
>>    the "I
>>>>>> want to learn" at the generic level something that's hard to
>>    grasp onto
>>>>>> or figure out how we turn it into action. I'd love to hear more
>>    ideas
>>>>>> from folks about ways we might do that better.
>>>>> 
>>>>> You and a few others have developed an expertise in this
>>    important skill. I am so far away from that level of expertise that
>>    I don't know the questions to ask. More often than not I start with
>>    the console log, find something that looks significant, spend an
>>    hour or so tracking it down, and then have someone tell me that it
>>    is a red herring and the issue is really some other thing that they
>>    figured out very quickly by looking at a file I never got to.
>>>>> 
>>>>> I guess what I'm looking for is some help with the patterns.
>>    What made you think to look in one log file versus another? Some of
>>    these jobs save a zillion little files, which ones are actually
>>    useful? What tools are you using to correlate log entries across all
>>    of those files? Are you doing it by hand? Is logstash useful for
>>    that, or is that more useful for finding multiple occurrences of the
>>    same issue?
>>>>> 
>>>>> I realize there's not a way to write a how-to that will live
>>    forever. Maybe one way to deal with that is to write up the research
>>    done on bugs soon after they are solved, and publish that to the
>>    mailing list. Even the retrospective view is useful because we can
>>    all learn from it without having to live through it. The mailing
>>    list is a fairly ephemeral medium, and something very old in the
>>    archives is understood to have a good chance of being out of date so
>>    we don't have to keep adding disclaimers.
>>>> 
>>>> Sure. Matt's actually working up a blog post describing the thing he
>>>> nailed earlier in the week.
>>> 
>>> Yes, I appreciate that both of you are responding to my questions. :-)
>>> 
>>> I have some more specific questions/comments below. Please take
>>    all of this in the spirit of trying to make this process easier by
>>    pointing out where I've found it hard, and not just me complaining.
>>    I'd like to work on fixing any of these things that can be fixed, by
>>    writing or reviewing patches for early in kilo.
>>> 
>>>> 
>>>> Here is my off the cuff set of guidelines:
>>>> 
>>>> #1 - is it a test failure or a setup failure
>>>> 
>>>> This should be pretty easy to figure out. Test failures come at
>>    the end
>>>> of console log and say that tests failed (after you see a bunch of
>>>> passing tempest tests).
>>>> 
>>>> Always start at *the end* of files and work backwards.
>>> 
>>> That's interesting because in my case I saw a lot of failures
>>    after the initial "real" problem. So I usually read the logs like C
>>    compiler output: Assume the first error is real, and the others
>>    might have been caused by that one. Do you work from the bottom up
>>    to a point where you don't see any more errors instead of reading
>>    top down?
>> 
>>    Bottom up to get to problems, then figure out if it's in a subprocess so
>>    the problems could exist for a while. That being said, not all tools do
>>    useful things like actually error when they fail (I'm looking at you
>>    yum....) so there are always edge cases here.
>> 
>>>> 
>>>> #2 - if it's a test failure, what API call was unsuccessful.
>>>> 
>>>> Start with looking at the API logs for the service at the top
>>    level, and
>>>> see if there is a simple traceback at the right timestamp. If not,
>>>> figure out what that API call was calling out to, again look at the
>>>> simple cases assuming failures will create ERRORS or TRACES
>>    (though they
>>>> often don't).
>>> 
>>> In my case, a neutron call failed. Most of the other services seem
>>    to have a *-api.log file, but neutron doesn't. It took a little
>>    while to find the API-related messages in screen-q-svc.txt (I'm glad
>>    I've been around long enough to know it used to be called
>>    "quantum"). I get that screen-n-*.txt would collide with nova. Is it
>>    necessary to abbreviate those filenames at all?
>> 
>>    Yeh... service naming could definitely be better, especially with
>>    neutron. There are implications for long names in screen, but maybe we
>>    just get over it as we already have too many tabs to be in one page in
>>    the console anymore anyway.
>> 
>>>> Hints on the service log order you should go after are in the footer
>>>> of every log page -
>>>> 
>>    http://logs.openstack.org/76/79776/15/gate/gate-tempest-dsvm-full/700ee7e/logs/
>>>> (it's included as an Apache footer) for some services. It's been
>>    there
>>>> for about 18 months, I think people are fully blind to it at this
>>    point.
>>> 
>>> Where would I go to edit that footer to add information about the
>>    neutron log files? Is that Apache footer defined in an infra repo?
>> 
>>    Note the following at the end of the footer output:
>> 
>>    About this Help
>> 
>>    This help file is part of the openstack-infra/config project, and can be
>>    found at modules/openstack_project/files/logs/help/tempest_logs.html .
>>    The file can be updated via the standard OpenStack Gerrit Review
>>    process.
>> 
>> 
>> I took a first whack at trying to add some more information to the
>> footer here: https://review.openstack.org/#/c/117390/
> 
> \o/ - you rock joe!

+1!!

Doug




------------------------------

Message: 7
Date: Thu, 28 Aug 2014 15:44:25 -0400
From: Jay Pipes <jaypipes at gmail.com>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID: <53FF8699.1000402 at gmail.com>
Content-Type: text/plain; charset=windows-1252; format=flowed

On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
> I'll try and not whine about my pet project but I do think there is a
> problem here.  For the Gantt project to split out the scheduler there is
> a crucial BP that needs to be implemented (
> https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP has
> been rejected and we'll have to try again for Kilo.  My question is did
> we do something wrong or is the process broken?
>
> Note that we originally proposed the BP on 4/23/14, went through 10
> iterations to the final version on 7/25/14 and the final version got
> three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to
> specific people, we didn't get the second +2, hence the rejection.
>
> I understand that reviews are a burden and very hard but it seems wrong
> that a BP with multiple positive reviews and no negative reviews is
> dropped because of what looks like indifference.

I would posit that this is not actually indifference. The reason that 
there may not have been >1 +2 from a core team member may very well have 
been that the core team members did not feel that the blueprint's 
priority was high enough to put before other work, or that the core team 
members did not have the time to comment on the spec (due to them not 
feeling the blueprint had the priority to justify the time to do a full 
review).

Note that I'm not a core drivers team member.

Best,
-jay




------------------------------

Message: 8
Date: Thu, 28 Aug 2014 15:53:48 -0400
From: Jay Pipes <jaypipes at gmail.com>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [all] Design Summit reloaded
Message-ID: <53FF88CC.7090200 at gmail.com>
Content-Type: text/plain; charset=windows-1252; format=flowed

On 08/28/2014 03:31 PM, Sean Dague wrote:
> On 08/28/2014 03:06 PM, Jay Pipes wrote:
>> On 08/28/2014 02:21 PM, Sean Dague wrote:
>>> On 08/28/2014 01:58 PM, Jay Pipes wrote:
>>>> On 08/27/2014 11:34 AM, Doug Hellmann wrote:
>>>>>
>>>>> On Aug 27, 2014, at 8:51 AM, Thierry Carrez <thierry at openstack.org>
>>>>> wrote:
>>>>>
>>>>>> Hi everyone,
>>>>>>
>>>>>> I've been thinking about what changes we can bring to the Design
>>>>>> Summit format to make it more productive. I've heard the feedback
>>>>>> from the mid-cycle meetups and would like to apply some of those
>>>>>> ideas for Paris, within the constraints we have (already booked
>>>>>> space and time). Here is something we could do:
>>>>>>
>>>>>> Day 1. Cross-project sessions / incubated projects / other
>>>>>> projects
>>>>>>
>>>>>> I think that worked well last time. 3 parallel rooms where we can
>>>>>> address top cross-project questions, discuss the results of the
>>>>>> various experiments we conducted during juno. Don't hesitate to
>>>>>> schedule 2 slots for discussions, so that we have time to come to
>>>>>> the bottom of those issues. Incubated projects (and maybe "other"
>>>>>> projects, if space allows) occupy the remaining space on day 1, and
>>>>>> could occupy "pods" on the other days.
>>>>>
>>>>> If anything, I'd like to have fewer cross-project tracks running
>>>>> simultaneously. Depending on which are proposed, maybe we can make
>>>>> that happen. On the other hand, cross-project issues is a big theme
>>>>> right now so maybe we should consider devoting more than a day to
>>>>> dealing with them.
>>>>
>>>> I agree with Doug here. I'd almost say having a single cross-project
>>>> room, with serialized content would be better than 3 separate
>>>> cross-project tracks. By nature, the cross-project sessions will attract
>>>> developers that work or are interested in a set of projects that looks
>>>> like a big Venn diagram. By having 3 separate cross-project tracks, we
>>>> would increase the likelihood that developers would once more have to
>>>> choose among simultaneous sessions that they have equal interest in. For
>>>> Infra and QA folks, this likelihood is even greater...
>>>>
>>>> I think I'd prefer a single cross-project track on the first day.
>>>
>>> So the fallout of that is there will be 6 or 7 cross-project slots for
>>> the design summit. Maybe that's the right mix if the TC does a good job
>>> picking the top 5 things we want accomplished from a cross project
>>> standpoint during the cycle. But it's going to have to be a pretty
>>> directed pick. I think last time we had 21 slots, and with a couple of
>>> sessions doubled up that gave 19 sessions. (about 30 - 35 proposals for that
>>> slot set).
>>
>> I'm not sure that would be a bad thing :)
>>
>> I think one of the reasons the mid-cycles have been successful is that
>> they have adequately limited the scope of discussions and I think by
>> doing our homework by fully vetting and voting on cross-project sessions
>> and being OK with saying "No, not this time.", we will be more
>> productive than if we had 20+ cross-project sessions.
>>
>> Just my two cents, though..
>
> I'm not sure it would be a bad thing either. I just wanted to be
> explicit about what we are saying the cross-project sessions are for in
> this case: the 5 key cross project activities the TC believes should be
> worked on this next cycle.

Yes.

> The other question is if we did that what's running in competition to
> cross project day? Is it another free form pod day for people not
> working on those things?

It could be a pod day, sure. Or just an extended hallway session day... :)

-jay



------------------------------

Message: 9
Date: Thu, 28 Aug 2014 15:59:12 -0400
From: Doug Hellmann <doug at doughellmann.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [oslo] change to deprecation policy in
	the incubator
Message-ID: <3ED2FC1B-66C8-4DE1-B76F-2D3155D9A6F1 at doughellmann.com>
Content-Type: text/plain; charset=windows-1252


On Aug 28, 2014, at 12:14 PM, Doug Hellmann <doug at doughellmann.com> wrote:

> Before Juno we set a deprecation policy for graduating libraries that said the incubated versions of the modules would stay in the incubator repository for one full cycle after graduation. This gives projects time to adopt the libraries and still receive bug fixes to the incubated version (see https://wiki.openstack.org/wiki/Oslo#Graduation).
> 
> That policy worked well early on, but has recently introduced some challenges with the low level modules. Other modules in the incubator are still importing the incubated versions of, for example, timeutils, and so tests that rely on mocking out or modifying the behavior of timeutils do not work as expected when different parts of the application code end up calling different versions of timeutils. We had similar issues with the notifiers and RPC code, and I expect to find other cases as we continue with the graduations.
> 
> To deal with this problem, I propose that for Kilo we delete graduating modules as soon as the new library is released, rather than waiting to the end of the cycle. We can update the other incubated modules at the same time, so that the incubator will always use the new libraries and be consistent.
> 
> We have not had a lot of patches where backports were necessary, but there have been a few important ones, so we need to retain the ability to handle them and allow projects to adopt libraries at a reasonable pace. To handle backports cleanly, we can "freeze" all changes to the master branch version of modules slated for graduation during Kilo (we would need to make a good list very early in the cycle), and use the stable/juno branch for backports.
> 
> The new process would be:
> 
> 1. Declare which modules we expect to graduate during Kilo.
> 2. Changes to those pre-graduation modules could be made in the master branch before their library is released, as long as the change is also backported to the stable/juno branch at the same time (we should enforce this by having both patches submitted before accepting either).
> 3. When graduation for a library starts, freeze those modules in all branches until the library is released.
> 4. Remove modules from the incubator's master branch after the library is released.
> 5. Land changes in the library first.
> 6. Backport changes, as needed, to stable/juno instead of master.
> 
> It would be better to begin the export/import process as early as possible in Kilo to keep the window where point 2 applies very short.
> 
> If there are objections to using stable/juno, we could introduce a new branch with a name like backports/kilo, but I am afraid having the extra branch to manage would just cause confusion.
> 
> I would like to move ahead with this plan by creating the stable/juno branch and starting to update the incubator as soon as the oslo.log repository is imported (https://review.openstack.org/116934).

That change has merged and the oslo.log repository has been created.

Doug
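
To make the versioning problem above concrete: once a module such as
timeutils exists both as a released library and as an incubated copy,
mocking one copy does not affect code that imports the other. A minimal
sketch; the import paths are illustrative, not the exact trees involved:

    # Two distinct module objects for the "same" code: the released
    # library and the incubated copy still carried in a project's tree.
    from oslo.utils import timeutils as lib_timeutils
    from nova.openstack.common import timeutils as inc_timeutils

    import mock

    # Freezing time on the library copy does nothing for callers that
    # still import the incubated copy -- they are separate modules.
    with mock.patch.object(lib_timeutils, 'utcnow', return_value='frozen'):
        print(lib_timeutils.utcnow())   # 'frozen'
        print(inc_timeutils.utcnow())   # real wall-clock time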

> 
> Thoughts?
> 
> Doug
> 
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




------------------------------

Message: 10
Date: Thu, 28 Aug 2014 16:02:10 -0400
From: Anita Kuno <anteaya at anteaya.info>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [all] Design Summit reloaded
Message-ID: <53FF8AC2.1090501 at anteaya.info>
Content-Type: text/plain; charset=windows-1252

On 08/28/2014 03:31 PM, Sean Dague wrote:
> On 08/28/2014 03:06 PM, Jay Pipes wrote:
>> On 08/28/2014 02:21 PM, Sean Dague wrote:
>>> On 08/28/2014 01:58 PM, Jay Pipes wrote:
>>>> On 08/27/2014 11:34 AM, Doug Hellmann wrote:
>>>>>
>>>>> On Aug 27, 2014, at 8:51 AM, Thierry Carrez <thierry at openstack.org>
>>>>> wrote:
>>>>>
>>>>>> Hi everyone,
>>>>>>
>>>>>> I've been thinking about what changes we can bring to the Design
>>>>>> Summit format to make it more productive. I've heard the feedback
>>>>>> from the mid-cycle meetups and would like to apply some of those
>>>>>> ideas for Paris, within the constraints we have (already booked
>>>>>> space and time). Here is something we could do:
>>>>>>
>>>>>> Day 1. Cross-project sessions / incubated projects / other
>>>>>> projects
>>>>>>
>>>>>> I think that worked well last time. 3 parallel rooms where we can
>>>>>> address top cross-project questions, discuss the results of the
>>>>>> various experiments we conducted during juno. Don't hesitate to
>>>>>> schedule 2 slots for discussions, so that we have time to come to
>>>>>> the bottom of those issues. Incubated projects (and maybe "other"
>>>>>> projects, if space allows) occupy the remaining space on day 1, and
>>>>>> could occupy "pods" on the other days.
>>>>>
>>>>> If anything, I'd like to have fewer cross-project tracks running
>>>>> simultaneously. Depending on which are proposed, maybe we can make
>>>>> that happen. On the other hand, cross-project issues is a big theme
>>>>> right now so maybe we should consider devoting more than a day to
>>>>> dealing with them.
>>>>
>>>> I agree with Doug here. I'd almost say having a single cross-project
>>>> room, with serialized content would be better than 3 separate
>>>> cross-project tracks. By nature, the cross-project sessions will attract
>>>> developers that work or are interested in a set of projects that looks
>>>> like a big Venn diagram. By having 3 separate cross-project tracks, we
>>>> would increase the likelihood that developers would once more have to
>>>> choose among simultaneous sessions that they have equal interest in. For
>>>> Infra and QA folks, this likelihood is even greater...
>>>>
>>>> I think I'd prefer a single cross-project track on the first day.
>>>
>>> So the fallout of that is there will be 6 or 7 cross-project slots for
>>> the design summit. Maybe that's the right mix if the TC does a good job
>>> picking the top 5 things we want accomplished from a cross project
>>> standpoint during the cycle. But it's going to have to be a pretty
>>> directed pick. I think last time we had 21 slots, and with a couple of
>>> sessions doubled up that gave 19 sessions (about 30 - 35 proposals for
>>> that slot set).
>>
>> I'm not sure that would be a bad thing :)
>>
>> I think one of the reasons the mid-cycles have been successful is that
>> they have adequately limited the scope of discussions and I think by
>> doing our homework by fully vetting and voting on cross-project sessions
>> and being OK with saying "No, not this time.", we will be more
>> productive than if we had 20+ cross-project sessions.
>>
>> Just my two cents, though..
> 
> I'm not sure it would be a bad thing either. I just wanted to be
> explicit about what we are saying the cross projects sessions are for in
> this case: the 5 key cross project activities the TC believes should be
> worked on this next cycle.
> 
> The other question is if we did that what's running in competition to
> cross project day? Is it another free form pod day for people not
> working on those things?
> 
> 	-Sean
> 
>>
>> -jay
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
I'm curious to know how many people would be expected to be in the same
room. And what percentage of these folks would be participating in the
conversation, versus how many would be audience?

One of the near-universal complaints about the current summit sessions
(it gets discussed after each of the mid-cycles) is that 30 people
talking in a room with an audience of 200 isn't very efficient. I wonder
if this well-intentioned direction might end up producing exactly the
result that many folks I talked to don't want.

The other issue that comes to mind for me is trying to allow everyone to
be included in the discussion while keeping it focused and reducing the
side conversations. If folks are impatient to have their point (or off
topic joke) heard, they won't wait for a turn from whoever is chairing;
they will just start talking. This can create tension for the rest of
the folks who *are* patiently trying to wait their turn. I chaired a day
and a half of discussions at the qa/infra mid-cycle (the rest of the
time was code sprinting), and keeping everyone focused on the topic at
hand was a real challenge in a room of 30 people with a full spectrum of
contributor experience (at least one person made their first
contribution in Germany, plus there were folks who have been involved
since the beginning). Even with just 30 people I had folks upset at me
for asking them to eliminate the side conversations, some left for a
breakout room to code or talk, and I was told at a break to ensure I
included some folks in a corner who wanted to speak but didn't get a
chance. Unfortunately, a whoever-talks-loudest-and-first format doesn't
favour those contributors whose first language is something other than
English.

I'm for the direction, I just don't want to see it fall over due to
numbers. Plus we have to give ttx a fighting chance to chair it.

I welcome your thoughts,
Anita.



------------------------------

Message: 11
Date: Thu, 28 Aug 2014 14:05:35 -0600
From: Chris Friesen <chris.friesen at windriver.com>
To: <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID: <53FF8B8F.8060108 at windriver.com>
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed

On 08/28/2014 01:44 PM, Jay Pipes wrote:
> On 08/27/2014 09:04 PM, Dugger, Donald D wrote:

>> I understand that reviews are a burden and very hard but it seems wrong
>> that a BP with multiple positive reviews and no negative reviews is
>> dropped because of what looks like indifference.
>
> I would posit that this is not actually indifference. The reason that
> there may not have been >1 +2 from a core team member may very well have
> been that the core team members did not feel that the blueprint's
> priority was high enough to put before other work, or that the core team
> members did not have the time to comment on the spec (due to them not
> feeling the blueprint had the priority to justify the time to do a full
> review).

The overall "scheduler-lib" Blueprint is marked with a "high" priority 
at "http://status.openstack.org/release/".  Hopefully that would apply 
to sub-blueprints as well.

Chris



------------------------------

Message: 12
Date: Thu, 28 Aug 2014 16:11:02 -0400
From: Doug Hellmann <doug at doughellmann.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [all] Design Summit reloaded
Message-ID: <CF030D25-7396-4A1A-A3F2-0145572050D3 at doughellmann.com>
Content-Type: text/plain; charset=windows-1252


On Aug 28, 2014, at 3:31 PM, Sean Dague <sean at dague.net> wrote:

> On 08/28/2014 03:06 PM, Jay Pipes wrote:
>> On 08/28/2014 02:21 PM, Sean Dague wrote:
>>> On 08/28/2014 01:58 PM, Jay Pipes wrote:
>>>> On 08/27/2014 11:34 AM, Doug Hellmann wrote:
>>>>> 
>>>>> On Aug 27, 2014, at 8:51 AM, Thierry Carrez <thierry at openstack.org>
>>>>> wrote:
>>>>> 
>>>>>> Hi everyone,
>>>>>> 
>>>>>> I've been thinking about what changes we can bring to the Design
>>>>>> Summit format to make it more productive. I've heard the feedback
>>>>>> from the mid-cycle meetups and would like to apply some of those
>>>>>> ideas for Paris, within the constraints we have (already booked
>>>>>> space and time). Here is something we could do:
>>>>>> 
>>>>>> Day 1. Cross-project sessions / incubated projects / other
>>>>>> projects
>>>>>> 
>>>>>> I think that worked well last time. 3 parallel rooms where we can
>>>>>> address top cross-project questions, discuss the results of the
>>>>>> various experiments we conducted during juno. Don't hesitate to
>>>>>> schedule 2 slots for discussions, so that we have time to come to
>>>>>> the bottom of those issues. Incubated projects (and maybe "other"
>>>>>> projects, if space allows) occupy the remaining space on day 1, and
>>>>>> could occupy "pods" on the other days.
>>>>> 
>>>>> If anything, I'd like to have fewer cross-project tracks running
>>>>> simultaneously. Depending on which are proposed, maybe we can make
>>>>> that happen. On the other hand, cross-project issues is a big theme
>>>>> right now so maybe we should consider devoting more than a day to
>>>>> dealing with them.
>>>> 
>>>> I agree with Doug here. I'd almost say having a single cross-project
>>>> room, with serialized content would be better than 3 separate
>>>> cross-project tracks. By nature, the cross-project sessions will attract
>>>> developers that work or are interested in a set of projects that looks
>>>> like a big Venn diagram. By having 3 separate cross-project tracks, we
>>>> would increase the likelihood that developers would once more have to
>>>> choose among simultaneous sessions that they have equal interest in. For
>>>> Infra and QA folks, this likelihood is even greater...
>>>> 
>>>> I think I'd prefer a single cross-project track on the first day.
>>> 
>>> So the fallout of that is there will be 6 or 7 cross-project slots for
>>> the design summit. Maybe that's the right mix if the TC does a good job
>>> picking the top 5 things we want accomplished from a cross project
>>> standpoint during the cycle. But it's going to have to be a pretty
>>> directed pick. I think last time we had 21 slots, and with a couple of
>>> sessions doubled up that gave 19 sessions (about 30 - 35 proposals for
>>> that slot set).
>> 
>> I'm not sure that would be a bad thing :)
>> 
>> I think one of the reasons the mid-cycles have been successful is that
>> they have adequately limited the scope of discussions and I think by
>> doing our homework by fully vetting and voting on cross-project sessions
>> and being OK with saying "No, not this time.", we will be more
>> productive than if we had 20+ cross-project sessions.
>> 
>> Just my two cents, though..
> 
> I'm not sure it would be a bad thing either. I just wanted to be
> explicit about what we are saying the cross projects sessions are for in
> this case: the 5 key cross project activities the TC believes should be
> worked on this next cycle.

We've talked about several cross-project needs recently. Let's start a list of things we think we're ready to make significant progress on during Kilo (not just things we *need* to do, but things we think we *can* do *now*):

1. logging cleanup and standardization
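
As one illustration of what such standardization could mean in practice,
a minimal sketch, assuming oslo-incubator-style logging with translation
markers (the exact convention is still to be agreed):

    # Illustrative only: module-level logger, lazy formatting, and a
    # translation marker rather than string concatenation.
    from nova.openstack.common import log as logging
    from nova.openstack.common.gettextutils import _LW

    LOG = logging.getLogger(__name__)

    def reschedule(instance_uuid, host):
        LOG.warn(_LW("Rescheduling instance %(uuid)s from host %(host)s"),
                 {'uuid': instance_uuid, 'host': host})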


> 
> The other question is if we did that what's running in competition to
> cross project day? Is it another free form pod day for people not
> working on those things?

That seems like a good use of time.

> 
> 	-Sean
> 
>> 
>> -jay
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> -- 
> Sean Dague
> http://dague.net
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




------------------------------

Message: 13
Date: Thu, 28 Aug 2014 16:10:42 -0400
From: Matthew Treinish <mtreinish at kortar.org>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [QA] Picking a Name for the Tempest
	Library
Message-ID: <20140828201042.GA3782 at Sazabi.treinish>
Content-Type: text/plain; charset="us-ascii"

On Fri, Aug 22, 2014 at 11:26:25AM -0400, Matthew Treinish wrote:
> On Fri, Aug 15, 2014 at 03:14:21PM -0400, Matthew Treinish wrote:
> > Hi Everyone,
> > 
> > So as part of splitting out common functionality from tempest into a library [1]
> > we need to create a new repository. Which means we have the fun task of coming
> > up with something to name it. I personally thought we should call it:
> > 
> >  - mesocyclone
> > 
> > Which has the advantage of being a cloud/weather thing, and the name sort of
> > fits because it's a precursor to a tornado. Also, it's an available namespace on
> > both launchpad and pypi. But concern has been expressed both that it is a
> > bit on the long side (which might have 80 char line length implications) and
> > that it's unclear from the name what it does.
> > 
> > During the last QA meeting some alternatives were also brought up:
> > 
> >  - tempest-lib / lib-tempest
> >  - tsepmet
> >  - blackstorm
> >  - calm
> >  - tempit
> >  - integration-test-lib
> > 
> > (although I'm not entirely sure I remember which ones were serious suggestions
> > or just jokes)
> > 
> > So as a first step I figured that I'd bring it up on the ML to see if anyone had
> > any other suggestions. (or maybe get a consensus around one choice) I'll take
> > the list, check if the namespaces are available, and make a survey so that
> > everyone can vote and hopefully we'll have a clear choice for a name from that.
> > 
> 
> Since the consensus was for renaming tempest and making tempest the library name,
> which wasn't really feasible, I opened up a survey to poll everyone on which
> name to use:
> 
> https://www.surveymonkey.com/s/RLLZRGJ
> 
> The choices were taken from the initial list I posted and from the suggestions
> which people posted based on the availability of the names.
> 
> I'll keep it open for about a week, or until a clear favorite emerges.
> 

So I just closed the survey because one name had a commanding lead and in the
past 48hrs there was only 1 vote. The winner is tempest-lib, with 13 of 33
votes. The results from the survey are:

tempest-lib: 13
lib-tempest: 2
libtempest: 4
mesocyclone: 4
blackstorm: 4
caliban: 2
tempit: 3
pocovento: 1

-Matt Treinish

------------------------------

Message: 14
Date: Thu, 28 Aug 2014 20:13:28 +0000
From: Brandon Logan <brandon.logan at RACKSPACE.COM>
To: "openstack-dev at lists.openstack.org"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity
	and AntiAffinity
Message-ID: <1409256984.16118.26.camel at localhost>
Content-Type: text/plain; charset="utf-8"

Yeah, we were looking at the SameHost and DifferentHost filters, and
they will probably do what we need.  I was hoping we could do a
combination of both, but I believe we can make it work with those
filters.
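
For reference, a minimal sketch of what driving those filters looks like
from the API side, using python-novaclient (credentials and IDs are
placeholders):

    # Boot two VMs on different hosts via the 'different_host' hint,
    # which is honoured when DifferentHostFilter is enabled in nova.
    from novaclient import client

    nova = client.Client('2', USER, PASSWORD, TENANT, AUTH_URL)

    first = nova.servers.create(name='lb-vm-1', image=IMAGE_ID,
                                flavor=FLAVOR_ID)
    second = nova.servers.create(name='lb-vm-2', image=IMAGE_ID,
                                 flavor=FLAVOR_ID,
                                 scheduler_hints={'different_host': [first.id]})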

Thanks,
Brandon

On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> Brandon
> 
> 
> I am not sure how ready that nova feature is for general use and have
> asked our nova lead about that. He is on vacation but should be back
> by the start of next week. I believe this is the right approach for us
> moving forward.
> 
> 
> 
> We cannot make it mandatory to run the 2 filters, but we can say in the
> documentation that if these two filters aren't set we cannot
> guarantee anti-affinity or affinity. 
> 
> 
> The other way we can implement this is by using availability zones and
> host aggregates. This is one technique we use to make sure we deploy
> our in-cloud services in an HA model. This also would assume that the
> operator is setting up availability zones, which we can't assume.
> 
> 
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> 
> 
> 
> Sahara is currently using the following filters to support host
> affinity which is probably due to the fact that they did the work
> before ServerGroups. I am not advocating the use of those filters but
> just showing you that we can document the feature and it will be up to
> the operator to set it up to get the right behavior.
> 
> 
> Regards
> 
> 
> Susanne 
> 
> 
> 
> Anti-affinity
> One of the problems with Hadoop running on OpenStack is that there is no
> ability to control where a machine is actually running. We cannot be
> sure that two new virtual machines are started on different physical
> machines. As a result, any replication within the cluster is not reliable
> because all replicas may turn up on one physical machine.
> The anti-affinity feature provides the ability to explicitly tell Sahara to
> run specified processes on different compute nodes. This is especially
> useful for the Hadoop datanode process, to make HDFS replicas reliable.
> The Anti-Affinity feature requires certain scheduler filters to be
> enabled on Nova. Edit your /etc/nova/nova.conf in the following way:
> 
> [DEFAULT]
> 
> ...
> 
> scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> scheduler_default_filters=DifferentHostFilter,SameHostFilter
> This feature is supported by all plugins out of the box.
> 
> 
> http://docs.openstack.org/developer/sahara/userdoc/features.html
> 
> 
> 
> 
> 
> On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan
> <brandon.logan at rackspace.com> wrote:
>         Nova scheduler has ServerGroupAffinityFilter and
>         ServerGroupAntiAffinityFilter which do the colocation and
>         apolocation
>         for VMs.  I think this is something we've discussed before
>         about taking
>         advantage of nova's scheduling.  I need to verify that this
>         will work
>         with what we (RAX) plan to do, but I'd like to get everyone
>         else's
>         thoughts.  Also, if we do decide this works for everyone
>         involved,
>         should we make it mandatory that the nova-compute services are
>         running
>         these two filters?  I'm also trying to see if we can use this
>         to also do
>         our own colocation and apolocation on load balancers, but it
>         looks like
>         it will be a bit complex if it can even work.  Hopefully, I
>         can have
>         something definitive on that soon.
>         
>         Thanks,
>         Brandon
>         _______________________________________________
>         OpenStack-dev mailing list
>         OpenStack-dev at lists.openstack.org
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


------------------------------

Message: 15
Date: Thu, 28 Aug 2014 16:25:25 -0400
From: Jay Pipes <jaypipes at gmail.com>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID: <53FF9035.2060603 at gmail.com>
Content-Type: text/plain; charset=windows-1252; format=flowed

On 08/28/2014 04:05 PM, Chris Friesen wrote:
> On 08/28/2014 01:44 PM, Jay Pipes wrote:
>> On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
>
>>> I understand that reviews are a burden and very hard but it seems wrong
>>> that a BP with multiple positive reviews and no negative reviews is
>>> dropped because of what looks like indifference.
>>
>> I would posit that this is not actually indifference. The reason that
>> there may not have been >1 +2 from a core team member may very well have
>> been that the core team members did not feel that the blueprint's
>> priority was high enough to put before other work, or that the core team
>> members did not have the time to comment on the spec (due to them not
>> feeling the blueprint had the priority to justify the time to do a full
>> review).
>
> The overall "scheduler-lib" Blueprint is marked with a "high" priority
> at "http://status.openstack.org/release/".  Hopefully that would apply
> to sub-blueprints as well.

a) There are no sub-blueprints to that scheduler-lib blueprint

b) If there were sub-blueprints, that does not mean that they would 
necessarily take the same priority as their parent blueprint

c) There's no reason priorities can't be revisited when necessary

-jay



------------------------------

Message: 16
Date: Thu, 28 Aug 2014 20:42:39 +0000
From: "Dugger, Donald D" <donald.d.dugger at intel.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID:
	<6AF484C0160C61439DE06F17668F3BCB53381895 at ORSMSX114.amr.corp.intel.com>
	
Content-Type: text/plain; charset="us-ascii"

I would contend that that right there is an indication that there's a problem with the process.  You submit a BP and then you have no idea of what is happening and no way of addressing any issues.  If the priority is wrong I can explain why I think the priority should be higher; getting stonewalled leaves me with no idea what's wrong and no way to address any problems.

I think, in general, almost everyone is more than willing to adjust proposals based upon feedback.  Tell me what you think is wrong and I'll either explain why the proposal is correct or I'll change it to address the concerns.

Trying to deal with silence is really hard and really frustrating.  Especially given that we're not supposed to spam the mailing list, it's really hard to know what to do.  I don't know the solution, but we need to do something.  More core team members would help; maybe something like an automatic timeout where BPs/patches with no negative scores and no activity for a week get flagged for special handling.

I feel we need to change the process somehow.
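
To make the automatic-timeout idea concrete, a minimal sketch against
Gerrit's REST API (the query and project name are illustrative):

    # Flag open changes idle for a week with no negative review scores.
    import json
    import requests

    query = ('status:open age:1w -label:Code-Review<=-1 '
             'project:openstack/nova-specs')
    resp = requests.get('https://review.openstack.org/changes/',
                        params={'q': query, 'n': 50})

    # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI.
    for change in json.loads(resp.text[4:]):
        print(change['_number'], change['subject'])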

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-----Original Message-----
From: Jay Pipes [mailto:jaypipes at gmail.com] 
Sent: Thursday, August 28, 2014 1:44 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
> I'll try and not whine about my pet project but I do think there is a 
> problem here.  For the Gantt project to split out the scheduler there 
> is a crucial BP that needs to be implemented ( 
> https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP 
> has been rejected and we'll have to try again for Kilo.  My question 
> is did we do something wrong or is the process broken?
>
> Note that we originally proposed the BP on 4/23/14, went through 10 
> iterations to the final version on 7/25/14 and the final version got 
> three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to 
> specific people, we didn't get the second +2, hence the rejection.
>
> I understand that reviews are a burden and very hard but it seems 
> wrong that a BP with multiple positive reviews and no negative reviews 
> is dropped because of what looks like indifference.

I would posit that this is not actually indifference. The reason that there may not have been >1 +2 from a core team member may very well have been that the core team members did not feel that the blueprint's priority was high enough to put before other work, or that the core team members did not have the time to comment on the spec (due to them not feeling the blueprint had the priority to justify the time to do a full review).

Note that I'm not a core drivers team member.

Best,
-jay


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



------------------------------

Message: 17
Date: Thu, 28 Aug 2014 13:49:49 -0700
From: Stephen Balukoff <sbalukoff at bluebox.net>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity
	and	AntiAffinity
Message-ID:
	<CAAGw+ZroZfTctKRXJUKP1_Oy-6tXpr7sTXFD3ufu5B=+Pv9aRA at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

I'm trying to think of a use case that wouldn't be satisfied using those
filters and am not coming up with anything. As such, I don't see a problem
using them to fulfill our requirements around colocation and apolocation.

Stephen


On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan <brandon.logan at rackspace.com>
wrote:

> Yeah, we were looking at the SameHost and DifferentHost filters, and
> they will probably do what we need.  I was hoping we could do a
> combination of both, but I believe we can make it work with those
> filters.
>
> Thanks,
> Brandon
>
> On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> > Brandon
> >
> >
> > I am not sure how ready that nova feature is for general use and have
> > asked our nova lead about that. He is on vacation but should be back
> > by the start of next week. I believe this is the right approach for us
> > moving forward.
> >
> >
> >
> > We cannot make it mandatory to run the 2 filters, but we can say in the
> > documentation that if these two filters aren't set we cannot
> > guarantee anti-affinity or affinity.
> >
> >
> > The other way we can implement this is by using availability zones and
> > host aggregates. This is one technique we use to make sure we deploy
> > our in-cloud services in an HA model. This also would assume that the
> > operator is setting up availability zones, which we can't assume.
> >
> >
> >
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> >
> >
> >
> > Sahara is currently using the following filters to support host
> > affinity which is probably due to the fact that they did the work
> > before ServerGroups. I am not advocating the use of those filters but
> > just showing you that we can document the feature and it will be up to
> > the operator to set it up to get the right behavior.
> >
> >
> > Regards
> >
> >
> > Susanne
> >
> >
> >
> > Anti-affinity
> > One of the problems with Hadoop running on OpenStack is that there is no
> > ability to control where a machine is actually running. We cannot be
> > sure that two new virtual machines are started on different physical
> > machines. As a result, any replication within the cluster is not reliable
> > because all replicas may turn up on one physical machine.
> > The anti-affinity feature provides the ability to explicitly tell Sahara to
> > run specified processes on different compute nodes. This is especially
> > useful for the Hadoop datanode process, to make HDFS replicas reliable.
> > The Anti-Affinity feature requires certain scheduler filters to be
> > enabled on Nova. Edit your /etc/nova/nova.conf in the following way:
> >
> > [DEFAULT]
> >
> > ...
> >
> > scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> > scheduler_default_filters=DifferentHostFilter,SameHostFilter
> > This feature is supported by all plugins out of the box.
> >
> >
> > http://docs.openstack.org/developer/sahara/userdoc/features.html
> >
> >
> >
> >
> >
> > On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan
> > <brandon.logan at rackspace.com> wrote:
> >         Nova scheduler has ServerGroupAffinityFilter and
> >         ServerGroupAntiAffinityFilter which do the colocation and
> >         apolocation
> >         for VMs.  I think this is something we've discussed before
> >         about taking
> >         advantage of nova's scheduling.  I need to verify that this
> >         will work
> >         with what we (RAX) plan to do, but I'd like to get everyone
> >         else's
> >         thoughts.  Also, if we do decide this works for everyone
> >         involved,
> >         should we make it mandatory that the nova-compute services are
> >         running
> >         these two filters?  I'm also trying to see if we can use this
> >         to also do
> >         our own colocation and apolocation on load balancers, but it
> >         looks like
> >         it will be a bit complex if it can even work.  Hopefully, I
> >         can have
> >         something definitive on that soon.
> >
> >         Thanks,
> >         Brandon
> >         _______________________________________________
> >         OpenStack-dev mailing list
> >         OpenStack-dev at lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

------------------------------

Message: 18
Date: Thu, 28 Aug 2014 16:50:28 -0400
From: Ken Giusti <kgiusti at gmail.com>
To: openstack-dev at lists.openstack.org, markmc at redhat.com
Subject: Re: [openstack-dev] [oslo.messaging] Request to include AMQP
	1.0	support in Juno-3
Message-ID:
	<CAJoCO=O1-5v84fPK13KrK=2mqOowBvZmYifo+R0ayTm6vEm+PA at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

On Thu, 28 Aug 2014 13:36:46 +0100, Mark McLoughlin wrote:
> On Thu, 2014-08-28 at 13:24 +0200, Flavio Percoco wrote:
> > On 08/27/2014 03:35 PM, Ken Giusti wrote:
> > > Hi All,
> > >
> > > I believe Juno-3 is our last chance to get this feature [1] included
> > > into olso.messaging.
> > >
<SNIP!>
> >
> >
> > Hi Ken,
> >
> > Thanks a lot for your hard work here. As I stated in my last comment on
> > the driver's review, I think we should let this driver land and let
> > future patches improve it where/when needed.
> >
> > I agreed on letting the driver land as-is based on the fact that there
> > are patches already submitted ready to enable the gates for this driver.
>
> I feel bad that the driver has been in a pretty complete state for quite
> a while but hasn't received a whole lot of reviews. There's a lot of
> promise to this idea, so it would be ideal if we could unblock it.
>
> One thing I've been meaning to do this cycle is add concrete advice for
> operators on the state of each driver. I think we'd be a lot more
> comfortable merging this in Juno if we could somehow make it clear to
> operators that it's experimental right now. My idea was:
>
>   - Write up some notes which discusses the state of each driver e.g.
>
>       - RabbitMQ - the default, used by the majority of OpenStack
>         deployments, perhaps list some of the known bugs, particularly
>         around HA.
>
>       - Qpid - suitable for production, but used in a limited number of
>         deployments. Again, list known issues. Mention that it will
>         probably be removed when the amqp10 driver matures.
>
>       - Proton/AMQP 1.0 - experimental, in active development, will
>         support  multiple brokers and topologies, perhaps a pointer to a
>         wiki page with the current TODO list
>
>       - ZeroMQ - unmaintained and deprecated, planned for removal in
>         Kilo

Sounds like a plan - I'll take on the Qpid and Proton notes.  I've
been (trying) to keep the status of the Proton stuff up to date on the
blueprint page:

https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation

Is there a more appropriate home for these notes?  Etherpad?

>
>   - Propose this addition to the API docs and ask the operators list
>     for feedback
>
>   - Propose a patch which adds a load-time deprecation warning to the
>     ZeroMQ driver
>
>   - Include a load-time experimental warning in the proton driver

Done!
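
For readers following along, the shape of such a warning is roughly the
following; a minimal sketch, not the actual merged patch:

    # Log an 'experimental' warning when the driver is loaded.
    import logging

    LOG = logging.getLogger(__name__)

    class ProtonDriver(object):
        def __init__(self, conf, url):
            LOG.warning('The AMQP 1.0 (proton) driver is experimental '
                        'and not yet recommended for production use.')
            self.conf = conf
            self.url = url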

>
> Thoughts on that?
>
> (I understand the ZeroMQ situation needs further discussion - I don't
> think that's on-topic for the thread, I was just using it as example of
> what kind of advice we'd be giving in these docs)
>
> Mark.
>
> -
Ken Giusti  (kgiusti at gmail.com)



------------------------------

Message: 19
Date: Thu, 28 Aug 2014 14:58:28 -0600
From: Chris Friesen <chris.friesen at windriver.com>
To: <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID: <53FF97F4.5080607 at windriver.com>
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed

On 08/28/2014 02:25 PM, Jay Pipes wrote:
> On 08/28/2014 04:05 PM, Chris Friesen wrote:
>> On 08/28/2014 01:44 PM, Jay Pipes wrote:
>>> On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
>>
>>>> I understand that reviews are a burden and very hard but it seems wrong
>>>> that a BP with multiple positive reviews and no negative reviews is
>>>> dropped because of what looks like indifference.
>>>
>>> I would posit that this is not actually indifference. The reason that
>>> there may not have been >1 +2 from a core team member may very well have
>>> been that the core team members did not feel that the blueprint's
>>> priority was high enough to put before other work, or that the core team
>>> members did not have the time to comment on the spec (due to them not
>>> feeling the blueprint had the priority to justify the time to do a full
>>> review).
>>
>> The overall "scheduler-lib" Blueprint is marked with a "high" priority
>> at "http://status.openstack.org/release/".  Hopefully that would apply
>> to sub-blueprints as well.
>
> a) There are no sub-blueprints to that scheduler-lib blueprint

I guess my terminology was wrong.  The original email referred to 
"https://review.openstack.org/#/c/89893/" as the "crucial BP that needs 
to be implemented".  That is part of 
"https://review.openstack.org/#/q/topic:bp/isolate-scheduler-db,n,z", 
which is listed as a Gerrit topic in the "scheduler-lib" blueprint that 
I pointed out.

> b) If there were sub-blueprints, that does not mean that they would
> necessarily take the same priority as their parent blueprint

I'm not sure how that would work.  If we have a high-priority blueprint 
depending on work that is considered low-priority, that would seem to 
set up a classic priority inversion scenario.

> c) There's no reason priorities can't be revisited when necessary

Sure, but in that case it might be a good idea to make the updated 
priority explicit.

Chris



------------------------------

Message: 20
Date: Thu, 28 Aug 2014 17:02:20 -0400
From: Jay Pipes <jaypipes at gmail.com>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID: <53FF98DC.7070108 at gmail.com>
Content-Type: text/plain; charset=windows-1252; format=flowed


On 08/28/2014 04:42 PM, Dugger, Donald D wrote:
> I would contend that that right there is an indication that there's a
> problem with the process.  You submit a BP and then you have no idea
> of what is happening and no way of addressing any issues.  If the
> priority is wrong I can explain why I think the priority should be
> higher, getting stonewalled leaves me with no idea what's wrong and
> no way to address any problems.
>
> I think, in general, almost everyone is more than willing to adjust
> proposals based upon feedback.  Tell me what you think is wrong and
> I'll either explain why the proposal is correct or I'll change it to
> address the concerns.

In many of the Gantt IRC meetings as well as the ML, I and others have 
repeatedly raised concerns about the scheduler split being premature and 
not a priority compared to the cleanup of the internal interfaces around 
the resource tracker and scheduler. This feedback was echoed in the 
mid-cycle meetup session as well. Sylvain and I have begun the work of 
cleaning up those interfaces and fixing the bugs around non-versioned 
data structures and inconsistent calling interfaces in the scheduler and 
resource tracker. Progress is being made towards these things.

> Trying to deal with silence is really hard and really frustrating.
> Especially given that we're not supposed to spam the mailing list, it's
> really hard to know what to do.  I don't know the solution but we
> need to do something.  More core team members would help, maybe
> something like an automatic timeout where BPs/patches with no
> negative scores and no activity for a week get flagged for special
> handling.

Yes, I think flagging blueprints for special handling would be a good 
thing. Keep in mind, though, that there are an enormous number of 
proposed specifications, with the vast majority of folks only caring 
about their own proposed specs, and very few doing reviews on anything 
other than their own patches or specific area of interest.

Doing reviews on other folks' patches and blueprints would certainly 
help in this regard. If cores only see someone contributing to a small, 
isolated section of the code or only to their own blueprints/patches, 
they generally tend to implicitly down-play that person's reviews in 
favor of patches/blueprints from folks that are reviewing non-related 
patches and contributing to reduce the total review load.

I understand your frustration about the silence, but the silence from 
core team members may actually be a loud statement about where their 
priorities are.

Best,
-jay

> I feel we need to change the process somehow.
>
> -- Don Dugger "Censeo Toto nos in Kansa esse decisse." - D. Gale Ph:
> 303/443-3786
>
> -----Original Message----- From: Jay Pipes
> [mailto:jaypipes at gmail.com] Sent: Thursday, August 28, 2014 1:44 PM
> To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev]
> [nova] Is the BP approval process broken?
>
> On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
>> I'll try and not whine about my pet project but I do think there is
>> a problem here.  For the Gantt project to split out the scheduler
>> there is a crucial BP that needs to be implemented (
>> https://review.openstack.org/#/c/89893/ ) and, unfortunately, the
>> BP has been rejected and we'll have to try again for Kilo.  My
>> question is did we do something wrong or is the process broken?
>>
>> Note that we originally proposed the BP on 4/23/14, went through
>> 10 iterations to the final version on 7/25/14 and the final version
>> got three +1s and a +2 by 8/5.  Unfortunately, even after reaching
>> out to specific people, we didn't get the second +2, hence the
>> rejection.
>>
>> I understand that reviews are a burden and very hard but it seems
>> wrong that a BP with multiple positive reviews and no negative
>> reviews is dropped because of what looks like indifference.
>
> I would posit that this is not actually indifference. The reason that
> there may not have been >1 +2 from a core team member may very well
> have been that the core team members did not feel that the
> blueprint's priority was high enough to put before other work, or
> that the core team members did not have the time to comment on the spec
> (due to them not feeling the blueprint had the priority to justify
> the time to do a full review).
>
> Note that I'm not a core drivers team member.
>
> Best, -jay
>
>
> _______________________________________________ OpenStack-dev mailing
> list OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________ OpenStack-dev mailing
> list OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>




------------------------------

Message: 21
Date: Thu, 28 Aug 2014 21:12:26 +0000
From: Brandon Logan <brandon.logan at RACKSPACE.COM>
To: "openstack-dev at lists.openstack.org"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity
	and AntiAffinity
Message-ID: <1409260523.16118.35.camel at localhost>
Content-Type: text/plain; charset="utf-8"

Trevor and I just worked through some scenarios to make sure it can
handle colocation and apolocation.  It looks like it does; however, not
everything will be so simple, especially when we introduce horizontal
scaling.  Trevor's going to write up an email about some of the caveats,
but so far just using a table to track which LB has which VMs and on
what hosts will be sufficient.
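
For illustration, the sort of tracking table meant here, as an in-memory
stand-in (in practice it would presumably be a database table; all names
are hypothetical):

    # Placement table: load balancer id -> {vm_id: host}.
    placement = {}

    def record_vm(lb_id, vm_id, host):
        placement.setdefault(lb_id, {})[vm_id] = host

    def hosts_for(lb_id):
        # Hosts already used by an LB, for anti-affinity checks.
        return set(placement.get(lb_id, {}).values())

    record_vm('lb-1', 'vm-a', 'compute-1')
    record_vm('lb-1', 'vm-b', 'compute-2')
    assert hosts_for('lb-1') == {'compute-1', 'compute-2'}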

Thanks,
Brandon

On Thu, 2014-08-28 at 13:49 -0700, Stephen Balukoff wrote:
> I'm trying to think of a use case that wouldn't be satisfied using
> those filters and am not coming up with anything. As such, I don't see
> a problem using them to fulfill our requirements around colocation and
> apolocation.
> 
> 
> Stephen
> 
> 
> On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan
> <brandon.logan at rackspace.com> wrote:
>         Yeah we were looking at the SameHost and DifferentHost filters
>         and that
>         will probably do what we need.  Though I was hoping we could
>         do a
>         combination of both but we can make it work with those filters
>         I
>         believe.
>         
>         Thanks,
>         Brandon
>         
>         On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
>         > Brandon
>         >
>         >
>         > I am not sure how ready that nova feature is for general use
>         and have
>         > asked our nova lead about that. He is on vacation but should
>         be back
>         > by the start of next week. I believe this is the right
>         approach for us
>         > moving forward.
>         >
>         >
>         >
>         > We cannot make it mandatory to run the 2 filters but we can
>         say in the
>         > documentation that if these two filters aren't set that we
>         cannot
>         > guarantee Anti-affinity or Affinity.
>         >
>         >
>         > The other way we can implement this is by using availability
>         zones and
>         > host aggregates. This is one technique we use to make sure
>         we deploy
>         > our in-cloud services in an HA model. This also would assume
>         that the
>         > operator is setting up Availability zones which we can't.
>         >
>         >
>         >
>         http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
>         >
>         >
>         >
>         > Sahara is currently using the following filters to support
>         host
>         > affinity which is probably due to the fact that they did the
>         work
>         > before ServerGroups. I am not advocating the use of those
>         filters but
>         > just showing you that we can document the feature and it
>         will be up to
>         > the operator to set it up to get the right behavior.
>         >
>         >
>         > Regards
>         >
>         >
>         > Susanne
>         >
>         >
>         >
>         > Anti-affinity
>         > One of the problems in Hadoop running on OpenStack is that
>         there is no
>         > ability to control where a machine is actually running. We
>         cannot be
>         > sure that two new virtual machines are started on different
>         physical
>         > machines. As a result, any replication within the cluster is not
>         reliable
>         > because all replicas may turn up on one physical machine.
>         > Anti-affinity feature provides an ability to explicitly tell
>         Sahara to
>         > run specified processes on different compute nodes. This is
>         especially
>         > useful for Hadoop datanode process to make HDFS replicas
>         reliable.
>         > The Anti-Affinity feature requires certain scheduler filters
>         to be
>         > enabled on Nova. Edit your /etc/nova/nova.conf in the
>         following way:
>         >
>         > [DEFAULT]
>         >
>         > ...
>         >
>         >
>         scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
>         > scheduler_default_filters=DifferentHostFilter,SameHostFilter
>         > This feature is supported by all plugins out of the box.
>         >
>         >
>         >
>         http://docs.openstack.org/developer/sahara/userdoc/features.html
>         >
>         >
>         >
>         >
>         >
>         > On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan
>         > <brandon.logan at rackspace.com> wrote:
>         >         Nova scheduler has ServerGroupAffinityFilter and
>         >         ServerGroupAntiAffinityFilter which do the
>         colocation and
>         >         apolocation
>         >         for VMs.  I think this is something we've discussed
>         before
>         >         about taking
>         >         advantage of nova's scheduling.  I need to verify
>         that this
>         >         will work
>         >         with what we (RAX) plan to do, but I'd like to get
>         everyone
>         >         else's
>         >         thoughts.  Also, if we do decide this works for
>         everyone
>         >         involved,
>         >         should we make it mandatory that the nova-compute
>         services are
>         >         running
>         >         these two filters?  I'm also trying to see if we can
>         use this
>         >         to also do
>         >         our own colocation and apolocation on load
>         balancers, but it
>         >         looks like
>         >         it will be a bit complex if it can even work.
>         Hopefully, I
>         >         can have
>         >         something definitive on that soon.
>         >
>         >         Thanks,
>         >         Brandon
>         >         _______________________________________________
>         >         OpenStack-dev mailing list
>         >         OpenStack-dev at lists.openstack.org
>         >
>          http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>         >
>         >
>         > _______________________________________________
>         > OpenStack-dev mailing list
>         > OpenStack-dev at lists.openstack.org
>         >
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>         
>         _______________________________________________
>         OpenStack-dev mailing list
>         OpenStack-dev at lists.openstack.org
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>         
> 
> 
> 
> 
> -- 
> Stephen Balukoff 
> Blue Box Group, LLC 
> (800)613-4305 x807
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


------------------------------

Message: 22
Date: Thu, 28 Aug 2014 15:17:36 -0600
From: Chris Friesen <chris.friesen at windriver.com>
To: <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID: <53FF9C70.7040106 at windriver.com>
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed

On 08/28/2014 03:02 PM, Jay Pipes wrote:

> I understand your frustration about the silence, but the silence from
> core team members may actually be a loud statement about where their
> priorities are.

Or it could be that they haven't looked at it, aren't aware of it, or 
haven't been paying attention.

I think it would be better to make feedback explicit and remove any 
uncertainty/ambiguity.

Chris



------------------------------

Message: 23
Date: Thu, 28 Aug 2014 17:36:54 -0400
From: Susanne Balle <sleipnir012 at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity
	and	AntiAffinity
Message-ID:
	<CADBYD+zrC98Rn6vtmsHpDrRr9FPDR2KC+f6e8ZV+J+mLAROUQQ at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

We need to be careful. I believe that, in the case of nova, a user can use
these filters to keep requesting VMs in order to probe the size of your cloud.

Also, given that nova now has ServerGroups, let's not make a quick decision
to use something that is being replaced with something better. I suggest we
investigate ServerGroups a little more before we discard them.

The operator should really decide how he/she wants anti-affinity by setting
the right filters in nova.
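
For comparison with the per-boot hints, a minimal sketch of the
ServerGroups route via python-novaclient (credentials and IDs are
placeholders):

    # Anti-affinity via a server group: every member booted with the
    # 'group' hint lands on a distinct host when
    # ServerGroupAntiAffinityFilter is enabled in nova.
    from novaclient import client

    nova = client.Client('2', USER, PASSWORD, TENANT, AUTH_URL)

    group = nova.server_groups.create(name='octavia-lb-1',
                                      policies=['anti-affinity'])
    vm = nova.servers.create(name='lb-vm-1', image=IMAGE_ID,
                             flavor=FLAVOR_ID,
                             scheduler_hints={'group': group.id})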

Susanne


On Thu, Aug 28, 2014 at 5:12 PM, Brandon Logan <brandon.logan at rackspace.com>
wrote:

> Trevor and I just worked through some scenarios to make sure it can
> handle colocation and apolocation.  It looks like it does; however, not
> everything will be so simple, especially when we introduce horizontal
> scaling.  Trevor's going to write up an email about some of the caveats,
> but so far just using a table to track which LB has which VMs and on
> what hosts will be sufficient.
>
> Thanks,
> Brandon
>
> On Thu, 2014-08-28 at 13:49 -0700, Stephen Balukoff wrote:
> > I'm trying to think of a use case that wouldn't be satisfied using
> > those filters and am not coming up with anything. As such, I don't see
> > a problem using them to fulfill our requirements around colocation and
> > apolocation.
> >
> >
> > Stephen
> >
> >
> > On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan
> > <brandon.logan at rackspace.com> wrote:
> >         Yeah we were looking at the SameHost and DifferentHost filters
> >         and that
> >         will probably do what we need.  Though I was hoping we could
> >         do a
> >         combination of both but we can make it work with those filters
> >         I
> >         believe.
> >
> >         Thanks,
> >         Brandon
> >
> >         On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> >         > Brandon
> >         >
> >         >
> >         > I am not sure how ready that nova feature is for general use
> >         and have
> >         > asked our nova lead about that. He is on vacation but should
> >         be back
> >         > by the start of next week. I believe this is the right
> >         approach for us
> >         > moving forward.
> >         >
> >         >
> >         >
> >         > We cannot make it mandatory to run the 2 filters but we can
> >         say in the
> >         > documentation that if these two filters aren't set that we
> >         cannot
> >         > guarantee Anti-affinity or Affinity.
> >         >
> >         >
> >         > The other way we can implement this is by using availability
> >         zones and
> >         > host aggregates. This is one technique we use to make sure
> >         we deploy
> >         > our in-cloud services in an HA model. This also would assume
> >         that the
> >         > operator is setting up Availability zones which we can't.
> >         >
> >         >
> >         >
> >
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> >         >
> >         >
> >         >
> >         > Sahara is currently using the following filters to support
> >         host
> >         > affinity which is probably due to the fact that they did the
> >         work
> >         > before ServerGroups. I am not advocating the use of those
> >         filters but
> >         > just showing you that we can document the feature and it
> >         will be up to
> >         > the operator to set it up to get the right behavior.
> >         >
> >         >
> >         > Regards
> >         >
> >         >
> >         > Susanne
> >         >
> >         >
> >         >
> >         > Anti-affinity
> >         > One of the problems in Hadoop running on OpenStack is that
> >         there is no
> >         > ability to control where a machine is actually running. We
> >         cannot be
> >         > sure that two new virtual machines are started on different
> >         physical
> >         > machines. As a result, any replication within the cluster is not
> >         reliable
> >         > because all replicas may turn up on one physical machine.
> >         > Anti-affinity feature provides an ability to explicitly tell
> >         Sahara to
> >         > run specified processes on different compute nodes. This is
> >         especially
> >         > useful for Hadoop datanode process to make HDFS replicas
> >         reliable.
> >         > The Anti-Affinity feature requires certain scheduler filters
> >         to be
> >         > enabled on Nova. Edit your /etc/nova/nova.conf in the
> >         following way:
> >         >
> >         > [DEFAULT]
> >         >
> >         > ...
> >         >
> >         >
> >         scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> >         > scheduler_default_filters=DifferentHostFilter,SameHostFilter
> >         > This feature is supported by all plugins out of the box.
> >         >
> >         >
> >         >
> >         http://docs.openstack.org/developer/sahara/userdoc/features.html
> >         >
> >         >
> >         >
> >         >
> >         >
> >         > On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan
> >         > <brandon.logan at rackspace.com> wrote:
> >         >         Nova scheduler has ServerGroupAffinityFilter and
> >         >         ServerGroupAntiAffinityFilter which do the
> >         colocation and
> >         >         apolocation
> >         >         for VMs.  I think this is something we've discussed
> >         before
> >         >         about taking
> >         >         advantage of nova's scheduling.  I need to verify
> >         that this
> >         >         will work
> >         >         with what we (RAX) plan to do, but I'd like to get
> >         everyone
> >         >         else's
> >         >         thoughts.  Also, if we do decide this works for
> >         everyone
> >         >         involved,
> >         >         should we make it mandatory that the nova-compute
> >         services are
> >         >         running
> >         >         these two filters?  I'm also trying to see if we can
> >         use this
> >         >         to also do
> >         >         our own colocation and apolocation on load
> >         balancers, but it
> >         >         looks like
> >         >         it will be a bit complex if it can even work.
> >         Hopefully, I
> >         >         can have
> >         >         something definitive on that soon.
> >         >
> >         >         Thanks,
> >         >         Brandon
> >         >         _______________________________________________
> >         >         OpenStack-dev mailing list
> >         >         OpenStack-dev at lists.openstack.org
> >         >
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >         >
> >         >
> >         > _______________________________________________
> >         > OpenStack-dev mailing list
> >         > OpenStack-dev at lists.openstack.org
> >         >
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >         _______________________________________________
> >         OpenStack-dev mailing list
> >         OpenStack-dev at lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >
> > --
> > Stephen Balukoff
> > Blue Box Group, LLC
> > (800)613-4305 x807
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140828/41b92595/attachment-0001.html>

------------------------------

Message: 24
Date: Thu, 28 Aug 2014 21:42:58 +0000
From: Alan Kavanagh <alan.kavanagh at ericsson.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID:
	<C977B257ADF8814C8EB4FB66BB9D0C2E6E86A8 at eusaamb109.ericsson.se>
Content-Type: text/plain; charset="us-ascii"

I don't think silence ever helps; it's better to respond, even if it is to disagree, one on one with the person.
Alan

-----Original Message-----
From: Jay Pipes [mailto:jaypipes at gmail.com] 
Sent: August-28-14 11:02 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?


On 08/28/2014 04:42 PM, Dugger, Donald D wrote:
> I would contend that that right there is an indication that there's a 
> problem with the process.  You submit a BP and then you have no idea 
> of what is happening and no way of addressing any issues.  If the 
> priority is wrong I can explain why I think the priority should be 
> higher, getting stonewalled leaves me with no idea what's wrong and no 
> way to address any problems.
>
> I think, in general, almost everyone is more than willing to adjust 
> proposals based upon feedback.  Tell me what you think is wrong and 
> I'll either explain why the proposal is correct or I'll change it to 
> address the concerns.

In many of the Gantt IRC meetings as well as the ML, I and others have repeatedly raised concerns about the scheduler split being premature and not a priority compared to the cleanup of the internal interfaces around the resource tracker and scheduler. This feedback was echoed in the mid-cycle meetup session as well. Sylvain and I have begun the work of cleaning up those interfaces and fixing the bugs around non-versioned data structures and inconsistent calling interfaces in the scheduler and resource tracker. Progress is being made towards these things.

> Trying to deal with silence is really hard and really frustrating.
> Especially given that we're not supposed to spam the mailing list, it's 
> really hard to know what to do.  I don't know the solution but we need 
> to do something.  More core team members would help, maybe something 
> like an automatic timeout where BPs/patches with no negative scores 
> and no activity for a week get flagged for special handling.

Yes, I think flagging blueprints for special handling would be a good thing. Keep in mind, though, that there are an enormous number of proposed specifications, with the vast majority of folks only caring about their own proposed specs, and very few doing reviews on anything other than their own patches or specific area of interest.

Doing reviews on other folks' patches and blueprints would certainly help in this regard. If cores only see someone contributing to a small, isolated section of the code or only to their own blueprints/patches, they generally tend to implicitly down-play that person's reviews in favor of patches/blueprints from folks that are reviewing non-related patches and contributing to reduce the total review load.

I understand your frustration about the silence, but the silence from core team members may actually be a loud statement about where their priorities are.

Best,
-jay

> I feel we need to change the process somehow.
>
> -- Don Dugger "Censeo Toto nos in Kansa esse decisse." - D. Gale Ph:
> 303/443-3786
>
> -----Original Message----- From: Jay Pipes [mailto:jaypipes at gmail.com] 
> Sent: Thursday, August 28, 2014 1:44 PM
> To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] 
> [nova] Is the BP approval process broken?
>
> On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
>> I'll try and not whine about my pet project but I do think there is a 
>> problem here.  For the Gantt project to split out the scheduler there 
>> is a crucial BP that needs to be implemented ( 
>> https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP 
>> has been rejected and we'll have to try again for Kilo.  My question 
>> is did we do something wrong or is the process broken?
>>
>> Note that we originally proposed the BP on 4/23/14, went through
>> 10 iterations to the final version on 7/25/14 and the final version 
>> got three +1s and a +2 by 8/5.  Unfortunately, even after reaching 
>> out to specific people, we didn't get the second +2, hence the 
>> rejection.
>>
>> I understand that reviews are a burden and very hard but it seems 
>> wrong that a BP with multiple positive reviews and no negative 
>> reviews is dropped because of what looks like indifference.
>
> I would posit that this is not actually indifference. The reason that 
> there may not have been >1 +2 from a core team member may very well 
> have been that the core team members did not feel that the blueprint's 
> priority was high enough to put before other work, or that the core 
> team members did not have the time to comment on the spec (due to them not 
> feeling the blueprint had the priority to justify the time to do a 
> full review).
>
> Note that I'm not a core drivers team member.
>
> Best, -jay
>
>
> _______________________________________________ OpenStack-dev mailing 
> list OpenStack-dev at lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________ OpenStack-dev mailing 
> list OpenStack-dev at lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



------------------------------

Message: 25
Date: Thu, 28 Aug 2014 21:43:03 +0000
From: Alan Kavanagh <alan.kavanagh at ericsson.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID:
	<C977B257ADF8814C8EB4FB66BB9D0C2E6E86F8 at eusaamb109.ericsson.se>
Content-Type: text/plain; charset="us-ascii"

+1, that would be the most pragmatic way to address this. Silence has different meanings to different people; a response would clarify the ambiguity and misunderstanding.
/Alan

-----Original Message-----
From: Chris Friesen [mailto:chris.friesen at windriver.com] 
Sent: August-28-14 11:18 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On 08/28/2014 03:02 PM, Jay Pipes wrote:

> I understand your frustration about the silence, but the silence from 
> core team members may actually be a loud statement about where their 
> priorities are.

Or it could be that they haven't looked at it, aren't aware of it, or haven't been paying attention.

I think it would be better to make feedback explicit and remove any uncertainty/ambiguity.

Chris

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



------------------------------

Message: 26
Date: Thu, 28 Aug 2014 21:43:20 +0000
From: Alan Kavanagh <alan.kavanagh at ericsson.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID:
	<C977B257ADF8814C8EB4FB66BB9D0C2E6E871E at eusaamb109.ericsson.se>
Content-Type: text/plain; charset="us-ascii"

I share Donald's points here. I believe what would help is to clearly describe in the Wiki the process and workflow for the BP approval process, build into it how to deal with discrepancies/disagreements, and set timeframes for each stage, including a process of appeal.
The current process would benefit from some fine tuning, with safeguards and time limits/deadlines built in so folks can expect responses within a reasonable time and are not left waiting in the cold.
My 2cents!
/Alan

-----Original Message-----
From: Dugger, Donald D [mailto:donald.d.dugger at intel.com] 
Sent: August-28-14 10:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

I would contend that that right there is an indication that there's a problem with the process.  You submit a BP and then you have no idea of what is happening and no way of addressing any issues.  If the priority is wrong I can explain why I think the priority should be higher, getting stonewalled leaves me with no idea what's wrong and no way to address any problems.

I think, in general, almost everyone is more than willing to adjust proposals based upon feedback.  Tell me what you think is wrong and I'll either explain why the proposal is correct or I'll change it to address the concerns.

Trying to deal with silence is really hard and really frustrating.  Especially given that we're not supposed to spam the mailing list, it's really hard to know what to do.  I don't know the solution but we need to do something.  More core team members would help, maybe something like an automatic timeout where BPs/patches with no negative scores and no activity for a week get flagged for special handling.

I feel we need to change the process somehow.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-----Original Message-----
From: Jay Pipes [mailto:jaypipes at gmail.com]
Sent: Thursday, August 28, 2014 1:44 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
> I'll try and not whine about my pet project but I do think there is a 
> problem here.  For the Gantt project to split out the scheduler there 
> is a crucial BP that needs to be implemented ( 
> https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP 
> has been rejected and we'll have to try again for Kilo.  My question 
> is did we do something wrong or is the process broken?
>
> Note that we originally proposed the BP on 4/23/14, went through 10 
> iterations to the final version on 7/25/14 and the final version got 
> three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to 
> specific people, we didn't get the second +2, hence the rejection.
>
> I understand that reviews are a burden and very hard but it seems 
> wrong that a BP with multiple positive reviews and no negative reviews 
> is dropped because of what looks like indifference.

I would posit that this is not actually indifference. The reason that there may not have been >1 +2 from a core team member may very well have been that the core team members did not feel that the blueprint's priority was high enough to put before other work, or that the core team members did not have the time to comment on the spec (due to them not feeling the blueprint had the priority to justify the time to do a full review).

Note that I'm not a core drivers team member.

Best,
-jay


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



------------------------------

Message: 27
Date: Thu, 28 Aug 2014 21:47:33 +0000
From: Alan Kavanagh <alan.kavanagh at ericsson.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] [neutron] Specs for K release
Message-ID:
	<C977B257ADF8814C8EB4FB66BB9D0C2E6E87AD at eusaamb109.ericsson.se>
Content-Type: text/plain; charset="us-ascii"

That's a fairly good point, Michael, and if that can be correlated to the proposed incubation section for that project then I believe this would help alleviate a lot of frustration and help folks understand what to expect and what the next steps are.
How do we get this formulated and agreed on so we can have it approved and proceed?
/Alan
-----Original Message-----
From: Michael Still [mailto:mikal at stillhq.com] 
Sent: August-28-14 6:51 PM
To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] Specs for K release

On Thu, Aug 28, 2014 at 6:53 AM, Daniel P. Berrange <berrange at redhat.com> wrote:
> On Thu, Aug 28, 2014 at 11:51:32AM +0000, Alan Kavanagh wrote:
>> How do we handle specs that have slipped through the cracks
>> and did not make it for Juno?
>
> Rebase the proposal so it is under the 'kilo' directory path
> instead of 'juno' and submit it for review again. Make sure
> to keep the ChangeId line intact so people see the history
> of any review comments in the earlier Juno proposal.

Yes, but...

I think we should talk about tweaking the structure of the juno
directory. Something like having proposed, approved, and implemented
directories. That would provide better signalling to operators about
what we actually did, what we thought we'd do, and what we didn't do.

I worry that gerrit is a terrible place to archive the things which
were proposed but not approved. If someone else wants to pick something
up later, it's super hard for them to find.
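
(For concreteness, Daniel's re-proposal step is roughly the following
sketch, assuming the usual nova-specs layout and git-review workflow;
the change number and spec filename are placeholders:)

    # Download the unmerged Juno spec change from Gerrit:
    git review -d 12345
    # Move the spec under the kilo directory and amend the same commit,
    # so the Change-Id footer (and with it the review history) is kept:
    git mv specs/juno/my-spec.rst specs/kilo/my-spec.rst
    git commit --amend --no-edit
    git review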

Michael

-- 
Rackspace Australia

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



------------------------------

Message: 28
Date: Thu, 28 Aug 2014 17:57:24 -0400
From: Susanne Balle <sleipnir012 at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [neutron][lbaas][octavia]
Message-ID:
	<CADBYD+wg5P-x73FquMHYskfkrGXPz=Sk83m7OC2t6fnyDcHwZQ at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

 I would like to discuss the pros and cons of putting Octavia into the
Neutron LBaaS incubator project right away. If it is going to be the
reference implementation for LBaaS v2, then I believe Octavia belongs in
the Neutron LBaaS v2 incubator.

The Pros:
* Octavia is in OpenStack incubation right away along with the LBaaS v2
code. We do not have to apply for incubation later on.
* As an incubated project we have our own core team and should be able to
commit our code
* We are starting out as an OpenStack incubated project

The Cons:
* Not sure of the velocity of the project
* Incubation not well defined.

If Octavia starts as a standalone StackForge project, we are assuming that
it would be looked on favorably when the time comes to move it into
incubated status.

Susanne
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140828/f3d356a9/attachment-0001.html>

------------------------------

Message: 29
Date: Thu, 28 Aug 2014 15:01:28 -0700
From: Joe Gordon <joe.gordon0 at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID:
	<CAHXdxOeQymM1VohNNUy37yLCse=33+R188v4tHBkVnCuVwd=bg at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh <alan.kavanagh at ericsson.com>
wrote:

> I share Donald's points here. I believe what would help is to clearly
> describe in the Wiki the process and workflow for the BP approval process,
> build into it how to deal with discrepancies/disagreements, and set
> timeframes for each stage, including a process of appeal.
> The current process would benefit from some fine tuning, with safeguards
> and time limits/deadlines built in so folks can expect responses within a
> reasonable time and are not left waiting in the cold.
>


This is a resource problem: the nova team simply does not have enough
people doing enough reviews to make this possible.


> My 2cents!
> /Alan
>
> -----Original Message-----
> From: Dugger, Donald D [mailto:donald.d.dugger at intel.com]
> Sent: August-28-14 10:43 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
>
> I would contend that that right there is an indication that there's a
> problem with the process.  You submit a BP and then you have no idea of
> what is happening and no way of addressing any issues.  If the priority is
> wrong I can explain why I think the priority should be higher, getting
> stonewalled leaves me with no idea what's wrong and no way to address any
> problems.
>
> I think, in general, almost everyone is more than willing to adjust
> proposals based upon feedback.  Tell me what you think is wrong and I'll
> either explain why the proposal is correct or I'll change it to address the
> concerns.
>
> Trying to deal with silence is really hard and really frustrating.
> Especially given that we're not supposed to spam the mailing list, it's really
> hard to know what to do.  I don't know the solution but we need to do
> something.  More core team members would help, maybe something like an
> automatic timeout where BPs/patches with no negative scores and no activity
> for a week get flagged for special handling.
>
> I feel we need to change the process somehow.
>
> --
> Don Dugger
> "Censeo Toto nos in Kansa esse decisse." - D. Gale
> Ph: 303/443-3786
>
> -----Original Message-----
> From: Jay Pipes [mailto:jaypipes at gmail.com]
> Sent: Thursday, August 28, 2014 1:44 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
>
> On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
> > I'll try and not whine about my pet project but I do think there is a
> > problem here.  For the Gantt project to split out the scheduler there
> > is a crucial BP that needs to be implemented (
> > https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP
> > has been rejected and we'll have to try again for Kilo.  My question
> > is did we do something wrong or is the process broken?
> >
> > Note that we originally proposed the BP on 4/23/14, went through 10
> > iterations to the final version on 7/25/14 and the final version got
> > three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to
> > specific people, we didn't get the second +2, hence the rejection.
> >
> > I understand that reviews are a burden and very hard but it seems
> > wrong that a BP with multiple positive reviews and no negative reviews
> > is dropped because of what looks like indifference.
>
> I would posit that this is not actually indifference. The reason that
> there may not have been >1 +2 from a core team member may very well have
> been that the core team members did not feel that the blueprint's priority
> was high enough to put before other work, or that the core team members did
> not have the time to comment on the spec (due to them not feeling the blueprint
> had the priority to justify the time to do a full review).
>
> Note that I'm not a core drivers team member.
>
> Best,
> -jay
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140828/a99f0085/attachment-0001.html>

------------------------------

Message: 30
Date: Thu, 28 Aug 2014 18:04:28 -0400
From: Susanne Balle <sleipnir012 at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
Message-ID:
	<CADBYD+yAyjXkmmQd3xX3NNW4Bcg5MDCadUPgWwVRwr815P0PqQ at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Just for us to learn about the incubator status, here is some info
on incubation:

https://wiki.openstack.org/wiki/Governance/Approved/Incubation
https://wiki.openstack.org/wiki/Governance/NewProjects

Susanne


On Thu, Aug 28, 2014 at 5:57 PM, Susanne Balle <sleipnir012 at gmail.com>
wrote:

>  I would like to discuss the pros and cons of putting Octavia into the
> Neutron LBaaS incubator project right away. If it is going to be the
> reference implementation for LBaaS v2, then I believe Octavia belongs in
> the Neutron LBaaS v2 incubator.
>
> The Pros:
> * Octavia is in OpenStack incubation right away along with the LBaaS v2
> code. We do not have to apply for incubation later on.
> * As an incubated project we have our own core team and should be able to
> commit our code
> * We are starting out as an OpenStack incubated project
>
> The Cons:
> * Not sure of the velocity of the project
> * Incubation not well defined.
>
> If Octavia starts as a standalone StackForge project, we are assuming that
> it would be looked on favorably when the time comes to move it into
> incubated status.
>
> Susanne
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140828/aa07a721/attachment-0001.html>

------------------------------

Message: 31
Date: Fri, 29 Aug 2014 02:13:08 +0400
From: Boris Pavlovic <bpavlovic at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID:
	<CAD85om0pnwc+HD4Tb5AZLiOF+=gUyqVS4f-1h2bFwZt9GDrkTw at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Joe,


> This is a resource problem: the nova team simply does not have enough
> people doing enough reviews to make this possible.


Adding more bureaucracy (specs) in such a case is not the best way to
resolve team throughput issues...

my 2cents


Best regards,
Boris Pavlovic


On Fri, Aug 29, 2014 at 2:01 AM, Joe Gordon <joe.gordon0 at gmail.com> wrote:

>
>
>
> On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh <alan.kavanagh at ericsson.com
> > wrote:
>
>> I share Donald's points here. I believe what would help is to clearly
>> describe in the Wiki the process and workflow for the BP approval process,
>> build into it how to deal with discrepancies/disagreements, and set
>> timeframes for each stage, including a process of appeal.
>> The current process would benefit from some fine tuning, with safeguards
>> and time limits/deadlines built in so folks can expect responses within a
>> reasonable time and are not left waiting in the cold.
>>
>
>
> This is a resource problem: the nova team simply does not have enough
> people doing enough reviews to make this possible.
>
>
>> My 2cents!
>> /Alan
>>
>> -----Original Message-----
>> From: Dugger, Donald D [mailto:donald.d.dugger at intel.com]
>> Sent: August-28-14 10:43 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
>>
>> I would contend that that right there is an indication that there's a
>> problem with the process.  You submit a BP and then you have no idea of
>> what is happening and no way of addressing any issues.  If the priority is
>> wrong I can explain why I think the priority should be higher, getting
>> stonewalled leaves me with no idea what's wrong and no way to address any
>> problems.
>>
>> I think, in general, almost everyone is more than willing to adjust
>> proposals based upon feedback.  Tell me what you think is wrong and I'll
>> either explain why the proposal is correct or I'll change it to address the
>> concerns.
>>
>> Trying to deal with silence is really hard and really frustrating.
>> Especially given that we're not supposed to spam the mailing list, it's really
>> hard to know what to do.  I don't know the solution but we need to do
>> something.  More core team members would help, maybe something like an
>> automatic timeout where BPs/patches with no negative scores and no activity
>> for a week get flagged for special handling.
>>
>> I feel we need to change the process somehow.
>>
>> --
>> Don Dugger
>> "Censeo Toto nos in Kansa esse decisse." - D. Gale
>> Ph: 303/443-3786
>>
>> -----Original Message-----
>> From: Jay Pipes [mailto:jaypipes at gmail.com]
>> Sent: Thursday, August 28, 2014 1:44 PM
>> To: openstack-dev at lists.openstack.org
>> Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
>>
>> On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
>> > I'll try and not whine about my pet project but I do think there is a
>> > problem here.  For the Gantt project to split out the scheduler there
>> > is a crucial BP that needs to be implemented (
>> > https://review.openstack.org/#/c/89893/ ) and, unfortunately, the BP
>> > has been rejected and we'll have to try again for Kilo.  My question
>> > is did we do something wrong or is the process broken?
>> >
>> > Note that we originally proposed the BP on 4/23/14, went through 10
>> > iterations to the final version on 7/25/14 and the final version got
>> > three +1s and a +2 by 8/5.  Unfortunately, even after reaching out to
>> > specific people, we didn't get the second +2, hence the rejection.
>> >
>> > I understand that reviews are a burden and very hard but it seems
>> > wrong that a BP with multiple positive reviews and no negative reviews
>> > is dropped because of what looks like indifference.
>>
>> I would posit that this is not actually indifference. The reason that
>> there may not have been >1 +2 from a core team member may very well have
>> been that the core team members did not feel that the blueprint's priority
>> was high enough to put before other work, or that the core team members did
>> have the time to comment on the spec (due to them not feeling the blueprint
>> had the priority to justify the time to do a full review).
>>
>> Note that I'm not a core drivers team member.
>>
>> Best,
>> -jay
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140829/b0032726/attachment-0001.html>

------------------------------

Message: 32
Date: Thu, 28 Aug 2014 16:27:59 -0600
From: Chris Friesen <chris.friesen at windriver.com>
To: <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID: <53FFACEF.7040400 at windriver.com>
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed

On 08/28/2014 04:01 PM, Joe Gordon wrote:
>
>
>
> On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh
> <alan.kavanagh at ericsson.com <mailto:alan.kavanagh at ericsson.com>> wrote:
>
>     I share Donald's points here. I believe what would help is to
>     clearly describe in the Wiki the process and workflow for the BP
>     approval process, build into it how to deal with
>     discrepancies/disagreements, and set timeframes for each stage,
>     including a process of appeal.
>     The current process would benefit from some fine tuning, with
>     safeguards and time limits/deadlines built in so folks can expect
>     responses within a reasonable time and are not left waiting in the cold.
>
>
> This is a resource problem: the nova team simply does not have enough
> people doing enough reviews to make this possible.

All the more reason to make it obvious which reviews are not being 
addressed in a timely fashion.  (I'm thinking something akin to the 
order screen at a fast food restaurant that starts blinking in red and 
beeping if an order hasn't been filled in a certain amount of time.)

Perhaps making it clear that reviews are a bottleneck will actually help 
to address the problem.
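
(As a rough sketch of how such stale reviews could be surfaced today,
using Gerrit's ssh query interface; the project chosen and the one-week
threshold are arbitrary:)

    # List open nova-specs changes not updated for a week or more;
    # "age" matches time since the change was last updated.
    ssh -p 29418 review.openstack.org gerrit query \
        --format=JSON 'status:open project:openstack/nova-specs age:1w'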

Chris




------------------------------

Message: 33
Date: Thu, 28 Aug 2014 22:32:14 +0000
From: Alan Kavanagh <alan.kavanagh at ericsson.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?
Message-ID:
	<C977B257ADF8814C8EB4FB66BB9D0C2E6E880E at eusaamb109.ericsson.se>
Content-Type: text/plain; charset="us-ascii"

+1, my sentiments exactly; this will actually help folks contribute in a more meaningful and productive way.
/Alan

-----Original Message-----
From: Chris Friesen [mailto:chris.friesen at windriver.com] 
Sent: August-29-14 12:28 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova] Is the BP approval process broken?

On 08/28/2014 04:01 PM, Joe Gordon wrote:
>
>
>
> On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh 
> <alan.kavanagh at ericsson.com <mailto:alan.kavanagh at ericsson.com>> wrote:
>
>     I share Donald's points here. I believe what would help is to
>     clearly describe in the Wiki the process and workflow for the BP
>     approval process, build into it how to deal with
>     discrepancies/disagreements, and set timeframes for each stage,
>     including a process of appeal.
>     The current process would benefit from some fine tuning, with
>     safeguards and time limits/deadlines built in so folks can expect
>     responses within a reasonable time and are not left waiting in the cold.
>
>
> This is a resource problem: the nova team simply does not have enough 
> people doing enough reviews to make this possible.

All the more reason to make it obvious which reviews are not being addressed in a timely fashion.  (I'm thinking something akin to the order screen at a fast food restaurant that starts blinking in red and beeping if an order hasn't been filled in a certain amount of time.)

Perhaps making it clear that reviews are a bottleneck will actually help to address the problem.

Chris


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



------------------------------

Message: 34
Date: Fri, 29 Aug 2014 08:36:45 +1000
From: James Polley <jp at jamezpolley.com>
To: "OpenStack Development Mailing List (not for usage questions)"
	<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [all] [ptls] The Czar system,	or how to
	scale PTLs
Message-ID:
	<CAPtRfUFqhMCcbuqn7QU-gkumchnj5qLOQarXWVJnN_BTy5s9ug at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Thu, Aug 28, 2014 at 10:40 PM, Thierry Carrez <thierry at openstack.org>
wrote:

> James Polley wrote:
> >>>         Point of clarification:  I've heard PTL=Project Technical Lead
> >>>         and PTL=Program Technical Lead. Which is it?  It is kind of
> >>>         important as OpenStack grows, because the first is responsible
> >>>         for *a* project, and the second is responsible for all projects
> >>>         within a program.
> >>
> >>     Now Program, formerly Project.
> >
> > I think this is worthy of more exploration. Our docs seem to be very
> > inconsistent about what a PTL is - and more broadly, what the difference
> > is between a Project and a Program.
> >
> > Just a few examples:
> >
> > https://wiki.openstack.org/wiki/PTLguide says "Program Technical
> > Lead". https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
> > simply says PTL - but does say that each PTL is elected by/for a
> > Program. However, Thierry pointed
> > to https://wiki.openstack.org/wiki/Governance/Foundation/Structure which
> > still refers to Project Technical Leads and says explicitly that they
> > lead individual projects, not programs. I actually have edit access to
> > that page, so I could at least update that with a simple
> > "s/Project/Program/", if I was sure that was the right thing to do.
>
> Don't underestimate how stale wiki pages can become! Yes, fix it.
>

I don't know if I've fixed it, but I've certainly replaced all uses of the
word Project with Program.

Whether or not it now matches reality, I'm not sure.

I also removed (what I assume is) a stale reference to the PPB and added a
new heading for the TC.


> > http://www.openstack.org/ has a link in the bottom nav that says
> > "Projects"; it points to http://www.openstack.org/projects/ which
> > redirects to http://www.openstack.org/software/ which has a list of
> > things like "Compute" and "Storage" - which as far as I know are
> > Programs, not Projects. I don't know how to update that link in the nav
> > panel.
>
> That's because the same word ("compute") is used for two different
> things: a program name ("Compute") and an "official OpenStack name" for
> a project ("OpenStack Compute a.k.a. Nova"). Basically official
> OpenStack names reduce confusion for newcomers ("What is Nova ?"), but
> they confuse old-timers at some point ("so the Compute program produces
> Nova a.k.a. OpenStack Compute ?").
>

That's confusing to me. I had thought that part of the reason for the
separation was to enable a level of indirection - if the Compute program
team decide that a new project called (for example) SuperNova should be the
main project, that just means that OpenStack Compute is now a pointer to a
different project, supported by the same program team.

It sounds like that isn't the intent though?


> > I wasn't around when the original Programs/Projects discussion was
> > happening - which, I suspect, has a lot to do with why I'm confused
> > today - it seems as though people who were around at the time understand
> > the difference, but people who have joined since then are relying on
> > multiple conflicting verbal definitions. I believe, though,
> > that
> http://lists.openstack.org/pipermail/openstack-dev/2013-June/010821.html
> > was one of the earliest starting points of the discussion. That page
> > points at https://wiki.openstack.org/wiki/Projects, which today contains
> > a list of Programs. That page does have a definition of what a Program
> > is, but doesn't explain what a Project is or how they relate to
> > Programs. This page seems to be locked down, so I can't edit it.
>
> https://wiki.openstack.org/wiki/Projects was renamed to
> https://wiki.openstack.org/wiki/Programs with the wiki helpfully leaving
> a redirect behind. So the content you are seeing here is the "Programs"
> wiki page, which is why it doesn't define "projects".
>
> We don't really use the word "project" that much anymore, we prefer to
> talk about code repositories. Programs are teams working on a set of
> code repositories. Some of those code repositories may appear in the
> integrated release.
>

This explanation of the difference between projects and programs sounds
like it would be useful to add to /Programs - but I can't edit that page.

>
> > That page does mention projects, once. The context makes it read, to me,
> > as though a program can follow one process to "become part of OpenStack"
> > and then another process to "become an Integrated project and part of
> > the OpenStack coordinated release" - when my understanding of reality is
> > that the second process applies to Projects, not Programs.
> >
> > I've tried to find any other page that talks about what a Project is and
> > how they relate to Programs, but I haven't been able to find anything.
> > Perhaps there's some definition locked up in a mailing list thread or
> > some TC minutes, but I haven't been able to find it.
> >
> > During the previous megathread, I got the feeling that at least some of
> > the differing viewpoints we saw were possibly down to some people
> > thinking of a PTL as responsible for just one project, while others
> > think of a PTL as being responsible for any projects that might fit
> > within a Program's scope. As we approach the next PTL elections, I think
> > it would be helpful for us to recap the discussions that led to the
> > Program/Project split and make sure our docs are consistent, so that
> > people who weren't following the discussion this time last year can
> > still be clear what they're voting for.
>
> Programs are just acknowledging that code repositories should be
> organized in the way that makes the most sense technically. They should
> not be artificially organized to match our governance structure.
>
> Before programs existed, it was difficult for teams to organize their
> code the way they wanted, because there was only one code repository
> ("The Project"), so everything had to be in it. Then we added an
> exception for the Python client projects, so the Nova team could work on
> the Nova project *and* the Python client for it. But then it made sense
> to organize the code differently, so rather than continuing to add
> exceptions (which you can see traces of at stale page [1]), the easiest
> way to organize that was to just say that a given team could rule a set
> of code repositories, and organize them as they preferred.
>
> So teams, organized around a clear mission statement, could decide which
> code repositories they wanted to organize their code in. We call those
> teams "programs".
>
> [1] https://wiki.openstack.org/wiki/ProjectTypes


I *can* edit that page; I'd like to bring it up-to-date. It seems like a
good basis for explaining the difference between Programs and Projects and
the historical reasons for the split. I'll aim to take a stab at this next
week.


>
>
> --
> Thierry Carrez (ttx)
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140829/9bc09913/attachment.html>

------------------------------

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


End of OpenStack-dev Digest, Vol 28, Issue 92
*********************************************

