From stig.openstack at telfer.org Wed Aug 1 09:06:18 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 1 Aug 2018 10:06:18 +0100 Subject: [Openstack-sigs] [scientific] IRC meeting today 1100UTC - CFPs and so on Message-ID: <26D9E0E2-05C8-4FEB-9088-6730000B9035@telfer.org> Hi All - The Scientific SIG has an IRC meeting coming up at 1100UTC (in about 2 hours' time) in channel #openstack-meeting. Everyone is welcome. It's a fairly quiet week - we have a couple of events to announce plus AOB. https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_August_1st_2018 Hope to see you there, Stig -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Thu Aug 2 13:11:06 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 2 Aug 2018 21:11:06 +0800 Subject: [Openstack-sigs] [publiccloud-wg]WG bi-weekly Meeting Today Message-ID: Hi Folks, We will have our EU-friendly WG meeting roughly 50 minutes from now, starting at 1400 UTC in #openstack-publiccloud -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From msm at redhat.com Thu Aug 2 16:49:39 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 2 Aug 2018 12:49:39 -0400 Subject: [Openstack-sigs] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Today's meeting was primarily focused around two topics: the IETF[7] draft proposal for Best Practices when building HTTP protocols[8], and the upcoming OpenStack Project Teams Gathering (PTG)[9].
The group had taken a collective action to read the aforementioned draft[8], and as such we were well prepared to discuss its nuances. For the most part, we agreed that the draft is a good preparatory text when approaching HTTP APIs and that we should provide a link to it from the guidelines. Although there are a few areas that we identified as points of discussion regarding the text of the draft, on balance it was seen as helpful to the OpenStack community and consistent with our established guidelines. On the topic of the PTG, the group has started planning for the event and is in the early stages of gathering content. We will soon have an etherpad available for topic collection, and as an added bonus mordred himself made a pronouncement about the API-SIG meeting being a priority in his schedule for this PTG. We hope to see you all there! The OpenStack infra team will be doing the final rename from API-WG to API-SIG this Friday. Although no issues are expected from this rename, we will be updating documentation references, and appreciate any help in chasing down bugs. There were no new guidelines to discuss, nor bugs that have arisen since last week. As always, if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community.
* None # Guidelines Currently Under Review [3] * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://ietf.org/ [8] https://tools.ietf.org/html/draft-ietf-httpbis-bcp56bis-06 [9] https://www.openstack.org/ptg/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From aspiers at suse.com Fri Aug 3 15:41:57 2018 From: aspiers at suse.com (Adam Spiers) Date: Fri, 3 Aug 2018 16:41:57 +0100 Subject: [Openstack-sigs] [openstack-dev] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff? 
In-Reply-To: References: Message-ID: <20180803154157.7h33v5pxdbbcmdtx@pacific.linksys.moosehall> [Adding openstack-sigs list too; apologies for the extreme cross-posting, but I think in this case the discussion deserves wide visibility. Happy to be corrected if there's a better way to handle this.] Hi James, James Page wrote: >Hi All > >tl;dr we (the original founders) have not managed to invest the time to get >the Upgrades SIG booted - time to hit reboot or time to poweroff? TL;DR response: reboot, absolutely no question! My full response is below. >Since Vancouver, two of the original SIG chairs have stepped down leaving >me in the hot seat with minimal participation from either deployment >projects or operators in the IRC meetings. In addition I've only been able >to make every 3rd IRC meeting, so they have generally not been happening. > >I think the current timing is not good for a lot of folk so finding a >better slot is probably a must-have if the SIG is going to continue - and >maybe moving to a monthly or bi-weekly schedule rather than the weekly slot >we have now. > >In addition I need some willing folk to help with leadership in the SIG. >If you have an interest and would like to help please let me know! > >I'd also like to better engage with all deployment projects - upgrades is >something that deployment tools should be looking to encapsulate as >features, so it would be good to get deployment projects engaged in the SIG >with nominated representatives. > >Based on the attendance in upgrades sessions in Vancouver and >developer/operator appetite to discuss all things upgrade at said sessions >I'm assuming that there is still interest in having a SIG for Upgrades but >I may be wrong! > >Thoughts? As a SIG leader in a similar position (albeit with one other very helpful person on board), let me throw my £0.02 in ...
With both upgrades and self-healing I think there is a big disparity between supply (developers with time to work on the functionality) and demand (operators who need the functionality). And perhaps also the high demand leads to a lot of developers being interested in the topic whilst not having much spare time to help out. That is probably why we both see high attendance at the summit / PTG events but relatively little activity in between. I also freely admit that the inevitable conflicts with downstream requirements mean that I have struggled to find time to be as proactive with driving momentum as I had wanted, although I'm hoping to pick this up again over the next weeks leading up to the PTG. It sounds like maybe you have encountered similar challenges. That said, I strongly believe that both of these SIGs offer a *lot* of value, and even if we aren't yet seeing the level of online activity that we would like, I think it's really important that they both continue. If for no other reasons, the offline sessions at the summits and PTGs are hugely useful for helping converge the community on common approaches, and the associated repositories / wikis serve as a great focal point too. Regarding online collaboration, yes, building momentum for IRC meetings is tough, especially with the timezone challenges. Maybe a monthly cadence is a reasonable starting point, or twice a month in alternating timezones - but maybe with both meetings within ~24 hours of each other, to reduce accidental creation of geographic silos. Another possibility would be to offer "open clinic" office hours, like the TC and other projects have done. If the TC or anyone else has established best practices in this space, it'd be great to hear them. Either way, I sincerely hope that you decide to continue with the SIG, and that other people step up to help out. These things don't develop overnight but it is a tremendously worthwhile initiative; after all, everyone needs to upgrade OpenStack. 
Keep the faith! ;-) Cheers, Adam From fungi at yuggoth.org Fri Aug 3 21:20:03 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 3 Aug 2018 21:20:03 +0000 Subject: [Openstack-sigs] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff? In-Reply-To: <20180803154157.7h33v5pxdbbcmdtx@pacific.linksys.moosehall> References: <20180803154157.7h33v5pxdbbcmdtx@pacific.linksys.moosehall> Message-ID: <20180803212003.5xosa4lqkub4kk2o@yuggoth.org> On 2018-08-03 16:41:57 +0100 (+0100), Adam Spiers wrote: [...] > Another possibility would be to offer "open clinic" office hours, > like the TC and other projects have done. If the TC or anyone > else has established best practices in this space, it'd be great > to hear them. [...] First and foremost, office hours shouldn't be about constraining when and where fruitful conversation can occur. Make sure people with a common interest in the topic know where to find each other for discussion at whatever times they happen to be available. When you have that, "office hours" are simply a means of coordinating and publicizing specific times when an increased number of participants expect to be around. This is especially useful for having consensus-building discussions more quickly than can be done asynchronously through people responding to comments they see in scrollback/logs or on mailing list threads. Some options I've seen toyed with: Using the meetbot to provide minutes of an office hour session... this tends to result in making the session feel a lot more formal and meeting-like, causing people to withhold conversation until the appointed hour (avoiding bringing things up earlier); also having a hard stop curtails continued discussion out of concern those comments won't make it into the meeting log. 
An alternative is to merely annotate discussion with some consistent tags so that they can be easily found in the channel log later, though this depends on people remembering to do that and also probably isn't much use unless you intend to build/publish retrospective summaries. The lack of a dedicated log for office hour discussions isn't a significant loss, since our HTML-based IRC log viewer site has the ability to deep-link to the time where you started the discussion anyway. Producing an agenda for the office hour in advance... much like using the meetbot, this has the effect of making things feel a lot more formal and dissuades participants from bringing up topics not covered by the agenda or allowing discussion to evolve organically from topic to topic. On the other hand, if you find that you have people showing up at office hour with nothing to talk about and that worries you (occasional dead office hours might be fine for some groups and not others), you can attempt to prepare a list of possible conversation starters to give people something to talk about for long enough to prompt them to continue on to other topics as they come to mind. Continuing discussion from recent mailing list threads, relevant code reviews, important bug reports or interesting presentations might be good candidates for this. Declaring multiple office hours at different times to increase participation from subgroups in a variety of timezones/regions... this might work depending on the group and topics, but you're just as likely to find that some of those times are consistently not well-attended while others are where a bulk of the discussions take place. If this is a model you're considering, you may find you need to move the consistently dead hours around or condense some; or you could accept that some of your designated office hours are just rarely used as long as that's not a huge time-waster for the handful of people who do show up for them only to find there's not much going on. 
Hope this helps! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amy at demarco.com Sat Aug 4 01:15:30 2018 From: amy at demarco.com (Amy Marrich) Date: Fri, 3 Aug 2018 20:15:30 -0500 Subject: [Openstack-sigs] New AUC Criteria Message-ID: *Are you an Active User Contributor (AUC)? Well you may be and not even know it! Historically, AUCs met the following criteria: - Organizers of Official OpenStack User Groups: from the Groups Portal- Active members and contributors to functional teams and/or working groups (currently also manually calculated for WGs not using IRC): from IRC logs- Moderators of any of the operators official meet-up sessions: Currently manually calculated.- Contributors to any repository under the UC governance: from Gerrit- Track chairs for OpenStack summits: from the Track Chair tool- Contributors to Superuser (articles, interviews, user stories, etc.): from the Superuser backend- Active moderators on ask.openstack.org : from Ask OpenStackIn July, the User Committee (UC) voted to add the following criteria to becoming an AUC in order to meet the needs of the evolving OpenStack Community. So in addition to the above ways, you can now earn AUC status by meeting the following: - User survey participants who completed a deployment survey- Ops midcycle session moderators- OpenStack Days organizers- SIG Members nominated by SIG leaders- Active Women of OpenStack participants- Active Diversity WG participantsWell that’s great you have met the requirements to become an AUC but what does that mean? AUCs can run for open UC positions and can vote in the elections. AUCs also receive a discounted $300 ticket for OpenStack Summit as well as having the coveted AUC insignia on your badge!* And remember nominations for the User Committee open on Monday, August 6 and end on August, 17 with voting August 20 to August 24. 
Amy Marrich (spotz) User Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Mon Aug 6 04:52:52 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 6 Aug 2018 12:52:52 +0800 Subject: [Openstack-sigs] [self-healing][all] Expose SIG to user/ops In-Reply-To: References: Message-ID: Bumping this topic again - would really like to hear from all :) On Wed, Jul 11, 2018 at 8:50 PM Rico Lin wrote: > Hi all > > As we went through some discussion from the Summit for the self-healing SIG, > collecting use cases is one of our goals for the Rocky cycle. > I keep thinking about how we can expose this SIG to users/ops and make that > a regular thing. > Here are some ideas that might help, and which might help other SIGs as > well: > > * Join the user survey: > It's possible for a SIG to propose questions for the user survey. > If we are going to do so, we should provide questions which can be answered by > selecting from options - let's say minimal writing is preferred. > So what should the questions be? I would like to hear ideas from everyone. > > * Expose our StoryBoard to users/ops: > Another idea is to expose our StoryBoard to users/ops. The OpenStack > community currently doesn't have any effective way to raise issues for > self-healing. If we expose StoryBoard to users/ops and allow them to raise > issues, they can file the entire story directly, instead of just reporting > part of the issue and usually getting the reply `Oh, that's XXX > project's issue, we've got nothing to do with it`. > Don't get this wrong: there is nothing blocking users from raising stories (issues) > in any project, including the self-healing SIG. But I believe specifically telling > users where they can drop a story to trigger cross-project discussion > is the right way, instead of telling them nothing and leaving users without any > clear way to deal with their issues. > Imagine that when you first join a > community, there is a line telling you: if you have a question about > self-healing/k8s/upgrade/etc., here is where you can raise the issue and > find help. > I imagine we would need people from the teams to be around to deal with > issues and respond to users/ops when they come. But as far as I know, we already > have attention from most of the teams concerned with self-healing. > I think in order to do so (if that's a good idea), we need someplace > better than the ML to tell users/ops where they can go when they > find their self-healing not working or need any help. Also, I think > this might apply to other SIGs too. > > * Build gate jobs for self-healing tasks: > We have some use cases that have already been demoed around self-healing, > like Vitrage+Mistral, Heat+Mistral+Aodh, etc. Also, some scenarios are > under development. I believe there is value in creating a periodic task, > or even a cross-project gate, to make sure we don't break the general > self-healing use cases. If we can do so, I think users/ops will have > better confidence that self-healing really works in OpenStack. > Also, we don't need to build a separate tempest plugin if we can find > projects willing to host those tests. Not speaking for the entire team, but > I think Heat might be able to provide something here. > > > Those are my proposals; please give your opinions. Thanks all. > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > -------------- next part -------------- An HTML attachment was scrubbed...
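[Editor's illustration] The gate-job idea above boils down to continuously exercising a monitor-detect-remediate loop against an injected fault. As a purely illustrative sketch - not code from Vitrage, Mistral, Heat, or any other existing project; the `Service` class and `heal_once` function are hypothetical stand-ins - the pattern such a periodic job would assert looks roughly like this:

```python
class Service:
    """Stand-in for a monitored component (e.g. a VM, agent, or process)."""

    def __init__(self):
        self.healthy = True

    def check(self):
        # In a real scenario this would be an alarm/health-check query.
        return self.healthy

    def restart(self):
        # In a real scenario this would be the remediation workflow.
        self.healthy = True


def heal_once(service, events):
    """One iteration of a monitor -> detect -> remediate loop.

    Appends the observed steps to `events` and returns the final health.
    """
    if not service.check():
        events.append("failure-detected")
        service.restart()
        events.append("remediation-applied")
    return service.check()


# A gate job would inject a fault and assert the loop restores health.
svc = Service()
events = []
svc.healthy = False  # simulated fault injection
assert heal_once(svc, events) is True
print(events)  # prints ['failure-detected', 'remediation-applied']
```

A real cross-project gate would replace the fake `Service` with actual fault injection (e.g. killing an instance) and assert that the monitoring/remediation chain fires, but the pass/fail condition is the same shape: fault in, healthy state and remediation events out.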
URL: From rico.lin.guanyu at gmail.com Mon Aug 6 05:43:11 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 6 Aug 2018 13:43:11 +0800 Subject: [Openstack-sigs] [self-healing] PTG etherpad Message-ID: Hi self-healing SIG folks and Adam As the PTG is near, I would like to trigger a discussion of what we should talk about at the PTG and what we are looking forward to achieving before the end of it. I created an etherpad to collect topics: https://etherpad.openstack.org/p/self-healing-sig-stein-ptg Let's add topics and ideas to it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Mon Aug 6 12:56:45 2018 From: aspiers at suse.com (Adam Spiers) Date: Mon, 6 Aug 2018 13:56:45 +0100 Subject: [Openstack-sigs] [self-healing] PTG etherpad In-Reply-To: References: Message-ID: <20180806125645.ixbc37lir4dsi62i@pacific.linksys.moosehall> Rico Lin wrote: >Hi self-healing SIG folks and Adam > >As the PTG is near, I would like to trigger a discussion of what we >should talk about at the PTG and what we are looking forward to achieving >before the end of it. > >I created an etherpad to collect topics: >https://etherpad.openstack.org/p/self-healing-sig-stein-ptg > >Let's add topics and ideas to it. Thanks a lot Rico! I finally have some time to catch up on self-healing work, and will reply to your other mails shortly. From aspiers at suse.com Mon Aug 6 15:44:20 2018 From: aspiers at suse.com (Adam Spiers) Date: Mon, 6 Aug 2018 16:44:20 +0100 Subject: [Openstack-sigs] [self-healing] OPNFV Barometer & Self-healing In-Reply-To: References: Message-ID: <20180806154420.2ogef3fxxv6szkrq@pacific.linksys.moosehall> Hi Sunku, Firstly, really sorry for the late reply. Ranganath, Sunku wrote: >Hi All, > >Telemetry and monitoring are a fundamental aspect required for implementing comprehensive Self-healing. >OPNFV Barometer provides the ability to monitor Network Function Virtualization Infrastructure (NFVI) to help with Service Assurance.
>Barometer home: https://wiki.opnfv.org/display/fastpath/Barometer+Home > >However the work done by the Barometer community helps out any cloud deployment by exposing various platform metrics, including but not limited to: > >- Last level cache > >- CPU metrics > >- Memory metrics > >- Thermals > >- Fan Speeds > >- Voltages > >- Machine check exceptions > >- RedFish support > >- IPMI > >- SNMP support > >- OVS metrics > >- DPDK metrics > >- PMU metrics > >- Prometheus support, etc. > >With the ongoing work, the Barometer community is looking forward to working with the Self-healing SIG in providing use cases and solutions and helping a wide variety of audiences. >Feel free to join the Barometer community with any questions/comments/feedback. Thanks a lot for the information and for reaching out to us! Would you be willing to act as the contact point between Barometer and the self-healing SIG, as explained in this etherpad? https://etherpad.openstack.org/p/self-healing-contacts Also, it would be great to document any existing integration points between Barometer and other related self-healing projects: https://etherpad.openstack.org/p/self-healing-project-integrations Will you or any other members of the Barometer community be at the Denver PTG? Thanks again :-) Adam From aspiers at suse.com Mon Aug 6 16:37:47 2018 From: aspiers at suse.com (Adam Spiers) Date: Mon, 6 Aug 2018 17:37:47 +0100 Subject: [Openstack-sigs] [self-healing] OPNFV Barometer & Self-healing In-Reply-To: <20180806154420.2ogef3fxxv6szkrq@pacific.linksys.moosehall> References: <20180806154420.2ogef3fxxv6szkrq@pacific.linksys.moosehall> Message-ID: <20180806163747.6naw234h5y66jrkd@pacific.linksys.moosehall> Adam Spiers wrote: >Hi Sunku, [snipped] >Would you be willing to act as the contact point between Barometer and >the self-healing SIG, as explained in this etherpad?
> > https://etherpad.openstack.org/p/self-healing-contacts Ignore this question - I see that you are already listed ;-) https://wiki.openstack.org/wiki/Self-healing_SIG#Project_contacts From ed at leafe.com Mon Aug 6 16:52:38 2018 From: ed at leafe.com (Ed Leafe) Date: Mon, 6 Aug 2018 11:52:38 -0500 Subject: [Openstack-sigs] UC nomination period is now open! Message-ID: <277DC0C9-C34D-47D9-B14F-81E41F136909@leafe.com> As the subject says, the nomination period for the summer[0] User Committee elections is now open. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election). Self-nomination is common; no third party nomination is required. Nominations are made by sending an email to the user-committee at lists.openstack.org mailing-list, with the subject: “UC candidacy” by August 17, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. [0] Sorry, southern hemisphere people! -- Ed Leafe From sunku.ranganath at intel.com Mon Aug 6 19:37:45 2018 From: sunku.ranganath at intel.com (Ranganath, Sunku) Date: Mon, 6 Aug 2018 19:37:45 +0000 Subject: [Openstack-sigs] [self-healing] OPNFV Barometer & Self-healing In-Reply-To: <20180806154420.2ogef3fxxv6szkrq@pacific.linksys.moosehall> References: <20180806154420.2ogef3fxxv6szkrq@pacific.linksys.moosehall> Message-ID: Hi Adam, Happy to be a contact point between Self-Healing SIG & Barometer. Updated the "Integration Point" etherpad. I will check with Barometer to see if anyone from Barometer community would be at the PTG. 
Thanks, Sunku Ranganath -----Original Message----- From: Adam Spiers [mailto:aspiers at suse.com] Sent: Monday, August 6, 2018 4:44 PM To: openstack-sigs at lists.openstack.org Cc: Ranganath, Sunku Subject: Re: [Openstack-sigs] [self-healing] OPNFV Barometer & Self-healing Hi Sunku, Firstly, really sorry for the late reply. Ranganath, Sunku wrote: >Hi All, > >Telemetry and monitoring are a fundamental aspect required for implementing comprehensive Self-healing. >OPNFV Barometer provides the ability to monitor Network Function Virtualization Infrastructure (NFVI) to help with Service Assurance. >Barometer home: https://wiki.opnfv.org/display/fastpath/Barometer+Home > >However the work done by the Barometer community helps out any cloud deployment by exposing various platform metrics including but not limited: > >- Last level cache > >- CPU metrics > >- Memory metrics > >- Thermals > >- Fan Speeds > >- Voltages > >- Machine check exceptions > >- RedFish support > >- IPMI > >- SNMP support > >- OVS metrics > >- DPDK metrics > >- PMU metrics > >- Prometheus support, etc. > >With the ongoing work, the Barometer community is looking forward to work with Self-healing SIG in providing use cases and solutions and help wide variety of audience. >Feel free to join in with Barometer community for more questions/comments/feedback. Thanks a lot for the information and for reaching out to us! Would you be willing to act as the contact point between Barometer and the self-healing SIG, as explained in this etherpad? https://etherpad.openstack.org/p/self-healing-contacts Also, it would be great to document any existing integration points between Barometer and other related self-healing projects: https://etherpad.openstack.org/p/self-healing-project-integrations Will you or any other members of the Barometer community be at the Denver PTG? 
Thanks again :-) Adam From aspiers at suse.com Mon Aug 6 21:16:13 2018 From: aspiers at suse.com (Adam Spiers) Date: Mon, 6 Aug 2018 22:16:13 +0100 Subject: [Openstack-sigs] [self-healing] OPNFV Barometer & Self-healing In-Reply-To: References: <20180806154420.2ogef3fxxv6szkrq@pacific.linksys.moosehall> Message-ID: <20180806211613.oq6epwwy5wt24cis@pacific.linksys.moosehall> Cool, thanks a lot Sunku! Ranganath, Sunku wrote: >Hi Adam, > >Happy to be a contact point between Self-Healing SIG & Barometer. >Updated the "Integration Point" etherpad. > >I will check with Barometer to see if anyone from Barometer community would be at the PTG. > >Thanks, >Sunku Ranganath > >-----Original Message----- >From: Adam Spiers [mailto:aspiers at suse.com] >Sent: Monday, August 6, 2018 4:44 PM >To: openstack-sigs at lists.openstack.org >Cc: Ranganath, Sunku >Subject: Re: [Openstack-sigs] [self-healing] OPNFV Barometer & Self-healing > >Hi Sunku, > >Firstly, really sorry for the late reply. > >Ranganath, Sunku wrote: >>Hi All, >> >>Telemetry and monitoring are a fundamental aspect required for implementing comprehensive Self-healing. >>OPNFV Barometer provides the ability to monitor Network Function Virtualization Infrastructure (NFVI) to help with Service Assurance. >>Barometer home: https://wiki.opnfv.org/display/fastpath/Barometer+Home >> >>However the work done by the Barometer community helps out any cloud deployment by exposing various platform metrics including but not limited: >> >>- Last level cache >> >>- CPU metrics >> >>- Memory metrics >> >>- Thermals >> >>- Fan Speeds >> >>- Voltages >> >>- Machine check exceptions >> >>- RedFish support >> >>- IPMI >> >>- SNMP support >> >>- OVS metrics >> >>- DPDK metrics >> >>- PMU metrics >> >>- Prometheus support, etc. >> >>With the ongoing work, the Barometer community is looking forward to work with Self-healing SIG in providing use cases and solutions and help wide variety of audience. 
>>Feel free to join in with Barometer community for more questions/comments/feedback. > >Thanks a lot for the information and for reaching out to us! > >Would you be willing to act as the contact point between Barometer and the self-healing SIG, as explained in this etherpad? > > https://etherpad.openstack.org/p/self-healing-contacts > >Also, it would be great to document any existing integration points between Barometer and other related self-healing projects: > > https://etherpad.openstack.org/p/self-healing-project-integrations > >Will you or any other members of the Barometer community be at the Denver PTG? > >Thanks again :-) > >Adam > From mriedemos at gmail.com Mon Aug 6 22:03:28 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 6 Aug 2018 17:03:28 -0500 Subject: [Openstack-sigs] [nova] StarlingX diff analysis Message-ID: In case you haven't heard, there was this StarlingX thing announced at the last summit. I have gone through the enormous nova diff in their repo and the results are in a spreadsheet [1]. Given the enormous spreadsheet (see a pattern?), I have further refined that into a set of high-level charts [2]. I suspect there might be some negative reactions to even doing this type of analysis lest it might seem like promoting throwing a huge pile of code over the wall and expecting the OpenStack (or more specifically the nova) community to pick it up. That's not my intention at all, nor do I expect nova maintainers to be responsible for upstreaming any of this. This is all educational to figure out what the major differences and overlaps are and what could be constructively upstreamed from the starlingx staging repo since it's not all NFV and Edge dragons in here, there are some legitimate bug fixes and good ideas. I'm sharing it because I want to feel like my time spent on this in the last week wasn't all for nothing. 
[1] https://docs.google.com/spreadsheets/d/1ugp1FVWMsu4x3KgrmPf7HGX8Mh1n80v-KVzweSDZunU/edit?usp=sharing [2] https://docs.google.com/presentation/d/1P-__JnxCFUbSVlEoPX26Jz6VaOyNg-jZbBsmmKA2f0c/edit?usp=sharing -- Thanks, Matt From gael.therond at gmail.com Tue Aug 7 06:10:53 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 7 Aug 2018 08:10:53 +0200 Subject: [Openstack-sigs] [Openstack-operators] [nova] StarlingX diff analysis In-Reply-To: References: Message-ID: Hi Matt, everyone, I just read your analysis and would like to thank you for this work. I really think there are numerous features included/used in this Nova rework that would be highly beneficial for Nova and its users. I hope people will fully appreciate your work. I didn’t have time to check the StarlingX code quality; how did it seem to you while you were doing your analysis? Thanks a lot for sharing this. I’ll have a closer look at it this afternoon, as my company may be interested in some features. Kind regards, G. On Tue, 7 Aug 2018 at 00:03, Matt Riedemann wrote: > In case you haven't heard, there was this StarlingX thing announced at > the last summit. I have gone through the enormous nova diff in their > repo and the results are in a spreadsheet [1]. Given the enormous > spreadsheet (see a pattern?), I have further refined that into a set of > high-level charts [2]. > > I suspect there might be some negative reactions to even doing this type > of analysis lest it might seem like promoting throwing a huge pile of > code over the wall and expecting the OpenStack (or more specifically the > nova) community to pick it up. That's not my intention at all, nor do I > expect nova maintainers to be responsible for upstreaming any of this. 
> > This is all educational to figure out what the major differences and > overlaps are and what could be constructively upstreamed from the > starlingx staging repo since it's not all NFV and Edge dragons in here, > there are some legitimate bug fixes and good ideas. I'm sharing it > because I want to feel like my time spent on this in the last week > wasn't all for nothing. > > [1] > > https://docs.google.com/spreadsheets/d/1ugp1FVWMsu4x3KgrmPf7HGX8Mh1n80v-KVzweSDZunU/edit?usp=sharing > [2] > > https://docs.google.com/presentation/d/1P-__JnxCFUbSVlEoPX26Jz6VaOyNg-jZbBsmmKA2f0c/edit?usp=sharing > > -- > > Thanks, > > Matt > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Aug 7 13:29:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 7 Aug 2018 08:29:04 -0500 Subject: [Openstack-sigs] [Openstack-operators] [nova] StarlingX diff analysis In-Reply-To: References: Message-ID: <45bd7236-b9f8-026d-620b-7356d4effa49@gmail.com> On 8/7/2018 1:10 AM, Flint WALRUS wrote: > I didn’t had time to check StarlingX code quality, how did you feel it > while you were doing your analysis? I didn't dig into the test diffs themselves, but it was my impression that from what I was poking around in the local git repo, there were several changes which didn't have any test coverage. For the really big full stack changes (L3 CAT, CPU scaling and shared/pinned CPUs on same host), toward the end I just started glossing over a lot of that because it's so much code in so many places, so I can't really speak very well to how it was written or how well it is tested (maybe WindRiver had a more robust CI system running integration tests, I don't know). 
There were also some things which would have been caught in code review upstream. For example, they ignore the "force" parameter for live migration so that live migration requests always go through the scheduler. However, the "force" parameter is only on newer microversions. Before that, if you specified a host at all it would bypass the scheduler, but the change didn't take that into account, so they still have gaps in some of the things they were trying to essentially disable in the API. On the whole I think the quality is OK. It's not really possible to accurately judge that when looking at a single diff this large. -- Thanks, Matt From mrhillsman at gmail.com Tue Aug 7 22:40:42 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Tue, 7 Aug 2018 17:40:42 -0500 Subject: [Openstack-sigs] OpenStack UC Elections Message-ID: Hi SIG Leads, UC election time is upon us and we would love to have the names of any active SIG members who would not otherwise qualify as ATCs, so we can add them as AUCs and ensure they have access to vote in the election. Details on the UC election can be found here - https://governance.openstack.org/uc/reference/uc-election-aug2018.html August 6 - August 17, 05:59 UTC: Open candidacy for UC positions August 20 - August 24, 11:59 UTC: UC elections (voting) You can reply to me directly or simply provide your AUCs via this etherpad - https://etherpad.openstack.org/p/sig-aucs -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Wed Aug 8 02:08:05 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 8 Aug 2018 10:08:05 +0800 Subject: [Openstack-sigs] [publiccloud-wg] Asia-EU friendly meeting today Message-ID: Hi team, A kind reminder for the UTC 7:00 meeting today; please do remember to register your nick with IRC, due to the new channel policy. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ed at leafe.com Wed Aug 8 13:16:49 2018 From: ed at leafe.com (Ed Leafe) Date: Wed, 8 Aug 2018 08:16:49 -0500 Subject: [Openstack-sigs] OpenStack UC Elections In-Reply-To: References: Message-ID: <8DFFB114-EE32-4A80-9854-F837F8986375@leafe.com> On Aug 7, 2018, at 5:40 PM, Melvin Hillsman wrote: > > UC election time is upon us and we would love to have active SIG members who would not fall under ATCs be provided so we can add them as AUCs and ensure they have access to vote in the election. The API-SIG has 4 core members, 3 of whom I know are ATC. The only core who I’m not sure about would be Michael McCune: msm at redhat.com -- Ed Leafe From msm at redhat.com Wed Aug 8 15:21:36 2018 From: msm at redhat.com (Michael McCune) Date: Wed, 8 Aug 2018 11:21:36 -0400 Subject: [Openstack-sigs] OpenStack UC Elections In-Reply-To: <8DFFB114-EE32-4A80-9854-F837F8986375@leafe.com> References: <8DFFB114-EE32-4A80-9854-F837F8986375@leafe.com> Message-ID: hey, i am not sure that i have ATC, my commits to upstream have been quite sparse the last few cycles. my main participation at this point is with the meetings and perhaps a few face-to-face interactions. peace o/ On Wed, Aug 8, 2018 at 9:17 AM Ed Leafe wrote: > > On Aug 7, 2018, at 5:40 PM, Melvin Hillsman wrote: > > > > UC election time is upon us and we would love to have active SIG members who would not fall under ATCs be provided so we can add them as AUCs and ensure they have access to vote in the election. > > The API-SIG has 4 core members, 3 of whom I know are ATC. 
The only core who I’m not sure about would be Michael McCune: msm at redhat.com > > > -- Ed Leafe > > > > > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From cdent+os at anticdent.org Thu Aug 9 16:44:03 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 9 Aug 2018 17:44:03 +0100 (BST) Subject: [Openstack-sigs] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, As is our recent custom, short meeting this week. Our main topic of conversation was discussing the planning etherpad [7] for the API-SIG gathering at the Denver PTG. If you will be there, and have topics of interest, please add them to the etherpad. There are no new guidelines under review, but there is a stack of changes which do some reformatting and explicitly link to useful resources [8]. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community. 
* None # Guidelines Currently Under Review [3] * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://etherpad.openstack.org/p/api-sig-stein-ptg [8] https://review.openstack.org/#/c/589131/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From amy at demarco.com Mon Aug 13 14:27:13 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 13 Aug 2018 09:27:13 -0500 Subject: [Openstack-sigs] User Committee Election Nominations Reminder Message-ID: Just wanted to remind everyone that the nomination period for the User Committee elections is open until August 17, 05:59 UTC. If you are an AUC and thinking about running, what's stopping you? If you know of someone who would make a great committee member, nominate them! Help make a difference for Operators, Users and the Community! Thanks, Amy Marrich (spotz) User Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Mon Aug 13 15:47:30 2018 From: aspiers at suse.com (Adam Spiers) Date: Mon, 13 Aug 2018 16:47:30 +0100 Subject: [Openstack-sigs] [self-healing][openstack-dev][heat][vitrage][mistral] Self-Healing with Vitrage, Heat, and Mistral In-Reply-To: References: Message-ID: <20180813154730.5w7lgrltggooqdow@pacific.linksys.moosehall> Hi Rico, Firstly sorry for the slow reply! I am finally catching up on my backlog. Rico Lin wrote: >Dear all > >Back to Vancouver Summit, Ifat brings out the idea of integrating Heat, >Vitrage, and Mistral to bring better self-healing scenario. 
>For previous works, There already works cross Heat, Mistral, and Zaqar for >self-healing [1]. >And there is works cross Vitrage, and Mistral [2]. >Now we plan to start working on integrating two works (as much as it >can/should be) and to make sure the scenario works and keep it working. >The integrated scenario flow will look something like this: >An existing monitor detect host/network failure and send an alarm to >Vitrage -> Vitrage deduces that the instance is down (based on the topology >and based on Vitrage templates [2]) -> Vitrage triggers Mistral to fix the >instance -> application is recovered >We created an Etherpad [3] to document all discussion/feedbacks/plans (and >will add more detail through time) >Also, create a story in self-healing SIG to track all task. > >The current plans are: > > - A spec for Vitrage resources in Heat [5] > - Create Vitrage resources in Heat > - Write Heat Template and Vitrage Template for this scenario > - A tempest task for above scenario > - Add periodic job for this scenario (with above task). The best place > to host this job (IMO) is under self-healing SIG This is great! It's a perfect example of the kind of cross-project collaboration which I always hoped the SIG would host. And I really love the idea of Heat making it even easier to deploy Vitrage templates automatically. Originally I thought that this would be too hard and that the SIG would initially need to focus on documenting how to manually deploy self-healing configurations, but supporting automation early on is a very nice bonus :-) So I expect that implementing this can make lives a lot easier for operators (and users) who need self-healing :-) And yes, I agree that the SIG would be the best place to host this job. >To create a periodic job for self-healing sig means we might also need a >place to manage those self-healing tempest test. 
For this scenario, I think >it will make sense if we use heat-tempest-plugin to store that scenario >test (since it will wrap as a Heat template) or use vitrage-tempest-plugin >(since most of the test scenario are actually already there). Sounds good. >Not sure what will happen if we create a new tempest plugin for >self-healing and no manager for it. Sorry for my ignorance - do you mean manager objects here[0], or some other kind of manager? [0] https://docs.openstack.org/tempest/latest/write_tests.html#manager-objects >We still got some uncertainty to clear during working on it, but the big >picture looks like all will works(if we doing all well on above tasks). >Please provide your feedback or question if you have any. >We do needs feedbacks and reviews on patches or any works. >If you're interested in this, please join us (we need users/ops/devs!). > >[1] https://github.com/openstack/heat-templates/tree/master/hot/autohealing >[2] >https://github.com/openstack/self-healing-sig/blob/master/specs/vitrage-mistral-integration.rst >[3] https://etherpad.openstack.org/p/self-healing-with-vitrage-mistral-heat >[4] https://storyboard.openstack.org/#!/story/2002684 >[5] https://review.openstack.org/#/c/578786 Thanks a lot for creating the story in Storyboard - this is really helpful :-) I'll try to help with reviews etc. and maybe even testing if I can find some extra time for it over the next few months. I can also try to help "market" this initiative in the community by promoting awareness and trying to get operators more involved. Thanks again! Excited about the direction this is heading in :-) Adam From gagehugo at gmail.com Mon Aug 13 15:53:38 2018 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 13 Aug 2018 10:53:38 -0500 Subject: [Openstack-sigs] [Security] No meeting August 16th Message-ID: Hello, Due to multiple members being out on PTO, the Security-SIG meeting will be canceled for August 16th. 
If anyone has a topic they would want to discuss this week, please feel free to reach out to us on #openstack-security. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Mon Aug 13 16:56:15 2018 From: aspiers at suse.com (Adam Spiers) Date: Mon, 13 Aug 2018 17:56:15 +0100 Subject: [Openstack-sigs] [self-healing][all] Expose SIG to user/ops In-Reply-To: References: Message-ID: <20180813165615.m7yltwe5zqkbjdx6@pacific.linksys.moosehall> Hi Rico, Rico Lin wrote: >bump this topic again, Would really like to hear from all:) Again sorry for the slow reply! >On Wed, Jul 11, 2018 at 8:50 PM Rico Lin wrote: >> As we went through some discussion form Summit for self-healing sig, To >> collect Use case is one of our goal in Rocky cycle. Exactly. >> Keep thinking how can we expose This SIG to users/ops and make this become >> a regular thing. Yes I agree - promoting visibility is really important in order for the SIG to gain momentum, and we need to keep pushing on a regular basis to achieve this. >> Here's some idea that might help, also might be able to help other SIG as >> well: >> >> ** Join user survey:* >> It's possible for SIG to propose options in User survey. >> If we going to do so, we should provide questions which can be answered by >> selecting from options or let's said minimal written is preferred. >> So what will the question be? Would like to hear from everyone for any >> idea. This sounds like a great idea to me! I've submitted a story for this: https://storyboard.openstack.org/#!/story/2003423 and set up an etherpad for brainstorming: https://etherpad.openstack.org/p/self-healing-user-survey-questions >> ** Expose our StoryBoard to user/ops* >> Another idea is to expose our StoryBoard to user/ops. OpenStack >> community currently didn't have any effective way to raise issues for >> self-healing. 
If we expose StoryBoard to user/ops to allow them to raise >> issues, users can directly file the entire story, instead of just reporting >> part of the issue and that usually reply with `Oh, that's XXX >> project's issue, we got nothing to do with it`. >> Don't get this wrong, we got nothing to block user to raise story(issues) >> in any project, including self-healing SIG. But I believe to specific tell >> user where you can drop that story to trigger cross-project discussions >> will be the right way instead of telling nothing and user not even know any >> valid way to deal with issues. Imaging that when you first join a >> community, there is a line tell you if you have a question about >> self-healing/k8s/upgrade/etc here is where you can raise the issue, and >> find help. >> I will imagine we need to have people from teams to be around to deal with >> issues and tell users/ops when they come. But for what I know, we actually >> got attention from most of teams that concerns about self-healing. >> I think in order to do so (if that's a good idea), we need someplace >> better than ML to tell users/ops that here is where you can go when you >> found your self-healing not working or you need any help. Also, I think >> this might actually apply to other SIGs. This sounds reasonable. We already link to the StoryBoard from the SIG portal wiki page: https://wiki.openstack.org/wiki/Self-healing_SIG#Community_Infrastructure_.2F_Resources but yes we could also proactively announce this in places which would reach more users and operators, inviting them to submit stories. Can you suggest how best to do this? We could email the openstack and openstack-operators lists, although TBH I have done this several times in the past and not gotten much engagement - probably because both lists are very high traffic. >> ** Build gate job for self-healing task* >> We have some use cases that already been demo around self-healing cases, >> like Vitrage+Mistral, Heat+Mistral+Aodh, etc. 
Also, some scenarios are >> under development. I believe there are values to generate a periodic task, >> or even a cross-project gate to make sure we didn't break the general >> self-healing use cases. If we can do so, I think users/ops will have the >> better confidence to say self-healing is absolutely working in OpenStack. >> Also, we don't need to build separate tempest plugin if we can find any >> projects willing to host those test. Not speaking for the entire team, but >> I think Heat might be able to provide something here. I love this idea, and yes the self-healing-sig git repository could absolutely be the home for this gating code. I suspect that a big part of the challenge will be to simulate failures in order to test the self-healing functionality. In fact we already have a story regarding automated testing: https://storyboard.openstack.org/#!/story/2002129 although that is much more ambitious in scope, i.e. building a complete framework which could support testing of many different self-healing scenarios. I have some documentation on the Eris project which I am planning to upload to the repository on this. However your proposal sounds less ambitious and more likely to be achievable in the short-term, so I'd love to learn more about how you think this might work (unfortunately I don't know much about Tempest internals yet). Thanks a lot for your ideas! They are great - please keep them coming ;-) Adam From amy at demarco.com Mon Aug 13 23:10:33 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 13 Aug 2018 18:10:33 -0500 Subject: [Openstack-sigs] =?utf-8?q?=28no_subject=29?= Message-ID: Hi everyone, If you’re running OpenStack, please participate in the User Survey to share more about the technology you are using and provide feedback for the community by *August 21 - hurry, it’s next week!!* By completing a deployment, you will qualify as an AUC and receive a $300 USD ticket to the two upcoming Summits. Please help us spread the word. 
We're trying to gather as much real-world deployment data as possible to share back with both the operator and developer communities. We are only conducting one survey this year, and the report will be published at the Berlin Summit. If you would like OpenStack user data in the meantime, check out the analytics dashboard, which updates in real time throughout the year. The information provided is confidential and will only be presented in aggregate unless you consent to make it public. The deadline to complete the survey and be part of the next report is next *Tuesday, August 21 at 23:59 UTC.* - You can log in and complete the OpenStack User Survey here: http://www.openstack.org/user-survey - If you’re interested in joining the OpenStack User Survey Working Group to help with the survey analysis, please complete this form: https://openstackfoundation.formstack.com/forms/user_survey_working_group - Help us promote the User Survey: https://twitter.com/OpenStack/status/993589356312088577 Please let me know if you have any questions. Thanks, Amy Amy Marrich (spotz) OpenStack User Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From martialmichel at datamachines.io Tue Aug 14 20:18:16 2018 From: martialmichel at datamachines.io (Martial Michel) Date: Tue, 14 Aug 2018 16:18:16 -0400 Subject: [Openstack-sigs] [Scientific] Scientific SIG meeting Aug 15 1100UTC Message-ID: The next Scientific SIG meeting will be an IRC meeting on August 15th 2018: 2018-08-15 1100 UTC in channel #openstack-meeting Agenda is as follows: 1. PTG Topics 2. CFP: HPC Advisory Council Spain Conference - 21st September http://hpcadvisorycouncil.com/events/2018/spain-conference/ 3. Ceph day Berlin - November 12th (the day before the summit) https://ceph.com/cephdays/ceph-day-berlin/ 4. AOB All are welcome to attend https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_August_15th_2018 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ekcs.openstack at gmail.com Tue Aug 14 20:30:30 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 14 Aug 2018 13:30:30 -0700 Subject: [Openstack-sigs] [self-healing][all] Expose SIG to user/ops In-Reply-To: References: Message-ID: Hi Rico, Great ideas! For engaging ops, we have an opportunity at the co-located ops meetup in denver. I'm not very familiar with the process there, but the planning etherpad just went live not long ago: https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018 On Wed, Jul 11, 2018 at 5:50 AM, Rico Lin wrote: > Hi all > > As we went through some discussion form Summit for self-healing sig, To > collect Use case is one of our goal in Rocky cycle. > Keep thinking how can we expose This SIG to users/ops and make this become > a regular thing. > Here's some idea that might help, also might be able to help other SIG as > well: > > ** Join user survey:* > It's possible for SIG to propose options in User survey. > If we going to do so, we should provide questions which can be answered by > selecting from options or let's said minimal written is preferred. > So what will the question be? Would like to hear from everyone for any > idea. > > ** Expose our StoryBoard to user/ops* > Another idea is to expose our StoryBoard to user/ops. OpenStack > community currently didn't have any effective way to raise issues for > self-healing. If we expose StoryBoard to user/ops to allow them to raise > issues, users can directly file the entire story, instead of just reporting > part of the issue and that usually reply with `Oh, that's XXX > project's issue, we got nothing to do with it`. > Don't get this wrong, we got nothing to block user to raise story(issues) > in any project, including self-healing SIG. But I believe to specific tell > user where you can drop that story to trigger cross-project discussions > will be the right way instead of telling nothing and user not even know any > valid way to deal with issues. 
Imaging that when you first join a > community, there is a line tell you if you have a question about > self-healing/k8s/upgrade/etc here is where you can raise the issue, and > find help. > I will imagine we need to have people from teams to be around to deal with > issues and tell users/ops when they come. But for what I know, we actually > got attention from most of teams that concerns about self-healing. > I think in order to do so (if that's a good idea), we need someplace > better than ML to tell users/ops that here is where you can go when you > found your self-healing not working or you need any help. Also, I think > this might actually apply to other SIGs. > > ** Build gate job for self-healing task* > We have some use cases that already been demo around self-healing cases, > like Vitrage+Mistral, Heat+Mistral+Aodh, etc. Also, some scenarios are > under development. I believe there are values to generate a periodic task, > or even a cross-project gate to make sure we didn't break the general > self-healing use cases. If we can do so, I think users/ops will have the > better confidence to say self-healing is absolutely working in OpenStack. > Also, we don't need to build separate tempest plugin if we can find any > projects willing to host those test. Not speaking for the entire team, but > I think Heat might be able to provide something here. > > > Those are my proposal, please help to give your opinions. Thanks all. > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amy at demarco.com Wed Aug 15 20:00:34 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 15 Aug 2018 15:00:34 -0500 Subject: [Openstack-sigs] OpenStack Diversity and Inclusion Survey Message-ID: The Diversity and Inclusion WG is asking for your assistance. We have revised the Diversity Survey that was originally distributed to the Community in the Fall of 2015 and are looking to update our view of the OpenStack community and its diversity. We are pleased to be working with members of the CHAOSS project who have signed confidentiality agreements in order to assist us in the following ways: 1) Assistance in analyzing the results 2) Feeding the results into the CHAOSS software and metrics development work so that we can help other Open Source projects. Please take the time to fill out the survey and share it with others in the community. The survey can be found at: https://www.surveymonkey.com/r/OpenStackDiversity Thank you for assisting us in this important task! Amy Marrich (spotz) Diversity and Inclusion Working Group Chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Thu Aug 16 17:21:11 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 16 Aug 2018 12:21:11 -0500 Subject: [Openstack-sigs] User Committee Nominations Closing Soon! Message-ID: <699C3850-848C-438B-9AFB-FD6A1197EF1D@leafe.com> As I write this, there are just over 12 hours left to get in your nominations for the OpenStack User Committee. Nominations close on August 17 at 05:59 UTC. If you are an AUC and thinking about running, what's stopping you? If you know of someone who would make a great committee member, nominate them (with their permission, of course)! Help make a difference for Operators, Users and the Community! 
-- Ed Leafe From aspiers at suse.com Fri Aug 17 17:00:30 2018 From: aspiers at suse.com (Adam Spiers) Date: Fri, 17 Aug 2018 18:00:30 +0100 Subject: [Openstack-sigs] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff? In-Reply-To: <20180803212003.5xosa4lqkub4kk2o@yuggoth.org> References: <20180803154157.7h33v5pxdbbcmdtx@pacific.linksys.moosehall> <20180803212003.5xosa4lqkub4kk2o@yuggoth.org> Message-ID: <20180817170030.bkgkkg22pmoch5ic@pacific.linksys.moosehall> Jeremy Stanley wrote: >On 2018-08-03 16:41:57 +0100 (+0100), Adam Spiers wrote: >[...] >> Another possibility would be to offer "open clinic" office hours, >> like the TC and other projects have done. If the TC or anyone >> else has established best practices in this space, it'd be great >> to hear them. >[...] > >First and foremost, office hours shouldn't be about constraining >when and where fruitful conversation can occur. Make sure people >with a common interest in the topic know where to find each other >for discussion at whatever times they happen to be available. When >you have that, "office hours" are simply a means of coordinating and >publicizing specific times when an increased number of participants >expect to be around. This is especially useful for having >consensus-building discussions more quickly than can be done >asynchronously through people responding to comments they see in >scrollback/logs or on mailing list threads. > >Some options I've seen toyed with: [snipped lots of helpful suggestions] >Hope this helps! Indeed it does! I'll bring these ideas up in Denver and we'll go from there. 
Thanks a lot Jeremy :-) From aspiers at suse.com Fri Aug 17 18:25:13 2018 From: aspiers at suse.com (Adam Spiers) Date: Fri, 17 Aug 2018 19:25:13 +0100 Subject: [Openstack-sigs] [self-healing] [docs] ANNOUNCE: documentation now being auto-published Message-ID: <20180817182513.z4ayvnr53xmxc6h7@pacific.linksys.moosehall> Hi all, I'm happy to announce that the documentation in the self-healing-sig git repository is now being automatically published by zuul here: https://docs.openstack.org/self-healing-sig/latest/ I have also linked to it from the wiki page: https://wiki.openstack.org/wiki/Self-healing_SIG#Community_Infrastructure_.2F_Resources Thanks a lot to Ifat for submitting the first use cases :-) This announcement almost completes this story: https://storyboard.openstack.org/#!/story/2001628 The only remaining task is to find a suitable location under docs.openstack.org from which to link to it. It doesn't fit neatly into any of the existing sections on the main portal pages, so suggestions about where to add the link are very welcome. Perhaps we should add a link from the Deployment Guides section to self-healing use cases which are already possible, and a link from the Contributor Guides section to the self-healing specs? The documentation is currently a little sparse, but I'm planning to add a few use cases, and there are templates for self-healing use cases and specs, so hopefully this will encourage others to submit new content over the coming months. I also hope to add some information regarding existing work on automated testing[0]. Enjoy, and please feel free to submit use cases and specs of your own! 
Adam [0] https://storyboard.openstack.org/#!/story/2002129 From lhinds at redhat.com Mon Aug 20 08:29:05 2018 From: lhinds at redhat.com (Luke Hinds) Date: Mon, 20 Aug 2018 09:29:05 +0100 Subject: [Openstack-sigs] [security][anchor] Retire Anchor Project Message-ID: A project under the former security project umbrella 'anchor' is no longer actively developed or maintained. The last patch (not including general infra patches sent to multiple projects) was made on Mar 11, 2016, and that patch was then abandoned. All of the cores, to the best of my knowledge, are no longer active in the community. Unless any objections are made in the next 7 days (27th Aug), I will proceed to follow the steps outlined in [1]. [1] https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project Regards, Luke Hinds -------------- next part -------------- An HTML attachment was scrubbed... URL: From msm at redhat.com Mon Aug 20 19:02:36 2018 From: msm at redhat.com (Michael McCune) Date: Mon, 20 Aug 2018 15:02:36 -0400 Subject: [Openstack-sigs] [security][anchor] Retire Anchor Project In-Reply-To: References: Message-ID: /me plays slow dirge farewell anchor, you were a fun skunkworks project and taught me much about certificates /me salutes peace o/ On Mon, Aug 20, 2018 at 4:29 AM Luke Hinds wrote: > > A project under the former security project umbrella 'anchor' is no longer actively developed or maintained. > > The last patch made (not including general infra patches sent to multiple projects), was Mar 11, 2016 - this particular patch was then abandoned.
> > All of the cores to the best of my knowledge, are no longer active in the community > > Unless any objections are made in the next 7 days (27th Aug), I will proceed to follow the steps outlined [1] > > [1] https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project > > Regards, > > Luke Hinds > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs From stig.openstack at telfer.org Tue Aug 21 19:18:41 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Tue, 21 Aug 2018 20:18:41 +0100 Subject: [Openstack-sigs] [scientific] IRC meeting 2100UTC - PTG/OPS, Ceph RBD, BeeGFS Message-ID: Hi All - We have an IRC meeting later today in #openstack-meeting at 2100 UTC (about 2 hours time). Everyone is welcome. Today we have a few topics around the PTG, plus a new Ironic story around bare metal and native Ceph RBD, plus I’m hoping to gather interest on Ansible Galaxy modules for BeeGFS clusters for high performance storage on the fly. Full agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_August_21st_2018 Cheers, Stig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Tue Aug 21 19:44:04 2018 From: ed at leafe.com (Ed Leafe) Date: Tue, 21 Aug 2018 14:44:04 -0500 Subject: [Openstack-sigs] UC Elections will not be held Message-ID: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> As there were only 2 nominations for the 2 open seats, elections will not be needed. Congratulations to Matt Van Winkle and Joseph Sandoval! 
-- Ed Leafe From amy at demarco.com Tue Aug 21 20:26:44 2018 From: amy at demarco.com (Amy Marrich) Date: Tue, 21 Aug 2018 15:26:44 -0500 Subject: [Openstack-sigs] UC Elections will not be held In-Reply-To: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> References: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> Message-ID: Congrats to VW and Joseph. Thank you to Saverio for his hard work. And lastly thank you to Ed, Chandan, and Mohamed for serving as our election officials! Amy (spotz) User Committee On Tue, Aug 21, 2018 at 2:44 PM, Ed Leafe wrote: > As there were only 2 nominations for the 2 open seats, elections will not > be needed. Congratulations to Matt Van Winkle and Joseph Sandoval! > > -- Ed Leafe > > > > > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From edgar.magana at workday.com Tue Aug 21 19:57:38 2018 From: edgar.magana at workday.com (Edgar Magana) Date: Tue, 21 Aug 2018 19:57:38 +0000 Subject: [Openstack-sigs] [User-committee] UC Elections will not be held In-Reply-To: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> References: <49D533BF-F818-4642-AD23-F93E1F6E8F05@leafe.com> Message-ID: Congratulations Matt and Joseph! Our community is in good hands with your leadership, looking forward to seeing you in Berlin. Do not hesitate to ask for help at any time. Edgar On 8/21/18, 12:45 PM, "Ed Leafe" wrote: As there were only 2 nominations for the 2 open seats, elections will not be needed. Congratulations to Matt Van Winkle and Joseph Sandoval! 
-- Ed Leafe _______________________________________________ User-committee mailing list User-committee at lists.openstack.org https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_user-2Dcommittee&d=DwIGaQ&c=DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=zJVnmWwuk3H0ySNWzMvn_WFZHaXuHfYFrGXivVpZ4I8&s=b5cPci7YTmu4pkYg7k429mism5WKSUOkJpnub4U_Fp8&e= From msm at redhat.com Thu Aug 23 17:02:29 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 23 Aug 2018 13:02:29 -0400 Subject: [Openstack-sigs] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, This week's meeting brings the return of the full SIG core-quartet as all core members were in attendance. The main topics were the agenda [7] for the upcoming Denver PTG [8], and the API-SIG still being listed as a TC working group in the governance repository reference files. We also pushed a minor technical change related to the reorganization of the project-config for the upcoming Python 3 transition [9]. On the topic of the PTG, there were no new items added or comments about the current list [7]. There was brief talk about who will be attending the gathering, but the details have not been finalized yet. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community.
* None # Guidelines Currently Under Review [3] * Add an api-design doc with design advice https://review.openstack.org/592003 * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
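As a side note on the version and service discovery documents under review above: a client consuming a version discovery document usually just wants the newest version it can actually use. Here is a minimal sketch of that selection logic; the document shape below mirrors the common `{"versions": [...]}` pattern many OpenStack services return, but it is an illustrative assumption here, not the text of the guideline itself:

```python
# Hedged sketch: picking an API version from a service's root document.
# The {"versions": [{"id": ..., "status": ...}]} shape is assumed for
# illustration; consult the discovery guideline for the normative form.

def pick_version(doc, accept_statuses=("CURRENT", "SUPPORTED")):
    """Return the highest-numbered version whose status is acceptable."""
    candidates = [v for v in doc.get("versions", [])
                  if v.get("status") in accept_statuses]
    if not candidates:
        return None

    # Compare numeric parts of ids like "v2.1" so "v2.10" beats "v2.9".
    def key(v):
        return tuple(int(p) for p in v["id"].lstrip("v").split("."))

    return max(candidates, key=key)

sample = {
    "versions": [
        {"id": "v2.0", "status": "SUPPORTED"},
        {"id": "v2.1", "status": "CURRENT"},
        {"id": "v3.0", "status": "EXPERIMENTAL"},
    ]
}
print(pick_version(sample)["id"])  # → v2.1
```

The numeric sort is the important detail: naive string comparison would rank "v2.9" above "v2.10".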
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://etherpad.openstack.org/p/api-sig-stein-ptg [8] https://www.openstack.org/ptg/ [9] https://review.openstack.org/#/c/593943/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From kennelson11 at gmail.com Fri Aug 24 00:58:27 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 23 Aug 2018 17:58:27 -0700 Subject: [Openstack-sigs] [First Contact] Updates- Meeting Info & PTG Message-ID: Hello Everyone, Two things that I wanted to make you all aware of if you aren't already: 1. We have made a change to the meeting schedule. It will now be biweekly on odd weeks and an hour earlier, at 7:00 UTC on Wednesdays. We noticed that we were struggling to fill a meeting every week, and since the old slot was just past a somewhat reasonable time for the meeting chair, it's now been shifted[1] :) 2. We will be meeting at the PTG. I know a few of you won't be able to make it, but if you are planning on being there for another project already, please add your name and preferred days (Monday or Tuesday) to the etherpad[2]. Once we have a better idea of the preferred date and the PTG bot is ready, I will book a room. If there are any topics in particular you hope to discuss as well, please add those! -Kendall Nelson (diablo_rojo) [1] https://review.openstack.org/#/c/595377/ [2] https://etherpad.openstack.org/p/FC_SIG_ptg_stein -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stig.openstack at telfer.org Wed Aug 29 09:40:12 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 29 Aug 2018 10:40:12 +0100 Subject: [Openstack-sigs] [scientific] IRC meeting today - science/cloud CFPs, PTG, meetups etc Message-ID: <4681E5E8-728C-4BEF-A55D-F6263D47DC9F@telfer.org> Hi All - We have an IRC meeting at 1100UTC (just over an hour’s time) in channel #openstack-meeting. For today’s agenda we have a number of CFPs to share, plus recent developments for PTG planning. There is also a meetup next week in Manchester, UK, with a presentation on a new project offering research computing in a Scientific OpenStack environment. The agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_August_29th_2018 Everyone is welcome. Stig -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at canonical.com Thu Aug 30 10:44:00 2018 From: james.page at canonical.com (James Page) Date: Thu, 30 Aug 2018 11:44:00 +0100 Subject: [Openstack-sigs] [openstack-dev] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff? In-Reply-To: <20180803154157.7h33v5pxdbbcmdtx@pacific.linksys.moosehall> References: <20180803154157.7h33v5pxdbbcmdtx@pacific.linksys.moosehall> Message-ID: Hi Adam On Fri, 3 Aug 2018 at 16:42 Adam Spiers wrote: > [] > TL;DR response: reboot, absolutely no question! My full response is > below. > Ack > >Since Vancouver, two of the original SIG chairs have stepped down leaving > >me in the hot seat with minimal participation from either deployment > >projects or operators in the IRC meetings. In addition I've only been > able > >to make every 3rd IRC meeting, so they have generally not being happening. > [] > As a SIG leader in a similar position (albeit with one other very > helpful person on board), let me throw my £0.02 in ... 
> > With both upgrades and self-healing I think there is a big disparity > between supply (developers with time to work on the functionality) and > demand (operators who need the functionality). And perhaps also the > high demand leads to a lot of developers being interested in the topic > whilst not having much spare time to help out. That is probably why > we both see high attendance at the summit / PTG events but relatively > little activity in between. > > I also freely admit that the inevitable conflicts with downstream > requirements mean that I have struggled to find time to be as > proactive with driving momentum as I had wanted, although I'm hoping > to pick this up again over the next weeks leading up to the PTG. It > sounds like maybe you have encountered similar challenges. > Indeed I have, but such is life! > That said, I strongly believe that both of these SIGs offer a *lot* of > value, and even if we aren't yet seeing the level of online activity > that we would like, I think it's really important that they both > continue. If for no other reasons, the offline sessions at the > summits and PTGs are hugely useful for helping converge the community > on common approaches, and the associated repositories / wikis serve as > a great focal point too. > > Regarding online collaboration, yes, building momentum for IRC > meetings is tough, especially with the timezone challenges. Maybe a > monthly cadence is a reasonable starting point, or twice a month in > alternating timezones - but maybe with both meetings within ~24 hours > of each other, to reduce accidental creation of geographic silos. > I like that idea - doing the two meetings on the same day would make a lot of sense and would fragment time less for participants. Another possibility would be to offer "open clinic" office hours, like > the TC and other projects have done. If the TC or anyone else has > established best practices in this space, it'd be great to hear them.
> Either way, I sincerely hope that you decide to continue with the SIG, > and that other people step up to help out. These things don't develop > overnight but it is a tremendously worthwhile initiative; after all, > everyone needs to upgrade OpenStack. Keep the faith! ;-) > I think the upcoming PTG is a good opportunity to hit reboot, plan some suitable IRC meeting slots and for other contributors to step up. I'll put together an agenda for the 1/2 day we have planned for the Upgrade SIG on the Monday afternoon (Ballroom C) - will appear here: https://etherpad.openstack.org/p/upgrade-sig-ptg-stein All - If you intend on attending the Upgrades sessions, please put your details down on the pad - I'll be chasing for representatives from other deployment projects next week! Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Thu Aug 30 16:27:49 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 31 Aug 2018 00:27:49 +0800 Subject: [Openstack-sigs] [self-healing][all] Expose SIG to user/ops In-Reply-To: References: Message-ID: > > On Wed, Aug 15, 2018 at 4:30 AM Eric K wrote: > > > > Hi Rico, > > Great ideas! > > > > For engaging ops, we have an opportunity at the co-located ops meetup in > denver. I'm not very familiar with the process there, but the planning > etherpad just went live not long ago: > > > > https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018 > Thanks Eric, I already put this topic to https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018 hope we can get some discussion on this > On Wed, Jul 11, 2018 at 5:50 AM, Rico Lin > wrote: > >> Hi all >> >> As we went through some discussion form Summit for self-healing sig, To >> collect Use case is one of our goal in Rocky cycle. >> Keep thinking how can we expose This SIG to users/ops and make this >> become a regular thing. 
>> Here's some idea that might help, also might be able to help other SIG as >> well: >> > >> ** Join user survey:* >> It's possible for SIG to propose options in User survey. >> If we going to do so, we should provide questions which can be answered >> by selecting from options or let's said minimal written is preferred. >> So what will the question be? Would like to hear from everyone for any >> idea. >> >> ** Expose our StoryBoard to user/ops* >> Another idea is to expose our StoryBoard to user/ops. OpenStack >> community currently didn't have any effective way to raise issues for >> self-healing. If we expose StoryBoard to user/ops to allow them to raise >> issues, users can directly file the entire story, instead of just reporting >> part of the issue and that usually reply with `Oh, that's XXX >> project's issue, we got nothing to do with it`. >> Don't get this wrong, we got nothing to block user to raise story(issues) >> in any project, including self-healing SIG. But I believe to specific tell >> user where you can drop that story to trigger cross-project discussions >> will be the right way instead of telling nothing and user not even know any >> valid way to deal with issues. Imaging that when you first join a >> community, there is a line tell you if you have a question about >> self-healing/k8s/upgrade/etc here is where you can raise the issue, and >> find help. >> I will imagine we need to have people from teams to be around to deal >> with issues and tell users/ops when they come. But for what I know, we >> actually got attention from most of teams that concerns about self-healing. >> I think in order to do so (if that's a good idea), we need someplace >> better than ML to tell users/ops that here is where you can go when you >> found your self-healing not working or you need any help. Also, I think >> this might actually apply to other SIGs. 
>> >> ** Build gate job for self-healing task* >> We have some use cases that already been demo around self-healing cases, >> like Vitrage+Mistral, Heat+Mistral+Aodh, etc. Also, some scenarios are >> under development. I believe there are values to generate a periodic task, >> or even a cross-project gate to make sure we didn't break the general >> self-healing use cases. If we can do so, I think users/ops will have the >> better confidence to say self-healing is absolutely working in OpenStack. >> Also, we don't need to build separate tempest plugin if we can find any >> projects willing to host those test. Not speaking for the entire team, but >> I think Heat might be able to provide something here. >> >> >> Those are my proposal, please help to give your opinions. Thanks all. >> >> -- >> May The Force of OpenStack Be With You, >> >> *Rico Lin*irc: ricolin >> >> >> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >> >> > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Thu Aug 30 16:57:25 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 31 Aug 2018 00:57:25 +0800 Subject: [Openstack-sigs] [self-healing][all] Expose SIG to user/ops In-Reply-To: <20180813165615.m7yltwe5zqkbjdx6@pacific.linksys.moosehall> References: <20180813165615.m7yltwe5zqkbjdx6@pacific.linksys.moosehall> Message-ID: On Tue, Aug 14, 2018 at 12:56 AM Adam Spiers wrote: > > This sounds like a great idea to me! 
I've submitted a story for this: > > https://storyboard.openstack.org/#!/story/2003423 > > and set up an etherpad for brainstorming: > > https://etherpad.openstack.org/p/self-healing-user-survey-questions > I think those two questions are great, and I just put some extra ideas in there. Also, I have proposed your etherpad under the UC discussions for the PTG: https://etherpad.openstack.org/p/uc-stein-ptg Unfortunately, that is when the UC will discuss the next survey, so it appears we have to settle on our questions before our meeting at the PTG. For me, the current version looks nice. If anyone would like to suggest more, feel free to do so. > > > This sounds reasonable. We already link to the StoryBoard from the > SIG portal wiki page: > > https://wiki.openstack.org/wiki/Self-healing_SIG#Community_Infrastructure_.2F_Resources > > but yes we could also proactively announce this in places which would > reach more users and operators, inviting them to submit stories. Can > you suggest how best to do this? We could email the openstack and > openstack-operators lists, although TBH I have done this several times > in the past and not gotten much engagement - probably because both > lists are very high traffic. > I guess we should keep trying on the ML. There are two more places I think we can try. As we might be able to have our own page in the user survey, I think we would be allowed to add a message pointing to a place (our git repo, or an etherpad) for users who would like to add more information during that survey. The second place is Superuser magazine. We could come up with an article (which we can plan at the PTG if we'd like to have one) to introduce our plans, work, and needs; I think Superuser would be willing to help us post that article. Once we have an article, we can use the official OpenStack social media (FB, Twitter, etc.) to broadcast it. > > > I love this idea, and yes the self-healing-sig git repository could > absolutely be the home for this gating code.
I suspect that a big > part of the challenge will be to simulate failures in order to test > the self-healing functionality. In fact we already have a story > regarding automated testing: > > https://storyboard.openstack.org/#!/story/2002129 > > although that is much more ambitious in scope, i.e. building a > complete framework which could support testing of many different > self-healing scenarios. I have some documentation on the Eris project > which I am planning to upload to the repository on this. > > However your proposal sounds less ambitious and more likely to be > achievable in the short-term, so I'd love to learn more about how you > think this might work (unfortunately I don't know much about Tempest > internals yet). Since the git repo was just updated to a more official project format, we can add a zuul.yaml to define our periodic job. I think we already have a lot of places to add tempest tests (we might be able to use heat-tempest-plugin too), so we shouldn't need to build our own tempest repo. > > Thanks a lot for your ideas! They are great - please keep them coming ;-) > > Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Aug 30 17:03:50 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 17:03:50 +0000 Subject: [Openstack-sigs] [all] Bringing the community together (combine the lists!) Message-ID: <20180830170350.wrz4wlanb276kncb@yuggoth.org> The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists on lists.openstack.org see an increasing amount of cross-posting and thread fragmentation as conversants attempt to reach various corners of our community with topics of interest to one or more (and sometimes all) of those overlapping groups of subscribers. For some time we've been discussing and trying ways to bring our developers, distributors, operators and end users together into a less isolated, more cohesive community.
An option which keeps coming up is to combine these different but overlapping mailing lists into one single discussion list. As we covered[1] in Vancouver at the last Forum there are a lot of potential up-sides: 1. People with questions are no longer asking them in a different place than many of the people who have the answers to those questions (the "not for usage questions" in the openstack-dev ML title only serves to drive the wedge between developers and users deeper). 2. The openstack-sigs mailing list hasn't seen much uptake (an order of magnitude fewer subscribers and posts) compared to the other three lists, yet it was intended to bridge the communication gap between them; combining those lists would have been a better solution to the problem than adding yet another turned out to be. 3. At least one out of every ten messages to any of these lists is cross-posted to one or more of the others, because we have topics that span across these divided groups yet nobody is quite sure which one is the best venue for them; combining would eliminate the fragmented/duplicative/divergent discussion which results from participants following up on the different subsets of lists to which they're subscribed. 4. Half of the people who are actively posting to at least one of the four lists subscribe to two or more, and a quarter to three if not all four; they would no longer be receiving multiple copies of the various cross-posts if these lists were combined. The proposal is simple: create a new openstack-discuss mailing list to cover all the above sorts of discussion and stop using the other four. As the OpenStack ecosystem continues to mature and its software and services stabilize, the nature of our discourse is changing (becoming increasingly focused with fewer heated debates, distilling to a more manageable volume), so this option is looking much more attractive than in the past.
That's not to say it's quiet (we're looking at roughly 40 messages a day across them on average, after deduplicating the cross-posts), but we've grown accustomed to tagging the subjects of these messages to make it easier for other participants to quickly filter topics which are relevant to them and so would want a good set of guidelines on how to do so for the combined list (a suggested set is already being brainstormed[2]). None of this is set in stone of course, and I expect a lot of continued discussion across these lists (oh, the irony) while we try to settle on a plan, so definitely please follow up with your questions, concerns, ideas, et cetera. As an aside, some of you have probably also seen me talking about experiments I've been doing with Mailman 3... I'm hoping new features in its Hyperkitty and Postorius WebUIs make some of this easier or more accessible to casual participants (particularly in light of the combined list scenario), but none of the plan above hinges on MM3 and should be entirely doable with the MM2 version we're currently using. Also, in case you were wondering, no the irony of cross-posting this message to four mailing lists is not lost on me. ;) [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community [2] https://etherpad.openstack.org/p/common-openstack-ml-topics -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rico.lin.guanyu at gmail.com Thu Aug 30 17:13:58 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 31 Aug 2018 01:13:58 +0800 Subject: [Openstack-sigs] [all] Bringing the community together (combine the lists!) 
In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: +1 on this idea. People have been posting the exact same topics around and getting feedback from ops or devs, but never together; this will help people hold the discussion at the same table. What needs to be done for this is full topic-category support under the `options` page, so people can filter emails properly. On Fri, Aug 31, 2018 at 1:04 AM Jeremy Stanley wrote: > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. For some time we've been discussing and trying ways to > bring our developers, distributors, operators and end users together > into a less isolated, more cohesive community. An option which keeps > coming up is to combine these different but overlapping mailing > lists into one single discussion list. As we covered[1] in Vancouver > at the last Forum there are a lot of potential up-sides: > > 1. People with questions are no longer asking them in a different > place than many of the people who have the answers to those > questions (the "not for usage questions" in the openstack-dev ML > title only serves to drive the wedge between developers and users > deeper). > > 2. The openstack-sigs mailing list hasn't seem much uptake (an order > of magnitude fewer subscribers and posts) compared to the other > three lists, yet it was intended to bridge the communication gap > between them; combining those lists would have been a better > solution to the problem than adding yet another turned out to be. > > 3.
At least one out of every ten messages to any of these lists is > cross-posted to one or more of the others, because we have topics > that span across these divided groups yet nobody is quite sure which > one is the best venue for them; combining would eliminate the > fragmented/duplicative/divergent discussion which results from > participants following up on the different subsets of lists to which > they're subscribed, > > 4. Half of the people who are actively posting to at least one of > the four lists subscribe to two or more, and a quarter to three if > not all four; they would no longer be receiving multiple copies of > the various cross-posts if these lists were combined. > > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. As the OpenStack ecosystem continues to mature and its > software and services stabilize, the nature of our discourse is > changing (becoming increasingly focused with fewer heated debates, > distilling to a more manageable volume), so this option is looking > much more attractive than in the past. That's not to say it's quiet > (we're looking at roughly 40 messages a day across them on average, > after deduplicating the cross-posts), but we've grown accustomed to > tagging the subjects of these messages to make it easier for other > participants to quickly filter topics which are relevant to them and > so would want a good set of guidelines on how to do so for the > combined list (a suggested set is already being brainstormed[2]). > None of this is set in stone of course, and I expect a lot of > continued discussion across these lists (oh, the irony) while we try > to settle on a plan, so definitely please follow up with your > questions, concerns, ideas, et cetera. > > As an aside, some of you have probably also seen me talking about > experiments I've been doing with Mailman 3... 
I'm hoping new > features in its Hyperkitty and Postorius WebUIs make some of this > easier or more accessible to casual participants (particularly in > light of the combined list scenario), but none of the plan above > hinges on MM3 and should be entirely doable with the MM2 version > we're currently using. > > Also, in case you were wondering, no the irony of cross-posting this > message to four mailing lists is not lost on me. ;) > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics > -- > Jeremy Stanley > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Aug 30 17:17:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 13:17:14 -0400 Subject: [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <1535649366-sup-1027@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-08-30 17:03:50 +0000: > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. For some time we've been discussing and trying ways to > bring our developers, distributors, operators and end users together > into a less isolated, more cohesive community. 
An option which keeps > coming up is to combine these different but overlapping mailing > lists into one single discussion list. As we covered[1] in Vancouver > at the last Forum there are a lot of potential up-sides: > > 1. People with questions are no longer asking them in a different > place than many of the people who have the answers to those > questions (the "not for usage questions" in the openstack-dev ML > title only serves to drive the wedge between developers and users > deeper). > > 2. The openstack-sigs mailing list hasn't seen much uptake (an order > of magnitude fewer subscribers and posts) compared to the other > three lists, yet it was intended to bridge the communication gap > between them; combining those lists would have been a better > solution to the problem than adding yet another turned out to be. > > 3. At least one out of every ten messages to any of these lists is > cross-posted to one or more of the others, because we have topics > that span across these divided groups yet nobody is quite sure which > one is the best venue for them; combining would eliminate the > fragmented/duplicative/divergent discussion which results from > participants following up on the different subsets of lists to which > they're subscribed. > > 4. Half of the people who are actively posting to at least one of > the four lists subscribe to two or more, and a quarter to three if > not all four; they would no longer be receiving multiple copies of > the various cross-posts if these lists were combined. > > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. As the OpenStack ecosystem continues to mature and its > software and services stabilize, the nature of our discourse is > changing (becoming increasingly focused with fewer heated debates, > distilling to a more manageable volume), so this option is looking > much more attractive than in the past.
That's not to say it's quiet > (we're looking at roughly 40 messages a day across them on average, > after deduplicating the cross-posts), but we've grown accustomed to > tagging the subjects of these messages to make it easier for other > participants to quickly filter topics which are relevant to them and > so would want a good set of guidelines on how to do so for the > combined list (a suggested set is already being brainstormed[2]). > None of this is set in stone of course, and I expect a lot of > continued discussion across these lists (oh, the irony) while we try > to settle on a plan, so definitely please follow up with your > questions, concerns, ideas, et cetera. > > As an aside, some of you have probably also seen me talking about > experiments I've been doing with Mailman 3... I'm hoping new > features in its Hyperkitty and Postorius WebUIs make some of this > easier or more accessible to casual participants (particularly in > light of the combined list scenario), but none of the plan above > hinges on MM3 and should be entirely doable with the MM2 version > we're currently using. > > Also, in case you were wondering, no, the irony of cross-posting this > message to four mailing lists is not lost on me. ;) > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics I fully support the idea of merging the lists. Doug From chris at openstack.org Thu Aug 30 17:19:50 2018 From: chris at openstack.org (Chris Hoge) Date: Thu, 30 Aug 2018 10:19:50 -0700 Subject: [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: I propose that we also merge the interop-wg mailing list, as the volume on that list is small but topics posted to it are of general interest to the community.
Chris Hoge (Interop WG Secretary, amongst other things) > On Aug 30, 2018, at 10:03 AM, Jeremy Stanley wrote: > > [...] From fungi at yuggoth.org Thu Aug 30 21:12:57 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 21:12:57 +0000 Subject: [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <20180830211257.oa6hxd4pningzqf4@yuggoth.org> On 2018-08-31 01:13:58 +0800 (+0800), Rico Lin wrote: [...] > What needs to be done for this is full topic categories support > under `options` page so people get to filter emails properly. [...] Unfortunately, topic filtering is one of the MM2 features the Mailman community decided nobody used (or at least not enough to warrant preserving it in MM3). I do think we need to be consistent about tagging subjects to make client-side filtering more effective for people who want that, but if we _do_ want to be able to upgrade we shouldn't continue to rely on server-side filtering support in Mailman unless we can somehow work with them to help in reimplementing it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Aug 30 21:25:37 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 21:25:37 +0000 Subject: [Openstack-sigs] [openstack-dev] [all] Bringing the community together (combine the lists!)
In-Reply-To: <5B883E1B.2070101@windriver.com> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> Message-ID: <20180830212536.yzirmxzxiqhciyby@yuggoth.org> On 2018-08-30 12:57:31 -0600 (-0600), Chris Friesen wrote: [...] > Do we want to merge usage and development onto one list? That > could be a busy list for someone who's just asking a simple usage > question. A counterargument though... projecting the number of unique posts to all four lists combined for this year (both based on trending for the past several years and also simply scaling the count of messages this year so far based on how many days are left) comes out roughly equal to the number of posts which were made to the general openstack mailing list in 2012. > Alternately, if we are going to merge everything then why not just > use the "openstack" mailing list since it already exists and there > are references to it on the web. This was an option we discussed in the "One Community" forum session as well. There seemed to be a slight preference for making a new -discuss list and retiring the old general one. I see either as a potential solution here. > (Or do you want to force people to move to something new to make them > recognize that something has changed?) That was one of the arguments made. Also I believe we have a *lot* of "black hole" subscribers who aren't actually following that list but whose addresses aren't bouncing new posts we send them for any of a number of possible reasons. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Aug 30 21:33:41 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 21:33:41 +0000 Subject: [Openstack-sigs] [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!)
In-Reply-To: <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> Message-ID: <20180830213341.yuxyen2elx2c3is4@yuggoth.org> On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote: [...] > I really don't want this. I'm happy with things being sorted in > multiple lists, even though I'm subscribed to multiples. I understand where you're coming from, and I used to feel similarly. I was accustomed to communities where developers had one mailing list, users had another, and whenever a user asked a question on the developer mailing list they were told to go away and bother the user mailing list instead (not even a good, old-fashioned "RTFM" for their trouble). You're probably intimately familiar with at least one of these communities. ;) As the years went by, it's become apparent to me that this is actually an antisocial behavior pattern, and actively harmful to the user base. I believe OpenStack actually wants users to see the development work which is underway, come to understand it, and become part of that process. Requiring them to have their conversations elsewhere sends the opposite message. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mrhillsman at gmail.com Thu Aug 30 23:08:56 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 30 Aug 2018 18:08:56 -0500 Subject: [Openstack-sigs] [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) 
In-Reply-To: <5B88656D.1020209@openstack.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> <5B88656D.1020209@openstack.org> Message-ID: I think the more we can reduce the ML sprawl the better. I also recall us discussing having some documentation or way of notifying net new signups of how to interact with the ML successfully. An example was having some general guidelines around tagging. Also, as a maintainer for at least one of the mailing lists over the past 6+ months, I have to inquire about how that will happen going forward, which again could be part of this documentation/initial message. Also there are many times I miss messages that for one reason or another do not hit the proper mailing list. I mean we could dive into the minutia or start up the mountain of why keeping things the way they are is worse than making this change and vice versa, but I am willing to bet there are more advantages than disadvantages. On Thu, Aug 30, 2018 at 4:45 PM Jimmy McArthur wrote: > > > Jeremy Stanley wrote: > > On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote: > [...] > > I really don't want this. I'm happy with things being sorted in > multiple lists, even though I'm subscribed to multiples. > > IMO this is easily solved by tagging. If emails are properly tagged > (which they typically are), most email clients will properly sort on rules > and you can just auto-delete if you're 100% not interested in a particular > topic. > Yes, there are definitely ways to go about discarding unwanted mail automagically or not seeing it at all. And to be honest I think if we are relying on so many separate MLs to do that for us it is better community wide for the responsibility for that to be on individuals.
It becomes very tiring and inefficient time wise to have to go through the various issues of the way things are now; cross-posting is a great example that is steadily getting worse. > SNIP > > As the years went by, it's become apparent to me that this is > actually an antisocial behavior pattern, and actively harmful to the > user base. I believe OpenStack actually wants users to see the > development work which is underway, come to understand it, and > become part of that process. Requiring them to have their > conversations elsewhere sends the opposite message. > > I really and truly believe that it has become a blocker for our > community. Conversations sent to multiple lists inherently splinter and we > end up with different groups coming up with different solutions for a > single problem. Literally the opposite desired result of sending things to > multiple lists. I believe bringing these groups together, with tags, will > solve a lot of immediate problems. It will also have an added bonus of > allowing people "catching up" on the community to look to a single place > for a thread i/o 1-5 separate lists. It's better in both the short and > long term. > +1 > > Cheers, > Jimmy > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tony at bakeyournoodle.com Fri Aug 31 00:03:35 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 31 Aug 2018 10:03:35 +1000 Subject: [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830211257.oa6hxd4pningzqf4@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180830211257.oa6hxd4pningzqf4@yuggoth.org> Message-ID: <20180831000334.GR26778@thor.bakeyournoodle.com> On Thu, Aug 30, 2018 at 09:12:57PM +0000, Jeremy Stanley wrote: > On 2018-08-31 01:13:58 +0800 (+0800), Rico Lin wrote: > [...] > > What needs to be done for this is full topic categories support > > under `options` page so people get to filter emails properly. > [...] > > Unfortunately, topic filtering is one of the MM2 features the > Mailman community decided nobody used (or at least not enough to > warrant preserving it in MM3). I do think we need to be consistent > about tagging subjects to make client-side filtering more effective > for people who want that, but if we _do_ want to be able to upgrade > we shouldn't continue to rely on server-side filtering support in > Mailman unless we can somehow work with them to help in > reimplementing it. The suggestion is to implement it as a 3rd party plugin or work with the mm community to implement: https://wiki.mailman.psf.io/DEV/Dynamic%20Sublists So if we decide we really want that in mm3 we have options. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From fungi at yuggoth.org Fri Aug 31 00:21:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 31 Aug 2018 00:21:22 +0000 Subject: [Openstack-sigs] [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) 
In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> <5B88656D.1020209@openstack.org> Message-ID: <20180831002121.ch76mvqeskplqew2@yuggoth.org> On 2018-08-30 18:08:56 -0500 (-0500), Melvin Hillsman wrote: [...] > I also recall us discussing having some documentation or way of > notifying net new signups of how to interact with the ML > successfully. An example was having some general guidelines around > tagging. Also as a maintainer for at least one of the mailing > lists over the past 6+ months I have to inquire about how that > will happen going forward which again could be part of this > documentation/initial message. [...] Mailman supports customizable welcome messages for new subscribers, so the *technical* implementation there is easy. I do think (and failed to highlight it explicitly earlier I'm afraid) that this proposal comes with an expectation that we provide recommended guidelines for mailing list use/etiquette appropriate to our community. It could be contained entirely within the welcome message, or merely linked to a published document (and whether that's best suited for the Infra Manual or New Contributor Guide or somewhere else entirely is certainly up for debate), or even potentially both. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Aug 31 16:17:26 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 31 Aug 2018 16:17:26 +0000 Subject: [Openstack-sigs] Mailman topic filtering (was: Bringing the community together...) 
In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180830211257.oa6hxd4pningzqf4@yuggoth.org> <20180831000334.GR26778@thor.bakeyournoodle.com> Message-ID: <20180831161726.wtjbzr6yvz2wgghv@yuggoth.org> On 2018-08-31 09:35:55 +0100 (+0100), Stephen Finucane wrote: [...] > I've tinkered with mailman 3 before so I could probably take a shot at > this over the next few week(end)s; however, I've no idea how this > feature is supposed to work. Any chance an admin of the current list > could send me a couple of screenshots of the feature in mailman 2 along > with a brief description of the feature? Alternatively, maybe we could > upload them to the wiki page Tony linked above or, better yet, to the > technical details page for same: > > https://wiki.mailman.psf.io/DEV/Brief%20Technical%20Details Looks like this should be https://wiki.list.org/DEV/Brief%20Technical%20Details instead, however reading through it doesn't really sound like the topic filtering feature from MM2. The List Member Manual has a very brief description of the feature from the subscriber standpoint: http://www.list.org/mailman-member/node29.html The List Administration Manual unfortunately doesn't have any content for the feature, just a stubbed-out section heading: http://www.list.org/mailman-admin/node30.html Sending screenshots to the ML is a bit tough, but luckily MIT's listadmins have posted some so we don't need to: http://web.mit.edu/lists/mailman/topics.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Aug 31 16:45:24 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 31 Aug 2018 16:45:24 +0000 Subject: [Openstack-sigs] [all] Bringing the community together (combine the lists!)
In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> Message-ID: <20180831164524.mlksltzbzey6tdyo@yuggoth.org> On 2018-08-31 14:02:23 +0200 (+0200), Thomas Goirand wrote: [...] > I'm coming from the time when OpenStack had a list on launchpad > where everything was mixed. We did the split because it was really > annoying to have everything mixed. [...] These days (just running stats for this calendar year) we've been averaging 4 messages a day on the general openstack at lists.o.o ML, so if it's volume you're worried about most of it would be the current -operators and -dev ML discussions anyway (many of which are general questions from users already, because as you also pointed out we don't usually tell them to take their questions elsewhere any more). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:
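[Editor's note: the per-day message counts cited in this thread are described as "after deduplicating the cross-posts". The sketch below illustrates that methodology only; it is not the script the participants used, and the archive file names are hypothetical placeholders. It assumes mbox-format archives such as those downloadable from lists.openstack.org.]

```python
# Sketch: average unique messages per day across several mailing-list
# archives, deduplicating cross-posts by Message-ID (a cross-posted
# mail appears in every list's archive with the same Message-ID).
import mailbox
from email.utils import parsedate_to_datetime

def daily_average(mbox_paths):
    """Count each Message-ID once across all archives, then divide by
    the number of distinct calendar days seen in the deduplicated set."""
    seen = set()   # Message-IDs already counted
    days = set()   # distinct dates observed
    for path in mbox_paths:
        for msg in mailbox.mbox(path):
            msgid = msg.get("Message-ID")
            if not msgid or msgid in seen:
                continue  # cross-posted duplicate or malformed message
            seen.add(msgid)
            date = msg.get("Date")
            if date:
                try:
                    days.add(parsedate_to_datetime(date).date())
                except (TypeError, ValueError):
                    pass  # unparseable Date header; skip for day count
    return len(seen) / max(len(days), 1)
```

Usage would be along the lines of `daily_average(["openstack.mbox", "openstack-dev.mbox", "openstack-operators.mbox", "openstack-sigs.mbox"])`; counting by Message-ID rather than by archive entry is what keeps a single cross-posted mail from inflating the total four times.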