From mrhillsman at gmail.com  Sun Apr  1 17:01:51 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Sun, 1 Apr 2018 12:01:51 -0500
Subject: [Openstack-operators] Meeting Reminder - 4/2 @ 1400UTC
Message-ID:

Hi everyone,

Friendly reminder we have a UC meeting tomorrow in #openstack-uc on
freenode at 14:00 UTC.

Agenda:
https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Meeting_Agenda.2FPrevious_Meeting_Logs

--
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646

From aschultz at redhat.com  Mon Apr  2 16:14:31 2018
From: aschultz at redhat.com (Alex Schultz)
Date: Mon, 2 Apr 2018 10:14:31 -0600
Subject: [Openstack-operators] nova-placement-api tuning
In-Reply-To:
References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com>
Message-ID:

On Fri, Mar 30, 2018 at 11:11 AM, iain MacDonnell wrote:
>
> On 03/29/2018 02:13 AM, Belmiro Moreira wrote:
>>
>> Some lessons so far...
>> - Scale keystone accordingly when enabling placement.
>
> Speaking of which; I suppose I have the same question for keystone
> (currently running under httpd also). I'm currently using threads=1,
> based on this (IIRC):
>
> https://bugs.launchpad.net/puppet-keystone/+bug/1602530
>
> but I'm not sure if that's valid?
>
> Between placement and ceilometer feeding gnocchi, keystone is kept very
> busy.
>
> Recommendations for processes/threads for keystone? And any other tuning
> hints...?
>

So this is/was valid. A few years back some performance tests were run
with various process/thread combinations, and for keystone it was
determined that threads should stay at 1 while the process count is the
thing you adjust (hence the bug). The open question now is what the
optimal configuration is for each service, but I'm not sure anyone
upstream is looking at this across all the services. In the puppet
modules, for consistency, we applied a similar concept to all the
services when they are deployed under apache. It can be tuned as needed
for each service, but I don't think we have any great examples of perf
numbers. It's really a YMMV thing. We ship a basic default that isn't
crazy, but it's probably not optimal either.

Thanks,
-Alex

> Thanks!
>
> ~iain
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
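As a concrete illustration of the tuning Alex describes - keystone under
httpd/mod_wsgi with threads=1 and a scaled process count - a minimal vhost
sketch follows. The port, the process count of 8, the log paths, and the
/usr/bin/keystone-wsgi-public script path are illustrative assumptions,
not values taken from this thread:

    Listen 5000
    <VirtualHost *:5000>
        # One thread per daemon process, per the guidance above; the
        # process count is the knob to scale for your cores and load.
        WSGIDaemonProcess keystone-public processes=8 threads=1 \
            user=keystone group=keystone display-name=%{GROUP}
        WSGIProcessGroup keystone-public
        WSGIScriptAlias / /usr/bin/keystone-wsgi-public
        WSGIApplicationGroup %{GLOBAL}
        ErrorLog /var/log/httpd/keystone_error.log
        CustomLog /var/log/httpd/keystone_access.log combined
    </VirtualHost>

With a layout like this, tuning amounts to raising processes= (leaving
threads=1), reloading httpd, and measuring; the optimal count is, as Alex
says, a YMMV matter.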
From jimmy at openstack.org  Mon Apr  2 16:39:53 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Mon, 02 Apr 2018 11:39:53 -0500
Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
In-Reply-To:
References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu>
Message-ID: <5AC25CD9.60202@openstack.org>

Hi all -

I'd like to check in to see if we've come to a consensus on the colocation
of the Ops Meetup. Please let us know as soon as possible as we have to
alert our events team.

Thanks!
Jimmy

> Chris Morgan
> March 27, 2018 at 11:44 AM
> Hello Everyone,
> This proposal looks to have very good backing in the community. There was
> an informal IRC meeting today with the meetups team, some of the
> foundation folk and others, and everyone seems to like a proposal put
> forward as a sample definition of the combined event - I certainly do. It
> looks like we could have a really great combined event in September.
>
> I volunteered to share that a bit later today with some other info. In
> the meanwhile, if you have a viewpoint please do chime in here, as we'd
> like to declare this agreed by the community ASAP - so in particular, IF
> YOU OBJECT please speak up by the end of this week.
>
> Thanks!
>
> Chris
>
> --
> Chris Morgan
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> Jonathan Proulx
> March 23, 2018 at 10:07 AM
> On Thu, Mar 22, 2018 at 09:02:48PM -0700, Yih Leong, Sun. wrote:
> :I support the idea of trying to colocate the next Ops Midcycle and PTG.
> :Although scheduling could be a potential challenge, it's worth a try.
> :
> :Also, a joint social event in the evening can help Dev/Ops meet and have
> :offline discussions. :)
>
> Agreeing strongly with Matt and Melvin's comments about Forum -vs-
> PTG/OpsMidcycle.
>
> PTG/OpsMidcycle (as I see them) are about focusing inside teams to get
> work done ("how" is a good one-word summary, I think). The advantage of
> colocation is that cross-team questions like "we're thinking of doing
> this thing this way, does this have any impacts on your work I might not
> have considered" can get a quick response in the hall, at lunch, or over
> beers, as Yih Leong suggests.
>
> The Forum has become about coming together across groups for more
> conceptual "what" discussions.
>
> So I also think they are very distinct, and I do see potential benefits
> to colocation.
>
> We do need to watch out for downsides. The concerns around colocation
> seemed mostly about larger events costing more and being generally
> harder to organize. If we try, we will find out if there is merit to
> this concern, but (IMO) it is important to keep both of the events as
> cheap and simple as possible.
>
> -Jon
> Yih Leong, Sun.
> March 22, 2018 at 11:02 PM
> I support the idea of trying to colocate the next Ops Midcycle and PTG.
> Although scheduling could be a potential challenge, it's worth a try.
>
> Also, a joint social event in the evening can help Dev/Ops meet and have
> offline discussions. :)
>
> On Thursday, March 22, 2018, Melvin Hillsman wrote:
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> Melvin Hillsman
> March 22, 2018 at 9:08 PM
> Thierry and Matt both hit the nail on the head in terms of the very
> base/purpose/point of the Forum, PTG, and Ops Midcycles, and here is my
> +2, since I have spoken with both and others outside of this thread and
> agree with them here as I have in individual discussions.
>
> If nothing else, I agree with Jimmy's original statement of at least
> giving this a try.
>
> --
> Kind regards,
>
> Melvin Hillsman
> mrhillsman at gmail.com
> mobile: (832) 264-2646
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> Matt Van Winkle
> March 22, 2018 at 4:54 PM
> Hey folks,
> Great discussion! There are a number of points to comment on going back
> through the last few emails. I'll try to do so in line with Thierry's
> latest below. From a User Committee perspective (and as a member of the
> Ops Meetup planning team), I am a convert to the idea of co-location and
> have come to see a lot of value in it. I'll point some of that out as I
> respond to specific comments, but first a couple of overarching points.
>
> In the current model, the Forum sessions are very much about WHAT the
> software should do. Keeping the discussions focused on behavior, feature
> and function has made it much easier for an operator to participate
> effectively in the conversation versus the older design sessions that
> focused largely on blueprints, coding approaches, etc. These are HOW the
> developers should make things work and, now, are a large part of the
> focus of the PTG. I realize it's not that cut and dried, but the current
> model has allowed for this division of "what" and "how" in many areas,
> and I know several who have found it valuable.
>
> The other contextual thing to remember is that the PTG was the effective
> combining of all the various team mid-cycle meetups that were occurring.
> The current Ops mid-cycle was born in that same period.
> While its purpose was a little different, its spirit is the same -
> gather a team (in this case operators) together outside the hustle and
> bustle of a summit to discuss common issues, topics, etc. I'll also
> point out that they have been good vehicles in the Ops community to get
> new folks integrated. For the purpose of this discussion, though, one
> could argue this is just bringing the last mid-cycle event into the
> fold.
>
> On 3/21/18, 4:40 AM, "Thierry Carrez" wrote:
>
> Doug Hellmann wrote:
> > Excerpts from Tim Bell's message of 2018-03-20 19:48:31 +0000:
> >>
> >> Would we still need the same style of summit forum if we have the
> >> OpenStack Community Working Gathering? One thing I have found with
> >> the forum running all week throughout the summit is that it tends
> >> to draw audience away from other talks, so maybe we could reduce the
> >> forum to only a subset of the summit time?
> >
> > I support the idea of having all contributors attend the contributor
> > event (and rebranding it to reflect that change in emphasis), but
> > it's not quite clear how the result would be different from the
> > Forum. Is it just the scheduling? (Having input earlier in the cycle
> > would be convenient, for sure.)
> >
> > Thierry's comment about "work sessions" earlier in the thread seems
> > key.
>
> Right, I think the key difference between the PTG and Forum is that one
> is a work event for engaged contributors that are part of a group
> spending time on making OpenStack better, while the other is a venue for
> engaging with everyone in our community.
>
> The PTG format is really organized around work groups (whatever their
> focus is), enabling them to set their short-term goals, assign work
> items and bootstrap the work. The fact that all those work groups are
> co-located makes it easy to participate in multiple groups, or invite
> other people to join the discussion where it touches their area of
> expertise, but it's still mostly a venue for our
> geographically-distributed workgroups to get together in person and get
> work done. That's why the agenda is so flexible at the PTG, to maximize
> the productivity of attendees, even if that can confuse people who can't
> relate to any specific work group.
>
> Exactly. I know I way oversimplified it as working on the "how", but
> it's very important to honor this aspect of the current PTG. We need
> this time for the devs and teams to take output from the previous forum
> sessions (or earlier input) and turn it into plans for the N+1 version.
> While some folks could drift between sessions, co-locating the Ops
> mid-cycle is just that - leveraging venue, sponsors, and Foundation
> staff support across one, larger event - it should NOT disrupt the
> current spirit of the sessions Thierry describes above.
>
> The Forum format, on the other hand, is organized around specific
> discussion topics where you want to maximize feedback and input. Forum
> sessions are not attached to a specific workgroup or team; they are
> defined by their topic. They are well-advertised on the event schedule,
> and happen at a precise time. It takes advantage of the thousands of
> attendees being present to get the most relevant feedback possible. It
> allows engagement beyond the work groups, from people who can't spend
> much time getting more engaged and contributing back.
>
> Agreed.
> Again, I oversimplified it as the "what", but these sessions are so
> valuable as they bring dev and ops into a room and focus on what the
> software needs to do, or the impact (positive or negative) that planned
> behaviors might have on operators and users. To Tim's earlier question,
> no, I think this change doesn't reduce the need for Forum sessions. If
> anything, I think it increases the need for us to get REALLY good at
> channeling output from the Ops mid-cycle into session topics at the
> next Summit.
>
> The Ops meetup under its current format is mostly work sessions, and
> those would fit pretty well in the PTG event format. Ideally I would
> limit the feedback-gathering sessions there and use the Forum (and
> regional events like OpenStack Days) to collect it. That sounds like a
> better way to reach out to "all users" and take into account their
> feedback and needs...
>
> They are largely work sessions, but independent of the co-location
> discussion, the UC is focused on improving the ability for tangible
> output to come from Ops mid-cycles, OpenStack Days and regional meetups
> - largely in the form of Forum sessions and ultimately changes in the
> software. So we, as a committee, see a lot of similarities in what you
> just said. I'm not bold enough to predict exactly how co-location might
> change the tone/topic of the Ops sessions, but I agree that we
> shouldn't expect a lot of real-time feedback time with devs at the
> PTG/mid-summit event (whatever we end up calling it). We want the devs
> to be focused on what's already planned for the N+1 version or beyond.
> The conversations/sessions at the Ops portion of the event would
> hopefully lead to Forum sessions on N+2 features, functions, bug
> fixes, etc.
>
> Overall, I still see co-location as a positive move. There will be some
> tricky bits we need to figure out between the "two sides" of the event,
> as we want to MINIMIZE any perceived us/them between dev and ops - not
> add to it. But the work sessions themselves should still honor the
> spirit of the PTG and Ops Mid-cycle as they are today. We just get the
> added benefit of time together as a whole community - and hopefully
> solve a few logistic/finance/sponsorship/venue issues that trouble one
> event or the other today.
>
> Thanks!
> VW
> --
> Thierry Carrez (ttx)
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From mrhillsman at gmail.com  Mon Apr  2 17:53:15 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Mon, 2 Apr 2018 12:53:15 -0500
Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
In-Reply-To: <5AC25CD9.60202@openstack.org>
References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu> <5AC25CD9.60202@openstack.org>
Message-ID:

+1

On Mon, Apr 2, 2018 at 11:39 AM, Jimmy McArthur wrote:

> Hi all -
>
> I'd like to check in to see if we've come to a consensus on the
> colocation of the Ops Meetup.
> Please let us know as soon as possible as
> we have to alert our events team.
>
> Thanks!
> Jimmy

--
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646
URL: From mrhillsman at gmail.com Mon Apr 2 20:15:56 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 2 Apr 2018 15:15:56 -0500 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu> <5AC25CD9.60202@openstack.org> Message-ID: Unless anyone has any objections I believe we have quorum Jimmy. On Mon, Apr 2, 2018 at 12:53 PM, Melvin Hillsman wrote: > +1 > > On Mon, Apr 2, 2018 at 11:39 AM, Jimmy McArthur > wrote: > >> Hi all - >> >> I'd like to check in to see if we've come to a consensus on the >> colocation of the Ops Meetup. Please let us know as soon as possible as we >> have to alert our events team. >> >> Thanks! >> Jimmy >> >> Chris Morgan >> March 27, 2018 at 11:44 AM >> Hello Everyone, >> This proposal looks to have very good backing in the community. There >> was an informal IRC meeting today with the meetups team, some of the >> foundation folk and others and everyone seems to like a proposal put >> forward as a sample definition of the combined event - I certainly do, it >> looks like we could have a really great combined event in September. >> >> I volunteered to share that a bit later today with some other info. In >> the meanwhile if you have a viewpoint please do chime in here as we'd like >> to declare this agreed by the community ASAP, so in particular IF YOU >> OBJECT please speak up by end of week, this week. >> >> Thanks! >> >> Chris >> >> >> >> >> -- >> Chris Morgan >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> Jonathan Proulx >> March 23, 2018 at 10:07 AM >> On Thu, Mar 22, 2018 at 09:02:48PM -0700, Yih Leong, Sun. wrote: >> :I support the ideas to try colocating the next Ops Midcycle and PTG. >> :Although scheduling could be a potential challenge but it worth give it a >> :try. >> : >> :Also having an joint social event in the evening can also help Dev/Ops to >> :meet and offline discussion. :) >> >> Agreeing stongly with Matt and Melvin's comments about Forum -vs- >> PTG/OpsMidcycle >> >> PTG/OpsMidcycle (as I see them) are about focusing inside teams to get >> work done ("how" is a a good one word I think). The advantage of >> colocation is for cross team questions like "we're thinking of doing >> this thing this way, does this have any impacts on your work my might >> not have considered", can get a quick respose in the hall, at lunch, >> or over beers as Yih Leong suggests. >> >> Forum has become about coming to gather across groups for more >> conceptual "what" discussions. >> >> So I also thing they are very distinct and I do see potential benefits >> to colocation. >> >> We do need to watch out for downsides. The concerns around colocation >> seemed mostly about larger events costing more and being generally >> harder to organize. If we try we will find out if there is merit to >> this concern, but (IMO) it is important to keep both of the >> events as cheap and simple as possible. 
>> >> -Jon >> >> : >> :On Thursday, March 22, 2018, Melvin Hillsman >> wrote: >> : >> :> Thierry and Matt both hit the nail on the head in terms of the very >> :> base/purpose/point of the Forum, PTG, and Ops Midcycles and here is my >> +2 >> :> since I have spoke with both and others outside of this thread and >> agree >> :> with them here as I have in individual discussions. >> :> >> :> If nothing else I agree with Jimmy's original statement of at least >> giving >> :> this a try. >> :> >> :> On Thu, Mar 22, 2018 at 4:54 PM, Matt Van Winkle >> >> :> wrote: >> :> >> :>> Hey folks, >> :>> Great discussion! There are number of points to comment on going back >> :>> through the last few emails. I'll try to do so in line with Theirry's >> :>> latest below. From a User Committee perspective (and as a member of >> the >> :>> Ops Meetup planning team), I am a convert to the idea of co-location, >> but >> :>> have come to see a lot of value in it. I'll point some of that out as >> I >> :>> respond to specific comments, but first a couple of overarching >> points. >> :>> >> :>> In the current model, the Forum sessions are very much about WHAT the >> :>> software should do. Keeping the discussions focused on behavior, >> feature >> :>> and function has made it much easier for an operator to participate >> :>> effectively in the conversation versus the older, design sessions, >> that >> :>> focused largely on blueprints, coding approaches, etc. These are HOW >> the >> :>> developers should make things work and, now, are a large part of the >> focus >> :>> of the PTG. I realize it's not that cut and dry, but current model has >> :>> allowed for this division of "what" and "how" in many areas, and I >> know >> :>> several who have found it valuable. >> :>> >> :>> The other contextual thing to remember is the PTG was the effective >> :>> combining of all the various team mid-cycle meetups that were >> occurring. >> :>> The current Ops mid-cycle was born in that same period. While it's >> purpose >> :>> was a little different, it's spirit is the same - gather a team (in >> this >> :>> case operators) together outside the hustle and bustle of a summit to >> :>> discuss common issues, topics, etc. I'll also point out, that they >> have >> :>> been good vehicles in the Ops community to get new folks integrated. >> For >> :>> the purpose of this discussion, though, one could argue this is just >> :>> bringing the last mid-cycle event in to the fold. >> :>> >> :>> On 3/21/18, 4:40 AM, "Thierry Carrez" >> wrote: >> :>> >> :>> Doug Hellmann wrote: >> :>> > Excerpts from Tim Bell's message of 2018-03-20 19:48:31 +0000: >> :>> >> >> :>> >> Would we still need the same style of summit forum if we have the >> :>> >> OpenStack Community Working Gathering? One thing I have found with >> :>> >> the forum running all week throughout the summit is that it tends >> :>> >> to draw audience away from other talks so maybe we could reduce the >> :>> >> forum to only a subset of the summit time? >> :>> > >> :>> > I support the idea of having all contributors attend the contributor >> :>> > event (and rebranding it to reflect that change in emphasis), but >> :>> > it's not quite clear how the result would be different from the >> :>> > Forum. Is it just the scheduling? (Having input earlier in the cycle >> :>> > would be convenient, for sure.) >> :>> > >> :>> > Thierry's comment about "work sessions" earlier in the thread seems >> :>> > key. 
>> :>> >> :>> Right, I think the key difference between the PTG and Forum is that >> :>> one >> :>> is a work event for engaged contributors that are part of a group >> :>> spending time on making OpenStack better, while the other is a venue >> :>> for >> :>> engaging with everyone in our community. >> :>> >> :>> The PTG format is really organized around work groups (whatever their >> :>> focus is), enabling them to set their short-term goals, assign work >> :>> items and bootstrap the work. The fact that all those work groups are >> :>> co-located make it easy to participate in multiple groups, or invite >> :>> other people to join the discussion where it touches their area of >> :>> expertise, but it's still mostly a venue for our >> :>> geographically-distributed workgroups to get together in person and >> :>> get >> :>> work done. That's why the agenda is so flexible at the PTG, to >> :>> maximize >> :>> the productivity of attendees, even if that can confuse people who >> :>> can't >> :>> relate to any specific work group. >> :>> >> :>> Exactly. I know I way over simplified it as working on the "how", but >> :>> it's very important to honor this aspect of the current PTG. We need >> this >> :>> time for the devs and teams to take output from the previous forum >> sessions >> :>> (or earlier input) and turn it into plans for the N+1 version. While >> some >> :>> folks could drift between sessions, co-locating the Ops mid-cycle is >> just >> :>> that - leveraging venue, sponsors, and Foundation staff support >> across one, >> :>> larger event - it should NOT disrupt the current spirit of the >> sessions >> :>> Theirry describes above >> :>> >> :>> The Forum format, on the other hand, is organized around specific >> :>> discussion topics where you want to maximize feedback and input. Forum >> :>> sessions are not attached to a specific workgroup or team, they are >> :>> defined by their topic. They are well-advertised on the event >> :>> schedule, >> :>> and happen at a precise time. It takes advantage of the thousands of >> :>> attendees being present to get the most relevant feedback possible. It >> :>> allows to engage beyond the work groups, to people who can't spend >> :>> much >> :>> time getting more engaged and contribute back. >> :>> >> :>> Agreed. Again, I over simplified as the "what", but these sessions are >> :>> so valuable as the bring dev and ops in a room and focus on what the >> :>> software needs to do or the impact (positive or negative) that planned >> :>> behaviors might have on Operators and users. To Tim's earlier >> question, no >> :>> I think this change doesn't reduce the need for Forum sessions. If >> :>> anything, I think it increases the need for us to get REALLY good at >> :>> channeling output from the Ops mid-cycle in to session topics at the >> next >> :>> Summit. >> :>> >> :>> The Ops meetup under its current format is mostly work sessions, and >> :>> those would fit pretty well in the PTG event format. Ideally I would >> :>> limit the feedback-gathering sessions there and use the Forum (and >> :>> regional events like OpenStack days) to collect it. That sounds like a >> :>> better way to reach out to "all users" and take into account their >> :>> feedback and needs... 
>> :>> >> :>> They are largely work sessions, but independent of the co-location >> :>> discussion, the UC is focused on improving the ability for tangible >> output >> :>> to come from Ops mid-cycles, OpenStack Days and regional meetups - >> largely >> :>> in the form of Forum sessions and ultimately changes in the software. >> So >> :>> we, as a committee, see a lot of similarities in what you just said. >> I'm >> :>> not bold enough to predict exactly how co-location might change the >> :>> tone/topic of the Ops sessions, but I agree that we shouldn't expect >> a lot >> :>> of real-time feedback time with devs at the PTG/mid-summit event >> (what ever >> :>> we end up calling it). We want the devs to be focused on what's >> already >> :>> planned for the N+1 version or beyond. The conversations/sessions at >> the >> :>> Ops portion of the event would hopefully lead to Forum sessions on N+2 >> :>> features, functions, bug fixes, etc >> :>> >> :>> Overall, I still see co-location as a positive move. There will be >> some >> :>> tricky bits we need to figure out between to the "two sides" of the >> event >> :>> as we want to MINIMIZE any perceived us/them between dev and ops - >> not add >> :>> to it. But, the work session themselves, should still honor the >> spirit of >> :>> the PTG and Ops Mid-cycle as they are today. We just get the added >> benefit >> :>> of time together as a whole community - and hopefully solve a few >> :>> logistic/finance/sponsorship/venue issues that trouble one event or >> the >> :>> other today. >> :>> >> :>> Thanks! >> :>> VW >> :>> -- >> :>> Thierry Carrez (ttx) >> :>> >> :>> _______________________________________________ >> :>> OpenStack-operators mailing list >> :>> OpenStack-operators at lists.openstack.org >> :>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> :>> k-operators >> :>> >> :>> >> :>> _______________________________________________ >> :>> OpenStack-operators mailing list >> :>> OpenStack-operators at lists.openstack.org >> :>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k-operators >> :>> >> :> >> :> >> :> >> :> -- >> :> Kind regards, >> :> >> :> Melvin Hillsman >> :> mrhillsman at gmail.com >> :> mobile: (832) 264-2646 >> :> >> >> :_______________________________________________ >> :OpenStack-operators mailing list >> :OpenStack-operators at lists.openstack.org >> :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> Yih Leong, Sun. >> March 22, 2018 at 11:02 PM >> I support the ideas to try colocating the next Ops Midcycle and PTG. >> Although scheduling could be a potential challenge but it worth give it a >> try. >> >> Also having an joint social event in the evening can also help Dev/Ops to >> meet and offline discussion. :) >> >> On Thursday, March 22, 2018, Melvin Hillsman >> wrote: >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> Melvin Hillsman >> March 22, 2018 at 9:08 PM >> Thierry and Matt both hit the nail on the head in terms of the very >> base/purpose/point of the Forum, PTG, and Ops Midcycles and here is my +2 >> since I have spoke with both and others outside of this thread and agree >> with them here as I have in individual discussions. >> >> If nothing else I agree with Jimmy's original statement of at least >> giving this a try. 
>> >>
>> >> --
>> >> Kind regards,
>> >>
>> >> Melvin Hillsman
>> >> mrhillsman at gmail.com
>> >> mobile: (832) 264-2646
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> --
> Kind regards,
>
> Melvin Hillsman
> mrhillsman at gmail.com
> mobile: (832) 264-2646

--
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646

From sean.mcginnis at gmx.com Mon Apr 2 20:27:37 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Mon, 2 Apr 2018 15:27:37 -0500
Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
In-Reply-To: 
References: <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu> <5AC25CD9.60202@openstack.org>
Message-ID: <20180402202736.GA26053@sm-xps>

On Mon, Apr 02, 2018 at 03:15:56PM -0500, Melvin Hillsman wrote:
> Unless anyone has any objections I believe we have quorum Jimmy.

I agree. I think the feedback I've heard so far is that all parties are willing to give this a shot. I think we should go ahead.

From amy at demarco.com Mon Apr 2 20:48:05 2018
From: amy at demarco.com (Amy Marrich)
Date: Mon, 2 Apr 2018 15:48:05 -0500
Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
In-Reply-To: <20180402202736.GA26053@sm-xps>
References: <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu> <5AC25CD9.60202@openstack.org> <20180402202736.GA26053@sm-xps>
Message-ID: 

+2, I think all concerns have been addressed.

Amy (spotz)

On Mon, Apr 2, 2018 at 3:27 PM, Sean McGinnis wrote:
> On Mon, Apr 02, 2018 at 03:15:56PM -0500, Melvin Hillsman wrote:
> > Unless anyone has any objections I believe we have quorum Jimmy.
>
> I agree. I think the feedback I've heard so far is that all parties are willing to give this a shot. I think we should go ahead.
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From emccormick at cirrusseven.com Mon Apr 2 20:57:13 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Mon, 02 Apr 2018 20:57:13 +0000
Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
In-Reply-To: 
References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu> <5AC25CD9.60202@openstack.org>
Message-ID: 

I'm a +1 too, as long as the devs at large are cool with it and won't hate on us for crashing their party. I also +1 the proposed format. It's basically what we discussed in Tokyo. Make it so.

Cheers
Erik

PS. Sorry for the radio silence the past couple weeks. Vacation, kids, etc.

On Apr 2, 2018 4:18 PM, "Melvin Hillsman" wrote:

Unless anyone has any objections I believe we have quorum Jimmy.

On Mon, Apr 2, 2018 at 12:53 PM, Melvin Hillsman wrote:

> +1
>
> On Mon, Apr 2, 2018 at 11:39 AM, Jimmy McArthur wrote:
>
>> Hi all -
>>
>> I'd like to check in to see if we've come to a consensus on the colocation of the Ops Meetup. Please let us know as soon as possible as we have to alert our events team.
>>
>> Thanks!
>> Jimmy
--
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From Arkady.Kanevsky at dell.com Mon Apr 2 20:58:55 2018
From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com)
Date: Mon, 2 Apr 2018 20:58:55 +0000
Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
Message-ID: 

+1

From: Erik McCormick [mailto:emccormick at cirrusseven.com]
Sent: Monday, April 2, 2018 3:57 PM
To: Melvin Hillsman
Cc: openstack-operators
Subject: Re: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback

I'm a +1 too, as long as the devs at large are cool with it and won't hate on us for crashing their party. I also +1 the proposed format. It's basically what we discussed in Tokyo. Make it so.

Cheers
Erik
PS. Sorry for the radio silence the past couple weeks. Vacation, kids, etc.

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From mark at openstack.org Mon Apr 2 21:55:00 2018
From: mark at openstack.org (Mark Collier)
Date: Mon, 2 Apr 2018 16:55:00 -0500
Subject: [Openstack-operators] Last chance Vancouver Summit Early Birds!
Message-ID: <32721DA6-2332-40A2-B5D6-24B6B9B2D2CA@openstack.org>

Hey Stackers,

You've got TWO DAYS left to snag an early bird ticket, which is $699 for a full-access, week-long pass. That's four days of 300+ sessions and workshops on OpenStack, containers, edge, CI/CD and HPC/GPU/AI in Vancouver, May 21-24.

The OpenStack Summit is my favorite place to meet and learn from smart, driven, funny people from all over the world. Will you join me in Vancouver May 21-24? OpenStack.org/summit has the details.

Who else will you meet in Vancouver?
- An OpenStack developer to discuss the future of the software?
- A Kubernetes expert in one of more than 60 sessions about Kubernetes?
- A Foundation member who can help you learn how to contribute code upstream at the Upstream Institute?
- Other enterprises & service providers running OpenStack at scale, like JPMorgan Chase, Progressive Insurance, Google, Target, Walmart, Yahoo!, China Mobile, AT&T, Verizon, China Railway, and Yahoo! Japan?
- Your next employee… or employer?

Key links:

Register: openstack.org/summit (Early bird pricing ends April 4 at 11:59pm Pacific Time / April 5 6:59 UTC)
Full Schedule: https://www.openstack.org/summit/vancouver-2018/summit-schedule/#day=2018-05-21
Hotel Discounts: https://www.openstack.org/summit/vancouver-2018/travel/
Sponsor: https://www.openstack.org/summit/vancouver-2018/sponsors/
Code of Conduct: https://www.openstack.org/summit/vancouver-2018/code-of-conduct/

See you at the Summit!

Mark
twitter.com/sparkycollier

From mihalis68 at gmail.com Mon Apr 2 22:04:13 2018
From: mihalis68 at gmail.com (Chris Morgan)
Date: Mon, 2 Apr 2018 22:04:13 +0000
Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
Message-ID: <0FC4B371-ED5C-4AC9-AE5D-995F85FA8977@gmail.com>

+1. Greetings from Reykjavik

Sent from my iPhone

> On Apr 2, 2018, at 8:58 PM, wrote:
>
> +1
>
> From: Erik McCormick [mailto:emccormick at cirrusseven.com]
> Sent: Monday, April 2, 2018 3:57 PM
> To: Melvin Hillsman
> Cc: openstack-operators
> Subject: Re: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
>
> I'm a +1 too, as long as the devs at large are cool with it and won't hate on us for crashing their party. I also +1 the proposed format. It's basically what we discussed in Tokyo. Make it so.
>
> Cheers
> Erik
> > Chris > > > > > -- > Chris Morgan > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > Jonathan Proulx > March 23, 2018 at 10:07 AM > On Thu, Mar 22, 2018 at 09:02:48PM -0700, Yih Leong, Sun. wrote: > :I support the ideas to try colocating the next Ops Midcycle and PTG. > :Although scheduling could be a potential challenge but it worth give it a > :try. > : > :Also having an joint social event in the evening can also help Dev/Ops to > :meet and offline discussion. :) > > Agreeing stongly with Matt and Melvin's comments about Forum -vs- > PTG/OpsMidcycle > > PTG/OpsMidcycle (as I see them) are about focusing inside teams to get > work done ("how" is a a good one word I think). The advantage of > colocation is for cross team questions like "we're thinking of doing > this thing this way, does this have any impacts on your work my might > not have considered", can get a quick respose in the hall, at lunch, > or over beers as Yih Leong suggests. > > Forum has become about coming to gather across groups for more > conceptual "what" discussions. > > So I also thing they are very distinct and I do see potential benefits > to colocation. > > We do need to watch out for downsides. The concerns around colocation > seemed mostly about larger events costing more and being generally > harder to organize. If we try we will find out if there is merit to > this concern, but (IMO) it is important to keep both of the > events as cheap and simple as possible. > > -Jon > > : > :On Thursday, March 22, 2018, Melvin Hillsman wrote: > : > :> Thierry and Matt both hit the nail on the head in terms of the very > :> base/purpose/point of the Forum, PTG, and Ops Midcycles and here is my +2 > :> since I have spoke with both and others outside of this thread and agree > :> with them here as I have in individual discussions. > :> > :> If nothing else I agree with Jimmy's original statement of at least giving > :> this a try. > :> > :> On Thu, Mar 22, 2018 at 4:54 PM, Matt Van Winkle > :> wrote: > :> > :>> Hey folks, > :>> Great discussion! There are number of points to comment on going back > :>> through the last few emails. I'll try to do so in line with Theirry's > :>> latest below. From a User Committee perspective (and as a member of the > :>> Ops Meetup planning team), I am a convert to the idea of co-location, but > :>> have come to see a lot of value in it. I'll point some of that out as I > :>> respond to specific comments, but first a couple of overarching points. > :>> > :>> In the current model, the Forum sessions are very much about WHAT the > :>> software should do. Keeping the discussions focused on behavior, feature > :>> and function has made it much easier for an operator to participate > :>> effectively in the conversation versus the older, design sessions, that > :>> focused largely on blueprints, coding approaches, etc. These are HOW the > :>> developers should make things work and, now, are a large part of the focus > :>> of the PTG. I realize it's not that cut and dry, but current model has > :>> allowed for this division of "what" and "how" in many areas, and I know > :>> several who have found it valuable. > :>> > :>> The other contextual thing to remember is the PTG was the effective > :>> combining of all the various team mid-cycle meetups that were occurring. > :>> The current Ops mid-cycle was born in that same period. 
While it's purpose > :>> was a little different, it's spirit is the same - gather a team (in this > :>> case operators) together outside the hustle and bustle of a summit to > :>> discuss common issues, topics, etc. I'll also point out, that they have > :>> been good vehicles in the Ops community to get new folks integrated. For > :>> the purpose of this discussion, though, one could argue this is just > :>> bringing the last mid-cycle event in to the fold. > :>> > :>> On 3/21/18, 4:40 AM, "Thierry Carrez" wrote: > :>> > :>> Doug Hellmann wrote: > :>> > Excerpts from Tim Bell's message of 2018-03-20 19:48:31 +0000: > :>> >> > :>> >> Would we still need the same style of summit forum if we have the > :>> >> OpenStack Community Working Gathering? One thing I have found with > :>> >> the forum running all week throughout the summit is that it tends > :>> >> to draw audience away from other talks so maybe we could reduce the > :>> >> forum to only a subset of the summit time? > :>> > > :>> > I support the idea of having all contributors attend the contributor > :>> > event (and rebranding it to reflect that change in emphasis), but > :>> > it's not quite clear how the result would be different from the > :>> > Forum. Is it just the scheduling? (Having input earlier in the cycle > :>> > would be convenient, for sure.) > :>> > > :>> > Thierry's comment about "work sessions" earlier in the thread seems > :>> > key. > :>> > :>> Right, I think the key difference between the PTG and Forum is that > :>> one > :>> is a work event for engaged contributors that are part of a group > :>> spending time on making OpenStack better, while the other is a venue > :>> for > :>> engaging with everyone in our community. > :>> > :>> The PTG format is really organized around work groups (whatever their > :>> focus is), enabling them to set their short-term goals, assign work > :>> items and bootstrap the work. The fact that all those work groups are > :>> co-located make it easy to participate in multiple groups, or invite > :>> other people to join the discussion where it touches their area of > :>> expertise, but it's still mostly a venue for our > :>> geographically-distributed workgroups to get together in person and > :>> get > :>> work done. That's why the agenda is so flexible at the PTG, to > :>> maximize > :>> the productivity of attendees, even if that can confuse people who > :>> can't > :>> relate to any specific work group. > :>> > :>> Exactly. I know I way over simplified it as working on the "how", but > :>> it's very important to honor this aspect of the current PTG. We need this > :>> time for the devs and teams to take output from the previous forum sessions > :>> (or earlier input) and turn it into plans for the N+1 version. While some > :>> folks could drift between sessions, co-locating the Ops mid-cycle is just > :>> that - leveraging venue, sponsors, and Foundation staff support across one, > :>> larger event - it should NOT disrupt the current spirit of the sessions > :>> Theirry describes above > :>> > :>> The Forum format, on the other hand, is organized around specific > :>> discussion topics where you want to maximize feedback and input. Forum > :>> sessions are not attached to a specific workgroup or team, they are > :>> defined by their topic. They are well-advertised on the event > :>> schedule, > :>> and happen at a precise time. It takes advantage of the thousands of > :>> attendees being present to get the most relevant feedback possible. 
It > :>> allows to engage beyond the work groups, to people who can't spend > :>> much > :>> time getting more engaged and contribute back. > :>> > :>> Agreed. Again, I over simplified as the "what", but these sessions are > :>> so valuable as the bring dev and ops in a room and focus on what the > :>> software needs to do or the impact (positive or negative) that planned > :>> behaviors might have on Operators and users. To Tim's earlier question, no > :>> I think this change doesn't reduce the need for Forum sessions. If > :>> anything, I think it increases the need for us to get REALLY good at > :>> channeling output from the Ops mid-cycle in to session topics at the next > :>> Summit. > :>> > :>> The Ops meetup under its current format is mostly work sessions, and > :>> those would fit pretty well in the PTG event format. Ideally I would > :>> limit the feedback-gathering sessions there and use the Forum (and > :>> regional events like OpenStack days) to collect it. That sounds like a > :>> better way to reach out to "all users" and take into account their > :>> feedback and needs... > :>> > :>> They are largely work sessions, but independent of the co-location > :>> discussion, the UC is focused on improving the ability for tangible output > :>> to come from Ops mid-cycles, OpenStack Days and regional meetups - largely > :>> in the form of Forum sessions and ultimately changes in the software. So > :>> we, as a committee, see a lot of similarities in what you just said. I'm > :>> not bold enough to predict exactly how co-location might change the > :>> tone/topic of the Ops sessions, but I agree that we shouldn't expect a lot > :>> of real-time feedback time with devs at the PTG/mid-summit event (what ever > :>> we end up calling it). We want the devs to be focused on what's already > :>> planned for the N+1 version or beyond. The conversations/sessions at the > :>> Ops portion of the event would hopefully lead to Forum sessions on N+2 > :>> features, functions, bug fixes, etc > :>> > :>> Overall, I still see co-location as a positive move. There will be some > :>> tricky bits we need to figure out between to the "two sides" of the event > :>> as we want to MINIMIZE any perceived us/them between dev and ops - not add > :>> to it. But, the work session themselves, should still honor the spirit of > :>> the PTG and Ops Mid-cycle as they are today. We just get the added benefit > :>> of time together as a whole community - and hopefully solve a few > :>> logistic/finance/sponsorship/venue issues that trouble one event or the > :>> other today. > :>> > :>> Thanks! > :>> VW > :>> -- > :>> Thierry Carrez (ttx) > :>> > :>> _______________________________________________ > :>> OpenStack-operators mailing list > :>> OpenStack-operators at lists.openstack.org > :>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac > :>> k-operators > :>> > :>> > :>> _______________________________________________ > :>> OpenStack-operators mailing list > :>> OpenStack-operators at lists.openstack.org > :>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > :>> > :> > :> > :> > :> -- > :> Kind regards, > :> > :> Melvin Hillsman > :> mrhillsman at gmail.com > :> mobile: (832) 264-2646 > :> > > :_______________________________________________ > :OpenStack-operators mailing list > :OpenStack-operators at lists.openstack.org > :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > Yih Leong, Sun. 
> March 22, 2018 at 11:02 PM > I support the ideas to try colocating the next Ops Midcycle and PTG. Although scheduling could be a potential challenge but it worth give it a try. > > Also having an joint social event in the evening can also help Dev/Ops to meet and offline discussion. :) > > On Thursday, March 22, 2018, Melvin Hillsman wrote: > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > Melvin Hillsman > March 22, 2018 at 9:08 PM > Thierry and Matt both hit the nail on the head in terms of the very base/purpose/point of the Forum, PTG, and Ops Midcycles and here is my +2 since I have spoke with both and others outside of this thread and agree with them here as I have in individual discussions. > > If nothing else I agree with Jimmy's original statement of at least giving this a try. > > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > Matt Van Winkle > March 22, 2018 at 4:54 PM > Hey folks, > Great discussion! There are number of points to comment on going back through the last few emails. I'll try to do so in line with Theirry's latest below. From a User Committee perspective (and as a member of the Ops Meetup planning team), I am a convert to the idea of co-location, but have come to see a lot of value in it. I'll point some of that out as I respond to specific comments, but first a couple of overarching points. > > In the current model, the Forum sessions are very much about WHAT the software should do. Keeping the discussions focused on behavior, feature and function has made it much easier for an operator to participate effectively in the conversation versus the older, design sessions, that focused largely on blueprints, coding approaches, etc. These are HOW the developers should make things work and, now, are a large part of the focus of the PTG. I realize it's not that cut and dry, but current model has allowed for this division of "what" and "how" in many areas, and I know several who have found it valuable. > > The other contextual thing to remember is the PTG was the effective combining of all the various team mid-cycle meetups that were occurring. The current Ops mid-cycle was born in that same period. While it's purpose was a little different, it's spirit is the same - gather a team (in this case operators) together outside the hustle and bustle of a summit to discuss common issues, topics, etc. I'll also point out, that they have been good vehicles in the Ops community to get new folks integrated. For the purpose of this discussion, though, one could argue this is just bringing the last mid-cycle event in to the fold. > > On 3/21/18, 4:40 AM, "Thierry Carrez" wrote: > > Doug Hellmann wrote: > > Excerpts from Tim Bell's message of 2018-03-20 19:48:31 +0000: > >> > >> Would we still need the same style of summit forum if we have the > >> OpenStack Community Working Gathering? One thing I have found with > >> the forum running all week throughout the summit is that it tends > >> to draw audience away from other talks so maybe we could reduce the > >> forum to only a subset of the summit time? 
> > > > I support the idea of having all contributors attend the contributor > > event (and rebranding it to reflect that change in emphasis), but > > it's not quite clear how the result would be different from the > > Forum. Is it just the scheduling? (Having input earlier in the cycle > > would be convenient, for sure.) > > > > Thierry's comment about "work sessions" earlier in the thread seems > > key. > > Right, I think the key difference between the PTG and Forum is that one > is a work event for engaged contributors that are part of a group > spending time on making OpenStack better, while the other is a venue for > engaging with everyone in our community. > > The PTG format is really organized around work groups (whatever their > focus is), enabling them to set their short-term goals, assign work > items and bootstrap the work. The fact that all those work groups are > co-located make it easy to participate in multiple groups, or invite > other people to join the discussion where it touches their area of > expertise, but it's still mostly a venue for our > geographically-distributed workgroups to get together in person and get > work done. That's why the agenda is so flexible at the PTG, to maximize > the productivity of attendees, even if that can confuse people who can't > relate to any specific work group. > > Exactly. I know I way over simplified it as working on the "how", but it's very important to honor this aspect of the current PTG. We need this time for the devs and teams to take output from the previous forum sessions (or earlier input) and turn it into plans for the N+1 version. While some folks could drift between sessions, co-locating the Ops mid-cycle is just that - leveraging venue, sponsors, and Foundation staff support across one, larger event - it should NOT disrupt the current spirit of the sessions Theirry describes above > > The Forum format, on the other hand, is organized around specific > discussion topics where you want to maximize feedback and input. Forum > sessions are not attached to a specific workgroup or team, they are > defined by their topic. They are well-advertised on the event schedule, > and happen at a precise time. It takes advantage of the thousands of > attendees being present to get the most relevant feedback possible. It > allows to engage beyond the work groups, to people who can't spend much > time getting more engaged and contribute back. > > Agreed. Again, I over simplified as the "what", but these sessions are so valuable as the bring dev and ops in a room and focus on what the software needs to do or the impact (positive or negative) that planned behaviors might have on Operators and users. To Tim's earlier question, no I think this change doesn't reduce the need for Forum sessions. If anything, I think it increases the need for us to get REALLY good at channeling output from the Ops mid-cycle in to session topics at the next Summit. > > The Ops meetup under its current format is mostly work sessions, and > those would fit pretty well in the PTG event format. Ideally I would > limit the feedback-gathering sessions there and use the Forum (and > regional events like OpenStack days) to collect it. That sounds like a > better way to reach out to "all users" and take into account their > feedback and needs... 
> > They are largely work sessions, but independent of the co-location discussion, the UC is focused on improving the ability for tangible output to come from Ops mid-cycles, OpenStack Days and regional meetups - largely in the form of Forum sessions and ultimately changes in the software. So we, as a committee, see a lot of similarities in what you just said. I'm not bold enough to predict exactly how co-location might change the tone/topic of the Ops sessions, but I agree that we shouldn't expect a lot of real-time feedback time with devs at the PTG/mid-summit event (what ever we end up calling it). We want the devs to be focused on what's already planned for the N+1 version or beyond. The conversations/sessions at the Ops portion of the event would hopefully lead to Forum sessions on N+2 features, functions, bug fixes, etc > > Overall, I still see co-location as a positive move. There will be some tricky bits we need to figure out between to the "two sides" of the event as we want to MINIMIZE any perceived us/them between dev and ops - not add to it. But, the work session themselves, should still honor the spirit of the PTG and Ops Mid-cycle as they are today. We just get the added benefit of time together as a whole community - and hopefully solve a few logistic/finance/sponsorship/venue issues that trouble one event or the other today. > > Thanks! > VW > -- > Thierry Carrez (ttx) > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From zioproto at gmail.com Tue Apr 3 07:09:27 2018
From: zioproto at gmail.com (Saverio Proto)
Date: Tue, 3 Apr 2018 09:09:27 +0200
Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
In-Reply-To: <0FC4B371-ED5C-4AC9-AE5D-995F85FA8977@gmail.com>
References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu> <5AC25CD9.60202@openstack.org> <0FC4B371-ED5C-4AC9-AE5D-995F85FA8977@gmail.com>
Message-ID:

I am also +1.

Thanks

Saverio

From thierry at openstack.org Tue Apr 3 08:33:17 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Tue, 3 Apr 2018 10:33:17 +0200
Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
In-Reply-To:
References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu> <5AC25CD9.60202@openstack.org>
Message-ID:

Erik McCormick wrote:
> I'm a +1 too as long as the devs at large are cool with it and won't
> hate on us for crashing their party.

As a data point, in a recent survey 89% of surveyed developers supported that the Ops meetup should happen at the same time and place. Amongst past PTG attendees, that support rises to 92%. Furthermore, I only heard good things about the Public Cloud WG participating in the Dublin PTG.

So I don't think anyone views it as "their party" -- just as an event where we all get stuff done.

--
Thierry

From mizuno.shintaro at lab.ntt.co.jp Tue Apr 3 08:47:14 2018
From: mizuno.shintaro at lab.ntt.co.jp (Shintaro Mizuno)
Date: Tue, 3 Apr 2018 17:47:14 +0900
Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
In-Reply-To:
References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu> <5AC25CD9.60202@openstack.org>
Message-ID: <993038ab-3eab-482b-1434-cd968a6e05d9@lab.ntt.co.jp>

I'm also +1 on this.

I've circulated it to the Japanese Ops group and heard no objection, so there would be more +1s from our community.

Shintaro
--
Shintaro MIZUNO (水野伸太郎)
NTT Software Innovation Center
TEL: 0422-59-4977
E-mail: mizuno.shintaro at lab.ntt.co.jp

From tbechtold at suse.com Tue Apr 3 10:10:43 2018
From: tbechtold at suse.com (Thomas Bechtold)
Date: Tue, 3 Apr 2018 12:10:43 +0200
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Solar" release
In-Reply-To: <20180330142643.ff3czxy35khmjakx@eukaryote>
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
Message-ID: <50df1384-0a2c-d832-8b7d-7d5f8877bf1b@suse.com>

Hey,

On 30.03.2018 16:26, Kashyap Chamarthy wrote:
[...]
> Taking the DistroSupportMatrix into picture, for the sake of discussion,
> how about the following NEXT_MIN versions for "Solar" release:
>
> (a) libvirt: 3.2.0 (released on 23-Feb-2017)
[...]
>
> (b) QEMU: 2.9.0 (released on 20-Apr-2017)
[...]

Works both for openSUSE and SLES.
Best, Tom From cdent+os at anticdent.org Tue Apr 3 10:48:27 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 3 Apr 2018 11:48:27 +0100 (BST) Subject: [Openstack-operators] nova-placement-api tuning In-Reply-To: References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> Message-ID: On Mon, 2 Apr 2018, Alex Schultz wrote: > So this is/was valid. A few years back there was some perf tests done > with various combinations of process/threads and for Keystone it was > determined that threads should be 1 while you should adjust the > process count (hence the bug). Now I guess the question is for every > service what is the optimal configuration but I'm not sure there's > anyone who's looking at this in the upstream for all the services. In > the puppet modules for consistency we applied a similar concept for > all the services when they are deployed under apache. It can be tuned > as needed for each service but I don't think we have any great > examples of perf numbers. It's really a YMMV thing. We ship a basic > default that isn't crazy, but it's probably not optimal either. Do you happen to recall if the trouble with keystone and threaded web servers had anything to do with eventlet? Support for the eventlet-based server was removed from keystone in Newton. I've been doing some experiments with placement using multiple uwsgi processes, each with multiple threads and it appears to be working very well. Ideally all the OpenStack HTTP-based services would be able to run effectively in that kind of setup. If they can't I'd like to help make it possible. In any case: processes 3, threads 1 for WSGIDaemonProcess for the placement service for a deployment of any real size errs on the side of too conservative and I hope we can make some adjustments there. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jaypipes at gmail.com Tue Apr 3 13:33:05 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 3 Apr 2018 09:33:05 -0400 Subject: [Openstack-operators] nova-placement-api tuning In-Reply-To: References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> Message-ID: <0188bbcd-6aee-3a5e-dd5b-b77ca7c5945e@gmail.com> On 04/03/2018 06:48 AM, Chris Dent wrote: > On Mon, 2 Apr 2018, Alex Schultz wrote: > >> So this is/was valid. A few years back there was some perf tests done >> with various combinations of process/threads and for Keystone it was >> determined that threads should be 1 while you should adjust the >> process count (hence the bug). Now I guess the question is for every >> service what is the optimal configuration but I'm not sure there's >> anyone who's looking at this in the upstream for all the services.  In >> the puppet modules for consistency we applied a similar concept for >> all the services when they are deployed under apache.  It can be tuned >> as needed for each service but I don't think we have any great >> examples of perf numbers. It's really a YMMV thing. We ship a basic >> default that isn't crazy, but it's probably not optimal either. > > Do you happen to recall if the trouble with keystone and threaded > web servers had anything to do with eventlet? Support for the > eventlet-based server was removed from keystone in Newton. IIRC, it had something to do with the way the keystoneauth middleware interacted with memcache... not sure if this is still valid any more though. Probably worth re-checking the performance. 
-jay

From aschultz at redhat.com Tue Apr 3 16:30:08 2018
From: aschultz at redhat.com (Alex Schultz)
Date: Tue, 3 Apr 2018 10:30:08 -0600
Subject: [Openstack-operators] nova-placement-api tuning
In-Reply-To:
References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com>
Message-ID:

On Tue, Apr 3, 2018 at 4:48 AM, Chris Dent wrote:
> On Mon, 2 Apr 2018, Alex Schultz wrote:
>
>> So this is/was valid. A few years back there was some perf tests done
>> with various combinations of process/threads and for Keystone it was
>> determined that threads should be 1 while you should adjust the
>> process count (hence the bug). Now I guess the question is for every
>> service what is the optimal configuration but I'm not sure there's
>> anyone who's looking at this in the upstream for all the services. In
>> the puppet modules for consistency we applied a similar concept for
>> all the services when they are deployed under apache. It can be tuned
>> as needed for each service but I don't think we have any great
>> examples of perf numbers. It's really a YMMV thing. We ship a basic
>> default that isn't crazy, but it's probably not optimal either.
>
> Do you happen to recall if the trouble with keystone and threaded
> web servers had anything to do with eventlet? Support for the
> eventlet-based server was removed from keystone in Newton.
>

It was running under httpd I believe.

> I've been doing some experiments with placement using multiple uwsgi
> processes, each with multiple threads and it appears to be working
> very well. Ideally all the OpenStack HTTP-based services would be
> able to run effectively in that kind of setup. If they can't I'd
> like to help make it possible.
>
> In any case: processes 3, threads 1 for WSGIDaemonProcess for the
> placement service for a deployment of any real size errs on the
> side of too conservative and I hope we can make some adjustments
> there.
>

You'd say that until you realize that the deployment may also be sharing every other service API running on the box. Imagine keystone, glance, nova, cinder, gnocchi, etc. all running on the same machine. Then 3 isn't so conservative. They start adding up and exhausting resources (CPU cores/memory) really quickly. In a perfect world, yes, each API service would get its own system with processes == processor count, but in most cases the processes end up getting split across the services running on the box. In puppet we did a sliding scale and have several facts[0] that can be used if a person doesn't want to switch to $::processorcount. If you're rolling your own you can tune it more easily, but when you have to come up with something that might be colocated with a bunch of other services, you have to hedge your bets to make sure it works most of the time.
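To make the sliding-scale idea concrete, here is a minimal Python sketch of that kind of worker heuristic. It is illustrative only; the divisor and bounds are assumptions made for the example, and the real logic lives in the os_workers fact linked at [0] just below:

    # Illustrative sliding-scale worker heuristic for co-located API
    # services. NOT the actual os_workers fact logic.
    import multiprocessing

    def api_workers(share=4, floor=2, ceiling=12):
        # Give each service a fraction of the cores instead of all of
        # them, then clamp to sane lower/upper bounds so that small and
        # large boxes both get something workable.
        cores = multiprocessing.cpu_count()
        return max(floor, min(cores // share, ceiling))

    print(api_workers())  # e.g. 2 on an 8-core box with the defaults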
Thanks, -Alex [0] http://git.openstack.org/cgit/openstack/puppet-openstacklib/tree/lib/facter/os_workers.rb > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From mvanwink at rackspace.com Tue Apr 3 16:43:10 2018 From: mvanwink at rackspace.com (Matt Van Winkle) Date: Tue, 3 Apr 2018 16:43:10 +0000 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: <993038ab-3eab-482b-1434-cd968a6e05d9@lab.ntt.co.jp> References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu> <5AC25CD9.60202@openstack.org> <993038ab-3eab-482b-1434-cd968a6e05d9@lab.ntt.co.jp> Message-ID: <9AD63655-80E4-4001-AB20-EAE4C54C171D@rackspace.com> Looks like we can move forward with co-location!. Jimmy, let us know when we need to work time in for you or other Foundation folks to discuss more details in the UC meeting and/or Ops Meetup Team meetings. Thanks! VW On 4/3/18, 3:49 AM, "Shintaro Mizuno" wrote: I'm also +1 on this. I've circulated to the Japanese Ops group and heard no objection so would be more +1s from our community. Shintaro -- Shintaro MIZUNO (水野伸太郎) NTT Software Innovation Center TEL: 0422-59-4977 E-mail: mizuno.shintaro at lab.ntt.co.jp _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From jimmy at openstack.org Tue Apr 3 16:50:33 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 03 Apr 2018 11:50:33 -0500 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: <9AD63655-80E4-4001-AB20-EAE4C54C171D@rackspace.com> References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu> <5AC25CD9.60202@openstack.org> <993038ab-3eab-482b-1434-cd968a6e05d9@lab.ntt.co.jp> <9AD63655-80E4-4001-AB20-EAE4C54C171D@rackspace.com> Message-ID: <5AC3B0D9.4020709@openstack.org> Thanks to everyone that weighed in! We'll be working on some updated language around the event to clarify the inclusion of the Ops community. We'll plan to float that to both operators and dev lists when we're a little further along. Meantime, if you have any questions or concerns, don't hesitate to reach out. Thanks all! Jimmy > Matt Van Winkle > April 3, 2018 at 11:43 AM > Looks like we can move forward with co-location!. Jimmy, let us know > when we need to work time in for you or other Foundation folks to > discuss more details in the UC meeting and/or Ops Meetup Team meetings. > > Thanks! > VW > > On 4/3/18, 3:49 AM, "Shintaro Mizuno" > wrote: > > I'm also +1 on this. > > I've circulated to the Japanese Ops group and heard no objection so > would be more +1s from our community. 
> > Shintaro
> --
> Shintaro MIZUNO (水野伸太郎)
> NTT Software Innovation Center
> TEL: 0422-59-4977
> E-mail: mizuno.shintaro at lab.ntt.co.jp
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> Shintaro Mizuno
> April 3, 2018 at 3:47 AM
> I'm also +1 on this.
>
> I've circulated to the Japanese Ops group and heard no objection so
> would be more +1s from our community.
>
> Shintaro
> Thierry Carrez
> April 3, 2018 at 3:33 AM
>
> As a data point, in a recent survey 89% of surveyed developers supported
> that the Ops meetup should happen at the same time and place. Amongst
> past PTG attendees, that support raises to 92%. Furthermore I only heard
> good things about the Public Cloud WG participating to the Dublin PTG.
>
> So I don't think anyone views it as "their party" -- just as an event
> where we all get stuff done.
>
> Erik McCormick
> April 2, 2018 at 3:57 PM
> I'm a +1 too as long as the devs at large are cool with it and won't
> hate on us for crashing their party. I also +1 the proposed format.
> It's basically what we're discussed in Tokyo. Make it so.
>
> Cheers
> Erik
>
> PS. Sorry for the radio silence the past couple weeks. Vacation,
> kids, etc.
>
> Melvin Hillsman
> April 2, 2018 at 12:53 PM
> +1
>
> --
> Kind regards,
>
> Melvin Hillsman
> mrhillsman at gmail.com
> mobile: (832) 264-2646

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mrexojo at gmail.com Wed Apr 4 07:15:42 2018
From: mrexojo at gmail.com (mrexojo)
Date: Wed, 04 Apr 2018 09:15:42 +0200
Subject: [Openstack-operators] [charms] openstack-base error with ntp charm
Message-ID: <09b0266e48209565428780dd169c637fa54f05a8.camel@gmail.com>

Hi all,

I'm a newbie with OpenStack juju deployment, and I have a problem with the ntp charm of openstack-base from openstack-charmers:

openstack-base #54 https://jujucharms.com/openstack-base/
ntp #24 https://jujucharms.com/ntp/

The process stops when the install tries to add a 5th unit of ntp, but I really have ntp/1, ntp/2, ntp/3 and ntp/4 for my four nodes where the lxc containers are running.

ceph-osd/0 waiting idle 0 172.16.117.116 Incomplete relation: monitor
ntp/4 error idle 172.16.117.116 hook failed: "ntp-peers-relation-joined" for ntp:ntp-peers

And the rest of the deployment is waiting...

Why does it need to deploy ntp on a node one more time? Is it an error, or are my deployment steps incorrect?

"juju deploy ntp"
"juju add-relation neutron-gateway ntp"
"juju add-relation ceph-osd ntp"

steps from the guide: https://docs.openstack.org/charm-deployment-guide/latest/install-openstack.html#deploy-openstack

Someone with juju experience?

Thanks
-- at mrexojo

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kchamart at redhat.com Wed Apr 4 08:45:07 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Wed, 4 Apr 2018 10:45:07 +0200
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release
In-Reply-To: <20180331140929.r5kj3qyrefvsovwf@eukaryote>
References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote>
Message-ID: <20180404084507.GA18076@paraplu>

On Sat, Mar 31, 2018 at 04:09:29PM +0200, Kashyap Chamarthy wrote:
> [Meta comment: corrected the email subject: "Solar" --> "Stein"]

Here's a change to get the discussion rolling:

https://review.openstack.org/#/c/558171/ -- [RFC] Pick next minimum libvirt / QEMU versions for "Stein"

> On Fri, Mar 30, 2018 at 04:26:43PM +0200, Kashyap Chamarthy wrote:

[...]

> > Taking the DistroSupportMatrix into picture, for the sake of discussion,
> > how about the following NEXT_MIN versions for "Solar" release:
> >
> > (a) libvirt: 3.2.0 (released on 23-Feb-2017)
> >
> > This satisfies most distributions, but will affect Debian "Stretch",
> > as they only have 3.0.0 in the stable branch -- I've checked their
> > repositories[3][4]. Although the latest update for the stable
> > release "Stretch (9.4)" was released only on 10-March-2018, I don't
> > think they increment libvirt and QEMU versions in stable. Is
> > there another way for "Stretch (9.4)" users to get the relevant
> > versions from elsewhere?

I've learned that there's Debian 'stretch-backports'[0], which might provide (but doesn't yet) a newer stable version.

> > (b) QEMU: 2.9.0 (released on 20-Apr-2017)
> >
> > This too satisfies most distributions but will affect Oracle Linux
> > -- which seem to ship QEMU 1.5.3 (released in August 2013) with
> > their "7", from the Wiki. And will also affect Debian "Stretch" --
> > as it only has 2.8.0
> >
> > Can folks chime in here?

Answering my own questions about Debian --

From looking at the Debian Archive[1][2], these are the versions for 'Stretch' (the current stable release) and in the upcoming 'Buster' release:

    libvirt | 3.0.0-4+deb9u2     | stretch
    libvirt | 4.1.0-2            | buster

    qemu | 1:2.8+dfsg-6+deb9u3   | stretch
    qemu | 1:2.11+dfsg-1         | buster

I also talked on the #debian-backports IRC channel on the OFTC network, where I asked:

"What I'm essentially looking for is: "How can 'stretch' users get libvirt 3.2.0 and QEMU 2.9.0, even if via a different repository. As they are proposed to be least common denominator versions across distributions."

And two people said: Then the versions from 'Buster' could be backported to 'stretch-backports'. The process for that is to: "ask the maintainer of those package and Cc to the backports mailing list."

Any takers?

[0] https://packages.debian.org/stretch-backports/
[1] https://qa.debian.org/madison.php?package=libvirt
[2] https://qa.debian.org/madison.php?package=qemu

--
/kashyap

From scheuran at linux.vnet.ibm.com Wed Apr 4 12:48:44 2018
From: scheuran at linux.vnet.ibm.com (Andreas Scheuring)
Date: Wed, 4 Apr 2018 14:48:44 +0200
Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Solar" release
In-Reply-To: <20180330142643.ff3czxy35khmjakx@eukaryote>
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
Message-ID: <718A1B88-EFAE-4474-A227-BA970E4C6C2B@linux.vnet.ibm.com>

An HTML attachment was scrubbed...
URL:

From ross at opentechinstitute.org Wed Apr 4 14:31:38 2018
From: ross at opentechinstitute.org (Ross Schulman)
Date: Wed, 4 Apr 2018 10:31:38 -0400
Subject: [Openstack-operators] [HELP REQUESTED] First deployment of OpenStack in existing enterprise network
Message-ID:

Hello deployers from a rank newbie.

My organization wants to try building an OpenStack deployment on premises, partly to move our existing ad hoc container services into a more controllable and uniform system and partly to provide a test environment for working with a few other organizations on cloud infrastructure experiments. For right now I have one server that's not being used anymore to play with. My plan is to get an OpenStack environment running on that server, move over some containers so I can reimage another server as a node, rinse and repeat. That first stage will eventually end up with 4 servers in the OpenStack deployment. We may expand further going forward. I've decided to use OpenStack Ansible to run the deployment.

In our server room we have an edge router that runs NAT and DHCP for the whole office and routes specific external IPs and ports to internal servers as necessary. The router also serves all of the LAN drops, wifi points, and phones in the office. I've claimed four VLANs and an external subnet of addresses.

Where I'm getting confused is how to integrate the OpenStack server into the rest of the network. The server I'm using is attached to a switch, along with a few other servers, and that switch is patched directly into our router. My specific stumbling block right now is what IP to tell the br-mgmt network to use as a gateway (here: https://github.com/openstack/openstack-ansible/blob/stable/queens/etc/network/interfaces.d/openstack_interface.cfg.prod.example#L75). I'm not sure if that's supposed to be a hardware router or something neutron is going to take care of later.

More generally speaking, any wise words about deploying in such an environment would also be welcome.

Thanks very much in advance if you made it this far,

Ross

--
Ross Schulman
Senior Counsel, Senior Policy Technologist,
New America's Open Technology Institute
ross at opentechinstitute.org
202-986-0427
PGP: 4D20 3824 9463 34C5 37EF FB0C 5A05 EB1F 5BBE 56EE

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From melwittt at gmail.com Wed Apr 4 19:04:01 2018
From: melwittt at gmail.com (melanie witt)
Date: Wed, 4 Apr 2018 12:04:01 -0700
Subject: [Openstack-operators] [openstack-dev] [nova] The createBackup API
In-Reply-To:
References:
Message-ID:

+openstack-operators

Operator feedback wanted: do you have users that pass rotation parameter '0' to the createBackup API in order to delete backups? Do you have users using the createBackup API in general?

On Fri, 30 Mar 2018 10:44:40 +0800, Alex Xu wrote:
> There is spec proposal to fix a bug of createBackup API with
> microversion. (https://review.openstack.org/#/c/511825/)
>
> When rotation parameter is '0', the createBackup API just do a snapshot,
> and then delete all the snapshots. That is meaningless behavier.

Agreed that '0' is meaningless in the context of 'createBackup' but as a side-effect, it allows users to purge old backups on-demand.
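For reference, a minimal sketch of the call being discussed, using python-novaclient's servers.backup(). The session object and server UUID here are placeholders, not details from the thread:

    # Hedged sketch of the createBackup server action via novaclient.
    from novaclient import client

    # 'sess' is assumed to be an existing keystoneauth1 session.
    nova = client.Client('2.1', session=sess)
    server = nova.servers.get('00000000-0000-0000-0000-000000000000')

    # Keep only the most recent 'daily' backup, deleting older ones:
    nova.servers.backup(server, 'nightly', 'daily', 1)

    # rotation=0: takes a snapshot, then deletes ALL 'daily' backups --
    # the side-effect users rely on to purge backups on demand.
    nova.servers.backup(server, 'purge', 'daily', 0)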
> > So the createBackup sounds like just using the createImage API to create > a snapshot, and upload the snapshot into the glance with index number in > the image name, and rotation the image in after each snapshot. > > So it should be something can be done by the client scrips to do same > thing with createImage API. > > We have two options here: > #1. fix the bug with a microversion. And we aren't sure any people > really use '0' in the real life. But we use microversion to fix that > bug, not sure it is worth. I think this is the key point -- are there users who have been using '0' to the createBackup API in order to delete backups? If so, then I would be inclined to go ahead and fix the issue in our API with a microversion (disallow '0' for createBackup and then add a deleteBackups server action). My rationale is that if people are actively using it, let's just fix it since it's nearly already there. The only problem with how it currently works is that '0' needlessly creates a backup that it will turn around and delete. The fix would be small and straightforward as it would just add schema validation for '0' on createBackup and then the new deleteBackups action would be an alias for deleting things (we already have the delete logic happening for '0'). > #2. deprecate the backup API with a microversion, leave the bug along. > Document that how the user can do that in the client script. > > Looking for your comments. If there isn't broader use of the API, then I'd be in favor of deprecating it. -melanie From jimmy at openstack.org Wed Apr 4 22:15:40 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 04 Apr 2018 17:15:40 -0500 Subject: [Openstack-operators] Asking for ask.openstack.org Message-ID: <5AC54E8C.4040003@openstack.org> Hi everyone! We have a very robust and vibrant community at ask.openstack.org . There are literally dozens of posts a day. However, many of them don't receive knowledgeable answers. I'm really worried about this becoming a vacuum where potential community members get frustrated and don't realize how to get more involved with the community. I'm looking for thoughts/ideas/feelings about this tool as well as potential admin volunteers to help us manage the constant influx of technical and not-so-technical questions around OpenStack. For those of you already contributing there, Thank You! For those that are interested in becoming a moderator (instant AUC status!) or have some additional ideas around fostering this community, please respond. Looking forward to your thoughts Thanks! Jimmy irc: jamesmcarthur -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at medberry.net Wed Apr 4 22:45:23 2018 From: openstack at medberry.net (David Medberry) Date: Wed, 4 Apr 2018 16:45:23 -0600 Subject: [Openstack-operators] Asking for ask.openstack.org In-Reply-To: <5AC54E8C.4040003@openstack.org> References: <5AC54E8C.4040003@openstack.org> Message-ID: Hi Jimmy, I tend to jump on things but only those that go to my Inbox and don't otherwise get filtered. I'll see if I'm sub'd to ask.o.o and if so, I'll put that into my Inbox instead of it going into one of my myriad google filtered folders. OTOH, if it is truly just a web site I need to manually monitor, I'll be candid and say it won't happen (as it won't.) Not sure if this idea helps or hinders others but maybe it will elicit some other personal workflow discussions or improvements. -dave medberry On Wed, Apr 4, 2018 at 4:15 PM, Jimmy McArthur wrote: > Hi everyone! 
> > We have a very robust and vibrant community at ask.openstack.org. There > are literally dozens of posts a day. However, many of them don't receive > knowledgeable answers. I'm really worried about this becoming a vacuum > where potential community members get frustrated and don't realize how to > get more involved with the community. > > I'm looking for thoughts/ideas/feelings about this tool as well as > potential admin volunteers to help us manage the constant influx of > technical and not-so-technical questions around OpenStack. > > For those of you already contributing there, Thank You! For those that > are interested in becoming a moderator (instant AUC status!) or have some > additional ideas around fostering this community, please respond. > > Looking forward to your thoughts > > Thanks! > Jimmy > irc: jamesmcarthur > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Thu Apr 5 13:39:08 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 05 Apr 2018 08:39:08 -0500 Subject: [Openstack-operators] [openstack-dev] Asking for ask.openstack.org In-Reply-To: <4b684314-bdba-8ead-6354-3984b7610705@redhat.com> References: <5AC542F4.2090205@openstack.org> <20180404223030.GA12345@localhost.localdomain> <4b684314-bdba-8ead-6354-3984b7610705@redhat.com> Message-ID: <5AC626FC.9030706@openstack.org> Ian, thanks for digging in and helping sort out some of these issues! > Ian Wienand > April 4, 2018 at 11:04 PM > > We've long had problems with this host and I've looked at it before > [1]. It often drops out. > > It seems there's enough interest we should dive a bit deeper. Here's > what I've found out: > > askbot > ------ > > Of the askbot site, it seems under control, except for an unbounded > session log file. Proposed [2] > > root at ask:/srv# du -hs * > 2.0G askbot-site > 579M dist > > overall > ------- > > The major consumer is /var; where we've got > > 3.9G log > 5.9G backups > 9.4G lib > > backups > ------- > > The backup seem under control at least; we're rotating them out and we > keep 10, and the size is pretty consistently 500mb: > > root at ask:/var/backups/pgsql_backups# ls -lh > total 5.9G > -rw-r--r-- 1 root root 599M Apr 5 00:03 askbotdb.sql.gz > -rw-r--r-- 1 root root 598M Apr 4 00:03 askbotdb.sql.gz.1 > ... > > We could reduce the backup rotations to just one if we like -- the > server is backed up nightly via bup, so at any point we can get > previous dumps from there. bup should de-duplicate everything, but > still, it's probably not necessary. > > The db directory was sitting at ~9gb > > root at ask:/var/lib/postgresql# du -hs > 8.9G . 
> > AFAICT, it seems like the autovacuum is running OK on the busy tables > > askbotdb=# select relname,last_vacuum, last_autovacuum, last_analyze, > last_autoanalyze from pg_stat_user_tables where last_autovacuum is not > NULL; > relname | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze > ------------------+-------------+-------------------------------+-------------------------------+------------------------------- > django_session | | 2018-04-02 17:29:48.329915+00 | 2018-04-05 > 02:18:39.300126+00 | 2018-04-05 00:11:23.456602+00 > askbot_badgedata | | 2018-04-04 07:19:21.357461+00 | | 2018-04-04 > 07:18:16.201376+00 > askbot_thread | | 2018-04-04 16:24:45.124492+00 | | 2018-04-04 > 20:32:25.845164+00 > auth_message | | 2018-04-04 12:29:24.273651+00 | 2018-04-05 > 02:18:07.633781+00 | 2018-04-04 21:26:38.178586+00 > djkombu_message | | 2018-04-05 02:11:50.186631+00 | | 2018-04-05 > 02:14:45.22926+00 > > Out of interest I did run a manual > > su - postgres -c "vacuumdb --all --full --analyze" > > We dropped something > > root at ask:/var/lib/postgresql# du -hs > 8.9G . > (after) > 5.8G > > I installed pg_activity and watched for a while; nothing seemed to be > really stressing it. > > Ergo, I'm not sure if there's much to do in the db layers. > > logs > ---- > > This leaves the logs > > 1.1G jetty > 2.9G apache2 > > The jetty logs are cleaned regularly. I think they could be made more > quiet, but they seem to be bounded. > > Apache logs are rotated but never cleaned up. Surely logs from 2015 > aren't useful. Proposed [3] > > Random offline > -------------- > > [3] is an example of a user reporting the site was offline. Looking > at the logs, it seems that puppet found httpd not running at 07:14 and > restarted it: > > Apr 4 07:14:40 ask puppet-user[20737]: > (Scope(Class[Postgresql::Server])) Passing "version" to > postgresql::server is deprecated; please use postgresql::globals instead. > Apr 4 07:14:42 ask puppet-user[20737]: Compiled catalog for > ask.openstack.org in environment production in 4.59 seconds > Apr 4 07:14:44 ask crontab[20987]: (root) LIST (root) > Apr 4 07:14:49 ask puppet-user[20737]: > (/Stage[main]/Httpd/Service[httpd]/ensure) ensure changed 'stopped' to > 'running' > Apr 4 07:14:54 ask puppet-user[20737]: Finished catalog run in 10.43 > seconds > > Which first explains why when I looked, it seemed OK. Checking the > apache logs we have: > > [Wed Apr 04 07:01:08.144746 2018] [:error] [pid 12491:tid > 140439253419776] [remote 176.233.126.142:43414] mod_wsgi (pid=12491): > Exception occurred processing WSGI script > '/srv/askbot-site/config/django.wsgi'. > [Wed Apr 04 07:01:08.144870 2018] [:error] [pid 12491:tid > 140439253419776] [remote 176.233.126.142:43414] IOError: failed to > write data > ... more until ... > [Wed Apr 04 07:15:58.270180 2018] [:error] [pid 17060:tid > 140439253419776] [remote 176.233.126.142:43414] mod_wsgi (pid=17060): > Exception occurred processing WSGI script > '/srv/askbot-site/config/django.wsgi'. > [Wed Apr 04 07:15:58.270303 2018] [:error] [pid 17060:tid > 140439253419776] [remote 176.233.126.142:43414] IOError: failed to > write data > > and the restart logged > > [Wed Apr 04 07:14:48.912626 2018] [core:warn] [pid 21247:tid > 140439370192768] AH00098: pid file /var/run/apache2/apache2.pid > overwritten -- Unclean shutdown of previous Apache run? 
> [Wed Apr 04 07:14:48.913548 2018] [mpm_event:notice] [pid 21247:tid > 140439370192768] AH00489: Apache/2.4.7 (Ubuntu) OpenSSL/1.0.1f > mod_wsgi/3.4 Python/2.7.6 configured -- resuming normal operations > [Wed Apr 04 07:14:48.913583 2018] [core:notice] [pid 21247:tid > 140439370192768] AH00094: Command line: '/usr/sbin/apache2' > [Wed Apr 04 14:59:55.408060 2018] [mpm_event:error] [pid 21247:tid > 140439370192768] AH00485: scoreboard is full, not at MaxRequestWorkers > > This does not appear to be disk-space related; see the cacti graphs > for that period that show the disk is full-ish, but not full [5]. > > What caused the I/O errors? dmesg has nothing in it since 30/Mar. > kern.log is empty. > > Server > ------ > > Most importantly, this sever wants a Xenial upgrade. At the very > least that apache is known to handle the "scoreboard is full" issue > better. > > We should ensure that we use a bigger instance; it's using up some > swap > > postgres at ask:~$ free -h > total used free shared buffers cached > Mem: 3.9G 3.6G 269M 136M 11M 819M > -/+ buffers/cache: 2.8G 1.1G > Swap: 3.8G 259M 3.6G > > tl;dr > ----- > > I don't think there's anything run-away bad going on, but the server > is undersized and needs a system update. > > Since I've got this far with it, over the next few days I'll see where > we are with the puppet for a Xenial upgrade and see if we can't get a > migration underway. > > Thanks, > > -i > > [1] https://review.openstack.org/406670 > [2] https://review.openstack.org/558977 > [3] https://review.openstack.org/558985 > [4] > http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-04-04.log.html#t2018-04-04T07:11:22 > [5] > http://cacti.openstack.org/cacti/graph.php?action=zoom&local_graph_id=2547&rra_id=0&view_type=tree&graph_start=1522859103&graph_end=1522879839 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Paul Belanger > April 4, 2018 at 5:30 PM > > We also have a 2nd issue where the ask.o.o server doesn't appear to be > large > enough any more to handle the traffic. A few times over the last few > weeks we've > had outages due to the HDD being full. > > We likely need to reduce the number of days we retain database backups > / http > logs or look to attach a volume to increase storage. > > Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 4, 2018 at 4:26 PM > Hi everyone! > > We have a very robust and vibrant community at ask.openstack.org > . There are literally dozens of posts a > day. However, many of them don't receive knowledgeable answers. I'm > really worried about this becoming a vacuum where potential community > members get frustrated and don't realize how to get more involved with > the community. > > I'm looking for thoughts/ideas/feelings about this tool as well as > potential admin volunteers to help us manage the constant influx of > technical and not-so-technical questions around OpenStack. > > For those of you already contributing there, Thank You! 
For those > that are interested in becoming a moderator (instant AUC status!) or > have some additional ideas around fostering this community, please > respond.
>
> Looking forward to your thoughts :)
>
> Thanks!
> Jimmy
> irc: jamesmcarthur
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zigo at debian.org Thu Apr 5 20:32:13 2018
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 5 Apr 2018 22:32:13 +0200
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release
In-Reply-To: <20180404084507.GA18076@paraplu>
References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu>
Message-ID:

On 04/04/2018 10:45 AM, Kashyap Chamarthy wrote:
> Answering my own questions about Debian --
>
> From looking at the Debian Archive[1][2], these are the versions for
> 'Stretch' (the current stable release) and in the upcoming 'Buster'
> release:
>
> libvirt | 3.0.0-4+deb9u2 | stretch
> libvirt | 4.1.0-2 | buster
>
> qemu | 1:2.8+dfsg-6+deb9u3 | stretch
> qemu | 1:2.11+dfsg-1 | buster
>
> I also talked on #debian-backports IRC channel on OFTC network, where I
> asked:
>
> "What I'm essentially looking for is: "How can 'stretch' users get
> libvirt 3.2.0 and QEMU 2.9.0, even if via a different repository.
> As they are proposed to be least common denominator versions across
> distributions."
>
> And two people said: Then the versions from 'Buster' could be backported
> to 'stretch-backports'. The process for that is to: "ask the maintainer
> of those package and Cc to the backports mailing list."
>
> Any takers?
>
> [0] https://packages.debian.org/stretch-backports/
> [1] https://qa.debian.org/madison.php?package=libvirt
> [2] https://qa.debian.org/madison.php?package=qemu

Hi Kashyap,

Thanks for considering Debian, asking me, and giving enough time to answer! Here are my thoughts. I updated the wiki page as you suggested [1]. As I wrote on IRC, we don't need to care about Jessie, so I removed Jessie and added Buster/SID.

tl;dr: just skip this section & go to conclusion

backport of libvirt/QEMU/libguestfs more in details
---------------------------------------------------

I already attempted the backports from Debian Buster to Stretch.

All of the 3 components (libvirt, qemu & libguestfs) could be built without extra dependencies, which is a very good thing.

- libvirt 4.1.0 compiled without issue, though the dh_install phase failed with this error:

dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried in "." and "debian/tmp")
dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
dh_install: missing files, aborting

Without more investigation than this build log, it's likely a minor fix in debian/*.install files will make it possible to backport the package.

- qemu 2.11 built perfectly with zero change.

- libguestfs 1.36.13 only needed to have fdisk replaced by util-linux as build-depends (fdisk is now a separate package in Buster).

So it looks easy to backport these 3 *AT THIS TIME*. [2]

However, without a crystal ball, nobody can tell how hard it will be to backport these *IN A YEAR FROM NOW*.
Conclusion:
-----------

If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0
is fine, please choose 3.0.0 as the minimum.

If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is
fine, please choose 2.8.0 as the minimum.

If you don't absolutely need new features from libguestfs 1.36 and 1.34
is fine, please choose 1.34 as the minimum.

If you do need these new features, I'll do my best to adapt. :)

About the Buster freeze & OpenStack Stein backports to Debian Stretch
---------------------------------------------------------------------

Now, about Buster. As you know, Debian doesn't have planned release
dates. Still, here are the stats showing that, roughly, there's a new
Debian every 2 years, and the freeze takes about 6 months.

https://wiki.debian.org/DebianReleases#Release_statistics

By this logic, and considering that Stretch was released last year in
June, Buster will probably start its freeze soon after Stein is
released. If the Debian freeze happens later, good for me: I'll have
more time to make Stein better. But then Debian users will probably
expect an OpenStack Stein backport to Debian Stretch, and that's where
it can become tricky to backport these 3 packages.

The end
-------

I hope the above isn't too long, and helps in making the best decision.

Cheers,

Thomas Goirand (zigo)

[1] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Distro_minimum_versions
[2] I'm not shouting, just highlighting the important part! :)

From mriedemos at gmail.com Thu Apr 5 23:11:26 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 5 Apr 2018 18:11:26 -0500
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt
	/ QEMU versions for "Stein" release
In-Reply-To: 
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
	<20180331140929.r5kj3qyrefvsovwf@eukaryote>
	<20180404084507.GA18076@paraplu>
Message-ID: <902f872d-5b4a-af99-1bc5-3fa2bfdf3fe3@gmail.com>

On 4/5/2018 3:32 PM, Thomas Goirand wrote:
> If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0
> is fine, please choose 3.0.0 as the minimum.
>
> If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is
> fine, please choose 2.8.0 as the minimum.
>
> If you don't absolutely need new features from libguestfs 1.36 and 1.34
> is fine, please choose 1.34 as the minimum.

New features in the libvirt driver which depend on minimum versions of
libvirt/qemu/libguestfs (or arch, for that matter) are always
conditional, so I think it's reasonable to go with the lower bound for
Debian. We can still support the features for the newer versions if
you're running a system with those versions, but not penalize people
with slightly older versions if not.

-- 

Thanks,

Matt

From kchamart at redhat.com Fri Apr 6 10:07:14 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Fri, 6 Apr 2018 12:07:14 +0200
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt
	/ QEMU versions for "Stein" release
In-Reply-To: 
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
	<20180331140929.r5kj3qyrefvsovwf@eukaryote>
	<20180404084507.GA18076@paraplu>
Message-ID: <20180406100714.GB18076@paraplu>

On Thu, Apr 05, 2018 at 10:32:13PM +0200, Thomas Goirand wrote:

Hey Zigo, thanks for the detailed response; a couple of comments below.

[...]

> The backport of libvirt/QEMU/libguestfs in more detail
> ------------------------------------------------------
>
> I already attempted the backports from Debian Buster to Stretch.
>
> All of the 3 components (libvirt, qemu & libguestfs) could be built
> without any extra dependency, which is a very good thing.
>
> - libvirt 4.1.0 compiled without issue, though the dh_install phase
> failed with this error:
>
> dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried
> in "." and "debian/tmp")
> dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
> dh_install: missing files, aborting

That seems like a problem in the Debian packaging system, not in
libvirt. I double-checked with the upstream folks, and the install
rules for the Wireshark plugin don't have /*/ in there.

> - qemu 2.11 built perfectly with zero changes.
>
> - libguestfs 1.36.13 only needed to have fdisk replaced by util-linux
> in its build-depends (fdisk is now a separate package in Buster).

Great.

Note: You don't even have to build the versions from 'Buster', which
are quite new. Just the slightly more conservative libvirt 3.2.0 and
QEMU 2.9.0 -- only if it's possible.

[...]

> Conclusion:
> -----------
>
> If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0
> is fine, please choose 3.0.0 as the minimum.
>
> If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is
> fine, please choose 2.8.0 as the minimum.
>
> If you don't absolutely need new features from libguestfs 1.36 and 1.34
> is fine, please choose 1.34 as the minimum.
>
> If you do need these new features, I'll do my best to adapt. :)

Sure, we can use 3.0.0 (& QEMU 2.8.0) instead of 3.2.0, as we don't
want to "penalize" (that was never the intention) distros with slightly
older versions.

That said ... I just spent some time comparing the release notes of
libvirt 3.0.0 and libvirt 3.2.0[1][2]. By using libvirt 3.2.0 and QEMU
2.9.0, Debian users would be spared from a lot of critical bugs (see
the full list in [3]) in the CPU comparison area.

[1] https://www.redhat.com/archives/libvirt-announce/2017-April/msg00000.html
    -- Release of libvirt-3.2.0
[2] https://www.redhat.com/archives/libvirt-announce/2017-January/msg00003.html
    -- Release of libvirt-3.0.0
[3] https://www.redhat.com/archives/libvir-list/2017-February/msg01295.html

[...]

-- 
/kashyap

From kchamart at redhat.com Fri Apr 6 12:08:49 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Fri, 6 Apr 2018 14:08:49 +0200
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt
	/ QEMU versions for "Stein" release
In-Reply-To: <902f872d-5b4a-af99-1bc5-3fa2bfdf3fe3@gmail.com>
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
	<20180331140929.r5kj3qyrefvsovwf@eukaryote>
	<20180404084507.GA18076@paraplu>
	<902f872d-5b4a-af99-1bc5-3fa2bfdf3fe3@gmail.com>
Message-ID: <20180406120849.GC18076@paraplu>

On Thu, Apr 05, 2018 at 06:11:26PM -0500, Matt Riedemann wrote:
> On 4/5/2018 3:32 PM, Thomas Goirand wrote:
> > If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0
> > is fine, please choose 3.0.0 as the minimum.
> >
> > If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is
> > fine, please choose 2.8.0 as the minimum.
> >
> > If you don't absolutely need new features from libguestfs 1.36 and 1.34
> > is fine, please choose 1.34 as the minimum.
>
> New features in the libvirt driver which depend on minimum versions of
> libvirt/qemu/libguestfs (or arch, for that matter) are always
> conditional, so I think it's reasonable to go with the lower bound for
> Debian. We can still support the features for the newer versions if
> you're running a system with those versions, but not penalize people
> with slightly older versions if not.
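(For reference, the conditional gating Matt describes looks roughly
like this in the libvirt driver -- a sketch with assumed names and
version tuples, modeled on the pattern in nova/virt/libvirt/driver.py,
not the exact code:

    MIN_LIBVIRT_VERSION = (3, 0, 0)  # hard floor; the driver refuses
    MIN_QEMU_VERSION = (2, 8, 0)     # to start below these versions

    def supports_newer_cpu_compare(host):
        # Newer-only features stay conditional at runtime, so a lower
        # floor doesn't disable them for users on newer versions.
        return host.has_min_version(lv_ver=(3, 2, 0), hv_ver=(2, 9, 0))

so lowering the minimum doesn't take anything away from users who run
newer libvirt/QEMU.)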
Yep, we can trivially set the lower bound to the versions in 'Stretch'.
The intention was never to "penalize" distributions w/ older versions.

I was just checking if Debian 'Stretch' users can be spared from the
myriad of CPU-modelling related issues (see my other reply for
specifics) that are all fixed with 3.2.0 (and QEMU 2.9.0) by default --
without spending inordinate amounts of time on messy backporting
procedures -- since all the other stable distributions are using those
versions.

I'll wait a day to hear from Zigo, then I'll just rewrite the patch[*]
to use what's currently in 'Stretch'.

[*] https://review.openstack.org/#/c/558171/

-- 
/kashyap

From zigo at debian.org Fri Apr 6 16:07:18 2018
From: zigo at debian.org (Thomas Goirand)
Date: Fri, 6 Apr 2018 18:07:18 +0200
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt
	/ QEMU versions for "Stein" release
In-Reply-To: <20180406100714.GB18076@paraplu>
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
	<20180331140929.r5kj3qyrefvsovwf@eukaryote>
	<20180404084507.GA18076@paraplu>
	<20180406100714.GB18076@paraplu>
Message-ID: <98bf44f7-331d-7239-6eec-971f8afd604d@debian.org>

On 04/06/2018 12:07 PM, Kashyap Chamarthy wrote:
>> dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried
>> in "." and "debian/tmp")
>> dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
>> dh_install: missing files, aborting
>
> That seems like a problem in the Debian packaging system, not in
> libvirt.

It sure is. As I wrote, it should be a minor packaging issue.

> I double-checked with the upstream folks, and the install
> rules for the Wireshark plugin don't have /*/ in there.

That part (i.e. the path with *) isn't a mistake; it's because Debian
has multiarch support, so, for example, we get paths like this (just a
random example from my laptop):

/usr/lib/i386-linux-gnu/pulseaudio
/usr/lib/x86_64-linux-gnu/pulseaudio

> Note: You don't even have to build the versions from 'Buster', which
> are quite new. Just the slightly more conservative libvirt 3.2.0 and
> QEMU 2.9.0 -- only if it's possible.

Actually, for *official* backports, the policy is to always update to
whatever is in testing until testing is frozen. I could maintain an
unofficial backport at stretch-stein.debian.net though.

> That said ... I just spent some time comparing the release notes of
> libvirt 3.0.0 and libvirt 3.2.0[1][2]. By using libvirt 3.2.0 and QEMU
> 2.9.0, Debian users would be spared from a lot of critical bugs (see
> the full list in [3]) in the CPU comparison area.
>
> [1] https://www.redhat.com/archives/libvirt-announce/2017-April/msg00000.html
>     -- Release of libvirt-3.2.0
> [2] https://www.redhat.com/archives/libvirt-announce/2017-January/msg00003.html
>     -- Release of libvirt-3.0.0
> [3] https://www.redhat.com/archives/libvir-list/2017-February/msg01295.html

So, because of these bugs, would you already advise Nova users to use
libvirt 3.2.0 for Queens?
Cheers,

Thomas Goirand (zigo)

From kchamart at redhat.com Fri Apr 6 17:07:03 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Fri, 6 Apr 2018 19:07:03 +0200
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt
	/ QEMU versions for "Stein" release
In-Reply-To: <98bf44f7-331d-7239-6eec-971f8afd604d@debian.org>
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
	<20180331140929.r5kj3qyrefvsovwf@eukaryote>
	<20180404084507.GA18076@paraplu>
	<20180406100714.GB18076@paraplu>
	<98bf44f7-331d-7239-6eec-971f8afd604d@debian.org>
Message-ID: <20180406170703.GD18076@paraplu>

On Fri, Apr 06, 2018 at 06:07:18PM +0200, Thomas Goirand wrote:
> On 04/06/2018 12:07 PM, Kashyap Chamarthy wrote:

[...]

> > Note: You don't even have to build the versions from 'Buster', which
> > are quite new. Just the slightly more conservative libvirt 3.2.0 and
> > QEMU 2.9.0 -- only if it's possible.
>
> Actually, for *official* backports, the policy is to always update to
> whatever is in testing until testing is frozen.

I see. Sure, that's fine, too (as "Queens" UCA also has it). Whatever
is efficient and least painful from a maintenance POV.

> I could maintain an unofficial backport at stretch-stein.debian.net
> though.
>
> > That said ... I just spent some time comparing the release notes of
> > libvirt 3.0.0 and libvirt 3.2.0[1][2]. By using libvirt 3.2.0 and QEMU
> > 2.9.0, Debian users would be spared from a lot of critical bugs (see
> > the full list in [3]) in the CPU comparison area.
>
> So, because of these bugs, would you already advise Nova users to use
> libvirt 3.2.0 for Queens?

FWIW, I'd suggest so, if it's not too much maintenance. It'll just
spare you additional bug reports in that area, and the overall default
experience when dealing with CPU models would be much better.

(Another way to look at it is, multiple other "conservative" long-term
stable distributions also provide libvirt 3.2.0 and QEMU 2.9.0, so that
should give you confidence.)

Again, I don't want to push too hard on this. If that'll be messy from
a package maintenance POV for you / Debian maintainers, then we could
settle with whatever is in 'Stretch'.

Thanks for looking into it.

-- 
/kashyap

From mriedemos at gmail.com Fri Apr 6 17:12:31 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 6 Apr 2018 12:12:31 -0500
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt
	/ QEMU versions for "Stein" release
In-Reply-To: <20180406170703.GD18076@paraplu>
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
	<20180331140929.r5kj3qyrefvsovwf@eukaryote>
	<20180404084507.GA18076@paraplu>
	<20180406100714.GB18076@paraplu>
	<98bf44f7-331d-7239-6eec-971f8afd604d@debian.org>
	<20180406170703.GD18076@paraplu>
Message-ID: <355fafcc-8d7c-67a2-88c0-2823a51296f8@gmail.com>

On 4/6/2018 12:07 PM, Kashyap Chamarthy wrote:
> FWIW, I'd suggest so, if it's not too much maintenance. It'll just
> spare you additional bug reports in that area, and the overall default
> experience when dealing with CPU models would be much better.
> (Another way to look at it is, multiple other "conservative" long-term
> stable distributions also provide libvirt 3.2.0 and QEMU 2.9.0, so that
> should give you confidence.)
>
> Again, I don't want to push too hard on this. If that'll be messy from
> a package maintenance POV for you / Debian maintainers, then we could
> settle with whatever is in 'Stretch'.

Keep in mind that Kashyap has a tendency to want the latest and
greatest of libvirt and qemu at all times, for all of those delicious
bug fixes. But we also know that new code brings new not-yet-fixed
bugs.

Keep in mind the big picture here: we're talking about bumping the
minimum required libvirt from 1.3.1 (in Rocky) to at least 3.0.0 (in
Stein), and qemu from 2.5.0 to at least 2.8.0, so I think that's
already covering some good ground. Let's not get greedy. :)

-- 

Thanks,

Matt

From durrani.anwar at gmail.com Mon Apr 9 09:53:58 2018
From: durrani.anwar at gmail.com (Anwar Durrani)
Date: Mon, 9 Apr 2018 15:23:58 +0530
Subject: [Openstack-operators] Nova resources are out of sync in ocata
	version
Message-ID: 

Hi All,

Nova resources are out of sync in our Ocata deployment: the values
shown on the dashboard don't match the actual running instances. I do
remember I had a script to auto-sync resources, but that script fails
in this case. Kindly help here.

-- 
Thanks & regards,
Anwar M. Durrani
+91-9923205011
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kchamart at redhat.com Mon Apr 9 09:58:58 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Mon, 9 Apr 2018 11:58:58 +0200
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt
	/ QEMU versions for "Stein" release
In-Reply-To: <355fafcc-8d7c-67a2-88c0-2823a51296f8@gmail.com>
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
	<20180331140929.r5kj3qyrefvsovwf@eukaryote>
	<20180404084507.GA18076@paraplu>
	<20180406100714.GB18076@paraplu>
	<98bf44f7-331d-7239-6eec-971f8afd604d@debian.org>
	<20180406170703.GD18076@paraplu>
	<355fafcc-8d7c-67a2-88c0-2823a51296f8@gmail.com>
Message-ID: <20180409095858.GE18076@paraplu>

On Fri, Apr 06, 2018 at 12:12:31PM -0500, Matt Riedemann wrote:
> On 4/6/2018 12:07 PM, Kashyap Chamarthy wrote:
> > FWIW, I'd suggest so, if it's not too much maintenance. It'll just
> > spare you additional bug reports in that area, and the overall default
> > experience when dealing with CPU models would be much better.
> >
> > (Another way to look at it is, multiple other "conservative" long-term
> > stable distributions also provide libvirt 3.2.0 and QEMU 2.9.0, so that
> > should give you confidence.)
> >
> > Again, I don't want to push too hard on this. If that'll be messy from
> > a package maintenance POV for you / Debian maintainers, then we could
> > settle with whatever is in 'Stretch'.
>
> Keep in mind that Kashyap has a tendency to want the latest and
> greatest of libvirt and qemu at all times, for all of those delicious
> bug fixes.

Keep in mind that Matt has a tendency to sometimes unfairly
over-simplify others' views ;-). More seriously, c'mon Matt; I went
out of my way to spend time learning about Debian's packaging
structure and trying to get the details right by talking to folks on
#debian-backports. And as you may have seen, I marked the patch[*] as
"RFC", and repeatedly said that I'm working on an agreeable lowest
common denominator.

> But we also know that new code brings new not-yet-fixed bugs.

Yep, of course.
> Keep in mind the big picture here: we're talking about bumping the
> minimum required libvirt from 1.3.1 (in Rocky) to at least 3.0.0 (in
> Stein), and qemu from 2.5.0 to at least 2.8.0, so I think that's
> already covering some good ground. Let's not get greedy. :)

Sure :-) Also, if there's a way we can avoid bugs in the default
experience with minimal effort, we should.

Anyway, there we go: changed the patch[*] to what's in Stretch.

[*] https://review.openstack.org/#/c/558171/

-- 
/kashyap

From zioproto at gmail.com Mon Apr 9 10:41:22 2018
From: zioproto at gmail.com (Saverio Proto)
Date: Mon, 9 Apr 2018 12:41:22 +0200
Subject: [Openstack-operators] Nova resources are out of sync in ocata
	version
In-Reply-To: 
References: 
Message-ID: 

Hello Anwar,

are you talking about this script?
https://github.com/openstack/osops-tools-contrib/blob/master/nova/nova-libvirt-compare.py

it does not work for you?

Saverio

2018-04-09 11:53 GMT+02:00 Anwar Durrani :
> Hi All,
>
> Nova resources are out of sync in our Ocata deployment: the values
> shown on the dashboard don't match the actual running instances. I do
> remember I had a script to auto-sync resources, but that script fails
> in this case. Kindly help here.
>
> --
> Thanks & regards,
> Anwar M. Durrani
> +91-9923205011
>
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

From adriant at catalyst.net.nz Mon Apr 9 10:43:47 2018
From: adriant at catalyst.net.nz (Adrian Turjak)
Date: Mon, 9 Apr 2018 22:43:47 +1200
Subject: [Openstack-operators] Anyone using Adjutant?
Message-ID: <136ba60b-10d6-db4f-6f68-9f29334abfa7@catalyst.net.nz>

Hello OpenStack Operators,

As the project lead for Adjutant, I wanted to reach out and double
check how many clouds have deployed it and are potentially running it.
I believe the number is fairly small, and most of those I've probably
been in contact with, but I wanted to also send this out in case anyone
has tried the service and is using it as well.

We're reaching out so we can hopefully work with you to preserve
backwards compatibility, or provide a safe migration path, as we move
forward with some large internal refactors. Until the service hits
v1.0.0 we're likely going to change quite a few things internally, but
the API and customer-facing features shouldn't change much. Our hope is
to work with anyone deploying it to stay on top of our changes and make
sure you're not hit by the potentially breaking changes we need to make
(policy support, config rework, async task processing).

We'll be tagging v0.4.0 soon, and it should work as it does right now
with no major changes to the service or config at present. From there
we'll start a more proper change log and keep detailed notes about what
deployers need to do as they upgrade. The work we're doing will make
certain elements of the service much easier to deploy, configure, and
add new features to, but it sadly requires breaking some existing
elements.

Catalyst Cloud is running it in production (fairly close to master most
of the time), so we will of course be careful to avoid any breaking
changes from a customer-facing perspective, and we'll document what
steps we've needed to take during upgrades.

Please reach out if you are using it, and we'll make sure to keep you
in the loop as we potentially have to break things.
:)

Cheers,
Adrian Turjak

From durrani.anwar at gmail.com Mon Apr 9 11:23:35 2018
From: durrani.anwar at gmail.com (Anwar Durrani)
Date: Mon, 9 Apr 2018 16:53:35 +0530
Subject: [Openstack-operators] Nova resources are out of sync in ocata
	version
In-Reply-To: 
References: 
Message-ID: 

No, this is a different one. Should I try this one? Will it work?

On Mon, Apr 9, 2018 at 4:11 PM, Saverio Proto  wrote:

> Hello Anwar,
>
> are you talking about this script?
> https://github.com/openstack/osops-tools-contrib/blob/master/nova/nova-libvirt-compare.py
>
> it does not work for you?
>
> Saverio
>
> 2018-04-09 11:53 GMT+02:00 Anwar Durrani :
> > Hi All,
> >
> > Nova resources are out of sync in our Ocata deployment: the values
> > shown on the dashboard don't match the actual running instances. I do
> > remember I had a script to auto-sync resources, but that script fails
> > in this case. Kindly help here.
> >
> > --
> > Thanks & regards,
> > Anwar M. Durrani
> > +91-9923205011
> >
> >
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >

-- 
Thanks & regards,
Anwar M. Durrani
+91-9923205011
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zioproto at gmail.com Mon Apr 9 11:37:50 2018
From: zioproto at gmail.com (Saverio Proto)
Date: Mon, 9 Apr 2018 13:37:50 +0200
Subject: [Openstack-operators] Nova resources are out of sync in ocata
	version
In-Reply-To: 
References: 
Message-ID: 

It works for me in Newton. Try it at your own risk :)

Cheers,

Saverio

2018-04-09 13:23 GMT+02:00 Anwar Durrani :
> No, this is a different one. Should I try this one? Will it work?
>
> On Mon, Apr 9, 2018 at 4:11 PM, Saverio Proto  wrote:
>>
>> Hello Anwar,
>>
>> are you talking about this script?
>> https://github.com/openstack/osops-tools-contrib/blob/master/nova/nova-libvirt-compare.py
>>
>> it does not work for you?
>>
>> Saverio
>>
>> 2018-04-09 11:53 GMT+02:00 Anwar Durrani :
>> > Hi All,
>> >
>> > Nova resources are out of sync in our Ocata deployment: the values
>> > shown on the dashboard don't match the actual running instances. I do
>> > remember I had a script to auto-sync resources, but that script fails
>> > in this case. Kindly help here.
>> >
>> > --
>> > Thanks & regards,
>> > Anwar M. Durrani
>> > +91-9923205011
>> >
>> >
>> >
>> > _______________________________________________
>> > OpenStack-operators mailing list
>> > OpenStack-operators at lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> >
>
>
>
> --
> Thanks & regards,
> Anwar M. Durrani
> +91-9923205011
>
>

From thierry at openstack.org Mon Apr 9 12:02:02 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 9 Apr 2018 14:02:02 +0200
Subject: [Openstack-operators] Vancouver Forum - Post your selected
	topics now
Message-ID: <7af5f78e-2a3f-dacb-77ef-ebe171d74361@openstack.org>

Hi everyone,

You've been actively brainstorming topic ideas for discussion at the
"Forum" at the Vancouver OpenStack Summit. Now it's time to select
which ones you want to propose, and file them at
forumtopics.openstack.org !

The topic submission website will be open until EOD on Sunday, April
15, at which point the Forum selection committee will take the entries
and make the final selection. So you have the whole week to enter your
selection of ideas on the website.

Thanks !
-- 
Thierry Carrez (ttx)

From mrhillsman at gmail.com Mon Apr 9 14:17:13 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Mon, 9 Apr 2018 09:17:13 -0500
Subject: [Openstack-operators] UC Meeting Reminder - 4/9 @ 1800UTC
Message-ID: 

Hi everyone,

Friendly reminder we have a UC meeting today in #openstack-uc on
freenode at 18:00UTC

Agenda:
https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Meeting_Agenda.2FPrevious_Meeting_Logs

-- 
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mrhillsman at gmail.com Mon Apr 9 18:04:48 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Mon, 9 Apr 2018 13:04:48 -0500
Subject: [Openstack-operators] UC Meeting Reminder - 4/9 @ 1800UTC
In-Reply-To: 
References: 
Message-ID: 

UC Meeting started :)

On Mon, Apr 9, 2018 at 9:17 AM, Melvin Hillsman  wrote:

> Hi everyone,
>
> Friendly reminder we have a UC meeting today in #openstack-uc on
> freenode at 18:00UTC
>
> Agenda:
> https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Meeting_Agenda.2FPrevious_Meeting_Logs
>
>
> --
> Kind regards,
>
> Melvin Hillsman
> mrhillsman at gmail.com
> mobile: (832) 264-2646
>

-- 
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com Mon Apr 9 21:24:06 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Mon, 9 Apr 2018 16:24:06 -0500
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt
	/ QEMU versions for "Stein" release
In-Reply-To: <20180409095858.GE18076@paraplu>
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
	<20180331140929.r5kj3qyrefvsovwf@eukaryote>
	<20180404084507.GA18076@paraplu>
	<20180406100714.GB18076@paraplu>
	<98bf44f7-331d-7239-6eec-971f8afd604d@debian.org>
	<20180406170703.GD18076@paraplu>
	<355fafcc-8d7c-67a2-88c0-2823a51296f8@gmail.com>
	<20180409095858.GE18076@paraplu>
Message-ID: <4a1a2732-cc78-eb4a-8517-f43a8f99c779@gmail.com>

On 4/9/2018 4:58 AM, Kashyap Chamarthy wrote:
> Keep in mind that Matt has a tendency to sometimes unfairly
> over-simplify others' views ;-). More seriously, c'mon Matt; I went
> out of my way to spend time learning about Debian's packaging
> structure and trying to get the details right by talking to folks on
> #debian-backports. And as you may have seen, I marked the patch[*] as
> "RFC", and repeatedly said that I'm working on an agreeable lowest
> common denominator.

Sorry Kashyap, I didn't mean to offend. I was hoping "delicious bugs"
would have made that obvious, but I can see how it wasn't. You've done
a great, thorough job on sorting this all out.

Since I didn't know what "RFC" meant until googling it today, how about
dropping that from the patch so I can +2 it?
-- 

Thanks,

Matt

From kchamart at redhat.com Tue Apr 10 09:17:39 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Tue, 10 Apr 2018 11:17:39 +0200
Subject: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt
	/ QEMU versions for "Stein" release
In-Reply-To: <4a1a2732-cc78-eb4a-8517-f43a8f99c779@gmail.com>
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
	<20180331140929.r5kj3qyrefvsovwf@eukaryote>
	<20180404084507.GA18076@paraplu>
	<20180406100714.GB18076@paraplu>
	<98bf44f7-331d-7239-6eec-971f8afd604d@debian.org>
	<20180406170703.GD18076@paraplu>
	<355fafcc-8d7c-67a2-88c0-2823a51296f8@gmail.com>
	<20180409095858.GE18076@paraplu>
	<4a1a2732-cc78-eb4a-8517-f43a8f99c779@gmail.com>
Message-ID: <20180410091739.GF18076@paraplu>

On Mon, Apr 09, 2018 at 04:24:06PM -0500, Matt Riedemann wrote:
> On 4/9/2018 4:58 AM, Kashyap Chamarthy wrote:
> > Keep in mind that Matt has a tendency to sometimes unfairly
> > over-simplify others' views ;-). More seriously, c'mon Matt; I went
> > out of my way to spend time learning about Debian's packaging
> > structure and trying to get the details right by talking to folks on
> > #debian-backports. And as you may have seen, I marked the patch[*] as
> > "RFC", and repeatedly said that I'm working on an agreeable lowest
> > common denominator.
>
> Sorry Kashyap, I didn't mean to offend. I was hoping "delicious bugs"
> would have made that obvious, but I can see how it wasn't. You've done
> a great, thorough job on sorting this all out.

No problem at all. I know your communication style well enough not to
take offence :-). Thanks for the words!

> Since I didn't know what "RFC" meant until googling it today, how
> about dropping that from the patch so I can +2 it?

Sure, I meant to remove it on my last iteration; now dropped it. (As
you noted on the review, I should've used '-Workflow', but I typed
"RFC" out of muscle memory.) Thanks for the review.

* * *

Aside: On the other patch[+] that actually bumps the versions for
"Rocky" and fixes the resulting unit test fallout, I intend to fix the
rest of the failing tests sometime this week.

Remaining tests to be fixed:

    test_live_migration_update_serial_console_xml
    test_live_migration_with_valid_target_connect_addr
    test_live_migration_raises_exception
    test_virtuozzo_min_version_ok
    test_min_version_ppc_ok
    test_live_migration_update_graphics_xml
    test_min_version_s390_ok

[+] https://review.openstack.org/#/c/558783/
    -- libvirt: Bump MIN_{LIBVIRT,QEMU}_VERSION for "Rocky"

-- 
/kashyap

From emccormick at cirrusseven.com Tue Apr 10 15:07:34 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Tue, 10 Apr 2018 11:07:34 -0400
Subject: [Openstack-operators] Ops Session Proposals for Vancouver Forum
Message-ID: 

Greetings Ops,

We are rapidly approaching the deadline for Forum session proposals
(this coming Sunday, 4/15), and we have been rather lax in getting the
process started from our side. I've created an etherpad here for
everyone to put up session ideas.

https://etherpad.openstack.org/p/YYZ-forum-ops-brainstorming

Given the late date, please post your session ideas ASAP, and +1 those
that you have interest in. Also, if you are willing to moderate the
session, put your name on it as you'll see on the examples already
there.

Moderating is easy, gets you a pretty little speaker sticker on your
badge, and lets you go get your badge at the Speaker pickup line. It's
also a good way to get AUC status and get more involved in the
community. It's fairly painless, and only a few of us bite :).
I'd like to wrap this up by Friday so we can weed out duplicates from
others' proposals and get them into the topic submission system with a
little time to spare. It would be helpful to have moderators submit
their own so everything credits properly. The submission system is at
http://forumtopics.openstack.org/. If you don't already have an
account and would like to moderate, go set one up.

I'm looking forward to seeing lots of you in Vancouver!

Cheers,
Erik

From jon at csail.mit.edu Tue Apr 10 15:19:55 2018
From: jon at csail.mit.edu (Jonathan Proulx)
Date: Tue, 10 Apr 2018 11:19:55 -0400
Subject: [Openstack-operators] Ops Session Proposals for Vancouver Forum
In-Reply-To: 
References: 
Message-ID: <20180410151955.joigyljkjpyjaboa@csail.mit.edu>

Thanks for getting this kicked off, Erik. The two things you have up
to start (fast forward upgrades, and extended maintenance) are the
exact two things I want out of my trip to YVR -- at least
understanding the current 'state of the art' and helping advance it in
the right directions as best I can.

Thanks,
-Jon

On Tue, Apr 10, 2018 at 11:07:34AM -0400, Erik McCormick wrote:
:Greetings Ops,
:
:We are rapidly approaching the deadline for Forum session proposals
:(this coming Sunday, 4/15), and we have been rather lax in getting the
:process started from our side. I've created an etherpad here for
:everyone to put up session ideas.
:
:https://etherpad.openstack.org/p/YYZ-forum-ops-brainstorming
:
:Given the late date, please post your session ideas ASAP, and +1 those
:that you have interest in. Also, if you are willing to moderate the
:session, put your name on it as you'll see on the examples already
:there.
:
:Moderating is easy, gets you a pretty little speaker sticker on your
:badge, and lets you go get your badge at the Speaker pickup line. It's
:also a good way to get AUC status and get more involved in the
:community. It's fairly painless, and only a few of us bite :).
:
:I'd like to wrap this up by Friday so we can weed out duplicates from
:others' proposals and get them into the topic submission system with a
:little time to spare. It would be helpful to have moderators submit
:their own so everything credits properly. The submission system is at
:http://forumtopics.openstack.org/. If you don't already have an
:account and would like to moderate, go set one up.
:
:I'm looking forward to seeing lots of you in Vancouver!
:
:Cheers,
:Erik
:
:_______________________________________________
:OpenStack-operators mailing list
:OpenStack-operators at lists.openstack.org
:http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-- 

From emccormick at cirrusseven.com Tue Apr 10 15:31:55 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Tue, 10 Apr 2018 11:31:55 -0400
Subject: [Openstack-operators] Ops Session Proposals for Vancouver Forum
In-Reply-To: <20180410151955.joigyljkjpyjaboa@csail.mit.edu>
References: 
	<20180410151955.joigyljkjpyjaboa@csail.mit.edu>
Message-ID: 

On Tue, Apr 10, 2018 at 11:19 AM, Jonathan Proulx  wrote:
>
> Thanks for getting this kicked off, Erik. The two things you have up
> to start (fast forward upgrades, and extended maintenance) are the
> exact two things I want out of my trip to YVR -- at least
> understanding the current 'state of the art' and helping advance it in
> the right directions as best I can.
>

I see Tony Breeds has posted the Extended Maintenance session already,
which is awesome. I'm actually considering a Part I and Part II for
FFU, as we had a packed house and not nearly enough time in SYD.
I'm just not sure how to structure it or break it down.

> Thanks,
> -Jon
>
> On Tue, Apr 10, 2018 at 11:07:34AM -0400, Erik McCormick wrote:
> :Greetings Ops,
> :
> :We are rapidly approaching the deadline for Forum session proposals
> :(this coming Sunday, 4/15), and we have been rather lax in getting the
> :process started from our side. I've created an etherpad here for
> :everyone to put up session ideas.
> :
> :https://etherpad.openstack.org/p/YYZ-forum-ops-brainstorming
> :
> :Given the late date, please post your session ideas ASAP, and +1 those
> :that you have interest in. Also, if you are willing to moderate the
> :session, put your name on it as you'll see on the examples already
> :there.
> :
> :Moderating is easy, gets you a pretty little speaker sticker on your
> :badge, and lets you go get your badge at the Speaker pickup line. It's
> :also a good way to get AUC status and get more involved in the
> :community. It's fairly painless, and only a few of us bite :).
> :
> :I'd like to wrap this up by Friday so we can weed out duplicates from
> :others' proposals and get them into the topic submission system with a
> :little time to spare. It would be helpful to have moderators submit
> :their own so everything credits properly. The submission system is at
> :http://forumtopics.openstack.org/. If you don't already have an
> :account and would like to moderate, go set one up.
> :
> :I'm looking forward to seeing lots of you in Vancouver!
> :
> :Cheers,
> :Erik
> :
> :_______________________________________________
> :OpenStack-operators mailing list
> :OpenStack-operators at lists.openstack.org
> :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> --

From jon at csail.mit.edu Tue Apr 10 15:41:12 2018
From: jon at csail.mit.edu (Jonathan Proulx)
Date: Tue, 10 Apr 2018 11:41:12 -0400
Subject: [Openstack-operators] Ops Session Proposals for Vancouver Forum
In-Reply-To: 
References: 
	<20180410151955.joigyljkjpyjaboa@csail.mit.edu>
	
Message-ID: <20180410154112.risqihel6f5frznr@csail.mit.edu>

On Tue, Apr 10, 2018 at 11:31:55AM -0400, Erik McCormick wrote:
:On Tue, Apr 10, 2018 at 11:19 AM, Jonathan Proulx  wrote:
:>
:> Thanks for getting this kicked off, Erik. The two things you have up
:> to start (fast forward upgrades, and extended maintenance) are the
:> exact two things I want out of my trip to YVR -- at least
:> understanding the current 'state of the art' and helping advance it in
:> the right directions as best I can.
:>
:
:I see Tony Breeds has posted the Extended Maintenance session already,
:which is awesome. I'm actually considering a Part I and Part II for
:FFU, as we had a packed house and not nearly enough time in SYD. I'm
:just not sure how to structure it or break it down.

Off the top of my head: perhaps a current-state session focusing on
what can be done with existing releases, and a forward-looking session
on how to improve that experience?

My $0.02,
-Jon

From openstack at medberry.net Tue Apr 10 17:19:50 2018
From: openstack at medberry.net (David Medberry)
Date: Tue, 10 Apr 2018 11:19:50 -0600
Subject: [Openstack-operators] Ops Session Proposals for Vancouver Forum
In-Reply-To: 
References: 
Message-ID: 

Dropped in 2¢ worth.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gregorio.corral at gmail.com Tue Apr 10 17:40:53 2018
From: gregorio.corral at gmail.com (Gregorio Corral)
Date: Tue, 10 Apr 2018 19:40:53 +0200
Subject: [Openstack-operators] [freezer][horizon] try restore instance
	from local storage backup
Message-ID: 

Hi,

When I try to restore an instance from a local storage backup through
the Horizon freezer plugin (.../horizon/disaster_recovery/backups/),
the form for setting up the restore operation appears, and the
"Destination Path" field is a path on the host where the freezer agent
runs (Hostname). The result of the operation is that the files from
the backup are extracted to the destination path.

This command line seems to do the same:

freezer-agent --action restore --overwrite \
  --nova-inst-id af8a5e8f-062b-408e-a7f6-d808db005b99 \
  --restore-from-date "2018-03-26T18:58:04" \
  --container /var/lib/backup-localstorage \
  --restore-abs-path /var/gong \
  --logfile /tmp/kk.log --backup-name gong

But this is not what I want to do. I want to restore the running nova
instance with the content of the backup. How can I do that?

Thanks in advance.

P.S.: I run Pike and freezer-agent 5.0.1

freezer-agent --version
5.0.1

-- 
--------------------------------------------------
"I can no longer waste time doing things I don't feel like doing"
La Grande Bellezza
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jimmy at openstack.org Tue Apr 10 19:18:34 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Tue, 10 Apr 2018 14:18:34 -0500
Subject: [Openstack-operators] How are you handling billing/chargeback?
In-Reply-To: <3E58AE40-309A-493E-A9E2-17897E119B3D@cern.ch>
References: <20180312192113.znz4eavfze5zg7yn@redhat.com>
	<20180314161143.2w6skkpmyhvixmyj@redhat.com>
	<3E58AE40-309A-493E-A9E2-17897E119B3D@cern.ch>
Message-ID: <5ACD0E0A.7080400@openstack.org>

Hi all -

The good folks at SuperUser are intrigued by this topic and interested
in writing a feature. I've compiled the feedback so far in this
etherpad. If there are any final thoughts, please feel free to add to
it. If you have questions about the nature of the article or would
like to be contacted about the subject matter, please reach out to
Allison Price or Nicole Martinelli.

Thanks!
Jimmy

> Tim Bell
> March 14, 2018 at 12:39 PM
> We're using a combination of cASO
> (https://caso.readthedocs.io/en/stable/) and some low-level libvirt
> fabric monitoring. The showback accounting reports are generated by
> merging with other compute/storage usage across various systems
> (HTCondor, SLURM, ...)
>
> It would seem that those who needed solutions in the past found they
> had to do them themselves. It would be interesting if there are
> references of usage data/accounting/chargeback at scale with the
> current project set, but doing the re-evaluation would be an effort
> which would need to be balanced versus just keeping the local solution
> working.
>
> Tim
>
> -----Original Message-----
> From: Lars Kellogg-Stedman
> Date: Wednesday, 14 March 2018 at 17:15
> To: openstack-operators
> Subject: Re: [Openstack-operators] How are you handling
> billing/chargeback?
>
> On Mon, Mar 12, 2018 at 03:21:13PM -0400, Lars Kellogg-Stedman wrote:
> > I'm curious what folks out there are using for chargeback/billing in
> > your OpenStack environment.
>
> So far it looks like everyone is using a homegrown solution. Is
> anyone using an existing product/project?
>
> --
> Lars Kellogg-Stedman | larsks @ {irc,twitter,github}
> http://blog.oddbit.com/ |
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> Lars Kellogg-Stedman
> March 14, 2018 at 11:11 AM
>
> So far it looks like everyone is using a homegrown solution. Is
> anyone using an existing product/project?
>
> Lars Kellogg-Stedman
> March 12, 2018 at 2:21 PM
> Hey folks,
>
> I'm curious what folks out there are using for chargeback/billing in
> your OpenStack environment.
>
> Are you doing any sort of chargeback (or showback)? Are you using (or
> have you tried) CloudKitty? Or some other existing project? Have you
> rolled your own instead?
>
> I ask because I am helping some folks get a handle on the operational
> side of their existing OpenStack environment, and they are interested
> in some sort of reporting mechanism but have not yet deployed one.
>
> Thanks,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mihalis68 at gmail.com Tue Apr 10 19:33:21 2018
From: mihalis68 at gmail.com (Chris Morgan)
Date: Tue, 10 Apr 2018 15:33:21 -0400
Subject: [Openstack-operators] Ops Session Proposals for Vancouver Forum
In-Reply-To: 
References: 
Message-ID: 

I've submitted a couple of session ideas that relate to things we've
been discussing in the past that still need more work - the meetups
team, and the community documents on the wiki.

Chris

On Tue, Apr 10, 2018 at 1:19 PM, David Medberry  wrote:

> Dropped in 2¢ worth.
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

-- 
Chris Morgan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lars at redhat.com Tue Apr 10 20:14:32 2018
From: lars at redhat.com (Lars Kellogg-Stedman)
Date: Tue, 10 Apr 2018 16:14:32 -0400
Subject: [Openstack-operators] How are you handling billing/chargeback?
In-Reply-To: <5ACD0E0A.7080400@openstack.org>
References: <20180312192113.znz4eavfze5zg7yn@redhat.com>
	<20180314161143.2w6skkpmyhvixmyj@redhat.com>
	<3E58AE40-309A-493E-A9E2-17897E119B3D@cern.ch>
	<5ACD0E0A.7080400@openstack.org>
Message-ID: <20180410201432.htbi5xwod32wzf5v@redhat.com>

On Tue, Apr 10, 2018 at 02:18:34PM -0500, Jimmy McArthur wrote:
> The good folks at SuperUser are intrigued
> by this topic and interested in writing a feature.

That would be an interesting read, especially if they're able to
incorporate information or commentary beyond what was in this thread.

-- 
Lars Kellogg-Stedman | larsks @ {irc,twitter,github}
http://blog.oddbit.com/ |

From tobias at citynetwork.se Wed Apr 11 11:26:08 2018
From: tobias at citynetwork.se (Tobias Rydberg)
Date: Wed, 11 Apr 2018 13:26:08 +0200
Subject: [Openstack-operators] [publiccloud-wg] Reminder and agenda for
	tomorrow's meeting
Message-ID: 

Hi everyone,

Time for a new meeting for the Public Cloud WG. Forum sessions for
Vancouver are the priority of this meeting; it would be nice to see as
many of you there as possible.

Agenda can be found at https://etherpad.openstack.org/p/publiccloud-wg

Feel free to add items to the agenda!
See you all tomorrow at 1400 UTC in #openstack-publiccloud

Cheers,
Tobias

-- 
Tobias Rydberg
Senior Developer
Mobile: +46 733 312780
www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3945 bytes
Desc: S/MIME Cryptographic Signature
URL: 

From rico.lin.guanyu at gmail.com Wed Apr 11 12:09:53 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Wed, 11 Apr 2018 20:09:53 +0800
Subject: [Openstack-operators] [openstack-dev] [Elections][TC] Announcing
	Rico Lin candidacy for TC
Message-ID: 

Forwarding to Openstack-operators

---------- Forwarded message ----------
From: Rico Lin
Date: 2018-04-11 20:02 GMT+08:00
Subject: [openstack-dev] [Elections][TC] Announcing Rico Lin candidacy for TC
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>

Dear all,

I'd like to announce my candidacy for a seat on the OpenStack Technical
Committee.

I'm Rico Lin, employed by EasyStack, a full-time OpenStacker. I have
been in this community since 2014, and have been deeply involved with
technical contributions [1], mostly around the Orchestration service,
which allows me to work on integrating and managing resources
cross-project. I have also served as PTL for three cycles, which has
taught me how we can join users' and operators' experiences and
requirements with our development workflow and technical decision
processes.

Here are my major goals with this seat on the TC:

- Application: We've updated our resolution with [3], saying that we
care about what applications need on top of OpenStack. As jobs in a
few projects already take on the role of thinking about what
applications need, we should help by setting up community goals,
making resolutions, or defining which top-priority applications (this
can be a short-term definition) we need to focus on, deriving action
items/guidelines and finding weaknesses, so others from the community
can follow (for those who agree with the goals but have no idea how
they can help, IMO this will be a good thing).

- Cooperate with Users, Operators, and Developers: We have been losing
some communication across Users, Operators, and Developers. It's never
a good thing when users can share use cases, ops can share
experiences, and developers can share code, but none of it makes it
across to the others unless a user provides developers themselves. In
this case, work like StoryBoard should be our first priority. We need
a more solid way to get user feedback to developers, so we can
actually learn what's working or not for each feature. And maybe it's
worth considering strengthening the communication between the TC and
the UC (User Committee).

- Diversity: The math is easy. [2] shows we got around one-third of
users from Asia (with 75% of those users in China). Also IIRC, around
the same percentage of developers. But we got 0 on the TC. The actual
work is hard. We need to forward our technical guidelines to
developers in Asia and provide chances to get more feedback from them,
so we can provide better technical resolutions that tie developers
together. I think I'm a good candidate for this.

- Reach out for new blood: With cloud getting more mature,
it's normal that cloud developers need to work in multiple
communities, and they might come and go (mostly based on their job
definition from their enterprise), so we need more new developers.
Most important is to provide more chances for them to stay. I know
many newly joined developers struggle with finding ways to fit into
each project. We need ways to shorten their onboarding time, so they
can do good work while they're in our community.

- Paying the debt: Our community has done a great job of changing our
resolutions and guidelines to adopt new trends and keep ourselves
sharp. The TC tries really hard to migrate our path and do the magic.
IMO, we need more effort on some specific jobs (like cross-project
work for application infrastructure, or the StoryBoard migration). I'd
like to keep that going and close our technical debts, so we can have
room for the new.

Thank you for your consideration.

Best Regards,

Rico Lin (ricolin)

[1] http://stackalytics.com/?release=all&user_id=rico-lin&metric=person-day
[2] https://www.openstack.org/assets/survey/OpenStack-User-Survey-Nov17.pdf
[3] https://review.openstack.org/#/c/447031/5/resolutions/20170317-cloud-applications-mission.rst

-- 
May The Force of OpenStack Be With You,

Rico Lin
irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mikal at stillhq.com Wed Apr 11 22:09:44 2018
From: mikal at stillhq.com (Michael Still)
Date: Thu, 12 Apr 2018 08:09:44 +1000
Subject: [Openstack-operators] [Nova][Deployers] Optional, platform
	specific, dependencies in requirements.txt
Message-ID: 

Hi,

https://review.openstack.org/#/c/523387 proposes adding a z/VM-specific
dependency to nova's requirements.txt. When I objected, the
counter-argument was that we already have examples of Windows-specific
dependencies (os-win) and PowerVM-specific dependencies in that file.

I think perhaps all three are a mistake and should be removed.

My recollection is that for drivers like ironic, which may not be
deployed by everyone, we have the dependency documented, and then
loaded at runtime by the driver itself instead of adding it to
requirements.txt. This is to stop pip from auto-installing the
dependency for anyone who wants to run nova. I had assumed this was at
the request of the deployer community.

So what do we do with z/VM? Do we clean this up? Or do we now allow
dependencies that are only useful to a very small number of
deployments into requirements.txt?

Michael
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jimmy at openstack.org Thu Apr 12 14:25:50 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Thu, 12 Apr 2018 09:25:50 -0500
Subject: [Openstack-operators] Forum Submissions Reminder + Vancouver Info
Message-ID: <5ACF6C6E.6070705@openstack.org>

Hello!

A quick reminder that the Vancouver Forum Submission deadline is this
coming Sunday, April 15th.

Submission Process
Please proceed to http://forumtopics.openstack.org/ to submit your
topics.

What is the Forum?
If you'd like more details about the Forum, go to
https://wiki.openstack.org/wiki/Forum

Where do I register for the Summit in Vancouver?
https://www.eventbrite.com/e/openstack-summit-may-2018-vancouver-tickets-40845826968?aff=YVRSummit2018

Now get a hotel room for up to 55% off the standard Vancouver rates
https://www.openstack.org/summit/vancouver-2018/travel/

Thanks and we look forward to seeing you all in Vancouver!

Cheers,
Jimmy
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emccormick at cirrusseven.com Thu Apr 12 19:13:56 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Thu, 12 Apr 2018 15:13:56 -0400
Subject: [Openstack-operators] Help finding old (Mitaka) RDO RPMs
Message-ID: 

Hi All,

Does anyone happen to have an archive of the Mitaka RDO repo lying
around they'd be willing to share with a poor unfortunate soul? My
clone of it has gone AWOL and I have moderately desperate need of it.

Thanks!

Cheers,
Erik

From iain.macdonnell at oracle.com Thu Apr 12 19:18:36 2018
From: iain.macdonnell at oracle.com (iain MacDonnell)
Date: Thu, 12 Apr 2018 12:18:36 -0700
Subject: [Openstack-operators] Help finding old (Mitaka) RDO RPMs
In-Reply-To: 
References: 
Message-ID: <22330ce6-ad08-6bf4-e0f7-3ea68c218e0a@oracle.com>

On 04/12/2018 12:13 PM, Erik McCormick wrote:
> Does anyone happen to have an archive of the Mitaka RDO repo lying
> around they'd be willing to share with a poor unfortunate soul? My
> clone of it has gone AWOL and I have moderately desperate need of it.

https://buildlogs.centos.org/centos/7/cloud/x86_64/openstack-mitaka/
maybe?

~iain

From amy at demarco.com Thu Apr 12 19:20:42 2018
From: amy at demarco.com (Amy Marrich)
Date: Thu, 12 Apr 2018 14:20:42 -0500
Subject: [Openstack-operators] Help finding old (Mitaka) RDO RPMs
In-Reply-To: 
References: 
Message-ID: 

Erik,

Here's the Mitaka archive :)

http://vault.centos.org/7.3.1611/cloud/x86_64/openstack-mitaka/

Amy (spotz)

On Thu, Apr 12, 2018 at 2:13 PM, Erik McCormick  wrote:

> Hi All,
>
> Does anyone happen to have an archive of the Mitaka RDO repo lying
> around they'd be willing to share with a poor unfortunate soul? My
> clone of it has gone AWOL and I have moderately desperate need of it.
>
> Thanks!
>
> Cheers,
> Erik
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emccormick at cirrusseven.com Thu Apr 12 19:23:42 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Thu, 12 Apr 2018 15:23:42 -0400
Subject: [Openstack-operators] Help finding old (Mitaka) RDO RPMs
In-Reply-To: 
References: 
	
Message-ID: 

Thanks! You're my heroes :)

On Thu, Apr 12, 2018 at 3:20 PM, Amy Marrich  wrote:
> Erik,
>
> Here's the Mitaka archive :)
>
> http://vault.centos.org/7.3.1611/cloud/x86_64/openstack-mitaka/
>
> Amy (spotz)
>
> On Thu, Apr 12, 2018 at 2:13 PM, Erik McCormick
>  wrote:
>>
>> Hi All,
>>
>> Does anyone happen to have an archive of the Mitaka RDO repo lying
>> around they'd be willing to share with a poor unfortunate soul? My
>> clone of it has gone AWOL and I have moderately desperate need of it.
>>
>> Thanks!
>>
>> Cheers,
>> Erik
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>

From allison at openstack.org Thu Apr 12 21:27:34 2018
From: allison at openstack.org (Allison Price)
Date: Thu, 12 Apr 2018 16:27:34 -0500
Subject: [Openstack-operators] Save $500 on OpenStack Summit Vancouver
	Hotel + Ticket
Message-ID: 

Hi everyone,

For a limited time, you can now purchase a discounted package including
a Vancouver Summit ticket and hotel stay at the beautiful Pan Pacific
Hotel for savings of more than $500 USD!
This discount runs until April 25, pending availability - book your
ticket & hotel room now for maximum savings:

4-night stay at the Pan Pacific Hotel & Weeklong Vancouver Summit
Pass: $1,859 USD—$500 in savings per person

5-night stay at the Pan Pacific Hotel & Weeklong Vancouver Summit
Pass: $2,149 USD—$550 in savings per person

REGISTER HERE

After you've registered, we will book your hotel room for you and
follow up with your confirmed hotel information in early May.

Please email summit at openstack.org if you have any questions.

Cheers,
Allison

Allison Price
OpenStack Foundation
allison at openstack.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From luo.lujin at jp.fujitsu.com Fri Apr 13 09:16:11 2018
From: luo.lujin at jp.fujitsu.com (Luo, Lujin)
Date: Fri, 13 Apr 2018 09:16:11 +0000
Subject: [Openstack-operators] [sig][upgrades] Upgrade SIG IRC meeting poll
Message-ID: 

Hello everyone,

Sorry for keeping you waiting!

Since we have launched the Upgrade SIG [1], we are now happy to invite
everyone who is interested to take a vote so that we can find a good
time for our regular IRC meetings. Please kindly look at the weekdays
in the poll only, not the actual dates.

Odd weeks: https://doodle.com/poll/q8qr9iza9kmwax2z
Even weeks: https://doodle.com/poll/ude4rmacmbp4k5xg

We expect to alternate meeting times between odd and even weeks to
cover different time zones.

We'd love it if people could vote before Apr. 22nd.

Best,
Lujin

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128426.html

From RKlimenko at itkey.com Fri Apr 13 14:49:29 2018
From: RKlimenko at itkey.com (Klimenko, Roman)
Date: Fri, 13 Apr 2018 14:49:29 +0000
Subject: [Openstack-operators] Problem with 'Image is unacceptable: Image
	has no associated data'
Message-ID: 

Hi everyone!

I'm trying to get OpenStack-Ansible Pike working. I deployed an
environment with a config similar to the example prod config.

Now I'm experiencing trouble with instance boot. I get this error:

BuildAbortException: Build of instance
0a308c78-9d16-4808-9f8f-0805f319f1e3 aborted: Image
64e4c7d9-392f-4f4f-9959-e32f141c93ef is unacceptable: Image has no
associated data

I have tried different official qcow cloud images - cirros, ubuntu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From melwittt at gmail.com Fri Apr 13 15:00:31 2018
From: melwittt at gmail.com (melanie witt)
Date: Fri, 13 Apr 2018 08:00:31 -0700
Subject: [Openstack-operators] [nova] Rocky forum topics brainstorming
In-Reply-To: <0037fa0a-aa31-1744-b050-783e8be81138@gmail.com>
References: <0037fa0a-aa31-1744-b050-783e8be81138@gmail.com>
Message-ID: <4de18eaf-0b28-62aa-2935-a18d5ada160c@gmail.com>

+openstack-operators (apologies that I forgot to add originally)

On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote:
> Hey everyone,
>
> Let's collect forum topic brainstorming ideas for the Forum sessions in
> Vancouver in this etherpad [0]. Once we've brainstormed, we'll select
> and submit our topic proposals for consideration at the end of this
> week. The deadline for submissions is Sunday April 15.
>
> Thanks,
> -melanie
>
> [0] https://etherpad.openstack.org/p/YVR-nova-brainstorming

Just a reminder that we're collecting forum topic ideas to propose for
Vancouver, and input from operators is especially important. Please add
your topics and/or comments to the etherpad [0] and we'll submit
proposals before the Sunday deadline.
Thanks all, -melanie
From adriant at catalyst.net.nz Tue Apr 17 06:10:25 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Tue, 17 Apr 2018 18:10:25 +1200 Subject: [Openstack-operators] [all] How to handle python3 only projects Message-ID: <8c07821c-3546-7c6e-8288-e42f4847e36d@catalyst.net.nz> Hello devs, The python27 clock of doom ticks closer to zero (https://pythonclock.org/) and officially dropping python27 support is going to have to happen eventually, though that is a bigger topic. Before we get there outright, what we should think about is what place python3-only projects have in OpenStack alongside ones that support both. Given that python27's life is nearing the end, we should probably support a project either transitioning to python3 only, or new projects that are python3 only. Not to mention the potential inclusion of python3-only libraries in global-requirements. Potentially we should even encourage python3-only projects, and encourage deployers and distro providers to focus on python3 only (do we?). Python3-only projects are now a reality, python3-only libraries are now a reality, and most of OpenStack already supports python3. Major libraries are dropping python27 support in newer versions, and we should think about how we want to do it too. So where do projects that want to stop supporting python27 fit in the OpenStack ecosystem? Or given the impending end of python27, why should new projects be required to support it at all, or should we heavily encourage new projects to be python3 only (if not require it)? It's not an easy topic, and there are likely lots of opinions on the matter, but it's something to start considering. Cheers! - Adrian Turjak
From jean-philippe at evrard.me Tue Apr 17 15:04:07 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 17 Apr 2018 16:04:07 +0100 Subject: [Openstack-operators] Fwd: [openstack-ansible] We need to change! In-Reply-To: References: Message-ID: Sorry for the cross-posting, but I think this could interest operators too. Dear community, Starting at the end of this month, I won't be able to work full time on OpenStack-Ansible anymore. I want to highlight the following: Our current way of working is not sustainable in the long run, as a lot of work (and therefore pressure) is concentrated on a few individuals. I managed to get more people working on some parts of our code (becoming cores on specific areas of knowledge, like mbuil on networking, mnaser and gokhan on telemetry, johnsom on octavia, mugsie on designate), but at the same time we have lost a core reviewer on all our code base (mhayden). I like the fact we are still innovating with our own deployment tooling, bringing more features in, changing the deployment models to be always more stable, more user-friendly. But new features aren't everything. We need people actively looking at the quality of existing deliverables. We need to stretch those responsibilities across more people. I would be very happy if some people using OpenStack-Ansible would help on: * Bugs. We are reaching an all-time high number of pending bugs. We need people actively cleaning those up. We need someone to organize a bug smash. We need people willing to lead the bug triage process too. * Releases. Our current release process is manual. People interested in how releases are handled should step in there (for example, what goes in, and at what time). We also need to coordinate with the releases team, and improve our way to release. * Jobs/state monitoring.
I have spent an insane amount of time cleaning up after other people. That cannot be done any longer. If you're breaking a job, whether it's part of the openstack-ansible gates or not, you should be fixing it. Even if it's a non-voting job, or a periodic job. I'd like everyone to monitor our zuul dashboard, and take action based on that. When queens was close to release, every job was green on the zuul dashboard. I did an experiment of 1 month without me fixing the upgrade jobs, and guess what: ALL (or almost ALL) the upgrade jobs are now broken. Please monitor [1] and actively help fix the jobs. Remember, if everyone works on this, it would give great feedback to new users, and it becomes a virtuous cycle. * Reduce technical debt. We have so many variables, so many remnants of the past. This cycle is planned to be a cleanup. Let's simplify all of this, making sure the deployment of openstack with openstack-ansible ends up with a system that is KISS. * Increasing voting test coverage. We need more code paths tested, and we need those code paths to prevent bad patches from merging. It makes the reduction of technical debt easier. Thank you very much for your understanding. Best regards, Jean-Philippe (evrardjp) [1]: http://zuul.openstack.org/builds.html?pipeline=periodic&project=openstack%2Fopenstack-ansible
From alifshit at redhat.com Wed Apr 18 15:17:23 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Wed, 18 Apr 2018 11:17:23 -0400 Subject: [Openstack-operators] [nova] Default scheduler filters survey Message-ID: Hi all, A CI issue [1] caused by tempest thinking some filters are enabled when they're really not, and a proposed patch [2] to add (Same|Different)HostFilter to the default filters as a workaround, has led to a discussion about what filters should be enabled by default in nova. The default filters should make sense for a majority of real world deployments. Adding some filters to the defaults because CI needs them is faulty logic, because the needs of CI are different to the needs of operators/users, and the latter takes priority (though it's my understanding that a good chunk of operators run tempest on their clouds post-deployment as a way to validate that the cloud is working properly, so maybe CI's and users' needs aren't that different after all). To that end, we'd like to know what filters operators are enabling in their deployment. If you can, please reply to this email with your [filter_scheduler]/enabled_filters (or [DEFAULT]/scheduler_default_filters if you're using an older version) option from nova.conf. Any other comments are welcome as well :) Cheers! [1] https://bugs.launchpad.net/tempest/+bug/1628443 [2] https://review.openstack.org/#/c/561651/
From melwittt at gmail.com Wed Apr 18 16:04:26 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 18 Apr 2018 09:04:26 -0700 Subject: [Openstack-operators] [nova] Rocky forum topics brainstorming In-Reply-To: <4de18eaf-0b28-62aa-2935-a18d5ada160c@gmail.com> References: <0037fa0a-aa31-1744-b050-783e8be81138@gmail.com> <4de18eaf-0b28-62aa-2935-a18d5ada160c@gmail.com> Message-ID: <97076e49-80fb-9889-8123-b141413f73b7@gmail.com> On Fri, 13 Apr 2018 08:00:31 -0700, Melanie Witt wrote: > +openstack-operators (apologies that I forgot to add originally) > > On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote: >> Hey everyone, >> >> Let's collect forum topic brainstorming ideas for the Forum sessions in >> Vancouver in this etherpad [0].
Once we've brainstormed, we'll select >> and submit our topic proposals for consideration at the end of this >> week. The deadline for submissions is Sunday April 15. >> >> Thanks, >> -melanie >> >> [0] https://etherpad.openstack.org/p/YVR-nova-brainstorming > > Just a reminder that we're collecting forum topic ideas to propose for > Vancouver and input from operators is especially important. Please add > your topics and/or comments to the etherpad [0] and we'll submit > proposals before the Sunday deadline. Here's a list of nova-related sessions that have been proposed: * CellsV2 migration process sync with operators: http://forumtopics.openstack.org/cfp/details/125 * nova/neutron + ops cross-project session: http://forumtopics.openstack.org/cfp/details/124 * Planning to use Placement in Cinder: http://forumtopics.openstack.org/cfp/details/89 * Building the path to extracting Placement from Nova: http://forumtopics.openstack.org/cfp/details/88 * Multi-attach introduction and future direction: http://forumtopics.openstack.org/cfp/details/101 * Making NFV features easier to use: http://forumtopics.openstack.org/cfp/details/146 A list of all proposed forum topics can be seen here: http://forumtopics.openstack.org Cheers, -melanie From mriedemos at gmail.com Wed Apr 18 16:41:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Apr 2018 11:41:03 -0500 Subject: [Openstack-operators] [nova] Concern about trusted certificates API change Message-ID: <617ae5d9-25c3-d4d0-1d1a-4e8d7602ccea@gmail.com> There is a compute REST API change proposed [1] which will allow users to pass trusted certificate IDs to be used with validation of images when creating or rebuilding a server. The trusted cert IDs are based on certificates stored in some key manager, e.g. Barbican. The full nova spec is here [2]. The main concern I have is that trusted certs will not be supported for volume-backed instances, and some clouds only support volume-backed instances. The way the patch is written is that if the user attempts to boot from volume with trusted certs, it will fail. In thinking about a semi-discoverable/configurable solution, I'm thinking we should add a policy rule around trusted certs to indicate if they can be used or not. Beyond the boot from volume issue, the only virt driver that supports trusted cert image validation is the libvirt driver, so any cloud that's not using the libvirt driver simply cannot support this feature, regardless of boot from volume. We have added similar policy rules in the past for backend-dependent features like volume extend and volume multi-attach, so I don't think this is a new issue. Alternatively we can block the change in nova until it supports boot from volume, but that would mean needing to add trusted cert image validation support into cinder along with API changes, effectively killing the chance of this getting done in nova in Rocky, and this blueprint has been around since at least Ocata so it would be good to make progress if possible. 
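For illustration, the kind of policy rule being floated here might look something like the following line in nova's policy.json. The rule name is hypothetical, used only to make the sketch concrete; the real name would be whatever the change eventually lands with:

    "os_compute_api:servers:create:trusted_certs": "rule:admin_or_owner"

A deployment that cannot honor trusted certs at all (say, a non-libvirt virt driver, or a cloud that only supports volume-backed instances) could then set the rule to "!", which oslo.policy treats as "never allowed", so such requests fail cleanly at the API layer.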
[1] https://review.openstack.org/#/c/486204/ [2] https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/nova-validate-certificates.html -- Thanks, Matt From mriedemos at gmail.com Wed Apr 18 17:11:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Apr 2018 12:11:58 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] Concern about trusted certificates API change In-Reply-To: References: <617ae5d9-25c3-d4d0-1d1a-4e8d7602ccea@gmail.com> Message-ID: <1f88c15c-b75a-cc34-0299-3b47f78ba794@gmail.com> On 4/18/2018 11:57 AM, Jay Pipes wrote: >> There is a compute REST API change proposed [1] which will allow users >> to pass trusted certificate IDs to be used with validation of images >> when creating or rebuilding a server. The trusted cert IDs are based >> on certificates stored in some key manager, e.g. Barbican. >> >> The full nova spec is here [2]. >> >> The main concern I have is that trusted certs will not be supported >> for volume-backed instances, and some clouds only support >> volume-backed instances. > > Yes. And some clouds only support VMWare vCenter virt driver. And some > only support Hyper-V. I don't believe we should delay adding good > functionality to (large percentage of) clouds because it doesn't yet > work with one virt driver or one piece of (badly-designed) functionality. Maybe it wasn't clear but I'm not advocating that we block the change until volume-backed instances are supported with trusted certs. I'm suggesting we add a policy rule which allows deployers to at least disable it via policy if it's not supported for their cloud. > > The way the patch is written is that if the user attempts to >> boot from volume with trusted certs, it will fail. > > And... I think that's perfectly fine. I agree. I'm the one that noticed the issue and pointed out in the code review that we should explicitly fail the request if we can't honor it. > >> In thinking about a semi-discoverable/configurable solution, I'm >> thinking we should add a policy rule around trusted certs to indicate >> if they can be used or not. Beyond the boot from volume issue, the >> only virt driver that supports trusted cert image validation is the >> libvirt driver, so any cloud that's not using the libvirt driver >> simply cannot support this feature, regardless of boot from volume. We >> have added similar policy rules in the past for backend-dependent >> features like volume extend and volume multi-attach, so I don't think >> this is a new issue. >> >> Alternatively we can block the change in nova until it supports boot >> from volume, but that would mean needing to add trusted cert image >> validation support into cinder along with API changes, effectively >> killing the chance of this getting done in nova in Rocky, and this >> blueprint has been around since at least Ocata so it would be good to >> make progress if possible. > > As mentioned above, I don't want to derail progress until (if ever?) > trusted certs achieves this magical > works-for-every-driver-and-functionality state. It's not realistic to > expect this to be done, IMHO, and just keeps good functionality out of > the hands of many cloud users. Again, I'm not advocating that we block until boot from volume is supported. 
However, we have a lot of technical debt for "good functionality" added over the years that failed to consider volume-backed instances, like rebuild, rescue, backup, etc and it's painful to deal with that after the fact, as can be seen from the various specs proposed for adding that support to those APIs. -- Thanks, Matt From dms at danplanet.com Wed Apr 18 18:17:00 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 18 Apr 2018 11:17:00 -0700 Subject: [Openstack-operators] [openstack-dev] [nova] Concern about trusted certificates API change In-Reply-To: <1f88c15c-b75a-cc34-0299-3b47f78ba794@gmail.com> (Matt Riedemann's message of "Wed, 18 Apr 2018 12:11:58 -0500") References: <617ae5d9-25c3-d4d0-1d1a-4e8d7602ccea@gmail.com> <1f88c15c-b75a-cc34-0299-3b47f78ba794@gmail.com> Message-ID: > Maybe it wasn't clear but I'm not advocating that we block the change > until volume-backed instances are supported with trusted certs. I'm > suggesting we add a policy rule which allows deployers to at least > disable it via policy if it's not supported for their cloud. That's fine with me, and provides an out for another issue I pointed out on the code review. Basically, the operator has no way to disable this feature. If they haven't set this up properly and have no desire to, a user reading the API spec and passing trusted certs will not be able to boot an instance and not really understand why. > I agree. I'm the one that noticed the issue and pointed out in the > code review that we should explicitly fail the request if we can't > honor it. I agree for the moment for sure, but it would obviously be nice not to open another gap we're not going to close. There's no reason this can't be supported for volume-backed instances, it just requires some help from cinder. I would think that it'd be nice if we could declare the "can't do this for reasons" response as a valid one regardless of the cause so we don't need another microversion for the future where volume-backed instances can do this. > Again, I'm not advocating that we block until boot from volume is > supported. However, we have a lot of technical debt for "good > functionality" added over the years that failed to consider > volume-backed instances, like rebuild, rescue, backup, etc and it's > painful to deal with that after the fact, as can be seen from the > various specs proposed for adding that support to those APIs. Totes agree. --Dan From simon.leinen at switch.ch Wed Apr 18 20:20:45 2018 From: simon.leinen at switch.ch (Simon Leinen) Date: Wed, 18 Apr 2018 22:20:45 +0200 Subject: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: (Artom Lifshitz's message of "Wed, 18 Apr 2018 11:17:23 -0400") References: Message-ID: Artom Lifshitz writes: > To that end, we'd like to know what filters operators are enabling in > their deployment. If you can, please reply to this email with your > [filter_scheduler]/enabled_filters (or > [DEFAULT]/scheduler_default_filters if you're using an older version) > option from nova.conf. 
Any other comments are welcome as well :) We have the following enabled on our semi-public (academic community) cloud, which runs on Newton: AggregateInstanceExtraSpecsFilter AvailabilityZoneFilter ComputeCapabilitiesFilter ComputeFilter ImagePropertiesFilter PciPassthroughFilter RamFilter RetryFilter ServerGroupAffinityFilter ServerGroupAntiAffinityFilter (sorted alphabetically) Recently we've also been trying AggregateImagePropertiesIsolation ...but it looks like we'll replace it with our own because it's a bit awkward to use for our purpose (scheduling Windows instance to licensed compute nodes). -- Simon. From kendall at openstack.org Thu Apr 19 17:13:57 2018 From: kendall at openstack.org (Kendall Waters) Date: Thu, 19 Apr 2018 12:13:57 -0500 Subject: [Openstack-operators] Project Teams Gathering- Denver September 10-14th Message-ID: All aboard! Next stop Denver! The fourth Project Teams Gathering [1] will be held September 10-14th back at the Renaissance Stapleton Hotel [2] in Denver, Colorado (3801 Quebec Street, Denver, Colorado 80207). The Project Teams Gathering (PTG) is an event organized by the OpenStack Foundation. It provides meeting facilities allowing the various technical community groups working with OpenStack (operators, development teams, user workgroups, SIGs) to meet in-person, exchange and get work done in a productive setting. As you may have heard, this time around the Ops Meetup will be co-located with the Denver PTG. We're excited to have these two communities under one roof. Registration, travel support program, and the discounted hotel block are now live! REGISTRATION AND HOTEL Registration is now available here: https://denver2018ptg.eventbrite.com Ticket prices for this PTG will be tiered, and are significantly subsidized to help cover part of the overall event cost: Early Bird: USD $199 (Deadline May 11 at 6:59 UTC) Regular: USD $399 (Deadline August 23 at 6:59 UTC) Late/Onsite: USD $599 We've reserved a very limited block of discounted hotel rooms at $149/night USD (does not include breakfast) with the Renaissance Denver Stapleton Hotel where the event will be held. Please move quickly to reserve a room with 2 queen beds[3] or 1 king bed[4] by August 20th or until they sell out! TRAIN NEAR HOTEL You may be curious about the train noise situation around the hotel. This was due to an unsafe crossing requiring human flaggers and trains signalling using horns. After a meeting held in February of 2018, the Director for the RTD project stated that “The gate crossings are complete, operational and safe, and we feel that it’s appropriate at this time to remove the requirements to have grade crossing attendants at those crossings,” Regulatory approvals for the A, B and G commuter rail lines have a contracted deadline of June 2nd, 2018 to be approved by Federal Railroad Administration Commissioners. Also worth noting, right after we left the PTG last September, the hotel installed sound reduction windows throughout the property which should help with an overall quality of stay for guests. USA VISA APPLICATIONS Please note: Due to recent delays in the visa system, please allow as much time as possible for the application process if a visa is required in order to travel to the United States. We normally recommend applying no later than 60 days prior to the event. If you are unsure whether you require a visa or not, please visit this page [5] to see if your country is a part of the Visa Waiver Program. 
If it is not one of the countries listed, you will need to obtain a Visa to enter the U.S. To supplement your Visa application, we can also provide you with a Visa Invitation Letter on official OpenStack Foundation letterhead. Requests for invitation letters may be submitted here [6] and must be received by Friday, August 24, 2018. TRAVEL SUPPORT PROGRAM The OpenStack Travel Support Program's aim is to facilitate participation of key contributors to the OpenStack Project Teams Gathering (PTG), covering the costs of travel, accommodation, and an event pass. Please fill out this form [7] to apply; the application deadline for the first round of sponsorships is July 1st. If you are interested in donating to the Travel Support Program, you can do so on the Eventbrite page [8]. SPONSORSHIP The PTGs are critical to the OpenStack release cycle and community, and sponsorship of these events is a public demonstration of your commitment to the continued growth and success of OpenStack. Since this is a working event and we strive to maintain a distraction-free environment so teams can stay productive, we have created sponsorship packages that are community focused so that all sponsors receive prominent recognition for their ongoing support of OpenStack without impacting productivity. If your organization is interested in sponsoring the Stein PTG in Denver, please review the sponsorship prospectus and contract here, and send any questions to ptg at openstack.org. Feel free to reach out to me directly with any questions, looking forward to seeing everyone in Denver! Cheers, Kendall Kendall Waters OpenStack Marketing kendall at openstack.org [1] www.openstack.org/ptg [2] http://www.marriott.com/hotels/travel/densa-renaissance-denver-stapleton-hotel/ [3] http://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=Project%20Team%20Gathering%20Two%20Queen%20Beds%5Edensa%60opnopnb%60149.00%60USD%60false%604%609/5/18%609/18/18%608/20/18&app=resvlink&stop_mobi=yes [4] http://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=Project%20Teams%20Gathering%20King%20Bed%5Edensa%60opnopna%60149.00%60USD%60false%604%609/5/18%609/18/18%608/20/18&app=resvlink&stop_mobi=yes [5] https://www.dhs.gov/visa-waiver-program-requirements [6] https://openstackfoundation.formstack.com/forms/visa_form_denver_2018_ptg [7] https://openstackfoundation.formstack.com/forms/travelsupportptg_denver_2018 [8] https://denver2018ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL:
From mdamkot at salesforce.com Fri Apr 20 15:22:46 2018 From: mdamkot at salesforce.com (Michael Damkot) Date: Fri, 20 Apr 2018 11:22:46 -0400 Subject: [Openstack-operators] Intro and Containerized Control Plane Message-ID: Hello Operators!! I wanted to say "Hello" to the community once again! I've come back into the OpenStack fold after my time as a member of the Time Warner Cable team. Salesforce is working toward greatly increasing the size and scale of our OpenStack use cases as well as our participation in the community. We're currently deep diving on a few things, including containerizing a number of control plane components. Is anyone willing to share any hurdles or hiccups they've hit while exploring containerization? I didn't see much of anything in the archives, but I know we aren't the only ones heading down this path. Thanks in advance! -- Michael Damkot @mdamkot - twitter -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dev.faz at gmail.com Fri Apr 20 16:32:07 2018 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Fri, 20 Apr 2018 18:32:07 +0200 Subject: [Openstack-operators] Intro and Containerized Control Plane In-Reply-To: References: Message-ID: <3D947769-1E6A-4FD4-B21C-D8964A498956@gmail.com> Hi, we run completely in containers. I would recommend to take a look at how kolla is creating and managing the containers. This should prevent you from the bigger pitfalls :) If you have any specific questions. Don't hesitate to ask. Fabian Zimmermann Am 20. April 2018 17:22:46 MESZ schrieb Michael Damkot : >Hello Operators!! > >I wanted to say "Hello" to the community once again! I've come back >into >the OpenStack fold after my time as a former member of the Time Warner >Cable Team. > >Salesforce is working toward greatly increasing the size and scale of >our >OpenStack use cases as well as our participation in the community. >We're >currently deep diving on a few things including containerizing a number >of >control plane components. Is anyone willing to share any hurdles or >hiccups >they've hit while exploring containerization? I didn't see much of >anything >in the archives but I know we aren't the only ones heading down this >path. > >Thanks in advance! > >-- >Michael Damkot >@mdamkot - twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri Apr 20 17:31:20 2018 From: amy at demarco.com (Amy Marrich) Date: Fri, 20 Apr 2018 12:31:20 -0500 Subject: [Openstack-operators] =?utf-8?q?OpenStack_Summit_Vancouver_Speed_?= =?utf-8?q?Mentoring_Workshop=E2=80=94Call_for_Mentors?= Message-ID: *Calling All OpenStack Mentors!We’re quickly nearing the Vancouver Summit, and gearing up for another successful Speed Mentoring workshop! This workshop, now a mainstay at OpenStack Summits, is designed to provide guidance to newcomers so that they can dive in and actively engage, participate and contribute to our community. And we couldn’t do this without you—our fearless mentors!Speed Mentoring Workshop & LunchMonday, May 21, 12:15 – 1:30 pmVancouver Convention Centre West, Level 2, Room 215-216https://bit.ly/2HCGjMo Who should sign up?Are you excited about OpenStack and interested in sharing your career, community or technical advice and expertise with others? Contributed (code and non-code contributions welcome) to the OpenStack community for at least one year? Any mentor of any gender with a technical or non-technical background is encouraged to join us. Share your insights, inspire those new to our community, grab lunch, and pick up special mentor gifts!How does it work?Simply sign up here , and fill in a short survey about your areas of interests and expertise. Your answers will be used to produce fun, customized baseball cards that you can use to introduce yourself to the mentees. You will be provided with mentees’ areas of interest and questions in advance to help you prepare, and we’ll meet as a team ahead of time to go over logistics and answer any questions you may have. On the day of the event, plan to arrive ~ 15 minutes before the session. During the session, you will meet with small groups of mentees in 15-minute intervals and answer their questions about how to grow in the community.It’s a fast-paced event and a great way to meet new people, introduce them to the Summit and welcome them to the OpenStack community.Be sure to sign up today !* *Thanks,* *Amy (spotz)* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From massimo.sgaravatto at gmail.com Sat Apr 21 05:49:09 2018 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Sat, 21 Apr 2018 07:49:09 +0200 Subject: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: enabled_filters = AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,AggregateRamFilter,AggregateCoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter Cheers, Massimo On Wed, Apr 18, 2018 at 10:20 PM, Simon Leinen wrote: > Artom Lifshitz writes: > > To that end, we'd like to know what filters operators are enabling in > > their deployment. If you can, please reply to this email with your > > [filter_scheduler]/enabled_filters (or > > [DEFAULT]/scheduler_default_filters if you're using an older version) > > option from nova.conf. Any other comments are welcome as well :) > > We have the following enabled on our semi-public (academic community) > cloud, which runs on Newton: > > AggregateInstanceExtraSpecsFilter > AvailabilityZoneFilter > ComputeCapabilitiesFilter > ComputeFilter > ImagePropertiesFilter > PciPassthroughFilter > RamFilter > RetryFilter > ServerGroupAffinityFilter > ServerGroupAntiAffinityFilter > > (sorted alphabetically) Recently we've also been trying > > AggregateImagePropertiesIsolation > > ...but it looks like we'll replace it with our own because it's a bit > awkward to use for our purpose (scheduling Windows instance to licensed > compute nodes). > -- > Simon. > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Mon Apr 23 00:48:18 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Sun, 22 Apr 2018 19:48:18 -0500 Subject: [Openstack-operators] Reminder: UC Meeting 4/23 @ 1800UTC Message-ID: Hi everyone, Friendly reminder we have a UC meeting tomorrow in #openstack-uc on freenode at 18:00UTC Agenda: https://wiki.openstack.org/wiki/Governance/Foundation/UserCo mmittee#Meeting_Agenda.2FPrevious_Meeting_Logs -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Mon Apr 23 08:17:07 2018 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Mon, 23 Apr 2018 10:17:07 +0200 Subject: [Openstack-operators] Receipt to transfer the ownership of an instance Message-ID: As far as I understand there is not a clean way to transfer the ownership of an instance from a user to another one (the implementation of the blueprint https://blueprints.launchpad.net/nova/+spec/transfer-instance-ownership was abandoned). Is there at least a receipt (i.e. what needs to be changed in the database) that operators can follow to implement such use case ? Thanks, Massimo -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From eumel at arcor.de Mon Apr 23 09:52:42 2018 From: eumel at arcor.de (Frank Kloeker) Date: Mon, 23 Apr 2018 11:52:42 +0200 Subject: [Openstack-operators] [I18n] Office Hours, Thursday, 2018/04/26 13:00-14:00 UTC & 2018/05/03 07:00-08:00 UTC Message-ID: <2fbf8d44661c0af21ca59ac358abe3e5@arcor.de> Hello, the I18n team wants to change something in how it collaborates and communicates with other teams and users. Instead of team meetings, we are offering open communication around the Summit in the Freenode IRC #openstack-i18n channel. Feel free to add your topics to the wiki page at [1]. Or better, join one of our Office Hours to discuss topics around I18n. We are especially interested in: * Feedback about the quality of translations in different languages * New projects or documents with interest in translation * New ideas like AI for I18n, or new feature requests for Zanata, our translation platform You can meet us in person, together with the Docs team, at the Project Onboarding Session during the Vancouver Summit [2]. kind regards Frank PTL I18n [1] https://wiki.openstack.org/wiki/Meetings/I18nTeamMeeting [2] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21627/docsi18n-project-onboarding
From zioproto at gmail.com Mon Apr 23 11:38:49 2018 From: zioproto at gmail.com (Saverio Proto) Date: Mon, 23 Apr 2018 13:38:49 +0200 Subject: [Openstack-operators] Receipt to transfer the ownership of an instance In-Reply-To: References: Message-ID: Hello Massimo, what we suggest to our users is to migrate a volume, and to create a new VM from that volume. https://help.switch.ch/engines/documentation/migrating-resources/ The bad thing is that the new VM has a new IP address, so eventually DNS records have to be updated by the users. Cheers, Saverio 2018-04-23 10:17 GMT+02:00 Massimo Sgaravatto : > As far as I understand there is not a clean way to transfer the ownership of > an instance from a user to another one (the implementation of the blueprint > https://blueprints.launchpad.net/nova/+spec/transfer-instance-ownership was > abandoned). > > > Is there at least a receipt (i.e. what needs to be changed in the database) > that operators can follow to implement such use case? > > Thanks, Massimo > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >
From mrhillsman at gmail.com Mon Apr 23 13:52:02 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 23 Apr 2018 08:52:02 -0500 Subject: [Openstack-operators] [OpenStack] [user-committee] Reminder: UC Meeting 4/23 @ 1800UTC Message-ID: Hi everyone, Friendly reminder we have a UC meeting today in #openstack-uc on freenode at 18:00UTC Agenda: https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Meeting_Agenda.2FPrevious_Meeting_Logs -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL:
From massimo.sgaravatto at gmail.com Mon Apr 23 13:55:44 2018 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Mon, 23 Apr 2018 15:55:44 +0200 Subject: [Openstack-operators] Receipt to transfer the ownership of an instance In-Reply-To: References: Message-ID: Thanks for the hint! :-) Cheers, Massimo On Mon, Apr 23, 2018 at 1:38 PM, Saverio Proto wrote: > Hello Massimo, > > what we suggest to our users is to migrate a volume, and to create a > new VM from that volume.
> https://help.switch.ch/engines/documentation/migrating-resources/ > > The bad thing is that the new VM has a new IP address, so eventually > DNS records have to be updated by the users. > > Cheers, > > Saverio > > > 2018-04-23 10:17 GMT+02:00 Massimo Sgaravatto < > massimo.sgaravatto at gmail.com>: > > As far as I understand there is not a clean way to transfer the > ownership of > > an instance from a user to another one (the implementation of the > blueprint > > https://blueprints.launchpad.net/nova/+spec/transfer-instance-ownership > was > > abandoned). > > > > > > Is there at least a receipt (i.e. what needs to be changed in the > database) > > that operators can follow to implement such use case? > > > > Thanks, Massimo > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Tim.Bell at cern.ch Mon Apr 23 17:46:40 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Mon, 23 Apr 2018 17:46:40 +0000 Subject: [Openstack-operators] 4K block size Message-ID: Has anyone experience of working with local disks or volumes with physical/logical block sizes of 4K rather than 512? There seems to be KVM support for this (http://fibrevillage.com/sysadmin/216-how-to-make-qemu-kvm-accept-4k-sector-sized-disks) but I could not see how to get the appropriate flavors/volumes in an OpenStack environment? Is there any performance improvement from moving to 4K rather than 512 byte sectors? Tim -------------- next part -------------- An HTML attachment was scrubbed... URL:
From mrhillsman at gmail.com Mon Apr 23 19:27:52 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 23 Apr 2018 14:27:52 -0500 Subject: [Openstack-operators] [Openstack] help In-Reply-To: References: Message-ID: Douaa can you provide details on the error you are getting? Also I am adding the Operators ML as some more practitioners may be able to see it from there. On Mon, Apr 23, 2018 at 11:55 AM, Douaa wrote: > Hello, > I'm trying to use OpenStack as a VIM on OpenBaton; for that I created two VMs, > one for OpenStack and the second for OpenBaton. > I have installed Packstack on CentOS and OpenBaton on Ubuntu 16.04. Now > I'm trying to create the OpenStack VIM on OpenBaton, but I get an error. Are there > any plugins or configuration I have to set up before creating the OpenStack VIM? > Thanks for helping > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL:
From sean.mcginnis at gmx.com Mon Apr 23 19:54:50 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 23 Apr 2018 14:54:50 -0500 Subject: [Openstack-operators] 4K block size In-Reply-To: References: Message-ID: <20180423195449.GB17397@sm-xps> On Mon, Apr 23, 2018 at 05:46:40PM +0000, Tim Bell wrote: > > Has anyone experience of working with local disks or volumes with physical/logical block sizes of 4K rather than 512?
> > There seems to be KVM support for this (http://fibrevillage.com/sysadmin/216-how-to-make-qemu-kvm-accept-4k-sector-sized-disks) but I could not see how to get the appropriate flavors/volumes in an OpenStack environment? > > Is there any performance improvement from moving to 4K rather than 512 byte sectors? > I haven't seen much of a performance difference between drives with one exception that I don't think will apply here. For backward compatibility, there is something that is called "512e mode". This basically takes a 4k sector size and, using software abstraction, presents it to the host as a 512 byte sector drive. So with this abstraction being done in between, there can be a slight performance hit as things are translated. As far as I know, you shouldn't need to have specific volume types and the use of these drives should be transparent to the upper layers. At least coming from the Cinder side. I'm not sure if there are any special considerations on the Nova side though, so it would be great to hear from anyone that has any experience with this and Nova. I did find a decent write-up on some of the differences with 4k vs 512. It focuses on SQL workloads, but I think the basics are generally applicable to most workloads: http://en.community.dell.com/techcenter/enterprise-solutions/w/sql_solutions/12102.performance-comparison-between-4k-and-512e-hard-drives Sean
From john.vanommen at gmail.com Mon Apr 23 20:15:51 2018 From: john.vanommen at gmail.com (John van Ommen) Date: Mon, 23 Apr 2018 20:15:51 +0000 Subject: [Openstack-operators] 4K block size In-Reply-To: References: Message-ID: The benchmarks DO appear to show a modest improvement. In the bugzilla post you linked, Paolo said that "With 4k logical sector size in the host, you must have a 4k logical sector size in the guest too." So it appears you'd need to use a disk that physically supports it, along with an OS that supports it also. RHEL7 supports it, here's some detail: https://access.redhat.com/solutions/56494 On Mon, Apr 23, 2018, 10:47 AM Tim Bell wrote: > > Has anyone experience of working with local disks or volumes with > physical/logical block sizes of 4K rather than 512? > > > > There seems to be KVM support for this ( > http://fibrevillage.com/sysadmin/216-how-to-make-qemu-kvm-accept-4k-sector-sized-disks) > but I could not see how to get the appropriate flavors/volumes in an > OpenStack environment? > > > > Is there any performance improvement from moving to 4K rather than 512 > byte sectors? > > > > Tim > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From allison at openstack.org Mon Apr 23 20:34:54 2018 From: allison at openstack.org (Allison Price) Date: Mon, 23 Apr 2018 15:34:54 -0500 Subject: [Openstack-operators] OpenStack User Survey: Identity Service, Networking and Block Storage Drivers Answer Options In-Reply-To: <5ADE34FC.4020003@openstack.org> References: <9F1755CF-6823-4662-887E-C6D17F962C2D@openstack.org> <339AB2CE-A99C-43C0-8C3B-875520050DFF@cern.ch> <5ADE34FC.4020003@openstack.org> Message-ID: Hi Tim, Sorry for the delay in response, and thank you for bringing this up.
We tried to clarify the Networking Drivers by explicitly specifying each distinct driver that might be selected and noting which ones were ML2, but we're also open to other ideas if people have suggestions. Cheers, Allison > > >> From: Tim Bell >> Date: March 29, 2018 at 2:27 PM >> To: Allison Price , OpenStack Operators >> Subject: [Openstack-operators] OpenStack User Survey: Identity Service, Networking and Block Storage Drivers Answer Options >> Allison, >> >> In the past, there has been some confusion on the ML2 driver since many of the drivers are both ML2 based and have specific drivers. Had you an approach in mind for this time? >> >> It does mean that the results won’t be directly comparable but cleaning up this confusion would seem worth it in the longer term. >> >> Tim >> >> From: Allison Price >> Date: Thursday, 29 March 2018 at 19:24 >> To: openstack-operators >> Subject: [Openstack-operators] OpenStack User Survey: Identity Service, Networking and Block Storage Drivers Answer Options >> >> Hi everyone, <> >> >> We are opening the OpenStack User Survey submission process next month and wanted to collect operator feedback on the answer choices for three particular questions: Identity Service (Keystone) drivers, Network (Neutron) drivers and Block Storage (Cinder) drivers. We want to make sure that we have a list of the most commonly used drivers so that we can collect the appropriate data from OpenStack users. Each of the questions will have a free text “Other” option, so they don’t need to be comprehensive, but if you think that there is a driver that should be included, please reply on this email thread or contact me directly. >> >> Thanks! >> Allison >> >> >> Allison Price >> OpenStack Foundation >> allison at openstack.org >> >> >> Which OpenStack Identity Service (Keystone) drivers are you using? >> · Active Directory >> · KVS >> · LDAP >> · PAM >> · SQL (default) >> · Templated >> · Other >> >> Which OpenStack Network (Neutron) drivers are you using? >> · Cisco UCS / Nexus >> · ML2 - Cisco APIC >> · ML2 - Linux Bridge >> · ML2 - Mellanox >> · ML2 - MidoNet >> · ML2 - OpenDaylight >> · ML2 - Open vSwitch >> · nova-network >> · VMware NSX (formerly NIcira NVP) >> · A10 Networks >> · Arista >> · Big Switch >> · Brocade >> · Embrace >> · Extreme Networks >> · Hyper-V >> · IBM SDN-VE >> · Linux Bridge >> · Mellanox >> · Meta PluginP >> · MidoNet >> · Modular Layer 2 Plugin (ML2) >> · NEC OpenFlow >> · OpenDaylight >> · Nuage Networks >> · One Convergence NVSD >> · Tungsten Fabric (OpenContrail) >> · Open vSwitch >> · PLUMgrid >> · Ruijie Networks >> · Ryu OpenFlow Controller >> · ML2 - Alcatel-Lucent Omniswitch >> · ML2 - Arista >> · ML2 - Big Switch >> · ML2 - Brocade VDX/VCS >> · ML2 - Calico >> · ML2 - Cisco DFA >> · ML2 - Cloudbase Hyper-V >> · ML2 - Freescale SDN >> · ML2 - Freescale FWaaS >> · ML2 - Fujitsu Converged Fabric Switch >> · ML2 - Huawei Agile Controller >> · ML2 - Mellanox SR-IOV >> · ML2 - Nuage Networks >> · ML2 - One Convergence >> · ML2 - ONOS >> · ML2 - OpenFlow Agent >> · ML2 - Pluribus >> · ML2 - Fail-F >> · ML2 - VMware DVS >> · Other >> >> Which OpenStack Block Storage (Cinder) drivers are you using? 
>> · Ceph RBD >> · Coraid >> · Dell EqualLogic >> · EMC >> · GlusterFS >> · HDS >> · HP 3PAR >> · HP LeftHand >> · Huawei >> · IBM GPFS >> · IBM NAS >> · IBM Storwize >> · IBM XIV / DS8000 >> · LVM (default) >> · Mellanox >> · NetApp >> · Nexenta >> · NFS >> · ProphetStor >> · SAN / Solaris >> · Scality >> · Sheepdog >> · SolidFire >> · VMware VMDK >> · Windows Server 2012 >> · Xenapi NFS >> · XenAPI Storage Manager >> · Zadara >> · Other >> >> >> >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From jp.methot at planethoster.info Tue Apr 24 00:58:26 2018 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Tue, 24 Apr 2018 09:58:26 +0900 Subject: [Openstack-operators] Strange behaviour change in cinder with a Dell compellent backend Message-ID: Hi, This is a very strange behaviour that has been causing me issues with my SAN ever since we upgraded to Mitaka or Ocata, I believe, several months ago. Essentially, I used to be able to change the ID of a disk in the SAN to swap the disk in Openstack. So, for example, I had disk 1. I could restore a snapshot of disk 1 on disk 2, rename disk 1 to disk 1-bak and rename disk 2 to disk 1, and the VM would start booting off of the new disk I had just made. This has changed. Now, somehow, Openstack sticks to the original disk even if I rename the disk. While annoying, this behaviour didn’t really cause me that many issues. However, I have discovered that if I migrate VMs on which such an operation happened, the VM will try to boot off the original disk from several months ago, despite the new disk being there with the correct ID. How can Openstack find the old disk even if its ID in the SAN has changed? Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From mark.mielke at gmail.com Tue Apr 24 01:22:18 2018 From: mark.mielke at gmail.com (Mark Mielke) Date: Mon, 23 Apr 2018 21:22:18 -0400 Subject: [Openstack-operators] 4K block size In-Reply-To: <20180423195449.GB17397@sm-xps> References: <20180423195449.GB17397@sm-xps> Message-ID: On Mon, Apr 23, 2018 at 3:54 PM, Sean McGinnis wrote: > On Mon, Apr 23, 2018 at 05:46:40PM +0000, Tim Bell wrote: > > Has anyone experience of working with local disks or volumes with > physical/logical block sizes of 4K rather than 512? > > There seems to be KVM support for this (http://fibrevillage.com/ > sysadmin/216-how-to-make-qemu-kvm-accept-4k-sector-sized-disks) but I > could not see how to get the appropriate flavors/volumes in an OpenStack > environment? > > Is there any performance improvement from moving to 4K rather than 512 > byte sectors? > > I haven't seen much of a performance difference between drives with one > exception that I don't think will apply here. For backward compatibility, > there > is something that is called "512e mode". This basically takes a 4k sector > size > and, using software abstraction, presents it to the host as a 512 byte > sector > drive. So with this abstraction being done in between, there can be a > slight > performance hit as things are translated. > Today, most commonly used file systems where performance matters already use 4K logical block sizes underneath. So, it doesn't matter if it is 512n, 512e, or 4Kn.
They all work approximately the same. In theory the disk performance is better with "Advanced Format" disks as there are fewer gaps between the data sectors, but you can get such gains with denser platters or faster rotation. An example of a difference here might be that you might have a 5-platter 4TB disk with 512n, or a 4-platter 4TB with 4Kn. The 4-platter might require less energy to run, and may have higher sequential read and write performance. But, if the disk specs meet your requirements, you often wouldn't care if it was 512n, 512e, or 4Kn. One case where it definitely does matter is alignment. If the logical sectors are not aligned with the physical sectors, this can have a crippling impact. A 512e drive "emulates" 512n. But, if your logical sector is out of alignment, and bridges the end of one physical sector and the beginning of another physical sector, how does it safely write in units of the physical sector? Unless the physical blocks happen to be in cache, it will have to first read each block before it can re-write the block. I believe GRUB Legacy is not 512e/4Kn aware. RHEL 5 systems, RHEL 5 systems upgraded to RHEL 6 systems, or systems that were created with fixed partition tables that were designed before 512e/4Kn drives existed, can end up with the more traditional MBR layout where the first partition of the disk begins on the 63rd 512-byte sector. Using such a partition table on a 512e disk can be very bad news. In our case, we had real-life RHEL 7 systems naively imaged using a Kickstart configuration that was originally designed for RHEL 5. They were configured to use the Docker lvm-thinp driver, and this particular use case was very heavy on random I/O through the thin volume layers. A set of users were reporting good performance, and another set of users were reporting really bad performance. I looked at the systems and determined that they had different makes and models of disk: the "slow" systems all had 512e, and the "fast" systems all had 512n. I checked the Kickstart configuration they were using, and sure enough they were using the original layout. Modern partition tools allow a full 1 MB at the start of the disk, making the first partition aligned on both 512 and 4K (and 1 MB). This leaves more room for the boot code, and it mostly eliminates alignment problems. If you did have a file system that was of the belief that it could read and write at the 512-byte sector level, it would also have worst-case behaviour similar to the above. I don't think EXT4 or XFS do this, so it is outside my concern and I didn't research which ones still do this. But, knowing all of the above... I actually patched our SolidFire driver to properly implement 512e information to be exposed through libvirt and qemu/virtio into the guest. The guest can clearly see "4K" physical sectors and "512" logical sectors, and it can then make the best decision based upon this information. I did suggest that the SolidFire people adopt this, but with my local patch the pressure to follow up here was eliminated, and I didn't get back to this. I think the guest should have the right information so that it can make the correct decision. If the information is filtered, and a guest is presented with 512-byte physical and logical sectors even though the physical is 4K, then certain use cases may exhibit very bad behaviour.
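To make the mechanism being described here concrete: libvirt can expose this geometry to a guest through the <blockio> element of a disk device. A minimal sketch of the relevant domain XML fragment follows; the values are illustrative and not tied to any particular backend driver:

    <disk type='block' device='disk'>
      <blockio logical_block_size='512' physical_block_size='4096'/>
      <target dev='vda' bus='virtio'/>
    </disk>

Inside the guest, lsblk -o NAME,LOG-SEC,PHY-SEC (or /sys/block/vda/queue/physical_block_size) should then report the emulated 512e geometry, which is what lets partitioning tools and filesystems align to the 4K physical sectors.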
Probably you won't notice, because the typical benchmarks run would show good speed and you would be unaware that the overhead is actually due to mis-alignment, or partial sector reads and writes. -- Mark Mielke -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Tue Apr 24 02:22:48 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 24 Apr 2018 10:22:48 +0800 Subject: [Openstack-operators] Public Cloud WG PTG Summary Message-ID: Hi team, Sorry for this long overdue summary. During the Dublin PTG as a WG we held two successful discussion sessions on Mon and Tues, and below are the conclusions for this year's planning as far as I could recall. Please feel free to provide further feedback :) - Passport Program v2 - We want to push forward the passport program into the v2 stage this year, including QR code promotion, more member clouds (APAC and North America) and possibly a blockchain experiment (cloud ledger proposal [0]) targeting Berlin Summit if the testnet proves to be successful. - We will be also looking into the possibility of having OpenLab as a special member of Passport Program to help ease some of the difficulties of purely business facing or academic clouds to join the initiative. - Public Cloud Feature List - We will look at a more formal draft of the feature list [1] ready for Vancouver and gather some additional requirement at Vancouver summit. It is also possible for us to do a white paper based upon the feature list content this year, to help user and operators alike better understanding what OpenStack public cloud could offer. - Public Cloud SDK Certification - Chris Hoge, Dims and Melvin have been helping putting up a testing plan for public cloud sdk certification based upon the initial work OpenLab team has achieved. Public Cloud WG will provide a interop-like guideline based upon the testing mechanism. - Public Cloud Meetup - We look forward to have more :) [0] https://docs.google.com/presentation/d/1RYRq1YdYEoZ5KNKwlDDtnunMdoYRAHPjPslnng3VqcI/edit?usp=sharing [1] https://docs.google.com/spreadsheets/d/1Mf8OAyTzZxCKzYHMgBl-QK_2-XSycSkOjqCyMTIedkA/edit?usp=sharing -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Tue Apr 24 07:22:51 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 24 Apr 2018 15:22:51 +0800 Subject: [Openstack-operators] [publiccloud-wg]KubeCon EU Public Cloud Meetup ? Message-ID: Hi, I'm wondering for people who will attend KubeCon EU is there any interest for a public cloud meetup ? We could discuss many items listed in the ptg summary I just sent out via the meetup :) -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. 
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Apr 24 13:26:09 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 24 Apr 2018 15:26:09 +0200 Subject: [Openstack-operators] [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: <530903a4-701d-595e-acc3-05369697cf06@gmail.com> References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> Message-ID: Sorry folks for the late reply, I'll try to also weigh in the Gerrit change. On Tue, Apr 24, 2018 at 2:55 PM, Jay Pipes wrote: > On 04/23/2018 05:51 PM, Arvind N wrote: > >> Thanks for the detailed options Matt/eric/jay. >> >> Just few of my thoughts, >> >> For #1, we can make the explanation very clear that we rejected the >> request because the original traits specified in the original image and the >> new traits specified in the new image do not match and hence rebuild is not >> supported. >> > > I believe I had suggested that on the spec amendment patch. Matt had > concerns about an error message being a poor user experience (I don't > necessarily disagree with that) and I had suggested a clearer error message > to try and make that user experience slightly less sucky. > > For #3, >> >> Even though it handles the nested provider, there is a potential issue. >> >> Lets say a host with two SRIOV nic. One is normal SRIOV nic(VF1), another >> one with some kind of offload feature(VF2).(Described by alex) >> >> Initial instance launch happens with VF:1 allocated, rebuild launches >> with modified request with traits=HW_NIC_OFFLOAD_X, so basically we want >> the instance to be allocated VF2. >> >> But the original allocation happens against VF1 and since in rebuild the >> original allocations are not changed, we have wrong allocations. >> > > Yep, that is certainly an issue. The only solution to this that I can see > would be to have the conductor ask the compute node to do the pre-flight > check. The compute node already has the entire tree of providers, their > inventories and traits, along with information about providers that share > resources with the compute node. It has this information in the > ProviderTree object in the reportclient that is contained in the compute > node resource tracker. > > The pre-flight check, if run on the compute node, would be able to grab > the allocation records for the instance and determine if the required > traits for the new image are present on the actual resource providers > allocated against for the instance (and not including any child providers > not allocated against). > > Yup, that. We also have pre-flight checks for move operations like live and cold migrations, and I'd really like to keep all the conditionals in the conductor, because it knows better than the scheduler which operation is asked. I'm not really happy with adding more in the scheduler about "yeah, it's a rebuild, so please do something exceptional", and I'm also not happy with having a filter (that can be disabled) calling the Placement API. > Or... 
we chalk this up as a "too bad" situation and just either go with > option #1 or simply don't care about it. Also, that too. Maybe just providing an error would be enough, no? Operators, what do you think? (cross-calling openstack-operators@) -Sylvain > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Apr 24 15:22:40 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 24 Apr 2018 10:22:40 -0500 Subject: [Openstack-operators] Strange behaviour change in cinder with a Dell compellent backend In-Reply-To: References: Message-ID: <20180424152240.GB30030@sm-xps> On Tue, Apr 24, 2018 at 09:58:26AM +0900, Jean-Philippe Méthot wrote: > Hi, > > This is a very strange behaviour that has been causing me issues with my SAN ever since we upgraded to Mitaka or Ocata, I believe, several months ago. Essentially, I used to be able to change the ID of a disk in the SAN to swap the disk in Openstack. So, for example, I had disk 1. I could restore a snapshot of disk 1 on disk 2, rename disk 1 to disk 1-bak and rename disk 2 to disk 1, and the VM would start booting off of the new disk I had just made. This has changed. Now, somehow, Openstack sticks to the original disk even if I rename the disk. > > While annoying, this behaviour didn't really cause me that many issues. However, I have discovered that if I migrate VMs on which such an operation happened, the VM will try to boot off the original disk from several months ago, despite the new disk being there with the correct ID. How can Openstack find the old disk even if its ID in the SAN has changed? > If I remember right, this was a fix in the driver to be able to track the volume by its native array ID and not its name. This is how things should work. Reliance on the name of the volume is not safe, and as seen here, can be misused to do things that are not really supported and can cause some unintended side effects. You can update the database to get the same (mis)behavior you were using, but I am not suggesting that is a good thing to do. From jp.methot at planethoster.info Wed Apr 25 02:09:59 2018 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Wed, 25 Apr 2018 11:09:59 +0900 Subject: [Openstack-operators] Strange behaviour change in cinder with a Dell compellent backend In-Reply-To: <20180424152240.GB30030@sm-xps> References: <20180424152240.GB30030@sm-xps> Message-ID: <94A0B9DC-9B40-4E30-B660-06FEEC95339B@planethoster.info> Thank you for your reply. This implies that the SAN keeps track of a volume's native ID. So, for example, if I had a VM that was trying to mount a nonexistent volume ID after migration, I could fix this by creating a new volume with the proper ID and just migrating the data from the volume that was in use previously. That was my main concern, and now it does make the process of fixing this simpler. Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. > Le 25 avr. 2018 à 00:22, Sean McGinnis a écrit : > > If I remember right, this was a fix in the driver to be able to track the > volume by its native array ID and not its name. This is how things should work. 
> Reliance on the name of the volume is not safe, and as seen here, can be > misused to do things that are not really supported and can cause some > unintended side effects. > > You can update the database to get the same (mis)behavior you were using, but I > am not suggesting that is a good thing to do. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Wed Apr 25 08:45:56 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 25 Apr 2018 09:45:56 +0100 Subject: [Openstack-operators] [scientific] IRC meeting: Docker and HPC Message-ID: Hello All - We have an IRC meeting at 1100 UTC today in channel #openstack-meeting. Everyone is welcome. Today we have Christian Kniep from Docker joining us to talk about how Docker can be adapted to suit the requirements of HPC workloads. I saw him present on this recently and it should be a very interesting discussion. The full agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_April_25th_2018 Cheers, Stig From jimmy at openstack.org Wed Apr 25 21:07:24 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 25 Apr 2018 16:07:24 -0500 Subject: [Openstack-operators] Summit Forum Schedule Message-ID: <5AE0EE0C.1070400@openstack.org> Hi everyone - Please have a look at the Vancouver Forum schedule: https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing (also attached as a CSV) The proposed schedule was put together by two members from UC, TC and Foundation. We do our best to avoid moving scheduled items around as it tends to create a domino effect, but we do realize we might have missed something. The schedule should generally be set, but if you see a major conflict in either content or speaker availability, please email speakersupport at openstack.org. Thanks all, Jimmy -------------- next part -------------- A non-text attachment was scrubbed... Name: Vancouver forum topic proposals - Community Review - Schedule.csv Type: text/csv Size: 3300 bytes Desc: not available URL: From kendall at openstack.org Wed Apr 25 21:23:57 2018 From: kendall at openstack.org (Kendall Waters) Date: Wed, 25 Apr 2018 16:23:57 -0500 Subject: [Openstack-operators] Only a Few Hours Left Until Prices Increase - OpenStack Summit Vancouver Message-ID: Hi everyone, Friendly reminder that prices for the OpenStack Summit Vancouver will be increasing TONIGHT at 11:59pm PT (April 26, 6:59 UTC). Register NOW before the price increases! Also, if you haven't booked your hotel yet, we still have a limited number of reduced-rate hotel rooms available here . If you have any Summit-related questions, please contact summit at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias at citynetwork.se Thu Apr 26 09:32:07 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Thu, 26 Apr 2018 11:32:07 +0200 Subject: [Openstack-operators] [publiccloud-wg] Meeting this afternoon for Public Cloud WG Message-ID: <81ac878c-386c-74c9-d295-100b60412842@citynetwork.se> Hi folks, Time for a new meeting for the Public Cloud WG. Vancouver is coming closer; the agenda is very open this week, so please join and bring your topic to discuss. 
The open agenda (please add topics) can be found at https://etherpad.openstack.org/p/publiccloud-wg See you all at 1400 UTC in #openstack-publiccloud Cheers, Tobias -- Tobias Rydberg Senior Developer Mobile: +46 733 312780 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From Rajini.Karthik at Dell.com Thu Apr 26 15:23:35 2018 From: Rajini.Karthik at Dell.com (Rajini.Karthik at Dell.com) Date: Thu, 26 Apr 2018 15:23:35 +0000 Subject: [Openstack-operators] Strange behaviour change in cinder with a Dell compellent backend References: <20180424152240.GB30030@sm-xps> <94A0B9DC-9B40-4E30-B660-06FEEC95339B@planethoster.info> Message-ID: <609d5e95e16b42cabc137278d9826ca9@AUSX13MPS308.AMER.DELL.COM> Dell - Internal Use - Confidential We track the SAN's internal ID for the volume. This isn't something that the user can create. Your option is to change the OpenStack database provider_id for the volume to match the new ID. Hope this helps Rajini From: Jean-Philippe Méthot [mailto:jp.methot at planethoster.info] Sent: Tuesday, April 24, 2018 9:10 PM To: Sean McGinnis Cc: openstack-operators Subject: Re: [Openstack-operators] Strange behaviour change in cinder with a Dell compellent backend Thank you for your reply. This implies that the SAN keeps track of a volume's native ID. So, for example, if I had a VM that was trying to mount a nonexistent volume ID after migration, I could fix this by creating a new volume with the proper ID and just migrating the data from the volume that was in use previously. That was my main concern, and now it does make the process of fixing this simpler. Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. Le 25 avr. 2018 à 00:22, Sean McGinnis > a écrit : If I remember right, this was a fix in the driver to be able to track the volume by its native array ID and not its name. This is how things should work. Reliance on the name of the volume is not safe, and as seen here, can be misused to do things that are not really supported and can cause some unintended side effects. You can update the database to get the same (mis)behavior you were using, but I am not suggesting that is a good thing to do. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vondra at homeatcloud.cz Fri Apr 27 09:02:53 2018 From: vondra at homeatcloud.cz (=?utf-8?Q?Tom=C3=A1=C5=A1_Vondra?=) Date: Fri, 27 Apr 2018 11:02:53 +0200 Subject: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: <045e01d3de06$870220b0$95066210$@homeatcloud.cz> Hi! What we've got in our small public cloud: scheduler_default_filters=AggregateInstanceExtraSpecsFilter, AggregateImagePropertiesIsolation, RetryFilter, AvailabilityZoneFilter, AggregateRamFilter, AggregateDiskFilter, AggregateCoreFilter, ComputeFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter #ComputeCapabilitiesFilter off because of conflict with AggregateInstanceExtraSpecsFilter https://bugs.launchpad.net/nova/+bug/1279719 I really like to set resource limits using Aggregate metadata. Also, Windows host isolation is done using image metadata. 
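(For illustration: such aggregate-based limits are set with the regular aggregate CLI, since the Aggregate*Filter variants read their ratios from aggregate metadata rather than from nova.conf. A minimal sketch -- the aggregate and host names here are made up:

   openstack aggregate create ratio-4x
   openstack aggregate set --property cpu_allocation_ratio=4.0 --property ram_allocation_ratio=1.5 ratio-4x
   openstack aggregate add host ratio-4x compute-01

Hosts that are not in any aggregate carrying these keys fall back to the allocation ratios configured in nova.conf.)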
I have filed a bug somewhere that it does not work correctly with Boot from Volume. I believe it got pretty much ignored. That's why we also use flavor metadata. Tomas from Homeatcloud From: Massimo Sgaravatto [mailto:massimo.sgaravatto at gmail.com] Sent: Saturday, April 21, 2018 7:49 AM To: Simon Leinen Cc: OpenStack Development Mailing List (not for usage questions); OpenStack Operators Subject: Re: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey enabled_filters = AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,AggregateRamFilter,AggregateCoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter Cheers, Massimo On Wed, Apr 18, 2018 at 10:20 PM, Simon Leinen wrote: Artom Lifshitz writes: > To that end, we'd like to know what filters operators are enabling in > their deployment. If you can, please reply to this email with your > [filter_scheduler]/enabled_filters (or > [DEFAULT]/scheduler_default_filters if you're using an older version) > option from nova.conf. Any other comments are welcome as well :) We have the following enabled on our semi-public (academic community) cloud, which runs on Newton: AggregateInstanceExtraSpecsFilter AvailabilityZoneFilter ComputeCapabilitiesFilter ComputeFilter ImagePropertiesFilter PciPassthroughFilter RamFilter RetryFilter ServerGroupAffinityFilter ServerGroupAntiAffinityFilter (sorted alphabetically) Recently we've also been trying AggregateImagePropertiesIsolation ...but it looks like we'll replace it with our own because it's a bit awkward to use for our purpose (scheduling Windows instances to licensed compute nodes). -- Simon. _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Apr 27 14:56:40 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 27 Apr 2018 09:56:40 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: <045e01d3de06$870220b0$95066210$@homeatcloud.cz> References: <045e01d3de06$870220b0$95066210$@homeatcloud.cz> Message-ID: <822c0915-d999-f75f-9632-5fab7d57e4f1@gmail.com> On 4/27/2018 4:02 AM, Tomáš Vondra wrote: > Also, Windows host isolation is done using image metadata. I have filed > a bug somewhere that it does not work correctly with Boot from Volume. Likely because for boot from volume the instance.image_id is ''. The request spec, which the filter has access to, also likely doesn't have the backing image metadata for the volume because the instance isn't created with an image directly. But nova could fetch the image metadata from the volume and put that into the request spec. We fixed a similar bug recently for the IsolatedHostsFilter: https://review.openstack.org/#/c/543263/ If you can find the bug, or report a new one, I could take a look. 
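(For reference, a minimal sketch of how this image-property/aggregate isolation is typically wired up -- the image, aggregate and host names here are made up:

   openstack image set --property os_distro=windows win2016-image
   openstack aggregate create windows-licensed
   openstack aggregate set --property os_distro=windows windows-licensed
   openstack aggregate add host windows-licensed compute-42

With AggregateImagePropertiesIsolation enabled, instances booted directly from that image should only land on hosts whose aggregate metadata matches -- which is exactly what breaks for boot-from-volume, where the image properties are not carried on the request.)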
-- Thanks, Matt From jim at jimrollenhagen.com Fri Apr 27 15:04:24 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 27 Apr 2018 11:04:24 -0400 Subject: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: On Wed, Apr 18, 2018 at 11:17 AM, Artom Lifshitz wrote: > Hi all, > > A CI issue [1] caused by tempest thinking some filters are enabled > when they're really not, and a proposed patch [2] to add > (Same|Different)HostFilter to the default filters as a workaround, has > led to a discussion about what filters should be enabled by default in > nova. > > The default filters should make sense for a majority of real world > deployments. Adding some filters to the defaults because CI needs them > is faulty logic, because the needs of CI are different to the needs of > operators/users, and the latter takes priority (though it's my > understanding that a good chunk of operators run tempest on their > clouds post-deployment as a way to validate that the cloud is working > properly, so maybe CI's and users' needs aren't that different after > all). > > To that end, we'd like to know what filters operators are enabling in > their deployment. If you can, please reply to this email with your > [filter_scheduler]/enabled_filters (or > [DEFAULT]/scheduler_default_filters if you're using an older version) > option from nova.conf. Any other comments are welcome as well :) > At Oath: AggregateImagePropertiesIsolation ComputeFilter CoreFilter DifferentHostFilter SameHostFilter ServerGroupAntiAffinityFilter ServerGroupAffinityFilter AvailabilityZoneFilter AggregateInstanceExtraSpecsFilter // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Fri Apr 27 16:04:18 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 27 Apr 2018 11:04:18 -0500 Subject: [Openstack-operators] The Forum Schedule is now live Message-ID: <5AE34A02.8020802@openstack.org> Hello all - Please take a look here for the posted Forum schedule: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 You should also see it update on your Summit App. Thank you and see you in Vancouver! Jimmy From jimmy at openstack.org Fri Apr 27 16:31:28 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 27 Apr 2018 11:31:28 -0500 Subject: [Openstack-operators] [openstack-dev] The Forum Schedule is now live In-Reply-To: <5AE34A02.8020802@openstack.org> References: <5AE34A02.8020802@openstack.org> Message-ID: <5AE35060.1040506@openstack.org> PS: If you have general questions on the schedule, additional updates to an abstract, or changes to the speaker list, please send them along to speakersupport at openstack.org. > Jimmy McArthur > April 27, 2018 at 11:04 AM > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > > Thank you and see you in Vancouver! > Jimmy > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alifshit at redhat.com Sun Apr 29 18:34:09 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Sun, 29 Apr 2018 14:34:09 -0400 Subject: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: Thanks everyone for your input! I wrote a small Python script [1] to present all your responses in an understandable format. Here's the output: Filters common to all deployments: {'ComputeFilter', 'ServerGroupAntiAffinityFilter'} Filter counts (out of 9 deployments):
ServerGroupAntiAffinityFilter 9
ComputeFilter 9
AvailabilityZoneFilter 8
ServerGroupAffinityFilter 8
AggregateInstanceExtraSpecsFilter 8
ImagePropertiesFilter 8
RetryFilter 7
ComputeCapabilitiesFilter 5
AggregateCoreFilter 4
RamFilter 4
PciPassthroughFilter 3
AggregateRamFilter 3
CoreFilter 2
DiskFilter 2
AggregateImagePropertiesIsolation 2
SameHostFilter 2
AggregateMultiTenancyIsolation 1
NUMATopologyFilter 1
AggregateDiskFilter 1
DifferentHostFilter 1
Based on that, we can definitely say that SameHostFilter and DifferentHostFilter do *not* belong in the defaults. In fact, we got our defaults pretty spot on, based on this admittedly very limited dataset. The only frequently occurring filter that's not in our defaults is AggregateInstanceExtraSpecsFilter. [1] https://gist.github.com/notartom/0819df7c3cb9d02315bfabe5630385c9 On Fri, Apr 27, 2018 at 8:10 PM, Lingxian Kong wrote: > At Catalyst Cloud: > > RetryFilter > AvailabilityZoneFilter > RamFilter > ComputeFilter > AggregateCoreFilter > DiskFilter > AggregateInstanceExtraSpecsFilter > ImagePropertiesFilter > ServerGroupAntiAffinityFilter > SameHostFilter > > Cheers, > Lingxian Kong > > > On Sat, Apr 28, 2018 at 3:04 AM Jim Rollenhagen > wrote: >> >> On Wed, Apr 18, 2018 at 11:17 AM, Artom Lifshitz >> wrote: >>> >>> Hi all, >>> >>> A CI issue [1] caused by tempest thinking some filters are enabled >>> when they're really not, and a proposed patch [2] to add >>> (Same|Different)HostFilter to the default filters as a workaround, has >>> led to a discussion about what filters should be enabled by default in >>> nova. >>> >>> The default filters should make sense for a majority of real world >>> deployments. Adding some filters to the defaults because CI needs them >>> is faulty logic, because the needs of CI are different to the needs of >>> operators/users, and the latter takes priority (though it's my >>> understanding that a good chunk of operators run tempest on their >>> clouds post-deployment as a way to validate that the cloud is working >>> properly, so maybe CI's and users' needs aren't that different after >>> all). >>> >>> To that end, we'd like to know what filters operators are enabling in >>> their deployment. If you can, please reply to this email with your >>> [filter_scheduler]/enabled_filters (or >>> [DEFAULT]/scheduler_default_filters if you're using an older version) >>> option from nova.conf. 
Any other comments are welcome as well :) >> >> At Oath: >> >> AggregateImagePropertiesIsolation >> ComputeFilter >> CoreFilter >> DifferentHostFilter >> SameHostFilter >> ServerGroupAntiAffinityFilter >> ServerGroupAffinityFilter >> AvailabilityZoneFilter >> AggregateInstanceExtraSpecsFilter >> >> // jim >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- -- Artom Lifshitz Software Engineer, OpenStack Compute DFG From ed at leafe.com Sun Apr 29 21:29:17 2018 From: ed at leafe.com (Ed Leafe) Date: Sun, 29 Apr 2018 16:29:17 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: On Apr 29, 2018, at 1:34 PM, Artom Lifshitz wrote: > > Based on that, we can definitely say that SameHostFilter and > DifferentHostFilter do *not* belong in the defaults. In fact, we got > our defaults pretty spot on, based on this admittedly very limited > dataset. The only frequently occurring filter that's not in our > defaults is AggregateInstanceExtraSpecsFilter. Another data point that might be illuminating is: how many sites use a custom (i.e., not in-tree) filter or weigher? One of the original design tenets of the scheduler was that we did not want to artificially limit what people could use to control their deployments, but inside of Nova there is a lot of confusion as to whether anyone is using anything but the included filters. So - does anyone out there rely on a filter and/or weigher that they wrote themselves, and maintain outside of OpenStack? -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From dtantsur at redhat.com Mon Apr 30 12:58:06 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 30 Apr 2018 14:58:06 +0200 Subject: [Openstack-operators] [ironic] [all] The last reminder about the classic drivers removal In-Reply-To: References: Message-ID: Hi all, This is the last reminder that the classic drivers will be removed from ironic. We plan to finish the removal before Rocky-2. See below for the information on migration. If for some reason we need to delay the removal, please speak up NOW. Note that I'm personally not inclined to delay it past Rocky, since it requires my time and effort to track this process. Cheers, Dmitry On 03/06/2018 12:11 PM, Dmitry Tantsur wrote: > Hi all, > > As you may already know, we have deprecated classic drivers in the Queens > release. We don't have specific removal plans yet. But according to the > deprecation policy we may remove them at any time after May 1st, which will be > halfway to Rocky milestone 2. Personally, I'd like to do it around then. > > The `online_data_migrations` script will handle migrating nodes, if all required > hardware interfaces and types are enabled before the upgrade to Queens. > Otherwise, check the documentation [1] on how to update your nodes. 
> > Dmitry > > [1] https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html From mihailmed at gmail.com Mon Apr 30 13:18:19 2018 From: mihailmed at gmail.com (Mikhail Medvedev) Date: Mon, 30 Apr 2018 08:18:19 -0500 Subject: Re: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: On Sun, Apr 29, 2018 at 4:29 PM, Ed Leafe wrote: > > Another data point that might be illuminating is: how many sites use a custom (i.e., not in-tree) filter or weigher? One of the original design tenets of the scheduler was that we did not want to artificially limit what people could use to control their deployments, but inside of Nova there is a lot of confusion as to whether anyone is using anything but the included filters. > > So - does anyone out there rely on a filter and/or weigher that they wrote themselves, and maintain outside of OpenStack? > An internal cloud used for Power KVM CI single-use VMs: AvailabilityZoneFilter AggregateMultiTenancyIsolation RetryFilter RamFilter ComputeFilter ComputeCapabilitiesFilter ImagePropertiesFilter CoreFilter NumInstancesFilter * NUMATopologyFilter NumInstancesFilter is a custom weigher I have added that returns the negative of the number of instances on a host. Using it this way gives an even spread of instances over the compute nodes up to the point where the compute cores are filled evenly; then it overflows to the compute nodes with more CPU cores. Maybe it is possible to achieve the same with existing filters; at the time I did not see how. --- Mikhail Medvedev IBM From emilien at redhat.com Mon Apr 30 15:33:14 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 30 Apr 2018 08:33:14 -0700 Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live In-Reply-To: <5AE34A02.8020802@openstack.org> References: <5AE34A02.8020802@openstack.org> Message-ID: On Fri, Apr 27, 2018 at 9:04 AM, Jimmy McArthur wrote: > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > Why TripleO doesn't have project update? Maybe we could combine it with TripleO - Project Onboarding if needed but it would be great to have it advertised as a project update! 
> > Thanks, > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Apr 30 15:47:47 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 10:47:47 -0500 Subject: [Openstack-operators] [openstack-dev] The Forum Schedule is now live In-Reply-To: References: <5AE34A02.8020802@openstack.org> Message-ID: <5AE73AA3.4030408@openstack.org> Project Updates are in their own track: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 As are SIG, BoF and Working Groups: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 > Amy Marrich > April 30, 2018 at 10:44 AM > Emilien, > > I believe that the Project Updates are separate from the Forum? I know > I saw some in the schedule before the Forum submittals were even > closed. Maybe contact speaker support or Jimmy will answer here. > > Thanks, > > Amy (spotz) > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > Emilien Macchi > April 30, 2018 at 10:33 AM > > > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > > You should also see it update on your Summit App. > > Why TripleO doesn't have project update? > Maybe we could combine it with TripleO - Project Onboarding if needed > but it would be great to have it advertised as a project update! > > Thanks, > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 27, 2018 at 11:04 AM > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > > Thank you and see you in Vancouver! > Jimmy > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From valery.tschopp at switch.ch Mon Apr 30 15:55:23 2018 From: valery.tschopp at switch.ch (=?utf-8?B?VmFsw6lyeSBUc2Nob3Bw?=) Date: Mon, 30 Apr 2018 15:55:23 +0000 Subject: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: Yeap, because of the bug [#1677217] in the standard AggregateImagePropertiesIsolation filter, we have written a custom Nova scheduler filter. The filter AggregateImageOsDistroIsolation is a simplified version the AggregateImagePropertiesIsolation, based only on the 'os_distro' image property. 
https://github.com/valerytschopp/nova/blob/aggregate_image_isolation/nova/scheduler/filters/aggregate_image_os_distro_isolation.py [#1677217] https://bugs.launchpad.net/nova/+bug/1677217 Cheers, Valery On 29/04/18, 23:29 , "Ed Leafe" wrote: On Apr 29, 2018, at 1:34 PM, Artom Lifshitz wrote: > > Based on that, we can definitely say that SameHostFilter and > DifferentHostFilter do *not* belong in the defaults. In fact, we got > our defaults pretty spot on, based on this admittedly very limited > dataset. The only frequently occurring filter that's not in our > defaults is AggregateInstanceExtraSpecsFilter. Another data point that might be illuminating is: how many sites use a custom (i.e., not in-tree) filter or weigher? One of the original design tenets of the scheduler was that we did not want to artificially limit what people could use to control their deployments, but inside of Nova there is a lot of confusion as to whether anyone is using anything but the included filters. So - does anyone out there rely on a filter and/or weigher that they wrote themselves, and maintain outside of OpenStack? -- Ed Leafe __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Arkady.Kanevsky at dell.com Mon Apr 30 15:58:24 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 30 Apr 2018 15:58:24 +0000 Subject: [Openstack-operators] [openstack-dev] The Forum Schedule is now live In-Reply-To: <5AE73AA3.4030408@openstack.org> References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> Message-ID: <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM> Both are currently empty. From: Jimmy McArthur [mailto:jimmy at openstack.org] Sent: Monday, April 30, 2018 10:48 AM To: Amy Marrich Cc: OpenStack Development Mailing List (not for usage questions); OpenStack-operators at lists.openstack.org Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live Project Updates are in their own track: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 As are SIG, BoF and Working Groups: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 Amy Marrich April 30, 2018 at 10:44 AM Emilien, I believe that the Project Updates are separate from the Forum? I know I saw some in the schedule before the Forum submittals were even closed. Maybe contact speaker support or Jimmy will answer here. Thanks, Amy (spotz) _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators Emilien Macchi April 30, 2018 at 10:33 AM Hello all - Please take a look here for the posted Forum schedule: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 You should also see it update on your Summit App. Why TripleO doesn't have project update? Maybe we could combine it with TripleO - Project Onboarding if needed but it would be great to have it advertised as a project update! 
Thanks, -- Emilien Macchi __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Jimmy McArthur April 27, 2018 at 11:04 AM Hello all - Please take a look here for the posted Forum schedule: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 You should also see it update on your Summit App. Thank you and see you in Vancouver! Jimmy __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Apr 30 16:22:07 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 11:22:07 -0500 Subject: [Openstack-operators] [openstack-dev] The Forum Schedule is now live In-Reply-To: <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM> References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM> Message-ID: <5AE742AF.2010106@openstack.org> Hmm. I see both populated with all of the relevant sessions. Can you send me a screencap of what you're seeing? > Arkady.Kanevsky at dell.com > April 30, 2018 at 10:58 AM > > Both are currently empty. > > *From:*Jimmy McArthur [mailto:jimmy at openstack.org] > *Sent:* Monday, April 30, 2018 10:48 AM > *To:* Amy Marrich > *Cc:* OpenStack Development Mailing List (not for usage questions); > OpenStack-operators at lists.openstack.org > *Subject:* Re: [Openstack-operators] [openstack-dev] The Forum > Schedule is now live > > Project Updates are in their own track: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 > > As are SIG, BoF and Working Groups: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 > > > Jimmy McArthur > April 30, 2018 at 10:47 AM > Project Updates are in their own track: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 > > As are SIG, BoF and Working Groups: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Amy Marrich > April 30, 2018 at 10:44 AM > Emilien, > > I believe that the Project Updates are separate from the Forum? I know > I saw some in the schedule before the Forum submittals were even > closed. Maybe contact speaker support or Jimmy will answer here. 
> > Thanks, > > Amy (spotz) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Emilien Macchi > April 30, 2018 at 10:33 AM > > > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > > You should also see it update on your Summit App. > > Why TripleO doesn't have project update? > Maybe we could combine it with TripleO - Project Onboarding if needed > but it would be great to have it advertised as a project update! > > Thanks, > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 27, 2018 at 11:04 AM > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > > Thank you and see you in Vancouver! > Jimmy > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mgagne at calavera.ca Mon Apr 30 16:41:21 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Mon, 30 Apr 2018 12:41:21 -0400 Subject: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: Hi, On Sun, Apr 29, 2018 at 5:29 PM, Ed Leafe wrote: > On Apr 29, 2018, at 1:34 PM, Artom Lifshitz wrote: >> >> Based on that, we can definitely say that SameHostFilter and >> DifferentHostFilter do *not* belong in the defaults. In fact, we got >> our defaults pretty spot on, based on this admittedly very limited >> dataset. The only frequently occurring filter that's not in our >> defaults is AggregateInstanceExtraSpecsFilter. > > Another data point that might be illuminating is: how many sites use a custom (i.e., not in-tree) filter or weigher? One of the original design tenets of the scheduler was that we did not want to artificially limit what people could use to control their deployments, but inside of Nova there is a lot of confusion as to whether anyone is using anything but the included filters. > > So - does anyone out there rely on a filter and/or weigher that they wrote themselves, and maintain outside of OpenStack? Yes and we have a bunch. Here are our filters and weighers with explanations. 
Filters for cells: * InstanceTypeClassFilter [0] Filters for cloud/virtual cells: * RetryFilter * AvailabilityZoneFilter * RamFilter * ComputeFilter * AggregateCoreFilter * ImagePropertiesFilter * AggregateImageOsTypeIsolationFilter [1] * AggregateInstanceExtraSpecsFilter * AggregateProjectsIsolationFilter [2] Weighers for cloud/virtual cells: * MetricsWeigher * AggregateRAMWeigher [3] Filters for baremetal cells: * ComputeFilter * NetworkModelFilter [4] * TenantFilter [5] * UserFilter [6] * RetryFilter * AvailabilityZoneFilter * ComputeCapabilitiesFilter * ImagePropertiesFilter * ExactRamFilter * ExactDiskFilter * ExactCoreFilter Weighers for baremetal cells: * ReservedHostForTenantWeigher [7] * ReservedHostForUserWeigher [8] [0] Used to schedule instances based on the flavor class found in extra_specs (virtual/baremetal) [1] Allows properly isolating hosts for licensing purposes. The upstream filter is not strict as per bugs/reviews/specs: * https://bugs.launchpad.net/nova/+bug/1293444 * https://bugs.launchpad.net/nova/+bug/1677217 * https://review.openstack.org/#/c/56420/ * https://review.openstack.org/#/c/85399/ Our custom implementation for Mitaka: https://gist.github.com/mgagne/462e7fa8417843055aa6da7c5fd51c00 [2] Similar filter to AggregateImageOsTypeIsolationFilter but for projects. Our custom implementation for Mitaka: https://gist.github.com/mgagne/d729ccb512b0434568ffb094441f643f [3] Allows changing stacking behavior based on the 'ram_weight_multiplier' aggregate key. (emptiest/fullest) Our custom implementation for Mitaka: https://gist.github.com/mgagne/65f033cbc5fdd4c8d1f45e90c943a5f4 [4] Used to filter Ironic nodes based on supported network models as requested by flavor extra_specs. We support JIT network configuration (flat/bond) and need to know which nodes support what network models beforehand. [5] Used to filter Ironic nodes based on the 'reserved_for_tenant_id' Ironic node property. This is used to reserve Ironic nodes for specific projects. Some customers order lots of machines in advance. We reserve those for them. [6] Used to filter Ironic nodes based on the 'reserved_for_user_id' Ironic node property. This is mainly used when enrolling existing nodes already living on a different system. We reserve the node for a special internal user so the customer cannot reserve the node by mistake until the process is completed. The latest version of Nova dropped user_id from RequestSpec. We had to add it back. [7] Used to favor reserved hosts over non-reserved ones based on project. [8] Used to favor reserved hosts over non-reserved ones based on user. The latest version of Nova dropped user_id from RequestSpec. We had to add it back. -- Mathieu From aschultz at redhat.com Mon Apr 30 16:52:32 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 30 Apr 2018 10:52:32 -0600 Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live In-Reply-To: <5AE73AA3.4030408@openstack.org> References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> Message-ID: On Mon, Apr 30, 2018 at 9:47 AM, Jimmy McArthur wrote: > Project Updates are in their own track: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 > 
I know I saw > some in the schedule before the Forum submittals were even closed. Maybe > contact speaker support or Jimmy will answer here. > > Thanks, > > Amy (spotz) > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > Emilien Macchi > April 30, 2018 at 10:33 AM > > >> Hello all - >> >> Please take a look here for the posted Forum schedule: >> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 >> You should also see it update on your Summit App. > > > Why TripleO doesn't have project update? > Maybe we could combine it with TripleO - Project Onboarding if needed but it > would be great to have it advertised as a project update! > > Thanks, > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 27, 2018 at 11:04 AM > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > > Thank you and see you in Vancouver! > Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jon at csail.mit.edu Mon Apr 30 16:58:16 2018 From: jon at csail.mit.edu (Jonathan Proulx) Date: Mon, 30 Apr 2018 12:58:16 -0400 Subject: [Openstack-operators] I/O errors on RBD after hypervisor crash. Message-ID: <20180430165816.twzs6xrol3eigtnq@csail.mit.edu> Hi All, I have a VM with ephemeral root on RBD spewing I/O erros on boot after hypervisor crash. I've (unfortunately) seen a lot of hypervisors go down badly with lots of VMs on them and this is a new one on me. I can 'rbd export' the volume and I get a clean filesystem. version details OpenStack: Mitaka Host OS: Ubuntu 16.04 Ceph: Luminous (12.2.4) after booting to initrd VM shows: end_request: I/O error, dev vda, sector Tried hard reboot, tried rescue (in which case vdb shows same issue) tried migrating to different hypervisor and all have consistent failure. I do have writeback caching enable on the crashed hypervisor so I can imaging filesystem corruption, but not this type of I/O error. Also if the rbd volume doesn't seem to be dammaged since I could dump it to an iamge and see correct partioning and filesystems. Anyone seen this before? I have the bits since the export worked but concerned about possibility of recurrence. 
Thanks, -Jon -- From juliaashleykreger at gmail.com Mon Apr 30 17:00:12 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 30 Apr 2018 10:00:12 -0700 Subject: [Openstack-operators] Ironic post-deploy cleanup change backport Message-ID: Greetings everyone, We in the Ironic community have recently been discussing back porting a bug fix[1] to stable/queens which addresses cases where a baremetal node is prevented from being able to be deployed due to an orphaned VIF (neutron port) record in ironic. Under normal circumstances, Nova removes the records from Ironic when removing an instance. We found that in some cases, this might not always succeed due to extended locking of a bare metal node undergoing cleaning. Typically we've seen this where IPMI BMCs do not immediately reply to power actions. Our fix adds the logic to ironic itself to go ahead and clean up neutron port attachment records when we are un-deploying a baremetal node. For users of Ironic through nova, there is no difference. For users of ironic who are directly interacting with ironic's API, and where the environment also utilizes Neutron, there is a slight behavior difference in that the `openstack baremetal node vif detach` will not be needed after un-deploying a baremetal node. This also means that the attachment will need to be re-added if the node is being deployed manually. Please let us know if there are any questions or concerns. We have not yet merged this back-port patch, but feel that the benefit of it greatly outweighs the negative impact. -Julia [1]: https://review.openstack.org/#/c/562314/ From jimmy at openstack.org Mon Apr 30 17:05:54 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 12:05:54 -0500 Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live In-Reply-To: References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> Message-ID: <5AE74CF2.9010804@openstack.org> Alex, It looks like we have a spot held for you, but did not receive confirmation that TripleO would be moving forward with Project Update. If you all will be recording this, we have you down for Wednesday from 11:25 - 11:45am. 
>> Maybe we could combine it with TripleO - Project Onboarding if needed but it >> would be great to have it advertised as a project update! >> >> Thanks, >> -- >> Emilien Macchi >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy McArthur >> April 27, 2018 at 11:04 AM >> Hello all - >> >> Please take a look here for the posted Forum schedule: >> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 >> You should also see it update on your Summit App. >> >> Thank you and see you in Vancouver! >> Jimmy >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > Jimmy McArthur > April 30, 2018 at 10:47 AM > Project Updates are in their own track: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 > > As are SIG, BoF and Working Groups: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Amy Marrich > April 30, 2018 at 10:44 AM > Emilien, > > I believe that the Project Updates are separate from the Forum? I know > I saw some in the schedule before the Forum submittals were even > closed. Maybe contact speaker support or Jimmy will answer here. > > Thanks, > > Amy (spotz) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Emilien Macchi > April 30, 2018 at 10:33 AM > > > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > > You should also see it update on your Summit App. > > Why TripleO doesn't have project update? > Maybe we could combine it with TripleO - Project Onboarding if needed > but it would be great to have it advertised as a project update! 
> > Thanks, > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 27, 2018 at 11:04 AM > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > > Thank you and see you in Vancouver! > Jimmy > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jon at csail.mit.edu Mon Apr 30 17:22:57 2018 From: jon at csail.mit.edu (Jonathan Proulx) Date: Mon, 30 Apr 2018 13:22:57 -0400 Subject: [Openstack-operators] I/O errors on RBD after hypervisor crash. In-Reply-To: <20180430165816.twzs6xrol3eigtnq@csail.mit.edu> References: <20180430165816.twzs6xrol3eigtnq@csail.mit.edu> Message-ID: <20180430172257.hzfpxowfv2sygps4@csail.mit.edu> In Proulx's Corollary to Murphy's Law, just after hitting send I tried something that "worked". I noticed the volume shared nothing with the image it was based on so tried "flattening" it just to try something. Oddly that worked, that or just having waited in power off state for an hour wile I was at lunch. Still have no theory on why it broke or how that could be a fix...if anyone else does please do tell :) Thanks, -JOn On Mon, Apr 30, 2018 at 12:58:16PM -0400, Jonathan Proulx wrote: :Hi All, : :I have a VM with ephemeral root on RBD spewing I/O erros on boot after :hypervisor crash. I've (unfortunately) seen a lot of hypervisors go :down badly with lots of VMs on them and this is a new one on me. : :I can 'rbd export' the volume and I get a clean filesystem. : :version details : :OpenStack: Mitaka :Host OS: Ubuntu 16.04 :Ceph: Luminous (12.2.4) : :after booting to initrd VM shows: : :end_request: I/O error, dev vda, sector : :Tried hard reboot, tried rescue (in which case vdb shows same :issue) tried migrating to different hypervisor and all have consistent :failure. : :I do have writeback caching enable on the crashed hypervisor so I can :imaging filesystem corruption, but not this type of I/O error. : :Also if the rbd volume doesn't seem to be dammaged since I could dump :it to an iamge and see correct partioning and filesystems. : :Anyone seen this before? I have the bits since the export worked but :concerned about possibility of recurrence. : :Thanks, :-Jon : :-- -- From emilien at redhat.com Mon Apr 30 17:25:33 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 30 Apr 2018 10:25:33 -0700 Subject: [Openstack-operators] [openstack-dev] The Forum Schedule is now live In-Reply-To: <5AE74CF2.9010804@openstack.org> References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> <5AE74CF2.9010804@openstack.org> Message-ID: On Mon, Apr 30, 2018 at 10:05 AM, Jimmy McArthur wrote: > > It looks like we have a spot held for you, but did not receive > confirmation that TripleO would be moving forward with Project Update. If > you all will be recording this, we have you down for Wednesday from 11:25 - > 11:45am. 
Just let me know and I'll get it up on the schedule. > This slot is perfect, and I'll run it with one of my tripleo co-workers (Alex won't be here). Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From jomlowe at iu.edu Mon Apr 30 17:37:26 2018 From: jomlowe at iu.edu (Mike Lowe) Date: Mon, 30 Apr 2018 13:37:26 -0400 Subject: Re: [Openstack-operators] I/O errors on RBD after hypervisor crash. In-Reply-To: <20180430172257.hzfpxowfv2sygps4@csail.mit.edu> References: <20180430165816.twzs6xrol3eigtnq@csail.mit.edu> <20180430172257.hzfpxowfv2sygps4@csail.mit.edu> Message-ID: Sometimes I've had similar problems that can be fixed by running fsck against the rbd device on bare metal, out of band, via rbd-nbd. I've been thinking it's related to trim/discard and some sort of disk geometry mismatch. > On Apr 30, 2018, at 1:22 PM, Jonathan Proulx wrote: > > > In Proulx's Corollary to Murphy's Law, just after hitting send I tried > something that "worked". > > I noticed the volume shared nothing with the image it was based on > so tried "flattening" it just to try something. > > Oddly that worked, that or just having waited in power off state for > an hour wile I was at lunch. > > Still have no theory on why it broke or how that could be a fix...if > anyone else does please do tell :) > > Thanks, > -JOn > > On Mon, Apr 30, 2018 at 12:58:16PM -0400, Jonathan Proulx wrote: > :Hi All, > : > :I have a VM with ephemeral root on RBD spewing I/O erros on boot after > :hypervisor crash. I've (unfortunately) seen a lot of hypervisors go > :down badly with lots of VMs on them and this is a new one on me. > : > :I can 'rbd export' the volume and I get a clean filesystem. > : > :version details > : > :OpenStack: Mitaka > :Host OS: Ubuntu 16.04 > :Ceph: Luminous (12.2.4) > : > :after booting to initrd VM shows: > : > :end_request: I/O error, dev vda, sector > : > :Tried hard reboot, tried rescue (in which case vdb shows same > :issue) tried migrating to different hypervisor and all have consistent > :failure. > : > :I do have writeback caching enable on the crashed hypervisor so I can > :imaging filesystem corruption, but not this type of I/O error. > : > :Also if the rbd volume doesn't seem to be dammaged since I could dump > :it to an iamge and see correct partioning and filesystems. > : > :Anyone seen this before? I have the bits since the export worked but > :concerned about possibility of recurrence. > : > :Thanks, > :-Jon > : > :-- > > -- > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From boll0107 at umn.edu Mon Apr 30 17:44:02 2018 From: boll0107 at umn.edu (Evan Bollig PhD) Date: Mon, 30 Apr 2018 12:44:02 -0500 Subject: Re: [Openstack-operators] I/O errors on RBD after hypervisor crash. In-Reply-To: References: <20180430165816.twzs6xrol3eigtnq@csail.mit.edu> <20180430172257.hzfpxowfv2sygps4@csail.mit.edu> Message-ID: Good tips. Thanks for following up. We'll be on the lookout for this too. Cheers, -E -- Evan F. 
From boll0107 at umn.edu Mon Apr 30 17:44:02 2018
From: boll0107 at umn.edu (Evan Bollig PhD)
Date: Mon, 30 Apr 2018 12:44:02 -0500
Subject: [Openstack-operators] I/O errors on RBD after hypervisor crash.
In-Reply-To:
References: <20180430165816.twzs6xrol3eigtnq@csail.mit.edu>
 <20180430172257.hzfpxowfv2sygps4@csail.mit.edu>
Message-ID:

Good tips. Thanks for following up. We'll be on the lookout for this too.

Cheers,
-E
--
Evan F. Bollig, PhD
Senior Scientific Computing Consultant, Application Developer | Scientific
Computing Solutions (SCS)
Minnesota Supercomputing Institute | msi.umn.edu
University of Minnesota | umn.edu
boll0107 at umn.edu | 612-624-1447 | Walter Lib Rm 556

On Mon, Apr 30, 2018 at 12:37 PM, Mike Lowe wrote:
> Sometimes I've had similar problems that can be fixed by running fsck
> against the rbd device out-of-band on bare metal via rbd-nbd. I've been
> thinking it's related to trim/discard and some sort of disk geometry
> mismatch.
>
> [earlier thread snipped]
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From Arkady.Kanevsky at dell.com Mon Apr 30 18:14:19 2018
From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com)
Date: Mon, 30 Apr 2018 18:14:19 +0000
Subject: [Openstack-operators] [openstack-dev] The Forum Schedule is now live
In-Reply-To: <5AE742AF.2010106@openstack.org>
References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org>
 <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM>
 <5AE742AF.2010106@openstack.org>
Message-ID:

Interesting. It does work on Chrome but not on IE. Here is an IE screenshot.

Thanks,
Arkady

From: Jimmy McArthur [mailto:jimmy at openstack.org]
Sent: Monday, April 30, 2018 11:22 AM
To: Kanevsky, Arkady
Cc: amy at demarco.com; openstack-dev at lists.openstack.org;
 OpenStack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live

Hmm.
I see both populated with all of the relevant sessions. Can you send me a
screencap of what you're seeing?

Arkady.Kanevsky at dell.com
April 30, 2018 at 10:58 AM
Both are currently empty.

From: Jimmy McArthur [mailto:jimmy at openstack.org]
Sent: Monday, April 30, 2018 10:48 AM
To: Amy Marrich
Cc: OpenStack Development Mailing List (not for usage questions);
 OpenStack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live

Project Updates are in their own track:
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223
As are SIG, BoF and Working Groups:
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218

Jimmy McArthur
April 30, 2018 at 10:47 AM
Project Updates are in their own track:
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223
As are SIG, BoF and Working Groups:
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Amy Marrich
April 30, 2018 at 10:44 AM
Emilien,

I believe the Project Updates are separate from the Forum. I know I saw
some in the schedule before the Forum submittals were even closed. Maybe
contact speaker support, or Jimmy will answer here.

Thanks,
Amy (spotz)

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Emilien Macchi
April 30, 2018 at 10:33 AM
Hello all -

Please take a look here for the posted Forum schedule:
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224
You should also see it update on your Summit App.

Why doesn't TripleO have a project update? Maybe we could combine it with
TripleO - Project Onboarding if needed, but it would be great to have it
advertised as a project update!

Thanks,
--
Emilien Macchi

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Jimmy McArthur
April 27, 2018 at 11:04 AM
Hello all -

Please take a look here for the posted Forum schedule:
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224
You should also see it update on your Summit App.

Thank you and see you in Vancouver!
Jimmy

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Capture1.png
Type: image/png
Size: 121994 bytes
Desc: Capture1.png
URL:

From jimmy at openstack.org Mon Apr 30 18:22:10 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Mon, 30 Apr 2018 13:22:10 -0500
Subject: [Openstack-operators] [openstack-dev] The Forum Schedule is now live
In-Reply-To:
References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org>
 <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM>
 <5AE742AF.2010106@openstack.org>
Message-ID: <5AE75ED2.5020506@openstack.org>

We don't support deprecated browsers, I'm afraid.

> Arkady.Kanevsky at dell.com
> April 30, 2018 at 1:14 PM
>
> Interesting. It does work on Chrome but not on IE. Here is an IE
> screenshot.
>
> Thanks,
> Arkady
>
> From: Jimmy McArthur [mailto:jimmy at openstack.org]
> Sent: Monday, April 30, 2018 11:22 AM
> To: Kanevsky, Arkady
> Cc: amy at demarco.com; openstack-dev at lists.openstack.org;
> OpenStack-operators at lists.openstack.org
> Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is
> now live
>
> Hmm. I see both populated with all of the relevant sessions. Can you
> send me a screencap of what you're seeing?
>
> [earlier messages in the thread snipped]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Arkady.Kanevsky at dell.com Mon Apr 30 19:40:05 2018
From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com)
Date: Mon, 30 Apr 2018 19:40:05 +0000
Subject: [Openstack-operators] [openstack-dev] The Forum Schedule is now live
In-Reply-To: <5AE75ED2.5020506@openstack.org>
References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org>
 <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM>
 <5AE742AF.2010106@openstack.org> <5AE75ED2.5020506@openstack.org>
Message-ID: <0349cc9ad93344b88868d0288ddee485@AUSX13MPS308.AMER.DELL.COM>

LOL

From: Jimmy McArthur [mailto:jimmy at openstack.org]
Sent: Monday, April 30, 2018 1:22 PM
To: Kanevsky, Arkady
Cc: amy at demarco.com; openstack-dev at lists.openstack.org;
 OpenStack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live

We don't support deprecated browsers, I'm afraid.

[earlier messages in the thread snipped]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mriedemos at gmail.com Mon Apr 30 21:28:55 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Mon, 30 Apr 2018 16:28:55 -0500
Subject: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey
In-Reply-To:
References:
Message-ID: <5333196f-eab5-3343-7476-cfffddb5c299@gmail.com>

On 4/30/2018 11:41 AM, Mathieu Gagné wrote:
> [6] Used to filter Ironic nodes based on the 'reserved_for_user_id'
> Ironic node property.
> This is mainly used when enrolling existing nodes already living
> on a different system.
> We reserve the node for a special internal user so the customer
> cannot reserve the node by mistake until the process is completed.
> The latest version of Nova dropped user_id from RequestSpec. We had
> to add it back.

See https://review.openstack.org/#/c/565340/ for context on the
regression mentioned about RequestSpec.user_id.

Thanks Mathieu for jumping in #openstack-nova and discussing it.

--
Thanks,
Matt
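For anyone curious how the reservation workflow Mathieu describes might be
driven, a rough sketch follows. The 'reserved_for_user_id' property name
comes from his message, but the "onboarding-svc" user is hypothetical, and
the scheduler filter that enforces the property is custom, out-of-tree
code rather than anything that ships with Nova.

  # Reserve an enrolled node for an internal service user so customers
  # cannot land instances on it while onboarding is in progress.
  # (The "onboarding-svc" user is made up; substitute your own.)
  openstack baremetal node set <node-uuid> --property \
      reserved_for_user_id=$(openstack user show onboarding-svc -f value -c id)

  # The custom filter rejects hosts whose node property does not match the
  # requesting user. Once onboarding completes, clear the property:
  openstack baremetal node unset <node-uuid> --property reserved_for_user_id

Since such a filter needs the requester's user_id at scheduling time, the
RequestSpec regression Matt links above is exactly the kind of change this
sort of out-of-tree code can trip over.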