From zhipengh512 at gmail.com Sat Mar 3 10:39:08 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Sat, 3 Mar 2018 10:39:08 +0000 Subject: [Openstack-operators] [tc][ltm]Long Term Maintenance mode discussion Message-ID: Hi operators, users and public cloud providers, I would like to draw your attention to a discussion[0] happening now on the Technical Committee side about introducing a resolution on how to better provide long term maintenance for stable branches. As I understand this is a widely desired feature so please help reviewing the patch with your insight. For public cloud providers , if you have any questions we could discuss on #openstack-publiccloud and then Tobias or me could relay any concerns you might have but find it difficult to directly comment on the patch. Thanks :) [0]https://review.openstack.org/#/c/548916/ -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From thingee at gmail.com Mon Mar 5 23:15:38 2018 From: thingee at gmail.com (Mike Perez) Date: Mon, 5 Mar 2018 15:15:38 -0800 Subject: [Openstack-operators] [forum] Brainstorming Topics for Vancouver 2018 Message-ID: <20180305231538.GF32596@gmail.com> Hi all, Welcome to the topic selection process for our Forum in Vancouver. Note that this is not a classic conference track with speakers and presentations. OpenStack community members (participants in development teams, SIGS, working groups, and other interested individuals) discuss the topics they want to cover and get alignment on and we welcome your participation. The Forum is for the entire community to come together, to create a neutral space rather than having separate "ops" and "dev" days. Users should should aim to come with ideas for for the next release, gather feedback on the past version and have strategic discussions that go beyond just one release cycle. We aim to ensure the broadest coverage of topics that will allow for multiple parts of the community getting together to discuss key areas within our community/projects. There are two stages to the brainstorming: 1. Starting today, set up an etherpad with your team and start discussing ideas you'd like to talk about at the Forum and work out which ones to submit - just like you did prior to the design summit. 2. Then, in a couple of weeks, we will open up a more formal web-based tool for you to submit abstracts for the most popular sessions that came out of your brainstorming. Make an etherpad and add it to the list at: https://wiki.openstack.org/wiki/Forum/Vancouver2018 One key thing we'd like to see (as always?) is cross-project collaboration, and discussion between every area of the community. Try to see if there is an interested working group on the user side to add to your ideas. 
Examples of typical discussions that include multiple parts of the community getting together to discuss: * Strategic, whole-of-community discussions, to think about the big picture, including beyond just one release cycle and new technologies o eg Making OpenStack One Platform for containers/VMs/Bare Metal (Strategic session) the entire community congregates to share opinions on how to make OpenStack achieve its integration engine goal * Cross-project sessions, in a similar vein to what has happened at past design summits, but with increased emphasis on issues that are of relevant to all areas of the community o eg Rolling Upgrades at Scale (Cross-Project session) -- the Large Deployments Team collaborates with Nova, Cinder and Keystone to tackle issues that come up with rolling upgrades when there's a large number of machines. * Project-specific sessions, where developers can ask users specific questions about their experience, users can provide feedback from the last release and cross-community collaboration on the priorities and 'blue sky' ideas for the next release. o eg Neutron Pain Points (Project-Specific session) -- Co-organized by neutron developers and users. Neutron developers bring some specific questions they want answered, Neutron users bring feedback from the latest release and ideas about the future. Think about what kind of session ideas might end up as: Project-specific, cross-project or strategic/whole-of-community discussions. There'll be more slots for the latter two, so do try and think outside the box! This part of the process is where we gather broad community consensus - in theory the second part is just about fitting in as many of the good ideas into the schedule as we can. Further details about the forum can be found at: https://wiki.openstack.org/wiki/Forum -- Mike Perez (thingee) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From mihalis68 at gmail.com Tue Mar 6 02:04:07 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 6 Mar 2018 11:04:07 +0900 Subject: [Openstack-operators] Tokyo Ops Meetup - attendees muster! Message-ID: <0AAC2DC3-A6C1-40D9-A171-5F9ECF622EFD@gmail.com> I’m in Tokyo for the Ops Meetup tomorrow and I wonder who else is around. I was wondering if people want to make a discussion or chat room? I’ve got WhatsApp, Slack, Google Hangouts etc or we can just respond to this thread ? I’m at hotel villa fontaine across the street from granpark Chris Sent from my iPhone From mizuno.shintaro at lab.ntt.co.jp Tue Mar 6 04:14:18 2018 From: mizuno.shintaro at lab.ntt.co.jp (Shintaro Mizuno) Date: Tue, 6 Mar 2018 13:14:18 +0900 Subject: [Openstack-operators] Tokyo Ops Meetup - attendees muster! In-Reply-To: <0AAC2DC3-A6C1-40D9-A171-5F9ECF622EFD@gmail.com> References: <0AAC2DC3-A6C1-40D9-A171-5F9ECF622EFD@gmail.com> Message-ID: <6ca5a98e-36fc-098f-116c-c6e81b5442c0@lab.ntt.co.jp> Hi Chris, Welcome to Tokyo! I hope your flight didn't get affected by the storm we had last night. We had a slack channel in MEX OpsMeetup and it worked well for general announcement for attendees. Etherpad may be enough for the purpose, but with my phone, slack works better. Regards, Shintaro On 2018/03/06 11:04, Chris Morgan wrote: > I’m in Tokyo for the Ops Meetup tomorrow and I wonder who else is around. I was wondering if people want to make a discussion or chat room? 
I’ve got WhatsApp, Slack, Google Hangouts etc or we can just respond to this thread ? > > I’m at hotel villa fontaine across the street from granpark > > Chris > > Sent from my iPhone > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Shintaro MIZUNO (水野伸太郎) NTT Software Innovation Center TEL: 0422-59-4977 E-mail: mizuno.shintaro at lab.ntt.co.jp From sean.mcginnis at gmx.com Tue Mar 6 04:21:51 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 5 Mar 2018 22:21:51 -0600 Subject: [Openstack-operators] Tokyo Ops Meetup - attendees muster! In-Reply-To: <6ca5a98e-36fc-098f-116c-c6e81b5442c0@lab.ntt.co.jp> References: <0AAC2DC3-A6C1-40D9-A171-5F9ECF622EFD@gmail.com> <6ca5a98e-36fc-098f-116c-c6e81b5442c0@lab.ntt.co.jp> Message-ID: I’m at the Hotel Gracery. Really rather not install slack, but if that works well for everyone then that’s fine. I’ll probably wander around the area to find dinner tonight. If I don’t meet up with others, see you at the meet up tomorrow. Sean > On Mar 5, 2018, at 22:14, Shintaro Mizuno wrote: > > Hi Chris, > > Welcome to Tokyo! > I hope your flight didn't get affected by the storm we had last night. > > We had a slack channel in MEX OpsMeetup and it worked well for general announcement for attendees. > Etherpad may be enough for the purpose, but with my phone, slack works better. > > Regards, > Shintaro > > On 2018/03/06 11:04, Chris Morgan wrote: >> I’m in Tokyo for the Ops Meetup tomorrow and I wonder who else is around. I was wondering if people want to make a discussion or chat room? I’ve got WhatsApp, Slack, Google Hangouts etc or we can just respond to this thread ? >> >> I’m at hotel villa fontaine across the street from granpark >> >> Chris >> >> Sent from my iPhone >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > > -- > Shintaro MIZUNO (水野伸太郎) > NTT Software Innovation Center > TEL: 0422-59-4977 > E-mail: mizuno.shintaro at lab.ntt.co.jp > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From mihalis68 at gmail.com Tue Mar 6 04:56:58 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 06 Mar 2018 04:56:58 +0000 Subject: [Openstack-operators] Tokyo Ops Meetup - attendees muster! In-Reply-To: <6ca5a98e-36fc-098f-116c-c6e81b5442c0@lab.ntt.co.jp> References: <0AAC2DC3-A6C1-40D9-A171-5F9ECF622EFD@gmail.com> <6ca5a98e-36fc-098f-116c-c6e81b5442c0@lab.ntt.co.jp> Message-ID: On Tue, Mar 6, 2018 at 1:18 PM Shintaro Mizuno < mizuno.shintaro at lab.ntt.co.jp> wrote: > Hi Chris, > > Welcome to Tokyo! > I hope your flight didn't get affected by the storm we had last night. My flight was delayed but otherwise fine. I’m having more difficulty with being without my corporate Amex but that’s a different story! > > We had a slack channel in MEX OpsMeetup and it worked well for general > announcement for attendees. > Etherpad may be enough for the purpose, but with my phone, slack works > better. I see Sean would rather not do slack. Is google hangouts better? Or I must say WhatsApp works quite well for a group chat I have with Bloomberg friends. 
It’s lighter than slack or etherpad for mobile use (I find etherpad needs nothing less than a decent laptop) Chris > > Regards, > Shintaro > > On 2018/03/06 11:04, Chris Morgan wrote: > > I’m in Tokyo for the Ops Meetup tomorrow and I wonder who else is > around. I was wondering if people want to make a discussion or chat room? > I’ve got WhatsApp, Slack, Google Hangouts etc or we can just respond to > this thread ? > > > > I’m at hotel villa fontaine across the street from granpark > > > > Chris > > > > Sent from my iPhone > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > -- > Shintaro MIZUNO (水野伸太郎) > NTT Software Innovation Center > TEL: 0422-59-4977 > E-mail: mizuno.shintaro at lab.ntt.co.jp > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Sent from Rivendell -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Mar 6 04:58:20 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 06 Mar 2018 04:58:20 +0000 Subject: [Openstack-operators] Tokyo Ops Meetup - attendees muster! In-Reply-To: References: <0AAC2DC3-A6C1-40D9-A171-5F9ECF622EFD@gmail.com> <6ca5a98e-36fc-098f-116c-c6e81b5442c0@lab.ntt.co.jp> Message-ID: On Tue, Mar 6, 2018 at 1:25 PM Sean McGinnis wrote: > I’m at the Hotel Gracery. Really rather not install slack, but if that > works well for everyone then that’s fine. If there’s something better for all than slack I’m game! > > I’ll probably wander around the area to find dinner tonight. If I don’t > meet up with others, see you at the meet up tomorrow. Due to hook up with Erik McCormick. Will ping you one way or another! Chris > > Sean > > > On Mar 5, 2018, at 22:14, Shintaro Mizuno > wrote: > > > > Hi Chris, > > > > Welcome to Tokyo! > > I hope your flight didn't get affected by the storm we had last night. > > > > We had a slack channel in MEX OpsMeetup and it worked well for general > announcement for attendees. > > Etherpad may be enough for the purpose, but with my phone, slack works > better. > > > > Regards, > > Shintaro > > > > On 2018/03/06 11:04, Chris Morgan wrote: > >> I’m in Tokyo for the Ops Meetup tomorrow and I wonder who else is > around. I was wondering if people want to make a discussion or chat room? > I’ve got WhatsApp, Slack, Google Hangouts etc or we can just respond to > this thread ? 
> >> > >> I’m at hotel villa fontaine across the street from granpark > >> > >> Chris > >> > >> Sent from my iPhone > >> _______________________________________________ > >> OpenStack-operators mailing list > >> OpenStack-operators at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > > > > > > -- > > Shintaro MIZUNO (水野伸太郎) > > NTT Software Innovation Center > > TEL: 0422-59-4977 > > E-mail: mizuno.shintaro at lab.ntt.co.jp > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Sent from Rivendell -------------- next part -------------- An HTML attachment was scrubbed... URL: From mizuno.shintaro at lab.ntt.co.jp Tue Mar 6 05:31:20 2018 From: mizuno.shintaro at lab.ntt.co.jp (Shintaro Mizuno) Date: Tue, 6 Mar 2018 14:31:20 +0900 Subject: [Openstack-operators] Tokyo Ops Meetup - attendees muster! In-Reply-To: References: <0AAC2DC3-A6C1-40D9-A171-5F9ECF622EFD@gmail.com> <6ca5a98e-36fc-098f-116c-c6e81b5442c0@lab.ntt.co.jp> Message-ID: Hi Chris, Hangout, WhatsApp is fine, too :) On 2018/03/06 13:56, Chris Morgan wrote: > > On Tue, Mar 6, 2018 at 1:18 PM Shintaro Mizuno > > > wrote: > > Hi Chris, > > Welcome to Tokyo! > I hope your flight didn't get affected by the storm we had last night. > > > > My flight was delayed but otherwise fine. I’m having more difficulty > with being without my corporate Amex but that’s a different story! > > > > > We had a slack channel in MEX OpsMeetup and it worked well for general > announcement for attendees. > Etherpad may be enough for the purpose, but with my phone, slack works > better. > > > I see Sean would rather not do slack. Is google hangouts better? Or I > must say WhatsApp works quite well for a group chat I have with > Bloomberg friends. It’s lighter than slack or etherpad for mobile use (I > find etherpad needs nothing less than a decent laptop) > > Chris > > > > > Regards, > Shintaro > > On 2018/03/06 11:04, Chris Morgan wrote: > > I’m in Tokyo for the Ops Meetup tomorrow and I wonder who else is > around. I was wondering if people want to make a discussion or chat > room? I’ve got WhatsApp, Slack, Google Hangouts etc or we can just > respond to this thread ? 
> > > > I’m at hotel villa fontaine across the street from granpark > > > > Chris > > > > Sent from my iPhone > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > -- > Shintaro MIZUNO (水野伸太郎) > NTT Software Innovation Center > TEL: 0422-59-4977 > E-mail: mizuno.shintaro at lab.ntt.co.jp > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- > Sent from Rivendell -- Shintaro MIZUNO (水野伸太郎) NTT Software Innovation Center TEL: 0422-59-4977 E-mail: mizuno.shintaro at lab.ntt.co.jp From sean.mcginnis at gmx.com Tue Mar 6 05:36:03 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 5 Mar 2018 23:36:03 -0600 Subject: [Openstack-operators] Tokyo Ops Meetup - attendees muster! In-Reply-To: References: <0AAC2DC3-A6C1-40D9-A171-5F9ECF622EFD@gmail.com> <6ca5a98e-36fc-098f-116c-c6e81b5442c0@lab.ntt.co.jp> Message-ID: <5E1FCFAC-4A60-4E3A-B20C-4CD33489D247@gmx.com> I have both set up already on my phone. Either WhatsApp or Hangouts work fine for me. > On Mar 5, 2018, at 23:31, Shintaro Mizuno wrote: > > Hi Chris, > > Hangout, WhatsApp is fine, too :) From tbechtold at suse.com Tue Mar 6 09:35:48 2018 From: tbechtold at suse.com (Thomas Bechtold) Date: Tue, 6 Mar 2018 10:35:48 +0100 Subject: [Openstack-operators] Queens packages for openSUSE and SLES available Message-ID: <8303ee9f-922c-c50f-8ac2-88d6172519fe@suse.com> Hi, Queens packages for openSUSE and SLES are now available at: http://download.opensuse.org/repositories/Cloud:/OpenStack:/Queens/ We maintain + test the packages for SLES 12SP3 and openSUSE Leap 42.3. If you find issues, please do not hesitate to report them to opensuse-cloud at opensuse.org or to https://bugzilla.opensuse.org/ Thanks and have a lot of fun, Tom From dtantsur at redhat.com Tue Mar 6 11:11:53 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 6 Mar 2018 12:11:53 +0100 Subject: [Openstack-operators] [ironic] heads-up: classic drivers deprecation and future removal Message-ID: Hi all, As you may already know, we have deprecated classic drivers in the Queens release. We don't have specific removal plans yet. But according to the deprecation policy we may remove them at any time after May 1st, which will be half way to Rocky milestone 2. Personally, I'd like to do it around then. The `online_data_migrations` script will handle migrating nodes, if all required hardware interfaces and types are enabled before the upgrade to Queens. Otherwise, check the documentation [1] on how to update your nodes. Dmitry [1] https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html From rosmaita.fossdev at gmail.com Tue Mar 6 11:52:28 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 6 Mar 2018 06:52:28 -0500 Subject: [Openstack-operators] [glance] fixing OSSN-0075 Message-ID: Hello Operators, The spec for a fix of OSSN-0075, "Deleted Glance image IDs may be reassigned", has been revised after discussions at the PTG last week and is ready for your comments. As you may be aware, the spec has been held up over disagreement about the proper way to fix the issue, but the Glance team has agreed on a way forward in which the interoperability and backward compatibility aspects win out. 
Please read through the spec and leave comments before Tuesday 13 March at 12:00 UTC: https://review.openstack.org/#/c/468179/ Thanks! From mrhillsman at gmail.com Tue Mar 6 12:53:29 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Tue, 6 Mar 2018 06:53:29 -0600 Subject: [Openstack-operators] Stable Branch EOL and "Extended Maintenance" Resolution Message-ID: Hi everyone, If you are interested in the items in the subject please be sure to take time to review and comment on the following patch - https://review.openstack.org/#/c/548916/ -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue Mar 6 15:30:57 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 6 Mar 2018 10:30:57 -0500 Subject: [Openstack-operators] [glance] proposal to deprecate owner_is_tenant option Message-ID: Hello Operators, There's a spec-lite up to deprecate the owner_is_tenant option, with the goal being to eliminate the option so that the owner of an image is always the project (tenant). Based on a survey of operators in March 2017, no one is using the option in its non-default configuration, so no migration path is proposed. Please leave comments on the gerrit review before 12:00 UTC on Tuesday 13 March: https://review.openstack.org/#/c/550096/ Thanks! From mihalis68 at gmail.com Wed Mar 7 04:16:07 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 7 Mar 2018 13:16:07 +0900 Subject: [Openstack-operators] Stable Branch EOL and "Extended Maintenance" Resolution In-Reply-To: References: Message-ID: Thanks for pointing this one out! Chris On Tue, Mar 6, 2018 at 9:53 PM, Melvin Hillsman wrote: > Hi everyone, > > If you are interested in the items in the subject please be sure to take > time to review and comment on the following patch - > https://review.openstack.org/#/c/548916/ > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From dev.faz at gmail.com Wed Mar 7 07:54:12 2018 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Wed, 7 Mar 2018 08:54:12 +0100 Subject: [Openstack-operators] sporadic missing vxlan-tunnel-port assignment Message-ID: <68784b6e-1e41-7431-04a6-392bbd8a7786@googlemail.com> Hi, we currently see sporadic communication problems. After some research we found out, that this is caused by missing tunnel-port assignments in table 21 of openvswitch. 
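For reference, this is roughly how we compare the vxlan tunnel ports ovs-vsctl knows about with the output ports the flood flows actually reference (a sketch only; bridge name and flood table number are the usual ML2/OVS agent defaults):

# which vxlan tunnel ports exist on br-tun, and which remote_ip do they point to?
ovs-vsctl --columns=name,options find Interface type=vxlan

# which output ports do the flood flows reference?
ovs-ofctl dump-flows br-tun table=22

# map the OpenFlow port numbers from the flow dump back to port names
ovs-ofctl show br-tun

On a healthy node every known vxlan peer for a segment ends up as an output port in that segment's flood flow.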
Today we had the issue again and here the logs of the add_fdb_entries calls at the affected system: neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent method add_fdb_entries called with arguments (,) {u'fdb_entries': {u'cd2baf3d-427c-41be-be56-7cbb8176067f': {u'segment_id': 96, u'ports': {u'10.78.23.12': [[u'00:00:00:00:00:00', u'0.0.0.0'], [u'fa:16:3e:d0:a0:77', u'192.168.0.2']]}, u'network_type': u'vxlan'}}} neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent method add_fdb_entries called with arguments (,) {u'fdb_entries': {u'cd2baf3d-427c-41be-be56-7cbb8176067f': {u'segment_id': 96, u'ports': {u'10.78.23.11': [[u'00:00:00:00:00:00', u'0.0.0.0'], [u'fa:16:3e:29:0c:d5', u'192.168.0.3']]}, u'network_type': u'vxlan'}}} neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent method add_fdb_entries called with arguments (,) {u'fdb_entries': {u'cd2baf3d-427c-41be-be56-7cbb8176067f': {u'segment_id': 96, u'ports': {u'10.78.12.101': []}, u'network_type': u'vxlan'}}} neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent method add_fdb_entries called with arguments (,) {u'fdb_entries': {u'cd2baf3d-427c-41be-be56-7cbb8176067f': {u'segment_id': 96, u'ports': {u'10.79.20.102': [[u'00:00:00:00:00:00', u'0.0.0.0']]}, u'network_type': u'vxlan'}}} The missing tunnel-port is the connection to 10.78.12.101, it looks like the empty array/dict may cause this issue. Any hints how to further debug the situation? What may cause an empty dict in add_fdb_entries? Thanks a lot, Fabian Zimmermann From dev.faz at gmail.com Wed Mar 7 07:56:25 2018 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Wed, 7 Mar 2018 08:56:25 +0100 Subject: [Openstack-operators] sporadic missing vxlan-tunnel-port assignment In-Reply-To: <68784b6e-1e41-7431-04a6-392bbd8a7786@googlemail.com> References: <68784b6e-1e41-7431-04a6-392bbd8a7786@googlemail.com> Message-ID: <1c068fd8-25c2-8ee4-b9d6-0a1021ecaae1@gmail.com> Hi, sorry, it is table 22.  Fabian From thierry at openstack.org Wed Mar 7 14:46:52 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 7 Mar 2018 15:46:52 +0100 Subject: [Openstack-operators] Pointer to the release cycles vs. downstream consuming models PTG discussion summary Message-ID: <6c2110f9-73b9-c376-5eb1-c85e5003e501@openstack.org> Hi! On Tuesday afternoon of the PTG week we had a track of discussions to brainstorm how to better align our release cycle and stable branch maintenance with the OpenStack downstream consumption models. I posted a summary at: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128005.html Happy reading! -- Thierry Carrez (ttx) From zhipengh512 at gmail.com Wed Mar 7 16:36:08 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 8 Mar 2018 00:36:08 +0800 Subject: [Openstack-operators] [openstack-dev] [keystone] [oslo] new unified limit library In-Reply-To: <0C7BCB2F-BE9C-4B8B-8344-0DA03F16BA9A@cern.ch> References: <5AA005E0.7050808@windriver.com> <4a8db303-318d-c385-c350-ef25702d8b20@gmail.com> <60EC27CD-7F2F-4328-A09D-94CB92ED7988@cern.ch> <0C7BCB2F-BE9C-4B8B-8344-0DA03F16BA9A@cern.ch> Message-ID: This is certainly a feature will make Public Cloud providers very happy :) On Thu, Mar 8, 2018 at 12:33 AM, Tim Bell wrote: > Sorry, I remember more detail now... it was using the 'owner' of the VM as > part of the policy rather than quota. > > Is there a per-user/per-group quota in Nova? 
> > Tim > > -----Original Message----- > From: Tim Bell > Reply-To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Date: Wednesday, 7 March 2018 at 17:29 > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library > > > There was discussion that Nova would deprecate the user quota feature > since it really didn't fit well with the 'projects own resources' approach > and was little used. At one point, some of the functionality stopped > working and was repaired. The use case we had identified goes away if you > have 2 level deep nested quotas (and we have now worked around it). > > Tim > -----Original Message----- > From: Lance Bragstad > Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > Date: Wednesday, 7 March 2018 at 16:51 > To: "openstack-dev at lists.openstack.org" openstack.org> > Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit > library > > > > On 03/07/2018 09:31 AM, Chris Friesen wrote: > > On 03/07/2018 08:58 AM, Lance Bragstad wrote: > >> Hi all, > >> > ] > > > > 1) Nova currently supports quotas for a user/group tuple that > can be > > stricter than the overall quotas for that group. As far as I > know no > > other project supports this. > ... > I think the initial implementation of a unified limit pattern is > targeting limits and quotas for things associated to projects. In > the > future, we can probably expand on the limit information in > keystone to > include user-specific limits, which would be great if nova wants > to move > away from handling that kind of stuff. > > > > Chris > > > > ____________________________________________________________ > ______________ > > > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > > > > > ____________________________________________________________ > ______________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Tim.Bell at cern.ch Wed Mar 7 16:44:10 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 7 Mar 2018 16:44:10 +0000 Subject: [Openstack-operators] [Openstack-sigs] [openstack-dev] [keystone] [oslo] new unified limit library In-Reply-To: References: <5AA005E0.7050808@windriver.com> <4a8db303-318d-c385-c350-ef25702d8b20@gmail.com> <60EC27CD-7F2F-4328-A09D-94CB92ED7988@cern.ch> <0C7BCB2F-BE9C-4B8B-8344-0DA03F16BA9A@cern.ch> Message-ID: I think nested quotas would give the same thing, i.e. you have a parent project for the group and child projects for the users. This would not need user/group quotas but continue with the ‘project owns resources’ approach. It can be generalised to other use cases like the value add partner or the research experiment working groups (http://openstack-in-production.blogspot.fr/2017/07/nested-quota-models.html) Tim From: Zhipeng Huang Reply-To: "openstack-sigs at lists.openstack.org" Date: Wednesday, 7 March 2018 at 17:37 To: "OpenStack Development Mailing List (not for usage questions)" , openstack-operators , "openstack-sigs at lists.openstack.org" Subject: Re: [Openstack-sigs] [openstack-dev] [keystone] [oslo] new unified limit library This is certainly a feature will make Public Cloud providers very happy :) On Thu, Mar 8, 2018 at 12:33 AM, Tim Bell > wrote: Sorry, I remember more detail now... it was using the 'owner' of the VM as part of the policy rather than quota. Is there a per-user/per-group quota in Nova? Tim -----Original Message----- From: Tim Bell > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 March 2018 at 17:29 To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library There was discussion that Nova would deprecate the user quota feature since it really didn't fit well with the 'projects own resources' approach and was little used. At one point, some of the functionality stopped working and was repaired. The use case we had identified goes away if you have 2 level deep nested quotas (and we have now worked around it). Tim -----Original Message----- From: Lance Bragstad > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 March 2018 at 16:51 To: "openstack-dev at lists.openstack.org" > Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library On 03/07/2018 09:31 AM, Chris Friesen wrote: > On 03/07/2018 08:58 AM, Lance Bragstad wrote: >> Hi all, >> ] > > 1) Nova currently supports quotas for a user/group tuple that can be > stricter than the overall quotas for that group. As far as I know no > other project supports this. ... I think the initial implementation of a unified limit pattern is targeting limits and quotas for things associated to projects. In the future, we can probably expand on the limit information in keystone to include user-specific limits, which would be great if nova wants to move away from handling that kind of stuff. 
> > Chris > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Wed Mar 7 21:55:53 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 7 Mar 2018 15:55:53 -0600 Subject: [Openstack-operators] [openstack-dev] [Openstack-sigs] [keystone] [oslo] new unified limit library In-Reply-To: References: <5AA005E0.7050808@windriver.com> <4a8db303-318d-c385-c350-ef25702d8b20@gmail.com> <60EC27CD-7F2F-4328-A09D-94CB92ED7988@cern.ch> <0C7BCB2F-BE9C-4B8B-8344-0DA03F16BA9A@cern.ch> Message-ID: <5AA05FE9.6050708@windriver.com> On 03/07/2018 10:44 AM, Tim Bell wrote: > I think nested quotas would give the same thing, i.e. you have a parent project > for the group and child projects for the users. This would not need user/group > quotas but continue with the ‘project owns resources’ approach. Agreed, I think that if we support nested quotas with a suitable depth of nesting it could be used to handle the existing nova user/project quotas. Chris From mriedemos at gmail.com Wed Mar 7 23:57:38 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 7 Mar 2018 17:57:38 -0600 Subject: [Openstack-operators] nova 17.0.1 released (queens) Message-ID: <17079676-25b5-1b21-4c17-bdcc5b1203a6@gmail.com> I just wanted to give a heads up to anyone thinking about upgrading to queens that nova has released a 17.0.1 patch release [1]. There are some pretty important fixes in there that came up after the queens GA so if you haven't upgraded yet, I recommend going straight to that one instead of 17.0.0. [1] https://review.openstack.org/#/c/550620/ -- Thanks, Matt From openstack at medberry.net Thu Mar 8 00:10:39 2018 From: openstack at medberry.net (David Medberry) Date: Wed, 7 Mar 2018 17:10:39 -0700 Subject: [Openstack-operators] nova 17.0.1 released (queens) In-Reply-To: <17079676-25b5-1b21-4c17-bdcc5b1203a6@gmail.com> References: <17079676-25b5-1b21-4c17-bdcc5b1203a6@gmail.com> Message-ID: Thanks for the headsup Matt. On Wed, Mar 7, 2018 at 4:57 PM, Matt Riedemann wrote: > I just wanted to give a heads up to anyone thinking about upgrading to > queens that nova has released a 17.0.1 patch release [1]. 
> > There are some pretty important fixes in there that came up after the > queens GA so if you haven't upgraded yet, I recommend going straight to > that one instead of 17.0.0. > > [1] https://review.openstack.org/#/c/550620/ > > -- > > Thanks, > > Matt > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirubak at zadarastorage.com Thu Mar 8 10:39:12 2018 From: kirubak at zadarastorage.com (Kirubakaran Kaliannan) Date: Thu, 8 Mar 2018 16:09:12 +0530 Subject: [Openstack-operators] swift - 1 Billion object on a single container Message-ID: <9e6f6bd399b045351e8f176948810609@mail.gmail.com> Hi All, I am seeing lot of threads and discussion on large container issues. We also have a spec ( https://specs.openstack.org/openstack/swift-specs/specs/in_progress/container_sharding.html) to fix this in the future releases. I have the container-ring/account-ring on the SSD devices. I started seeing the drop in performance after crossing 10 million objects. Do we have any performance number taken on large container or suggestion on alternatives (other than distributing object into multiple containers – which is not possible due to the nature of the application) to improve this performance ? Appreciate your responses. Thanks. Thanks, -kiru -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne at openstack.org Thu Mar 8 13:42:44 2018 From: anne at openstack.org (Anne Bertucio) Date: Thu, 8 Mar 2018 22:42:44 +0900 Subject: [Openstack-operators] Stable Branch EOL and "Extended Maintenance" Resolution In-Reply-To: References: Message-ID: <5F54EB7C-EC87-415C-815B-3E4881092E6F@openstack.org> Hi all, Given our conversations this morning at the Ops Midcycle about Extended Maintenance, particularly how projects individually deciding stable maintenance policies would affect operators, I wanted to pop this to the top of your inbox again. The thread is actively moving, so it’d be good to get your operator input in there: https://review.openstack.org/#/c/548916/ Anne Bertucio OpenStack Foundation anne at openstack.org | irc: annabelleB > On Mar 7, 2018, at 1:16 PM, Chris Morgan wrote: > > Thanks for pointing this one out! > > Chris > > On Tue, Mar 6, 2018 at 9:53 PM, Melvin Hillsman > wrote: > Hi everyone, > > If you are interested in the items in the subject please be sure to take time to review and comment on the following patch - https://review.openstack.org/#/c/548916/ > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > -- > Chris Morgan > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From clay.gerrard at gmail.com Thu Mar 8 19:03:26 2018 From: clay.gerrard at gmail.com (Clay Gerrard) Date: Thu, 8 Mar 2018 11:03:26 -0800 Subject: [Openstack-operators] swift - 1 Billion object on a single container In-Reply-To: <9e6f6bd399b045351e8f176948810609@mail.gmail.com> References: <9e6f6bd399b045351e8f176948810609@mail.gmail.com> Message-ID: On Thu, Mar 8, 2018 at 2:39 AM, Kirubakaran Kaliannan < kirubak at zadarastorage.com> wrote: > > > Hi All, > > > > I am seeing lot of threads and discussion on large container issues. We > also have a spec (https://specs.openstack.org/openstack/swift-specs/specs/ > in_progress/container_sharding.html) to fix this in the future releases. > > > Yes, very soon - lots of progress on the feature/deep branch https://review.openstack.org/#/q/status:merged+project:openstack/swift+branch:feature/deep > > > Do we have any performance number taken on large container or suggestion > on alternatives (other than distributing object into multiple containers – > which is not possible due to the nature of the application) to improve this > performance ? > you can turn down the container_update_timeout: https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample#L64 -Clay -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Mar 8 19:44:59 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 8 Mar 2018 13:44:59 -0600 Subject: [Openstack-operators] [openstack-dev] [ptg] Release cycles vs. downstream consuming models discussion summary In-Reply-To: <6e283dce-a1ce-4878-2af8-8441beb3dc33@openstack.org> References: <6e283dce-a1ce-4878-2af8-8441beb3dc33@openstack.org> Message-ID: <4789132d-19f5-7afa-6f0b-5a5f4764dce4@gmail.com> On 3/7/2018 8:43 AM, Thierry Carrez wrote: > mriedem volunteered to work on a TC resolution to define > what we exactly meant by that (the proposal is now being discussed at > https://review.openstack.org/#/c/548916/). A new revision is now up for this after much discussion in the review itself and in the #openstack-tc channel today: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-08.log.html#t2018-03-08T16:17:03 https://review.openstack.org/#/c/548916/ I'm quite sure it's now perfect in every form and all stakeholders will be equally elated at its magnificence. -- Thanks, Matt From mriedemos at gmail.com Thu Mar 8 20:13:15 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 8 Mar 2018 14:13:15 -0600 Subject: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects In-Reply-To: References: Message-ID: On 2/5/2018 11:44 PM, Massimo Sgaravatto wrote: > But if I try to specify the long list of projects, I get:a "Value ... is > too long" error message [*]. > > I can see two workarounds for this problem: > > 1) Create an host aggregate per project: > > HA1 including CA1, C2, ... Cx and with filter_tenant_id=p1 > HA2 including CA1, C2, ... Cx and with filter_tenant_id=p2 > etc > > 2) Use the AggregateInstanceExtraSpecsFilter, creating two aggregates > and having each flavor visible only by a set of projects, and tagged > with a specific string that should match the value specified in the > correspondent host aggregate > > Is this correct ? Can you see better options ? This problem came up in the public cloud WG meeting at the PTG last week. 
The issue is that the host aggregate metadata value is limited to 255 characters so you're pretty severely restricted in the number of projects you can isolate to that host aggregate. There were two ideas that I remember getting discussed for possible solutions: 1. The filter could grow support for domains (or some other fancy keystone construct) such that you could nest projects and then just isolate the root project/domain to that host aggregate. I'm not sharp on keystone stuff so would need more input here, but this might not be a great solution if nova has to ask keystone for this information per run through the filters - that could get expensive. If the information is in the user request context (token) then maybe that would work. 2. Dan Smith mentioned another idea such that we could index the aggregate metadata keys like filter_tenant_id0, filter_tenant_id1, ... filter_tenant_idN and then combine those so you have one host aggregate filter_tenant_id* key per tenant. -- Thanks, Matt From dms at danplanet.com Thu Mar 8 20:19:02 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 08 Mar 2018 12:19:02 -0800 Subject: [Openstack-operators] [openstack-dev] AggregateMultiTenancyIsolation with multiple (many) projects In-Reply-To: (Matt Riedemann's message of "Thu, 8 Mar 2018 14:13:15 -0600") References: Message-ID: > 2. Dan Smith mentioned another idea such that we could index the > aggregate metadata keys like filter_tenant_id0, filter_tenant_id1, > ... filter_tenant_idN and then combine those so you have one host > aggregate filter_tenant_id* key per tenant. Yep, and that's what I've done in my request_filter implementation: https://review.openstack.org/#/c/545002/9/nova/scheduler/request_filter.py Basically it allows any suffix to 'filter_tenant_id' to be processed as a potentially-matching key. Note that I'm hoping we can deprecate/remove the post filter and replace it with this much more efficient version. --Dan From thingee at gmail.com Sat Mar 10 03:42:55 2018 From: thingee at gmail.com (Mike Perez) Date: Fri, 9 Mar 2018 19:42:55 -0800 Subject: [Openstack-operators] Developer Mailing List Digest March 3-9th Message-ID: <20180310034255.GG32596@gmail.com> HTML version: https://www.openstack.org/blog/?p=8361 Contribute to the Dev Digest by summarizing OpenStack Dev List threads: * https://etherpad.openstack.org/p/devdigest * http://lists.openstack.org/pipermail/openstack-dev/ * http://lists.openstack.org/pipermail/openstack-sigs Success Bot Says ================ * kong: Qinling now supports Node.js runtime(experimental) * AJaeger: Jenkins user and jenkins directory on images are gone. /usr/local/jenkins is only created for legacy jobs * eumel8: Zanata 4 is now here [0] * smcginnis: Queens has been released!! * kong: welcome openstackstatus to #openstack-qinling channel! * Tell us yours in OpenStack IRC channels using the command "#success " * More: https://wiki.openstack.org/wiki/Successes [0] - https://www.translate.openstack.org Thanks Bot Says =============== * Thanks dhellmann for setting up community wide goals + good use of storyboard [0] * Thanks ianw for kind help on upgrading to Zanata 4 which has much better UI and improved APIs! 
* Tell us yours in OpenStack IRC channels using the command "#thanks " * More: https://wiki.openstack.org/wiki/Thanks [0] - https://storyboard.openstack.org/#!/project/923 Community Summaries =================== * Release countdown [0] * TC report [1] * Technical Committee status update [2] [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128036.html [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127991.html [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128098.html PTG Summaries ============= Here's some summaries that people wrote for their project at the PTG: * Documentation and i18n [0] * First Contact SIG [1] * Cinder [2] * Mistral [3] * Interop [4] * QA [5] * Release cycle versus downstream consuming models [6] * Nova Placements [7] * Kolla [8] * Oslo [9] * Ironic [10] * Cyborg [11] [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127936.html [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127937.html [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127968.html [3] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127988.html [4] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127994.html [5] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128002.html [6] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128005.html [7] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128041.html [8] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128044.html [9] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128055.html [10] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128085.html [11] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128094.html OpenStack Queens is Officially Released! ======================================== Congratulations to all the teams who contributed to this release! Release notes of different projects for Queens are available [0] and a list of projects [1] that still need to approve their release note patches! [0] - https://releases.openstack.org/queens/ [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127813.html Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127812.html Release Cycles vs. Downstream consumers PTG Summary =================================================== Notes can be found on the original etherpad [0]. 
A TC resolution is in review [1] TLDR summary: * No consensus on longer / shorter release cycles * Focus on FFU to make upgrades less painful * Longer stable branch maintenance time (18 months for Ocata) * Bootstrap the "extended maintenance" concept with common policy * Group most impacted by release cadence are packagers/distros/vendors * Need for finer user survey questions on upgrade models * Need more data and more discussion, next discussion at Vancouver forum * Release Management team tracks it between events [0] - https://etherpad.openstack.org/p/release-cycles-ptg-rocky [1] - https://review.openstack.org/#/c/548916/ Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128005.html Pros and Cons of face-to-face Meetings ====================================== Some contributors might not be able to attend the PTG for various reasons: * Health issues * Privilege issues (like not getting visa or travel permits) * Caretaking responsibilities (children, other family, animals, plants) * Environmental concerns There is a concern if this is preventing us from meeting our four opens [1] if people are not able to attend the events. The PTG sessions are not recorded, but there is a super user article on how teams can do this themselves [0]. At the PTG in Denver, the OpenStack Foundation provided bluetooth speakers for teams to help with remote participation. Consensus is this may not be trivial for everyone and it could still be a challenge for remote participants due to things like audio quality. Some people at the PTG in Dublin due to the weather had to participate remotely from their hotel room and felt it challenging to partipate. [0] - http://superuser.openstack.org/articles/community-participation-remote/ Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-March/thread.html#128043 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From kirubak at zadarastorage.com Mon Mar 12 05:51:09 2018 From: kirubak at zadarastorage.com (Kirubakaran Kaliannan) Date: Mon, 12 Mar 2018 11:21:09 +0530 Subject: [Openstack-operators] swift - 1 Billion object on a single container In-Reply-To: References: <9e6f6bd399b045351e8f176948810609@mail.gmail.com> Message-ID: <4913c42e22345f666cf003521c44b1fe@mail.gmail.com> Thanks Clay. Any possibility, this container sharding can be expected in the Queen release ? Thanks -kiru *From:* Clay Gerrard [mailto:clay.gerrard at gmail.com] *Sent:* Friday, March 09, 2018 12:33 AM *To:* Kirubakaran Kaliannan *Cc:* openstack-operators at lists.openstack.org *Subject:* Re: [Openstack-operators] swift - 1 Billion object on a single container On Thu, Mar 8, 2018 at 2:39 AM, Kirubakaran Kaliannan < kirubak at zadarastorage.com> wrote: Hi All, I am seeing lot of threads and discussion on large container issues. We also have a spec ( https://specs.openstack.org/openstack/swift-specs/specs/in_progress/container_sharding.html) to fix this in the future releases. Yes, very soon - lots of progress on the feature/deep branch https://review.openstack.org/#/q/status:merged+project:openstack/swift+branch:feature/deep Do we have any performance number taken on large container or suggestion on alternatives (other than distributing object into multiple containers – which is not possible due to the nature of the application) to improve this performance ? 
you can turn down the container_update_timeout: https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample#L64 -Clay -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Mon Mar 12 13:12:30 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 12 Mar 2018 08:12:30 -0500 Subject: [Openstack-operators] Reminder: UC Meeting Today - 14:00 UTC #openstack-uc Message-ID: Hi everyone, Friendly reminder we have a UC meeting today in #openstack-uc on freenode at 14:00UTC Agenda: https://wiki.openstack.org/wiki/Governance/Foundation/ UserCommittee#Meeting_Agenda.2FPrevious_Meeting_Logs -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvanwink at rackspace.com Mon Mar 12 14:17:57 2018 From: mvanwink at rackspace.com (Matt Van Winkle) Date: Mon, 12 Mar 2018 14:17:57 +0000 Subject: [Openstack-operators] [User-committee] Reminder: UC Meeting Today - 14:00 UTC #openstack-uc In-Reply-To: References: Message-ID: Hey folks, Traveling with family for the kids Spring Break. Trying to join but hotel WiFi not co-operating Get Outlook for iOS ________________________________ From: Melvin Hillsman Sent: Monday, March 12, 2018 7:12:30 AM To: user-committee; OpenStack Operators Subject: [User-committee] Reminder: UC Meeting Today - 14:00 UTC #openstack-uc Hi everyone, Friendly reminder we have a UC meeting today in #openstack-uc on freenode at 14:00UTC Agenda: https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Meeting_Agenda.2FPrevious_Meeting_Logs -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at not.mn Mon Mar 12 16:29:12 2018 From: me at not.mn (John Dickinson) Date: Mon, 12 Mar 2018 09:29:12 -0700 Subject: [Openstack-operators] swift - 1 Billion object on a single container In-Reply-To: <4913c42e22345f666cf003521c44b1fe@mail.gmail.com> References: <9e6f6bd399b045351e8f176948810609@mail.gmail.com> <4913c42e22345f666cf003521c44b1fe@mail.gmail.com> Message-ID: Container sharding is something we're actively working on. I hope we're able to deliver something during the current OpenStack cycle (Rocky, IIRC), but at this point all I'm saying is that we're actively working on it and we'll release it when it's ready. --John On 11 Mar 2018, at 22:51, Kirubakaran Kaliannan wrote: > Thanks Clay. > > > > Any possibility, this container sharding can be expected in the Queen > release ? > > > > Thanks > > -kiru > > > > *From:* Clay Gerrard [mailto:clay.gerrard at gmail.com] > *Sent:* Friday, March 09, 2018 12:33 AM > *To:* Kirubakaran Kaliannan > *Cc:* openstack-operators at lists.openstack.org > *Subject:* Re: [Openstack-operators] swift - 1 Billion object on a single > container > > > > > > > > On Thu, Mar 8, 2018 at 2:39 AM, Kirubakaran Kaliannan < > kirubak at zadarastorage.com> wrote: > > > > Hi All, > > > > I am seeing lot of threads and discussion on large container issues. We > also have a spec ( > https://specs.openstack.org/openstack/swift-specs/specs/in_progress/container_sharding.html) > to fix this in the future releases. 
> > > > > > Yes, very soon - lots of progress on the feature/deep branch > > > > https://review.openstack.org/#/q/status:merged+project:openstack/swift+branch:feature/deep > > > > > > Do we have any performance number taken on large container or suggestion > on alternatives (other than distributing object into multiple containers – > which is not possible due to the nature of the application) to improve this > performance ? > > > > you can turn down the container_update_timeout: > > > > https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample#L64 > > > > -Clay > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From lars at redhat.com Mon Mar 12 19:21:13 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 12 Mar 2018 15:21:13 -0400 Subject: [Openstack-operators] How are you handling billing/chargeback? Message-ID: <20180312192113.znz4eavfze5zg7yn@redhat.com> Hey folks, I'm curious what folks out there are using for chargeback/billing in your OpenStack environment. Are you doing any sort of chargeback (or showback)? Are you using (or have you tried) CloudKitty? Or some other existing project? Have you rolled your own instead? I ask because I am helping out some folks get a handle on the operational side of their existing OpenStack environment, and they are interested in but have not yet deployed some sort of reporting mechanism. Thanks, -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From gael.therond at gmail.com Mon Mar 12 20:40:42 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Mon, 12 Mar 2018 20:40:42 +0000 Subject: [Openstack-operators] How are you handling billing/chargeback? In-Reply-To: <20180312192113.znz4eavfze5zg7yn@redhat.com> References: <20180312192113.znz4eavfze5zg7yn@redhat.com> Message-ID: Hi lars, personally using an internally crafted service. It’s one of my main regret with Openstack, lack of a decent billing system. Le lun. 12 mars 2018 à 20:22, Lars Kellogg-Stedman a écrit : > Hey folks, > > I'm curious what folks out there are using for chargeback/billing in > your OpenStack environment. > > Are you doing any sort of chargeback (or showback)? Are you using (or > have you tried) CloudKitty? Or some other existing project? Have you > rolled your own instead? > > I ask because I am helping out some folks get a handle on the > operational side of their existing OpenStack environment, and they are > interested in but have not yet deployed some sort of reporting > mechanism. > > Thanks, > > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/ | > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vondra at homeatcloud.cz Tue Mar 13 09:03:21 2018 From: vondra at homeatcloud.cz (=?UTF-8?Q?Tom=C3=A1=C5=A1_Vondra?=) Date: Tue, 13 Mar 2018 10:03:21 +0100 Subject: [Openstack-operators] How are you handling billing/chargeback? 
In-Reply-To: References: <20180312192113.znz4eavfze5zg7yn@redhat.com> Message-ID: <012101d3baaa$2367fae0$6a37f0a0$@homeatcloud.cz> Hi! We at Homeatcloud have rolled our own engine taking data from Ceilometer events. However, CloudKitty didn‘t exist back then. Now we would probably use it to calculate the rating AND roll our own engine for billing and invoice printing. Tomas From: Flint WALRUS [mailto:gael.therond at gmail.com] Sent: Monday, March 12, 2018 9:41 PM To: Lars Kellogg-Stedman Cc: openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] How are you handling billing/chargeback? Hi lars, personally using an internally crafted service. It’s one of my main regret with Openstack, lack of a decent billing system. Le lun. 12 mars 2018 à 20:22, Lars Kellogg-Stedman a écrit : Hey folks, I'm curious what folks out there are using for chargeback/billing in your OpenStack environment. Are you doing any sort of chargeback (or showback)? Are you using (or have you tried) CloudKitty? Or some other existing project? Have you rolled your own instead? I ask because I am helping out some folks get a handle on the operational side of their existing OpenStack environment, and they are interested in but have not yet deployed some sort of reporting mechanism. Thanks, -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Mar 13 10:21:24 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 13 Mar 2018 06:21:24 -0400 Subject: [Openstack-operators] No Ops Meetups team meeting this week Message-ID: There will be no team meeting this week for the Ops Meetups team this week as those of us who made it to the Tokyo meetup last week catch up on other things. Normal meetings will resume next week and we will also put together a Meetup recap/lessons-learned note to be shared here shortly. Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From benedikt.trefzer at cirrax.com Tue Mar 13 11:20:45 2018 From: benedikt.trefzer at cirrax.com (Benedikt Trefzer) Date: Tue, 13 Mar 2018 12:20:45 +0100 Subject: [Openstack-operators] How are you handling billing/chargeback? In-Reply-To: <012101d3baaa$2367fae0$6a37f0a0$@homeatcloud.cz> References: <20180312192113.znz4eavfze5zg7yn@redhat.com> <012101d3baaa$2367fae0$6a37f0a0$@homeatcloud.cz> Message-ID: Hi  same here, despite we do not use events but audit data from ceilometer. Benedikt On 13.03.2018 10:03, Tomáš Vondra wrote: > > Hi! > > We at Homeatcloud have rolled our own engine taking data from > Ceilometer events. However, CloudKitty didn‘t exist back then. Now we > would probably use it to calculate the rating AND roll our own engine > for billing and invoice printing. > > Tomas > >   > > *From:*Flint WALRUS [mailto:gael.therond at gmail.com] > *Sent:* Monday, March 12, 2018 9:41 PM > *To:* Lars Kellogg-Stedman > *Cc:* openstack-operators at lists.openstack.org > *Subject:* Re: [Openstack-operators] How are you handling > billing/chargeback? > >   > > Hi lars, personally using an internally crafted service. > > It’s one of my main regret with Openstack, lack of a decent billing > system. > > Le lun. 
12 mars 2018 à 20:22, Lars Kellogg-Stedman > a écrit : > > Hey folks, > > I'm curious what folks out there are using for chargeback/billing in > your OpenStack environment. > > Are you doing any sort of chargeback (or showback)?  Are you using (or > have you tried) CloudKitty?  Or some other existing project?  Have you > rolled your own instead? > > I ask because I am helping out some folks get a handle on the > operational side of their existing OpenStack environment, and they are > interested in but have not yet deployed some sort of reporting > mechanism. > > Thanks, > > -- > Lars Kellogg-Stedman > | > larsks @ {irc,twitter,github} > http://blog.oddbit.com/                | > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From christophe.sauthier at objectif-libre.com Tue Mar 13 11:29:57 2018 From: christophe.sauthier at objectif-libre.com (Christophe Sauthier) Date: Tue, 13 Mar 2018 12:29:57 +0100 Subject: [Openstack-operators] =?utf-8?q?How_are_you_handling_billing/char?= =?utf-8?q?geback=3F?= In-Reply-To: <012101d3baaa$2367fae0$6a37f0a0$@homeatcloud.cz> References: <20180312192113.znz4eavfze5zg7yn@redhat.com> <012101d3baaa$2367fae0$6a37f0a0$@homeatcloud.cz> Message-ID: <5b2197da7431cba608d01e4c1915c8f5@objectif-libre.com> Le 2018-03-13 10:03, Tomáš Vondra a écrit : > Hi! > > We at Homeatcloud have rolled our own engine taking data from > Ceilometer events. However, CloudKitty didn‘t exist back then. Now we > would probably use it to calculate the rating AND roll our own engine > for billing and invoice printing. The case you are mentionning is clearly the scope of cloudkitty. So if you are willing to have some informations I'd be happy to exchange with you (by mail or huats on IRC). Regards Christophe Sauthier ---- Christophe Sauthier CEO Objectif Libre : Au service de votre Cloud +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com www.objectif-libre.com | @objectiflibre | www.linkedin.com/company/objectif-libre Recevez la Pause Cloud Et DevOps : olib.re/abo-pause From tobias at citynetwork.se Tue Mar 13 12:25:52 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Tue, 13 Mar 2018 13:25:52 +0100 Subject: [Openstack-operators] [publiccloud-wg] Poll new meeting time and bi-weekly meeting Message-ID: <4ccfe58f-4e22-90e9-83f8-24aa6398552e@citynetwork.se> Hi folks, We have under some time had requests of changing the current time for our bi-weekly meetings. Not very many suggestions of new time slots have ended up in my inbox, so have added a few suggestions my self. The plan is to have this set and do final voting at tomorrows meeting. Reply to this email if you have other suggestions and I can add those as well. Please mark the alternatives that works for you no later than tomorrow 1400UTC. Doodle link: https://doodle.com/poll/2kv4h79xypmathac Tomorrows meeting will be held as planned in #openstack-meeting-3 at 1400 UTC. 
Agenda can be found at: https://etherpad.openstack.org/p/publiccloud-wg S -- Tobias Rydberg Senior Developer Mobile: +46 733 312780 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From simon.leinen at switch.ch Tue Mar 13 22:11:27 2018 From: simon.leinen at switch.ch (Simon Leinen) Date: Tue, 13 Mar 2018 23:11:27 +0100 Subject: [Openstack-operators] How are you handling billing/chargeback? In-Reply-To: <20180312192113.znz4eavfze5zg7yn@redhat.com> (Lars Kellogg-Stedman's message of "Mon, 12 Mar 2018 15:21:13 -0400") References: <20180312192113.znz4eavfze5zg7yn@redhat.com> Message-ID: Lars Kellogg-Stedman writes: > I'm curious what folks out there are using for chargeback/billing in > your OpenStack environment. We use a homegrown billing system that periodically samples utilization of billable resources. (We turned off Ceilometer a few years ago because we weren't really using it and found that it caused us trouble.) -- Simon. > Are you doing any sort of chargeback (or showback)? Are you using (or > have you tried) CloudKitty? Or some other existing project? Have you > rolled your own instead? > I ask because I am helping out some folks get a handle on the > operational side of their existing OpenStack environment, and they are > interested in but have not yet deployed some sort of reporting > mechanism. From jomlowe at iu.edu Tue Mar 13 23:21:39 2018 From: jomlowe at iu.edu (Mike Lowe) Date: Tue, 13 Mar 2018 19:21:39 -0400 Subject: [Openstack-operators] How are you handling billing/chargeback? In-Reply-To: References: <20180312192113.znz4eavfze5zg7yn@redhat.com> Message-ID: Ceilometer (now panko) vm exists events coerced to look like jobs from a HPC batch system. Sent from my iPad > On Mar 13, 2018, at 6:11 PM, Simon Leinen wrote: > > Lars Kellogg-Stedman writes: >> I'm curious what folks out there are using for chargeback/billing in >> your OpenStack environment. > > We use a homegrown billing system that periodically samples utilization > of billable resources. > > (We turned off Ceilometer a few years ago because we weren't really > using it and found that it caused us trouble.) > -- > Simon. > >> Are you doing any sort of chargeback (or showback)? Are you using (or >> have you tried) CloudKitty? Or some other existing project? Have you >> rolled your own instead? > >> I ask because I am helping out some folks get a handle on the >> operational side of their existing OpenStack environment, and they are >> interested in but have not yet deployed some sort of reporting >> mechanism. > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From pabelanger at redhat.com Tue Mar 13 23:58:59 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 13 Mar 2018 19:58:59 -0400 Subject: [Openstack-operators] Poll: S Release Naming Message-ID: <20180313235859.GA14573@localhost.localdomain> Greetings all, It is time again to cast your vote for the naming of the S Release. This time is little different as we've decided to use a public polling option over per user private URLs for voting. 
This means, everybody should proceed to use the following URL to cast their vote: https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1&akey=8cfdc1f5df5fe4d3 Because this is a public poll, results will currently be only viewable by myself until the poll closes. Once closed, I'll post the URL making the results viewable to everybody. This was done to avoid everybody seeing the results while the public poll is running. The poll will officially end on 2018-03-21 23:59:59[1], and results will be posted shortly after. [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst --- According to the Release Naming Process, this poll is to determine the community preferences for the name of the R release of OpenStack. It is possible that the top choice is not viable for legal reasons, so the second or later community preference could wind up being the name. Release Name Criteria Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of "Austin". After "Z", the next name should start with "A" again. The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable. The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process. The name must be a single word with a maximum of 10 characters. Words that describe the feature should not be included, so "Foo City" or "Foo Peak" would both be eligible as "Foo". Names which do not meet these criteria but otherwise sound really cool should be added to a separate section of the wiki page and the TC may make an exception for one or more of them to be considered in the Condorcet poll. The naming official is responsible for presenting the list of exceptional names for consideration to the TC before the poll opens. 
Exact Geographic Region The Geographic Region from where names for the S release will come is Berlin Proposed Names Spree (a river that flows through the Saxony, Brandenburg and Berlin states of Germany) SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin) Spandau (One of the twelve boroughs of Berlin) Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently abbreviated as 🍺) Steglitz (a locality in the South Western part of the city) Springer (Berlin is headquarters of Axel Springer publishing house) Staaken (a locality within the Spandau borough) Schoenholz (A zone in the Niederschönhausen district of Berlin) Shellhaus (A famous office building) Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg) Schiller (A park in the Mitte borough) Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood) (The adjective form, Saatwinkler is also a really cool bridge but that form is too long) Sonne (Sonnenallee is the name of a large street in Berlin crossing the former wall, also translates as "sun") Savigny (Common place in City-West) Soorstreet (Street in Berlin restrict Charlottenburg) Solar (Skybar in Berlin) See (Seestraße or "See Street" in Berlin) Thanks, Paul From slawek at kaplonski.pl Wed Mar 14 08:21:20 2018 From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdvbWlyIEthcMWCb8WEc2tp?=) Date: Wed, 14 Mar 2018 09:21:20 +0100 Subject: [Openstack-operators] [openstack-dev] Poll: S Release Naming In-Reply-To: <20180313235859.GA14573@localhost.localdomain> References: <20180313235859.GA14573@localhost.localdomain> Message-ID: <7E7A7CA7-7A5D-4428-95CF-6E47F31F96F3@kaplonski.pl> Hi, Are You sure this link is good? I just tried it and I got info that "Already voted" which isn't true in fact :) — Best regards Slawek Kaplonski slawek at kaplonski.pl > Wiadomość napisana przez Paul Belanger w dniu 14.03.2018, o godz. 00:58: > > Greetings all, > > It is time again to cast your vote for the naming of the S Release. This time > is little different as we've decided to use a public polling option over per > user private URLs for voting. This means, everybody should proceed to use the > following URL to cast their vote: > > https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1&akey=8cfdc1f5df5fe4d3 > > Because this is a public poll, results will currently be only viewable by myself > until the poll closes. Once closed, I'll post the URL making the results > viewable to everybody. This was done to avoid everybody seeing the results while > the public poll is running. > > The poll will officially end on 2018-03-21 23:59:59[1], and results will be > posted shortly after. > > [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst > --- > > According to the Release Naming Process, this poll is to determine the > community preferences for the name of the R release of OpenStack. It is > possible that the top choice is not viable for legal reasons, so the second or > later community preference could wind up being the name. > > Release Name Criteria > > Each release name must start with the letter of the ISO basic Latin alphabet > following the initial letter of the previous release, starting with the > initial release of "Austin". After "Z", the next name should start with > "A" again. > > The name must be composed only of the 26 characters of the ISO basic Latin > alphabet. Names which can be transliterated into this character set are also > acceptable. 
> > The name must refer to the physical or human geography of the region > encompassing the location of the OpenStack design summit for the > corresponding release. The exact boundaries of the geographic region under > consideration must be declared before the opening of nominations, as part of > the initiation of the selection process. > > The name must be a single word with a maximum of 10 characters. Words that > describe the feature should not be included, so "Foo City" or "Foo Peak" > would both be eligible as "Foo". > > Names which do not meet these criteria but otherwise sound really cool > should be added to a separate section of the wiki page and the TC may make > an exception for one or more of them to be considered in the Condorcet poll. > The naming official is responsible for presenting the list of exceptional > names for consideration to the TC before the poll opens. > > Exact Geographic Region > > The Geographic Region from where names for the S release will come is Berlin > > Proposed Names > > Spree (a river that flows through the Saxony, Brandenburg and Berlin states of > Germany) > > SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin) > > Spandau (One of the twelve boroughs of Berlin) > > Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently > abbreviated as 🍺) > > Steglitz (a locality in the South Western part of the city) > > Springer (Berlin is headquarters of Axel Springer publishing house) > > Staaken (a locality within the Spandau borough) > > Schoenholz (A zone in the Niederschönhausen district of Berlin) > > Shellhaus (A famous office building) > > Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg) > > Schiller (A park in the Mitte borough) > > Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood) > (The adjective form, Saatwinkler is also a really cool bridge but > that form is too long) > > Sonne (Sonnenallee is the name of a large street in Berlin crossing the former > wall, also translates as "sun") > > Savigny (Common place in City-West) > > Soorstreet (Street in Berlin restrict Charlottenburg) > > Solar (Skybar in Berlin) > > See (Seestraße or "See Street" in Berlin) > > Thanks, > Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From j.harbott at x-ion.de Wed Mar 14 08:34:07 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Wed, 14 Mar 2018 09:34:07 +0100 Subject: [Openstack-operators] [openstack-dev] Poll: S Release Naming In-Reply-To: <7E7A7CA7-7A5D-4428-95CF-6E47F31F96F3@kaplonski.pl> References: <20180313235859.GA14573@localhost.localdomain> <7E7A7CA7-7A5D-4428-95CF-6E47F31F96F3@kaplonski.pl> Message-ID: 2018-03-14 9:21 GMT+01:00 Sławomir Kapłoński : > Hi, > > Are You sure this link is good? I just tried it and I got info that "Already voted" which isn't true in fact :) Comparing with previous polls, these should be personalized links that need to be sent out to each voter individually, so I agree that this looks like a mistake. >> Wiadomość napisana przez Paul Belanger w dniu 14.03.2018, o godz. 00:58: >> >> Greetings all, >> >> It is time again to cast your vote for the naming of the S Release. This time >> is little different as we've decided to use a public polling option over per >> user private URLs for voting. 
This means, everybody should proceed to use the >> following URL to cast their vote: >> >> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1&akey=8cfdc1f5df5fe4d3 >> >> Because this is a public poll, results will currently be only viewable by myself >> until the poll closes. Once closed, I'll post the URL making the results >> viewable to everybody. This was done to avoid everybody seeing the results while >> the public poll is running. >> >> The poll will officially end on 2018-03-21 23:59:59[1], and results will be >> posted shortly after. >> >> [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst >> --- >> >> According to the Release Naming Process, this poll is to determine the >> community preferences for the name of the R release of OpenStack. It is >> possible that the top choice is not viable for legal reasons, so the second or >> later community preference could wind up being the name. >> >> Release Name Criteria >> >> Each release name must start with the letter of the ISO basic Latin alphabet >> following the initial letter of the previous release, starting with the >> initial release of "Austin". After "Z", the next name should start with >> "A" again. >> >> The name must be composed only of the 26 characters of the ISO basic Latin >> alphabet. Names which can be transliterated into this character set are also >> acceptable. >> >> The name must refer to the physical or human geography of the region >> encompassing the location of the OpenStack design summit for the >> corresponding release. The exact boundaries of the geographic region under >> consideration must be declared before the opening of nominations, as part of >> the initiation of the selection process. >> >> The name must be a single word with a maximum of 10 characters. Words that >> describe the feature should not be included, so "Foo City" or "Foo Peak" >> would both be eligible as "Foo". >> >> Names which do not meet these criteria but otherwise sound really cool >> should be added to a separate section of the wiki page and the TC may make >> an exception for one or more of them to be considered in the Condorcet poll. >> The naming official is responsible for presenting the list of exceptional >> names for consideration to the TC before the poll opens. 
>> >> Exact Geographic Region >> >> The Geographic Region from where names for the S release will come is Berlin >> >> Proposed Names >> >> Spree (a river that flows through the Saxony, Brandenburg and Berlin states of >> Germany) >> >> SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin) >> >> Spandau (One of the twelve boroughs of Berlin) >> >> Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently >> abbreviated as 🍺) >> >> Steglitz (a locality in the South Western part of the city) >> >> Springer (Berlin is headquarters of Axel Springer publishing house) >> >> Staaken (a locality within the Spandau borough) >> >> Schoenholz (A zone in the Niederschönhausen district of Berlin) >> >> Shellhaus (A famous office building) >> >> Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg) >> >> Schiller (A park in the Mitte borough) >> >> Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood) >> (The adjective form, Saatwinkler is also a really cool bridge but >> that form is too long) >> >> Sonne (Sonnenallee is the name of a large street in Berlin crossing the former >> wall, also translates as "sun") >> >> Savigny (Common place in City-West) >> >> Soorstreet (Street in Berlin restrict Charlottenburg) >> >> Solar (Skybar in Berlin) >> >> See (Seestraße or "See Street" in Berlin) >> >> Thanks, >> Paul >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From thierry at openstack.org Wed Mar 14 09:05:34 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 14 Mar 2018 10:05:34 +0100 Subject: [Openstack-operators] [openstack-dev] Poll: S Release Naming In-Reply-To: References: <20180313235859.GA14573@localhost.localdomain> <7E7A7CA7-7A5D-4428-95CF-6E47F31F96F3@kaplonski.pl> Message-ID: <49541945-e517-83ee-bec8-216ad669fea3@openstack.org> Jens Harbott wrote: > 2018-03-14 9:21 GMT+01:00 Sławomir Kapłoński : >> Hi, >> >> Are You sure this link is good? I just tried it and I got info that "Already voted" which isn't true in fact :) > > Comparing with previous polls, these should be personalized links that > need to be sent out to each voter individually, so I agree that this > looks like a mistake. We crashed CIVS for the last naming with a private poll sent to all the Foundation membership, so the TC decided to use public (open) polling this time around. Anyone with the link can vote, nothing was sent to each of the voters individually. The "Already voted" error might be due to CIVS limiting public polling to one entry per IP, and a colleague of yours already voted... Maybe try from another IP address ? 
-- Thierry Carrez (ttx) From slawek at kaplonski.pl Wed Mar 14 09:16:30 2018 From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdvbWlyIEthcMWCb8WEc2tp?=) Date: Wed, 14 Mar 2018 10:16:30 +0100 Subject: [Openstack-operators] [openstack-dev] Poll: S Release Naming In-Reply-To: <49541945-e517-83ee-bec8-216ad669fea3@openstack.org> References: <20180313235859.GA14573@localhost.localdomain> <7E7A7CA7-7A5D-4428-95CF-6E47F31F96F3@kaplonski.pl> <49541945-e517-83ee-bec8-216ad669fea3@openstack.org> Message-ID: <88B4EEE3-8058-48AA-AB7E-5A77E6D932A3@kaplonski.pl> Indeed. I now tried from different IP address and I was able to vote. Thx a lot for help. — Best regards Slawek Kaplonski slawek at kaplonski.pl > Wiadomość napisana przez Thierry Carrez w dniu 14.03.2018, o godz. 10:05: > > Jens Harbott wrote: >> 2018-03-14 9:21 GMT+01:00 Sławomir Kapłoński : >>> Hi, >>> >>> Are You sure this link is good? I just tried it and I got info that "Already voted" which isn't true in fact :) >> >> Comparing with previous polls, these should be personalized links that >> need to be sent out to each voter individually, so I agree that this >> looks like a mistake. > > We crashed CIVS for the last naming with a private poll sent to all the > Foundation membership, so the TC decided to use public (open) polling > this time around. Anyone with the link can vote, nothing was sent to > each of the voters individually. > > The "Already voted" error might be due to CIVS limiting public polling > to one entry per IP, and a colleague of yours already voted... Maybe try > from another IP address ? > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lars at redhat.com Wed Mar 14 13:12:45 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 14 Mar 2018 09:12:45 -0400 Subject: [Openstack-operators] How are you handling billing/chargeback? In-Reply-To: References: <20180312192113.znz4eavfze5zg7yn@redhat.com> Message-ID: <20180314131245.kmgoirzlje5fmtfr@redhat.com> On Tue, Mar 13, 2018 at 07:21:39PM -0400, Mike Lowe wrote: > Ceilometer (now panko) vm exists events coerced to look like jobs from a HPC batch system. Interesting. And are you feeding that into an off-the-shelf HPC accounting system, or did you have an existing locally-developed system in place? -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From mriedemos at gmail.com Wed Mar 14 13:46:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 14 Mar 2018 08:46:41 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: Message-ID: On 3/14/2018 3:42 AM, 李杰 wrote: > >             This is the spec about  rebuild a instance booted from > volume.In the spec,there is a >       question about if we should delete the old root_volume.Anyone who > is interested in >       booted from volume can help to review this. Any suggestion is > welcome.Thank you! >       The link is here. >       Re:the rebuild spec:https://review.openstack.org/#/c/532407/ Copying the operators list and giving some more context. This spec is proposing to add support for rebuild with a new image for volume-backed servers, which today is just a 400 failure in the API since the compute doesn't support that scenario. 
With the proposed solution, the backing root volume would be deleted and a new volume would be created from the new image, similar to how boot from volume works. The question raised in the spec is whether or not nova should delete the root volume even if its delete_on_termination flag is set to False. The semantics get a bit weird here since that flag was not meant for this scenario, it's meant to be used when deleting the server to which the volume is attached. Rebuilding a server is not deleting it, but we would need to replace the root volume, so what do we do with the volume we're replacing? Do we say that delete_on_termination only applies to deleting a server and not rebuild and therefore nova can delete the root volume during a rebuild? If we don't delete the volume during rebuild, we could end up leaving a lot of volumes lying around that the user then has to clean up, otherwise they'll eventually go over quota. We need user (and operator) feedback on this issue and what they would expect to happen. -- Thanks, Matt From Tim.Bell at cern.ch Wed Mar 14 14:10:33 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 14 Mar 2018 14:10:33 +0000 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: Message-ID: <6AC92E2F-2F9D-4B18-8877-361B7877B677@cern.ch> Matt, To add another scenario and make things even more difficult (sorry (), if the original volume has snapshots, I don't think you can delete it. Tim -----Original Message----- From: Matt Riedemann Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 14 March 2018 at 14:55 To: "openstack-dev at lists.openstack.org" , openstack-operators Subject: Re: [openstack-dev] [nova] about rebuild instance booted from volume On 3/14/2018 3:42 AM, 李杰 wrote: > > This is the spec about rebuild a instance booted from > volume.In the spec,there is a > question about if we should delete the old root_volume.Anyone who > is interested in > booted from volume can help to review this. Any suggestion is > welcome.Thank you! > The link is here. > Re:the rebuild spec:https://review.openstack.org/#/c/532407/ Copying the operators list and giving some more context. This spec is proposing to add support for rebuild with a new image for volume-backed servers, which today is just a 400 failure in the API since the compute doesn't support that scenario. With the proposed solution, the backing root volume would be deleted and a new volume would be created from the new image, similar to how boot from volume works. The question raised in the spec is whether or not nova should delete the root volume even if its delete_on_termination flag is set to False. The semantics get a bit weird here since that flag was not meant for this scenario, it's meant to be used when deleting the server to which the volume is attached. Rebuilding a server is not deleting it, but we would need to replace the root volume, so what do we do with the volume we're replacing? Do we say that delete_on_termination only applies to deleting a server and not rebuild and therefore nova can delete the root volume during a rebuild? If we don't delete the volume during rebuild, we could end up leaving a lot of volumes lying around that the user then has to clean up, otherwise they'll eventually go over quota. We need user (and operator) feedback on this issue and what they would expect to happen. 
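For readers who have not used boot from volume, the flag in question is set when the server is first created, for example (a sketch only; check the CLI help for your release for exact option spellings):

  # boot-from-volume server whose root volume should outlive the server
  nova boot --flavor m1.small \
    --block-device source=image,id=<image-uuid>,dest=volume,size=20,shutdown=preserve,bootindex=0 \
    my-server

  # today, asking for a different image on a volume-backed server fails
  # with a 400, which is the behavior the spec wants to change
  nova rebuild my-server <new-image-uuid>

Here shutdown=preserve is what the API stores as delete_on_termination=False; the open question is whether a rebuild should honor that flag or treat the root volume as replaceable.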
-- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mrhillsman at gmail.com Wed Mar 14 14:18:20 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 14 Mar 2018 09:18:20 -0500 Subject: [Openstack-operators] Stable Branch EOL and "Extended Maintenance" Resolution In-Reply-To: <5F54EB7C-EC87-415C-815B-3E4881092E6F@openstack.org> References: <5F54EB7C-EC87-415C-815B-3E4881092E6F@openstack.org> Message-ID: Hi everyone, I believe this resolution is getting close to being passed and so I highly suggest anyone interested provide any feedback they have good/bad/indifferent - https://review.openstack.org/#/c/548916/3/resolutions/20180301-stable-branch-eol.rst On Thu, Mar 8, 2018 at 7:42 AM, Anne Bertucio wrote: > Hi all, > > Given our conversations this morning at the Ops Midcycle about Extended > Maintenance, particularly how projects individually deciding stable > maintenance policies would affect operators, I wanted to pop this to the > top of your inbox again. The thread is actively moving, so it’d be good to > get your operator input in there: https://review.openstack.org/#/c/548916/ > > > Anne Bertucio > OpenStack Foundation > anne at openstack.org | irc: annabelleB > > > > > > On Mar 7, 2018, at 1:16 PM, Chris Morgan wrote: > > Thanks for pointing this one out! > > Chris > > On Tue, Mar 6, 2018 at 9:53 PM, Melvin Hillsman > wrote: > >> Hi everyone, >> >> If you are interested in the items in the subject please be sure to take >> time to review and comment on the following patch - >> https://review.openstack.org/#/c/548916/ >> >> -- >> Kind regards, >> >> Melvin Hillsman >> mrhillsman at gmail.com >> mobile: (832) 264-2646 >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > > > -- > Chris Morgan > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nick at stackhpc.com Wed Mar 14 14:39:04 2018 From: nick at stackhpc.com (Nick Jones) Date: Wed, 14 Mar 2018 14:39:04 +0000 Subject: [Openstack-operators] How are you handling billing/chargeback? In-Reply-To: References: <20180312192113.znz4eavfze5zg7yn@redhat.com> Message-ID: <20180314143904.tpfmhkurtsyjh774@stackhpc.com> On 13/03, Simon Leinen wrote: >Lars Kellogg-Stedman writes: >> I'm curious what folks out there are using for chargeback/billing in >> your OpenStack environment. > >We use a homegrown billing system that periodically samples utilization >of billable resources. We had something similar at DataCentred - a homegrown (Rails-based) application, but one which periodically polled Ceilometer for utilisation information and subsequently performed a bunch of sanity checks. It was open-sourced for posterity here: https://github.com/seanhandley/stronghold This was used to generate billing information with integration into Salesforce. 
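For anyone starting from scratch, the poll-Ceilometer-periodically approach described above reduces to something like the sketch below. Treat it as a shape rather than a drop-in: the credentials are placeholders, cpu_util is only one of the meters you might bill on, and exact client signatures vary by release.

  from ceilometerclient import client

  # placeholder credentials; use whatever auth your deployment requires
  ceilo = client.get_client(
      '2',
      os_username='billing-reader',
      os_password='secret',
      os_tenant_name='admin',
      os_auth_url='http://keystone.example.com:5000/v2.0',
  )

  def hourly_cpu_stats(resource_id):
      # hourly cpu_util statistics for one instance; a real biller would
      # loop over every billable resource and persist the results
      return ceilo.statistics.list(
          meter_name='cpu_util',
          period=3600,
          q=[{'field': 'resource_id', 'op': 'eq', 'value': resource_id}],
      )

  for stat in hourly_cpu_stats('11111111-2222-3333-4444-555555555555'):
      print("%s avg=%s" % (stat.period_start, stat.avg))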
-- -Nick From zioproto at gmail.com Wed Mar 14 14:19:25 2018 From: zioproto at gmail.com (Saverio Proto) Date: Wed, 14 Mar 2018 15:19:25 +0100 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: <6AC92E2F-2F9D-4B18-8877-361B7877B677@cern.ch> References: <6AC92E2F-2F9D-4B18-8877-361B7877B677@cern.ch> Message-ID: My idea is that if delete_on_termination flag is set to False the Volume should never be deleted by Nova. my 2 cents Saverio 2018-03-14 15:10 GMT+01:00 Tim Bell : > Matt, > > To add another scenario and make things even more difficult (sorry (), if the original volume has snapshots, I don't think you can delete it. > > Tim > > > -----Original Message----- > From: Matt Riedemann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 14 March 2018 at 14:55 > To: "openstack-dev at lists.openstack.org" , openstack-operators > Subject: Re: [openstack-dev] [nova] about rebuild instance booted from volume > > On 3/14/2018 3:42 AM, 李杰 wrote: > > > > This is the spec about rebuild a instance booted from > > volume.In the spec,there is a > > question about if we should delete the old root_volume.Anyone who > > is interested in > > booted from volume can help to review this. Any suggestion is > > welcome.Thank you! > > The link is here. > > Re:the rebuild spec:https://review.openstack.org/#/c/532407/ > > Copying the operators list and giving some more context. > > This spec is proposing to add support for rebuild with a new image for > volume-backed servers, which today is just a 400 failure in the API > since the compute doesn't support that scenario. > > With the proposed solution, the backing root volume would be deleted and > a new volume would be created from the new image, similar to how boot > from volume works. > > The question raised in the spec is whether or not nova should delete the > root volume even if its delete_on_termination flag is set to False. The > semantics get a bit weird here since that flag was not meant for this > scenario, it's meant to be used when deleting the server to which the > volume is attached. Rebuilding a server is not deleting it, but we would > need to replace the root volume, so what do we do with the volume we're > replacing? > > Do we say that delete_on_termination only applies to deleting a server > and not rebuild and therefore nova can delete the root volume during a > rebuild? > > If we don't delete the volume during rebuild, we could end up leaving a > lot of volumes lying around that the user then has to clean up, > otherwise they'll eventually go over quota. > > We need user (and operator) feedback on this issue and what they would > expect to happen. 
> > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From blair.bethwaite at gmail.com Wed Mar 14 14:47:40 2018 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Wed, 14 Mar 2018 14:47:40 +0000 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: Message-ID: Please do not default to deleting it, otherwise someone will eventually be back here asking why an irate user has just lost data. The better scenario is that the rebuild will fail (early - before impact to the running instance) with a quota error. Cheers, On Thu., 15 Mar. 2018, 00:46 Matt Riedemann, wrote: > On 3/14/2018 3:42 AM, 李杰 wrote: > > > > This is the spec about rebuild a instance booted from > > volume.In the spec,there is a > > question about if we should delete the old root_volume.Anyone who > > is interested in > > booted from volume can help to review this. Any suggestion is > > welcome.Thank you! > > The link is here. > > Re:the rebuild spec:https://review.openstack.org/#/c/532407/ > > Copying the operators list and giving some more context. > > This spec is proposing to add support for rebuild with a new image for > volume-backed servers, which today is just a 400 failure in the API > since the compute doesn't support that scenario. > > With the proposed solution, the backing root volume would be deleted and > a new volume would be created from the new image, similar to how boot > from volume works. > > The question raised in the spec is whether or not nova should delete the > root volume even if its delete_on_termination flag is set to False. The > semantics get a bit weird here since that flag was not meant for this > scenario, it's meant to be used when deleting the server to which the > volume is attached. Rebuilding a server is not deleting it, but we would > need to replace the root volume, so what do we do with the volume we're > replacing? > > Do we say that delete_on_termination only applies to deleting a server > and not rebuild and therefore nova can delete the root volume during a > rebuild? > > If we don't delete the volume during rebuild, we could end up leaving a > lot of volumes lying around that the user then has to clean up, > otherwise they'll eventually go over quota. > > We need user (and operator) feedback on this issue and what they would > expect to happen. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mbooth at redhat.com Wed Mar 14 15:35:22 2018 From: mbooth at redhat.com (Matthew Booth) Date: Wed, 14 Mar 2018 15:35:22 +0000 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: Message-ID: On 14 March 2018 at 13:46, Matt Riedemann wrote: > On 3/14/2018 3:42 AM, 李杰 wrote: > >> >> This is the spec about rebuild a instance booted from >> volume.In the spec,there is a >> question about if we should delete the old root_volume.Anyone who >> is interested in >> booted from volume can help to review this. Any suggestion is >> welcome.Thank you! >> The link is here. >> Re:the rebuild spec:https://review.openstack.org/#/c/532407/ >> > > Copying the operators list and giving some more context. > > This spec is proposing to add support for rebuild with a new image for > volume-backed servers, which today is just a 400 failure in the API since > the compute doesn't support that scenario. > > With the proposed solution, the backing root volume would be deleted and a > new volume would be created from the new image, similar to how boot from > volume works. > > The question raised in the spec is whether or not nova should delete the > root volume even if its delete_on_termination flag is set to False. The > semantics get a bit weird here since that flag was not meant for this > scenario, it's meant to be used when deleting the server to which the > volume is attached. Rebuilding a server is not deleting it, but we would > need to replace the root volume, so what do we do with the volume we're > replacing? > > Do we say that delete_on_termination only applies to deleting a server and > not rebuild and therefore nova can delete the root volume during a rebuild? > > If we don't delete the volume during rebuild, we could end up leaving a > lot of volumes lying around that the user then has to clean up, otherwise > they'll eventually go over quota. > > We need user (and operator) feedback on this issue and what they would > expect to happen. > My 2c was to overwrite, not delete the volume[1]. I believe this preserves both sets of semantics: the server is rebuilt, and the volume is not deleted. The user will still lose their data, of course, but that's implied by the rebuild they explicitly requested. The volume id will remain the same. [1] I suspect this would require new functionality in cinder to re-initialize from image. Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) -------------- next part -------------- An HTML attachment was scrubbed... URL: From vondra at homeatcloud.cz Wed Mar 14 15:43:37 2018 From: vondra at homeatcloud.cz (=?UTF-8?Q?Tom=C3=A1=C5=A1_Vondra?=) Date: Wed, 14 Mar 2018 16:43:37 +0100 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: <6AC92E2F-2F9D-4B18-8877-361B7877B677@cern.ch> Message-ID: <024e01d3bbab$38f44290$aadcc7b0$@homeatcloud.cz> Hi! I say delete! Delete them all! Really, it's called delete_on_termination and should be ignored on Rebuild. We have a VPS service implemented on top of OpenStack and do throw the old contents away on Rebuild. When the user has the Backup service paid, they can restore a snapshot. Backup is implemented as volume snapshot, then clone volume, then upload to image (glance is on a different disk array). I also sometimes multi-attach a volume manually to a service node and just dd an image onto it. 
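A rough CLI translation of the snapshot, clone, upload-to-image backup flow described just above, with placeholder names, and assuming a reasonably recent openstackclient (the equivalent cinder and glance commands work too):

  # 1. snapshot the root volume
  openstack volume snapshot create --volume <root-volume-id> backup-snap
  # 2. clone a new volume from the snapshot
  openstack volume create --snapshot backup-snap backup-vol
  # 3. upload the clone to Glance as an image
  openstack image create --volume backup-vol backup-image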
If it was to be implemented this way, then there would be no deleting a volume with delete_on_termination, just overwriting. But the effect is the same. IMHO you can have snapshots of volumes that have been deleted. Just some backends like our 3PAR don't allow it, but it's not disallowed in the API contract. Tomas from Homeatcloud -----Original Message----- From: Saverio Proto [mailto:zioproto at gmail.com] Sent: Wednesday, March 14, 2018 3:19 PM To: Tim Bell; Matt Riedemann Cc: OpenStack Development Mailing List (not for usage questions); openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume My idea is that if delete_on_termination flag is set to False the Volume should never be deleted by Nova. my 2 cents Saverio 2018-03-14 15:10 GMT+01:00 Tim Bell : > Matt, > > To add another scenario and make things even more difficult (sorry (), if the original volume has snapshots, I don't think you can delete it. > > Tim > > > -----Original Message----- > From: Matt Riedemann > Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > Date: Wednesday, 14 March 2018 at 14:55 > To: "openstack-dev at lists.openstack.org" > , openstack-operators > > Subject: Re: [openstack-dev] [nova] about rebuild instance booted from > volume > > On 3/14/2018 3:42 AM, 李杰 wrote: > > > > This is the spec about rebuild a instance booted from > > volume.In the spec,there is a > > question about if we should delete the old root_volume.Anyone who > > is interested in > > booted from volume can help to review this. Any suggestion is > > welcome.Thank you! > > The link is here. > > Re:the rebuild spec:https://review.openstack.org/#/c/532407/ > > Copying the operators list and giving some more context. > > This spec is proposing to add support for rebuild with a new image for > volume-backed servers, which today is just a 400 failure in the API > since the compute doesn't support that scenario. > > With the proposed solution, the backing root volume would be deleted and > a new volume would be created from the new image, similar to how boot > from volume works. > > The question raised in the spec is whether or not nova should delete the > root volume even if its delete_on_termination flag is set to False. The > semantics get a bit weird here since that flag was not meant for this > scenario, it's meant to be used when deleting the server to which the > volume is attached. Rebuilding a server is not deleting it, but we would > need to replace the root volume, so what do we do with the volume we're > replacing? > > Do we say that delete_on_termination only applies to deleting a server > and not rebuild and therefore nova can delete the root volume during a > rebuild? > > If we don't delete the volume during rebuild, we could end up leaving a > lot of volumes lying around that the user then has to clean up, > otherwise they'll eventually go over quota. > > We need user (and operator) feedback on this issue and what they would > expect to happen. 
> > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operator > s _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From lars at redhat.com Wed Mar 14 16:11:43 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 14 Mar 2018 12:11:43 -0400 Subject: [Openstack-operators] How are you handling billing/chargeback? In-Reply-To: <20180312192113.znz4eavfze5zg7yn@redhat.com> References: <20180312192113.znz4eavfze5zg7yn@redhat.com> Message-ID: <20180314161143.2w6skkpmyhvixmyj@redhat.com> On Mon, Mar 12, 2018 at 03:21:13PM -0400, Lars Kellogg-Stedman wrote: > I'm curious what folks out there are using for chargeback/billing in > your OpenStack environment. So far it looks like everyone is using a homegrown solution. Is anyone using an existing product/project? -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From tobias.urdin at crystone.com Wed Mar 14 16:31:12 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Wed, 14 Mar 2018 16:31:12 +0000 Subject: [Openstack-operators] How are you handling billing/chargeback? References: <20180312192113.znz4eavfze5zg7yn@redhat.com> <20180314161143.2w6skkpmyhvixmyj@redhat.com> Message-ID: <6c933bf4e974415c8e4db7c2b8eaff11@mb01.staff.ognet.se> We are using the billing engine part of the commercial software provided by Atomia [1]. Using ceilometer as of now, but they just recently added support for Gnocchi which we are gonna use for our newer setups. [1] https://www.atomia.com On 03/14/2018 05:13 PM, Lars Kellogg-Stedman wrote: > On Mon, Mar 12, 2018 at 03:21:13PM -0400, Lars Kellogg-Stedman wrote: >> I'm curious what folks out there are using for chargeback/billing in >> your OpenStack environment. > So far it looks like everyone is using a homegrown solution. Is > anyone using an existing product/project? > From Tim.Bell at cern.ch Wed Mar 14 17:39:29 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 14 Mar 2018 17:39:29 +0000 Subject: [Openstack-operators] How are you handling billing/chargeback? In-Reply-To: <20180314161143.2w6skkpmyhvixmyj@redhat.com> References: <20180312192113.znz4eavfze5zg7yn@redhat.com> <20180314161143.2w6skkpmyhvixmyj@redhat.com> Message-ID: <3E58AE40-309A-493E-A9E2-17897E119B3D@cern.ch> We’re using a combination of cASO (https://caso.readthedocs.io/en/stable/) and some low level libvirt fabric monitoring. The showback accounting reports are generated with merging with other compute/storage usage across various systems (HTCondor, SLURM, ...) It would seem that those who needed solutions in the past found they had to do them themselves. It would be interesting if there are references of usage data/accounting/chargeback at scale with the current project set but doing the re-evaluation would be an effort which would need to be balanced versus just keeping the local solution working. 
Tim -----Original Message----- From: Lars Kellogg-Stedman Date: Wednesday, 14 March 2018 at 17:15 To: openstack-operators Subject: Re: [Openstack-operators] How are you handling billing/chargeback? On Mon, Mar 12, 2018 at 03:21:13PM -0400, Lars Kellogg-Stedman wrote: > I'm curious what folks out there are using for chargeback/billing in > your OpenStack environment. So far it looks like everyone is using a homegrown solution. Is anyone using an existing product/project? -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From aspiers at suse.com Wed Mar 14 18:58:04 2018 From: aspiers at suse.com (Adam Spiers) Date: Wed, 14 Mar 2018 18:58:04 +0000 Subject: [Openstack-operators] [self-healing] Dublin PTG summary, and request for feedback Message-ID: <20180314185804.yqccn2jqyk26lnxk@pacific.linksys.moosehall> Hi all, I just posted a summary of the Self-healing SIG session at the Dublin PTG: http://lists.openstack.org/pipermail/openstack-sigs/2018-March/000317.html If you are interested in the topic of self-healing within OpenStack, you are warmly invited to subscribe to the openstack-sigs mailing list: http://lists.openstack.org/pipermail/openstack-sigs/ and/or join the #openstack-self-healing channel on Freenode IRC. We are actively gathering feedback to help steer the SIG's focus in the right direction, so all thoughts are very welcome, especially from operators, since the primary goal of the SIG is to make life easier for operators. I have also just created an etherpad for brainstorming topics for the Forum in Vancouver: https://etherpad.openstack.org/p/YVR-self-healing-brainstorming Feel free to put braindumps in there :-) Thanks! Adam From mrhillsman at gmail.com Wed Mar 14 19:15:31 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 14 Mar 2018 14:15:31 -0500 Subject: [Openstack-operators] [forum] We want your session ideas for the Vancouver Forum! Message-ID: Hey everyone, Please take time to put ideas for sessions at the forum in the TC and/or UC catch-all etherpads or any of the others that are appropriate: https://wiki.openstack.org/wiki/Forum/Vancouver2018 We really want to get as many session ideas as possible so that the committee has too many to choose from :) Here is an idea of the types of sessions to think about proposing: *Project-specific sessions* Where developers can ask users specific questions about their experience, users can provide feedback from the last release and cross-community collaboration on the priorities and 'blue sky' ideas for the next release can occur. *Strategic, whole-of-community discussions* To think about the big picture, including beyond just one release cycle and new technologies *Cross-project sessions* In a similar vein to what has happened at past design summits, but with increased emphasis on issues that are of relevant to all areas of the community If you have organized any events in the past year you probably have heard of or been in some sessions that are perfect for the Forum. -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Pablo.Iranzo at redhat.com Wed Mar 14 21:22:04 2018 From: Pablo.Iranzo at redhat.com (Pablo Iranzo =?iso-8859-1?Q?G=F3mez?=) Date: Wed, 14 Mar 2018 22:22:04 +0100 Subject: [Openstack-operators] Request for [ideas][help wanted] with OpenStack tooling In-Reply-To: References: Message-ID: <20180314212204.GF6041@redhat.com> Hi all, Apart of some updates above what Robin mentioned in December (Now: 170+ plugins, ansible support, checks across different systems, web interface, json output, pip package, container, etc) we're starting to add support for Debian-based distributions. Would it be possible for you running OSP on Debian (or even regular Debian/Ubuntu/etc) installations to contribute a 'sosreport' (yes you can install sosreport tool on Debian*) ? If so, would you mind attaching them at https://www.dropbox.com/request/8LGneF9i9nc9RB6aqXge Or reply to us with the url for us to download? Thanks a lot! Pablo PD: In case you're interested we've #citellus channel on freenode and mailing list at citellus-dev at redhat.com +++ Robin Cernin [01/12/17 15:39 +1000]: >Hello, > >This is my second time reaching out to this ML. We are all having same >ultimate goal guys. Fixing the problems as soon as possible. > >Back then in June (SUBJ:OpenStack logs Health Validator) we had rough >version, now we are reaching 6 months and we have done 75 scripts >(currently) checking things not only in OpenStack deployment. > >https://github.com/zerodayz/citellus > >What I am looking for from you is if you would be please so kind and take a >look, create issues what you think would be the best to bring this tool to >a higher level. > >75 scripts so far having output to JSON planning on adding Web front-end. >From last OpenStack Australia Group Meetup here in Brisbane I got some >feedback on adding configuration options so one doesn't have to re-type all >the things and instead use config file. > >We have all documentation you can imagine including templates: > >How to Contribute: >https://github.com/zerodayz/citellus/blob/master/CONTRIBUTING.md >Templates: https://github.com/zerodayz/citellus/tree/master/doc/templates >Writing Tests: https://github.com/zerodayz/citellus/blob/master/TESTING.md >Presentation in reveal-md: >https://github.com/zerodayz/citellus/blob/master/doc/presentation-revealmd.md >How to Review code: >https://github.com/zerodayz/citellus/blob/master/REVIEWER.md > >We are now mainly focused on RPM distribution but we could add multiple >distros, we are also discussing the possible way of integrating this with >Ansible and it's playbook so we could use them too. > >if you are interested please create issue, code(see How to Contribute), >join discussion in github. > >Thank you! >Robin Černín -- Pablo Iranzo Gómez (Pablo.Iranzo at redhat.com) GnuPG: 0x5BD8E1E4 Senior Software Maintenance Engineer - OpenStack iranzo @ IRC RHC{A,SS,DS,VA,E,SA,SP,AOSP}, JBCAA #110-215-852 RHCA Level V -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 228 bytes Desc: not available URL: From rico.lin.guanyu at gmail.com Thu Mar 15 08:01:51 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 15 Mar 2018 16:01:51 +0800 Subject: [Openstack-operators] [openstack-dev][openstack-ops][heat][PTG] Heat PTG Summary Message-ID: Hi Heat devs and ops It's a great PTG plus SnowpenStack experience. Now Rocky started. We really need all kind of input and effort to make sure we're heading toward the right way. 
Here is what we been discussed during PTG: - Future strategy for heat-tempest-plugin & functional tests - Multi-cloud support - Next plan for Heat Dashboard - Race conditions for clients updating/deleting stacks - Swift Template/file object support - heat dashboard needs of clients - Resuming after an engine failure - Moving SyncPoints from DB to DLM - toggle the debug option at runtime - remove mox - Allow partial success in ASG - Client Plugins and OpenStackSDK - Global Request Id support - Heat KeyStone Credential issue - (How we going to survive on the island) You can find *all Etherpads links* in *https://etherpad.openstack.org/p/heat-rocky-ptg * We try to document down as much as we can(Thanks Zane for picking it up), including discussion and actions. *Will try to target all actions in Rocky*. If you do like to input on any topic (or any topic you think we missing), *please try to provide inputs to the etherpad* (and be kind to leave messages in ML or meeting so we won't miss it.) *Use Cases* If you have any use case for us (What's your usecase, what's not working/ what's working well), please help us and input to* https://etherpad.openstack.org/p/heat-usecases * Here are *Team photos* we took: *https://www.dropbox.com/sh/dtei3ovfi7z74vo/AADX_s3PXFiC3Fod8Yj_RO4na/Heat?dl=0 * -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Thu Mar 15 12:04:39 2018 From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=) Date: Thu, 15 Mar 2018 20:04:39 +0800 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume Message-ID: Hi,all This spec is proposing to add support for rebuild with a new image for volume-backed servers, which today is just a 400 failure in the API since the compute doesn't support that scenario. With the proposed solution, the backing root volume would be deleted and a new volume would be created from the new image, similar to how boot from volume works. The question raised in the spec is whether or not nova should delete the root volume even if its delete_on_termination flag is set to False. The semantics get a bit weird here since that flag was not meant for this scenario, it's meant to be used when deleting the server to which the volume is attached. Rebuilding a server is not deleting it, but we would need to replace the root volume, so what do we do with the volume we're replacing? Do we say that delete_on_termination only applies to deleting a server and not rebuild and therefore nova can delete the root volume during a rebuild? If we don't delete the volume during rebuild, we could end up leaving a lot of volumes lying around that the user then has to clean up, otherwise they'll eventually go over quota. We need your feedback on this issue and what you would expect to happen. The link is here. Re:the rebuild spec:https://review.openstack.org/#/c/532407/ Thanks, lijie Best Regards Lijie -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Thu Mar 15 13:35:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Mar 2018 08:35:46 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuildinstance booted from volume In-Reply-To: References: <6AC92E2F-2F9D-4B18-8877-361B7877B677@cern.ch> <024e01d3bbab$38f44290$aadcc7b0$@homeatcloud.cz> Message-ID: On 3/15/2018 7:27 AM, 李杰 wrote: > It seems that  we  can  only delete the snapshots of the original volume > firstly,then delete the original volume if the original volume has > snapshots. Nova won't be deleting the volume snapshots just to delete the volume during a rebuild. If we decide to delete the root volume during rebuild (delete_on_termination=True *or* we decide to not consider that flag during rebuild), the rebuild operation will likely have to handle the scenario that the volume has snapshots and can't be deleted. Which opens up another question: if we hit that scenario, what should the rebuild operation do? Log a warning and just detach the volume but not delete it and continue, or fail? -- Thanks, Matt From mriedemos at gmail.com Thu Mar 15 13:39:08 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Mar 2018 08:39:08 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: Message-ID: I'm not sure why you're sending this same email *yet again*. There is already a thread going with the same details and questions below (from my original response yesterday): http://lists.openstack.org/pipermail/openstack-operators/2018-March/014952.html Can we please stop spamming the mailing list(s) and just stick to one email thread on this issue? On 3/15/2018 7:04 AM, 李杰 wrote: > Hi,all > This spec is proposing to add support for rebuild with a new image for > volume-backed servers, which today is just a 400 failure in the API > since the compute doesn't support that scenario. > > With the proposed solution, the backing root volume would be deleted and > a new volume would be created from the new image, similar to how boot > from volume works. > > The question raised in the spec is whether or not nova should delete the > root volume even if its delete_on_termination flag is set to False. The > semantics get a bit weird here since that flag was not meant for this > scenario, it's meant to be used when deleting the server to which the > volume is attached. Rebuilding a server is not deleting it, but we would > need to replace the root volume, so what do we do with the volume we're > replacing? > > Do we say that delete_on_termination only applies to deleting a server > and not rebuild and therefore nova can delete the root volume during a > rebuild? > > If we don't delete the volume during rebuild, we could end up leaving a > lot of volumes lying around that the user then has to clean up, > otherwise they'll eventually go over quota. > > We need your feedback on this issue and what you would > expect to happen. > > The link is here. > > Re:the rebuild spec:https://review.openstack.org/#/c/532407/ -- Thanks, Matt From dms at danplanet.com Thu Mar 15 14:46:56 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 15 Mar 2018 07:46:56 -0700 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: (Ben Nemec's message of "Thu, 15 Mar 2018 09:27:55 -0500") References: Message-ID: > Rather than overload delete_on_termination, could another flag like > delete_on_rebuild be added? 
Isn't delete_on_termination already the field we want? To me, that field means "nova owns this". If that is true, then we should be able to re-image the volume (in-place is ideal, IMHO) and if not, we just fail. Is that reasonable? --Dan From Goutham.PachaRavi at netapp.com Thu Mar 15 15:08:58 2018 From: Goutham.PachaRavi at netapp.com (Ravi, Goutham) Date: Thu, 15 Mar 2018 15:08:58 +0000 Subject: [Openstack-operators] [manila] [summit] Forum topic proposal etherpad Message-ID: X-posting this here from openstack-dev On 3/12/18, 4:27 PM, "Tom Barron" wrote: Please add proposed topics for manila to this etherpad [1] for the Vancouver Forum. In a couple weeks we'll use this list to submit abstracts for the next stage of the process [2] As a reminder, the Forum is the part of the Summit conference dedicated to open discourse among operators, developers, users -- all who have a vested interest in design and planning of the future of OpenStack [3]. I've added a few topics to prime the pump. [1] https://etherpad.openstack.org/p/YVR-manila-brainstorming [2] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127944.html [3] quoting http://lists.openstack.org/pipermail/openstack-dev/2018-March/128180.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 499 bytes Desc: signature.asc URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ATT00001.txt URL: From kendall at openstack.org Thu Mar 15 16:05:05 2018 From: kendall at openstack.org (Kendall Waters) Date: Thu, 15 Mar 2018 11:05:05 -0500 Subject: [Openstack-operators] Vancouver Summit Schedule now live! Message-ID: <2A3E2A7B-8A2D-4FD5-B193-954AD28EA09D@openstack.org> The schedule is now live for the Vancouver Summit! Check out the 100+ sessions, demos, workshops and tutorials that will be featured at the Vancouver Summit, May 21-24. What’s New? As infrastructure has evolved, so has the Summit. In addition to OpenStack features and operations, you'll find a strong focus on cross-project integration and addressing new use cases like edge computing and machine learning. Sessions will feature user stories from the likes of JPMorgan Chase, Progressive Insurance, Target, Wells Fargo, and more, as well as the integration and use of projects like Kata Containers, Kubernetes, Istio, Ceph, ONAP, Ansible, and many others. The schedule is organized by new tracks according to use cases: private & hybrid cloud, public cloud, container infrastructure, CI / CD, edge computing, HPC / GPUs / AI, and telecom / NFV. You can sort within the schedule to find sessions and speakers around each topic or open source project (with new tags!). Please check out this Superuser article and help us promote it via social media: http://superuser.openstack.org/articles/whats-new-vancouver-summit/ Submit Sessions to the Forum The Technical Committee and User Committee are now collecting sessions for the Forum at the Vancouver Summit. If you have a project-specific session, strategic community-wide discussion or cross-project that you would like to propose, add links to the etherpads found at the Vancouver Forum Wiki (https://wiki.openstack.org/wiki/Forum/Vancouver2018) . Time to Register The early bird deadline is approaching, so please register https://www.openstack.org/summit/vancouver-2018/ before prices increase on April 4 at 11:59pm PT. 
For speakers whose sessions were accepted, look for an email from speakersupport at openstack.org for next steps on registration. ATCs and AUCs should also check their inbox for discount codes. Questions? Email summit at openstack.org Cheers, Kendall Kendall Waters OpenStack Marketing kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Thu Mar 15 19:55:18 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Thu, 15 Mar 2018 19:55:18 +0000 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: Message-ID: <6E229F29-BAFE-480A-A359-4BECEFE47B65@cern.ch> Deleting all snapshots would seem dangerous though... 1. I want to reset my instance to how it was before 2. I'll just do a snapshot in case I need any data in the future 3. rebuild 4. oops Tim -----Original Message----- From: Ben Nemec Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Thursday, 15 March 2018 at 20:42 To: Dan Smith Cc: "OpenStack Development Mailing List (not for usage questions)" , openstack-operators Subject: Re: [openstack-dev] [nova] about rebuild instance booted from volume On 03/15/2018 09:46 AM, Dan Smith wrote: >> Rather than overload delete_on_termination, could another flag like >> delete_on_rebuild be added? > > Isn't delete_on_termination already the field we want? To me, that field > means "nova owns this". If that is true, then we should be able to > re-image the volume (in-place is ideal, IMHO) and if not, we just > fail. Is that reasonable? If that's what the flag means then it seems reasonable. I got the impression from the previous discussion that not everyone was seeing it that way though. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dms at danplanet.com Thu Mar 15 22:29:13 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 15 Mar 2018 15:29:13 -0700 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: <6E229F29-BAFE-480A-A359-4BECEFE47B65@cern.ch> (Tim Bell's message of "Thu, 15 Mar 2018 19:55:18 +0000") References: <6E229F29-BAFE-480A-A359-4BECEFE47B65@cern.ch> Message-ID: > Deleting all snapshots would seem dangerous though... > > 1. I want to reset my instance to how it was before > 2. I'll just do a snapshot in case I need any data in the future > 3. rebuild > 4. oops Yep, for sure. I think if there are snapshots, we have to refuse to do te thing. My comment was about the "does nova have authority to destroy the root volume during a rebuild" and I think it does, if delete_on_termination=True, and if there are no snapshots. --Dan From chris at openstack.org Thu Mar 15 22:55:58 2018 From: chris at openstack.org (Chris Hoge) Date: Thu, 15 Mar 2018 15:55:58 -0700 Subject: [Openstack-operators] [openstack-dev][publiccloud-wg][k8s][octavia] OpenStack Load Balancer APIs and K8s Message-ID: Hi everyone, I wanted to notify you of a thread I started in openstack-dev about the state of the OpenStack load balancer APIs and the difficulty in integrating them with Kubernetes. This in part directly relates to current public and private deployments, and any feedback you have would be appreciated. 
Especially feedback on which version of the load balancer APIs you deploy, and if you haven't moved on to Octavia, why. http://lists.openstack.org/pipermail/openstack-dev/2018-March/128399.html Thanks in advance, Chris From Pablo.Iranzo at redhat.com Fri Mar 16 07:12:01 2018 From: Pablo.Iranzo at redhat.com (Pablo Iranzo =?iso-8859-1?Q?G=F3mez?=) Date: Fri, 16 Mar 2018 08:12:01 +0100 Subject: [Openstack-operators] Request for [ideas][help wanted] with OpenStack tooling In-Reply-To: <20180314212204.GF6041@redhat.com> References: <20180314212204.GF6041@redhat.com> Message-ID: <20180316071201.GA923@redhat.com> +++ Pablo Iranzo Gómez [14/03/18 22:22 +0100]: >Hi all, > >Apart of some updates above what Robin mentioned in December (Now: 170+ plugins, ansible >support, checks across different systems, web interface, json output, >pip package, container, etc) we're starting to add support for Debian-based distributions. > >Would it be possible for you running OSP on Debian (or even regular >Debian/Ubuntu/etc) installations to contribute a 'sosreport' (yes you >can install sosreport tool on Debian*) ? > >If so, would you mind attaching them at >https://www.dropbox.com/request/8LGneF9i9nc9RB6aqXge > >Or reply to us with the url for us to download? Sorry, forgot to add that if you're running on top of CentOS/Fedora it will be also great to get them. Thanks! Pablo > >Thanks a lot! >Pablo > >PD: In case you're interested we've #citellus channel on freenode and >mailing list at citellus-dev at redhat.com > >+++ Robin Cernin [01/12/17 15:39 +1000]: >>Hello, >> >>This is my second time reaching out to this ML. We are all having same >>ultimate goal guys. Fixing the problems as soon as possible. >> >>Back then in June (SUBJ:OpenStack logs Health Validator) we had rough >>version, now we are reaching 6 months and we have done 75 scripts >>(currently) checking things not only in OpenStack deployment. >> >>https://github.com/zerodayz/citellus >> >>What I am looking for from you is if you would be please so kind and take a >>look, create issues what you think would be the best to bring this tool to >>a higher level. >> >>75 scripts so far having output to JSON planning on adding Web front-end. >>From last OpenStack Australia Group Meetup here in Brisbane I got some >>feedback on adding configuration options so one doesn't have to re-type all >>the things and instead use config file. >> >>We have all documentation you can imagine including templates: >> >>How to Contribute: >>https://github.com/zerodayz/citellus/blob/master/CONTRIBUTING.md >>Templates: https://github.com/zerodayz/citellus/tree/master/doc/templates >>Writing Tests: https://github.com/zerodayz/citellus/blob/master/TESTING.md >>Presentation in reveal-md: >>https://github.com/zerodayz/citellus/blob/master/doc/presentation-revealmd.md >>How to Review code: >>https://github.com/zerodayz/citellus/blob/master/REVIEWER.md >> >>We are now mainly focused on RPM distribution but we could add multiple >>distros, we are also discussing the possible way of integrating this with >>Ansible and it's playbook so we could use them too. >> >>if you are interested please create issue, code(see How to Contribute), >>join discussion in github. >> >>Thank you! 
>>Robin Černín > >-- > >Pablo Iranzo Gómez (Pablo.Iranzo at redhat.com) GnuPG: 0x5BD8E1E4 >Senior Software Maintenance Engineer - OpenStack iranzo @ IRC >RHC{A,SS,DS,VA,E,SA,SP,AOSP}, JBCAA #110-215-852 RHCA Level V -- Pablo Iranzo Gómez (Pablo.Iranzo at redhat.com) GnuPG: 0x5BD8E1E4 Senior Software Maintenance Engineer - OpenStack iranzo @ IRC RHC{A,SS,DS,VA,E,SA,SP,AOSP}, JBCAA #110-215-852 RHCA Level V -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 228 bytes Desc: not available URL: From james.page at ubuntu.com Fri Mar 16 10:10:23 2018 From: james.page at ubuntu.com (James Page) Date: Fri, 16 Mar 2018 10:10:23 +0000 Subject: [Openstack-operators] [ptg][sig][upgrades] Upgrade SIG Message-ID: Hi All I finally got round to writing up my summary of the Upgrades session at the PTG in Dublin (see [0]). One outcome of that session was to form a new SIG centered on Upgrading OpenStack - I'm pleased to announce that the SIG has been formally accepted! The objective of the Upgrade SIG is to improve the overall upgrade process for OpenStack Clouds, covering both offline ‘fast-forward’ upgrades and online ‘rolling’ upgrades, by providing a forum for cross-project collaboration between operators and developers to document and codify best practice for upgrading OpenStack. If you are interested in participating in the SIG please add your details to the wiki page under 'Interested Parties': https://wiki.openstack.org/wiki/Upgrade_SIG I'll be working with the other SIG leads to setup regular IRC meetings in the next week or so - we expect to alternate between slots that are compatible with all time zones. Regards James [0] https://javacruft.wordpress.com/2018/03/16/winning-with-openstack-upgrades/ [1] https://governance.openstack.org/sigs/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dilip.sunkummanjunath at boeing.com Fri Mar 16 14:58:13 2018 From: dilip.sunkummanjunath at boeing.com (I-Sunkum Manjunath, Dilip) Date: Fri, 16 Mar 2018 14:58:13 +0000 Subject: [Openstack-operators] [Help Needed] : information on building Show back capability Message-ID: <96de32bbd8ec42008203d31eb4b6a42d@XCH15-05-06.nw.nos.boeing.com> Hi All, Can someone help me in point to resource to archive show back capability? What are all the ways in general, other have done this? Thanks Dilip -------------- next part -------------- An HTML attachment was scrubbed... URL: From markus.adam.pub at gmail.com Fri Mar 16 21:24:42 2018 From: markus.adam.pub at gmail.com (Markus Adam) Date: Fri, 16 Mar 2018 22:24:42 +0100 Subject: [Openstack-operators] [Help Needed] : information on building Show back capability In-Reply-To: <96de32bbd8ec42008203d31eb4b6a42d@XCH15-05-06.nw.nos.boeing.com> References: <96de32bbd8ec42008203d31eb4b6a42d@XCH15-05-06.nw.nos.boeing.com> Message-ID: Hi Dilip, I think that Cloud Kitty is that what you may be looking for. Cheers On Fri, Mar 16, 2018 at 3:58 PM, I-Sunkum Manjunath, Dilip < dilip.sunkummanjunath at boeing.com> wrote: > Hi All, > > > > > > > > > > Can someone help me in point to resource to archive show back capability? > What are all the ways in general, other have done this? 
> > > > > > Thanks > > Dilip > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bitskrieg at bitskrieg.net Sat Mar 17 21:10:50 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Sat, 17 Mar 2018 17:10:50 -0400 Subject: [Openstack-operators] [openstack operators][neutron] Neutron Router getting address not inside allocation range on provider network In-Reply-To: References: <96de32bbd8ec42008203d31eb4b6a42d@XCH15-05-06.nw.nos.boeing.com> Message-ID: All, Had a strange incident the other day that seems like it shouldn't be possible inside of neutron... We are currently running Queens on Ubuntu 16.04 w/ the linuxbridge ml2 plugin with vxlan overlays. We have a single, large provider network that we have set to 'shared' and 'external', so people who need to do things that don't work well with NAT can connect their instances directly to the provider network. Our 'allocation range' as defined in our provider subnet is dedicated to tenants, so there should be no conflicts. The other day, one of our users connected a neutron router to the provider network (not via the 'external network' option, but rather via the normal 'add interface' option) and neglected to specify an IP address. The neutron router decided that it was now the gateway for the entire provider network and began arp'ing as such (I'm sure you can imagine the results). To me, this seems like it should be disallowed inside of neutron (you shouldn't be able to specify an IP address for a router interface that isn't explicitly part of your allocation range on said subnet). Does neutron just expect issues like this to be handled by the physical provider infrastructure (spoofing prevention, etc.)? Thanks, --- v/r Chris Apsey bitskrieg at bitskrieg.net https://www.bitskrieg.net From cedlerouge at dalarmor.org Sun Mar 18 17:28:55 2018 From: cedlerouge at dalarmor.org (Cedlerouge) Date: Sun, 18 Mar 2018 18:28:55 +0100 Subject: [Openstack-operators] tracking history of floating IP Message-ID: Hi all I need to get history of a floating IP, to know which instance or which user used the floating IP at a specific time in the past. I believe this is based on events. Is panko (whith ceilometer) the solution or do i setup an ELK to do this ? Or Maybe you use another solution, I'm interested on if you have some advice or feedback Best regards -- Cedlerouge From jerome.pansanel at iphc.cnrs.fr Sun Mar 18 20:10:47 2018 From: jerome.pansanel at iphc.cnrs.fr (Jerome Pansanel) Date: Sun, 18 Mar 2018 21:10:47 +0100 Subject: [Openstack-operators] tracking history of floating IP In-Reply-To: References: Message-ID: Hi, We have developed a simple MySQL trigger to register the floating ip usage: https://github.com/FranceGrilles/openstack-triggers (a recent modification has not been yet committed, that cover the case where Heat is assigning a floating ip). Cheers, Jerome Le 18/03/2018 à 18:28, Cedlerouge a écrit : > Hi all > > I need to get history of a floating IP, to know which instance or which > user used the floating IP at a specific time in the past. > I believe this is based on events. Is panko (whith ceilometer) the > solution or do i setup an ELK to do this ? 
> Or Maybe you use another solution, I'm interested on if you have some > advice or feedback > > Best regards > -- Jerome Pansanel, PhD Technical Director at France Grilles Grid & Cloud Computing Operations Manager at IPHC IPHC || GSM: +33 (0)6 25 19 24 43 23 rue du Loess, BP 28 || Tel: +33 (0)3 88 10 66 24 F-67037 STRASBOURG Cedex 2 || Fax: +33 (0)3 88 10 62 34 From piotrmisiak1984 at gmail.com Sun Mar 18 21:37:16 2018 From: piotrmisiak1984 at gmail.com (Piotr Misiak) Date: Sun, 18 Mar 2018 22:37:16 +0100 Subject: [Openstack-operators] sporadic missing vxlan-tunnel-port assignment In-Reply-To: <1c068fd8-25c2-8ee4-b9d6-0a1021ecaae1@gmail.com> References: <68784b6e-1e41-7431-04a6-392bbd8a7786@googlemail.com> <1c068fd8-25c2-8ee4-b9d6-0a1021ecaae1@gmail.com> Message-ID: Hi Fabian, We also encounter issues with OVS table 22. Please take a look at this bug: https://bugs.launchpad.net/neutron/+bug/1754695 We are still debugging this. Maybe you hit the same bug? On 07.03.2018 08:56, Fabian Zimmermann wrote: > Hi, > > sorry, it is table 22. > >  Fabian > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From tobias at citynetwork.se Mon Mar 19 09:59:45 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Mon, 19 Mar 2018 10:59:45 +0100 Subject: [Openstack-operators] [publiccloud-wg] New meeting time and call to Forum brainstorming Message-ID: <0b04f6e6-7216-99aa-087b-723a14a04203@citynetwork.se> Hi folks, At last group meeting we decided upon a new meeting time for our bi-weekly meetings - new meeting time is: *Thursdays 1400 UTC odd weeks in IRC channel #openstack-publiccloud* Wiki and official calendar file are updated accordingly. During last meeting we touched on the subject "Forum sessions", got a few ideas. Now it's time to start brainstorming more officially. If you have a topic - please add it to the official list of topics for the Public Cloud WG [1] - and put your name as moderator. If you need or would like to have help moderating, please make a note about that and I'm pretty sure we can figure out a way to solve that as well. Next meeting will focus a bit on the Forum sessions - but it is encouraged to start discussions in #openstack-publiccloud before that. Deadline for proposal submission is April 2nd. More can be read at the Forum wiki page [2]. [1] - https://etherpad.openstack.org/p/YVR-publiccloud-wg-brainstorming [2] - https://wiki.openstack.org/wiki/Forum/Vancouver2018 Talk soon! Tobias -- Tobias Rydberg Senior Developer Mobile: +46 733 312780 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From emccormick at cirrusseven.com Mon Mar 19 14:26:30 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Mon, 19 Mar 2018 10:26:30 -0400 Subject: [Openstack-operators] [Openstack] HA Guide, no Ubuntu instructions for HA Identity In-Reply-To: <1e9d8e69c03a45eb92e2e00beec6fee9@granddial.com> References: <1e9d8e69c03a45eb92e2e00beec6fee9@granddial.com> Message-ID: Looping the list back in since I accidentally dropped it yet again :/ On Mon, Mar 19, 2018 at 8:45 AM, Torin Woltjer wrote: > That's good to know, thank you. Out of curiousity, without > pacemaker/chorosync, does haproxy have the capability to manage a floating > ip and failover etc? > HAProxy can't do that alone. However, using Pacemaker just to manage a floating IP is like using an aircraft carrier to go fishing. It's best to use Keepalived (or similar) to do that job. It only does that one thing, and it does it very well. > ________________________________ > From: Erik McCormick > Sent: 3/16/18 5:22 PM > To: torin.woltjer at granddial.com > Subject: Re: [Openstack] HA Guide, no Ubuntu instructions for HA Identity > There's no good reason to do any of that pacemaker stuff. Just stick haproxy > in front of 2+ servers running Keystone and move along. This is the case for > almost all Openstack services. > > The main exceptions are the Neutron agents. Just look into L3 HA or DVR for > that and you should be good. The guide needs much reworking. > > -Erik > > > > On Mar 16, 2018 11:28 AM, "Torin Woltjer" > wrote: >> >> I'm currently going through the HA guide, setting up openstack HA on >> ubuntu server. I've gotten to this page, >> https://docs.openstack.org/ha-guide/controller-ha-identity.html , and there >> is no instructions for ubuntu. Would I be fine following the instructions >> for SUSE or is there a different process for setting up HA keystone on >> Ubuntu? >> >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> > From jomlowe at iu.edu Mon Mar 19 14:55:40 2018 From: jomlowe at iu.edu (Mike Lowe) Date: Mon, 19 Mar 2018 10:55:40 -0400 Subject: [Openstack-operators] [Openstack] HA Guide, no Ubuntu instructions for HA Identity In-Reply-To: References: <1e9d8e69c03a45eb92e2e00beec6fee9@granddial.com> Message-ID: As far as that goes, if you have 2 haproxies you might as well use both. Use 2 VIPs and DNS round robin between them, configure keepalived to have each haproxy node take one VIP as primary with the other node’s VIP as backup. This has worked well for me for the past couple of years. vrrp_instance VI_1 { state MASTER virtual_router_id 1 … virtual_ipaddress { IP_ADDRESS_1 dev eth0 } } vrrp_instance VI_2 { state BACKUP virtual_router_id 5 … virtual_ipaddress { IP_ADDRESS_2 dev eth0 } } > On Mar 19, 2018, at 10:26 AM, Erik McCormick wrote: > > Looping the list back in since I accidentally dropped it yet again :/ > > On Mon, Mar 19, 2018 at 8:45 AM, Torin Woltjer > wrote: >> That's good to know, thank you. Out of curiousity, without >> pacemaker/chorosync, does haproxy have the capability to manage a floating >> ip and failover etc? >> > > HAProxy can't do that alone. However, using Pacemaker just to manage a > floating IP is like using an aircraft carrier to go fishing. 
It's best > to use Keepalived (or similar) to do that job. It only does that one > thing, and it does it very well. > >> ________________________________ >> From: Erik McCormick >> Sent: 3/16/18 5:22 PM >> To: torin.woltjer at granddial.com >> Subject: Re: [Openstack] HA Guide, no Ubuntu instructions for HA Identity >> There's no good reason to do any of that pacemaker stuff. Just stick haproxy >> in front of 2+ servers running Keystone and move along. This is the case for >> almost all Openstack services. >> >> The main exceptions are the Neutron agents. Just look into L3 HA or DVR for >> that and you should be good. The guide needs much reworking. >> >> -Erik >> >> >> >> On Mar 16, 2018 11:28 AM, "Torin Woltjer" >> wrote: >>> >>> I'm currently going through the HA guide, setting up openstack HA on >>> ubuntu server. I've gotten to this page, >>> https://docs.openstack.org/ha-guide/controller-ha-identity.html , and there >>> is no instructions for ubuntu. Would I be fine following the instructions >>> for SUSE or is there a different process for setting up HA keystone on >>> Ubuntu? >>> >>> >>> _______________________________________________ >>> Mailing list: >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4035 bytes Desc: not available URL: From aspiers at suse.com Mon Mar 19 15:26:14 2018 From: aspiers at suse.com (Adam Spiers) Date: Mon, 19 Mar 2018 15:26:14 +0000 Subject: [Openstack-operators] [Openstack] HA Guide, no Ubuntu instructions for HA Identity In-Reply-To: References: <1e9d8e69c03a45eb92e2e00beec6fee9@granddial.com> Message-ID: <20180319152614.ufqlmxaitzuiz4nz@pacific.linksys.moosehall> Erik McCormick wrote: >Looping the list back in since I accidentally dropped it yet again :/ > >On Mon, Mar 19, 2018 at 8:45 AM, Torin Woltjer > wrote: >> That's good to know, thank you. Out of curiousity, without >> pacemaker/chorosync, does haproxy have the capability to manage a floating >> ip and failover etc? > >HAProxy can't do that alone. However, using Pacemaker just to manage a >floating IP is like using an aircraft carrier to go fishing. It's best >to use Keepalived (or similar) to do that job. It only does that one >thing, and it does it very well. Just for balance: from my experience talking with a lot of HA experts over the last few years, I don't think there is broad consensus on this point. With the loud disclaimer that I don't personally have a strong background of experience with keepalived, I've heard multiple experts point out that the simplicity of keepalived can occasionally cause problems, especially related to its lack of fencing and the way it determines which node should be the master and how that can cause problems when there are network partitions. I don't mean to spread FUD, so apologies if it comes across that way, but I recommend doing some in-depth reading so you can make up your mind either way. 
Additionally take a look at the bug queue for Neutron's L3 HA feature which uses keepalived: https://bugs.launchpad.net/neutron/+bugs?field.tag=l3-ha Certainly many of these are not related directly to keepalived, but equally there are still some issues with this feature which need to be addressed. In summary: HA is *hard*. Anything which claims to be both simple and a full solution deserves to be scrutinised very carefully. From blair.bethwaite at monash.edu Tue Mar 20 06:47:05 2018 From: blair.bethwaite at monash.edu (Blair Bethwaite) Date: Tue, 20 Mar 2018 17:47:05 +1100 Subject: [Openstack-operators] outstanding issues with GPU passthrough Message-ID: Hi all, This has turned into a bit of a screed I'm afraid... Last week I had pings from both the folks at CERN and Catalyst Cloud about GPU-accelerated OpenStack instances with PCI passthrough, specifically asking about the security issues I've mentioned in community forums previously, and any other gotchas. So I figured I'd attempt to answer what I can on-list and hopefully others will share too. I'll also take an action to propose a Forum session something along the lines of "GPUs - state of the art & practice" where we can come together in Vancouver to discuss further. Just writing this has prompted me to pick-up where I last left off on this - currently looking for some upstream QEMU expertise... Firstly, there is this wiki page: https://wiki.openstack.org/wiki/GPUs, which should be relevant but could be expanded based on this discussion. **Issues / Gotchas** These days for the most part things just seem to work if you follow the basic docs and then ensure the usual things required with vfio-pci, like making sure no other host drivers are bound to the device/s. These are the basics which have been covered in a number of Summit presentations, most recently see https://www.openstack.org/videos/sydney-2017/not-only- for-miners-gpu-integration-in-nova-environment for a good overview. This blog http://www.vscaler.com/gpu-passthrough/, although a little dated, is still relevant. Perhaps it would be worth adding these things to docs or wiki. One issue that we've hit a couple of times now, most recently only last week, is with apparmor on Ubuntu being too restrictive when the passthrough device needs to be reattached post-snapshot. This has been discussed on-list in the past - see "[Openstack-operators] PCI Passthrough issues" for a good post-mortem from Jon at MIT. So far I'm not sure if this most recent incarnation is due to a new bug with newer cloudarchive hypervisor stack, or because we have stale templates in our Puppet that are overwriting something we should be getting from the package-shipped apparmor rules - if it turns out to be a new bug we'll report upstream... Perhaps more concerning for new deployers today (assuming deep-learning is a major motivator for adopting this capability) is that GPU P2P doesn't work inside a typical guest instance with multiple GPUs passed through. That's because the emulated flat PCI (not even PCIe) topology inside the guest will make the device drivers think this isn't possible. However, GPU clique support was added to QEMU 2.11 in https://github.com/qemu/qemu/commit/dfbee78db8fdf7bc8c151c3d29504bb47438480b#diff-38093e21794c7a4987feb71e498dbdc6. There's no Libvirt support for this yet, so I'd expect it to be at least another couple of cycles before we might see this hitting Nova. In any case, we are about to start kicking the tires on it and will report back. 
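As a quick sanity check of the vfio-pci prerequisites mentioned at the start of this section (no other host driver bound to the GPUs, IOMMU groups actually populated), a short sysfs walk on the hypervisor along these lines can help. This is only an illustrative sketch, assuming the GPUs report PCI class 0x0300xx (VGA) or 0x0302xx (3D controller); it is not part of Nova or any other OpenStack tooling:

#!/usr/bin/env python
# Minimal sketch: list GPU-class PCI devices with the driver they are
# bound to and their IOMMU group, read straight from sysfs. For
# passthrough you generally want driver=vfio-pci and each GPU (plus any
# companion functions) isolated in its own IOMMU group.
import os

PCI = "/sys/bus/pci/devices"

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except IOError:
        return ""

for dev in sorted(os.listdir(PCI)):
    base = os.path.join(PCI, dev)
    pci_class = read(os.path.join(base, "class"))
    # 0x0300xx = VGA controller, 0x0302xx = 3D controller (assumed to
    # cover the GPUs of interest here)
    if not (pci_class.startswith("0x0300") or pci_class.startswith("0x0302")):
        continue
    driver = "none"
    if os.path.exists(os.path.join(base, "driver")):
        driver = os.path.basename(os.readlink(os.path.join(base, "driver")))
    group = "n/a"
    if os.path.exists(os.path.join(base, "iommu_group")):
        group = os.path.basename(os.readlink(os.path.join(base, "iommu_group")))
    print("%s class=%s driver=%s iommu_group=%s" % (dev, pci_class, driver, group))

If a GPU shows up still bound to nouveau/nvidia, or sharing an IOMMU group with unrelated devices, that is usually the first thing to fix before pointing Nova's PCI whitelist at it.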
**Security** The big issue that's inherent with PCI passthrough is that you have to give up the whole device (and arguably whole server if you are really concerned about security). There is also potential complexity with switched PCIe topologies, which you're likely to encounter on any host with more than a couple of GPUs - if you have "wrong" PCIe chipset then you may not be able to properly isolate the devices from each other. I believe the hyperscalers may use PCIe switching fabrics with external device housing, as opposed to GPUs directly inside the host. They have gone to some effort to ensure things like PCIe Address Translation Services (ATS) get turned off - ATS is basically an IOMMU bypass cache on the device used to speed up DMA, if that was exploitable on any particular device it could then allow reading arbitrary host memory. See e.g. https://medium.com/google-cloud/exploring-the-nuances-of-pci-and-pcie-7edf44acef94 . Further on the security front, it's important to note that the PCIe specs largely predate our highly virtualised cloud world. Even extensions like SRIOV are comparatively old, and that's not actually implemented for any GPU of interest today (have the Firepros ever come out from behind the marketing curtain?). Device drivers assume root-privileged code has system level access to the hardware. There are a bunch of low-level device control registers exposed through the device's PCI BAR0 config space - my understanding is that there are a few sets of those registers that a guest OS has no business accessing, e.g., power control, compatibility interrupts, bus resets. Many of these could at least allow a malicious guest to brick the device and/or cause the whole host to reset. Unfortunately none of that information is in the public domain save for what the Envy project has managed to reverse engineer: https://envytools.readthedocs.io/en/latest/hw/bus/pci.html, quote: "Todo nuke this file and write a better one - it sucks". Attempts to cajole NVIDIA into releasing info on this have been largely unsuccessful, but I am at least aware they have analysed these issues and given technical guidance on risk mitigations to partners. We haven't solved any of this at Monash, but we're also not running a public cloud and only have a limited set of internal tenants/projects that have access to our GPU flavors, so it's not a big risk to us at the moment. As far as trying to lock some of this down goes, the good news is that QEMU appears to have an existing mechanism in place to block/intercept accesses to these control registers/windows (see hw/vfio/pci-quirks.c). So it's a matter of getting a reference that can be used to add the appropriate quirks... **Future** If the GPU clique support works for P2P that will be great. But at least from NVIDIA's side it seems that the Linux mdev based vGPU is the way forward (you can still have a 1:1 vGPU:pGPU allocation for heavy workloads). Last I heard, we could expect a Linux host-side driver for this within a month or so. There is at least one interesting architectural complication inherent in the vGPU licensing model though, which is that the guest vGPU drivers will need to be able to the vGPU license server/s, which necessarily requires some link between tenant and provider networks. Haven't played with any of this first-hand yet so not sure how problematic (or not) it might be. Anyway, hopefully all this is useful in some way. 
Perhaps if we get enough customers pressuring NVIDIA SAs to disclose the PCIe security info, it might get us somewhere on the road to securing passthrough. Cheers, -- Blair Bethwaite Senior HPC Consultant Monash eResearch Centre Monash University Room G26, 15 Innovation Walk, Clayton Campus Clayton VIC 3800 Australia Mobile: 0439-545-002 Office: +61 3-9903-2800 <+61%203%209903%202800> -------------- next part -------------- An HTML attachment was scrubbed... URL: From blair.bethwaite at monash.edu Tue Mar 20 08:04:33 2018 From: blair.bethwaite at monash.edu (Blair Bethwaite) Date: Tue, 20 Mar 2018 08:04:33 +0000 Subject: [Openstack-operators] outstanding issues with GPU passthrough In-Reply-To: References: Message-ID: I forgot to specifically address one of the questions that Belmiro raised, which is regarding device clean-up. I guess this would be relevant to Ironic bare-metal clouds too. If the hypervisor has blocked access to problematic areas of the PCI config space then this probably isn't necessary, but as I mentioned below, this isn't happening with QEMU/KVM today. I asked an NVIDIAn about forcing a firmware flash on the device as a possible means to ensure the firmware is correct (something that could be done between device allocations by some management layer like Cyborg). He told this would definitely not be recommended for the risk of bricking the device, besides I couldn't find any tools that do this. Apparently the firmware is signed. However there doesn't seem to be any publicly available technical detail on the signing process, so I don't know whether it enables the device to verify the source of a firmware write, or if it's just something that NVIDIA's own drivers check by reading the firmware ROM. On Tue., 20 Mar. 2018, 17:47 Blair Bethwaite, wrote: > Hi all, > > This has turned into a bit of a screed I'm afraid... > > Last week I had pings from both the folks at CERN and Catalyst Cloud about > GPU-accelerated OpenStack instances with PCI passthrough, specifically > asking about the security issues I've mentioned in community forums > previously, and any other gotchas. So I figured I'd attempt to answer what > I can on-list and hopefully others will share too. I'll also take an action > to propose a Forum session something along the lines of "GPUs - state of > the art & practice" where we can come together in Vancouver to discuss > further. Just writing this has prompted me to pick-up where I last left off > on this - currently looking for some upstream QEMU expertise... > > Firstly, there is this wiki page: https://wiki.openstack.org/wiki/GPUs, > which should be relevant but could be expanded based on this discussion. > > **Issues / Gotchas** > > These days for the most part things just seem to work if you follow the > basic docs and then ensure the usual things required with vfio-pci, like > making sure no other host drivers are bound to the device/s. These are the > basics which have been covered in a number of Summit presentations, most > recently see > https://www.openstack.org/videos/sydney-2017/not-only-for-miners-gpu-integration-in-nova-environment > for a good overview. This blog http://www.vscaler.com/gpu-passthrough/, > although a little dated, is still relevant. Perhaps it would be worth > adding these things to docs or wiki. > > One issue that we've hit a couple of times now, most recently only last > week, is with apparmor on Ubuntu being too restrictive when the passthrough > device needs to be reattached post-snapshot. 
This has been discussed > on-list in the past - see "[Openstack-operators] PCI Passthrough issues" > for a good post-mortem from Jon at MIT. So far I'm not sure if this most > recent incarnation is due to a new bug with newer cloudarchive hypervisor > stack, or because we have stale templates in our Puppet that are > overwriting something we should be getting from the package-shipped > apparmor rules - if it turns out to be a new bug we'll report upstream... > > Perhaps more concerning for new deployers today (assuming deep-learning is > a major motivator for adopting this capability) is that GPU P2P doesn't > work inside a typical guest instance with multiple GPUs passed through. > That's because the emulated flat PCI (not even PCIe) topology inside the > guest will make the device drivers think this isn't possible. However, GPU > clique support was added to QEMU 2.11 in > https://github.com/qemu/qemu/commit/dfbee78db8fdf7bc8c151c3d29504bb47438480b#diff-38093e21794c7a4987feb71e498dbdc6. > There's no Libvirt support for this yet, so I'd expect it to be at least > another couple of cycles before we might see this hitting Nova. In any > case, we are about to start kicking the tires on it and will report back. > > **Security** > > The big issue that's inherent with PCI passthrough is that you have to > give up the whole device (and arguably whole server if you are really > concerned about security). There is also potential complexity with switched > PCIe topologies, which you're likely to encounter on any host with more > than a couple of GPUs - if you have "wrong" PCIe chipset then you may not > be able to properly isolate the devices from each other. I believe the > hyperscalers may use PCIe switching fabrics with external device housing, > as opposed to GPUs directly inside the host. They have gone to some effort > to ensure things like PCIe Address Translation Services (ATS) get turned > off - ATS is basically an IOMMU bypass cache on the device used to speed up > DMA, if that was exploitable on any particular device it could then allow > reading arbitrary host memory. See e.g. > https://medium.com/google-cloud/exploring-the-nuances-of-pci-and-pcie-7edf44acef94 > . > > Further on the security front, it's important to note that the PCIe specs > largely predate our highly virtualised cloud world. Even extensions like > SRIOV are comparatively old, and that's not actually implemented for any > GPU of interest today (have the Firepros ever come out from behind the > marketing curtain?). Device drivers assume root-privileged code has system > level access to the hardware. There are a bunch of low-level device control > registers exposed through the device's PCI BAR0 config space - my > understanding is that there are a few sets of those registers that a guest > OS has no business accessing, e.g., power control, compatibility > interrupts, bus resets. Many of these could at least allow a malicious > guest to brick the device and/or cause the whole host to reset. > Unfortunately none of that information is in the public domain save for > what the Envy project has managed to reverse engineer: > https://envytools.readthedocs.io/en/latest/hw/bus/pci.html, quote: "Todo > nuke this file and write a better one - it sucks". Attempts to cajole > NVIDIA into releasing info on this have been largely unsuccessful, but I am > at least aware they have analysed these issues and given technical guidance > on risk mitigations to partners. 
> > We haven't solved any of this at Monash, but we're also not running a > public cloud and only have a limited set of internal tenants/projects that > have access to our GPU flavors, so it's not a big risk to us at the moment. > As far as trying to lock some of this down goes, the good news is that QEMU > appears to have an existing mechanism in place to block/intercept accesses > to these control registers/windows (see hw/vfio/pci-quirks.c). So it's a > matter of getting a reference that can be used to add the appropriate > quirks... > > **Future** > > If the GPU clique support works for P2P that will be great. But at least > from NVIDIA's side it seems that the Linux mdev based vGPU is the way > forward (you can still have a 1:1 vGPU:pGPU allocation for heavy > workloads). Last I heard, we could expect a Linux host-side driver for this > within a month or so. There is at least one interesting architectural > complication inherent in the vGPU licensing model though, which is that the > guest vGPU drivers will need to be able to the vGPU license server/s, which > necessarily requires some link between tenant and provider networks. > Haven't played with any of this first-hand yet so not sure how problematic > (or not) it might be. > > Anyway, hopefully all this is useful in some way. Perhaps if we get enough > customers pressuring NVIDIA SAs to disclose the PCIe security info, it > might get us somewhere on the road to securing passthrough. > > Cheers, > > -- > Blair Bethwaite > Senior HPC Consultant > > Monash eResearch Centre > Monash University > Room G26, 15 Innovation Walk, Clayton Campus > Clayton VIC 3800 > Australia > Mobile: 0439-545-002 > Office: +61 3-9903-2800 <+61%203%209903%202800> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotrmisiak1984 at gmail.com Tue Mar 20 08:26:01 2018 From: piotrmisiak1984 at gmail.com (Piotr Misiak) Date: Tue, 20 Mar 2018 09:26:01 +0100 Subject: [Openstack-operators] sporadic missing vxlan-tunnel-port assignment In-Reply-To: References: <68784b6e-1e41-7431-04a6-392bbd8a7786@googlemail.com> <1c068fd8-25c2-8ee4-b9d6-0a1021ecaae1@gmail.com> Message-ID: <1843048a-17b6-03a8-62d6-876986dbdc7b@gmail.com> Hi Fabian, We are running Pike. We have DVR with centralized SNAT on L3-HA routers. We also have L2 population enabled which I think is responsible for the issue. We can reproduce the issue almost every time we are spawning 100 VMs at once. Usually 3-6 VMs have no connectivity to services running on network nodes like DHCP, DNS so they don't have network configured. As I mentioned in the bug we also observing issues with L3-HA routers provisioned using Heat. After debugging I'm almost sure there is a race condition which is triggered by massive network resources provisioning. When we provision routers manually we always have a working router. On 19.03.2018 15:37, Fabian Zimmermann wrote: > Hi Piotr, > > yes, we still debugging this. > > Im currently trying to reproduce the issue in our dev-env, but without > luck so far. > > Are you able to reproduce the issue? > > We are running on ocata and currently trying to upgrade to pike > followed by queens as fast as possible. > > What kind of neutron implementation do you use? (f.e. DVR with > network-nodes for SNAT)? > > IRC: #openstack -> devfaz > > Thanks a lot, > >  Fabian > > Am 18.03.2018 um 22:37 schrieb Piotr Misiak: >> Hi Fabian, >> >> >> We also encounter issues with OVS table 22. 
>> >> Please take a look at this bug: >> https://bugs.launchpad.net/neutron/+bug/1754695 >> >> We are still debugging this. >> >> Maybe you hit the same bug? >> >> >> >> On 07.03.2018 08:56, Fabian Zimmermann wrote: >>> Hi, >>> >>> sorry, it is table 22. >>> >>>   Fabian >>> >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From cedlerouge at dalarmor.org Tue Mar 20 14:27:48 2018 From: cedlerouge at dalarmor.org (Cedlerouge) Date: Tue, 20 Mar 2018 15:27:48 +0100 Subject: [Openstack-operators] tracking history of floating IP In-Reply-To: References: Message-ID: <2e3a53443c08b1500aa55c169e9ee378@dalarmor.org> Thanks, This seems to be a good track. I will check that --- Cedlerouge Le 2018-03-18 21:10, Jerome Pansanel a écrit : > Hi, > > We have developed a simple MySQL trigger to register the floating ip > usage: > https://github.com/FranceGrilles/openstack-triggers > > (a recent modification has not been yet committed, that cover the case > where Heat is assigning a floating ip). > > Cheers, > > Jerome > > Le 18/03/2018 à 18:28, Cedlerouge a écrit : >> Hi all >> >> I need to get history of a floating IP, to know which instance or >> which >> user used the floating IP at a specific time in the past. >> I believe this is based on events. Is panko (whith ceilometer) the >> solution or do i setup an ELK to do this ? >> Or Maybe you use another solution, I'm interested on if you have some >> advice or feedback >> >> Best regards >> From blair.bethwaite at gmail.com Tue Mar 20 14:26:05 2018 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Wed, 21 Mar 2018 01:26:05 +1100 Subject: [Openstack-operators] [scientific] IRC meeting 2100UTC Message-ID: Hi all, Reminder there's a Scientific SIG meeting coming up in about 6.5 hours. All comers welcome. (https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_March_20th_2018) ==== IRC Meeting March 20th 2018 ==== 2018-03-20 2100 UTC in channel #openstack-meeting # Forum brainstorming (https://etherpad.openstack.org/p/YVR18-scientific-sig-brainstorming) # Discussion regarding state of GPU passthrough and interest in collaborating to resolve security issues # AOB -- Cheers, ~Blairo From mihalis68 at gmail.com Tue Mar 20 15:16:20 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 20 Mar 2018 11:16:20 -0400 Subject: [Openstack-operators] Ops Meetups team - minutes of team meeting 3/20/2018 Message-ID: Hello Everyone, We had a good meeting today on IRC, links below for the minutes and log. We went over some lesson learned from the Tokyo event as well as plans for the future. On that note, please stand by for some exciting news and discussion about the future of Ops Meetups and OpenStack PTG, as there seems to be increasing support for combining the two events into one. I expect an email thread about this any minute now, here on openstack-operators! Chris Meeting ended Tue Mar 20 15:00:02 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . 
(v 0.1.4)
11:00 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-03-20-14.00.html
11:00 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-03-20-14.00.txt
11:00 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-03-20-14.00.log.html

--
Chris Morgan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jimmy at openstack.org  Tue Mar 20 15:37:21 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Tue, 20 Mar 2018 10:37:21 -0500
Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback
Message-ID: <5AB12AB1.2020906@openstack.org>

Hi there!

As discussions are underway for planning the Ops meetup during the second
half of the year, I wanted to reach out with some thoughts and to see how
the Foundation staff can help. We are committed to supporting the Ops
community and want you all to know that it continues to be a major
priority for the Foundation.

UC Update

We've been meeting regularly with the User Committee to help establish
goals for the Committee as well as Operators and End Users. There are
three critical things that we identified as immediate areas of concern:

* How to involve operators, end users, and app-devs that are not in the
  normal cycle of communications within the community (IRC, MLs, Summit,
  Forum, etc..)
* Ensuring a productive communication loop between the User and Dev
  communities so feedback from OS Days, local user groups, and Ops Meetups
  are communicated and brought to the Forum in a way that allows
  developers to address concerns in future release cycles.
* Removing perceived barriers and building relationships between User and
  Dev communities

General Feedback from Ops Meetups

We're starting to lay the groundwork to address some of these concerns,
but we need feedback from the Ops community before moving forward. Some of
the feedback we've gotten from operators is they don't see their needs
being met during release cycles. We're hoping you can help us answer a few
questions and see if we can figure out a way to improve:

* What are the short and long term goals for Ops Meetups?
* Do you feel like the existing format is helping to achieve those goals?
* How can the OpenStack Foundation staff work to support your efforts?

Ops 2H 2018 Meetup

In addition to those questions, we'd like to pitch an option for you for
the next Ops Meetup. The upcoming PTG is the week of September 10 in North
America. We have an opportunity to co-locate the Ops Meetup at the PTG. If
the Ops community was interested in this, we would have separate space
with your own work sessions and separate branding for the Ops attendees.
This would also involve updating the language on the OpenStack website and
potentially renaming the PTG to something more inclusive to both groups.
Evenings at a co-located event would allow for relationship building and
problem sharing.

We're pitching this as a way to bring these two groups together, while
still allowing them to have distinct productive events. That said, we're
in no way trying to force the situation. If, as a group, you decide you'd
prefer to continue managing these events on your own, we're happy to
support that in whatever way we can.
The PTG already turned into an event where any group of contributors, whatever their focus is, can meet in person and do some work. We had the Public Cloud WG in Dublin and I feel like they had very productive discussions ! > If the Ops community was interested in this, we would have separate > space with your own work sessions and separate branding for the Ops > attendees. This would also involve updating the language on the > OpenStack website and potentially renaming the PTG to something more > inclusive to both groups. Personally, I'm not a big fan of separate branding (or "co-location"). If the "PTG" name is seen as too developer-centric, I'd rather change the event name (and clearly make it a work event for anyone contributing to OpenStack, whatever the shape of their group). Otherwise we just perpetuate the artificial separation by calling it an ops event co-located with a dev event. It's really a single "contributor" event. -- Thierry Carrez (ttx) From openstack at medberry.net Tue Mar 20 16:11:21 2018 From: openstack at medberry.net (David Medberry) Date: Tue, 20 Mar 2018 10:11:21 -0600 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: <46d42244-e382-c17b-b74b-4b0351aa446e@openstack.org> References: <5AB12AB1.2020906@openstack.org> <46d42244-e382-c17b-b74b-4b0351aa446e@openstack.org> Message-ID: On Tue, Mar 20, 2018 at 10:03 AM, Thierry Carrez wrote: > > Personally, I'm not a big fan of separate branding (or "co-location"). > If the "PTG" name is seen as too developer-centric, I'd rather change > the event name (and clearly make it a work event for anyone contributing > to OpenStack, whatever the shape of their group). Otherwise we just > perpetuate the artificial separation by calling it an ops event > co-located with a dev event. It's really a single "contributor" event. > > -- > Thierry Carrez (ttx) > Amen. What Thierry says. I wasn't in Dublin but I really got the feel from twitter, blogs, and emails it was more than just the PTG going on. Let's acknowledge that with a rename and have the Ops join in not as a "wannabes" but as Community members in full. Thanks all to suggesting/offering to do this. MAKE IT SO. -dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Mar 20 16:25:04 2018 From: amy at demarco.com (Amy Marrich) Date: Tue, 20 Mar 2018 11:25:04 -0500 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: <46d42244-e382-c17b-b74b-4b0351aa446e@openstack.org> References: <5AB12AB1.2020906@openstack.org> <46d42244-e382-c17b-b74b-4b0351aa446e@openstack.org> Message-ID: I had jokingly called it the 'OpenStack Community Working Midcycle' during the UC meeting because I always wondered if the Gathering part of PTG had made it hard for people to get support to go. But I really do like the word contributor mentioned here and I think we should stress that in the re-naming as Operators and their feedback are a very large contribution. Amy (spotz) On Tue, Mar 20, 2018 at 11:03 AM, Thierry Carrez wrote: > Jimmy McArthur wrote: > > [...] > > We've been meeting regularly with the User Committee to help establish > > goals for the Committee as well as Operators and End Users. There are > > three critical things that we identified as immediate areas of concern: > > > > * How to involve operators, end users, and app-devs that are not in > > the normal cycle of communications within the community (IRC, MLs, > > Summit, Forum, etc..) 
> > * Ensuring a productive communication loop between the User and Dev > > communities so feedback from OS Days, local user groups, and Ops > > Meetups are communicated and brought to the Forum in a way that > > allows developers to address concerns in future release cycles. > > * Removing perceived barriers and building relationships between User > > and Dev communities > > ++ Great list! > > > [...] > > Ops 2H 2018 Meetup > > In addition to those questions, we'd like to pitch an option for you for > > the next Ops Meetup. The upcoming PTG is the week of September 10 in > > North America. We have an opportunity to co-locate the Ops Meetup at the > > PTG. > > I think it's generally a good idea, especially for work sessions. The > PTG already turned into an event where any group of contributors, > whatever their focus is, can meet in person and do some work. We had the > Public Cloud WG in Dublin and I feel like they had very productive > discussions ! > > > If the Ops community was interested in this, we would have separate > > space with your own work sessions and separate branding for the Ops > > attendees. This would also involve updating the language on the > > OpenStack website and potentially renaming the PTG to something more > > inclusive to both groups. > > Personally, I'm not a big fan of separate branding (or "co-location"). > If the "PTG" name is seen as too developer-centric, I'd rather change > the event name (and clearly make it a work event for anyone contributing > to OpenStack, whatever the shape of their group). Otherwise we just > perpetuate the artificial separation by calling it an ops event > co-located with a dev event. It's really a single "contributor" event. > > -- > Thierry Carrez (ttx) > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at medberry.net Tue Mar 20 16:30:50 2018 From: openstack at medberry.net (David Medberry) Date: Tue, 20 Mar 2018 10:30:50 -0600 Subject: [Openstack-operators] Ops Meetups team - minutes of team meeting 3/20/2018 In-Reply-To: References: Message-ID: On Tue, Mar 20, 2018 at 9:16 AM, Chris Morgan wrote: > > On that note, please stand by for some exciting news and discussion about > the future of Ops Meetups and OpenStack PTG, as there seems to be > increasing support for combining the two events into one. I expect an email > thread about this any minute now, here on openstack-operators! > > Chris > > So glad to see the foundation in favor of rounding up the community! Happy to have it happen (and even likely to attend). Thanks Chris, thanks Foundation! -d -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Mar 20 16:45:52 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 20 Mar 2018 11:45:52 -0500 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: <5AB12AB1.2020906@openstack.org> References: <5AB12AB1.2020906@openstack.org> Message-ID: <5AB13AC0.9040309@openstack.org> Posting this from Dave Medberry (and with permission): While no longer technically an operator I'm 100% for colocating PTG and Ops Meetups. I know some folks don't want to feel like tagalongs but I think you are addressing that (and making a specific Ops invite and considering a name change.) 
So that's my $0.02 worth Goals for Ops Meetups: * Community and F2F time with other Ops * Shared Commiseration and potentially action items if there is such shared commiseration * Wins (how you as an op are winning at operations). Ie., best practices, tips & tricks, etc. * Updates/ino about key projects. We've had Nova cores/ptls, Swift cores/ptls, etc talk at various Ops Meetups. I'd like to see this continue. Ie, what's in the just released thing we need to worry about (or being released as we are there.) What was in the last release that was a gotcha that as we move forward with deployment we need to worry about (over and above what's on the relnotes.) I've sent this to you but feel free to share broadly. -dave David Medberry OpenStack DevOps & Cloud Manageability > Jimmy McArthur > March 20, 2018 at 10:37 AM > Hi there! > > As discussions are underway for planning the Ops meetup during the > second half of the year, I wanted to reach out with some thoughts and > to see how the Foundation staff can help. We are committed to > supporting the Ops community and want you all to know that it > continues to be a major priority for the Foundation. > > UC Update > We've been meeting regularly with the User Committee to help establish > goals for the Committee as well as Operators and End Users. There are > three critical things that we identified as immediate areas of concern: > > * How to involve operators, end users, and app-devs that are not in > the normal cycle of communications within the community (IRC, MLs, > Summit, Forum, etc..) > * Ensuring a productive communication loop between the User and Dev > communities so feedback from OS Days, local user groups, and Ops > Meetups are communicated and brought to the Forum in a way that > allows developers to address concerns in future release cycles. > * Removing perceived barriers and building relationships between > User and Dev communities > > > General Feedback from Ops Meetups > We're starting to lay the groundwork to address some of these > concerns, but we need feedback from the Ops community before moving > forward. Some of the feedback we've gotten from operators is they > don't see their needs being met during release cycles. We're hoping > you can help us answer a few questions and see if we can figure out a > way to improve: > > * What are the short and long term goals for Ops Meetups? > * Do you feel like the existing format is helping to achieve those > goals? > * How can the OpenStack Foundation staff work to support your efforts? > > > Ops 2H 2018 Meetup > In addition to those questions, we'd like to pitch an option for you > for the next Ops Meetup. The upcoming PTG is the week of September 10 > in North America. We have an opportunity to co-locate the Ops Meetup > at the PTG. > > If the Ops community was interested in this, we would have separate > space with your own work sessions and separate branding for the Ops > attendees. This would also involve updating the language on the > OpenStack website and potentially renaming the PTG to something more > inclusive to both groups. > > Evenings at a co-located event would allow for relationship building > and problem sharing. We're pitching this as a way to bring these two > groups together, while still allowing them to have distinct productive > events. That said, we're in no way trying to force the situation. If, > as a group, you decide you'd prefer to continue managing these events > on your own, we're happy to support that in whatever way we can. 
> > If you have an opinion one way or the other, please weigh in here: > https://www.surveymonkey.com/r/OpsMeetup2H2018 > > Events and Communication > Regardless of the location of the event the second half of this year, > we would like to continue refining the feedback loop and determine how > ambassadors, user group leaders and OpenStack Days play into the mix. > We plan to have Forum sessions in Vancouver and Berlin and encourage > all Users to attend to discuss ways that we can provide more > meaningful discussion between Ops and Devs. Generally, we’ve been > discussing a communication architecture around events: > > * OpenStack Days - Having an Ops track at OpenStack Days in an > effort to solicit feedback and open discussion from operators, > especially those who might normally not attend other events. The > goal here is to generate common operator issues and features, and > also share best practices. The Public Cloud WG has been > successfully pioneering this approach at several OpenStack Days > last year. > * Ops Meetup - Take the content generated by all of the OpenStack > Days Ops tracks and use them to narrow down how the Ops and Dev > communities can get the software to do what it should. The result > is a focused list of topics that we can use to create Forum > sessions. There are also opportunities to share and document > knowledge, talk about technology integration and best practices. > * Forum - Using the content generated through the prior two, we > propose sessions and discussions held at the Forum to funnel that > feedback directly to the dev community and collaborate with them. > > > This is again why I think there is some value in at least trying out > the PTG for a cycle or two, even if it's ultimately decided it isn't > fruitful. > > I realize this is a lot to absorb. Please review with relevant > parties and let us know your thoughts. The only catch on the PTG is > we would need a decision by April 4 in order to allocate the correct > amount of space. > > Thanks much for your time! > Jimmy McArthur > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Mar 20 18:09:16 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 20 Mar 2018 18:09:16 +0000 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: <5AB12AB1.2020906@openstack.org> References: <5AB12AB1.2020906@openstack.org> Message-ID: <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> On 2018-03-20 10:37:21 -0500 (-0500), Jimmy McArthur wrote: [...] > We have an opportunity to co-locate the Ops Meetup at the PTG. [...] To echo what others have said so far, I'm wholeheartedly in favor of this idea. It's no secret I'm not a fan of the seemingly artificial schism in our community between contributors who mostly write software and contributors who mostly run software. There's not enough crossover with the existing event silos, and I'd love to see increasing opportunities for those of us who mostly write software to collaborate more closely with those who mostly run software (and vice versa). Having dedicated events and separate named identities for these overlapping groups of people serves only to further divide us, rather than bring us together where we can better draw on our collective strengths to make something great. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Tim.Bell at cern.ch Tue Mar 20 19:48:31 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Tue, 20 Mar 2018 19:48:31 +0000 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> Message-ID: <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> Interesting debate, thanks for raising it. Would we still need the same style of summit forum if we have the OpenStack Community Working Gathering? One thing I have found with the forum running all week throughout the summit is that it tends to draw audience away from other talks so maybe we could reduce the forum to only a subset of the summit time? Would increasing the attendance level also lead to an increased entrance price compared to the PTG? I seem to remember the Ops meetup entrance price was nominal. Getting the input from the OpenStack days would be very useful to get coverage. I've found them to be well organised community events with good balance between local companies and interesting talks. Tim -----Original Message----- From: Jeremy Stanley Date: Tuesday, 20 March 2018 at 19:15 To: openstack-operators Subject: Re: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback On 2018-03-20 10:37:21 -0500 (-0500), Jimmy McArthur wrote: [...] > We have an opportunity to co-locate the Ops Meetup at the PTG. [...] To echo what others have said so far, I'm wholeheartedly in favor of this idea. It's no secret I'm not a fan of the seemingly artificial schism in our community between contributors who mostly write software and contributors who mostly run software. There's not enough crossover with the existing event silos, and I'd love to see increasing opportunities for those of us who mostly write software to collaborate more closely with those who mostly run software (and vice versa). Having dedicated events and separate named identities for these overlapping groups of people serves only to further divide us, rather than bring us together where we can better draw on our collective strengths to make something great. -- Jeremy Stanley From doug at doughellmann.com Tue Mar 20 20:28:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 20 Mar 2018 16:28:09 -0400 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> Message-ID: <1521575953-sup-8576@lrrr.local> Excerpts from Tim Bell's message of 2018-03-20 19:48:31 +0000: > > Interesting debate, thanks for raising it. > > Would we still need the same style of summit forum if we have the > OpenStack Community Working Gathering? One thing I have found with > the forum running all week throughout the summit is that it tends > to draw audience away from other talks so maybe we could reduce the > forum to only a subset of the summit time? I support the idea of having all contributors attend the contributor event (and rebranding it to reflect that change in emphasis), but it's not quite clear how the result would be different from the Forum. Is it just the scheduling? (Having input earlier in the cycle would be convenient, for sure.) 
Thierry's comment about "work sessions" earlier in the thread seems key. Looking over the agenda for the last Ops Meetup [1], I see a few things that, based solely on the titles (I wasn't able to attend), don't really look like topics that have been discussed at the PTG in the past ("legacy workloads migration to OpenStack", "vSwitch or Offload by SmartNIC?", "NFV hardware design"). Would those move to the Forum? Or some other event like the regional OpenStack Days? Or maybe I'm misunderstanding what was discussed -- I don't question the usefulness of the topics, it's just not clear what the outcomes were, or if those were "work sessions". I see other topics for which there is more clear overlap. Items like "Fast forward upgrades" and "LTS Releases" were covered by PTG sessions and we would have benefited from having more operator input in those sessions. "Documentation" is a tricky one now that most of the work there is being done by project teams. If operator-contributors want to talk more about docs, maybe we'll see them covered more in the project team work sessions, though, and that would be a good thing. I'm interested to hear from folks to attend the Ops meetups regularly to see what they think. Doug [1] https://etherpad.openstack.org/p/TYO-ops-meetup-2018 From thierry at openstack.org Wed Mar 21 09:38:32 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 21 Mar 2018 10:38:32 +0100 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: <1521575953-sup-8576@lrrr.local> References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> Message-ID: Doug Hellmann wrote: > Excerpts from Tim Bell's message of 2018-03-20 19:48:31 +0000: >> >> Would we still need the same style of summit forum if we have the >> OpenStack Community Working Gathering? One thing I have found with >> the forum running all week throughout the summit is that it tends >> to draw audience away from other talks so maybe we could reduce the >> forum to only a subset of the summit time? > > I support the idea of having all contributors attend the contributor > event (and rebranding it to reflect that change in emphasis), but > it's not quite clear how the result would be different from the > Forum. Is it just the scheduling? (Having input earlier in the cycle > would be convenient, for sure.) > > Thierry's comment about "work sessions" earlier in the thread seems > key. Right, I think the key difference between the PTG and Forum is that one is a work event for engaged contributors that are part of a group spending time on making OpenStack better, while the other is a venue for engaging with everyone in our community. The PTG format is really organized around work groups (whatever their focus is), enabling them to set their short-term goals, assign work items and bootstrap the work. The fact that all those work groups are co-located make it easy to participate in multiple groups, or invite other people to join the discussion where it touches their area of expertise, but it's still mostly a venue for our geographically-distributed workgroups to get together in person and get work done. That's why the agenda is so flexible at the PTG, to maximize the productivity of attendees, even if that can confuse people who can't relate to any specific work group. 
The Forum format, on the other hand, is organized around specific discussion topics where you want to maximize feedback and input. Forum sessions are not attached to a specific workgroup or team; they are defined by their topic. They are well-advertised on the event schedule, and happen at a precise time. It takes advantage of the thousands of attendees being present to get the most relevant feedback possible. It allows us to engage beyond the work groups, with people who can't spend much time getting more engaged and contributing back. The Ops meetup under its current format is mostly work sessions, and those would fit pretty well in the PTG event format. Ideally I would limit the feedback-gathering sessions there and use the Forum (and regional events like OpenStack days) to collect it. That sounds like a better way to reach out to "all users" and take into account their feedback and needs... -- Thierry Carrez (ttx) From pabelanger at redhat.com Wed Mar 21 14:49:22 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 21 Mar 2018 10:49:22 -0400 Subject: [Openstack-operators] Poll: S Release Naming In-Reply-To: <20180313235859.GA14573@localhost.localdomain> References: <20180313235859.GA14573@localhost.localdomain> Message-ID: <20180321144922.GA2922@localhost.localdomain> On Tue, Mar 13, 2018 at 07:58:59PM -0400, Paul Belanger wrote: > Greetings all, > > It is time again to cast your vote for the naming of the S Release. This time > is a little different, as we've decided to use a public polling option over per > user private URLs for voting. This means everybody should proceed to use the > following URL to cast their vote: > > https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1&akey=8cfdc1f5df5fe4d3 > > Because this is a public poll, results will currently be only viewable by myself > until the poll closes. Once closed, I'll post the URL making the results > viewable to everybody. This was done to avoid everybody seeing the results while > the public poll is running. > > The poll will officially end on 2018-03-21 23:59:59[1], and results will be > posted shortly after. > > [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst > --- > > According to the Release Naming Process, this poll is to determine the > community preferences for the name of the S release of OpenStack. It is > possible that the top choice is not viable for legal reasons, so the second or > later community preference could wind up being the name. > > Release Name Criteria > > Each release name must start with the letter of the ISO basic Latin alphabet > following the initial letter of the previous release, starting with the > initial release of "Austin". After "Z", the next name should start with > "A" again. > > The name must be composed only of the 26 characters of the ISO basic Latin > alphabet. Names which can be transliterated into this character set are also > acceptable. > > The name must refer to the physical or human geography of the region > encompassing the location of the OpenStack design summit for the > corresponding release. The exact boundaries of the geographic region under > consideration must be declared before the opening of nominations, as part of > the initiation of the selection process. > > The name must be a single word with a maximum of 10 characters. Words that > describe the feature should not be included, so "Foo City" or "Foo Peak" > would both be eligible as "Foo". 
> > Names which do not meet these criteria but otherwise sound really cool > should be added to a separate section of the wiki page and the TC may make > an exception for one or more of them to be considered in the Condorcet poll. > The naming official is responsible for presenting the list of exceptional > names for consideration to the TC before the poll opens. > > Exact Geographic Region > > The Geographic Region from where names for the S release will come is Berlin > > Proposed Names > > Spree (a river that flows through the Saxony, Brandenburg and Berlin states of > Germany) > > SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin) > > Spandau (One of the twelve boroughs of Berlin) > > Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently > abbreviated as 🍺) > > Steglitz (a locality in the South Western part of the city) > > Springer (Berlin is headquarters of Axel Springer publishing house) > > Staaken (a locality within the Spandau borough) > > Schoenholz (A zone in the Niederschönhausen district of Berlin) > > Shellhaus (A famous office building) > > Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg) > > Schiller (A park in the Mitte borough) > > Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood) > (The adjective form, Saatwinkler is also a really cool bridge but > that form is too long) > > Sonne (Sonnenallee is the name of a large street in Berlin crossing the former > wall, also translates as "sun") > > Savigny (Common place in City-West) > > Soorstreet (Street in Berlin restrict Charlottenburg) > > Solar (Skybar in Berlin) > > See (Seestraße or "See Street" in Berlin) > A friendly reminder, the naming poll will be closing later today (2018-03-21 23:59:59 UTC). If you haven't done so, please take a moment to vote. Thanks, Paul From kgiusti at gmail.com Wed Mar 21 15:30:04 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Wed, 21 Mar 2018 11:30:04 -0400 Subject: [Openstack-operators] Deprecation Notice: Pika driver for oslo.messaging Message-ID: Folks, As announced last year the Oslo team has deprecated support for the Pika transport in oslo.messaging with removal planned for Rocky. We're not aware of any deployments using this transport and its removal is not anticipated to affect anyone. More details can be found in the original announcement [0]. This is notice that the removal is currently underway [1]. Thanks, [0] http://lists.openstack.org/pipermail/openstack-operators/2017-May/013579.html [1] https://review.openstack.org/#/c/536960/ -- Ken Giusti (kgiusti at gmail.com) From mrhillsman at gmail.com Wed Mar 21 15:45:39 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Wed, 21 Mar 2018 10:45:39 -0500 Subject: [Openstack-operators] [forum] We want your session ideas for the Vancouver Forum! Message-ID: Hey everyone, Please take time to put ideas for sessions at the forum in the TC and/or UC catch-all etherpads or any of the others that are appropriate: https://wiki.openstack.org/wiki/Forum/Vancouver2018 We really want to get as many session ideas as possible so that the committee has too many to choose from :) Here is an idea of the types of sessions to think about proposing: *Project-specific sessions* Where developers can ask users specific questions about their experience, users can provide feedback from the last release and cross-community collaboration on the priorities and 'blue sky' ideas for the next release can occur. 
*Strategic, whole-of-community discussions* To think about the big picture, including beyond just one release cycle and new technologies *Cross-project sessions* In a similar vein to what has happened at past design summits, but with increased emphasis on issues that are of relevant to all areas of the community If you have organized any events in the past year you probably have heard of or been in some sessions that are perfect for the Forum. -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Thu Mar 22 00:32:38 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 21 Mar 2018 20:32:38 -0400 Subject: [Openstack-operators] OpenStack "S" Release Naming Preliminary Results Message-ID: <20180322003238.GB14691@localhost.localdomain> Hello all! We decided to run a public poll this time around, we'll likely discuss the process during a TC meeting, but we'd love the hear your feedback. The raw results are below - however ... **PLEASE REMEMBER** that these now have to go through legal vetting. So it is too soon to say 'OpenStack Solar' is our next release, given that previous polls have had some issues with the top choice. In any case, the names will been sent off to legal for vetting. As soon as we have a final winner, I'll let you all know. https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_40b95cb2be3fcdf1&rkey=c04ca6bca83a1427 Result 1. Solar (Condorcet winner: wins contests with all other choices) 2. Stein loses to Solar by 159–138 3. Spree loses to Solar by 175–122, loses to Stein by 148–141 4. Sonne loses to Solar by 190–99, loses to Spree by 174–97 5. Springer loses to Solar by 214–60, loses to Sonne by 147–103 6. Spandau loses to Solar by 195–88, loses to Springer by 125–118 7. See loses to Solar by 203–61, loses to Spandau by 121–111 8. Schiller loses to Solar by 207–70, loses to See by 112–106 9. SBahn loses to Solar by 212–74, loses to Schiller by 111–101 10. Staaken loses to Solar by 219–59, loses to SBahn by 115–89 11. Shellhaus loses to Solar by 213–61, loses to Staaken by 94–85 12. Steglitz loses to Solar by 216–50, loses to Shellhaus by 90–83 13. Saatwinkel loses to Solar by 219–55, loses to Steglitz by 96–57 14. Savigny loses to Solar by 219–51, loses to Saatwinkel by 77–76 15. Schoenholz loses to Solar by 221–46, loses to Savigny by 78–70 16. Suedkreuz loses to Solar by 220–50, loses to Schoenholz by 68–67 17. Soorstreet loses to Solar by 226–32, loses to Suedkreuz by 75–58 - Paul From kevin at benton.pub Thu Mar 22 09:02:23 2018 From: kevin at benton.pub (Kevin Benton) Date: Thu, 22 Mar 2018 04:02:23 -0500 Subject: [Openstack-operators] [openstack operators][neutron] Neutron Router getting address not inside allocation range on provider network In-Reply-To: References: <96de32bbd8ec42008203d31eb4b6a42d@XCH15-05-06.nw.nos.boeing.com> Message-ID: I think you might have uncovered an edge-case that should probably be filed as a bug against Neutron. If a router interface is attached using a reference to a subnet, it always tries to use the address in the "gateway_ip" of the subnet: https://github.com/openstack/neutron/blob/282d3da614f24a6385c63a926a48845d3f6d72a3/neutron/db/l3_db.py#L797-L798 My opinion is that Neutron probably shouldn't allow grabbing the default gateway if you aren't the owner of the subnet, but that is a fix that might not land for a while depending on their priorities (I'm no longer an active developer). 
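To make that concrete (a rough sketch rather than commands from this thread - the router, network and subnet names and the 203.0.113.50 address are placeholders): the problematic case is attaching an interface by subnet, which is what falls back to the subnet's gateway_ip when no port or fixed IP is given:

openstack router add subnet tenant-router provider-subnet

Attaching a pre-created port that carries an explicit address from the allocation range avoids that fallback entirely:

openstack port create my-router-port --network provider-net --fixed-ip subnet=provider-subnet,ip-address=203.0.113.50
openstack router add port tenant-router my-router-port
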
In the meantime, I recommend that you create a neutron port as an admin on the public network using the gateway_ip of the network to represent your real gateway router. This will prevent anyone from being able to attach a router using the subnet as a reference since the gateway_ip address will already be in use. Cheers, Kevin Benton On Sat, Mar 17, 2018 at 4:10 PM, Chris Apsey wrote: > All, > > Had a strange incident the other day that seems like it shouldn't be > possible inside of neutron... > > We are currently running Queens on Ubuntu 16.04 w/ the linuxbridge ml2 > plugin with vxlan overlays. We have a single, large provider network that > we have set to 'shared' and 'external', so people who need to do things > that don't work well with NAT can connect their instances directly to the > provider network. Our 'allocation range' as defined in our provider subnet > is dedicated to tenants, so there should be no conflicts. > > The other day, one of our users connected a neutron router to the provider > network (not via the 'external network' option, but rather via the normal > 'add interface' option) and neglected to specify an IP address. The > neutron router decided that it was now the gateway for the entire provider > network and began arp'ing as such (I'm sure you can imagine the results). > > To me, this seems like it should be disallowed inside of neutron (you > shouldn't be able to specify an IP address for a router interface that > isn't explicitly part of your allocation range on said subnet). Does > neutron just expect issues like this to be handled by the physical provider > infrastructure (spoofing prevention, etc.)? > > Thanks, > > --- > v/r > > Chris Apsey > bitskrieg at bitskrieg.net > https://www.bitskrieg.net > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laszlo.budai at gmail.com Thu Mar 22 11:23:51 2018 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Thu, 22 Mar 2018 13:23:51 +0200 Subject: [Openstack-operators] usernames in horizon instance action log Message-ID: <236ddf06-6b83-5010-876f-38246759c53e@gmail.com> Dear all, is there a way to configure horizon to display the username instead of user ID in the instances action log? Right now it displays the: Req Id, Action, Start time, userID, Message Kind regards, Laszlo From bitskrieg at bitskrieg.net Thu Mar 22 13:56:25 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Thu, 22 Mar 2018 09:56:25 -0400 Subject: [Openstack-operators] [openstack operators][neutron] Neutron Router getting address not inside allocation range on provider network In-Reply-To: References: <96de32bbd8ec42008203d31eb4b6a42d@XCH15-05-06.nw.nos.boeing.com> Message-ID: <1624dff1328.2784.5f0d7f2baa7831a2bbe6450f254d9a24@bitskrieg.net> Thanks, Kevin. I agree that it seems like a bug. I'll go ahead and file. Chris On March 22, 2018 05:10:08 Kevin Benton wrote: I think you might have uncovered an edge-case that should probably be filed as a bug against Neutron. 
If a router interface is attached using a reference to a subnet, it always tries to use the address in the "gateway_ip" of the subnet: https://github.com/openstack/neutron/blob/282d3da614f24a6385c63a926a48845d3f6d72a3/neutron/db/l3_db.py#L797-L798 My opinion is that Neutron probably shouldn't allow grabbing the default gateway if you aren't the owner of the subnet, but that is a fix that might not land for a while depending on their priorities (I'm no longer an active developer). In the meantime, I recommend that you create a neutron port as an admin on the public network using the gateway_ip of the network to represent your real gateway router. This will prevent anyone from being able to attach a router using the subnet as a reference since the gateway_ip address will already be in use. Cheers, Kevin Benton On Sat, Mar 17, 2018 at 4:10 PM, Chris Apsey wrote: All, Had a strange incident the other day that seems like it shouldn't be possible inside of neutron... We are currently running Queens on Ubuntu 16.04 w/ the linuxbridge ml2 plugin with vxlan overlays. We have a single, large provider network that we have set to 'shared' and 'external', so people who need to do things that don't work well with NAT can connect their instances directly to the provider network. Our 'allocation range' as defined in our provider subnet is dedicated to tenants, so there should be no conflicts. The other day, one of our users connected a neutron router to the provider network (not via the 'external network' option, but rather via the normal 'add interface' option) and neglected to specify an IP address. The neutron router decided that it was now the gateway for the entire provider network and began arp'ing as such (I'm sure you can imagine the results). To me, this seems like it should be disallowed inside of neutron (you shouldn't be able to specify an IP address for a router interface that isn't explicitly part of your allocation range on said subnet). Does neutron just expect issues like this to be handled by the physical provider infrastructure (spoofing prevention, etc.)? Thanks, --- v/r Chris Apsey bitskrieg at bitskrieg.net https://www.bitskrieg.net _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From jon at csail.mit.edu Thu Mar 22 14:03:05 2018 From: jon at csail.mit.edu (Jonathan Proulx) Date: Thu, 22 Mar 2018 10:03:05 -0400 Subject: [Openstack-operators] OpenStack "S" Release Naming Preliminary Results In-Reply-To: <20180322003238.GB14691@localhost.localdomain> References: <20180322003238.GB14691@localhost.localdomain> Message-ID: <20180322140305.GI21100@csail.mit.edu> On Wed, Mar 21, 2018 at 08:32:38PM -0400, Paul Belanger wrote: :6. Spandau loses to Solar by 195–88, loses to Springer by 125–118 Given this is at #6 and formal vetting is yet to come it's probably not much of an issue, but "Spandau's" first association for many will be Nazi war criminals via Spandau Prison https://en.wikipedia.org/wiki/Spandau_Prison So best avoided to say the least. -Jon From ignaziocassano at gmail.com Thu Mar 22 14:06:12 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 22 Mar 2018 15:06:12 +0100 Subject: [Openstack-operators] fwaas v2 Message-ID: Hello all, I am tryining to use fwaas v2 on centos 7 openstack ocata. 
After creating firewall rules and a policy, I am trying to create a firewall group. I am able to create the firewall group, but it does not work when I try to set the ports on it. openstack firewall group set --port 87173e27-c2b3-4a67-83d0-d8645d9f309b prova Failed to set firewall group 'prova': Firewall Group Port 87173e27-c2b3-4a67-83d0-d8645d9f309b is invalid Neutron server returns request_ids: ['req-9ef8ad1e-9fad-4956-8aff-907c32d01e1f'] I tried with both router and VM ports, but the result is always the same. Could anyone help me, please? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at nycresistor.com Thu Mar 22 15:37:34 2018 From: matt at nycresistor.com (Matt Joyce) Date: Thu, 22 Mar 2018 11:37:34 -0400 Subject: [Openstack-operators] OpenStack "S" Release Naming Preliminary Results In-Reply-To: <20180322140305.GI21100@csail.mit.edu> References: <20180322003238.GB14691@localhost.localdomain> <20180322140305.GI21100@csail.mit.edu> Message-ID: +1 On Thu, Mar 22, 2018 at 10:03 AM, Jonathan Proulx wrote: > On Wed, Mar 21, 2018 at 08:32:38PM -0400, Paul Belanger wrote: > > :6. Spandau loses to Solar by 195–88, loses to Springer by 125–118 > > Given this is at #6 and formal vetting is yet to come it's probably > not much of an issue, but "Spandau's" first association for many will > be Nazi war criminals via Spandau Prison > https://en.wikipedia.org/wiki/Spandau_Prison > > So best avoided to say the least. > > -Jon > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvanwink at rackspace.com Thu Mar 22 21:54:52 2018 From: mvanwink at rackspace.com (Matt Van Winkle) Date: Thu, 22 Mar 2018 21:54:52 +0000 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> Message-ID: Hey folks, Great discussion! There are a number of points to comment on going back through the last few emails. I'll try to do so in line with Thierry's latest below. From a User Committee perspective (and as a member of the Ops Meetup planning team), I am a convert to the idea of co-location and have come to see a lot of value in it. I'll point some of that out as I respond to specific comments, but first a couple of overarching points. In the current model, the Forum sessions are very much about WHAT the software should do. Keeping the discussions focused on behavior, feature and function has made it much easier for an operator to participate effectively in the conversation versus the older design sessions that focused largely on blueprints, coding approaches, etc. These are HOW the developers should make things work and, now, are a large part of the focus of the PTG. I realize it's not that cut and dry, but the current model has allowed for this division of "what" and "how" in many areas, and I know several who have found it valuable. The other contextual thing to remember is that the PTG was the effective combining of all the various team mid-cycle meetups that were occurring. The current Ops mid-cycle was born in that same period. 
While its purpose was a little different, its spirit is the same - gather a team (in this case operators) together outside the hustle and bustle of a summit to discuss common issues, topics, etc. I'll also point out that they have been good vehicles in the Ops community to get new folks integrated. For the purpose of this discussion, though, one could argue this is just bringing the last mid-cycle event into the fold. On 3/21/18, 4:40 AM, "Thierry Carrez" wrote: Doug Hellmann wrote: > Excerpts from Tim Bell's message of 2018-03-20 19:48:31 +0000: >> >> Would we still need the same style of summit forum if we have the >> OpenStack Community Working Gathering? One thing I have found with >> the forum running all week throughout the summit is that it tends >> to draw audience away from other talks so maybe we could reduce the >> forum to only a subset of the summit time? > > I support the idea of having all contributors attend the contributor > event (and rebranding it to reflect that change in emphasis), but > it's not quite clear how the result would be different from the > Forum. Is it just the scheduling? (Having input earlier in the cycle > would be convenient, for sure.) > > Thierry's comment about "work sessions" earlier in the thread seems > key. Right, I think the key difference between the PTG and Forum is that one is a work event for engaged contributors that are part of a group spending time on making OpenStack better, while the other is a venue for engaging with everyone in our community. The PTG format is really organized around work groups (whatever their focus is), enabling them to set their short-term goals, assign work items and bootstrap the work. The fact that all those work groups are co-located make it easy to participate in multiple groups, or invite other people to join the discussion where it touches their area of expertise, but it's still mostly a venue for our geographically-distributed workgroups to get together in person and get work done. That's why the agenda is so flexible at the PTG, to maximize the productivity of attendees, even if that can confuse people who can't relate to any specific work group. Exactly. I know I way oversimplified it as working on the "how", but it's very important to honor this aspect of the current PTG. We need this time for the devs and teams to take output from the previous forum sessions (or earlier input) and turn it into plans for the N+1 version. While some folks could drift between sessions, co-locating the Ops mid-cycle is just that - leveraging venue, sponsors, and Foundation staff support across one, larger event - it should NOT disrupt the current spirit of the sessions Thierry describes above. The Forum format, on the other hand, is organized around specific discussion topics where you want to maximize feedback and input. Forum sessions are not attached to a specific workgroup or team, they are defined by their topic. They are well-advertised on the event schedule, and happen at a precise time. It takes advantage of the thousands of attendees being present to get the most relevant feedback possible. It allows to engage beyond the work groups, to people who can't spend much time getting more engaged and contribute back. Agreed. Again, I oversimplified as the "what", but these sessions are so valuable as they bring dev and ops into a room and focus on what the software needs to do or the impact (positive or negative) that planned behaviors might have on Operators and users. 
To Tim's earlier question, no, I think this change doesn't reduce the need for Forum sessions. If anything, I think it increases the need for us to get REALLY good at channeling output from the Ops mid-cycle into session topics at the next Summit. The Ops meetup under its current format is mostly work sessions, and those would fit pretty well in the PTG event format. Ideally I would limit the feedback-gathering sessions there and use the Forum (and regional events like OpenStack days) to collect it. That sounds like a better way to reach out to "all users" and take into account their feedback and needs... They are largely work sessions, but independent of the co-location discussion, the UC is focused on improving the ability for tangible output to come from Ops mid-cycles, OpenStack Days and regional meetups - largely in the form of Forum sessions and ultimately changes in the software. So we, as a committee, see a lot of similarities in what you just said. I'm not bold enough to predict exactly how co-location might change the tone/topic of the Ops sessions, but I agree that we shouldn't expect a lot of real-time feedback time with devs at the PTG/mid-summit event (whatever we end up calling it). We want the devs to be focused on what's already planned for the N+1 version or beyond. The conversations/sessions at the Ops portion of the event would hopefully lead to Forum sessions on N+2 features, functions, bug fixes, etc. Overall, I still see co-location as a positive move. There will be some tricky bits we need to figure out between the "two sides" of the event as we want to MINIMIZE any perceived us/them between dev and ops - not add to it. But the work sessions themselves should still honor the spirit of the PTG and Ops Mid-cycle as they are today. We just get the added benefit of time together as a whole community - and hopefully solve a few logistic/finance/sponsorship/venue issues that trouble one event or the other today. Thanks! VW -- Thierry Carrez (ttx) _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From mrhillsman at gmail.com Fri Mar 23 02:08:02 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 22 Mar 2018 21:08:02 -0500 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> Message-ID: Thierry and Matt both hit the nail on the head in terms of the very base/purpose/point of the Forum, PTG, and Ops Midcycles and here is my +2 since I have spoken with both and others outside of this thread and agree with them here as I have in individual discussions. If nothing else I agree with Jimmy's original statement of at least giving this a try. On Thu, Mar 22, 2018 at 4:54 PM, Matt Van Winkle wrote: > Hey folks, > Great discussion! There are number of points to comment on going back > through the last few emails. I'll try to do so in line with Theirry's > latest below. From a User Committee perspective (and as a member of the > Ops Meetup planning team), I am a convert to the idea of co-location, but > have come to see a lot of value in it. I'll point some of that out as I > respond to specific comments, but first a couple of overarching points. 
> > In the current model, the Forum sessions are very much about WHAT the > software should do. Keeping the discussions focused on behavior, feature > and function has made it much easier for an operator to participate > effectively in the conversation versus the older, design sessions, that > focused largely on blueprints, coding approaches, etc. These are HOW the > developers should make things work and, now, are a large part of the focus > of the PTG. I realize it's not that cut and dry, but current model has > allowed for this division of "what" and "how" in many areas, and I know > several who have found it valuable. > > The other contextual thing to remember is the PTG was the effective > combining of all the various team mid-cycle meetups that were occurring. > The current Ops mid-cycle was born in that same period. While it's purpose > was a little different, it's spirit is the same - gather a team (in this > case operators) together outside the hustle and bustle of a summit to > discuss common issues, topics, etc. I'll also point out, that they have > been good vehicles in the Ops community to get new folks integrated. For > the purpose of this discussion, though, one could argue this is just > bringing the last mid-cycle event in to the fold. > > On 3/21/18, 4:40 AM, "Thierry Carrez" wrote: > > Doug Hellmann wrote: > > Excerpts from Tim Bell's message of 2018-03-20 19:48:31 +0000: > >> > >> Would we still need the same style of summit forum if we have the > >> OpenStack Community Working Gathering? One thing I have found with > >> the forum running all week throughout the summit is that it tends > >> to draw audience away from other talks so maybe we could reduce the > >> forum to only a subset of the summit time? > > > > I support the idea of having all contributors attend the contributor > > event (and rebranding it to reflect that change in emphasis), but > > it's not quite clear how the result would be different from the > > Forum. Is it just the scheduling? (Having input earlier in the cycle > > would be convenient, for sure.) > > > > Thierry's comment about "work sessions" earlier in the thread seems > > key. > > Right, I think the key difference between the PTG and Forum is that one > is a work event for engaged contributors that are part of a group > spending time on making OpenStack better, while the other is a venue > for > engaging with everyone in our community. > > The PTG format is really organized around work groups (whatever their > focus is), enabling them to set their short-term goals, assign work > items and bootstrap the work. The fact that all those work groups are > co-located make it easy to participate in multiple groups, or invite > other people to join the discussion where it touches their area of > expertise, but it's still mostly a venue for our > geographically-distributed workgroups to get together in person and get > work done. That's why the agenda is so flexible at the PTG, to maximize > the productivity of attendees, even if that can confuse people who > can't > relate to any specific work group. > > Exactly. I know I way over simplified it as working on the "how", but > it's very important to honor this aspect of the current PTG. We need this > time for the devs and teams to take output from the previous forum sessions > (or earlier input) and turn it into plans for the N+1 version. 
While some > folks could drift between sessions, co-locating the Ops mid-cycle is just > that - leveraging venue, sponsors, and Foundation staff support across one, > larger event - it should NOT disrupt the current spirit of the sessions > Theirry describes above > > The Forum format, on the other hand, is organized around specific > discussion topics where you want to maximize feedback and input. Forum > sessions are not attached to a specific workgroup or team, they are > defined by their topic. They are well-advertised on the event schedule, > and happen at a precise time. It takes advantage of the thousands of > attendees being present to get the most relevant feedback possible. It > allows to engage beyond the work groups, to people who can't spend much > time getting more engaged and contribute back. > > Agreed. Again, I over simplified as the "what", but these sessions are so > valuable as the bring dev and ops in a room and focus on what the software > needs to do or the impact (positive or negative) that planned behaviors > might have on Operators and users. To Tim's earlier question, no I think > this change doesn't reduce the need for Forum sessions. If anything, I > think it increases the need for us to get REALLY good at channeling output > from the Ops mid-cycle in to session topics at the next Summit. > > The Ops meetup under its current format is mostly work sessions, and > those would fit pretty well in the PTG event format. Ideally I would > limit the feedback-gathering sessions there and use the Forum (and > regional events like OpenStack days) to collect it. That sounds like a > better way to reach out to "all users" and take into account their > feedback and needs... > > They are largely work sessions, but independent of the co-location > discussion, the UC is focused on improving the ability for tangible output > to come from Ops mid-cycles, OpenStack Days and regional meetups - largely > in the form of Forum sessions and ultimately changes in the software. So > we, as a committee, see a lot of similarities in what you just said. I'm > not bold enough to predict exactly how co-location might change the > tone/topic of the Ops sessions, but I agree that we shouldn't expect a lot > of real-time feedback time with devs at the PTG/mid-summit event (what ever > we end up calling it). We want the devs to be focused on what's already > planned for the N+1 version or beyond. The conversations/sessions at the > Ops portion of the event would hopefully lead to Forum sessions on N+2 > features, functions, bug fixes, etc > > Overall, I still see co-location as a positive move. There will be some > tricky bits we need to figure out between to the "two sides" of the event > as we want to MINIMIZE any perceived us/them between dev and ops - not add > to it. But, the work session themselves, should still honor the spirit of > the PTG and Ops Mid-cycle as they are today. We just get the added benefit > of time together as a whole community - and hopefully solve a few > logistic/finance/sponsorship/venue issues that trouble one event or the > other today. > > Thanks! 
> VW > -- > Thierry Carrez (ttx) > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yihleong at gmail.com Fri Mar 23 04:02:48 2018 From: yihleong at gmail.com (Yih Leong, Sun.) Date: Thu, 22 Mar 2018 21:02:48 -0700 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> Message-ID: I support the ideas to try colocating the next Ops Midcycle and PTG. Although scheduling could be a potential challenge but it worth give it a try. Also having an joint social event in the evening can also help Dev/Ops to meet and offline discussion. :) On Thursday, March 22, 2018, Melvin Hillsman wrote: > Thierry and Matt both hit the nail on the head in terms of the very > base/purpose/point of the Forum, PTG, and Ops Midcycles and here is my +2 > since I have spoke with both and others outside of this thread and agree > with them here as I have in individual discussions. > > If nothing else I agree with Jimmy's original statement of at least giving > this a try. > > On Thu, Mar 22, 2018 at 4:54 PM, Matt Van Winkle > wrote: > >> Hey folks, >> Great discussion! There are number of points to comment on going back >> through the last few emails. I'll try to do so in line with Theirry's >> latest below. From a User Committee perspective (and as a member of the >> Ops Meetup planning team), I am a convert to the idea of co-location, but >> have come to see a lot of value in it. I'll point some of that out as I >> respond to specific comments, but first a couple of overarching points. >> >> In the current model, the Forum sessions are very much about WHAT the >> software should do. Keeping the discussions focused on behavior, feature >> and function has made it much easier for an operator to participate >> effectively in the conversation versus the older, design sessions, that >> focused largely on blueprints, coding approaches, etc. These are HOW the >> developers should make things work and, now, are a large part of the focus >> of the PTG. I realize it's not that cut and dry, but current model has >> allowed for this division of "what" and "how" in many areas, and I know >> several who have found it valuable. >> >> The other contextual thing to remember is the PTG was the effective >> combining of all the various team mid-cycle meetups that were occurring. >> The current Ops mid-cycle was born in that same period. While it's purpose >> was a little different, it's spirit is the same - gather a team (in this >> case operators) together outside the hustle and bustle of a summit to >> discuss common issues, topics, etc. I'll also point out, that they have >> been good vehicles in the Ops community to get new folks integrated. For >> the purpose of this discussion, though, one could argue this is just >> bringing the last mid-cycle event in to the fold. 
>> >> On 3/21/18, 4:40 AM, "Thierry Carrez" wrote: >> >> Doug Hellmann wrote: >> > Excerpts from Tim Bell's message of 2018-03-20 19:48:31 +0000: >> >> >> >> Would we still need the same style of summit forum if we have the >> >> OpenStack Community Working Gathering? One thing I have found with >> >> the forum running all week throughout the summit is that it tends >> >> to draw audience away from other talks so maybe we could reduce the >> >> forum to only a subset of the summit time? >> > >> > I support the idea of having all contributors attend the contributor >> > event (and rebranding it to reflect that change in emphasis), but >> > it's not quite clear how the result would be different from the >> > Forum. Is it just the scheduling? (Having input earlier in the cycle >> > would be convenient, for sure.) >> > >> > Thierry's comment about "work sessions" earlier in the thread seems >> > key. >> >> Right, I think the key difference between the PTG and Forum is that >> one >> is a work event for engaged contributors that are part of a group >> spending time on making OpenStack better, while the other is a venue >> for >> engaging with everyone in our community. >> >> The PTG format is really organized around work groups (whatever their >> focus is), enabling them to set their short-term goals, assign work >> items and bootstrap the work. The fact that all those work groups are >> co-located make it easy to participate in multiple groups, or invite >> other people to join the discussion where it touches their area of >> expertise, but it's still mostly a venue for our >> geographically-distributed workgroups to get together in person and >> get >> work done. That's why the agenda is so flexible at the PTG, to >> maximize >> the productivity of attendees, even if that can confuse people who >> can't >> relate to any specific work group. >> >> Exactly. I know I way over simplified it as working on the "how", but >> it's very important to honor this aspect of the current PTG. We need this >> time for the devs and teams to take output from the previous forum sessions >> (or earlier input) and turn it into plans for the N+1 version. While some >> folks could drift between sessions, co-locating the Ops mid-cycle is just >> that - leveraging venue, sponsors, and Foundation staff support across one, >> larger event - it should NOT disrupt the current spirit of the sessions >> Theirry describes above >> >> The Forum format, on the other hand, is organized around specific >> discussion topics where you want to maximize feedback and input. Forum >> sessions are not attached to a specific workgroup or team, they are >> defined by their topic. They are well-advertised on the event >> schedule, >> and happen at a precise time. It takes advantage of the thousands of >> attendees being present to get the most relevant feedback possible. It >> allows to engage beyond the work groups, to people who can't spend >> much >> time getting more engaged and contribute back. >> >> Agreed. Again, I over simplified as the "what", but these sessions are >> so valuable as the bring dev and ops in a room and focus on what the >> software needs to do or the impact (positive or negative) that planned >> behaviors might have on Operators and users. To Tim's earlier question, no >> I think this change doesn't reduce the need for Forum sessions. If >> anything, I think it increases the need for us to get REALLY good at >> channeling output from the Ops mid-cycle in to session topics at the next >> Summit. 
>> >> The Ops meetup under its current format is mostly work sessions, and >> those would fit pretty well in the PTG event format. Ideally I would >> limit the feedback-gathering sessions there and use the Forum (and >> regional events like OpenStack days) to collect it. That sounds like a >> better way to reach out to "all users" and take into account their >> feedback and needs... >> >> They are largely work sessions, but independent of the co-location >> discussion, the UC is focused on improving the ability for tangible output >> to come from Ops mid-cycles, OpenStack Days and regional meetups - largely >> in the form of Forum sessions and ultimately changes in the software. So >> we, as a committee, see a lot of similarities in what you just said. I'm >> not bold enough to predict exactly how co-location might change the >> tone/topic of the Ops sessions, but I agree that we shouldn't expect a lot >> of real-time feedback time with devs at the PTG/mid-summit event (what ever >> we end up calling it). We want the devs to be focused on what's already >> planned for the N+1 version or beyond. The conversations/sessions at the >> Ops portion of the event would hopefully lead to Forum sessions on N+2 >> features, functions, bug fixes, etc >> >> Overall, I still see co-location as a positive move. There will be some >> tricky bits we need to figure out between to the "two sides" of the event >> as we want to MINIMIZE any perceived us/them between dev and ops - not add >> to it. But, the work session themselves, should still honor the spirit of >> the PTG and Ops Mid-cycle as they are today. We just get the added benefit >> of time together as a whole community - and hopefully solve a few >> logistic/finance/sponsorship/venue issues that trouble one event or the >> other today. >> >> Thanks! >> VW >> -- >> Thierry Carrez (ttx) >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k-operators >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Mar 23 12:51:53 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 23 Mar 2018 13:51:53 +0100 Subject: [Openstack-operators] Ocata security groups don't work with LBaaS v2 ports Message-ID: Hi all, following the ocata documentation, I am trying to apply security group to a lbaas v2 port but it seems not working because any filter is applyed. The Port Security Enabled is True on lbaas port, so I expect applying security group should work. Is this a bug ? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jon at csail.mit.edu Fri Mar 23 15:07:13 2018 From: jon at csail.mit.edu (Jonathan Proulx) Date: Fri, 23 Mar 2018 11:07:13 -0400 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> Message-ID: <20180323150713.GK21100@csail.mit.edu> On Thu, Mar 22, 2018 at 09:02:48PM -0700, Yih Leong, Sun. wrote: :I support the ideas to try colocating the next Ops Midcycle and PTG. :Although scheduling could be a potential challenge but it worth give it a :try. : :Also having an joint social event in the evening can also help Dev/Ops to :meet and offline discussion. :) Agreeing stongly with Matt and Melvin's comments about Forum -vs- PTG/OpsMidcycle PTG/OpsMidcycle (as I see them) are about focusing inside teams to get work done ("how" is a a good one word I think). The advantage of colocation is for cross team questions like "we're thinking of doing this thing this way, does this have any impacts on your work my might not have considered", can get a quick respose in the hall, at lunch, or over beers as Yih Leong suggests. Forum has become about coming to gather across groups for more conceptual "what" discussions. So I also thing they are very distinct and I do see potential benefits to colocation. We do need to watch out for downsides. The concerns around colocation seemed mostly about larger events costing more and being generally harder to organize. If we try we will find out if there is merit to this concern, but (IMO) it is important to keep both of the events as cheap and simple as possible. -Jon : :On Thursday, March 22, 2018, Melvin Hillsman wrote: : :> Thierry and Matt both hit the nail on the head in terms of the very :> base/purpose/point of the Forum, PTG, and Ops Midcycles and here is my +2 :> since I have spoke with both and others outside of this thread and agree :> with them here as I have in individual discussions. :> :> If nothing else I agree with Jimmy's original statement of at least giving :> this a try. :> :> On Thu, Mar 22, 2018 at 4:54 PM, Matt Van Winkle :> wrote: :> :>> Hey folks, :>> Great discussion! There are number of points to comment on going back :>> through the last few emails. I'll try to do so in line with Theirry's :>> latest below. From a User Committee perspective (and as a member of the :>> Ops Meetup planning team), I am a convert to the idea of co-location, but :>> have come to see a lot of value in it. I'll point some of that out as I :>> respond to specific comments, but first a couple of overarching points. :>> :>> In the current model, the Forum sessions are very much about WHAT the :>> software should do. Keeping the discussions focused on behavior, feature :>> and function has made it much easier for an operator to participate :>> effectively in the conversation versus the older, design sessions, that :>> focused largely on blueprints, coding approaches, etc. These are HOW the :>> developers should make things work and, now, are a large part of the focus :>> of the PTG. I realize it's not that cut and dry, but current model has :>> allowed for this division of "what" and "how" in many areas, and I know :>> several who have found it valuable. :>> :>> The other contextual thing to remember is the PTG was the effective :>> combining of all the various team mid-cycle meetups that were occurring. 
:>> The current Ops mid-cycle was born in that same period. While it's purpose :>> was a little different, it's spirit is the same - gather a team (in this :>> case operators) together outside the hustle and bustle of a summit to :>> discuss common issues, topics, etc. I'll also point out, that they have :>> been good vehicles in the Ops community to get new folks integrated. For :>> the purpose of this discussion, though, one could argue this is just :>> bringing the last mid-cycle event in to the fold. :>> :>> On 3/21/18, 4:40 AM, "Thierry Carrez" wrote: :>> :>> Doug Hellmann wrote: :>> > Excerpts from Tim Bell's message of 2018-03-20 19:48:31 +0000: :>> >> :>> >> Would we still need the same style of summit forum if we have the :>> >> OpenStack Community Working Gathering? One thing I have found with :>> >> the forum running all week throughout the summit is that it tends :>> >> to draw audience away from other talks so maybe we could reduce the :>> >> forum to only a subset of the summit time? :>> > :>> > I support the idea of having all contributors attend the contributor :>> > event (and rebranding it to reflect that change in emphasis), but :>> > it's not quite clear how the result would be different from the :>> > Forum. Is it just the scheduling? (Having input earlier in the cycle :>> > would be convenient, for sure.) :>> > :>> > Thierry's comment about "work sessions" earlier in the thread seems :>> > key. :>> :>> Right, I think the key difference between the PTG and Forum is that :>> one :>> is a work event for engaged contributors that are part of a group :>> spending time on making OpenStack better, while the other is a venue :>> for :>> engaging with everyone in our community. :>> :>> The PTG format is really organized around work groups (whatever their :>> focus is), enabling them to set their short-term goals, assign work :>> items and bootstrap the work. The fact that all those work groups are :>> co-located make it easy to participate in multiple groups, or invite :>> other people to join the discussion where it touches their area of :>> expertise, but it's still mostly a venue for our :>> geographically-distributed workgroups to get together in person and :>> get :>> work done. That's why the agenda is so flexible at the PTG, to :>> maximize :>> the productivity of attendees, even if that can confuse people who :>> can't :>> relate to any specific work group. :>> :>> Exactly. I know I way over simplified it as working on the "how", but :>> it's very important to honor this aspect of the current PTG. We need this :>> time for the devs and teams to take output from the previous forum sessions :>> (or earlier input) and turn it into plans for the N+1 version. While some :>> folks could drift between sessions, co-locating the Ops mid-cycle is just :>> that - leveraging venue, sponsors, and Foundation staff support across one, :>> larger event - it should NOT disrupt the current spirit of the sessions :>> Theirry describes above :>> :>> The Forum format, on the other hand, is organized around specific :>> discussion topics where you want to maximize feedback and input. Forum :>> sessions are not attached to a specific workgroup or team, they are :>> defined by their topic. They are well-advertised on the event :>> schedule, :>> and happen at a precise time. It takes advantage of the thousands of :>> attendees being present to get the most relevant feedback possible. 
It :>> allows to engage beyond the work groups, to people who can't spend :>> much :>> time getting more engaged and contribute back. :>> :>> Agreed. Again, I over simplified as the "what", but these sessions are :>> so valuable as the bring dev and ops in a room and focus on what the :>> software needs to do or the impact (positive or negative) that planned :>> behaviors might have on Operators and users. To Tim's earlier question, no :>> I think this change doesn't reduce the need for Forum sessions. If :>> anything, I think it increases the need for us to get REALLY good at :>> channeling output from the Ops mid-cycle in to session topics at the next :>> Summit. :>> :>> The Ops meetup under its current format is mostly work sessions, and :>> those would fit pretty well in the PTG event format. Ideally I would :>> limit the feedback-gathering sessions there and use the Forum (and :>> regional events like OpenStack days) to collect it. That sounds like a :>> better way to reach out to "all users" and take into account their :>> feedback and needs... :>> :>> They are largely work sessions, but independent of the co-location :>> discussion, the UC is focused on improving the ability for tangible output :>> to come from Ops mid-cycles, OpenStack Days and regional meetups - largely :>> in the form of Forum sessions and ultimately changes in the software. So :>> we, as a committee, see a lot of similarities in what you just said. I'm :>> not bold enough to predict exactly how co-location might change the :>> tone/topic of the Ops sessions, but I agree that we shouldn't expect a lot :>> of real-time feedback time with devs at the PTG/mid-summit event (what ever :>> we end up calling it). We want the devs to be focused on what's already :>> planned for the N+1 version or beyond. The conversations/sessions at the :>> Ops portion of the event would hopefully lead to Forum sessions on N+2 :>> features, functions, bug fixes, etc :>> :>> Overall, I still see co-location as a positive move. There will be some :>> tricky bits we need to figure out between to the "two sides" of the event :>> as we want to MINIMIZE any perceived us/them between dev and ops - not add :>> to it. But, the work session themselves, should still honor the spirit of :>> the PTG and Ops Mid-cycle as they are today. We just get the added benefit :>> of time together as a whole community - and hopefully solve a few :>> logistic/finance/sponsorship/venue issues that trouble one event or the :>> other today. :>> :>> Thanks! 
:>> VW :>> -- :>> Thierry Carrez (ttx) :>> :>> _______________________________________________ :>> OpenStack-operators mailing list :>> OpenStack-operators at lists.openstack.org :>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac :>> k-operators :>> :>> :>> _______________________________________________ :>> OpenStack-operators mailing list :>> OpenStack-operators at lists.openstack.org :>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators :>> :> :> :> :> -- :> Kind regards, :> :> Melvin Hillsman :> mrhillsman at gmail.com :> mobile: (832) 264-2646 :> :_______________________________________________ :OpenStack-operators mailing list :OpenStack-operators at lists.openstack.org :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- From mriedemos at gmail.com Fri Mar 23 15:35:15 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 23 Mar 2018 10:35:15 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: <6E229F29-BAFE-480A-A359-4BECEFE47B65@cern.ch> <93666d2a-c543-169c-fe07-499e5340622b@gmail.com> Message-ID: On 3/21/2018 6:34 AM, 李杰 wrote: > So what should we do then about rebuild the volume backed server?Until > the cinder could re-image a volume? I've added the spec to the 'stuck reviews' section of the nova meeting agenda so it can at least get some discussion there next week. https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting -- Thanks, Matt From zioproto at gmail.com Mon Mar 26 07:32:44 2018 From: zioproto at gmail.com (Saverio Proto) Date: Mon, 26 Mar 2018 09:32:44 +0200 Subject: [Openstack-operators] Ocata security groups don't work with LBaaS v2 ports In-Reply-To: References: Message-ID: Hello Ignazio, it would interesting to know how this works. For instances ports, those ports are created by openvswitch on the compute nodes, where the neutron-agent will take care of the security groups enforcement (via iptables or openvswitch rules). the LBaaS is a namespace that lives where the neutron-lbaasv2-agent is running. The question is if the neutron-lbaasv2-agent is capable for setting iptables rules. I would start to read the code there. Cheers, Saverio 2018-03-23 13:51 GMT+01:00 Ignazio Cassano : > Hi all, > following the ocata documentation, I am trying to apply security group to a > lbaas v2 port but > it seems not working because any filter is applyed. > The Port Security Enabled is True on lbaas port, so I expect applying > security group should work. > Is this a bug ? > Regards > Ignazio > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From ignaziocassano at gmail.com Mon Mar 26 07:37:46 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 26 Mar 2018 09:37:46 +0200 Subject: [Openstack-operators] Ocata security groups don't work with LBaaS v2 ports In-Reply-To: References: Message-ID: Hello Saverio, neutron.lbaas.v2-agent should apply iptables rules but it does not work. Also in redhat exixts the same issue reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1500118 Regards 2018-03-26 9:32 GMT+02:00 Saverio Proto : > Hello Ignazio, > > it would interesting to know how this works. 
For instances ports, > those ports are created by openvswitch on the compute nodes, where the > neutron-agent will take care of the security groups enforcement (via > iptables or openvswitch rules). > > the LBaaS is a namespace that lives where the neutron-lbaasv2-agent is > running. > > The question is if the neutron-lbaasv2-agent is capable for setting > iptables rules. I would start to read the code there. > > Cheers, > > Saverio > > > 2018-03-23 13:51 GMT+01:00 Ignazio Cassano : > > Hi all, > > following the ocata documentation, I am trying to apply security group > to a > > lbaas v2 port but > > it seems not working because any filter is applyed. > > The Port Security Enabled is True on lbaas port, so I expect applying > > security group should work. > > Is this a bug ? > > Regards > > Ignazio > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Mar 26 13:59:54 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 26 Mar 2018 15:59:54 +0200 Subject: [Openstack-operators] Vancouver Forum - Topic submission tool is now open! Message-ID: <128245d7-a59b-4cdb-1d19-5524d65af2a8@openstack.org> Hi everyone, The Forum in Vancouver is getting closer! As a reminder, the Forum is where we take advantage of having a large community presence at the Summit to discuss and get wide feedback on a variety of topics around the future of OpenStack and adjacent projects. Starting today, our submission tool is open for you to submit abstracts for the most popular sessions that came out of your brainstorming. All teams, work groups and SIGs should now submit their abstracts at: http://forumtopics.openstack.org/ before 11:59PM UTC on Sunday April 15th! We are looking for a good mix of project-specific, cross-project or strategic/whole-of-community discussions, and sessions that emphasis collaboration between our various types of contributors are most welcome! We assume that anything submitted to the system has achieved a good amount of discussion and consensus that it's a worthwhile topic. After submissions close, a team of representatives from the User Committee, the Technical Committee and Foundation staff will take the sessions proposed by the community and fill out the schedule. You can expect the draft schedule to be released around April 22nd. Further details about the Forum can be found at: https://wiki.openstack.org/wiki/Forum Regards, -- Thierry Carrez (ttx) From ignaziocassano at gmail.com Mon Mar 26 15:43:21 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 26 Mar 2018 15:43:21 +0000 Subject: [Openstack-operators] Diskimage-builder lvm Message-ID: Hi all, I read diskimage-builder documentatio but is not clear for me how I can supply lvm configuration to the command. Some yaml files are supplied but diskimage seems to expect json file. Please, could anyone post an example? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kgiusti at gmail.com Mon Mar 26 15:51:42 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Mon, 26 Mar 2018 11:51:42 -0400 Subject: [Openstack-operators] [all][oslo] Notice to users of the ZeroMQ transport in oslo.messaging Message-ID: Folks, It's been over a year since the last commit was made to the ZeroMQ driver in oslo.messaging. It is at the point where some of the related unit tests are beginning to fail due to bit rot. None of the current oslo.messaging contributors have a good enough understanding of the codebase to effectively fix it. Personally I'm not sure the driver will work in production at all. Given this it was decided in Dublin that the ZeroMQ driver no longer meets the official policy for in tree driver support [0] and will be deprecated in Rocky. However it would be insincere for the team to give the impression that the driver is maintained for the normal 2 cycle deprecation process. Therefore the driver code will be removed in 'S'. The ZeroMQ driver is the largest body of code of any driver in the oslo.messaging repo, weighing in at over 5k lines of code. For comparison, the rabbitmq kombu driver consists of only about 2K lines of code. If any individuals are willing to commit to ownership of this codebase and keep the driver compliant with policy (see [0]), please follow up with bnemec or myself (kgiusti) on #openstack-oslo. Thanks, [0] https://docs.openstack.org/oslo.messaging/latest/contributor/supported-messaging-drivers.html -- Ken Giusti (kgiusti at gmail.com) From mriedemos at gmail.com Mon Mar 26 20:40:11 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 26 Mar 2018 15:40:11 -0500 Subject: [Openstack-operators] Fwd: [openstack-dev] [nova][placement] Upgrade placement first! In-Reply-To: <2215df39-49fc-c756-eb11-f44c565803dc@fried.cc> References: <2215df39-49fc-c756-eb11-f44c565803dc@fried.cc> Message-ID: <856f0bfd-8e28-fa05-de0c-297a3e49a843@gmail.com> FYI -------- Forwarded Message -------- Subject: [openstack-dev] [nova][placement] Upgrade placement first! Date: Mon, 26 Mar 2018 15:02:23 -0500 From: Eric Fried Reply-To: OpenStack Development Mailing List (not for usage questions) Organization: IBM To: OpenStack Development Mailing List (not for usage questions) Since forever [0], nova has gently recommended [1] that the placement service be upgraded first. However, we've not made any serious effort to test scenarios where this isn't done. For example, we don't have grenade tests running placement at earlier levels. After a(nother) discussion [2] which touched on the impacts - real and imagined - of running new nova against old placement, we finally decided to turn the recommendation into a hard requirement [3]. This gives admins a crystal clear guideline, this lets us simplify our support statement, and also means we don't have to do 406 fallback code anymore. So we can do stuff like [4], and also avoid having to write (and subsequently remove) code like that in the future. Please direct any questions to #openstack-nova Your Faithful Scribe, efried [0] Like, since upgrading placement was a thing. 
[1] https://docs.openstack.org/nova/latest/user/upgrade.html#rolling-upgrade-process (#2, first bullet) [2] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-26.log.html#t2018-03-26T17:35:11 [3] https://review.openstack.org/556631 [4] https://review.openstack.org/556633 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From andrew at etc.gen.nz Mon Mar 26 23:25:16 2018 From: andrew at etc.gen.nz (Andrew Ruthven) Date: Tue, 27 Mar 2018 12:25:16 +1300 Subject: [Openstack-operators] How are you handling billing/chargeback? In-Reply-To: <20180314161143.2w6skkpmyhvixmyj@redhat.com> References: <20180312192113.znz4eavfze5zg7yn@redhat.com> <20180314161143.2w6skkpmyhvixmyj@redhat.com> Message-ID: <1522106716.27791.37.camel@etc.gen.nz> On Wed, 2018-03-14 at 12:11 -0400, Lars Kellogg-Stedman wrote: > On Mon, Mar 12, 2018 at 03:21:13PM -0400, Lars Kellogg-Stedman wrote: > > I'm curious what folks out there are using for chargeback/billing > > in > > your OpenStack environment. > > So far it looks like everyone is using a homegrown solution.  Is > anyone using an existing product/project? Hey, We (Catalyst Cloud) wrote Distil which takes information out of Ceilometer, and creates appropriate draft invoices in Odoo, the rating information comes from products in Odoo. The accounting system is module based, so should be easy enough to integrate with other systems. We've added a number of other pollsters to Ceilometer to collect various other items we want to bill, and we also have an sflow traffic metering system which allows us to bill for different classes of network traffic. 
Source code for Distil is here: https://github.com/openstack/distil/ Traffic billing: https://github.com/catalyst/openstack-sflow-traffic-bi lling We've written an API for our customers to be able to retrieve their invoices etc, example client is here: https://github.com/catalyst-cloud/catalystcloud-billing Cheers, Andrew -- Andrew Ruthven, Wellington, New Zealand andrew at etc.gen.nz              | linux.conf.au 2019, Christchurch, NZ New Zealand's only real Cloud: |   https://lca2019.linux.org.au/ https://catalystcloud.nz | From scheuran at linux.vnet.ibm.com Tue Mar 27 07:52:52 2018 From: scheuran at linux.vnet.ibm.com (Andreas Scheuring) Date: Tue, 27 Mar 2018 09:52:52 +0200 Subject: [Openstack-operators] Diskimage-builder lvm In-Reply-To: References: Message-ID: <07B961D1-EC8A-4967-A515-00A933D273A6@linux.vnet.ibm.com> Hi, I recently did it like this: For Ubuntu: Cherry picked: https://review.openstack.org/#/c/504588/ export DIB_BLOCK_DEVICE_CONFIG=' - local_loop: name: image0 size: 3GB - partitioning: base: image0 label: mbr partitions: - name: root flags: [ boot, primary ] size: 100% - lvm: name: lvm pvs: - name: pv options: ["--force"] base: root vgs: - name: vg base: ["pv"] options: ["--force"] lvs: - name: lv_root base: vg size: 1800M - name: lv_tmp base: vg size: 100M - name: lv_var base: vg size: 500M - name: lv_log base: vg size: 100M - name: lv_audit base: vg size: 100M - name: lv_home base: vg size: 200M - mkfs: name: root_fs base: lv_root label: cloudimage-root type: ext4 mount: mount_point: / fstab: options: "defaults" fsck-passno: 1 ' # See https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1573982 # See https://bugs.launchpad.net/diskimage-builder/+bug/1715686 # need to specify lvm2 module to include it into the ramdisk disk-image-create -p lvm2 ubuntu-minimal # -> During boot the kernel was still not able to find the root disk (identified along uuid) # Executing the following command in the initramfs emergency shell made the volume appear for me $ vgchange -ay $ blkid # /dev/vda1: UUID="O11hwE-0Efq-SwO7-Bko8-hVe6-jyOT-7UAiza" TYPE="LVM2_member" PARTUUID="7885c310-01" # /dev/mapper/vg-lv_root: LABEL="cloudimage-root" UUID="bd6d2d2c-3ca6-4032-99fc-bf20e246e22a" TYPE=“ext4” exit Maybe things are going more smoothly with other distros, or maybe things have been fixed in the meanwhile… Or I did something wrong… Good luck! --- Andreas Scheuring (andreas_s) On 26. Mar 2018, at 17:43, Ignazio Cassano wrote: Hi all, I read diskimage-builder documentatio but is not clear for me how I can supply lvm configuration to the command. Some yaml files are supplied but diskimage seems to expect json file. Please, could anyone post an example? Regards Ignazio _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From scheuran at linux.vnet.ibm.com Tue Mar 27 10:36:00 2018 From: scheuran at linux.vnet.ibm.com (Andreas Scheuring) Date: Tue, 27 Mar 2018 12:36:00 +0200 Subject: [Openstack-operators] Diskimage-builder lvm In-Reply-To: <07B961D1-EC8A-4967-A515-00A933D273A6@linux.vnet.ibm.com> References: <07B961D1-EC8A-4967-A515-00A933D273A6@linux.vnet.ibm.com> Message-ID: Need to correct my statement from below: It tries to get the root disk by label, that’s not working on ubuntu in combination with lvm. It might work if you put the root volume on a non lvm disk. 
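For illustration, an untested sketch of that variant (sizes, names and the /home split below are just placeholders): keep / on a plain partition and use LVM only for the data filesystems, e.g.:

export DIB_BLOCK_DEVICE_CONFIG='
- local_loop:
    name: image0
    size: 3GB

- partitioning:
    base: image0
    label: mbr
    partitions:
      - name: root
        flags: [ boot, primary ]
        size: 1GB
      - name: data
        size: 100%

- lvm:
    name: lvm
    pvs:
      - name: pv
        options: ["--force"]
        base: data
    vgs:
      - name: vg
        base: ["pv"]
        options: ["--force"]
    lvs:
      - name: lv_home
        base: vg
        size: 500M

- mkfs:
    name: root_fs
    base: root
    label: cloudimage-root
    type: ext4
    mount:
      mount_point: /
      fstab:
        options: "defaults"
        fsck-passno: 1

- mkfs:
    name: home_fs
    base: lv_home
    label: cloudimage-home
    type: ext4
    mount:
      mount_point: /home
      fstab:
        options: "defaults"
        fsck-passno: 2
'

# lvm2 is still useful in the image so the data VG gets activated
disk-image-create -p lvm2 ubuntu-minimal

That way the root filesystem can still be found by label at boot, and the LVM bits only carry data.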
--- Andreas Scheuring (andreas_s) On 27. Mar 2018, at 09:52, Andreas Scheuring wrote: Hi, I recently did it like this: For Ubuntu: Cherry picked: https://review.openstack.org/#/c/504588/ export DIB_BLOCK_DEVICE_CONFIG=' - local_loop: name: image0 size: 3GB - partitioning: base: image0 label: mbr partitions: - name: root flags: [ boot, primary ] size: 100% - lvm: name: lvm pvs: - name: pv options: ["--force"] base: root vgs: - name: vg base: ["pv"] options: ["--force"] lvs: - name: lv_root base: vg size: 1800M - name: lv_tmp base: vg size: 100M - name: lv_var base: vg size: 500M - name: lv_log base: vg size: 100M - name: lv_audit base: vg size: 100M - name: lv_home base: vg size: 200M - mkfs: name: root_fs base: lv_root label: cloudimage-root type: ext4 mount: mount_point: / fstab: options: "defaults" fsck-passno: 1 ' # See https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1573982 # See https://bugs.launchpad.net/diskimage-builder/+bug/1715686 # need to specify lvm2 module to include it into the ramdisk disk-image-create -p lvm2 ubuntu-minimal # -> During boot the kernel was still not able to find the root disk (identified along uuid) # Executing the following command in the initramfs emergency shell made the volume appear for me $ vgchange -ay $ blkid # /dev/vda1: UUID="O11hwE-0Efq-SwO7-Bko8-hVe6-jyOT-7UAiza" TYPE="LVM2_member" PARTUUID="7885c310-01" # /dev/mapper/vg-lv_root: LABEL="cloudimage-root" UUID="bd6d2d2c-3ca6-4032-99fc-bf20e246e22a" TYPE=“ext4” exit Maybe things are going more smoothly with other distros, or maybe things have been fixed in the meanwhile… Or I did something wrong… Good luck! --- Andreas Scheuring (andreas_s) On 26. Mar 2018, at 17:43, Ignazio Cassano > wrote: Hi all, I read diskimage-builder documentatio but is not clear for me how I can supply lvm configuration to the command. Some yaml files are supplied but diskimage seems to expect json file. Please, could anyone post an example? Regards Ignazio _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Mar 27 14:40:44 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 27 Mar 2018 09:40:44 -0500 Subject: [Openstack-operators] [nova] Hard fail if you try to rename an AZ with instances in it? Message-ID: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> Sylvain has had a spec up for awhile [1] about solving an old issue where admins can rename an AZ (via host aggregate metadata changes) while it has instances in it, which likely results in at least user confusion, but probably other issues later if you try to move those instances, e.g. the request spec probably points at the original AZ name and if that's gone (renamed) the scheduler probably pukes (would need to test this). Anyway, I'm wondering if anyone relies on this behavior, or if they consider it a bug that the API allows admins to do this? I tend to consider this a bug in the API, and should just be fixed without a microversion. In other words, you shouldn't have to opt out of broken behavior using microversions. 
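For context, the "rename" in question is nothing more exotic than an aggregate metadata update; roughly (aggregate and zone names below are placeholders):

$ openstack aggregate set --zone new-az-name my-az-aggregate

or with the older client:

$ nova aggregate-set-metadata my-az-aggregate availability_zone=new-az-name

Today the API accepts that even while instances are still running in the old zone name.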
[1] https://review.openstack.org/#/c/446446/ -- Thanks, Matt From jaypipes at gmail.com Tue Mar 27 15:37:34 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 27 Mar 2018 11:37:34 -0400 Subject: [Openstack-operators] [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it? In-Reply-To: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> References: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> Message-ID: <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com> On 03/27/2018 10:40 AM, Matt Riedemann wrote: > Sylvain has had a spec up for awhile [1] about solving an old issue > where admins can rename an AZ (via host aggregate metadata changes) > while it has instances in it, which likely results in at least user > confusion, but probably other issues later if you try to move those > instances, e.g. the request spec probably points at the original AZ name > and if that's gone (renamed) the scheduler probably pukes (would need to > test this). > > Anyway, I'm wondering if anyone relies on this behavior, or if they > consider it a bug that the API allows admins to do this? I tend to > consider this a bug in the API, and should just be fixed without a > microversion. In other words, you shouldn't have to opt out of broken > behavior using microversions. > > [1] https://review.openstack.org/#/c/446446/ Yet another flaw in the "design" of availability zones being metadata key/values on nova host aggregates. If we want to actually fix the issue once and for all, we need to make availability zones a real thing that has a permanent identifier (UUID) and store that permanent identifier in the instance (not the instance metadata). Or we can continue to paper over major architectural weaknesses like this. -jay From ignaziocassano at gmail.com Tue Mar 27 15:46:58 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 27 Mar 2018 15:46:58 +0000 Subject: [Openstack-operators] Diskimage-builder lvm In-Reply-To: <07B961D1-EC8A-4967-A515-00A933D273A6@linux.vnet.ibm.com> References: <07B961D1-EC8A-4967-A515-00A933D273A6@linux.vnet.ibm.com> Message-ID: Many thanks for your help Ignazio Il Mar 27 Mar 2018 09:52 Andreas Scheuring ha scritto: > Hi, I recently did it like this: > > > For Ubuntu: Cherry picked: https://review.openstack.org/#/c/504588/ > > > export DIB_BLOCK_DEVICE_CONFIG=' > - local_loop: > name: image0 > size: 3GB > > - partitioning: > base: image0 > label: mbr > partitions: > - name: root > flags: [ boot, primary ] > size: 100% > - lvm: > name: lvm > pvs: > - name: pv > options: ["--force"] > base: root > > vgs: > - name: vg > base: ["pv"] > options: ["--force"] > > lvs: > - name: lv_root > base: vg > size: 1800M > > - name: lv_tmp > base: vg > size: 100M > > - name: lv_var > base: vg > size: 500M > > - name: lv_log > base: vg > size: 100M > > - name: lv_audit > base: vg > size: 100M > > - name: lv_home > base: vg > size: 200M > - mkfs: > name: root_fs > base: lv_root > label: cloudimage-root > type: ext4 > mount: > mount_point: / > fstab: > options: "defaults" > fsck-passno: 1 > > ' > > # See https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1573982 > # See https://bugs.launchpad.net/diskimage-builder/+bug/1715686 > # need to specify lvm2 module to include it into the ramdisk > disk-image-create -p lvm2 ubuntu-minimal > > > # -> During boot the kernel was still not able to find the root disk > (identified along uuid) > # Executing the following command in the initramfs emergency shell made > the volume appear for me > $ vgchange -ay > $ blkid > # /dev/vda1: 
UUID="O11hwE-0Efq-SwO7-Bko8-hVe6-jyOT-7UAiza" > TYPE="LVM2_member" PARTUUID="7885c310-01" > # /dev/mapper/vg-lv_root: LABEL="cloudimage-root" > UUID="bd6d2d2c-3ca6-4032-99fc-bf20e246e22a" TYPE=“ext4” > exit > > > > Maybe things are going more smoothly with other distros, or maybe things > have been fixed in the meanwhile… Or I did something wrong… > > Good luck! > > --- > Andreas Scheuring (andreas_s) > > > > On 26. Mar 2018, at 17:43, Ignazio Cassano > wrote: > > Hi all, > I read diskimage-builder documentatio but is not clear for me how I can > supply lvm configuration to the command. > Some yaml files are supplied but diskimage seems to expect json file. > Please, could anyone post an example? > Regards > Ignazio > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Mar 27 16:42:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 27 Mar 2018 11:42:04 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it? In-Reply-To: <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com> References: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com> Message-ID: <3ce67128-07ee-559c-f54d-a0e62b2e38ee@gmail.com> On 3/27/2018 10:37 AM, Jay Pipes wrote: > If we want to actually fix the issue once and for all, we need to make > availability zones a real thing that has a permanent identifier (UUID) > and store that permanent identifier in the instance (not the instance > metadata). Aggregates have a UUID now, exposed in microversion 2.41 (you added it). Is that what you mean by AZs having a UUID, since AZs are modeled as host aggregates? One of the alternatives in the spec is not relying on name as a unique identifier and just make sure everything is held together via the aggregate UUID, which is now possible. -- Thanks, Matt From mihalis68 at gmail.com Tue Mar 27 16:44:00 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 27 Mar 2018 12:44:00 -0400 Subject: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback In-Reply-To: <20180323150713.GK21100@csail.mit.edu> References: <5AB12AB1.2020906@openstack.org> <20180320180916.k52ucl2fqfqugbwb@yuggoth.org> <0BBEC36C-A289-400D-A60B-0D0082B45869@cern.ch> <1521575953-sup-8576@lrrr.local> <20180323150713.GK21100@csail.mit.edu> Message-ID: Hello Everyone, This proposal looks to have very good backing in the community. There was an informal IRC meeting today with the meetups team, some of the foundation folk and others and everyone seems to like a proposal put forward as a sample definition of the combined event - I certainly do, it looks like we could have a really great combined event in September. I volunteered to share that a bit later today with some other info. In the meanwhile if you have a viewpoint please do chime in here as we'd like to declare this agreed by the community ASAP, so in particular IF YOU OBJECT please speak up by end of week, this week. Thanks! Chris On Fri, Mar 23, 2018 at 11:07 AM, Jonathan Proulx wrote: > On Thu, Mar 22, 2018 at 09:02:48PM -0700, Yih Leong, Sun. wrote: > :I support the ideas to try colocating the next Ops Midcycle and PTG. > :Although scheduling could be a potential challenge but it worth give it a > :try. 
> : > :Also having an joint social event in the evening can also help Dev/Ops to > :meet and offline discussion. :) > > Agreeing stongly with Matt and Melvin's comments about Forum -vs- > PTG/OpsMidcycle > > PTG/OpsMidcycle (as I see them) are about focusing inside teams to get > work done ("how" is a a good one word I think). The advantage of > colocation is for cross team questions like "we're thinking of doing > this thing this way, does this have any impacts on your work my might > not have considered", can get a quick respose in the hall, at lunch, > or over beers as Yih Leong suggests. > > Forum has become about coming to gather across groups for more > conceptual "what" discussions. > > So I also thing they are very distinct and I do see potential benefits > to colocation. > > We do need to watch out for downsides. The concerns around colocation > seemed mostly about larger events costing more and being generally > harder to organize. If we try we will find out if there is merit to > this concern, but (IMO) it is important to keep both of the > events as cheap and simple as possible. > > -Jon > > : > :On Thursday, March 22, 2018, Melvin Hillsman > wrote: > : > :> Thierry and Matt both hit the nail on the head in terms of the very > :> base/purpose/point of the Forum, PTG, and Ops Midcycles and here is my > +2 > :> since I have spoke with both and others outside of this thread and agree > :> with them here as I have in individual discussions. > :> > :> If nothing else I agree with Jimmy's original statement of at least > giving > :> this a try. > :> > :> On Thu, Mar 22, 2018 at 4:54 PM, Matt Van Winkle < > mvanwink at rackspace.com> > :> wrote: > :> > :>> Hey folks, > :>> Great discussion! There are number of points to comment on going back > :>> through the last few emails. I'll try to do so in line with Theirry's > :>> latest below. From a User Committee perspective (and as a member of > the > :>> Ops Meetup planning team), I am a convert to the idea of co-location, > but > :>> have come to see a lot of value in it. I'll point some of that out as > I > :>> respond to specific comments, but first a couple of overarching points. > :>> > :>> In the current model, the Forum sessions are very much about WHAT the > :>> software should do. Keeping the discussions focused on behavior, > feature > :>> and function has made it much easier for an operator to participate > :>> effectively in the conversation versus the older, design sessions, that > :>> focused largely on blueprints, coding approaches, etc. These are HOW > the > :>> developers should make things work and, now, are a large part of the > focus > :>> of the PTG. I realize it's not that cut and dry, but current model has > :>> allowed for this division of "what" and "how" in many areas, and I know > :>> several who have found it valuable. > :>> > :>> The other contextual thing to remember is the PTG was the effective > :>> combining of all the various team mid-cycle meetups that were > occurring. > :>> The current Ops mid-cycle was born in that same period. While it's > purpose > :>> was a little different, it's spirit is the same - gather a team (in > this > :>> case operators) together outside the hustle and bustle of a summit to > :>> discuss common issues, topics, etc. I'll also point out, that they > have > :>> been good vehicles in the Ops community to get new folks integrated. > For > :>> the purpose of this discussion, though, one could argue this is just > :>> bringing the last mid-cycle event in to the fold. 
> :>> > :>> On 3/21/18, 4:40 AM, "Thierry Carrez" wrote: > :>> > :>> Doug Hellmann wrote: > :>> > Excerpts from Tim Bell's message of 2018-03-20 19:48:31 +0000: > :>> >> > :>> >> Would we still need the same style of summit forum if we have > the > :>> >> OpenStack Community Working Gathering? One thing I have found > with > :>> >> the forum running all week throughout the summit is that it > tends > :>> >> to draw audience away from other talks so maybe we could reduce > the > :>> >> forum to only a subset of the summit time? > :>> > > :>> > I support the idea of having all contributors attend the > contributor > :>> > event (and rebranding it to reflect that change in emphasis), but > :>> > it's not quite clear how the result would be different from the > :>> > Forum. Is it just the scheduling? (Having input earlier in the > cycle > :>> > would be convenient, for sure.) > :>> > > :>> > Thierry's comment about "work sessions" earlier in the thread > seems > :>> > key. > :>> > :>> Right, I think the key difference between the PTG and Forum is that > :>> one > :>> is a work event for engaged contributors that are part of a group > :>> spending time on making OpenStack better, while the other is a > venue > :>> for > :>> engaging with everyone in our community. > :>> > :>> The PTG format is really organized around work groups (whatever > their > :>> focus is), enabling them to set their short-term goals, assign work > :>> items and bootstrap the work. The fact that all those work groups > are > :>> co-located make it easy to participate in multiple groups, or > invite > :>> other people to join the discussion where it touches their area of > :>> expertise, but it's still mostly a venue for our > :>> geographically-distributed workgroups to get together in person and > :>> get > :>> work done. That's why the agenda is so flexible at the PTG, to > :>> maximize > :>> the productivity of attendees, even if that can confuse people who > :>> can't > :>> relate to any specific work group. > :>> > :>> Exactly. I know I way over simplified it as working on the "how", but > :>> it's very important to honor this aspect of the current PTG. We need > this > :>> time for the devs and teams to take output from the previous forum > sessions > :>> (or earlier input) and turn it into plans for the N+1 version. While > some > :>> folks could drift between sessions, co-locating the Ops mid-cycle is > just > :>> that - leveraging venue, sponsors, and Foundation staff support across > one, > :>> larger event - it should NOT disrupt the current spirit of the sessions > :>> Theirry describes above > :>> > :>> The Forum format, on the other hand, is organized around specific > :>> discussion topics where you want to maximize feedback and input. > Forum > :>> sessions are not attached to a specific workgroup or team, they are > :>> defined by their topic. They are well-advertised on the event > :>> schedule, > :>> and happen at a precise time. It takes advantage of the thousands > of > :>> attendees being present to get the most relevant feedback > possible. It > :>> allows to engage beyond the work groups, to people who can't spend > :>> much > :>> time getting more engaged and contribute back. > :>> > :>> Agreed. Again, I over simplified as the "what", but these sessions are > :>> so valuable as the bring dev and ops in a room and focus on what the > :>> software needs to do or the impact (positive or negative) that planned > :>> behaviors might have on Operators and users. 
To Tim's earlier > question, no > :>> I think this change doesn't reduce the need for Forum sessions. If > :>> anything, I think it increases the need for us to get REALLY good at > :>> channeling output from the Ops mid-cycle in to session topics at the > next > :>> Summit. > :>> > :>> The Ops meetup under its current format is mostly work sessions, > and > :>> those would fit pretty well in the PTG event format. Ideally I > would > :>> limit the feedback-gathering sessions there and use the Forum (and > :>> regional events like OpenStack days) to collect it. That sounds > like a > :>> better way to reach out to "all users" and take into account their > :>> feedback and needs... > :>> > :>> They are largely work sessions, but independent of the co-location > :>> discussion, the UC is focused on improving the ability for tangible > output > :>> to come from Ops mid-cycles, OpenStack Days and regional meetups - > largely > :>> in the form of Forum sessions and ultimately changes in the software. > So > :>> we, as a committee, see a lot of similarities in what you just said. > I'm > :>> not bold enough to predict exactly how co-location might change the > :>> tone/topic of the Ops sessions, but I agree that we shouldn't expect a > lot > :>> of real-time feedback time with devs at the PTG/mid-summit event (what > ever > :>> we end up calling it). We want the devs to be focused on what's > already > :>> planned for the N+1 version or beyond. The conversations/sessions at > the > :>> Ops portion of the event would hopefully lead to Forum sessions on N+2 > :>> features, functions, bug fixes, etc > :>> > :>> Overall, I still see co-location as a positive move. There will be > some > :>> tricky bits we need to figure out between to the "two sides" of the > event > :>> as we want to MINIMIZE any perceived us/them between dev and ops - not > add > :>> to it. But, the work session themselves, should still honor the > spirit of > :>> the PTG and Ops Mid-cycle as they are today. We just get the added > benefit > :>> of time together as a whole community - and hopefully solve a few > :>> logistic/finance/sponsorship/venue issues that trouble one event or > the > :>> other today. > :>> > :>> Thanks! > :>> VW > :>> -- > :>> Thierry Carrez (ttx) > :>> > :>> _______________________________________________ > :>> OpenStack-operators mailing list > :>> OpenStack-operators at lists.openstack.org > :>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac > :>> k-operators > :>> > :>> > :>> _______________________________________________ > :>> OpenStack-operators mailing list > :>> OpenStack-operators at lists.openstack.org > :>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-operators > :>> > :> > :> > :> > :> -- > :> Kind regards, > :> > :> Melvin Hillsman > :> mrhillsman at gmail.com > :> mobile: (832) 264-2646 > :> > > :_______________________________________________ > :OpenStack-operators mailing list > :OpenStack-operators at lists.openstack.org > :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > -- > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mihalis68 at gmail.com Tue Mar 27 17:49:06 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 27 Mar 2018 13:49:06 -0400 Subject: [Openstack-operators] IMPORTANT - future of ops meetups! Message-ID: Hello Everyone You've probably see the thread about possibly combining ops meetups with PTG to make a new broader event. Here is a rough draft of what that would actually look like: *"Monday and Tuesday are cross-project days where ops are welcome to attend SIG and other discussions, and if not interested can be travel days or whatever for them. Then Wed-Thurs are the two tracks/events where the ops folks have what has traditionally been done for ops meetups. Then Friday is a travel day or ops can stick around to follow up with dev-side thingsthat they weren't able to get to over the week or wanted to follow up on."* Thanks to Sean McGinnis for proposing this to get the ball rolling. This would mean there's a "normal" ops meetup for two days on days 3 and 4 of this combined event, with the option of attending earlier (days 1 and 2) if you want to contribute to dev/ops/openstack community sessions (e.g. SIGs), and possibly also staying a 5th day. The event is currently pencilled in for September in a central part of the USA. It will be organized by the Foundation logistically but the various sub-groups own their technical agendas. We unfortunately forgot to record the chat as a formal meeting, but you can see the raw IRC chat here http://eavesdrop.openstack.org/irclogs/%23openstack-operators/%23openstack- operators.2018-03-27.log.html#t2018-03-27T14:03:40 With the current level of support for this idea, it looks likely to happen, but, particularly if you object, please speak up ASAP. Note that this would be instead of the tentative idea we had of an event in NYC in August. If this is welcomed by the operators community, I'll certainly try to swap the sponsorship from my employer over to this as I feel it will be even more valuable and initial feedback from other potential sponsors is favorable. Please make your voice heard on this issue! Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Tue Mar 27 22:36:09 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Tue, 27 Mar 2018 23:36:09 +0100 Subject: [Openstack-operators] [scientific] IRC Meeting, Wednesday 1100UTC: Cyborg and Forum Message-ID: Hi all - We have a Scientific SIG meeting on Wednesday at 1100 UTC. This week’s agenda is at: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_March_28th_2018 We’ll be hearing from Zhipeng Huang about the Cyborg project for managing hardware acceleration resources, and discussing forum topics to propose for the Vancouver summit. Everyone is welcome. Cheers, Stig -------------- next part -------------- An HTML attachment was scrubbed... URL: From pa at pauloangelo.com Wed Mar 28 00:57:00 2018 From: pa at pauloangelo.com (Paulo Angelo) Date: Tue, 27 Mar 2018 21:57:00 -0300 Subject: [Openstack-operators] Problems with neutron-gateway Message-ID: Hi all, At first, thank you for the OpenStack juju charms. They are awesome! I'm testing them in a lab and I found a problem using the neutron gateway charm with OpenStack queens. It is installing on the system just packages related to the Open vSwitch stuff. The packages neutron-dhcp-agent, neutron-metering-agent, neutron-lbaas-agent have not been installed. Is there an option missing? Thank you in advance. 
Best regards, Paulo Angelo -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Mar 28 07:59:45 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 28 Mar 2018 09:59:45 +0200 Subject: [Openstack-operators] IMPORTANT - future of ops meetups! In-Reply-To: References: Message-ID: Chris Morgan wrote: > You've probably see the thread about possibly combining ops meetups > with PTG to make a new broader event. Here is a rough draft of what that > would actually look like: > > *"Monday and Tuesday are cross-project days where ops are welcome to > attend SIG and other discussions, and if not interested can be travel > days or whatever for them. Then Wed-Thurs are the two tracks/events > where the ops folks have what has traditionally been done for ops > meetups. Then Friday is a travel day or ops can stick around to follow > up with dev-side things > that they weren't able to get to over the week or wanted to follow up on."* Yes traditionally we have tried to roughly organize the time, to focus the first two days on horizontal / cross-project / guild / SIG work, and the next three days more on vertical work. But that is more of a rough guideline than a hard rule: the constraints of available space make that division a bit fuzzy. For example in Dublin, Manila has been asking to avoid being scheduled at the same time as Cinder, and got scheduled to Tuesday and Friday. So while it's very likely that the traditional Ops meetup sessions would happen on Wed-Thu, don't plan travel just yet ! Once we know how many rooms we have, which groups want space and for how many days, we'll work on a more precise allocation. > Thanks to Sean McGinnis for proposing this to get the ball rolling. This > would mean there's a "normal" ops meetup for two days on days 3 and 4 of > this combined event, with the option of attending earlier (days 1 and 2) > if you want to contribute to dev/ops/openstack community sessions (e.g. > SIGs), and possibly also staying a 5th day. It's also worth noting that it is possible for groups to book additional meeting spots outside the pre-allocated time and space. If some work groups realize they would like to meet on Monday to get work done, that's definitely possible. It all appears on the dynamic event schedule to make it easy to see what's going on. -- Thierry Carrez (ttx) From tobias at citynetwork.se Wed Mar 28 11:39:26 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Wed, 28 Mar 2018 13:39:26 +0200 Subject: [Openstack-operators] [publiccloud-wg] Reminder bi-weekly meeting Public Cloud WG Message-ID: Hi all, Time again for a meeting for the Public Cloud WG - at our new time and channel - tomorrow at 1400 UTC in #openstack-publiccloud Agenda and etherpad at: https://etherpad.openstack.org/p/publiccloud-wg Cheers, Tobias Rydberg -- Tobias Rydberg Senior Developer Mobile: +46 733 312780 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From mriedemos at gmail.com Wed Mar 28 19:32:30 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 28 Mar 2018 14:32:30 -0500 Subject: [Openstack-operators] IMPORTANT - future of ops meetups! 
In-Reply-To: References: Message-ID: On 3/27/2018 12:49 PM, Chris Morgan wrote: > With the current level of support for this idea, it looks likely to > happen, but, particularly if you object, please speak up ASAP. Note that > this would be instead of the tentative idea we had of an event in NYC in > August. If this is welcomed by the operators community, I'll certainly > try to swap the sponsorship from my employer over to this as I feel it > will be even more valuable and initial feedback from other potential > sponsors is favorable. > > Please make your voice heard on this issue! Not to be too much of a downer here, but with the future of the PTG survey I just took, does the ops community want to wait until that's all sorted out? i.e. what happens if the future of the PTG means there is no PTG, and it's all just munged together at the Forum, which is no longer at the beginning of the cycle for vertical teams to plan their release (like the old design summit) and is prohibitively expensive for lowly devs and ops to attend? Would the ops community just want to continue doing what they are doing? -- Thanks, Matt From jimmy at openstack.org Wed Mar 28 20:10:20 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 28 Mar 2018 15:10:20 -0500 Subject: [Openstack-operators] IMPORTANT - future of ops meetups! In-Reply-To: References: Message-ID: <5ABBF6AC.5020002@openstack.org> Matt, Good question. The PTG is definitely occurring in the Fall. We would evaluate the value of it as a community based on feedback from the PTG/Ops meetup, Board feedback, survey results, budget, etc... If it was decided not to hold the PTG in 2019, we would of course need to reevaluate at that time. Let me know if I can answer any further questions. Thanks, Jimmy > Matt Riedemann > March 28, 2018 at 2:32 PM > > > Not to be too much of a downer here, but with the future of the PTG > survey I just took, does the ops community want to wait until that's > all sorted out? i.e. what happens if the future of the PTG means there > is no PTG, and it's all just munged together at the Forum, which is no > longer at the beginning of the cycle for vertical teams to plan their > release (like the old design summit) and is prohibitively expensive > for lowly devs and ops to attend? Would the ops community just want to > continue doing what they are doing? > > Chris Morgan > March 27, 2018 at 12:49 PM > Hello Everyone > You've probably see the thread about possibly combining ops meetups > with PTG to make a new broader event. Here is a rough draft of what > that would actually look like: > > *"Monday and Tuesday are cross-project days where ops are welcome to > attend SIG and other discussions, and if not interested can be travel > days or whatever for them. Then Wed-Thurs are the two tracks/events > where the ops folks have what has traditionally been done for ops > meetups. Then Friday is a travel day or ops can stick around to follow > up with dev-side things > that they weren't able to get to over the week or wanted to follow up > on."* > > Thanks to Sean McGinnis for proposing this to get the ball rolling. > This would mean there's a "normal" ops meetup for two days on days 3 > and 4 of this combined event, with the option of attending earlier > (days 1 and 2) if you want to contribute to dev/ops/openstack > community sessions (e.g. SIGs), and possibly also staying a 5th day. > > The event is currently pencilled in for September in a central part of > the USA. 
It will be organized by the Foundation logistically but the > various sub-groups own their technical agendas. > > We unfortunately forgot to record the chat as a formal meeting, but > you can see the raw IRC chat here > http://eavesdrop.openstack.org/irclogs/%23openstack-operators/%23openstack-operators.2018-03-27.log.html#t2018-03-27T14:03:40 > > > With the current level of support for this idea, it looks likely to > happen, but, particularly if you object, please speak up ASAP. Note > that this would be instead of the tentative idea we had of an event in > NYC in August. If this is welcomed by the operators community, I'll > certainly try to swap the sponsorship from my employer over to this as > I feel it will be even more valuable and initial feedback from other > potential sponsors is favorable. > > Please make your voice heard on this issue! > > Chris > > -- > Chris Morgan > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Wed Mar 28 20:34:50 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 28 Mar 2018 16:34:50 -0400 Subject: [Openstack-operators] IMPORTANT - future of ops meetups! In-Reply-To: <5ABBF6AC.5020002@openstack.org> References: <5ABBF6AC.5020002@openstack.org> Message-ID: the next PTG is confirmed, and is in a time and region that meets the community expectations for the next ops meetups, so i think giving it a go is fine and appropriate. If no more PTGs happen, or if they become less appropriate somehow, kick-starting the independent events again is quite easy since these are some of the most no-frills events I've been to - a venue, lunch, wifi, some content etherpads, some moderators, you can have an Ops Meetup! Chris On Wed, Mar 28, 2018 at 4:10 PM, Jimmy McArthur wrote: > Matt, > > Good question. The PTG is definitely occurring in the Fall. We would > evaluate the value of it as a community based on feedback from the PTG/Ops > meetup, Board feedback, survey results, budget, etc... If it was decided > not to hold the PTG in 2019, we would of course need to reevaluate at that > time. > > Let me know if I can answer any further questions. > > Thanks, > Jimmy > > Matt Riedemann > March 28, 2018 at 2:32 PM > > > Not to be too much of a downer here, but with the future of the PTG survey > I just took, does the ops community want to wait until that's all sorted > out? i.e. what happens if the future of the PTG means there is no PTG, and > it's all just munged together at the Forum, which is no longer at the > beginning of the cycle for vertical teams to plan their release (like the > old design summit) and is prohibitively expensive for lowly devs and ops to > attend? Would the ops community just want to continue doing what they are > doing? > > Chris Morgan > March 27, 2018 at 12:49 PM > Hello Everyone > You've probably see the thread about possibly combining ops meetups with > PTG to make a new broader event. Here is a rough draft of what that would > actually look like: > > > *"Monday and Tuesday are cross-project days where ops are welcome to > attend SIG and other discussions, and if not interested can be travel days > or whatever for them. Then Wed-Thurs are the two tracks/events where the > ops folks have what has traditionally been done for ops meetups. 
Then > Friday is a travel day or ops can stick around to follow up with dev-side > thingsthat they weren't able to get to over the week or wanted to follow up > on."* > > Thanks to Sean McGinnis for proposing this to get the ball rolling. This > would mean there's a "normal" ops meetup for two days on days 3 and 4 of > this combined event, with the option of attending earlier (days 1 and 2) if > you want to contribute to dev/ops/openstack community sessions (e.g. SIGs), > and possibly also staying a 5th day. > > The event is currently pencilled in for September in a central part of the > USA. It will be organized by the Foundation logistically but the various > sub-groups own their technical agendas. > > We unfortunately forgot to record the chat as a formal meeting, but you > can see the raw IRC chat here > http://eavesdrop.openstack.org/irclogs/%23openstack-operator > s/%23openstack-operators.2018-03-27.log.html#t2018-03-27T14:03:40 > > With the current level of support for this idea, it looks likely to > happen, but, particularly if you object, please speak up ASAP. Note that > this would be instead of the tentative idea we had of an event in NYC in > August. If this is welcomed by the operators community, I'll certainly try > to swap the sponsorship from my employer over to this as I feel it will be > even more valuable and initial feedback from other potential sponsors is > favorable. > > Please make your voice heard on this issue! > > Chris > > -- > Chris Morgan > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Mar 28 21:59:48 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 28 Mar 2018 16:59:48 -0500 Subject: [Openstack-operators] IMPORTANT - future of ops meetups! In-Reply-To: References: <5ABBF6AC.5020002@openstack.org> Message-ID: <16064ae9-88e7-2b01-4387-7f965a61c9b1@gmail.com> On 3/28/2018 3:34 PM, Chris Morgan wrote: > If no more PTGs happen, or if they become less appropriate somehow, > kick-starting the independent events again is quite easy since these are > some of the most no-frills events I've been to - a venue, lunch, wifi, > some content etherpads, some moderators, you can have an Ops Meetup! Yeah, exact same with the old mid-cycle meetup formats, which if we had to revive those because of no PTG, I guess would have to not be mid-cycle, but beginning of cycle. -- Thanks, Matt From ed at leafe.com Thu Mar 29 03:25:48 2018 From: ed at leafe.com (Ed Leafe) Date: Wed, 28 Mar 2018 22:25:48 -0500 Subject: [Openstack-operators] IMPORTANT - future of ops meetups! In-Reply-To: <16064ae9-88e7-2b01-4387-7f965a61c9b1@gmail.com> References: <5ABBF6AC.5020002@openstack.org> <16064ae9-88e7-2b01-4387-7f965a61c9b1@gmail.com> Message-ID: On Mar 28, 2018, at 4:59 PM, Matt Riedemann wrote: > >> If no more PTGs happen, or if they become less appropriate somehow, kick-starting the independent events again is quite easy since these are some of the most no-frills events I've been to - a venue, lunch, wifi, some content etherpads, some moderators, you can have an Ops Meetup! 
> > Yeah, exact same with the old mid-cycle meetup formats, which if we had to revive those because of no PTG, I guess would have to not be mid-cycle, but beginning of cycle. You all should have gotten a survey request about the PTG from the Foundation. Please fill that out, and (assuming that you find the PTG valuable) let them know how important it is to your project's success. -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From iain.macdonnell at oracle.com Thu Mar 29 03:40:19 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Wed, 28 Mar 2018 20:40:19 -0700 Subject: [Openstack-operators] nova-placement-api tuning Message-ID: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> Looking for recommendations on tuning of nova-placement-api. I have a few moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata, and instance creation is getting very slow as they fill up. I discovered that calls to placement seem to be taking a long time, and even this is horribly slow: $ time curl http://apihost:8778/ {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} real 0m20.656s user 0m0.003s sys 0m0.001s $ nova-placement-api is running under mod_wsgi with the "standard"(?) config, i.e.: ... WSGIProcessGroup nova-placement-api WSGIApplicationGroup %{GLOBAL} WSGIPassAuthorization On WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova WSGIScriptAlias / /usr/bin/nova-placement-api ... Seems that the service is overwhelmed. Other WSGI services on the same host are responding in a timely manner. I'm thinking of bumping up "processes" quite a bit (maybe like 16 or so). I'm assuming that increasing "threads" wouldn't help(?). Any lessons from experience in this area? Other suggestions? I'm looking at things like turning off scheduler_tracks_instance_changes, since affinity scheduling is not needed (at least so-far), but not sure that that will help with placement load (seems like it might, though?) TIA, ~iain From cdent+os at anticdent.org Thu Mar 29 08:19:23 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 29 Mar 2018 09:19:23 +0100 (BST) Subject: [Openstack-operators] nova-placement-api tuning In-Reply-To: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> Message-ID: On Wed, 28 Mar 2018, iain MacDonnell wrote: > Looking for recommendations on tuning of nova-placement-api. I have a few > moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata, > and instance creation is getting very slow as they fill up. This should be well within the capabilities of an appropriately installed placement service, so I reckon something is weird about your installation. More within. > $ time curl http://apihost:8778/ > {"error": {"message": "The request you have made requires authentication.", > "code": 401, "title": "Unauthorized"}} > real 0m20.656s > user 0m0.003s > sys 0m0.001s This is good choice for trying to determine what's up because it avoids any interaction with the database and most of the stack of code: the web server answers, runs a very small percentage of the placement python stack and kicks out the 401. So this mostly indicates that socket accept is taking forever. > nova-placement-api is running under mod_wsgi with the "standard"(?) 
config, > i.e.: Do you recall where this configuration comes from? The settings for WSGIDaemonProcess are not very good and if there is some packaging or documentation that is settings this way it would be good to find it and fix it. Depending on what else is on the host running placement I'd boost processes to number of cores divided by 2, 3 or 4 and boost threads to around 25. Or you can leave 'threads' off and it will default to 15 (at least in recent versions of mod wsgi). With the settings a below you're basically saying that you want to handle 3 connections at a time, which isn't great, since each of your compute-nodes wants to talk to placement multiple times a minute (even when nothing is happening). Tweaking the number of processes versus the number of threads depends on whether it appears that the processes are cpu or I/O bound. More threads helps when things are I/O bound. > ... > WSGIProcessGroup nova-placement-api > WSGIApplicationGroup %{GLOBAL} > WSGIPassAuthorization On > WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova > group=nova > WSGIScriptAlias / /usr/bin/nova-placement-api > ... [snip] > Other suggestions? I'm looking at things like turning off > scheduler_tracks_instance_changes, since affinity scheduling is not needed > (at least so-far), but not sure that that will help with placement load > (seems like it might, though?) This won't impact the placement service itself. A while back I did some experiments with trying to overload placement by using the fake virt driver in devstack and wrote it up at https://anticdent.org/placement-scale-fun.html The gist was that with a properly tuned placement service it was other parts of the system that suffered first. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From moreira.belmiro.email.lists at gmail.com Thu Mar 29 09:13:47 2018 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Thu, 29 Mar 2018 11:13:47 +0200 Subject: [Openstack-operators] nova-placement-api tuning In-Reply-To: References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> Message-ID: Hi, with Ocata upgrade we decided to run local placements (one service per cellV1) because we were nervous about possible scalability issues but specially the increase of the schedule time. Fortunately, this is now been address with the placement-req-filter work. We started slowly to aggregate our local placements into a the central one (required for cellsV2). Currently we have >7000 compute nodes (>40k requests per minute) into this central placement. Still ~2000 compute nodes to go. Some lessons so far... - Scale keystone accordingly when enabling placement. - Don't forget to configure memcache for keystone_authtoken. - Change apache mpm default from prefork to event/worker. - Increase the WSGI number of processes/threads considering where placement is running. - Have enough placement nodes considering your number of requests. - Monitor the request time. This impacts VM scheduling. Also, depending how it's configured the LB can also start removing placement nodes. - DB could be a bottleneck. We are still learning how to have a stable placement at scale. It would be great if others can share their experiences. Belmiro CERN On Thu, Mar 29, 2018 at 10:19 AM, Chris Dent wrote: > On Wed, 28 Mar 2018, iain MacDonnell wrote: > > Looking for recommendations on tuning of nova-placement-api. 
I have a few >> moderately-sized deployments (~200 nodes, ~4k instances), currently on >> Ocata, and instance creation is getting very slow as they fill up. >> > > This should be well within the capabilities of an appropriately > installed placement service, so I reckon something is weird about > your installation. More within. > > $ time curl http://apihost:8778/ >> {"error": {"message": "The request you have made requires >> authentication.", "code": 401, "title": "Unauthorized"}} >> real 0m20.656s >> user 0m0.003s >> sys 0m0.001s >> > > This is good choice for trying to determine what's up because it > avoids any interaction with the database and most of the stack of > code: the web server answers, runs a very small percentage of the > placement python stack and kicks out the 401. So this mostly > indicates that socket accept is taking forever. > > nova-placement-api is running under mod_wsgi with the "standard"(?) >> config, i.e.: >> > > Do you recall where this configuration comes from? The settings for > WSGIDaemonProcess are not very good and if there is some packaging > or documentation that is settings this way it would be good to find > it and fix it. > > Depending on what else is on the host running placement I'd boost > processes to number of cores divided by 2, 3 or 4 and boost threads to > around 25. Or you can leave 'threads' off and it will default to 15 > (at least in recent versions of mod wsgi). > > With the settings a below you're basically saying that you want to > handle 3 connections at a time, which isn't great, since each of > your compute-nodes wants to talk to placement multiple times a > minute (even when nothing is happening). > > Tweaking the number of processes versus the number of threads > depends on whether it appears that the processes are cpu or I/O > bound. More threads helps when things are I/O bound. > > ... >> WSGIProcessGroup nova-placement-api >> WSGIApplicationGroup %{GLOBAL} >> WSGIPassAuthorization On >> WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova >> group=nova >> WSGIScriptAlias / /usr/bin/nova-placement-api >> ... >> > > [snip] > > Other suggestions? I'm looking at things like turning off >> scheduler_tracks_instance_changes, since affinity scheduling is not >> needed (at least so-far), but not sure that that will help with placement >> load (seems like it might, though?) >> > > This won't impact the placement service itself. > > A while back I did some experiments with trying to overload > placement by using the fake virt driver in devstack and wrote it up > at https://anticdent.org/placement-scale-fun.html > > The gist was that with a properly tuned placement service it was > other parts of the system that suffered first. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Thu Mar 29 09:24:02 2018 From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=) Date: Thu, 29 Mar 2018 17:24:02 +0800 Subject: [Openstack-operators] [nova] about use spice console Message-ID: Hi,all Now I want to use spice console replace novnc in instance.But the openstack documentation is a bit sparse on what configuration parameters to enable for SPICE console access. 
But my result is the nova-compute service and nova-consoleauth service failed,and the log tell me the "IOError: [Errno 13] Permission denied: /etc/nova/policy.json".So can you help me achieve this?Thank you very much. ENV is Pike or Queens release devstack. This is the step: 1.on controller: yum install -y spice-server spice-protocol openstack-nova-spicehtml5proxy spice-html5 change the nova.conf [default] vnc_enabled=false [spice] html5proxy_host=controller_ip html5proxy_port=6082 keymap=en-us stop the novnc service start the spicehtml5proxy.service systemctl start openstack-nova-spicehtml5proxy.service 2.on conmpute: yum install -y spice-server spice-protocol spice-html5 change the nova-cpu.conf [default] vnc_enabled=false [spice] agent_enabled = True enabled = True html5proxy_base_url = http://controller_ip:6082/spice_auto.html html5proxy_host = 0.0.0.0 html5proxy_port = 6082 keymap = en-us server_listen = 127.0.0.1 server_proxyclient_address = 127.0.0.1 restart the compute service Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu Mar 29 11:24:32 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 29 Mar 2018 12:24:32 +0100 (BST) Subject: [Openstack-operators] nova-placement-api tuning In-Reply-To: References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> Message-ID: On Thu, 29 Mar 2018, Belmiro Moreira wrote: [lots of great advice snipped] > - Change apache mpm default from prefork to event/worker. > - Increase the WSGI number of processes/threads considering where placement > is running. Another option is to switch to nginx and uwsgi. In situations where the web server is essentially operating as a proxy to another process which is being the WSGI server, nginx has a history of being very effective. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From iain.macdonnell at oracle.com Thu Mar 29 16:17:41 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Thu, 29 Mar 2018 09:17:41 -0700 Subject: [Openstack-operators] nova-placement-api tuning In-Reply-To: References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> Message-ID: On 03/29/2018 01:19 AM, Chris Dent wrote: > On Wed, 28 Mar 2018, iain MacDonnell wrote: > >> Looking for recommendations on tuning of nova-placement-api. I have a >> few moderately-sized deployments (~200 nodes, ~4k instances), >> currently on Ocata, and instance creation is getting very slow as they >> fill up. > > This should be well within the capabilities of an appropriately > installed placement service, so I reckon something is weird about > your installation. More within. > >> $ time curl http://apihost:8778/ >> >> {"error": {"message": "The request you have made requires >> authentication.", "code": 401, "title": "Unauthorized"}} >> real    0m20.656s >> user    0m0.003s >> sys    0m0.001s > > This is good choice for trying to determine what's up because it > avoids any interaction with the database and most of the stack of > code: the web server answers, runs a very small percentage of the > placement python stack and kicks out the 401. So this mostly > indicates that socket accept is taking forever. Well, this test connects and gets a 400 immediately: echo | nc -v apihost 8778 so I don't think it's at at the socket level, but, I assume, the actual WSGI app, once the socket connection is established. I did try to choose a test that tickles the app, but doesn't "get too deep", as you say. 
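For readers following this thread, here is a sketch of what a more generous stanza might look like, based on Chris's guidance above. The process and thread counts are illustrative assumptions for a host with roughly 16 cores shared with other services; they are not numbers recommended by anyone in the thread:

    # e.g. /etc/httpd/conf.d/00-nova-placement-api.conf (path varies by distro)
    # cores/2..4 processes; either ~25 threads, or omit 'threads' so
    # mod_wsgi falls back to its default of 15
    WSGIDaemonProcess nova-placement-api processes=8 threads=15 user=nova group=nova
    WSGIProcessGroup nova-placement-api
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    WSGIScriptAlias / /usr/bin/nova-placement-api

Per the CERN lessons earlier in the thread, switching Apache's MPM from prefork to event/worker at the same time is also worthwhile, since the MPM still governs how quickly Apache itself accepts connections before handing them to the daemon processes.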
>> nova-placement-api is running under mod_wsgi with the "standard"(?) >> config, i.e.: > > Do you recall where this configuration comes from? The settings for > WSGIDaemonProcess are not very good and if there is some packaging > or documentation that is settings this way it would be good to find > it and fix it. Good question. I could have sworn it was in the installation guide, but I can't find it now. It must have come from RDO, i.e.: https://github.com/rdo-packages/nova-distgit/blob/rpm-master/nova-placement-api.conf > Depending on what else is on the host running placement I'd boost > processes to number of cores divided by 2, 3 or 4 and boost threads to > around 25. Or you can leave 'threads' off and it will default to 15 > (at least in recent versions of mod wsgi). > > With the settings a below you're basically saying that you want to > handle 3 connections at a time, which isn't great, since each of > your compute-nodes wants to talk to placement multiple times a > minute (even when nothing is happening). Right, that was my basic assessment too.... so now I'm trying to figure out how it should be tuned, but had not been able to find any guidelines, so thought of asking here. You've confirmed that I'm on the right track (or at least "a" right track). > Tweaking the number of processes versus the number of threads > depends on whether it appears that the processes are cpu or I/O > bound. More threads helps when things are I/O bound. Interesting. Will keep that in mind. Thanks! >> ... >>  WSGIProcessGroup nova-placement-api >>  WSGIApplicationGroup %{GLOBAL} >>  WSGIPassAuthorization On >>  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova >> group=nova >>  WSGIScriptAlias / /usr/bin/nova-placement-api >> ... > > [snip] > >> Other suggestions? I'm looking at things like turning off >> scheduler_tracks_instance_changes, since affinity scheduling is not >> needed (at least so-far), but not sure that that will help with >> placement load (seems like it might, though?) > > This won't impact the placement service itself. It seemed like it might be causing the compute nodes to make calls to update allocations, so I was thinking it might reduce the load a bit, but I didn't confirm that. This was "clutching at straws" - hopefully I won't need to now. > A while back I did some experiments with trying to overload > placement by using the fake virt driver in devstack and wrote it up > at https://anticdent.org/placement-scale-fun.html > > > The gist was that with a properly tuned placement service it was > other parts of the system that suffered first. Interesting. Thanks for sharing that! ~iain From cdent+os at anticdent.org Thu Mar 29 17:05:43 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 29 Mar 2018 18:05:43 +0100 (BST) Subject: [Openstack-operators] nova-placement-api tuning In-Reply-To: References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> Message-ID: On Thu, 29 Mar 2018, iain MacDonnell wrote: >> placement python stack and kicks out the 401. So this mostly >> indicates that socket accept is taking forever. > > Well, this test connects and gets a 400 immediately: > > echo | nc -v apihost 8778 > > so I don't think it's at at the socket level, but, I assume, the actual WSGI > app, once the socket connection is established. I did try to choose a test > that tickles the app, but doesn't "get too deep", as you say. Sorry I was being terribly non-specific. 
I meant generically somewhere along the way from the either the TCP socket that is accept the initial http connection to 8778 or the unix domain socket that is between apache2 and the wsgi daemon process. As you've discerned, the TCP socket and apache2 are fine. > Good question. I could have sworn it was in the installation guide, but I > can't find it now. It must have come from RDO, i.e.: > > https://github.com/rdo-packages/nova-distgit/blob/rpm-master/nova-placement-api.conf Ooph. I'll see if I can find someone to talk to about that. > Right, that was my basic assessment too.... so now I'm trying to figure out > how it should be tuned, but had not been able to find any guidelines, so > thought of asking here. You've confirmed that I'm on the right track (or at > least "a" right track). The mod wsgi docs have a fair bit of stuff about tuning in them, but it is mixed in amongst various things, but http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html might be a good starting point. >>> Other suggestions? I'm looking at things like turning off >>> scheduler_tracks_instance_changes, since affinity scheduling is not needed >>> (at least so-far), but not sure that that will help with placement load >>> (seems like it might, though?) >> >> This won't impact the placement service itself. > > It seemed like it might be causing the compute nodes to make calls to update > allocations, so I was thinking it might reduce the load a bit, but I didn't > confirm that. This was "clutching at straws" - hopefully I won't need to now. There's duplication of instance state going to both placement and the nova-scheduler. The number of calls from nova-compute to placement reduces a bit as you updgrade to newer releases. It's still more than we'd prefer. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mriedemos at gmail.com Thu Mar 29 17:21:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 29 Mar 2018 12:21:53 -0500 Subject: [Openstack-operators] [openstack-dev] [all][stable] No more stable Phases welcome Extended Maintenance In-Reply-To: <20180329083625.GO13389@thor.bakeyournoodle.com> References: <20180329083625.GO13389@thor.bakeyournoodle.com> Message-ID: <30a61fef-48e5-2d4b-ab08-4dd805b0ab71@gmail.com> On 3/29/2018 3:36 AM, Tony Breeds wrote: > Hi all, > At Sydney we started the process of change on the stable branches. > Recently we merged a TC resolution[1] to alter the EOL process. The > next step is refinining the stable policy itself. > > I've created a review to do that. I think it covers most of the points > from Sydney and Dublin. > > Please check it out: > https://review.openstack.org/#/c/552733/ > > Yours Tony. > > [1]https://review.openstack.org/548916 +ops -- Thanks, Matt From allison at openstack.org Thu Mar 29 17:22:05 2018 From: allison at openstack.org (Allison Price) Date: Thu, 29 Mar 2018 12:22:05 -0500 Subject: [Openstack-operators] OpenStack User Survey: Identity Service, Networking and Block Storage Drivers Answer Options Message-ID: <9F1755CF-6823-4662-887E-C6D17F962C2D@openstack.org> Hi everyone, We are opening the OpenStack User Survey submission process next month and wanted to collect operator feedback on the answer choices for three particular questions: Identity Service (Keystone) drivers, Network (Neutron) drivers and Block Storage (Cinder) drivers. We want to make sure that we have a list of the most commonly used drivers so that we can collect the appropriate data from OpenStack users. 
Each of the questions will have a free text “Other” option, so they don’t need to be comprehensive, but if you think that there is a driver that should be included, please reply on this email thread or contact me directly. Thanks! Allison Allison Price OpenStack Foundation allison at openstack.org Which OpenStack Identity Service (Keystone) drivers are you using? Active Directory KVS LDAP PAM SQL (default) Templated Other Which OpenStack Network (Neutron) drivers are you using? Cisco UCS / Nexus ML2 - Cisco APIC ML2 - Linux Bridge ML2 - Mellanox ML2 - MidoNet ML2 - OpenDaylight ML2 - Open vSwitch nova-network VMware NSX (formerly NIcira NVP) A10 Networks Arista Big Switch Brocade Embrace Extreme Networks Hyper-V IBM SDN-VE Linux Bridge Mellanox Meta PluginP MidoNet Modular Layer 2 Plugin (ML2) NEC OpenFlow OpenDaylight Nuage Networks One Convergence NVSD Tungsten Fabric (OpenContrail) Open vSwitch PLUMgrid Ruijie Networks Ryu OpenFlow Controller ML2 - Alcatel-Lucent Omniswitch ML2 - Arista ML2 - Big Switch ML2 - Brocade VDX/VCS ML2 - Calico ML2 - Cisco DFA ML2 - Cloudbase Hyper-V ML2 - Freescale SDN ML2 - Freescale FWaaS ML2 - Fujitsu Converged Fabric Switch ML2 - Huawei Agile Controller ML2 - Mellanox SR-IOV ML2 - Nuage Networks ML2 - One Convergence ML2 - ONOS ML2 - OpenFlow Agent ML2 - Pluribus ML2 - Fail-F ML2 - VMware DVS Other Which OpenStack Block Storage (Cinder) drivers are you using? Ceph RBD Coraid Dell EqualLogic EMC GlusterFS HDS HP 3PAR HP LeftHand Huawei IBM GPFS IBM NAS IBM Storwize IBM XIV / DS8000 LVM (default) Mellanox NetApp Nexenta NFS ProphetStor SAN / Solaris Scality Sheepdog SolidFire VMware VMDK Windows Server 2012 Xenapi NFS XenAPI Storage Manager Zadara Other -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Mar 29 17:34:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 29 Mar 2018 12:34:04 -0500 Subject: [Openstack-operators] nova-placement-api tuning In-Reply-To: References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> Message-ID: <0286c039-5046-2eb7-a260-7faa45393e59@gmail.com> On 3/29/2018 12:05 PM, Chris Dent wrote: >>>> Other suggestions? I'm looking at things like turning off >>>> scheduler_tracks_instance_changes, since affinity scheduling is not >>>> needed (at least so-far), but not sure that that will help with >>>> placement load (seems like it might, though?) >>> >>> This won't impact the placement service itself. >> >> It seemed like it might be causing the compute nodes to make calls to >> update allocations, so I was thinking it might reduce the load a bit, >> but I didn't confirm that. This was "clutching at straws" - hopefully >> I won't need to now. > > There's duplication of instance state going to both placement and > the nova-scheduler. The number of calls from nova-compute to > placement reduces a bit as you updgrade to newer releases. It's > still more than we'd prefer. As Chris said, scheduler_tracks_instance_changes doesn't have anything to do with Placement, and it will add more RPC load to your system because all computes are RPC casting to the scheduler for every instance create/delete/move operation along with a periodic that runs, by default, every minute on each compute service to sync things up. The primary need for scheduler_tracks_instance_changes is the (anti-)affinity filters in the scheduler (and maybe if you're using the CachingScheduler). 
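For reference, the switch itself is a one-line nova.conf change on the scheduler and compute nodes. A minimal sketch, assuming a release new enough to use the renamed option (older releases spell it scheduler_tracks_instance_changes under [DEFAULT]):

    [filter_scheduler]
    # stop computes from pushing per-instance updates to the scheduler
    track_instance_changes = False

As the rest of this reply explains, that is only appropriate when the (anti-)affinity filters (and the CachingScheduler) are not being relied on.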
If you don't enable the (anti-)affinity filters (they are enabled by default), then you can disable scheduler_tracks_instance_changes. Note that you can still disable scheduler_tracks_instance_changes and run the affinity filters, but the scheduler will likely make poor decisions in a busy cloud which can result in reschedules, which are also expensive. Long-term, we hope to remove the need for scheduler_tracks_instance_changes at all because we should have all of the information we need about the instances in the Placement service, which is generally considered global to the deployment. However, we don't yet have a way to model affinity/distance in Placement, and that's what's holding us back from removing scheduler_tracks_instance_changes and the existing affinity filters. -- Thanks, Matt From iain.macdonnell at oracle.com Thu Mar 29 17:40:03 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Thu, 29 Mar 2018 10:40:03 -0700 Subject: [Openstack-operators] nova-placement-api tuning In-Reply-To: References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> Message-ID: <0d0bcbd6-b77f-b001-3a28-bb8eaacb4cee@oracle.com> On 03/29/2018 04:24 AM, Chris Dent wrote: > On Thu, 29 Mar 2018, Belmiro Moreira wrote: > > [lots of great advice snipped] > >> - Change apache mpm default from prefork to event/worker. >> - Increase the WSGI number of processes/threads considering where >> placement >> is running. If I'm reading http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html right, it seems that the MPM is not pertinent when using WSGIDaemonProcess. > Another option is to switch to nginx and uwsgi. In situations where > the web server is essentially operating as a proxy to another > process which is being the WSGI server, nginx has a history of being > very effective. Evaluating adoption of uwsgi is on my to-do list ... not least because it'd enable restarting of services individually... ~iain From cdent+os at anticdent.org Thu Mar 29 17:44:41 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 29 Mar 2018 18:44:41 +0100 (BST) Subject: [Openstack-operators] nova-placement-api tuning In-Reply-To: <0d0bcbd6-b77f-b001-3a28-bb8eaacb4cee@oracle.com> References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> <0d0bcbd6-b77f-b001-3a28-bb8eaacb4cee@oracle.com> Message-ID: On Thu, 29 Mar 2018, iain MacDonnell wrote: > If I'm reading > > http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html > > right, it seems that the MPM is not pertinent when using WSGIDaemonProcess. It doesn't impact the number wsgi processes that will exist or how they are configured, but it does control the flexibility with which apache itself will scale to accept initial connections. That's not a problem you're yet seeing at your scale, but is an issue when the number of compute nodes gets much bigger. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From pabelanger at redhat.com Thu Mar 29 19:10:25 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 29 Mar 2018 15:10:25 -0400 Subject: [Openstack-operators] All Hail our Newest Release Name - OpenStack Stein Message-ID: <20180329191025.GC1172@localhost.localdomain> Hi everybody! As the subject reads, the "S" release of OpenStack is officially "Stein". As been with previous elections this wasn't the first choice, that was "Solar". Solar was judged to have legal risk, so as per our name selection process, we moved to the next name on the list. 
Thanks to everybody who participated, and look forward to making OpenStack Stein a great release. Paul From Tim.Bell at cern.ch Thu Mar 29 19:27:38 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Thu, 29 Mar 2018 19:27:38 +0000 Subject: [Openstack-operators] OpenStack User Survey: Identity Service, Networking and Block Storage Drivers Answer Options In-Reply-To: <9F1755CF-6823-4662-887E-C6D17F962C2D@openstack.org> References: <9F1755CF-6823-4662-887E-C6D17F962C2D@openstack.org> Message-ID: <339AB2CE-A99C-43C0-8C3B-875520050DFF@cern.ch> Allison, In the past, there has been some confusion on the ML2 driver since many of the drivers are both ML2 based and have specific drivers. Had you an approach in mind for this time? It does mean that the results won’t be directly comparable but cleaning up this confusion would seem worth it in the longer term. Tim From: Allison Price Date: Thursday, 29 March 2018 at 19:24 To: openstack-operators Subject: [Openstack-operators] OpenStack User Survey: Identity Service, Networking and Block Storage Drivers Answer Options Hi everyone, We are opening the OpenStack User Survey submission process next month and wanted to collect operator feedback on the answer choices for three particular questions: Identity Service (Keystone) drivers, Network (Neutron) drivers and Block Storage (Cinder) drivers. We want to make sure that we have a list of the most commonly used drivers so that we can collect the appropriate data from OpenStack users. Each of the questions will have a free text “Other” option, so they don’t need to be comprehensive, but if you think that there is a driver that should be included, please reply on this email thread or contact me directly. Thanks! Allison Allison Price OpenStack Foundation allison at openstack.org Which OpenStack Identity Service (Keystone) drivers are you using? · Active Directory · KVS · LDAP · PAM · SQL (default) · Templated · Other Which OpenStack Network (Neutron) drivers are you using? · Cisco UCS / Nexus · ML2 - Cisco APIC · ML2 - Linux Bridge · ML2 - Mellanox · ML2 - MidoNet · ML2 - OpenDaylight · ML2 - Open vSwitch · nova-network · VMware NSX (formerly NIcira NVP) · A10 Networks · Arista · Big Switch · Brocade · Embrace · Extreme Networks · Hyper-V · IBM SDN-VE · Linux Bridge · Mellanox · Meta PluginP · MidoNet · Modular Layer 2 Plugin (ML2) · NEC OpenFlow · OpenDaylight · Nuage Networks · One Convergence NVSD · Tungsten Fabric (OpenContrail) · Open vSwitch · PLUMgrid · Ruijie Networks · Ryu OpenFlow Controller · ML2 - Alcatel-Lucent Omniswitch · ML2 - Arista · ML2 - Big Switch · ML2 - Brocade VDX/VCS · ML2 - Calico · ML2 - Cisco DFA · ML2 - Cloudbase Hyper-V · ML2 - Freescale SDN · ML2 - Freescale FWaaS · ML2 - Fujitsu Converged Fabric Switch · ML2 - Huawei Agile Controller · ML2 - Mellanox SR-IOV · ML2 - Nuage Networks · ML2 - One Convergence · ML2 - ONOS · ML2 - OpenFlow Agent · ML2 - Pluribus · ML2 - Fail-F · ML2 - VMware DVS · Other Which OpenStack Block Storage (Cinder) drivers are you using? · Ceph RBD · Coraid · Dell EqualLogic · EMC · GlusterFS · HDS · HP 3PAR · HP LeftHand · Huawei · IBM GPFS · IBM NAS · IBM Storwize · IBM XIV / DS8000 · LVM (default) · Mellanox · NetApp · Nexenta · NFS · ProphetStor · SAN / Solaris · Scality · Sheepdog · SolidFire · VMware VMDK · Windows Server 2012 · Xenapi NFS · XenAPI Storage Manager · Zadara · Other -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhipengh512 at gmail.com Thu Mar 29 23:39:57 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 30 Mar 2018 07:39:57 +0800 Subject: [Openstack-operators] [openstack-dev] All Hail our Newest Release Name - OpenStack Stein In-Reply-To: <20180329191025.GC1172@localhost.localdomain> References: <20180329191025.GC1172@localhost.localdomain> Message-ID: In hindsight, it would be much fun the R release named Ramm :P On Fri, Mar 30, 2018 at 3:10 AM, Paul Belanger wrote: > Hi everybody! > > As the subject reads, the "S" release of OpenStack is officially "Stein". > As > been with previous elections this wasn't the first choice, that was > "Solar". > > Solar was judged to have legal risk, so as per our name selection process, > we > moved to the next name on the list. > > Thanks to everybody who participated, and look forward to making OpenStack > Stein > a great release. > > Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Mar 29 23:45:22 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 29 Mar 2018 18:45:22 -0500 Subject: [Openstack-operators] OpenStack User Survey: Identity Service, Networking and Block Storage Drivers Answer Options In-Reply-To: <9F1755CF-6823-4662-887E-C6D17F962C2D@openstack.org> References: <9F1755CF-6823-4662-887E-C6D17F962C2D@openstack.org> Message-ID: <20180329234522.GA5654@sm-xps> Hey Allison, I have a few comments below about the Cinder drivers. Would love to hear everyone's input too. On Thu, Mar 29, 2018 at 12:22:05PM -0500, Allison Price wrote: > Hi everyone, > > We are opening the OpenStack User Survey submission process next month and wanted to collect operator feedback on the answer choices for three particular questions: Identity Service (Keystone) drivers, Network (Neutron) drivers and Block Storage (Cinder) drivers. We want to make sure that we have a list of the most commonly used drivers so that we can collect the appropriate data from OpenStack users. Each of the questions will have a free text “Other” option, so they don’t need to be comprehensive, but if you think that there is a driver that should be included, please reply on this email thread or contact me directly. > > Thanks! > Allison > Which OpenStack Block Storage (Cinder) drivers are you using? > Ceph RBD Coraid - This was removed from Cinder several releases ago after only a short time in-tree. Dell EqualLogic - May want to indicate an alias for its current name - Dell EMC PS Series EMC - This one may be confusing as there are several EMC drivers, and now that it is Dell EMC, a couple more (see previous entry) GlusterFS - This was also removed, by Red Hat. But that may be a good reason to include it to see if anyone is still using it. 
> HDS
> HP 3PAR
> HP LeftHand
> Huawei
> IBM GPFS
> IBM NAS
> IBM Storwize
> IBM XIV / DS8000
> LVM (default)
> Mellanox
> NetApp
> Nexenta
> NFS
> ProphetStor

SAN / Solaris - Not sure what this is.

> Scality
> Sheepdog

SolidFire - now one of several NetApp drivers

> VMware VMDK

Windows Server 2012 - SMBFS driver?

Xenapi NFS - no idea what this is.

XenAPI Storage Manager - Or this one.

> Zadara
> Other

From lijie at unitedstack.com  Fri Mar 30 02:17:46 2018
From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=)
Date: Fri, 30 Mar 2018 10:17:46 +0800
Subject: [Openstack-operators] [nova] about use spice console
In-Reply-To:
References:
Message-ID:

The error info is:

CRITICAL nova [None req-a84d278b-43db-4c94-864b-7a9733aa772c None None] Unhandled error: IOError: [Errno 13] Permission denied: '/etc/nova/policy.json'
ERROR nova Traceback (most recent call last):
ERROR nova   File "/usr/bin/nova-compute", line 10, in
ERROR nova     sys.exit(main())
ERROR nova   File "/opt/stack/nova/nova/cmd/compute.py", line 57, in main
ERROR nova     topic=compute_rpcapi.RPC_TOPIC)
ERROR nova   File "/opt/stack/nova/nova/service.py", line 240, in create
ERROR nova     periodic_interval_max=periodic_interval_max)
ERROR nova   File "/opt/stack/nova/nova/service.py", line 116, in __init__
ERROR nova     self.manager = manager_class(host=self.host, *args, **kwargs)
ERROR nova   File "/opt/stack/nova/nova/compute/manager.py", line 509, in __init__
ERROR nova     self.compute_api = compute.API()
ERROR nova   File "/opt/stack/nova/nova/compute/__init__.py", line 39, in API
ERROR nova     return importutils.import_object(class_name, *args, **kwargs)
ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 44, in import_object
ERROR nova     return import_class(import_str)(*args, **kwargs)
ERROR nova   File "/opt/stack/nova/nova/compute/api.py", line 254, in __init__
ERROR nova     self.compute_rpcapi = compute_rpcapi.ComputeAPI()
ERROR nova   File "/opt/stack/nova/nova/compute/rpcapi.py", line 354, in __init__
ERROR nova     self.router = rpc.ClientRouter(default_client)
ERROR nova   File "/opt/stack/nova/nova/rpc.py", line 414, in __init__
ERROR nova     self.run_periodic_tasks(nova.context.RequestContext(overwrite=False))
ERROR nova   File "/opt/stack/nova/nova/context.py", line 146, in __init__
ERROR nova     self.is_admin = policy.check_is_admin(self)
ERROR nova   File "/opt/stack/nova/nova/policy.py", line 177, in check_is_admin
ERROR nova     init()
ERROR nova   File "/opt/stack/nova/nova/policy.py", line 75, in init
ERROR nova     _ENFORCER.load_rules()
ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_policy/policy.py", line 537, in load_rules
ERROR nova     overwrite=self.overwrite)
ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_policy/policy.py", line 675, in _load_policy_file
ERROR nova     self._file_cache, path, force_reload=force_reload)
ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_policy/_cache_handler.py", line 41, in read_cached_file
ERROR nova     with open(filename) as fap:
ERROR nova IOError: [Errno 13] Permission denied: '/etc/nova/policy.json'
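The traceback points at a plain file-permission problem rather than anything SPICE-specific: the user that nova-compute runs as cannot read /etc/nova/policy.json. A quick way to confirm and fix it on the affected node; the owner, group and mode below are illustrative (on a devstack host the services typically run as the stack user rather than nova):

    ls -l /etc/nova/policy.json
    chown nova:nova /etc/nova/policy.json
    chmod 640 /etc/nova/policy.json

Since Pike, nova's policy defaults live in code, so if the file carries no local customizations it can usually just be removed instead.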
------------------ Original ------------------
From: "李杰";
Date: Thu, Mar 29, 2018 05:24 PM
To: "openstack-operators";
Subject: [Openstack-operators] [nova] about use spice console

Hi, all
      Now I want to use the spice console to replace novnc in instances, but the openstack documentation is a bit sparse on what configuration parameters to enable for SPICE console access. My result is that the nova-compute and nova-consoleauth services fail, and the log tells me "IOError: [Errno 13] Permission denied: /etc/nova/policy.json". So can you help me achieve this? Thank you very much.
      The ENV is a Pike or Queens release devstack. These are the steps:

1. On the controller:

   yum install -y spice-server spice-protocol openstack-nova-spicehtml5proxy spice-html5

   change nova.conf:

   [default]
   vnc_enabled=false

   [spice]
   html5proxy_host=controller_ip
   html5proxy_port=6082
   keymap=en-us

   stop the novnc service, then start the spicehtml5proxy service:

   systemctl start openstack-nova-spicehtml5proxy.service

2. On the compute node:

   yum install -y spice-server spice-protocol spice-html5

   change nova-cpu.conf:

   [default]
   vnc_enabled=false

   [spice]
   agent_enabled = True
   enabled = True
   html5proxy_base_url = http://controller_ip:6082/spice_auto.html
   html5proxy_host = 0.0.0.0
   html5proxy_port = 6082
   keymap = en-us
   server_listen = 127.0.0.1
   server_proxyclient_address = 127.0.0.1

   restart the compute service

Best Regards
Rambo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kchamart at redhat.com  Fri Mar 30 14:26:43 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Fri, 30 Mar 2018 16:26:43 +0200
Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Solar" release
Message-ID: <20180330142643.ff3czxy35khmjakx@eukaryote>

The last version bump was in the "Pike" release (commit: b980df0, 11-Feb-2017), and we didn't do any bump during "Queens". So it's time to increment the versions (which will also let us get rid of some backward-compatibility cruft) and pick future versions of libvirt and QEMU.

As it stands, during the "Pike" release the advertised NEXT_MIN versions were set to: libvirt 1.3.1 and QEMU 2.5.0 -- but they weren't actually bumped for the "Queens" release. So they will now be applied for the "Rocky" release. (Hmm, but note that libvirt 1.3.1 was released more than 2 years ago[1].)

While at it, we should also discuss what the NEXT_MIN libvirt and QEMU versions for the "Solar" release will be. To that end, I've spent some time going through different distributions and updated the DistroSupportMatrix Wiki[2].

Taking the DistroSupportMatrix into account, for the sake of discussion, how about the following NEXT_MIN versions for the "Solar" release:

(a) libvirt: 3.2.0 (released on 23-Feb-2017)

    This satisfies most distributions, but will affect Debian "Stretch", as they only have 3.0.0 in the stable branch -- I've checked their repositories[3][4]. Although the latest update for the stable release "Stretch (9.4)" was released only on 10-March-2018, I don't think they increment libvirt and QEMU versions in stable. Is there another way for "Stretch (9.4)" users to get the relevant versions from elsewhere?

(b) QEMU: 2.9.0 (released on 20-Apr-2017)

    This too satisfies most distributions, but will affect Oracle Linux -- which seems to ship QEMU 1.5.3 (released in August 2013) with their "7", going by the Wiki. It will also affect Debian "Stretch", as it only has 2.8.0.

Can folks chime in here?
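For operators who want to check what they are currently running before chiming in, something along these lines works on most distributions (package names vary; the ones below are illustrative):

    # libvirt and hypervisor (QEMU/KVM) versions as libvirt reports them
    virsh version

    # or ask the package manager directly
    rpm -q libvirt qemu-kvm                    # RHEL/CentOS/Fedora family
    dpkg -l libvirt-daemon qemu-system-x86     # Debian/Ubuntu family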
[1] https://www.redhat.com/archives/libvirt-announce/2016-January/msg00002.html [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix [3] https://packages.qa.debian.org/libv/libvirt.html [4] https://packages.qa.debian.org/libv/libvirt.html -- /kashyap From sean.mcginnis at gmx.com Fri Mar 30 14:49:17 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 30 Mar 2018 09:49:17 -0500 Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Solar" release In-Reply-To: <20180330142643.ff3czxy35khmjakx@eukaryote> References: <20180330142643.ff3czxy35khmjakx@eukaryote> Message-ID: <20180330144917.GA7872@sm-xps> > While at it, we should also discuss about what will be the NEXT_MIN > libvirt and QEMU versions for the "Solar" release. To that end, I've > spent going through different distributions and updated the > DistroSupportMatrix Wiki[2]. > > Taking the DistroSupportMatrix into picture, for the sake of discussion, > how about the following NEXT_MIN versions for "Solar" release: > Correction - for the "Stein" release. :) From mrhillsman at gmail.com Fri Mar 30 16:36:26 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Fri, 30 Mar 2018 11:36:26 -0500 Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Solar" release In-Reply-To: <20180330144917.GA7872@sm-xps> References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180330144917.GA7872@sm-xps> Message-ID: ;) On Fri, Mar 30, 2018 at 9:49 AM, Sean McGinnis wrote: > > While at it, we should also discuss about what will be the NEXT_MIN > > libvirt and QEMU versions for the "Solar" release. To that end, I've > > spent going through different distributions and updated the > > DistroSupportMatrix Wiki[2]. > > > > Taking the DistroSupportMatrix into picture, for the sake of discussion, > > how about the following NEXT_MIN versions for "Solar" release: > > > Correction - for the "Stein" release. :) > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From iain.macdonnell at oracle.com Fri Mar 30 17:11:05 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Fri, 30 Mar 2018 10:11:05 -0700 Subject: [Openstack-operators] nova-placement-api tuning In-Reply-To: References: <76b24db4-bdbb-663c-7d60-4eaaedfe3eac@oracle.com> Message-ID: On 03/29/2018 02:13 AM, Belmiro Moreira wrote: > Some lessons so far... > - Scale keystone accordingly when enabling placement. Speaking of which; I suppose I have the same question for keystone (currently running under httpd also). I'm currently using threads=1, based on this (IIRC): https://bugs.launchpad.net/puppet-keystone/+bug/1602530 but I'm not sure if that's valid? Between placement and ceilometer feeding gnocchi, keystone is kept very busy. Recommendations for processes/threads for keystone? And any other tuning hints... ? Thanks! 
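By way of illustration only, none of this being a recommendation from the thread: the same mod_wsgi knobs discussed for placement apply to keystone's wsgi-keystone.conf, i.e. raising processes/threads on the WSGIDaemonProcess lines for the public and admin applications rather than leaving threads=1. The other lesson from earlier in the thread worth repeating is token-validation caching in every service that talks to keystone; a sketch, with hypothetical memcached hosts:

    [keystone_authtoken]
    memcached_servers = memcache1:11211,memcache2:11211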
From kchamart at redhat.com Sat Mar 31 13:17:52 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Sat, 31 Mar 2018 15:17:52 +0200
Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Solar" release
In-Reply-To: <20180330144917.GA7872@sm-xps>
References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180330144917.GA7872@sm-xps>
Message-ID: <20180331131752.rtdm3c3dw7iyyqyn@eukaryote>

On Fri, Mar 30, 2018 at 09:49:17AM -0500, Sean McGinnis wrote:
> > While at it, we should also discuss what the NEXT_MIN libvirt and QEMU
> > versions for the "Solar" release should be. To that end, I've gone
> > through different distributions and updated the DistroSupportMatrix
> > Wiki[2].
> >
> > Taking the DistroSupportMatrix into account, for the sake of discussion,
> > how about the following NEXT_MIN versions for the "Solar" release:
> >
> Correction - for the "Stein" release. :)

Darn, I should've triple-checked before I assumed it was to be "Solar". If "Stein" is confirmed, I'll re-send this email with the correct release name for clarity.

Thanks for correcting.

--
/kashyap

From kchamart at redhat.com Sat Mar 31 13:41:44 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Sat, 31 Mar 2018 15:41:44 +0200
Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Solar" release
In-Reply-To: <20180331131752.rtdm3c3dw7iyyqyn@eukaryote>
References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180330144917.GA7872@sm-xps> <20180331131752.rtdm3c3dw7iyyqyn@eukaryote>
Message-ID: <20180331134144.zue2lpoz5o32zwjh@eukaryote>

On Sat, Mar 31, 2018 at 03:17:52PM +0200, Kashyap Chamarthy wrote:
> On Fri, Mar 30, 2018 at 09:49:17AM -0500, Sean McGinnis wrote:

[...]

> > > Taking the DistroSupportMatrix into account, for the sake of discussion,
> > > how about the following NEXT_MIN versions for the "Solar" release:
> > >
> > Correction - for the "Stein" release. :)
>
> Darn, I should've triple-checked before I assumed it was to be "Solar".
> If "Stein" is confirmed, I'll re-send this email with the correct
> release name for clarity.

It actually is:

http://lists.openstack.org/pipermail/openstack-dev/2018-March/128899.html
-- All Hail our Newest Release Name - OpenStack Stein

(That email went into the 'openstack-operators' maildir for me; my filtering fault.)

I won't start another thread; I'll just leave this existing thread intact, as people will read it as "whatever name the 'S' release ends up with" (as 'fungi' put it on IRC).

[...]

--
/kashyap

From kchamart at redhat.com Sat Mar 31 14:09:29 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Sat, 31 Mar 2018 16:09:29 +0200
Subject: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Stein" release
In-Reply-To: <20180330142643.ff3czxy35khmjakx@eukaryote>
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
Message-ID: <20180331140929.r5kj3qyrefvsovwf@eukaryote>

[Meta comment: corrected the email subject: "Solar" --> "Stein"]

On Fri, Mar 30, 2018 at 04:26:43PM +0200, Kashyap Chamarthy wrote:
> The last version bump was in the "Pike" release (commit: b980df0,
> 11-Feb-2017), and we didn't do any bump during "Queens". So it's time
> to increment the versions (which will also let us get rid of some
> backward compatibility cruft) and pick future versions of libvirt and
> QEMU.
>
> As it stands, during the "Pike" release the advertised NEXT_MIN versions
> were set to libvirt 1.3.1 and QEMU 2.5.0 -- but they weren't actually
> bumped for the "Queens" release, so they will now be applied for the
> "Rocky" release. (Note that libvirt 1.3.1 was released more than 2
> years ago[1].)
>
> While at it, we should also discuss what the NEXT_MIN libvirt and QEMU
> versions for the "Solar" release should be. To that end, I've gone
> through different distributions and updated the DistroSupportMatrix
> Wiki[2].
>
> Taking the DistroSupportMatrix into account, for the sake of discussion,
> how about the following NEXT_MIN versions for the "Solar" release:
>
> (a) libvirt: 3.2.0 (released on 23-Feb-2017)
>
>     This satisfies most distributions, but will affect Debian "Stretch",
>     as it only has 3.0.0 in the stable branch -- I've checked their
>     repositories[3][4]. Although the latest update for the stable
>     release, "Stretch (9.4)", was released only on 10-March-2018, I don't
>     think they increment libvirt and QEMU versions in stable. Is there
>     another way for "Stretch (9.4)" users to get the relevant versions
>     from elsewhere?
>
> (b) QEMU: 2.9.0 (released on 20-Apr-2017)
>
>     This too satisfies most distributions, but will affect Oracle Linux
>     -- which seems to ship QEMU 1.5.3 (released in August 2013) with
>     their "7", per the Wiki. It will also affect Debian "Stretch", as it
>     only has 2.8.0.
>
> Can folks chime in here?
>
> [1] https://www.redhat.com/archives/libvirt-announce/2016-January/msg00002.html
> [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
> [3] https://packages.qa.debian.org/libv/libvirt.html
> [4] https://packages.qa.debian.org/libv/libvirt.html
>
> --
> /kashyap

--
/kashyap
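For operators wanting to see whether their hypervisors already meet the proposed minimums, something like the following on a compute node shows the installed versions (binary names and paths vary per distribution, so treat this as a sketch):

    # libvirt daemon and client versions
    libvirtd --version
    virsh version

    # QEMU version (the binary name differs across distros, e.g.
    # qemu-kvm on RHEL-family systems, qemu-system-x86_64 elsewhere)
    /usr/libexec/qemu-kvm --version 2>/dev/null || qemu-system-x86_64 --version

The minimums themselves are tracked as constants in nova's libvirt driver (MIN_LIBVIRT_VERSION / NEXT_MIN_LIBVIRT_VERSION and their QEMU equivalents), and nova-compute should log a deprecation warning at startup when the host's versions are older than the advertised NEXT_MIN values.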