From mgagne at iweb.com Wed Apr 1 00:06:03 2015 From: mgagne at iweb.com (Mathieu Gagné) Date: Tue, 31 Mar 2015 20:06:03 -0400 Subject: [Openstack-operators] Security around enterprise credentials and OpenStack API Message-ID: <551B366B.1090607@iweb.com> Hi, Let's say I wish to use an existing enterprise LDAP service to manage my OpenStack users so I only have one place to manage users. How would you manage authentication and credentials from a security point of view? Do you tell your users to use their enterprise credentials, or do you use another method/credentials? The reason is that (usually) enterprise credentials also give access to a whole lot of systems other than OpenStack itself. And it goes without saying that I'm not fond of the idea of storing my password in plain text to be used by some scripts I created. What's your opinion/suggestion? Do you guys have a second credential system solely used for OpenStack? -- Mathieu From matt at mattfischer.com Wed Apr 1 00:35:08 2015 From: matt at mattfischer.com (Matt Fischer) Date: Tue, 31 Mar 2015 18:35:08 -0600 Subject: [Openstack-operators] Security around enterprise credentials and OpenStack API In-Reply-To: <551B366B.1090607@iweb.com> References: <551B366B.1090607@iweb.com> Message-ID: Mathieu, We use LDAP (AD) with a fallback to MySQL. This allows us to store service accounts (like nova) and "team accounts" for use in Jenkins/scripts etc. in MySQL. We only do Identity via LDAP, and we have a forked copy of this driver (https://github.com/SUSE-Cloud/keystone-hybrid-backend) to do this. We don't have any permissions to write into LDAP or move people into groups, so we keep a copy of users locally for purposes of user-list operations. The only interaction between OpenStack and LDAP for us is when that driver tries a bind. On Tue, Mar 31, 2015 at 6:06 PM, Mathieu Gagné
wrote: > Hi, > > Lets say I wish to use an existing enterprise LDAP service to manage my > OpenStack users so I only have one place to manage users. > > How would you manage authentication and credentials from a security > point of view? Do you tell your users to use their enterprise > credentials or do you use an other method/credentials? > > The reason is that (usually) enterprise credentials also give access to > a whole lot of systems other than OpenStack itself. And it goes without > saying that I'm not fond of the idea of storing my password in plain > text to be used by some scripts I created. > > What's your opinion/suggestion? Do you guys have a second credential > system solely used for OpenStack? > > -- > Mathieu > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.heckmann at ubisoft.com Wed Apr 1 02:58:47 2015 From: marc.heckmann at ubisoft.com (Marc Heckmann) Date: Wed, 1 Apr 2015 02:58:47 +0000 Subject: [Openstack-operators] Security around enterprise credentials and OpenStack API In-Reply-To: References: <551B366B.1090607@iweb.com> Message-ID: Hi all, I was going to post a similar question this evening, so I decided to just bounce on Mathieu?s question. See below inline. > On Mar 31, 2015, at 8:35 PM, Matt Fischer wrote: > > Mathieu, > > We LDAP (AD) with a fallback to MySQL. This allows us to store service accounts (like nova) and "team accounts" for use in Jenkins/scripts etc in MySQL. We only do Identity via LDAP and we have a forked copy of this driver (https://github.com/SUSE-Cloud/keystone-hybrid-backend) to do this. We don't have any permissions to write into LDAP or move people into groups, so we keep a copy of users locally for purposes of user-list operations. 
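The split Matt describes — real users in LDAP, service and team accounts in SQL — can also be approximated without a forked driver by using Keystone's domain-specific identity backends (available from the Juno release on): keep the default domain on the SQL driver for service accounts and point a separate domain at LDAP, read-only. This is a sketch only; the filename, hostname, and DNs are placeholders for your own environment:

```ini
# /etc/keystone/keystone.conf -- enable per-domain identity config
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.corp.conf -- LDAP for the "corp" domain
# only; the default domain stays on the SQL driver for service accounts
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
# read-only: never attempt to write users or groups back into LDAP
user_allow_create = False
user_allow_update = False
user_allow_delete = False
```

With this layout, Keystone only ever binds against LDAP to authenticate, which matches the behavior Matt describes for the hybrid driver.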
The only interaction between OpenStack and LDAP for us is when that driver tries a bind. > > > >> On Tue, Mar 31, 2015 at 6:06 PM, Mathieu Gagné wrote: >> Hi, >> >> Let's say I wish to use an existing enterprise LDAP service to manage my >> OpenStack users so I only have one place to manage users. >> >> How would you manage authentication and credentials from a security >> point of view? Do you tell your users to use their enterprise >> credentials or do you use another method/credentials? We too have integration with enterprise credentials through LDAP, but as you suggest, we certainly don't want users to use those credentials in scripts or store them on instances. Instead we have a custom Web portal where they can create separate Keystone credentials for their project/tenant which are stored in Keystone's MySQL database. Our LDAP integration actually happens at a level above Keystone. We don't actually let users acquire Keystone tokens using their LDAP accounts. We're not really happy with this solution; it's a hack and we are looking to revamp it entirely. The problem is that I have never been able to find a clear answer on how to do this with Keystone. I'm actually quite partial to the way AWS IAM works. Especially the instance "role" features. Roles in AWS IAM are similar to trusts in Keystone, except that they are integrated into the instance metadata. It's pretty cool. Other than that, RBAC policies in OpenStack get us a good way towards IAM-like functionality. We just need a policy editor in Horizon. Anyway, the problem is around delegation of credentials which are used non-interactively. We need to limit what those users can do (through RBAC policy) but also somehow make the credentials ephemeral. If someone (Keystone developer?) could point us in the right direction, that would be great. Thanks in advance. >> >> The reason is that (usually) enterprise credentials also give access to >> a whole lot of systems other than OpenStack itself.
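The trust delegation Marc compares to IAM roles is exposed through Keystone's v3 OS-TRUST API: the trustor delegates a restricted role set on one project to a trustee (e.g. a CI service account), with an expiry. The sketch below builds only the request body for POST /v3/OS-TRust/trusts-style calls — the user/project IDs and role name are placeholders, and field names follow the Identity v3 OS-TRUST extension as best understood here, not a tested client:

```python
import json
from datetime import datetime, timedelta, timezone

def build_trust_request(trustor_id, trustee_id, project_id, role_names, hours=1):
    """Build a v3 OS-TRUST request body delegating a limited role set.

    The trustee can later request a trust-scoped token carrying only
    these roles, and the delegation expires on its own -- addressing the
    "ephemeral, non-interactive credentials" problem.
    """
    expiry = datetime.now(timezone.utc) + timedelta(hours=hours)
    return {
        "trust": {
            "trustor_user_id": trustor_id,   # the human/LDAP user delegating
            "trustee_user_id": trustee_id,   # the script/service account
            "project_id": project_id,
            "impersonation": False,          # trustee acts as itself, not as trustor
            "expires_at": expiry.strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
            "roles": [{"name": n} for n in role_names],
        }
    }

body = build_trust_request("alice-id", "ci-bot-id", "proj-id", ["Member"])
print(json.dumps(body, indent=2))
```

Combined with a tightened policy.json for the trustee's role, this gives a rough equivalent of an IAM role minus the instance-metadata integration Marc points out.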
And it goes without >> saying that I'm not fond of the idea of storing my password in plain >> text to be used by some scripts I created. >> >> What's your opinion/suggestion? Do you guys have a second credential >> system solely used for OpenStack? >> >> -- >> Mathieu >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From comnea.dani at gmail.com Wed Apr 1 06:17:50 2015 From: comnea.dani at gmail.com (Daniel Comnea) Date: Wed, 1 Apr 2015 07:17:50 +0100 Subject: [Openstack-operators] Security around enterprise credentials and OpenStack API In-Reply-To: References: <551B366B.1090607@iweb.com> Message-ID: + developers mailing list, hopefully a developer might be able to chime in. On Wed, Apr 1, 2015 at 3:58 AM, Marc Heckmann wrote: > Hi all, > > I was going to post a similar question this evening, so I decided to just > bounce on Mathieu?s question. See below inline. > > > On Mar 31, 2015, at 8:35 PM, Matt Fischer wrote: > > > > Mathieu, > > > > We LDAP (AD) with a fallback to MySQL. This allows us to store service > accounts (like nova) and "team accounts" for use in Jenkins/scripts etc in > MySQL. We only do Identity via LDAP and we have a forked copy of this > driver (https://github.com/SUSE-Cloud/keystone-hybrid-backend) to do > this. We don't have any permissions to write into LDAP or move people into > groups, so we keep a copy of users locally for purposes of user-list > operations. The only interaction between OpenStack and LDAP for us is when > that driver tries a bind. > > > > > > > >> On Tue, Mar 31, 2015 at 6:06 PM, Mathieu Gagn? 
wrote: > >> Hi, > >> > >> Lets say I wish to use an existing enterprise LDAP service to manage my > >> OpenStack users so I only have one place to manage users. > >> > >> How would you manage authentication and credentials from a security > >> point of view? Do you tell your users to use their enterprise > >> credentials or do you use an other method/credentials? > > We too have integration with enterprise credentials through LDAP, but as > you suggest, we certainly don?t want users to use those credentials in > scripts or store them on instances. Instead we have a custom Web portal > where they can create separate Keystone credentials for their > project/tenant which are stored in Keystone?s MySQL database. Our LDAP > integration actually happens at a level above Keystone. We don?t actually > let users acquire Keystone tokens using their LDAP accounts. > > We?re not really happy with this solution, it?s a hack and we are looking > to revamp it entirely. The problem is that I never have been able to find a > clear answer on how to do this with Keystone. > > I?m actually quite partial to the way AWS IAM works. Especially the > instance ?role" features. Roles in AWS IAM is similar to TRUSTS in Keystone > except that it is integrated into the instance metadata. It?s pretty cool. > > Other than that, RBAC policies in Openstack get us a good way towards IAM > like functionality. We just need a policy editor in Horizon. > > Anyway, the problem is around delegation of credentials which are used > non-interactively. We need to limit what those users can do (through RBAC > policy) but also somehow make the credentials ephemeral. > > If someone (Keystone developer?) could point us in the right direction, > that would be great. > > Thanks in advance. > > >> > >> The reason is that (usually) enterprise credentials also give access to > >> a whole lot of systems other than OpenStack itself. 
And it goes without > >> saying that I'm not fond of the idea of storing my password in plain > >> text to be used by some scripts I created. > >> > >> What's your opinion/suggestion? Do you guys have a second credential > >> system solely used for OpenStack? > >> > >> -- > >> Mathieu > >> > >> _______________________________________________ > >> OpenStack-operators mailing list > >> OpenStack-operators at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Wed Apr 1 19:19:42 2015 From: ayoung at redhat.com (Adam Young) Date: Wed, 01 Apr 2015 15:19:42 -0400 Subject: [Openstack-operators] Security around enterprise credentials and OpenStack API In-Reply-To: References: <551B366B.1090607@iweb.com> Message-ID: <551C44CE.8000508@redhat.com> On 03/31/2015 10:58 PM, Marc Heckmann wrote: > Hi all, > > I was going to post a similar question this evening, so I decided to just bounce on Mathieu?s question. See below inline. > >> On Mar 31, 2015, at 8:35 PM, Matt Fischer wrote: >> >> Mathieu, >> >> We LDAP (AD) with a fallback to MySQL. This allows us to store service accounts (like nova) and "team accounts" for use in Jenkins/scripts etc in MySQL. We only do Identity via LDAP and we have a forked copy of this driver (https://github.com/SUSE-Cloud/keystone-hybrid-backend) to do this. 
We don't have any permissions to write into LDAP or move people into groups, so we keep a copy of users locally for purposes of user-list operations. The only interaction between OpenStack and LDAP for us is when that driver tries a bind. >> >> >> >>> On Tue, Mar 31, 2015 at 6:06 PM, Mathieu Gagn? wrote: >>> Hi, >>> >>> Lets say I wish to use an existing enterprise LDAP service to manage my >>> OpenStack users so I only have one place to manage users. >>> >>> How would you manage authentication and credentials from a security >>> point of view? Do you tell your users to use their enterprise >>> credentials or do you use an other method/credentials? > We too have integration with enterprise credentials through LDAP, but as you suggest, we certainly don?t want users to use those credentials in scripts or store them on instances. Instead we have a custom Web portal where they can create separate Keystone credentials for their project/tenant which are stored in Keystone?s MySQL database. Our LDAP integration actually happens at a level above Keystone. We don?t actually let users acquire Keystone tokens using their LDAP accounts. > > We?re not really happy with this solution, it?s a hack and we are looking to revamp it entirely. The problem is that I never have been able to find a clear answer on how to do this with Keystone. > > I?m actually quite partial to the way AWS IAM works. Especially the instance ?role" features. Roles in AWS IAM is similar to TRUSTS in Keystone except that it is integrated into the instance metadata. It?s pretty cool. > > Other than that, RBAC policies in Openstack get us a good way towards IAM like functionality. We just need a policy editor in Horizon. > > Anyway, the problem is around delegation of credentials which are used non-interactively. We need to limit what those users can do (through RBAC policy) but also somehow make the credentials ephemeral. > > If someone (Keystone developer?) 
could point us in the right direction, that would be great. Kerberos. Works well for Keystone. http://adam.younglogic.com/2014/07/kerberos-for-horizon-and-keystone/ We are working on Kerberos support for Horizon as well, but we might not get it blessed by Kilo time frame. I think there might be a better approach on the Horizon front using SSSD and Federation: http://adam.younglogic.com/2015/03/key-fed-lookup-redux/ That will likely work with Horizon as is (in Kilo) but I have not yet tested it...doing so is on my short list. What CERN is doing is using SAML for everything, and using ADFS with a discovery page to let you use X509, Kerberos, or Password to get a SAML assertion, and then handing that over to Horizon. SAML against Keystone CLI wise requires ECP support on the SAML side...you've been warned,. > > Thanks in advance. > >>> The reason is that (usually) enterprise credentials also give access to >>> a whole lot of systems other than OpenStack itself. And it goes without >>> saying that I'm not fond of the idea of storing my password in plain >>> text to be used by some scripts I created. >>> >>> What's your opinion/suggestion? Do you guys have a second credential >>> system solely used for OpenStack? >>> >>> -- >>> Mathieu >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From klindgren at godaddy.com Wed Apr 1 22:27:34 2015 From: klindgren at godaddy.com (Kris G. 
Lindgren) Date: Wed, 1 Apr 2015 22:27:34 +0000 Subject: [Openstack-operators] Fwd: [openstack-dev] [release] oslo.messaging 1.8.1 In-Reply-To: References: <51E25278-E5AD-4197-A85B-AFF8C57F1294@doughellmann.com> Message-ID: Also, If you are running oslo.messaging 1.8.1 or higher and are wondering why you are no longer seeing notifications from nova. Change notification_driver=nova.openstack.common.notifier.rpc_notifier to notification_driver=messaging and you will start seeing notifications for nova events again. I assume this will apply to other services as well - but we are configured to receive notifications from nova. ____________________________________________ Kris Lindgren Senior Linux Systems Engineer GoDaddy, LLC. On 3/25/15, 8:30 AM, "Davanum Srinivas" wrote: >FYI. for those waiting to try oslo.messaging rabbitmq heartbeat support. > >-- dims > >---------- Forwarded message ---------- >From: Doug Hellmann >Date: Wed, Mar 25, 2015 at 10:13 AM >Subject: [openstack-dev] [release] oslo.messaging 1.8.1 >To: "OpenStack Development Mailing List (not for usage questions)" > > > >We are pleased to announce the release of: > >oslo.messaging 1.8.1: Oslo Messaging API > >This is a Kilo-series patch release, fixing several bugs. 
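The rename Kris describes is the move from the old in-tree copy of the notifier to oslo.messaging's named entry point. In nova.conf (Kilo-era option names, shown as an illustration) it is a one-line change:

```ini
[DEFAULT]
# old value, stops emitting notifications with oslo.messaging >= 1.8.1:
# notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = messaging
# optional: the topic(s) notifications are published on
notification_topics = notifications
```

As Kris notes, other services that emit notifications through oslo.messaging would get the analogous change in their own config files.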
> >For more details, please see the git log history below and: > > http://launchpad.net/oslo.messaging/+milestone/1.8.1 > >Please report issues through launchpad: > > http://bugs.launchpad.net/oslo.messaging > >Changes in oslo.messaging 1.8.0..1.8.1 >-------------------------------------- > >57fad97 Publish tracebacks only on debug level >b5f91b2 Reconnect on connection lost in heartbeat thread >ac8bdb6 cleanup connection pool return >ee18dc5 rabbit: Improves logging >db99154 fix up verb tense in log message >64bdd80 rabbit: heartbeat implementation >9b14d1a Add support for multiple namespaces in Targets > >Diffstat (except docs and test files) >------------------------------------- > >oslo_messaging/_drivers/amqp.py | 44 ++- >oslo_messaging/_drivers/amqpdriver.py | 15 +- >oslo_messaging/_drivers/impl_qpid.py | 2 +- >oslo_messaging/_drivers/impl_rabbit.py | 346 >++++++++++++++++++++--- >oslo_messaging/rpc/dispatcher.py | 2 +- >oslo_messaging/target.py | 9 +- >11 files changed, 541 insertions(+), 70 deletions(-) >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >-- >Davanum Srinivas :: https://twitter.com/dims > >_______________________________________________ >OpenStack-operators mailing list >OpenStack-operators at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From harlowja at outlook.com Wed Apr 1 22:39:31 2015 From: harlowja at outlook.com (Joshua Harlow) Date: Wed, 1 Apr 2015 15:39:31 -0700 Subject: [Openstack-operators] Fwd: [openstack-dev] [release] oslo.messaging 1.8.1 In-Reply-To: References: <51E25278-E5AD-4197-A85B-AFF8C57F1294@doughellmann.com> Message-ID: I hope that in the future https://review.openstack.org/#/c/140318/ can help out making this more obvious; as a good 
example set that works can be very very helpful to understand why/what the options are... Docs that are good would help to... Kris G. Lindgren wrote: > Also, > > If you are running oslo.messaging 1.8.1 or higher and are wondering why > you are no longer seeing notifications from nova. Change > notification_driver=nova.openstack.common.notifier.rpc_notifier to > notification_driver=messaging and you will start seeing notifications for > nova events again. I assume this will apply to other services as well - > but we are configured to receive notifications from nova. > > ____________________________________________ > > Kris Lindgren > Senior Linux Systems Engineer > GoDaddy, LLC. > > > > > On 3/25/15, 8:30 AM, "Davanum Srinivas" wrote: > >> FYI. for those waiting to try oslo.messaging rabbitmq heartbeat support. >> >> -- dims >> >> ---------- Forwarded message ---------- >> From: Doug Hellmann >> Date: Wed, Mar 25, 2015 at 10:13 AM >> Subject: [openstack-dev] [release] oslo.messaging 1.8.1 >> To: "OpenStack Development Mailing List (not for usage questions)" >> >> >> >> We are pleased to announce the release of: >> >> oslo.messaging 1.8.1: Oslo Messaging API >> >> This is a Kilo-series patch release, fixing several bugs. 
>> >> For more details, please see the git log history below and: >> >> http://launchpad.net/oslo.messaging/+milestone/1.8.1 >> >> Please report issues through launchpad: >> >> http://bugs.launchpad.net/oslo.messaging >> >> Changes in oslo.messaging 1.8.0..1.8.1 >> -------------------------------------- >> >> 57fad97 Publish tracebacks only on debug level >> b5f91b2 Reconnect on connection lost in heartbeat thread >> ac8bdb6 cleanup connection pool return >> ee18dc5 rabbit: Improves logging >> db99154 fix up verb tense in log message >> 64bdd80 rabbit: heartbeat implementation >> 9b14d1a Add support for multiple namespaces in Targets >> >> Diffstat (except docs and test files) >> ------------------------------------- >> >> oslo_messaging/_drivers/amqp.py | 44 ++- >> oslo_messaging/_drivers/amqpdriver.py | 15 +- >> oslo_messaging/_drivers/impl_qpid.py | 2 +- >> oslo_messaging/_drivers/impl_rabbit.py | 346 >> ++++++++++++++++++++--- >> oslo_messaging/rpc/dispatcher.py | 2 +- >> oslo_messaging/target.py | 9 +- >> 11 files changed, 541 insertions(+), 70 deletions(-) >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> -- >> Davanum Srinivas :: https://twitter.com/dims >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From george.shuklin at gmail.com Thu Apr 2 12:11:05 2015 From: george.shuklin at gmail.com (George Shuklin) Date: Thu, 02 Apr 2015 15:11:05 +0300 
Subject: [Openstack-operators] Hundreds of instances per host Message-ID: <551D31D9.8080100@gmail.com> Hello. Did someone have experience with many-many instances on a single host? What kind of issues you find on '200 instances' (300) borderline? Any specific performance issues, stability? (KVM) I've just start playing with that idea and I see that 150 instances consume significant amount of CPU for nothing (300%, with 4% per qemu process), and libvirtd eats for 30% in the peak, plus about 11% for nova-compute. But this is in the lab conditions. Any real-life experience? Thanks! From stefano at openstack.org Fri Apr 3 21:59:33 2015 From: stefano at openstack.org (Stefano Maffulli) Date: Fri, 03 Apr 2015 14:59:33 -0700 Subject: [Openstack-operators] OpenStack Community Weekly Newsletter (Mar 27 - Apr 3) Message-ID: <1428098373.7280.17.camel@sputacchio.gateway.2wire.net> Game changer: inside Adobe?s new Marketing Cloud architecture The Adobe Marketing Cloud powers software-as-a-service marketing for outfits such as the Los Angeles Kings, eBags and Verizon Wireless. When the IT infrastructure team in Adobe?s Digital Marketing group realized it was time for an evolutionary revamp, it turned to OpenStack and VMware to up its game. Nominations for OpenStack PTLs (Program Technical Leads) are now open Nominations for OpenStack PTLs (Program Technical Leads) are now open and will remain open until April 9, 2015 05:59 UTC. To announce your candidacy please start a new openstack-dev at lists.openstack.org mailing list thread with the program name as a tag, example [Glance] PTL Candidacy with the body as your announcement of intent. People who are not the candidate, please refrain from posting +1 to the candidate announcement posting. Additional information about the nomination process can be found on the wiki. 
Additions to OpenStack git namespace * Magnum is now in the openstack git namespace * [congress] is an openstack project The Road to Vancouver * Sign up for OpenStack Upstream Training in Vancouver * Canada Visa Information * Official Hotel Room Blocks * 2015 OpenStack T-Shirt Design Contest * Preparation to Design summit * Liberty Design Summit planning * [Nova] Tracking ideas for summit sessions * [QA] Tracking Ideas for Summit Sessions * [neutron] Design Summit Session etherpad Relevant Conversations * The Evolution of core developer to maintainer? * [Nova][Neutron] Status of the nova-network to Neutron migration work * "First App" Tutorial for OpenStack * [api] Erring is Caring: An API Working Group Guideline for Errors * "The Security Team" for OpenStack Deadlines and Development Priorities * [Nova] Identifying release critical bugs in Kilo * [neutron] FF and our march towards the RC * [cinder] Proposals for Liberty Summit * [Cinder] Bug Triage - Call for Participation * [sahara] Design summit proposals for Liberty Summit * [Nova] Liberty specs are now open * [neutron] Liberty Specs are now open! Security Advisories and Notices * None Tips ?n Tricks * By Victoria Mart?nez de la Cruz: Creation of Trove-Compatible Images for RDO Reports from Previous Events * OpenStack QA Code Sprint in NYC * Best Practices and Considerations Deploying OpenStack In Production Upcoming Events * Apr 06 - 07, 2015 April Meetup: ?I Can Haz Moar Networks?? w/Midokura & Cumulus Networks Morrisville, NC, US * Apr 07, 2015 April Sydney Meetup * Apr 08 - 16, 2015 PyCon 2015 Montreal, Quebec, CA * Apr 08 - 09, 2015 Software Defined Networks (SDN) & Linux Based Network OS (#20) Washington D.C., DC, US * Apr 09 - 10, 2015 Are you getting the most out of Cinder block storage in OpenStack? * Apr 09, 2015 OpenStack Howto part 1 - Install and Run Prague, CZ * Apr 13 - 14, 2015 OpenStack Live Santa Clara, CA, US * Apr 13 - 16, 2015 StackAttack! 
A HOLatCollaborate15 Las Vegas, NV, US * Apr 15, 2015 iX OpenStack Tag K?ln, NRW, DE * Apr 15 - 16, 2015 1? Hangout OpenStack Brasil 2015 Brasil, BR * Apr 16 - 18, 2015 Open Cloud 2015 Beijing, Beijing, CN * Apr 16, 2015 8th OpenStack Meetup Stockholm Stockholm, SE * Apr 16, 2015 8th OpenStack User Group Nordics meetup Stockholm, SE * Apr 21 - 22, 2015 CONNECT 2015 Melbourne, Victoria, AU * Apr 22 - 23, 2015 China SDNNFV Conference Beijing, CN * Apr 22, 2015 OpenStack NYC Meetup New York, NY, US * Apr 23, 2015 OpenStack Philadelphia Meetup Philadelphia, PA, US * May 05 - 07, 2015 CeBIT AU 2015 Sydney, NSW, AU * May 18 - 22, 2015 OpenStack Summit May 2015 Vancouver, BC Other News * Parallels goes open source, wants OpenStack?s help to penetrate enterprise * Passion + community support = Success with OpenStack * python-barbicanclient 3.0.3 released * Million level scalability test report from cascading * What's Up Doc? Apr 2 2015 * [release] taskflow 0.8.1 * Call for testing: 2014.2.3 candidate tarballs The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom at openstack.org Mon Apr 6 01:29:21 2015 From: tom at openstack.org (Tom Fifield) Date: Mon, 06 Apr 2015 09:29:21 +0800 Subject: [Openstack-operators] Ops @ Vancouver Summit - agenda brainstorming In-Reply-To: <2D352D0CD819F64F9715B1B89695400D5C71DB91@ORSMSX113.amr.corp.intel.com> References: <5510F034.1000902@openstack.org> <2D352D0CD819F64F9715B1B89695400D5C71DB91@ORSMSX113.amr.corp.intel.com> Message-ID: <5521E171.7020205@openstack.org> A noble aim - I'd support this kind of activity from anyone who wants to do it :) Can you explain more about what you're thinking? 
On 31/03/15 05:26, Barrett, Carol L wrote: > Tom - In support of helping to create impact from the feedback, I suggest we partner someone from the Product Work Group with each of the Moderators. The role of the PWG partner would be to identify/summarize the top 1-2 use cases, recruit a couple of operators to help flesh it out and then bring into the roadmap discussion. > > Thoughts? > Carol > > -----Original Message----- > From: Tom Fifield [mailto:tom at openstack.org] > Sent: Monday, March 23, 2015 10:04 PM > To: OpenStack Operators > Subject: [Openstack-operators] Ops @ Vancouver Summit - agenda brainstorming > > Hi all, > > As you've no doubt guessed - the Ops Summit will return in Vancouver, and we need your help to work out what should be on the agenda. > > If you're new: note that this is in addition to the operations (and > other) conference track's presentations. It's aimed at giving us a design-summit-style place to congregate, swap best practices, ideas and give feedback. > > > We've listened to feedback from the past events, especially Paris, and I'm delighted to report that from Vancouver on we're a fully integrated part of the actual design summit. No more random other hotels far away from things and dodgy signage put together at the last minute - it's time to go Pro :) > > > One biggest pieces of feedback we have regarding the organisation of these events so far is that you want to see direct action happen as a result of our discussions. We've had some success with this mid-cycle, for example: > > * Rabbit - heartbeat patch merged [1], now listed as top 5 issue for ops > * OVS - recommendation for vswitch at least 2.1.2 [2] > * Nova Feedback - large range of action items [3] > * Security - raised priority on policy.json docs [4] > > ... just a few amoungst dozens. 
> > However, we can continue to improve, so please - when you are suggesting sessions in the below etherpad please make sure you phrase them in a way that will result in _actions_ and _results_. > > > > ********************************************************************** > > Please propose session ideas on: > > https://etherpad.openstack.org/p/YVR-ops-meetup > > ensuring you read the new instructions to make sessions 'actionable'. > > > ********************************************************************** > > > > The room allocation is still being worked out (all hail Thierry!), but the current thinking is that the general sessions will all be on one day (likely Tuesday), and the working groups will be distributed throughout the week. > > > More as it comes, and as always, further information about ops meetups and notes from the past can be found on the wiki @: > > https://wiki.openstack.org/wiki/Operations/Meetups > > > Regards, > > > Tom > > > > > [1] https://review.openstack.org/#/c/146047/ > [2] https://etherpad.openstack.org/p/PHL-ops-OVS > [3] > http://lists.openstack.org/pipermail/openstack-dev/2015-March/058801.html > [4] https://bugs.launchpad.net/openstack-manuals/+bug/1311067 > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From tom at openstack.org Mon Apr 6 01:47:59 2015 From: tom at openstack.org (Tom Fifield) Date: Mon, 06 Apr 2015 09:47:59 +0800 Subject: [Openstack-operators] Ops @ Vancouver Summit - agenda brainstorming In-Reply-To: <55193FE9.7050408@openstack.org> References: <5510F034.1000902@openstack.org> <55193FE9.7050408@openstack.org> Message-ID: <5521E5CF.4040109@openstack.org> Many thanks to those who have contributed so far, especially the moderator volunteers! This is the final ping - get to the etherpad now if you would like input on the Ops @ Vancouver Summit things. 
So far we have only 3 architecture show and tell sessions, from our regular contributors. The Architecture show and tell sessions were very popular according to summit feedback in Paris - it would be a shame if we reduced the amount of time for it due to lack of participation :) All you have to do is speak for 5-8 minutes about your architecture, tell us a story about when something exploded or share anything generally cool you think other ops would enjoy :) >> https://etherpad.openstack.org/p/YVR-ops-meetup Regards, Tom On 30/03/15 20:22, Tom Fifield wrote: > Hi all, > > Giving this another stoke. > > So far we have zero volunteers for the architecture show and tell > lightening talks, and very few moderator volunteers :) > > If you have ideas about what should be on the agenda for ops summit > sessions at Vancouver, > >> >> ********************************************************************** >> >> Please propose session ideas on: >> >> https://etherpad.openstack.org/p/YVR-ops-meetup >> >> ensuring you read the new instructions to make sessions 'actionable'. >> >> >> ********************************************************************** > > We basically need to get this done in a week or so, giving us enough > time to promote the agenda before the summit. > > > Regards, > > > Tom > > On 24/03/15 13:03, Tom Fifield wrote: >> Hi all, >> >> As you've no doubt guessed - the Ops Summit will return in Vancouver, >> and we need your help to work out what should be on the agenda. >> >> If you're new: note that this is in addition to the operations (and >> other) conference track's presentations. It's aimed at giving us a >> design-summit-style place to congregate, swap best practices, ideas and >> give feedback. >> >> >> We've listened to feedback from the past events, especially Paris, and >> I'm delighted to report that from Vancouver on we're a fully integrated >> part of the actual design summit. 
No more random other hotels far away >> from things and dodgy signage put together at the last minute - it's >> time to go Pro :) >> >> >> One of the biggest pieces of feedback we have regarding the organisation of >> these events so far is that you want to see direct action happen as a >> result of our discussions. We've had some success with this mid-cycle, >> for example: >> >> * Rabbit - heartbeat patch merged [1], now listed as top 5 issue for ops >> * OVS - recommendation for vswitch at least 2.1.2 [2] >> * Nova Feedback - large range of action items [3] >> * Security - raised priority on policy.json docs [4] >> >> ... just a few amongst dozens. >> >> However, we can continue to improve, so please - when you are suggesting >> sessions in the below etherpad please make sure you phrase them in a way >> that will result in _actions_ and _results_. >> >> >> >> ********************************************************************** >> >> Please propose session ideas on: >> >> https://etherpad.openstack.org/p/YVR-ops-meetup >> >> ensuring you read the new instructions to make sessions 'actionable'. >> >> >> ********************************************************************** >> >> >> >> The room allocation is still being worked out (all hail Thierry!), but >> the current thinking is that the general sessions will all be on one day >> (likely Tuesday), and the working groups will be distributed throughout >> the week.
>> >> >> More as it comes, and as always, further information about ops meetups >> and notes from the past can be found on the wiki @: >> >> https://wiki.openstack.org/wiki/Operations/Meetups >> >> >> Regards, >> >> >> Tom >> >> >> >> >> [1] https://review.openstack.org/#/c/146047/ >> [2] https://etherpad.openstack.org/p/PHL-ops-OVS >> [3] >> http://lists.openstack.org/pipermail/openstack-dev/2015-March/058801.html >> [4] https://bugs.launchpad.net/openstack-manuals/+bug/1311067 >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From tom at openstack.org Mon Apr 6 03:21:33 2015 From: tom at openstack.org (Tom Fifield) Date: Mon, 06 Apr 2015 11:21:33 +0800 Subject: [Openstack-operators] User Survey - Deadline Apr 8 In-Reply-To: <55137E9E.7000109@openstack.org> References: <55137E9E.7000109@openstack.org> Message-ID: <5521FBBD.4000706@openstack.org> Reminder: deadline is April 8th. Please go to: http://www.openstack.org/user-survey and tell your friends. Any problems - mail me :) Regards, Tom On 26/03/15 11:35, Tom Fifield wrote: > Hi all, > > As you know, before previous summits we have been running a survey of > OpenStack users. We're doing it again, and we need your help! > > If you are running an OpenStack cloud, please participate > in the survey [http://www.openstack.org/user-survey]. If you already > completed the survey before, you can simply log back in to update your > deployment details and answer a few new questions.
Please note that if > your survey response has not been updated in 12 months, it will expire, > so we encourage you to take this time to update your existing profile so > your deployment can be included in the upcoming analysis. > > As a member of our community, please help us spread the word. We're > trying to gather as much real-world deployment data as possible to share > back with you. We have made it easier to fill out and added some new > questions (e.g. packaging, containers) > > The information provided is confidential and will only be presented in > aggregate unless the user consents to making it public. > > The deadline to complete the survey and be part of the next report is > April 8 at 23:00 UTC. > > Questions? Check out the FAQ [https://www.openstack.org/user-survey/faq] > or contact me ;) > > Thanks for your help! > > > Regards, > > > Tom > From carol.l.barrett at intel.com Mon Apr 6 20:53:49 2015 From: carol.l.barrett at intel.com (Barrett, Carol L) Date: Mon, 6 Apr 2015 20:53:49 +0000 Subject: [Openstack-operators] Ops @ Vancouver Summit - agenda brainstorming In-Reply-To: <5521E171.7020205@openstack.org> References: <5510F034.1000902@openstack.org> <2D352D0CD819F64F9715B1B89695400D5C71DB91@ORSMSX113.amr.corp.intel.com> <5521E171.7020205@openstack.org> Message-ID: <2D352D0CD819F64F9715B1B89695400D5C725CA8@ORSMSX113.amr.corp.intel.com> My thought is for each work session or general session, listen to the discussion to capture key information to write up a use case. This includes asking clarifying questions along the way. During the wrap-up we'd want to establish priorities for the use cases and ask for a couple of Operator volunteers to review/assist in the development of the use case. If possible, during work sessions, we'd like to leave Vancouver with a strong draft of the use case.
From there we'd put the use cases in the cross-project repo and use the product work group process for socializing with the community and integrating (as appropriate) on the roadmap. I'm open to other thoughts and approaches, so pls offer your thoughts. Thanks Carol -----Original Message----- From: Tom Fifield [mailto:tom at openstack.org] Sent: Sunday, April 05, 2015 6:29 PM To: Barrett, Carol L; OpenStack Operators Subject: Re: [Openstack-operators] Ops @ Vancouver Summit - agenda brainstorming A noble aim - I'd support this kind of activity from anyone who wants to do it :) Can you explain more about what you're thinking? On 31/03/15 05:26, Barrett, Carol L wrote: > Tom - In support of helping to create impact from the feedback, I suggest we partner someone from the Product Work Group with each of the Moderators. The role of the PWG partner would be to identify/summarize the top 1-2 use cases, recruit a couple of operators to help flesh it out and then bring into the roadmap discussion. > > Thoughts? > Carol > > -----Original Message----- > From: Tom Fifield [mailto:tom at openstack.org] > Sent: Monday, March 23, 2015 10:04 PM > To: OpenStack Operators > Subject: [Openstack-operators] Ops @ Vancouver Summit - agenda > brainstorming > > Hi all, > > As you've no doubt guessed - the Ops Summit will return in Vancouver, and we need your help to work out what should be on the agenda. > > If you're new: note that this is in addition to the operations (and > other) conference track's presentations. It's aimed at giving us a design-summit-style place to congregate, swap best practices, ideas and give feedback. > > > We've listened to feedback from the past events, especially Paris, and > I'm delighted to report that from Vancouver on we're a fully > integrated part of the actual design summit.
No more random other > hotels far away from things and dodgy signage put together at the last > minute - it's time to go Pro :) > > > One of the biggest pieces of feedback we have regarding the organisation of these events so far is that you want to see direct action happen as a result of our discussions. We've had some success with this mid-cycle, for example: > > * Rabbit - heartbeat patch merged [1], now listed as top 5 issue for > ops > * OVS - recommendation for vswitch at least 2.1.2 [2] > * Nova Feedback - large range of action items [3] > * Security - raised priority on policy.json docs [4] > > ... just a few amongst dozens. > > However, we can continue to improve, so please - when you are suggesting sessions in the below etherpad please make sure you phrase them in a way that will result in _actions_ and _results_. > > > > ********************************************************************** > > Please propose session ideas on: > > https://etherpad.openstack.org/p/YVR-ops-meetup > > ensuring you read the new instructions to make sessions 'actionable'. > > > ********************************************************************** > > > > The room allocation is still being worked out (all hail Thierry!), but the current thinking is that the general sessions will all be on one day (likely Tuesday), and the working groups will be distributed throughout the week.
> > > More as it comes, and as always, further information about ops meetups and notes from the past can be found on the wiki @: > > https://wiki.openstack.org/wiki/Operations/Meetups > > > Regards, > > > Tom > > > > > [1] https://review.openstack.org/#/c/146047/ > [2] https://etherpad.openstack.org/p/PHL-ops-OVS > [3] > http://lists.openstack.org/pipermail/openstack-dev/2015-March/058801.h > tml [4] https://bugs.launchpad.net/openstack-manuals/+bug/1311067 > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operator > s > From aishwarya.adyanthaya at accenture.com Tue Apr 7 12:16:00 2015 From: aishwarya.adyanthaya at accenture.com (aishwarya.adyanthaya at accenture.com) Date: Tue, 7 Apr 2015 12:16:00 +0000 Subject: [Openstack-operators] Endpoints in kubernetes Message-ID: Hello, I'm trying to integrate Kubernetes with OpenStack and I've started with the master node. While creating the master node I downloaded the Kubernetes.git and next I executed the command 'make release'. During the release, I got a few lines that read 'Error on creating endpoints: endpoints "service1" not found' though in the end of the execution it read successful. When I checked the network it read 'Error: cannot sync with the cluster using endpoints http://127.0.0.1:4001'. Does anyone know if we need to add extra packages or other configurations that need to be done before getting Kubernetes.git on the node to rectify the error? Thanks! ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited.
Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. ______________________________________________________________________________________ www.accenture.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From afreedland at mirantis.com Tue Apr 7 15:12:50 2015 From: afreedland at mirantis.com (Alex Freedland) Date: Tue, 7 Apr 2015 08:12:50 -0700 Subject: [Openstack-operators] Endpoints in kubernetes In-Reply-To: References: Message-ID: Here is a link for disk image builder for Kubernetes images with use in Murano: https://github.com/stackforge/murano/tree/master/contrib/elements/kubernetes It has all steps to preinstall Kubernetes on top of Ubuntu. Alex Freedland Co-Founder and Chairman Mirantis, Inc. On Tue, Apr 7, 2015 at 5:16 AM, wrote: > Hello, > > I'm trying to integrate Kubernetes with OpenStack and I've started with > the master node. While creating the master node I downloaded the > Kubernetes.git and next I executed the command 'make release'. During the > release, I got a few lines that read 'Error on creating endpoints: endpoints > "service1" not found' though in the end of the execution it read > successful. When I checked the network it read 'Error: cannot sync with > the cluster using endpoints http://127.0.0.1:4001'. > > Does anyone know if we need to add extra packages or other configurations > that need to be done before getting Kubernetes.git on the node to rectify > the error? > > Thanks! > > ------------------------------ > > This message is for the designated recipient only and may contain > privileged, proprietary, or otherwise confidential information. If you have > received it in error, please notify the sender immediately and delete the > original.
_______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.heckmann at ubisoft.com Tue Apr 7 15:36:27 2015 From: marc.heckmann at ubisoft.com (Marc Heckmann) Date: Tue, 7 Apr 2015 15:36:27 +0000 Subject: [Openstack-operators] Dynamic Policy for Access Control In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E50102865948@CERNXCHG41.cern.ch> References: <54EB4AE2.1000203@redhat.com> <5D7F9996EA547448BC6C54C8C5AAF4E50102865948@CERNXCHG41.cern.ch> Message-ID: <1428424599.61768.6.camel@localhost.localdomain> My apologies for not seeing this sooner as the topic is of great interest. My comments below inline. On Mon, 2015-02-23 at 16:41 +0000, Tim Bell wrote: > > -----Original Message----- > > From: Adam Young [mailto:ayoung at redhat.com] > > Sent: 23 February 2015 16:45 > > To: openstack-operators at lists.openstack.org > > Subject: [Openstack-operators] Dynamic Policy for Access Control > > > > "Admin can do everything!" has been a common lament, heard for multiple > > summits. It's more than just a development issue. I'd like to fix that. I think we > > all would. > > > > > > I'm looking to get some Operator input on the Dynamic Policy issue.
I wrote up a > > general overview last fall, after the Kilo summit: > > > > https://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/ I agree with everything in that post. I would add the following comments: 1. I doubt this will change, but to be clear, we cannot lose the ability to create custom roles and limit the capabilities of the standard roles. For example, if I wanted to limit the ability to make images public or limit the ability to associate a floating IP. 2. This work should not be done in a vacuum. Ideally, Horizon support for assigning roles to users and editing policy should be released at the same time or not long after. I realize that this is easier said than done, but it will be important in order for the feature to get used. > > > > > > Some of what I am looking at is: what are the general roles that Operators > > would like to have by default when deploying OpenStack? > > > > As I described in http://openstack-in-production.blogspot.ch/2015/02/delegation-of-roles.html, we've got (mapped per-project to an AD group) > > - operator (start/stop/reboot/console) > - accounting (read ceilometer data for reporting) > > > I've submitted a talk about policy for the Summit: > > https://www.openstack.org/vote-vancouver/presentation/dynamic-policy-for- > > access-control > > > > If you want, please vote for it, but even if it does not get selected, I'd like to > > discuss Policy with the operators at the summit, as input to the Keystone > > development effort. > > > > Sounds like a good topic for the ops meetup track. > > > Feedback greatly welcome.
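Marc's first point above (keeping the ability to restrict operations such as making images public) is, in current OpenStack releases, expressed as entries in each service's policy.json. A minimal sketch of what that could look like for Glance; the custom role name `cloud-admin` is illustrative, not something from this thread:

```json
{
    "context_is_admin": "role:admin",
    "default": "",

    "publicize_image": "role:cloud-admin",

    "add_image": "",
    "delete_image": "",
    "modify_image": ""
}
```

With a rule like this, only tokens carrying the `cloud-admin` role can make an image public, while ordinary image operations stay open to project members; the same pattern would apply to restricting floating IP association in Nova's or Neutron's policy file.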
> > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From sorrison at gmail.com Wed Apr 8 03:45:46 2015 From: sorrison at gmail.com (Sam Morrison) Date: Wed, 8 Apr 2015 13:45:46 +1000 Subject: [Openstack-operators] Flat network with linux bridge plugin In-Reply-To: <0f7b01d04b5f$ef1a5100$cd4ef300$@eurecom.fr> References: <0f7b01d04b5f$ef1a5100$cd4ef300$@eurecom.fr> Message-ID: <66A29CC3-24C1-4DCC-AE07-36E72723E62C@gmail.com> Hi Daniele, I've started playing with neutron too and have the exact same issue. Did you find a solution? Cheers, Sam > On 18 Feb 2015, at 8:47 pm, Daniele Venzano wrote: > > Hello, > > I'm trying to configure a very simple Neutron setup. > > On the compute nodes I want a linux bridge connected to a physical interface on one side and the VMs on the other side. This I have, by using the linux bridge agent and a physnet1:em1 mapping in the config file. > > On the controller side I need the dhcp and metadata agents. I installed and configured them. They start, no errors in logs. I see a namespace with a ns-* interface in it for dhcp. Outside the namespace I see a tap* interface without IP address, not connected to anything. > I installed the linux bridge agent also on the controller node, hoping it would create the bridge between the physnet interface and the dhcp namespace tap interface, but it just sits there and does nothing. > > So: I have VMs sending DHCP requests. I see the requests on the controller node, but the dhcp namespace is not connected to anything. > I can provide logs and config files, but probably I just need a hint in the right direction.
> > On the network controller: > Do I need a bridge to connect the namespace to the physical interface? > Should this bridge be created by me by hand, or by the linuxbridge agent? Should I run the linuxbridge agent on the network controller? > > I do not want/have a l3 agent. I want to have just one shared network for all tenants, very simple. > > Thanks, > Daniele > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniele.venzano at eurecom.fr Wed Apr 8 08:00:12 2015 From: daniele.venzano at eurecom.fr (Daniele Venzano) Date: Wed, 8 Apr 2015 10:00:12 +0200 Subject: [Openstack-operators] Flat network with linux bridge plugin In-Reply-To: <66A29CC3-24C1-4DCC-AE07-36E72723E62C@gmail.com> References: <0f7b01d04b5f$ef1a5100$cd4ef300$@eurecom.fr> <66A29CC3-24C1-4DCC-AE07-36E72723E62C@gmail.com> Message-ID: <045c01d071d2$0a406e90$1ec14bb0$@eurecom.fr> Well, I found a way to make it work. Yes, you need a bridge (brctl addbr ...). You need to create it by hand and add the interfaces (physical and dnsmasq namespace) to it. The linuxbridge agent installed on the network node does not do anything. The problem with this is that the interface for the namespace is created after an arbitrary amount of time by one of the neutron daemons, so you cannot simply put the bridge creation in one of the boot scripts, but you have to wait for the interface to appear. From: Sam Morrison [mailto:sorrison at gmail.com] Sent: Wednesday 08 April 2015 05:46 To: Daniele Venzano Cc: openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] Flat network with linux bridge plugin Hi Daniele, I've started playing with neutron too and have the exact same issue. Did you find a solution?
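Daniele's manual workaround (poll until neutron's DHCP agent finally creates its tap device, then build the bridge by hand and enslave the physical NIC and the tap) could be scripted roughly as below. This is a sketch, not something from the thread: interface and bridge names (`em1`, `brq-flat`) are illustrative, `brctl`/`ip` need root, so it defaults to a dry run that only prints the commands, and the last lines exercise the logic against a throwaway fake directory instead of `/sys/class/net`.

```shell
#!/bin/sh
# Manual bridge setup for the flat-network workaround described above.
# The linuxbridge agent on the network node doesn't create this bridge,
# and the DHCP namespace's tap device appears at an unpredictable time,
# so poll for it before wiring everything together.

PHYS_IF=${PHYS_IF:-em1}             # physical interface mapped to physnet1
BRIDGE=${BRIDGE:-brq-flat}          # bridge to create by hand
NET_DIR=${NET_DIR:-/sys/class/net}  # where interface names show up
DRY_RUN=${DRY_RUN:-1}               # 1 = just print the commands

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

# Wait up to 60s for the dhcp agent's tap interface to show up.
wait_for_tap() {
    i=0
    while [ "$i" -lt 60 ]; do
        tap=$(ls "$NET_DIR" 2>/dev/null | grep '^tap' | head -n 1)
        if [ -n "$tap" ]; then echo "$tap"; return 0; fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Demo against a fake sysfs directory so the sketch runs unprivileged;
# on a real network node you would drop these two lines.
NET_DIR=$(mktemp -d)
touch "$NET_DIR/tap1234"

TAP_IF=$(wait_for_tap) || { echo "no tap interface appeared" >&2; exit 1; }

run brctl addbr "$BRIDGE"
run brctl addif "$BRIDGE" "$PHYS_IF"
run brctl addif "$BRIDGE" "$TAP_IF"
run ip link set "$BRIDGE" up
```

Running it with `DRY_RUN=0` on the network node (as root, with the demo lines removed) would perform the same `brctl addbr`/`addif` sequence Daniele describes, just after waiting for the tap device rather than assuming it already exists at boot.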
Cheers, Sam On 18 Feb 2015, at 8:47 pm, Daniele Venzano wrote: Hello, I'm trying to configure a very simple Neutron setup. On the compute nodes I want a linux bridge connected to a physical interface on one side and the VMs on the other side. This I have, by using the linux bridge agent and a physnet1:em1 mapping in the config file. On the controller side I need the dhcp and metadata agents. I installed and configured them. They start, no errors in logs. I see a namespace with a ns-* interface in it for dhcp. Outside the namespace I see a tap* interface without IP address, not connected to anything. I installed the linux bridge agent also on the controller node, hoping it would create the bridge between the physnet interface and the dhcp namespace tap interface, but it just sits there and does nothing. So: I have VMs sending DHCP requests. I see the requests on the controller node, but the dhcp namespace is not connected to anything. I can provide logs and config files, but probably I just need a hint in the right direction. On the network controller: Do I need a bridge to connect the namespace to the physical interface? Should this bridge be created by me by hand, or by the linuxbridge agent? Should I run the linuxbridge agent on the network controller? I do not want/have a l3 agent. I want to have just one shared network for all tenants, very simple. Thanks, Daniele _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lleelm at outlook.com Wed Apr 8 08:42:44 2015 From: lleelm at outlook.com (LeeKies) Date: Wed, 8 Apr 2015 08:42:44 +0000 Subject: [Openstack-operators] [Neutron]floatingip with security group Message-ID: I create a VM with a default security group, then I create and associate a floating ip with this VM. But I can't connect to the floating ip. I check the security group, and I think it's the sg problem. I add a rule in the default sg, and then I can connect to the floating ip. When I create a floating ip, do I have to add a rule in the security group to allow the ip for ingress? -------------- next part -------------- An HTML attachment was scrubbed... URL: From comnea.dani at gmail.com Wed Apr 8 09:29:08 2015 From: comnea.dani at gmail.com (Daniel Comnea) Date: Wed, 8 Apr 2015 10:29:08 +0100 Subject: [Openstack-operators] Flat network with linux bridge plugin In-Reply-To: <045c01d071d2$0a406e90$1ec14bb0$@eurecom.fr> References: <0f7b01d04b5f$ef1a5100$cd4ef300$@eurecom.fr> <66A29CC3-24C1-4DCC-AE07-36E72723E62C@gmail.com> <045c01d071d2$0a406e90$1ec14bb0$@eurecom.fr> Message-ID: Which release are you using, and on which OS? On Wed, Apr 8, 2015 at 9:00 AM, Daniele Venzano wrote: > Well, I found a way to make it work. > > Yes, you need a bridge (brctl addbr ...).
> > You need to create it by hand and add the interfaces (physical and dnsmasq > namespace) to it. > > The linuxbridge agent installed on the network node does not do anything. > > > > The problem with this is that the interface for the namespace is created > after an arbitrary amount of time by one of the neutron daemons, so you > cannot simply put the bridge creation in one of the boot scripts, but you > have to wait for the interface to appear. > > > > > > *From:* Sam Morrison [mailto:sorrison at gmail.com] > *Sent:* Wednesday 08 April 2015 05:46 > *To:* Daniele Venzano > *Cc:* openstack-operators at lists.openstack.org > *Subject:* Re: [Openstack-operators] Flat network with linux bridge plugin > > > > Hi Daniele, > > > > I've started playing with neutron too and have the exact same issue. Did > you find a solution? > > > > Cheers, > > Sam > > > > > > > > On 18 Feb 2015, at 8:47 pm, Daniele Venzano > wrote: > > > > Hello, > > > > I'm trying to configure a very simple Neutron setup. > > > > On the compute nodes I want a linux bridge connected to a physical > interface on one side and the VMs on the other side. This I have, by using > the linux bridge agent and a physnet1:em1 mapping in the config file. > > > > On the controller side I need the dhcp and metadata agents. I installed > and configured them. They start, no errors in logs. I see a namespace with > a ns-* interface in it for dhcp. Outside the namespace I see a tap* > interface without IP address, not connected to anything. > > I installed the linux bridge agent also on the controller node, hoping it > would create the bridge between the physnet interface and the dhcp > namespace tap interface, but it just sits there and does nothing. > > > > So: I have VMs sending DHCP requests. I see the requests on the controller > node, but the dhcp namespace is not connected to anything. > > I can provide logs and config files, but probably I just need a hint in > the right direction.
> > On the network controller: > > Do I need a bridge to connect the namespace to the physical interface? > > Should this bridge be created by me by hand, or by the linuxbridge agent? > Should I run the linuxbridge agent on the network controller? > > > > I do not want/have a l3 agent. I want to have just one shared network for > all tenants, very simple. > > > > Thanks, > > Daniele > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From comnea.dani at gmail.com Wed Apr 8 09:32:32 2015 From: comnea.dani at gmail.com (Daniel Comnea) Date: Wed, 8 Apr 2015 10:32:32 +0100 Subject: [Openstack-operators] [Openstack-dev] resource quotas limit per stacks within a project In-Reply-To: References: Message-ID: + operators Hard to believe nobody is facing these problems; even in small shops you end up with multiple stacks part of the same tenant/ project. Thanks, Dani On Wed, Apr 1, 2015 at 8:10 PM, Daniel Comnea wrote: > Any ideas/ thoughts please? > > In the VMware world this is basically the same feature provided by the resource > pool. > > > Thanks, > Dani > > On Tue, Mar 31, 2015 at 10:37 PM, Daniel Comnea > wrote: > >> Hi all, >> >> I'm trying to understand what options I have for the below use case... >> >> Having multiple stacks (various number of instances) deployed within 1 >> Openstack project (tenant), how can I guarantee that there will be no >> race for the project resources.
>> E.g. - say I have a few stacks like >> >> stack 1 = production >> stack 2 = development >> stack 3 = integration >> >> I don't want to be in a situation where stack 3 (because of a need to run >> some heavy tests) will use all of the resources for a short while, while >> production will suffer from it. >> >> Any ideas? >> >> Thanks, >> Dani >> >> P.S - I'm aware of the heavy work being put into improving the quotas or >> the CPU pinning however that is at the project level >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniele.venzano at eurecom.fr Wed Apr 8 10:21:07 2015 From: daniele.venzano at eurecom.fr (Daniele Venzano) Date: Wed, 8 Apr 2015 12:21:07 +0200 Subject: [Openstack-operators] Flat network with linux bridge plugin In-Reply-To: References: <0f7b01d04b5f$ef1a5100$cd4ef300$@eurecom.fr> <66A29CC3-24C1-4DCC-AE07-36E72723E62C@gmail.com> <045c01d071d2$0a406e90$1ec14bb0$@eurecom.fr> Message-ID: <04c901d071e5$ba0c4480$2e24cd80$@eurecom.fr> Juno (from ubuntu cloud), on Ubuntu 14.04 From: daniel.comnea at gmail.com [mailto:daniel.comnea at gmail.com] On Behalf Of Daniel Comnea Sent: Wednesday 08 April 2015 11:29 To: Daniele Venzano Cc: Sam Morrison; openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] Flat network with linux bridge plugin Which release are you using, and on which OS? On Wed, Apr 8, 2015 at 9:00 AM, Daniele Venzano wrote: Well, I found a way to make it work. Yes, you need a bridge (brctl addbr ...). You need to create it by hand and add the interfaces (physical and dnsmasq namespace) to it. The linuxbridge agent installed on the network node does not do anything. The problem with this is that the interface for the namespace is created after an arbitrary amount of time by one of the neutron daemons, so you cannot simply put the bridge creation in one of the boot scripts, but you have to wait for the interface to appear.
From: Sam Morrison [mailto:sorrison at gmail.com] Sent: Wednesday 08 April 2015 05:46 To: Daniele Venzano Cc: openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] Flat network with linux bridge plugin Hi Daniele, I've started playing with neutron too and have the exact same issue. Did you find a solution? Cheers, Sam On 18 Feb 2015, at 8:47 pm, Daniele Venzano wrote: Hello, I'm trying to configure a very simple Neutron setup. On the compute nodes I want a linux bridge connected to a physical interface on one side and the VMs on the other side. This I have, by using the linux bridge agent and a physnet1:em1 mapping in the config file. On the controller side I need the dhcp and metadata agents. I installed and configured them. They start, no errors in logs. I see a namespace with a ns-* interface in it for dhcp. Outside the namespace I see a tap* interface without IP address, not connected to anything. I installed the linux bridge agent also on the controller node, hoping it would create the bridge between the physnet interface and the dhcp namespace tap interface, but it just sits there and does nothing. So: I have VMs sending DHCP requests. I see the requests on the controller node, but the dhcp namespace is not connected to anything. I can provide logs and config files, but probably I just need a hint in the right direction. On the network controller: Do I need a bridge to connect the namespace to the physical interface? Should this bridge be created by me by hand, or by the linuxbridge agent? Should I run the linuxbridge agent on the network controller? I do not want/have a l3 agent. I want to have just one shared network for all tenants, very simple.
Thanks,
Daniele

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdorman at godaddy.com Wed Apr 8 14:38:48 2015
From: mdorman at godaddy.com (Mike Dorman)
Date: Wed, 8 Apr 2015 14:38:48 +0000
Subject: [Openstack-operators] [Neutron]floatingip with security group
In-Reply-To: 
References: 
Message-ID: <8E897DA7-9D4B-40F6-8E04-B717BF672276@godaddy.com>

Yup, you need to configure an "address pair" for the floating IP. This
isn't specifically a security groups thing, but it will allow traffic to
the floating IP to pass into the VM to which it is associated. Under the
covers, it's implemented similarly to security groups, but is not directly
a security groups function.

From: LeeKies
Date: Wednesday, April 8, 2015 at 2:42 AM
To: OpenStack Operators
Subject: [Openstack-operators] [Neutron]floatingip with security group

I create a VM with a default security group, then I create and associate a
floating IP with this VM. But I can't connect to the floating IP. I checked
the security group, and I think it's the SG problem. I added a rule to the
default SG, and then I could connect to the floating IP. When I create a
floating IP, do I have to add a rule to the security group to allow the IP
for ingress?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From klindgren at godaddy.com Wed Apr 8 14:54:27 2015
From: klindgren at godaddy.com (Kris G. Lindgren)
Date: Wed, 8 Apr 2015 14:54:27 +0000
Subject: [Openstack-operators] [Openstack-dev] resource quotas limit per stacks within a project
In-Reply-To: 
References: 
Message-ID: 

Why wouldn't you separate your dev/test/production via tenants as well?
That's what we encourage our users to do. This would let you create
flavors that give dev/test fewer resources under exhaustion conditions and
production more resources. You could even pin dev/test to specific
hypervisors/areas of the cloud and let production have the rest via those
flavors.
____________________________________________

Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.

From: Daniel Comnea
Date: Wednesday, April 8, 2015 at 3:32 AM
To: Daniel Comnea
Cc: "OpenStack Development Mailing List (not for usage questions)", "openstack-operators at lists.openstack.org"
Subject: Re: [Openstack-operators] [Openstack-dev] resource quotas limit per stacks within a project

+ operators

Hard to believe nobody is facing this problem; even in small shops you end
up with multiple stacks part of the same tenant/project.

Thanks,
Dani

On Wed, Apr 1, 2015 at 8:10 PM, Daniel Comnea wrote:

Any ideas/thoughts please? In the VMware world this is basically the same
feature provided by the resource pool.

Thanks,
Dani

On Tue, Mar 31, 2015 at 10:37 PM, Daniel Comnea wrote:

Hi all,

I'm trying to understand what options I have for the below use case...

Having multiple stacks (various numbers of instances) deployed within 1
OpenStack project (tenant), how can I guarantee that there will be no race
for the project's resources?

E.g. - say I have a few stacks like:

stack 1 = production
stack 2 = development
stack 3 = integration

I don't want to be in a situation where stack 3 (because of a need to run
some heavy tests) will use all of the resources for a short while, while
production suffers from it.

Any ideas?
Thanks,
Dani

P.S. - I'm aware of the heavy work being put into improving the quotas or
the CPU pinning, however that is at the project level.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From klindgren at godaddy.com Wed Apr 8 15:00:52 2015
From: klindgren at godaddy.com (Kris G. Lindgren)
Date: Wed, 8 Apr 2015 15:00:52 +0000
Subject: [Openstack-operators] Flat network with linux bridge plugin
In-Reply-To: <04c901d071e5$ba0c4480$2e24cd80$@eurecom.fr>
References: <0f7b01d04b5f$ef1a5100$cd4ef300$@eurecom.fr>
	<66A29CC3-24C1-4DCC-AE07-36E72723E62C@gmail.com>
	<045c01d071d2$0a406e90$1ec14bb0$@eurecom.fr>
	<04c901d071e5$ba0c4480$2e24cd80$@eurecom.fr>
Message-ID: 

We run this exact configuration, with the exception that we are using OVS
instead of the linux bridge agent. On your "network nodes" (those running
metadata/dhcp) you need to configure them exactly like you do your compute
services from the standpoint of the L2 agent. Once we did that, when the
L2 agent starts it creates the bridges it cares about and the dhcp agent
then gets plugged into those bridges. We didn't have to specifically
create any bridges or manually plug vifs into it to get everything to
work.

I would be highly surprised if the linuxbridge agent acted any
differently, mainly because the dhcp agent consumes an IP/port on the
network, no different than a VM would. So the L2 agent should plug it for
you automatically.
____________________________________________

Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
From: Daniele Venzano > Organization: EURECOM Date: Wednesday, April 8, 2015 at 4:21 AM To: 'Daniel Comnea' > Cc: "openstack-operators at lists.openstack.org" > Subject: Re: [Openstack-operators] Flat network with linux bridge plugin Juno (from ubuntu cloud), on Ubuntu 14.04 From: daniel.comnea at gmail.com [mailto:daniel.comnea at gmail.com] On Behalf Of Daniel Comnea Sent: Wednesday 08 April 2015 11:29 To: Daniele Venzano Cc: Sam Morrison; openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] Flat network with linux bridge plugin Which release are you using it, on which OS ? On Wed, Apr 8, 2015 at 9:00 AM, Daniele Venzano > wrote: Well, I found a way to make it work. Yes, you need a bridge (brctl addbr ...). You need to create it by hand and add the interfaces (physical and dnsmasq namespace) to it. The linuxbridge agent installed on the network node does not do anything. The problem with this is that the interface for the namespace is created after an arbitrary amount of time by one of the neutron daemons, so you cannot simply put the bridge creation in one of the boot scripts, but you have to wait for the interface to appear. From: Sam Morrison [mailto:sorrison at gmail.com] Sent: Wednesday 08 April 2015 05:46 To: Daniele Venzano Cc: openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] Flat network with linux bridge plugin Hi Daniele, I?ve started playing with neutron too and have the exact same issue. Did you find a solution? Cheers, Sam On 18 Feb 2015, at 8:47 pm, Daniele Venzano > wrote: Hello, I?m trying to configure a very simple Neutron setup. On the compute nodes I want a linux bridge connected to a physical interface on one side and the VMs on the other side. This I have, by using the linux bridge agent and a physnet1:em1 mapping in the config file. On the controller side I need the dhcp and metadata agents. I installed and configured them. They start, no errors in logs. 
I see a namespace with a ns-* interface in it for dhcp. Outside the
namespace I see a tap* interface without an IP address, not connected to
anything. I installed the linux bridge agent also on the controller node,
hoping it would create the bridge between the physnet interface and the
dhcp namespace tap interface, but it just sits there and does nothing.

So: I have VMs sending DHCP requests. I see the requests on the controller
node, but the dhcp namespace is not connected to anything. I can provide
logs and config files, but probably I just need a hint in the right
direction.

On the network controller:
Do I need a bridge to connect the namespace to the physical interface?
Should this bridge be created by me by hand, or by the linuxbridge agent?
Should I run the linuxbridge agent on the network controller?

I do not want/have an l3 agent. I want to have just one shared network for
all tenants, very simple.

Thanks,
Daniele

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From klindgren at godaddy.com Wed Apr 8 15:17:22 2015
From: klindgren at godaddy.com (Kris G. Lindgren)
Date: Wed, 8 Apr 2015 15:17:22 +0000
Subject: [Openstack-operators] [Neutron]floatingip with security group
In-Reply-To: <8E897DA7-9D4B-40F6-8E04-B717BF672276@godaddy.com>
References: <8E897DA7-9D4B-40F6-8E04-B717BF672276@godaddy.com>
Message-ID: 

Mike is talking about our specific way of doing floating IPs - which is
not the default for neutron - so you do *NOT* have to add an
allowed-address pair for the floating IP to work. You will, however, have
to add security group rules to allow traffic from whatever networks are
connecting to your floating IP.

The reason for this is that the floating IP is implemented via NAT.
Traffic from, say, the internet hits the floating IP and is
destination-NAT'd to the IP of your VM. So from your VM's standpoint it
sees traffic from the internet trying to connect to it. If the security
group rules on the VM do not allow this traffic, it will be dropped.
____________________________________________

Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.

From: Michael Dorman
Date: Wednesday, April 8, 2015 at 8:38 AM
To: OpenStack Operators
Subject: Re: [Openstack-operators] [Neutron]floatingip with security group

Yup, you need to configure an "address pair" for the floating IP. This
isn't specifically a security groups thing, but it will allow traffic to
the floating IP to pass into the VM to which it is associated. Under the
covers, it's implemented similarly to security groups, but is not directly
a security groups function.

From: LeeKies
Date: Wednesday, April 8, 2015 at 2:42 AM
To: OpenStack Operators
Subject: [Openstack-operators] [Neutron]floatingip with security group

I create a VM with a default security group, then I create and associate a
floating IP with this VM. But I can't connect to the floating IP. I checked
the security group, and I think it's the SG problem. I added a rule to the
default SG, and then I could connect to the floating IP. When I create a
floating IP, do I have to add a rule to the security group to allow the IP
for ingress?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From daniele.venzano at eurecom.fr Wed Apr 8 15:47:23 2015
From: daniele.venzano at eurecom.fr (Daniele Venzano)
Date: Wed, 8 Apr 2015 17:47:23 +0200
Subject: [Openstack-operators] Flat network with linux bridge plugin
In-Reply-To: 
References: <0f7b01d04b5f$ef1a5100$cd4ef300$@eurecom.fr>
	<66A29CC3-24C1-4DCC-AE07-36E72723E62C@gmail.com>
	<045c01d071d2$0a406e90$1ec14bb0$@eurecom.fr>
	<04c901d071e5$ba0c4480$2e24cd80$@eurecom.fr>
Message-ID: <05b201d07213$4e694010$eb3bc030$@eurecom.fr>

I am 99% sure I configured the linuxbridge agent on the network node the
same way as on the compute nodes, but it was doing nothing. I did it a
while ago, though, so I could be wrong. Anyway, having the agent
constantly running just to create a bridge at boot is a bit of a waste.
The next maintenance window I will try again, just to understand.

From: Kris G. Lindgren [mailto:klindgren at godaddy.com]
Sent: Wednesday 08 April 2015 17:01
To: Daniele Venzano; 'Daniel Comnea'
Cc: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Flat network with linux bridge plugin

We run this exact configuration, with the exception that we are using OVS
instead of the linux bridge agent. On your "network nodes" (those running
metadata/dhcp) you need to configure them exactly like you do your compute
services from the standpoint of the L2 agent. Once we did that, when the
L2 agent starts it creates the bridges it cares about and the dhcp agent
then gets plugged into those bridges.
From: Daniele Venzano Organization: EURECOM Date: Wednesday, April 8, 2015 at 4:21 AM To: 'Daniel Comnea' Cc: "openstack-operators at lists.openstack.org" Subject: Re: [Openstack-operators] Flat network with linux bridge plugin Juno (from ubuntu cloud), on Ubuntu 14.04 From: daniel.comnea at gmail.com [mailto:daniel.comnea at gmail.com] On Behalf Of Daniel Comnea Sent: Wednesday 08 April 2015 11:29 To: Daniele Venzano Cc: Sam Morrison; openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] Flat network with linux bridge plugin Which release are you using it, on which OS ? On Wed, Apr 8, 2015 at 9:00 AM, Daniele Venzano wrote: Well, I found a way to make it work. Yes, you need a bridge (brctl addbr ...). You need to create it by hand and add the interfaces (physical and dnsmasq namespace) to it. The linuxbridge agent installed on the network node does not do anything. The problem with this is that the interface for the namespace is created after an arbitrary amount of time by one of the neutron daemons, so you cannot simply put the bridge creation in one of the boot scripts, but you have to wait for the interface to appear. From: Sam Morrison [mailto:sorrison at gmail.com] Sent: Wednesday 08 April 2015 05:46 To: Daniele Venzano Cc: openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] Flat network with linux bridge plugin Hi Daniele, I?ve started playing with neutron too and have the exact same issue. Did you find a solution? Cheers, Sam On 18 Feb 2015, at 8:47 pm, Daniele Venzano wrote: Hello, I?m trying to configure a very simple Neutron setup. On the compute nodes I want a linux bridge connected to a physical interface on one side and the VMs on the other side. This I have, by using the linux bridge agent and a physnet1:em1 mapping in the config file. On the controller side I need the dhcp and metadata agents. I installed and configured them. They start, no errors in logs. 
I see a namespace with a ns-* interface in it for dhcp. Outside the
namespace I see a tap* interface without an IP address, not connected to
anything. I installed the linux bridge agent also on the controller node,
hoping it would create the bridge between the physnet interface and the
dhcp namespace tap interface, but it just sits there and does nothing.

So: I have VMs sending DHCP requests. I see the requests on the controller
node, but the dhcp namespace is not connected to anything. I can provide
logs and config files, but probably I just need a hint in the right
direction.

On the network controller:
Do I need a bridge to connect the namespace to the physical interface?
Should this bridge be created by me by hand, or by the linuxbridge agent?
Should I run the linuxbridge agent on the network controller?

I do not want/have an l3 agent. I want to have just one shared network for
all tenants, very simple.

Thanks,
Daniele

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From everett.toews at RACKSPACE.COM Wed Apr 8 22:06:58 2015
From: everett.toews at RACKSPACE.COM (Everett Toews)
Date: Wed, 8 Apr 2015 22:06:58 +0000
Subject: [Openstack-operators] [openstack-dev] [all][log] Openstack HTTP error codes
In-Reply-To: 
References: 
Message-ID: 

On Jan 29, 2015, at 8:34 PM, Rochelle Grober wrote:

Hi folks!

Changed the tags a bit because this is a discussion for all projects and
dovetails with logging rationalization/standards. At the Paris summit, we
had a number of sessions on logging that kept circling back to Error
Codes.
But these codes would not be HTTP codes; rather, as others have pointed
out, codes related to the calling entities and referring entities and the
actions that happened or didn't. Format suggestions were gathered from the
operators and from some senior developers. The Logging Working Group is
planning to put forth a spec for discussion on formats and standards
before the Ops mid-cycle meetup.

Working from a Glance proposal on error codes:
https://review.openstack.org/#/c/127482/ and discussions with operators
and devs, we have a strawman to propose. We also have a number of
requirements from Ops and some Devs. Here is the basic idea:

Codes for logs would have four segments:

      Project           Vendor/Component      Error catalog no.  Criticality
Def:  [A-Z][A-Z][A-Z] - [{0-9}|{A-Z}][A-Z]  - [0000-9999]       - [0-9]
Ex.:  CIN             - NA                  - 0001              - 2
      (Cinder)          (NetApp driver)       (error no.)         (criticality)
Ex.:  GLA             - 0A                  - 0051              - 3
      (Glance)          (API)                 (error no.)         (criticality)

Three letters for the project; either a two-letter vendor code or a
digit-plus-letter code for an internal component of the project (like
API = 0A, Controller = 0C, etc.); a four-digit error number, which could
be subsetted for even finer granularity; and a criticality number.

This is for logging purposes and tracking down root cause faster for
operators, but if an error is generated, why can't the same codes be used
internally in the code as externally in the logs? This also allows a
unique message to be associated with the error code that is more
descriptive and that can be pre-translated.

Again, for logging purposes, the error code would not be part of the
message payload, but part of the headers. Referrer IDs and other info
would still be expected in the payload of the message and could include
instance ids/names, NICs or VIFs, etc. The message header handling is code
in Oslo.log and, when using the Oslo.log library, will be easy to use.
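As an illustration only (the spec was still a strawman at this point, so the helper names and validation here are assumptions), the proposed four-segment code format could be composed and parsed like this:

```python
import re

# Strawman log error-code format from the proposal:
#   PROJECT-VENDOR/COMPONENT-CATALOG-CRITICALITY, e.g. "CIN-NA-0001-2"
CODE_RE = re.compile(r"^([A-Z]{3})-([0-9A-Z][A-Z])-(\d{4})-(\d)$")

def make_code(project, component, catalog_no, criticality):
    """Build a code like CIN-NA-0001-2 (hypothetical helper)."""
    if not (0 <= catalog_no <= 9999 and 0 <= criticality <= 9):
        raise ValueError("catalog number or criticality out of range")
    return "%s-%s-%04d-%d" % (project.upper(), component.upper(),
                              catalog_no, criticality)

def parse_code(code):
    """Split a code back into its four segments, or raise ValueError."""
    m = CODE_RE.match(code)
    if m is None:
        raise ValueError("not a valid error code: %r" % code)
    project, component, catalog, crit = m.groups()
    return project, component, int(catalog), int(crit)
```

Having a single canonical format/parse pair is what would let the same code appear in log headers, API error bodies, and a pre-translated message catalog without drift.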
Since this discussion came up, I thought I needed to get this info out to
folks and advertise that anyone will be able to comment on the spec to
drive it to agreement. I will be advertising it here and on the Ops and
Product-WG mailing lists. I'd also like to invite anyone who wants to
participate in the discussions to join them. We'll be starting a bi-weekly
or weekly IRC meeting (also announced in the stated MLs) in February. And
please realize that, other than Oslo.log, the changes to make the errors
more usable will be almost entirely community-created standards with
community-created tools to help enforce them. None of which exist yet,
FYI.

Hi Rocky,

The API WG is trying to come up with a guideline for an error format for
the HTTP APIs [1]. In that error format is a code field that I was hoping
could match the code in the logs you mention above.

I noticed in the Logging WG meetings [2] that you mention an "Error Code
Spec". I'd like to be able to use info from that spec in the example [3]
of the error format. Has there been any progress on that spec? Can you
link me to it?

Also, if you have time for a review of the error format, I'd like to hear
your thoughts.

Thanks,
Everett

[1] https://review.openstack.org/#/c/167793/
[2] https://wiki.openstack.org/wiki/Meetings/log-wg
[3] https://review.openstack.org/#/c/167793/4/guidelines/errors-example.json,unified

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From sorrison at gmail.com Wed Apr 8 23:14:09 2015 From: sorrison at gmail.com (Sam Morrison) Date: Thu, 9 Apr 2015 09:14:09 +1000 Subject: [Openstack-operators] Flat network with linux bridge plugin In-Reply-To: <05b201d07213$4e694010$eb3bc030$@eurecom.fr> References: <0f7b01d04b5f$ef1a5100$cd4ef300$@eurecom.fr> <66A29CC3-24C1-4DCC-AE07-36E72723E62C@gmail.com> <045c01d071d2$0a406e90$1ec14bb0$@eurecom.fr> <04c901d071e5$ba0c4480$2e24cd80$@eurecom.fr> <05b201d07213$4e694010$eb3bc030$@eurecom.fr> Message-ID: <1B562E5E-66FA-4D05-877F-47F1B1218FEB@gmail.com> I reset neutron (cleared DB etc.) and rebooted the network node and it worked fine the second time around. I think the first time something went wrong talking to the DB initially. I fixed up the config but then it was impossible for it to fix itself. eg. restarting the neutron agents did nothing. Sam > On 9 Apr 2015, at 1:47 am, Daniele Venzano wrote: > > I am 99% sure I configured the linuxbridge agent on the network node the same way as on the compute nodes, but it was doing nothing. > But I did it a while ago, so I could be wrong. Anyway having the agent constantly running just to create a bridge at boot is bit of a waste. > The next maintenance window I will try again, just to understand. > > > From: Kris G. Lindgren [mailto:klindgren at godaddy.com] > Sent: Wednesday 08 April 2015 17:01 > To: Daniele Venzano; 'Daniel Comnea' > Cc: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] Flat network with linux bridge plugin > > We run this exact configuration with the exception that we are using OVS instead of linux bridge agent. On your "Network nodes" (those running metadata/dhcp) you need to configure them exactly like you do you compute services from the standpoint of the L2 agent. Once we did that when the l2 agent starts it creates the bridges it cares about and the dhcp agent then gets plugged into those bridges. 
We didn't have to specifically create any bridges or manually plug vifs into it to get everything to work. > > I would be highly surprised if the linuxbridge agent acted any differently. Mainly because the dhcp agent consumes an IP/port on the network, no different than a vm would. So the L2 agent should plug it for you automatically. > ____________________________________________ > > Kris Lindgren > Senior Linux Systems Engineer > GoDaddy, LLC. > > From: Daniele Venzano > > Organization: EURECOM > Date: Wednesday, April 8, 2015 at 4:21 AM > To: 'Daniel Comnea' > > Cc: "openstack-operators at lists.openstack.org " > > Subject: Re: [Openstack-operators] Flat network with linux bridge plugin > > Juno (from ubuntu cloud), on Ubuntu 14.04 > > From: daniel.comnea at gmail.com [mailto:daniel.comnea at gmail.com ] On Behalf Of Daniel Comnea > Sent: Wednesday 08 April 2015 11:29 > To: Daniele Venzano > Cc: Sam Morrison; openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] Flat network with linux bridge plugin > > Which release are you using it, on which OS ? > > > On Wed, Apr 8, 2015 at 9:00 AM, Daniele Venzano > wrote: > Well, I found a way to make it work. > Yes, you need a bridge (brctl addbr ...). > You need to create it by hand and add the interfaces (physical and dnsmasq namespace) to it. > The linuxbridge agent installed on the network node does not do anything. > > The problem with this is that the interface for the namespace is created after an arbitrary amount of time by one of the neutron daemons, so you cannot simply put the bridge creation in one of the boot scripts, but you have to wait for the interface to appear. 
> > > From: Sam Morrison [mailto:sorrison at gmail.com ] > Sent: Wednesday 08 April 2015 05:46 > To: Daniele Venzano > Cc: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] Flat network with linux bridge plugin > > Hi Daniele, > > I?ve started playing with neutron too and have the exact same issue. Did you find a solution? > > Cheers, > Sam > > > >> On 18 Feb 2015, at 8:47 pm, Daniele Venzano > wrote: >> >> Hello, >> >> I?m trying to configure a very simple Neutron setup. >> >> On the compute nodes I want a linux bridge connected to a physical interface on one side and the VMs on the other side. This I have, by using the linux bridge agent and a physnet1:em1 mapping in the config file. >> >> On the controller side I need the dhcp and metadata agents. I installed and configured them. They start, no errors in logs. I see a namespace with a ns-* interface in it for dhcp. Outside the namespace I see a tap* interface without IP address, not connected to anything. >> I installed the linux bridge agent also on the controller node, hoping it would create the bridge between the physnet interface and the dhcp namespace tap interface, but it just sits there and does nothing. >> >> So: I have VMs sending DHCP requests. I see the requests on the controller node, but the dhcp namespace is not connected to anything. >> I can provide logs and config files, but probably I just need a hint in the right direction. >> >> On the network controller: >> Do I need a bridge to connect the namespace to the physical interface? >> Should this bridge be created by me by hand, or by the linuxbridge agent? Should I run the linuxbridge agent on the network controller? >> >> I do not want/have a l3 agent. I want to have just one shared network for all tenants, very simple. 
>> >> Thanks, >> Daniele >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfa at zumbi.com.ar Thu Apr 9 06:03:02 2015 From: gfa at zumbi.com.ar (gustavo panizzo (gfa)) Date: Thu, 09 Apr 2015 14:03:02 +0800 Subject: [Openstack-operators] vm live migrated ended on a different az Message-ID: <55261616.7030400@zumbi.com.ar> hello, last night our ops live migrated (nova live-migration --block-migrate $vm) a group of vm to do hw maintenance. some of the vm ended on a different AZ making the vm unusable (we have different upstream network connectivity on each AZ) I haven't read all the logs yet, but i remember when i created the zones i did test live migrations and it always remain in the same AZ. I googled and nothing appears, i wonder if somebody else had this problem or is not even supposed to work and i got lucky until today? 
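For context on the behavior described above: the scheduler's AvailabilityZoneFilter only constrains placements that actually pass through the scheduler's filters, so if a migration path picks a destination host without consulting them, the AZ constraint is never applied. A rough, hypothetical sketch of the filter's logic (not nova's actual code; host/AZ data shapes are assumptions):

```python
# Simplified, hypothetical sketch of nova's AvailabilityZoneFilter logic:
# keep only candidate hosts whose AZ (from host-aggregate metadata, or the
# default) matches the AZ requested for / recorded on the instance.

DEFAULT_AZ = "nova"  # nova's default zone for hosts not in any AZ aggregate

def hosts_passing_az_filter(candidate_hosts, requested_az):
    """candidate_hosts: list of dicts like {"name": ..., "az": ...}."""
    return [h["name"] for h in candidate_hosts
            if h.get("az", DEFAULT_AZ) == requested_az]
```

If a live-migration target is chosen by hand, or by a code path that skips this filtering step, nothing enforces the check, which would match the symptom reported here.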
Of course, I have set up the AZ filter:

scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=RetryFilter,AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ImagePropertiesFilter

-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

From amuller at redhat.com Thu Apr 9 14:52:19 2015
From: amuller at redhat.com (Assaf Muller)
Date: Thu, 9 Apr 2015 10:52:19 -0400 (EDT)
Subject: [Openstack-operators] [Neutron] The specs process, effective operators feedback and product management
In-Reply-To: <1524636922.17358007.1428588368861.JavaMail.zimbra@redhat.com>
Message-ID: <639534235.17402447.1428591139863.JavaMail.zimbra@redhat.com>

The Neutron specs process was introduced during the Juno cycle. At the
time it was mostly a bureaucratic bottleneck (the ability to say no) to
ease the pain of cores and manage workloads throughout a cycle. Perhaps
this is a somewhat naive outlook, but I see other positives, such as more
upfront design (some is better than none), less high-level talk during the
implementation review process and more focus on the details, and 'free'
documentation for every major change to the project (some would say this
is kind of a big deal; what better way to write documentation than to
force the developers to do it in order for their features to get merged).

That being said, you can only get a feature merged if you propose a spec,
and the only people largely proposing specs are developers. This ingrains
the open source culture of developer-focused evolution that, while
empowering and great for developers, is bad for product managers and users
(who are sometimes under-represented, as is the case I'm trying to make)
and generally causes a lack of a cohesive vision. Like it or not, the
specs process and the drivers team approval process form a sort of product
management, deciding what features will ultimately go into Neutron and in
what time frame.
We shouldn't ignore the fact that we clearly have people and product
managers pulling the strings in the background, often deciding where
developers will spend their time and what specs to propose, for the
purpose of this discussion. I argue that managers often don't have the
tools to understand what is important to the project, only to their own
customers. The Neutron drivers team, on the other hand, don't have a clear
incentive (or, I suspect, the will) to spend enormous amounts of time
doing 'product management', as being a driver is essentially your third or
fourth job by this point, and the drivers are the same people solving gate
issues, merging code, triaging bugs and so on.

I'd like to avoid going into a discussion of what's wrong with the current
specs process, as I'm sure people have heard me complain about this in
#openstack-neutron plenty of times before. Instead, I'd like to suggest a
system that would perhaps get us to implement specs that are currently not
being proposed, and give an additional form of input that would make sure
that the development community is spending its time in the right places.

While 'super users' have been given more exposure, and operators summits
give operators an additional tool to provide feedback, from a developer's
point of view the input is non-empiric and scattered. I also have a hunch
that operators still feel their voice is not being heard. I propose an
upvote/downvote system (think Reddit), where everyone (operators
especially) would upload paragraph-long explanations of what they think is
missing in Neutron. The proposals have to be actionable (so 'Neutron
sucks', while of great humorous value, isn't something I can do anything
about), and I suspect the downvote system will help self-regulate that
anyway. The proposals are not specs, but are like product RFEs; for
example, there would not be a 'testing' section, as these proposals will
not replace the specs process anyway but augment it as an additional form
of input.
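As a toy illustration of the ranking being proposed (no such system existed at the time; the data model and field names here are entirely assumptions):

```python
# Toy sketch of a Reddit-style ranking for operator proposals
# (hypothetical: field names and scoring are assumptions, not a real system).

def rank_proposals(proposals):
    """Return proposals ordered by net score (upvotes minus downvotes)."""
    return sorted(proposals,
                  key=lambda p: p["ups"] - p["downs"],
                  reverse=True)
```

The point of such a scheme is only ordering, not decision-making: the top entry is a signal about where operators feel pain, not a commitment to implement it.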
Proposals can range from new features (Role based access control for Neutron resources, dynamic routing, Neutron availability zones, QoS, ...) to quality of life improvements (Missing logs, too many DEBUG level logs, poor trouble shooting areas with an explanation of what could be improved, ...) to long standing bugs, Nova network parity issues, and whatever else may be irking the operators community. The proposals would have to be moderated (Closing duplicates, low quality submissions and implemented proposals for example) and if that is a concern then I volunteer to do so. This system will also give drivers a 'way out': The last cycle we spent time refactoring this and that, and developers love doing that so it's easy to get behind. I think that as in the next cycles we move back to features, friction will rise and the process will reveal its flaws. Something to consider: Maybe the top proposal takes a day to implement. Maybe some low priority bug is actually the second highest proposal. Maybe all of the currently marked 'critical' bugs don't even appear on the list. Maybe we aren't spending our time where we should be. And now a word from our legal team: In order for this to be viable, the system would have to be a *non binding*, *additional* form of input. The top proposal *could* be declined for the same reasons that specs are currently being declined. It would not replace any of our current systems or processes. 
Assaf Muller, Cloud Networking Engineer Red Hat From mestery at mestery.com Thu Apr 9 15:04:52 2015 From: mestery at mestery.com (Kyle Mestery) Date: Thu, 9 Apr 2015 10:04:52 -0500 Subject: [Openstack-operators] [Neutron] The specs process, effective operators feedback and product management In-Reply-To: <639534235.17402447.1428591139863.JavaMail.zimbra@redhat.com> References: <1524636922.17358007.1428588368861.JavaMail.zimbra@redhat.com> <639534235.17402447.1428591139863.JavaMail.zimbra@redhat.com> Message-ID: On Thu, Apr 9, 2015 at 9:52 AM, Assaf Muller wrote: > The Neutron specs process was introduced during the Juno timecycle. At the > time it > was mostly a bureaucratic bottleneck (The ability to say no) to ease the > pain of cores > and manage workloads throughout a cycle. Perhaps this is a somewhat naive > outlook, > but I see other positives, such as more upfront design (Some is better > than none), > less high level talk during the implementation review process and more > focus on the details, > and 'free' documentation for every major change to the project (Some would > say this > is kind of a big deal; What better way to write documentation than to > force the developers > to do it in order for their features to get merged). > > Right. Keep in mind that for Liberty we're making changes to this process. For instance, I've already indicated specs which were approved for Kilo but failed were moved to kilo-backlog. To get them into Liberty, you just propose a patch which moves the patch in the liberty directory. We already have a bunch that have taken this path. I hope we can merge the patches for these specs in Liberty-1. > That being said, you can only get a feature merged if you propose a spec, > and the only > people largely proposing specs are developers. 
This ingrains the open > source culture of > developer-focused evolution, that, while empowering and great for > developers, is bad > for product managers, users (That are sometimes under-represented, as is the > case I'm trying > to make) and generally causes a lack of a cohesive vision. Like it or not, > the specs process > and the driver's team approval process form a sort of product management, > deciding what > features will ultimately go into Neutron and in what time frame. > > We haven't done anything to limit reviews of specs by these other users, and in fact, I would love for more users to review these specs. > We shouldn't ignore the fact that we clearly have people and product > managers pulling the strings > in the background, often deciding where developers will spend their time > and what specs to propose, > for the purpose of this discussion. I argue that managers often don't have > the tools to understand > what is important to the project, only to their own customers. The Neutron > drivers team, on the other hand, > don't have a clear incentive (Or I suspect the will) to spend enormous > amounts of time doing 'product management', > as being a driver is essentially your third or fourth job by this point, > and are the same people > solving gate issues, merging code, triaging bugs and so on. I'd like to > avoid going into a discussion of what's > wrong with the current specs process as I'm sure people have heard me > complain about this in > #openstack-neutron plenty of times before. Instead, I'd like to suggest a > system that would perhaps > get us to implement specs that are currently not being proposed, and give > an additional form of > input that would make sure that the development community is spending its > time in the right places. > > While these are valid points, the fact that a spec merges isn't an indication that the code will merge. We have plenty of examples of that in the past two releases. 
Thus, there are issues beyond the specs process which may prevent your code from merging for an approved spec. That said, I admire your guile in proposing some changes. :) > While 'super users' have been given more exposure, and operators summits > give operators > an additional tool to provide feedback, from a developer's point of view, > the input is > non-empiric and scattered. I also have a hunch that operators still feel > their voice is not being heard. > > Agreed. > I propose an upvote/downvote system (Think Reddit), where everyone > (Operators especially) would upload > paragraph long explanations of what they think is missing in Neutron. The > proposals have to be actionable > (So 'Neutron sucks', while of great humorous value, isn't something I can > do anything about), > and I suspect the downvote system will help self-regulate that anyway. The > proposals are not specs, but are > like product RFEs, so for example there would not be a 'testing' section, > as these proposals will not > replace the specs process anyway but augment it as an additional form of > input. Proposals can range > from new features (Role based access control for Neutron resources, > dynamic routing, > Neutron availability zones, QoS, ...) to quality of life improvements > (Missing logs, too many > DEBUG level logs, poor trouble shooting areas with an explanation of what > could be improved, ...) > to long standing bugs, Nova network parity issues, and whatever else may > be irking the operators community. > The proposals would have to be moderated (Closing duplicates, low quality > submissions and implemented proposals > for example) and if that is a concern then I volunteer to do so. > > Anytime you introduce a voting system you provide incentive to game the system. I am not in favor of a voting system for anything involving specs. If people think things are important, they should be reviewing specs and collaborating to write specs. 
There are examples of people who have written specs and not done the work. Perhaps what we really need is for people to write specs with no assignee initially. Then we could have people looking for things to work on (there are many, I've been approached by many in the last months) to take those specs up. > This system will also give drivers a 'way out': The last cycle we spent > time refactoring this and that, > and developers love doing that so it's easy to get behind. I think that as > in the next cycles we move back to features, > friction will rise and the process will reveal its flaws. > > Something to consider: Maybe the top proposal takes a day to implement. > Maybe some low priority bug is actually > the second highest proposal. Maybe all of the currently marked 'critical' > bugs don't even appear on the list. > Maybe we aren't spending our time where we should be. > > And now a word from our legal team: In order for this to be viable, the > system would have to be a > *non binding*, *additional* form of input. The top proposal *could* be > declined for the same reasons > that specs are currently being declined. It would not replace any of our > current systems or processes. > > I like the intent here, but I'm not sure we need an additional layer of input. What about the current specs process and bugs in LP isn't working that this will address specifically? It seems to me like you're saying people don't know how to use these, and this is another avenue for those people to suggest input into the project. I'm pondering the implications of that now. Thanks, Kyle > Assaf Muller, Cloud Networking Engineer > Red Hat > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From comnea.dani at gmail.com Thu Apr 9 19:09:35 2015 From: comnea.dani at gmail.com (Daniel Comnea) Date: Thu, 9 Apr 2015 20:09:35 +0100 Subject: [Openstack-operators] [Openstack-dev] resource quotas limit per stacks within a project In-Reply-To: References: Message-ID: Thanks for your reply Kris. I'd love to, but we're forced into this by the deployment of an in-house app we built (in the same space as Murano, to offer a service catalogue for various services). It must be a different path to cross the bridge given the circumstances. Dani On Wed, Apr 8, 2015 at 3:54 PM, Kris G. Lindgren wrote: > Why wouldn't you separate your dev/test/production via tenants as well? > That's what we encourage our users to do. This would let you create > flavors that give dev/test less resources under exhaustion conditions and > production more resources. You could even pin dev/test to specific > hypervisors/areas of the cloud and let production have the rest via those > flavors. > ____________________________________________ > > Kris Lindgren > Senior Linux Systems Engineer > GoDaddy, LLC. > > From: Daniel Comnea > Date: Wednesday, April 8, 2015 at 3:32 AM > To: Daniel Comnea > Cc: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org>, " > openstack-operators at lists.openstack.org" < > openstack-operators at lists.openstack.org> > Subject: Re: [Openstack-operators] [Openstack-dev] resource quotas limit > per stacks within a project > > + operators > > Hard to believe nobody is facing this problem; even in small shops you > end up with multiple stacks part of the same tenant/project. > > Thanks, > Dani > > On Wed, Apr 1, 2015 at 8:10 PM, Daniel Comnea > wrote: > >> Any ideas/ thoughts please? >> >> In the VMware world this is basically the same feature provided by the resource >> pool. 
>> >> >> Thanks, >> Dani >> >> On Tue, Mar 31, 2015 at 10:37 PM, Daniel Comnea >> wrote: >> >>> Hi all, >>> >>> I'm trying to understand what options i have for the below use case... >>> >>> Having multiple stacks (various number of instances) deployed within 1 >>> Openstack project (tenant), how can i guarantee that there will be no >>> race after the project resources. >>> >>> E.g - say i have few stacks like >>> >>> stack 1 = production >>> stack 2 = development >>> stack 3 = integration >>> >>> i don't want to be in a situation where stack 3 (because of a need to >>> run some heavy tests) will use all of the resources for a short while while >>> production will suffer from it. >>> >>> Any ideas? >>> >>> Thanks, >>> Dani >>> >>> P.S - i'm aware of the heavy work being put into improving the quotas >>> or the CPU pinning however that is at the project level >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gustavo.randich at gmail.com Thu Apr 9 21:52:24 2015 From: gustavo.randich at gmail.com (Gustavo Randich) Date: Thu, 9 Apr 2015 18:52:24 -0300 Subject: [Openstack-operators] [Neutron][Nova] No Valid Host when booting new VM with Public IP In-Reply-To: <5509E284.2060102@gmail.com> References: <5509E284.2060102@gmail.com> Message-ID: Hi everybody, I'm trying to setup exactly this (instance attached directly to ext-net), using VLAN external networks. Everything seems to work OK: the VM sees the DHCP server in the network node, routes get configured inside the VM, but when it sends ARP asking for the MAC address of the external default gateway, it never gets the answer, because the arp-reply frame is dropped by br-vlan in the compute node (ovs for external network). The cause of the drop is that it is not tagged with any VLAN ID. So, my question is: - Should the external facing OVS (br-ex or br-vlan) have as a port one which receives tagged frames from the external network (i.e. eth0) or untagged frames (i.e. 
vlanXX, eth0.XX)? Thanks! On Wed, Mar 18, 2015 at 5:39 PM, George Shuklin wrote: > We have that configuration and it works fine. Even better than L3 NAT on > neutron routers. > > Tenant's VM works perfect with external networks and white IPs, but you > should make external network available on each compute node (ml2_conf.ini). > > > On 03/18/2015 07:29 PM, Adam Lawson wrote: > > What I'm trying to do is force OpenStack to do something it normally > doesn't do for the sake of learning and experimentation. I.e. bind a public > network to a VM so it can be accessed outside the cloud when floating IP's > are normally required. I know there are namespace issues at play which may > prevent this from working, just trying to scope the boundaries of what I > can and cannot do really. > > > * Adam Lawson* > > AQORN, Inc. > 427 North Tatnall Street > Ste. 58461 > Wilmington, Delaware 19801-2230 > Toll-free: (844) 4-AQORN-NOW ext. 101 > International: +1 302-387-4660 > Direct: +1 916-246-2072 > > > On Wed, Mar 18, 2015 at 7:08 AM, Pedro Sousa wrote: > >> Hi Adam >> >> For external network you should use floating ips to access externally to >> your instances if I understood correctly. >> >> Regards >> Em 16/03/2015 20:56, "Adam Lawson" escreveu: >> >>> Got a strange error and I'm really hoping to get some help with it >>> since it has be scratching my head. >>> >>> When I create a VM within Horizon and select the PRIVATE network, it >>> boots up great. >>> When I attempt to create a VM within Horizon and include the PUBLIC >>> network (either by itself or with the private network), it fails with a "No >>> valid host found" error. >>> >>> I looked at the nova-api and the nova-scheduler logs on the controller >>> and the most I've found are errors/warnings binding VIF's but I'm not 100% >>> certain it's the root cause although I believe it's related. >>> >>> I didn't find any WARNINGS or ERRORS in the compute or network node. 
>>> >>> Setup: >>> >>> - 1 physical host running 4 KVM domains/guests >>> - 1x Controller >>> - 1x Networ >>> - 1x Volume >>> - 1x Compute >>> >>> >>> *Controller Node:* >>> nova.conf (http://pastebin.com/q3e9cntH) >>> >>> - neutron.conf (http://pastebin.com/ukEVzBbN) >>> - ml2_conf.ini (http://pastebin.com/w10jBGZC) >>> - nova-api.log (http://pastebin.com/My99Mg2z) >>> - nova-scheduler (http://pastebin.com/Nb75Z6yH) >>> - neutron-server.log (http://pastebin.com/EQVQPVDF) >>> >>> >>> *Network Node:* >>> >>> - l3_agent.ini (http://pastebin.com/DBaD1F5x) >>> - neutron.conf (http://pastebin.com/Bb3qkNi7) >>> - ml2_conf.ini (http://pastebin.com/xEC1Bs9L) >>> >>> >>> *Compute Node:* >>> >>> - nova.conf (http://pastebin.com/K6SiE9Pw) >>> - nova-compute.conf (http://pastebin.com/9Mz30b4v) >>> - neutron.conf (http://pastebin.com/Le4wYRr4) >>> - ml2_conf.ini (http://pastebin.com/nnyhC8mV) >>> >>> >>> *Back-end:* >>> Physical switch >>> >>> Any thoughts on what could be causing this? >>> >>> * Adam Lawson* >>> >>> AQORN, Inc. >>> 427 North Tatnall Street >>> Ste. 58461 >>> Wilmington, Delaware 19801-2230 >>> Toll-free: (844) 4-AQORN-NOW ext. 101 >>> International: +1 302-387-4660 <%2B1%20302-387-4660> >>> Direct: +1 916-246-2072 <%2B1%20916-246-2072> >>> >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >>> > > > _______________________________________________ > OpenStack-operators mailing listOpenStack-operators at lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rochelle.grober at huawei.com Thu Apr 9 23:09:45 2015 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Thu, 9 Apr 2015 23:09:45 +0000 Subject: [Openstack-operators] [openstack-dev] [Neutron] The specs process, effective operators feedback and product management In-Reply-To: References: <1524636922.17358007.1428588368861.JavaMail.zimbra@redhat.com> <639534235.17402447.1428591139863.JavaMail.zimbra@redhat.com> Message-ID: Adding the product working group mailing list to this thread [1] I'd like to introduce the Neutron Drivers (and others) to the Product Working Group. We are a bunch of 'influencers' who are focused on adding value through some broader outlooks in the OpenStack community. And yes, many of us are or have been Product/Program/Project managers. We've just started (first meeting was at the Paris summit) and have been brainstorming on ways our skillsets could be applied to the OpenStack Environment. A number of us have stepped up to provide some drive for cross project efforts and all of us have been reaching out to work with PTLs and developers and Ops and Users on how we can lighten their loads. One way we are looking at is to provide some tracking of development efforts and plans, along with providing Use Cases to projects that will help developers figure out how to address User and Operator functionality or other gaps. This discussion is perfect for our mailing list, and IRC scheduled chat and a cross project session at the summit (at a minimum). I'll leave others in the Product WG to respond in line (even though I have lots of thoughts on this, too), but let's try to get a working session or two, or even a lunch session on this topic with all interested parties at the Summit. And yes, I think that more, different patches in the specs area could allow for mapping use cases and other user requests to individual specs and/or blueprints and/or bug reports. It's a matter of communicating both what is needed and how and where to do it. 
--Rocky [1] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061071.html From: Salvatore Orlando [mailto:sorlando at nicira.com] Sent: Thursday, April 09, 2015 09:19 To: OpenStack Development Mailing List (not for usage questions) Cc: OpenStack Operators Subject: Re: [openstack-dev] [Openstack-operators] [Neutron] The specs process, effective operators feedback and product management On 9 April 2015 at 17:04, Kyle Mestery > wrote: On Thu, Apr 9, 2015 at 9:52 AM, Assaf Muller > wrote: The Neutron specs process was introduced during the Juno cycle. At the time it was mostly a bureaucratic bottleneck (The ability to say no) to ease the pain of cores and manage workloads throughout a cycle. Perhaps this is a somewhat naive outlook, but I see other positives, such as more upfront design (Some is better than none), less high-level talk during the implementation review process and more focus on the details, and 'free' documentation for every major change to the project (Some would say this is kind of a big deal; What better way to write documentation than to force the developers to do it in order for their features to get merged). Right. Keep in mind that for Liberty we're making changes to this process. For instance, I've already indicated that specs which were approved for Kilo but did not land were moved to kilo-backlog. To get them into Liberty, you just propose a patch which moves the spec into the liberty directory. We already have a bunch that have taken this path. I hope we can merge the patches for these specs in Liberty-1. It was never meant to be a bureaucratic bottleneck, although the ability to move blueprints that did not fit in the scope of the current release (or in the scope of the project altogether) out early in the process was a goal. However, it became a bureaucratic step - it has surely been perceived as that. Fast-tracking blueprints which were already approved makes sense. 
I believe the process should be made even slimmer, removing the deadlines for spec proposal and approval, and making the approval process simpler - with reviewers being a lot less pedantic on one side, and proposers not expecting approval of a spec to be a binding contract on the other side. That being said, you can only get a feature merged if you propose a spec, and the only people largely proposing specs are developers. This ingrains the open source culture of developer-focused evolution, that, while empowering and great for developers, is bad for product managers, users (That are sometimes under-represented, as is the case I'm trying to make) and generally causes a lack of a cohesive vision. Like it or not, the specs process and the driver's team approval process form a sort of product management, deciding what features will ultimately go into Neutron and in what time frame. We haven't done anything to limit reviews of specs by these other users, and in fact, I would love for more users to review these specs. I think your analysis is correct. Neutron is a developer-led community, and that's why the "drivers" acting also as "product managers" approve specifications. I don't want to discuss here the merits of the drivers team - that probably deserves another discussion thread - but as Kyle says no-one has been discouraged from reviewing specs and influencing the decision process. The neutron-drivers meetings were very open in my opinion. However, if this meant - as you say - that users, operators, and product managers (yes, them too ;) ) were left out of this process, I'm happy to hear proposals to improve it. We shouldn't ignore the fact that we clearly have people and product managers pulling the strings in the background, often deciding where developers will spend their time and what specs to propose, for the purpose of this discussion. I argue that managers often don't have the tools to understand what is important to the project, only to their own customers. 
The Neutron drivers team, on the other hand, don't have a clear incentive (Or I suspect the will) to spend enormous amounts of time doing 'product management', as being a driver is essentially your third or fourth job by this point, and are the same people solving gate issues, merging code, triaging bugs and so on. I'd like to avoid going into a discussion of what's wrong with the current specs process as I'm sure people have heard me complain about this in #openstack-neutron plenty of times before. Yes I have heard you complaining. Ideally I would borrow concepts from anarchism to define an ideal way in which various contributors should take over the different responsibilities. However, I am afraid this would quickly translate into a sort of extreme neo-liberalism which will probably lead the project to self-destruction. But I'm all up for a change in the process since what we have now is drifting towards Soviet-style bureaucracy. Jokes apart, I think you are right: the process as it is just adds responsibilities to a subset of people who are already busy with other duties, increasing frustration in people who depend on them (being one of these people I am fully aware of that!) Instead, I'd like to suggest a system that would perhaps get us to implement specs that are currently not being proposed, and give an additional form of input that would make sure that the development community is spending its time in the right places. While these are valid points, the fact that a spec merges isn't an indication that the code will merge. We have plenty of examples of that in the past two releases. Thus, there are issues beyond the specs process which may prevent your code from merging for an approved spec. That said, I admire your guile in proposing some changes. :) While 'super users' have been given more exposure, and operators summits give operators an additional tool to provide feedback, from a developer's point of view, the input is non-empiric and scattered. 
I also have a hunch that operators still feel their voice is not being heard. Agreed. I propose an upvote/downvote system (Think Reddit), where everyone (Operators especially) would upload paragraph-long explanations of what they think is missing in Neutron. The proposals have to be actionable (So 'Neutron sucks', while of great humorous value, isn't something I can do anything about), and I suspect the downvote system will help self-regulate that anyway. The proposals are not specs, but are like product RFEs, so for example there would not be a 'testing' section, as these proposals will not replace the specs process anyway but augment it as an additional form of input. I personally do not believe in the viability of a system which gives priority to features based on "popular acclamation", mostly for the reasons Kyle explains above. Also, "additional form of input" leaves an uncomfortable grey area where whoever is authorized to approve specs will need to strike the right balance between what they believe are the right engineering priorities and what are the priorities set by the community through the system you propose. Also, rather than having a different system, we might just sum the score associated with current Neutron specs. Proposals can range from new features (Role-based access control for Neutron resources, dynamic routing, Neutron availability zones, QoS, ...) to quality-of-life improvements (Missing logs, too many DEBUG-level logs, poor troubleshooting areas with an explanation of what could be improved, ...) to long-standing bugs, Nova network parity issues, and whatever else may be irking the operators community. The proposals would have to be moderated (Closing duplicates, low-quality submissions and implemented proposals, for example) and if that is a concern then I volunteer to do so. I think that the answer to a process issue is never more process. But this is just my personal opinion! 
Anytime you introduce a voting system you provide incentive to game the system. I am not in favor of a voting system for anything involving specs. If people think things are important, they should be reviewing specs and collaborating to write specs. There are examples of people who have written specs and not done the work. Do you mean the plugin perestroika thing? I'll never write a spec I don't intend to implement myself again!!! Perhaps what we really need is for people to write specs with no assignee initially. Then we could have people looking for things to work on (there are many, I've been approached by many in the last months) to take those specs up. This system will also give drivers a 'way out': The last cycle we spent time refactoring this and that, and developers love doing that so it's easy to get behind. I think that as in the next cycles we move back to features, friction will rise and the process will reveal its flaws. I think the flaws are already quite clear, or do you reckon that there is something worse looming? Something to consider: Maybe the top proposal takes a day to implement. Maybe some low priority bug is actually the second highest proposal. Maybe all of the currently marked 'critical' bugs don't even appear on the list. Maybe we aren't spending our time where we should be. Those are all valid concerns. As a member of the drivers team I've personally asked myself these questions. Obviously this does not imply I had the right answers! And now a word from our legal team: In order for this to be viable, the system would have to be a *non binding*, *additional* form of input. The top proposal *could* be declined for the same reasons that specs are currently being declined. It would not replace any of our current systems or processes. I like the intent here, but I'm not sure we need an additional layer of input. What about the current specs process and bugs in LP isn't working that this will address specifically? 
It seems to me like you're saying people don't know how to use these, and this is another avenue for those people to suggest input into the project. I'm pondering the implications of that now. Since it seems there is a 1:1 mapping between specs and "proposals", your idea might be implemented in Gerrit. Rather than submitting a proposal, submit a spec, and then have people give it a +1 or -1. Collect scores, and publish them on a web page. Even if I do not think it is the right thing to have it as a part of the decision process, it is probably a useful thing to have, and requires low maintenance. For the specs process, I would rather consider how to delegate decision-making around specs - for instance involving SMEs and cross-product team members - and how to make this process as smooth as possible. Thanks, Kyle Assaf Muller, Cloud Networking Engineer Red Hat _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From blak111 at gmail.com Fri Apr 10 16:34:08 2015 From: blak111 at gmail.com (Kevin Benton) Date: Fri, 10 Apr 2015 09:34:08 -0700 Subject: [Openstack-operators] [Neutron] The specs process, effective operators feedback and product management In-Reply-To: <639534235.17402447.1428591139863.JavaMail.zimbra@redhat.com> References: <1524636922.17358007.1428588368861.JavaMail.zimbra@redhat.com> <639534235.17402447.1428591139863.JavaMail.zimbra@redhat.com> Message-ID: >The Neutron drivers team, on the other hand, don't have a clear incentive (Or I suspect the will) to spend enormous amounts of time doing 'product management', as being a driver is essentially your third or fourth job by this point, and are the same people solving gate issues, merging code, triaging bugs and so on. Are you hinting here that there should be a separate team of people from the developers who are deciding what should and should not be worked on in Neutron? Have there been examples of that working in other open source projects where the majority of the development isn't driven by one employer? I ask that because I don't see much of an incentive for a developer to follow requirements generated by people not familiar with the Neutron code base. One of the roles of the driver team is to determine what is feasible in the release cycle. How would that be possible without actively contributing or (at a minimum) being involved in code reviews? On Thu, Apr 9, 2015 at 7:52 AM, Assaf Muller wrote: > The Neutron specs process was introduced during the Juno timecycle. At the > time it > was mostly a bureaucratic bottleneck (The ability to say no) to ease the > pain of cores > and manage workloads throughout a cycle. 
Perhaps this is a somewhat naive > outlook, > but I see other positives, such as more upfront design (Some is better > than none), > less high level talk during the implementation review process and more > focus on the details, > and 'free' documentation for every major change to the project (Some would > say this > is kind of a big deal; What better way to write documentation than to > force the developers > to do it in order for their features to get merged). > > That being said, you can only get a feature merged if you propose a spec, > and the only > people largely proposing specs are developers. This ingrains the open > source culture of > developer focused evolution, that, while empowering and great for > developers, is bad > for product managers, users (That are sometimes under-presented, as is the > case I'm trying > to make) and generally causes a lack of a cohesive vision. Like it or not, > the specs process > and the driver's team approval process form a sort of product management, > deciding what > features will ultimately go in to Neutron and in what time frame. > > We shouldn't ignore the fact that we clearly have people and product > managers pulling the strings > in the background, often deciding where developers will spend their time > and what specs to propose, > for the purpose of this discussion. I argue that managers often don't have > the tools to understand > what is important to the project, only to their own customers. The Neutron > drivers team, on the other hand, > don't have a clear incentive (Or I suspect the will) to spend enormous > amounts of time doing 'product management', > as being a driver is essentially your third or fourth job by this point, > and are the same people > solving gate issues, merging code, triaging bugs and so on. I'd like to > avoid to go in to a discussion of what's > wrong with the current specs process as I'm sure people have heard me > complain about this in > #openstack-neutron plenty of times before. 
Instead, I'd like to suggest a > system that would perhaps > get us to implement specs that are currently not being proposed, and give > an additional form of > input that would make sure that the development community is spending its > time in the right places. > > While 'super users' have been given more exposure, and operators summits > give operators > an additional tool to provide feedback, from a developer's point of view, the input is > non-empirical and scattered. I also have a hunch that operators still feel > their voice is not being heard. > > I propose an upvote/downvote system (Think Reddit), where everyone > (Operators especially) would upload > paragraph-long explanations of what they think is missing in Neutron. The > proposals have to be actionable > (So 'Neutron sucks', while of great humorous value, isn't something I can > do anything about), > and I suspect the downvote system will help self-regulate that anyway. The > proposals are not specs, but are > like product RFEs, so for example there would not be a 'testing' section, > as these proposals will not > replace the specs process anyway but augment it as an additional form of > input. Proposals can range > from new features (Role-based access control for Neutron resources, > dynamic routing, > Neutron availability zones, QoS, ...) to quality-of-life improvements > (Missing logs, too many > DEBUG-level logs, poor troubleshooting areas with an explanation of what > could be improved, ...) > to long-standing bugs, Nova network parity issues, and whatever else may > be irking the operators community. > The proposals would have to be moderated (Closing duplicates, low-quality > submissions and implemented proposals, > for example) and if that is a concern then I volunteer to do so. > > This system will also give drivers a 'way out': The last cycle we spent > time refactoring this and that, > and developers love doing that so it's easy to get behind. 
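[Editor's note: the upvote/downvote system described above is, mechanically, a net-score ordering over operator-submitted proposals. A minimal illustrative sketch follows — the titles and vote counts are invented, and a real system would still need the moderation and vote-weighting decisions the email discusses.]

```python
# Hypothetical sketch of a vote-ranked proposal list (Reddit-style net score).
# Titles and vote counts are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Proposal:
    title: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        # Net score: simplest possible ranking signal.
        return self.upvotes - self.downvotes


def ranked(proposals):
    """Order proposals by net score, highest first."""
    return sorted(proposals, key=lambda p: p.score, reverse=True)


proposals = [
    Proposal("Role-based access control for Neutron resources", 42, 3),
    Proposal("Too many DEBUG-level logs", 17, 1),
    Proposal("Neutron availability zones", 25, 4),
]

for p in ranked(proposals):
    print(f"{p.score:+4d}  {p.title}")
```

Plain net score is only one choice; schemes that weight by vote confidence or recency (as Reddit's comment ranking does) avoid burying newer proposals, and that is exactly the kind of detail such a system would have to settle.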
I think that as > in the next cycles we move back to features, > friction will rise and the process will reveal its flaws. > > Something to consider: Maybe the top proposal takes a day to implement. > Maybe some low priority bug is actually > the second highest proposal. Maybe all of the currently marked 'critical' > bugs don't even appear on the list. > Maybe we aren't spending our time where we should be. > > And now a word from our legal team: In order for this to be viable, the > system would have to be a > *non binding*, *additional* form of input. The top proposal *could* be > declined for the same reasons > that specs are currently being declined. It would not replace any of our > current systems or processes. > > > Assaf Muller, Cloud Networking Engineer > Red Hat > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Kevin Benton -------------- next part -------------- An HTML attachment was scrubbed... URL: From alopgeek at gmail.com Fri Apr 10 19:48:49 2015 From: alopgeek at gmail.com (Abel Lopez) Date: Fri, 10 Apr 2015 12:48:49 -0700 Subject: [Openstack-operators] Short survey on Guest Images Message-ID: <6FCA51FF-43C6-4343-B3ED-F9A174C3C113@gmail.com> If you're somehow involved in creating/providing images for glance, I have a very short survey that I'd like input on. https://docs.google.com/forms/d/1HR500i8raQiH72GwEbpFuUQA8OvA7Uae2e-39Y6YSQ0/viewform?usp=send_form Responses will help me as part of my presentation in Vancouver. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: Message signed with OpenPGP using GPGMail URL: From gustavo.randich at gmail.com Fri Apr 10 20:13:26 2015 From: gustavo.randich at gmail.com (Gustavo Randich) Date: Fri, 10 Apr 2015 17:13:26 -0300 Subject: [Openstack-operators] how to attach VMs to external/vlan network directly Message-ID: Hi, I've tried without success to attach instances directly to external VLAN network using "provider:network_type vlan"; below are the details. Using "provider:network_type flat" I made it work. I was basically following this: http://www.s3it.uzh.ch/blog/openstack-neutron-vlan/ Any idea will be appreciated. ML2 CONFIG ml2_type_vlan network_vlan_ranges vlannet:81:91 bridge_mappings vlannet:br-vlan enable_security_group True enable_ipset True firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver NETWORK CREATION neutron net-create vlan81 --router:external True --provider:physical_network vlannet --provider:network_type vlan --provider:segmentation_id 81 --shared neutron subnet-create vlan81 10.111.81.0/24 --name vlan81 --allocation-pool start=10.111.81.65,end=10.111.81.126 --enable-dhcp --gateway 10.111.81.254 --dns-nameserver 10.1.1.68 --dns-nameserver 10.1.1.42 --host-route destination=169.254.169.254/32,nexthop=10.111.81.65 NETWORK CONFIGURATION OF COMPUTE/NETWORK NODES em1 \ bond0 -> vlan81 -> | br-vlan | <-> | br-int | em2 / DEBUGGING root at juno-dev02:~# ovs-ofctl dump-flows br-vlan NXST_FLOW reply (xid=0x4): cookie=0x0, duration=2991.439s, table=0, n_packets=5374, n_bytes=357405, idle_age=2, priority=4,in_port=5,dl_vlan=1 actions=mod_vlan_vid:81,NORMAL cookie=0x0, duration=3439.498s, table=0, n_packets=3792, n_bytes=159460, idle_age=0, priority=2,in_port=5 actions=drop cookie=0x0, duration=3440.064s, table=0, n_packets=113217, n_bytes=18712371, idle_age=0, priority=1 actions=NORMAL root at juno-dev02:~# ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, 
duration=3891.241s, table=0, n_packets=1448, n_bytes=157908, idle_age=1236, priority=3,in_port=7,dl_vlan=81 actions=mod_vlan_vid:1,NORMAL cookie=0x0, duration=4339.435s, table=0, n_packets=26538, n_bytes=1584268, idle_age=1, priority=2,in_port=7 actions=drop cookie=0x0, duration=4340.224s, table=0, n_packets=10686, n_bytes=590535, idle_age=1, priority=1 actions=NORMAL cookie=0x0, duration=4340.153s, table=23, n_packets=0, n_bytes=0, idle_age=4340, priority=0 actions=drop root at juno-dev02:~# nova list +--------------------------------------+------+--------+------------+-------------+---------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+------+--------+------------+-------------+---------------------+ | 2f1a2cba-6fc7-45ae-a251-e709ab8b7ecc | test | ACTIVE | - | Running | vlan81=10.111.81.66 | +--------------------------------------+------+--------+------------+-------------+---------------------+ Instance can reach DHCP server on network node (10.111.81.65), but cannot reach default gateway (10.111.81.254), nor any host of the external network. The br-vlan bridge shows outgoing ARP packets tagged with vlan 81, and ARP replies not tagged, which I suppose it then drops because it does not match the first rule of br-vlan. 
The br-int bridge shows only outgoing ARP packets: root at juno-dev02:~# tcpdump -env -i br-vlan icmp or arp tcpdump: listening on br-vlan, link-type EN10MB (Ethernet), capture size 65535 bytes 11:45:56.463254 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 81, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.111.81.254 tell 10.111.81.66, length 28 11:45:56.464491 78:19:f7:9b:2a:41 > fa:16:3e:58:74:5b, ethertype ARP (0x0806), length 56: Ethernet (len 6), IPv4 (len 4), Reply 10.111.81.254 is-at 78:19:f7:9b:2a:41, length 42 11:45:57.461253 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 81, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.111.81.254 tell 10.111.81.66, length 28 11:45:57.462765 78:19:f7:9b:2a:41 > fa:16:3e:58:74:5b, ethertype ARP (0x0806), length 56: Ethernet (len 6), IPv4 (len 4), Reply 10.111.81.254 is-at 78:19:f7:9b:2a:41, length 42 root at juno-dev02:~# tcpdump -env -i br-int icmp or arp tcpdump: WARNING: br-int: no IPv4 address assigned tcpdump: listening on br-int, link-type EN10MB (Ethernet), capture size 65535 bytes 11:29:49.330084 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.111.81.254 tell 10.111.81.66, length 28 11:29:50.162106 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.111.81.49 tell 10.111.81.66, length 28 11:29:51.180111 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.111.81.49 tell 10.111.81.66, length 28 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefano at openstack.org Fri Apr 10 21:28:10 2015 From: stefano at openstack.org (Stefano Maffulli) Date: Fri, 10 Apr 2015 14:28:10 -0700 Subject: [Openstack-operators] OpenStack Community Weekly Newsletter (Apr 3 - 10) Message-ID: <1428701290.22374.0.camel@sputacchio.gateway.2wire.net> Making your list and checking it twice for the OpenStack Vancouver Summit Emily Hugenbruch shared her pre-summit checklists: great tips there. OVN and OpenStack Integration Development Update The Open vSwitch project announced the OVN effort back in January and Russell Bryant started to look into it. He reports about OVN, calling it ?a promising open source backend for OpenStack Neutron?. OpenStack in the classroom Amrith Kumar announced an effort to work with educational institutions in Massachusetts and near Toronto (where Tesora has offices) to try and make available a course on cloud computing with OpenStack as the exemplar system. OpenStack In The Enterprise A new mini-portal on openstack.org website to discover use cases and white papers from the enterprise world. The Road to Vancouver * Sign up for OpenStack Upstream Training in Vancouver * Canada Visa Information * Official Hotel Room Blocks * 2015 OpenStack T-Shirt Design Contest * Preparation to Design summit * Liberty Design Summit - Proposed slot allocation * What's a Design Summit? There is still room for Upstream Training Relevant Conversations * PTL Voting is now open * Kilo stable branches for "other" libraries * The Evolution of core developer to maintainer? 
* The Core Deficiency * Implementation of Pacemaker Managed OpenStack VM Recovery * Neutron Kilo Retrospective and a Look Toward Liberty * [Neutron] The specs process, effective operators feedback and product management Deadlines and Development Priorities * [Nova] Identifying release critical bugs in Kilo * [Cinder] Bug Triage - Call for Participation * [Nova] Liberty specs are now open and [Nova] Trunk is now Liberty * [neutron] Liberty Specs are now open! Security Advisories and Notices * None Tips ?n Tricks * By Adam Young: Horizon WebSSO via SSSD * By Assaf Muller: Multinode DVR Devstack * By Alessandro Pilotti: VirtualBox driver for OpenStack Upcoming Events * Apr 13 - 14, 2015 OpenStack Live Santa Clara, CA, US * Apr 13 - 16, 2015 StackAttack! A HOLatCollaborate15 Las Vegas, NV, US * Apr 15, 2015 iX OpenStack Tag K?ln, NRW, DE * Apr 15 - 16, 2015 1? Hangout OpenStack Brasil 2015 Brasil, BR * Apr 16 - 18, 2015 Open Cloud 2015 Beijing, Beijing, CN * Apr 16, 2015 8th OpenStack Meetup Stockholm Stockholm, SE * Apr 16, 2015 8th OpenStack User Group Nordics meetup Stockholm, SE * Apr 21 - 22, 2015 CONNECT 2015 Melbourne, Victoria, AU * Apr 22 - 23, 2015 China SDNNFV Conference Beijing, CN * Apr 22, 2015 OpenStack NYC Meetup New York, NY, US * Apr 23, 2015 OpenStack Philadelphia Meetup Philadelphia, PA, US * May 05 - 07, 2015 CeBIT AU 2015 Sydney, NSW, AU * May 18 - 22, 2015 OpenStack Summit May 2015 Vancouver, BC Other News * A community distribution of OpenStack * Why adopting OpenStack for dev/test matters * Gophercloud Update * Why OpenStack will change the way we live, work and play The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ayoung at redhat.com Fri Apr 10 21:30:07 2015 From: ayoung at redhat.com (Adam Young) Date: Fri, 10 Apr 2015 17:30:07 -0400 Subject: [Openstack-operators] Dynamic Policy for Access Control In-Reply-To: <1428424599.61768.6.camel@localhost.localdomain> References: <54EB4AE2.1000203@redhat.com> <5D7F9996EA547448BC6C54C8C5AAF4E50102865948@CERNXCHG41.cern.ch> <1428424599.61768.6.camel@localhost.localdomain> Message-ID: <552840DF.1050000@redhat.com> On 04/07/2015 11:36 AM, Marc Heckmann wrote: > My apologies for not seeing this sooner as the topic is of great > interest. My comments below inline.. > > On Mon, 2015-02-23 at 16:41 +0000, Tim Bell wrote: >>> -----Original Message----- >>> From: Adam Young [mailto:ayoung at redhat.com] >>> Sent: 23 February 2015 16:45 >>> To: openstack-operators at lists.openstack.org >>> Subject: [Openstack-operators] Dynamic Policy for Access Control >>> >>> "Admin can do everything!" has been a common lament, heard for multiple >>> summits. Its more than just a development issue. I'd like to fix that. I think we >>> all would. >>> >>> >>> I'm looking to get some Operator input on the Dynamic Policy issue. I wrote up a >>> general overview last fall, after the Kilo summit: >>> >>> https://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/ > I agree with everything in that post. > > I would add the following comments: > > 1. I doubt this will change, but to be clear, we cannot lose the ability > to create custom roles and limit the capabilities of the standard roles. > For example, if I wanted to limit the ability to make images public or > limit the ability to associate a floating IP. That is a baseline consideration. We are hoping to make custom roles the norm. > > 2. This work should not be done in vacuum. Ideally, Horizon support for > assigning roles to users and editing policy should be released at the > same time or not long after. 
I realize that this is easier said than > done, but it will be important in order for the feature to get used. Role assignments will be done the same way they are now, as Horizon fetches the list of roles from Keystone. Editing policy will require a new UI. I don't see that happening in Horizon until the Keystone mechanism is finalized. Thanks for the response. We can carry on the conversation at the Summit. > >>> >>> Some of what I am looking at is: what are the general roles that Operators >>> would like to have by default when deploying OpenStack? >>> >> As I described in http://openstack-in-production.blogspot.ch/2015/02/delegation-of-roles.html, we've got (mapped per-project to an AD group) >> >> - operator (start/stop/reboot/console) >> - accounting (read ceilometer data for reporting) >> >>> I've submitted a talk about policy for the Summit: >>> https://www.openstack.org/vote-vancouver/presentation/dynamic-policy-for- >>> access-control >>> >>> If you want, please vote for it, but even if it does not get selected, I'd like to >>> discuss Policy with the operators at the summit, as input to the Keystone >>> development effort. >>> >> Sounds like a good topic for the ops meetup track. >> >>> Feedback greatly welcome. 
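[Editor's note: Marc's first point — retaining custom roles that restrict specific capabilities such as making images public — is what each service's static policy.json expresses today, and what a dynamic-policy mechanism would need to keep supporting. A hypothetical Glance fragment illustrating it (the `publicize_image` rule name exists in Glance; the `image_publisher` role name is invented here):]

```json
{
    "default": "role:member",
    "publicize_image": "role:image_publisher or role:admin"
}
```

Under the dynamic-policy model sketched in Adam's blog post, the idea is that rules like these would be managed and fetched centrally via Keystone rather than hand-edited on every node.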
>>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From rochelle.grober at huawei.com Sat Apr 11 01:05:11 2015 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Sat, 11 Apr 2015 01:05:11 +0000 Subject: [Openstack-operators] [all] [api] [log] Erring is Caring In-Reply-To: <160927AC-D86B-4CD2-A646-E02A3C68EA1F@rackspace.com> References: <160927AC-D86B-4CD2-A646-E02A3C68EA1F@rackspace.com> Message-ID: Hi all, The first draft of the spec for Log Message Error Codes (from the Log Working Group) is out for review: https://review.openstack.org/#/c/172552/ Please comment, and please look for the way forward so that the different places, ways and reasons errors get reported provide consistency across the projects the make up the Openstack ecosystem. Consistency across uses will speed problem solving and will provide a common language across the diversity of users of the OpenStack code and environments. This cross project spec is focused on just a part of the Log Message "header", but it is the start of where log messages need to go. It dovetails in with developer focused API errors, but is aimed at the log files the operators rely on to keep their clouds running. Over the next couple of days, I will also specifically add reviewers to the list if you haven't already commented on the spec;-). Thanks for your patience waiting for this. It *is* a work in progress, but I think the meat is there for discussion. Also, please look at and comment on: Return Request ID to caller https://review.openstack.org/#/c/156508/ as this is also critical to get right for logging and developer efforts. 
--Rocky -----Original Message----- From: Everett Toews [mailto:everett.toews at RACKSPACE.COM] Sent: Tuesday, March 31, 2015 14:36 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [all] [api] Erring is Caring Hi All, An API Working Group Guideline for Errors https://review.openstack.org/#/c/167793/ Errors are a crucial part of the developer experience when using an API. As developers learn the API they inevitably run into errors. The quality and consistency of the error messages returned to them will play a large part in how quickly they can learn the API, how they can be more effective with the API, and how much they enjoy using the API. We need consistency across all services for the error format returned in the response body. The Way Forward I did a bit of research into the current state of consistency in errors across OpenStack services [1]. Since no services seem to respond with a top-level "errors" key, it's possible that they could just include this key in the response body along with their usual response and the 2 can live side-by-side for some deprecation period. Hopefully those services with unstructured errors should be okay with adding some structure. That said, the current error formats aren't documented anywhere that I've seen so this all feels fair game anyway. How this would get implemented in code is up to you. It could eventually be implemented in all projects individually or perhaps an Oslo utility is called for. However, this discussion is not about the implementation. This discussion is about the error format. The Review I've explicitly added all of the API WG and Logging WG CPLs as reviewers to that patch but feedback from all is welcome. You can find a more readable version of patch set 4 at [2]. I see the "id" and "code" fields as the connection point to what the logging working group is doing. 
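[Editor's note: a concrete payload makes the consistency argument easier to see. The sketch below shows one plausible shape for a response body carrying the proposed top-level "errors" key; the field names and values are illustrative only — the ratified schema is whatever lands in the review — with "id" and "code" included as the hook into the logging working group's error codes.]

```python
import json

# One plausible shape for a response body with a top-level "errors" key.
# All field names and values are illustrative, not the ratified API WG schema.
error_response = {
    "errors": [
        {
            "id": "0ae65f1c-3b2a-4f6e-9d11-2a9a7e3c5d42",  # unique per occurrence
            "code": "compute.flavor_not_found",            # hook into log error codes
            "status": 404,
            "title": "Flavor not found",
            "detail": "Flavor 'm1.gigantic' does not exist in this cloud.",
        }
    ]
}

body = json.dumps(error_response, indent=2)
print(body)
```

Because "errors" is a list, a single response can report several failures at once, and services can return this structure alongside their legacy format during a deprecation period, as suggested above.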
Thanks, Everett [1] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Errors [2] http://docs-draft.openstack.org/93/167793/4/check/gate-api-wg-docs/e2f5b6e//doc/build/html/guidelines/errors.html __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mkassawara at gmail.com Sat Apr 11 15:55:00 2015 From: mkassawara at gmail.com (Matt Kassawara) Date: Sat, 11 Apr 2015 10:55:00 -0500 Subject: [Openstack-operators] how to attach VMs to external/vlan network directly In-Reply-To: References: Message-ID: Does vlan81 between bond0 and br-vlan reflect a VLAN subinterface on the host? If so, you need to remove it and attach bond0 directly to br-vlan because Open vSwitch performs the tagging for you. On Fri, Apr 10, 2015 at 3:13 PM, Gustavo Randich wrote: > Hi, > > I've tried without success to attach instances directly to external VLAN > network using "provider:network_type vlan"; below are the details. Using > "provider:network_type flat" I made it work. > > I was basically following this: > http://www.s3it.uzh.ch/blog/openstack-neutron-vlan/ > > Any idea will be appreciated. 
> > ML2 CONFIG > ml2_type_vlan network_vlan_ranges vlannet:81:91 > bridge_mappings vlannet:br-vlan > enable_security_group True > enable_ipset True > firewall_driver > neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver > > NETWORK CREATION > neutron net-create vlan81 --router:external True > --provider:physical_network vlannet --provider:network_type vlan > --provider:segmentation_id 81 --shared > neutron subnet-create vlan81 10.111.81.0/24 --name vlan81 > --allocation-pool start=10.111.81.65,end=10.111.81.126 --enable-dhcp > --gateway 10.111.81.254 --dns-nameserver 10.1.1.68 --dns-nameserver > 10.1.1.42 --host-route destination=169.254.169.254/32,nexthop=10.111.81.65 > > NETWORK CONFIGURATION OF COMPUTE/NETWORK NODES > em1 \ > bond0 -> vlan81 -> | br-vlan | <-> | br-int | > em2 / > > DEBUGGING > > root at juno-dev02:~# ovs-ofctl dump-flows br-vlan > NXST_FLOW reply (xid=0x4): > cookie=0x0, duration=2991.439s, table=0, n_packets=5374, n_bytes=357405, > idle_age=2, priority=4,in_port=5,dl_vlan=1 actions=mod_vlan_vid:81,NORMAL > cookie=0x0, duration=3439.498s, table=0, n_packets=3792, n_bytes=159460, > idle_age=0, priority=2,in_port=5 actions=drop > cookie=0x0, duration=3440.064s, table=0, n_packets=113217, > n_bytes=18712371, idle_age=0, priority=1 actions=NORMAL > > root at juno-dev02:~# ovs-ofctl dump-flows br-int > NXST_FLOW reply (xid=0x4): > cookie=0x0, duration=3891.241s, table=0, n_packets=1448, n_bytes=157908, > idle_age=1236, priority=3,in_port=7,dl_vlan=81 actions=mod_vlan_vid:1,NORMAL > cookie=0x0, duration=4339.435s, table=0, n_packets=26538, > n_bytes=1584268, idle_age=1, priority=2,in_port=7 actions=drop > cookie=0x0, duration=4340.224s, table=0, n_packets=10686, n_bytes=590535, > idle_age=1, priority=1 actions=NORMAL > cookie=0x0, duration=4340.153s, table=23, n_packets=0, n_bytes=0, > idle_age=4340, priority=0 actions=drop > > root at juno-dev02:~# nova list > > 
+--------------------------------------+------+--------+------------+-------------+---------------------+ > | ID | Name | Status | Task State | > Power State | Networks | > > +--------------------------------------+------+--------+------------+-------------+---------------------+ > | 2f1a2cba-6fc7-45ae-a251-e709ab8b7ecc | test | ACTIVE | - | > Running | vlan81=10.111.81.66 | > > +--------------------------------------+------+--------+------------+-------------+---------------------+ > > > Instance can reach DHCP server on network node (10.111.81.65), but cannot > reach default gateway (10.111.81.254), nor any host of the external network. > > The br-vlan bridge shows outgoing ARP packets tagged with vlan 81, and ARP > replies not tagged, which I suppose it then drops because it does not match > the first rule of br-vlan. > > The br-int bridge shows only outgoin ARP packets: > > > root at juno-dev02:~# tcpdump -env -i br-vlan icmp or arp > tcpdump: listening on br-vlan, link-type EN10MB (Ethernet), capture size > 65535 bytes > 11:45:56.463254 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q > (0x8100), length 46: vlan 81, p 0, ethertype ARP, Ethernet (len 6), IPv4 > (len 4), Request who-has 10.111.81.254 tell 10.111.81.66, length 28 > 11:45:56.464491 78:19:f7:9b:2a:41 > fa:16:3e:58:74:5b, ethertype ARP > (0x0806), length 56: Ethernet (len 6), IPv4 (len 4), Reply 10.111.81.254 > is-at 78:19:f7:9b:2a:41, length 42 > 11:45:57.461253 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q > (0x8100), length 46: vlan 81, p 0, ethertype ARP, Ethernet (len 6), IPv4 > (len 4), Request who-has 10.111.81.254 tell 10.111.81.66, length 28 > 11:45:57.462765 78:19:f7:9b:2a:41 > fa:16:3e:58:74:5b, ethertype ARP > (0x0806), length 56: Ethernet (len 6), IPv4 (len 4), Reply 10.111.81.254 > is-at 78:19:f7:9b:2a:41, length 42 > > root at juno-dev02:~# tcpdump -env -i br-int icmp or arp > tcpdump: WARNING: br-int: no IPv4 address assigned > tcpdump: listening on br-int, link-type 
EN10MB (Ethernet), capture size > 65535 bytes > 11:29:49.330084 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q > (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 > (len 4), Request who-has 10.111.81.254 tell 10.111.81.66, length 28 > 11:29:50.162106 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q > (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 > (len 4), Request who-has 10.111.81.49 tell 10.111.81.66, length 28 > 11:29:51.180111 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q > (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 > (len 4), Request who-has 10.111.81.49 tell 10.111.81.66, length 28 > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gustavo.randich at gmail.com Sun Apr 12 04:26:24 2015 From: gustavo.randich at gmail.com (Gustavo Randich) Date: Sun, 12 Apr 2015 01:26:24 -0300 Subject: [Openstack-operators] how to attach VMs to external/vlan network directly In-Reply-To: References: Message-ID: Yes, vlan81 is a subinterface. That is the problem. Will try adding bond0 to br-vlan... Thanks! On Saturday, April 11, 2015, Matt Kassawara wrote: > Does vlan81 between bond0 and br-vlan reflect a VLAN subinterface on the > host? If so, you need to remove it and attach bond0 directly to br-vlan > because Open vSwitch performs the tagging for you. > > On Fri, Apr 10, 2015 at 3:13 PM, Gustavo Randich < > gustavo.randich at gmail.com > > wrote: > >> Hi, >> >> I've tried without success to attach instances directly to external VLAN >> network using "provider:network_type vlan"; below are the details. Using >> "provider:network_type flat" I made it work. 
>> >> I was basically following this: >> http://www.s3it.uzh.ch/blog/openstack-neutron-vlan/ >> >> Any idea will be appreciated. >> >> ML2 CONFIG >> ml2_type_vlan network_vlan_ranges vlannet:81:91 >> bridge_mappings vlannet:br-vlan >> enable_security_group True >> enable_ipset True >> firewall_driver >> neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver >> >> NETWORK CREATION >> neutron net-create vlan81 --router:external True >> --provider:physical_network vlannet --provider:network_type vlan >> --provider:segmentation_id 81 --shared >> neutron subnet-create vlan81 10.111.81.0/24 --name vlan81 >> --allocation-pool start=10.111.81.65,end=10.111.81.126 --enable-dhcp >> --gateway 10.111.81.254 --dns-nameserver 10.1.1.68 --dns-nameserver >> 10.1.1.42 --host-route destination= >> 169.254.169.254/32,nexthop=10.111.81.65 >> >> NETWORK CONFIGURATION OF COMPUTE/NETWORK NODES >> em1 \ >> bond0 -> vlan81 -> | br-vlan | <-> | br-int | >> em2 / >> >> DEBUGGING >> >> root at juno-dev02:~# ovs-ofctl dump-flows br-vlan >> NXST_FLOW reply (xid=0x4): >> cookie=0x0, duration=2991.439s, table=0, n_packets=5374, n_bytes=357405, >> idle_age=2, priority=4,in_port=5,dl_vlan=1 actions=mod_vlan_vid:81,NORMAL >> cookie=0x0, duration=3439.498s, table=0, n_packets=3792, n_bytes=159460, >> idle_age=0, priority=2,in_port=5 actions=drop >> cookie=0x0, duration=3440.064s, table=0, n_packets=113217, >> n_bytes=18712371, idle_age=0, priority=1 actions=NORMAL >> >> root at juno-dev02:~# ovs-ofctl dump-flows br-int >> NXST_FLOW reply (xid=0x4): >> cookie=0x0, duration=3891.241s, table=0, n_packets=1448, n_bytes=157908, >> idle_age=1236, priority=3,in_port=7,dl_vlan=81 actions=mod_vlan_vid:1,NORMAL >> cookie=0x0, duration=4339.435s, table=0, n_packets=26538, >> n_bytes=1584268, idle_age=1, priority=2,in_port=7 actions=drop >> cookie=0x0, duration=4340.224s, table=0, n_packets=10686, >> n_bytes=590535, idle_age=1, priority=1 actions=NORMAL >> cookie=0x0, duration=4340.153s, table=23, 
n_packets=0, n_bytes=0, >> idle_age=4340, priority=0 actions=drop >> >> root at juno-dev02:~# nova list >> >> +--------------------------------------+------+--------+------------+-------------+---------------------+ >> | ID | Name | Status | Task State | >> Power State | Networks | >> >> +--------------------------------------+------+--------+------------+-------------+---------------------+ >> | 2f1a2cba-6fc7-45ae-a251-e709ab8b7ecc | test | ACTIVE | - | >> Running | vlan81=10.111.81.66 | >> >> +--------------------------------------+------+--------+------------+-------------+---------------------+ >> >> >> Instance can reach DHCP server on network node (10.111.81.65), but cannot >> reach default gateway (10.111.81.254), nor any host of the external network. >> >> The br-vlan bridge shows outgoing ARP packets tagged with vlan 81, and >> ARP replies not tagged, which I suppose it then drops because it does not >> match the first rule of br-vlan. >> >> The br-int bridge shows only outgoin ARP packets: >> >> >> root at juno-dev02:~# tcpdump -env -i br-vlan icmp or arp >> tcpdump: listening on br-vlan, link-type EN10MB (Ethernet), capture size >> 65535 bytes >> 11:45:56.463254 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q >> (0x8100), length 46: vlan 81, p 0, ethertype ARP, Ethernet (len 6), IPv4 >> (len 4), Request who-has 10.111.81.254 tell 10.111.81.66, length 28 >> 11:45:56.464491 78:19:f7:9b:2a:41 > fa:16:3e:58:74:5b, ethertype ARP >> (0x0806), length 56: Ethernet (len 6), IPv4 (len 4), Reply 10.111.81.254 >> is-at 78:19:f7:9b:2a:41, length 42 >> 11:45:57.461253 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q >> (0x8100), length 46: vlan 81, p 0, ethertype ARP, Ethernet (len 6), IPv4 >> (len 4), Request who-has 10.111.81.254 tell 10.111.81.66, length 28 >> 11:45:57.462765 78:19:f7:9b:2a:41 > fa:16:3e:58:74:5b, ethertype ARP >> (0x0806), length 56: Ethernet (len 6), IPv4 (len 4), Reply 10.111.81.254 >> is-at 78:19:f7:9b:2a:41, length 42 >> >> root 
at juno-dev02:~# tcpdump -env -i br-int icmp or arp >> tcpdump: WARNING: br-int: no IPv4 address assigned >> tcpdump: listening on br-int, link-type EN10MB (Ethernet), capture size >> 65535 bytes >> 11:29:49.330084 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q >> (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 >> (len 4), Request who-has 10.111.81.254 tell 10.111.81.66, length 28 >> 11:29:50.162106 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q >> (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 >> (len 4), Request who-has 10.111.81.49 tell 10.111.81.66, length 28 >> 11:29:51.180111 fa:16:3e:58:74:5b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q >> (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 >> (len 4), Request who-has 10.111.81.49 tell 10.111.81.66, length 28 >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aishwarya.adyanthaya at accenture.com Mon Apr 13 03:16:30 2015 From: aishwarya.adyanthaya at accenture.com (aishwarya.adyanthaya at accenture.com) Date: Mon, 13 Apr 2015 03:16:30 +0000 Subject: [Openstack-operators] Endpoints in kubernetes In-Reply-To: References: Message-ID: Hi Alex, I went to the link you specified (https://github.com/stackforge/murano/tree/master/contrib/elements/kubernetes) and downloaded the packages for Kubernetes (like etcd, Kubernetes, flannel) but found no configuration files where I could edit the service configuration. I checked my machine for the configuration but couldn?t find it. If you could let me know on it that would be really helpful. Thank you in advance. 
Aishwarya Adyanthaya From: Alex Freedland [mailto:afreedland at mirantis.com] Sent: Tuesday, April 07, 2015 8:43 PM To: Adyanthaya, Aishwarya Cc: openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] Endpoints in kubernetes Here is a link for the disk image builder for Kubernetes images for use in Murano: https://github.com/stackforge/murano/tree/master/contrib/elements/kubernetes It has all the steps to preinstall Kubernetes on top of Ubuntu. Alex Freedland Co-Founder and Chairman Mirantis, Inc. On Tue, Apr 7, 2015 at 5:16 AM, > wrote: Hello, I'm trying to integrate Kubernetes with OpenStack and I've started with the master node. While creating the master node I downloaded the Kubernetes.git and next I executed the command "make release". During the release, I got a few lines that read "Error on creating endpoints: endpoints 'service1' not found", though at the end of the execution it read successful. When I checked the network it read "Error: cannot sync with the cluster using endpoints http://127.0.0.1:4001". Does anyone know if we need to add extra packages or other configurations that need to be done before getting Kubernetes.git on the node to rectify the error? Thanks! ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy.
______________________________________________________________________________________ www.accenture.com _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvanwink at rackspace.com Mon Apr 13 04:30:15 2015 From: mvanwink at rackspace.com (Matt Van Winkle) Date: Mon, 13 Apr 2015 04:30:15 +0000 Subject: [Openstack-operators] [Large Deployment Team] Meeting this Thursday at 16:00 UTC Message-ID: Hello all, Just a reminder that we have a meeting this Thursday at 16:00 UTC. (Reminder to some of us, this makes it an hour earlier :) ). I'm working on confirming the room after the confusion a couple of months ago and will update tomorrow with more details. We will discuss the outputs from the session at the mid-cycle and determine what we want to accomplish in Vancouver. I'll touch base tomorrow, but wanted to get this on your calendars. Thanks! Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From aishwarya.adyanthaya at accenture.com Mon Apr 13 05:06:08 2015 From: aishwarya.adyanthaya at accenture.com (aishwarya.adyanthaya at accenture.com) Date: Mon, 13 Apr 2015 05:06:08 +0000 Subject: [Openstack-operators] kubernetes minion services Message-ID: <41da12394d0f4fea83aa5ebf76691073@CO2PR42MB188.048d.mgd.msft.net> Hi, I'm trying to integrate Kubernetes with OpenStack and I have been following the site 'devops.profitbricks.com' for it. The master node is running services such as kube-apiserver, kube-scheduler and kube-controller-manager, but the etcd service seems to be missing even after the configuration was done. The services on the minions aren't in a running state after the configuration is done. I tried manually starting them with the service start command and checked their status, but they seem to be in a stop/waiting state.
Could anyone point out what I have to do? Thanks in advance! Aishwarya Adyanthaya ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. ______________________________________________________________________________________ www.accenture.com From tom at openstack.org Mon Apr 13 09:32:41 2015 From: tom at openstack.org (Tom Fifield) Date: Mon, 13 Apr 2015 18:32:41 +0900 Subject: [Openstack-operators] Draft Agenda for the Vancouver Ops Summit Sessions Message-ID: <552B8D39.2020309@openstack.org> Hi, Thank you very much to the sixty-odd people who contributed to the planning etherpad. I've taken that etherpad, the Paris summit attendee feedback survey results and the PHL ops feedback etherpad, and munged all that into a first-draft schedule for Vancouver. If you're new: note that this is in addition to the operations (and other) conference track's presentations. It's aimed at giving us a design-summit-style place to congregate, swap best practices and ideas, and give feedback. As a reminder, we have two different kinds of sessions - General Sessions (Tuesday), which are interactive planning discussions for the operator community, and Working Groups (Wednesday), which focus on specific topics and aim to complete concrete tasks in that area. Below is the draft schedule. Please take a look and reply with your thoughts! We need to get this ticked off fairly quickly, to ensure we find moderators for the sessions and can get them advertised as soon as possible.
_*General Sessions*_

Tuesday | Big Room 1 | Big Room 2 | Big Room 3
11:15 - 11:55 | Ops Summit "101" / The Story So Far | Federation - Keystone & other - what do people need? | RabbitMQ
12:05 - 12:45 | How do we fix logging? | Architecture Show and Tell | Ceilometer - what needs fixing?
12:45 - 2:00 | | |
2:00 - 2:40 | Billing / show back / charge back - how do I do that? | Architecture Show and Tell - Special Edition | Cinder Feedback
2:50 - 3:30 | OnBoard & Integration of Legacy Apps | User Committee Session | Hypervisor Tuning
3:40 - 4:20 | Security | CI/CD and Deployment | Database
4:20 - 4:40 | | |
4:40 - 5:20 | Internal evangelism - convincing your C-level exec to back OpenStack | Operating multi-site OpenStack installations in practice | Nova Feedback (inc EC2 API)
5:30 - 6:10 | Customer onboarding and offboarding | Containers - what do you want? | Neutron Feedback

_*Working Groups*_

Wednesday | Room 1 | Room 2 | Room 3
9:00 - 9:40 | Telco | Chef | HPC Working Group
9:50 - 10:30 | Telco | Puppet | HPC Working Group
10:30 - 11:00 | | |
11:00 - 11:40 | Monitoring & Tools Working Group | Ansible | Ops Tags Working Group
11:50 - 12:30 | Monitoring & Tools Working Group | Ceph | Ops Tags Working Group
12:30 - 1:50 | | |
1:50 - 2:30 | Large Deployments Team | Burning Issues | Tech Choices (eg is MongoDB OK?)
2:40 - 3:20 | Large Deployments Team | Burning Issues | CMDB
3:30 - 4:10 | Large Deployments Team | Docs | Data-Plane technology transitions
4:10 - 4:30 | | |
4:30 - 5:10 | Upgrades | Packaging | nova-network
5:20 - 6:00 | Upgrades | Packaging | nova-network

Regards, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From aishwarya.adyanthaya at accenture.com Mon Apr 13 12:06:49 2015 From: aishwarya.adyanthaya at accenture.com (aishwarya.adyanthaya at accenture.com) Date: Mon, 13 Apr 2015 12:06:49 +0000 Subject: [Openstack-operators] kubernetes services Message-ID: <575aaf51c9f9412dab31f785d70f6ddd@CO2PR42MB188.048d.mgd.msft.net> Hi, I'm trying to integrate Kubernetes with Openstack and I have been following the site 'devops.profitbricks.com' for it.
The master node is running services such as kube-apiserver, kube-scheduler and kube-controller-manager, but the etcd service seems to be missing even after the configuration was done. The services on the minions aren't in a running state after the configuration is done. I tried manually starting them with the service start command and checked their status, but they seem to be in a stop/waiting state. Could anyone point out what I have to do? Thanks in advance! Aishwarya Adyanthaya ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. ______________________________________________________________________________________ www.accenture.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.shuklin at gmail.com Mon Apr 13 21:56:43 2015 From: george.shuklin at gmail.com (George Shuklin) Date: Tue, 14 Apr 2015 00:56:43 +0300 Subject: [Openstack-operators] Draft Agenda for the Vancouver Ops Summit Sessions In-Reply-To: <552B8D39.2020309@openstack.org> References: <552B8D39.2020309@openstack.org> Message-ID: <552C3B9B.6040904@gmail.com> On 04/13/2015 12:32 PM, Tom Fifield wrote: What kind of projects will the sessions 'Architecture Show and Tell' and 'Architecture Show and Tell - Special Edition' be about? Thanks.
On 04/13/2015 12:32 PM, Tom Fifield wrote: [cut] > > _*General Sessions*_ > > Tuesday Big Room 1 Big Room 2 Big Room 3 > 11:15 - 11:55 Ops Summit "101" / The Story So Far Federation - > Keystone & other - what do people need? RabbitMQ > 12:05 - 12:45 How do we fix logging? Architecture Show and Tell > Ceilometer - what needs fixing? > 12:45 - 2:00 > > > > 2:00 - 2:40 Billing / show back / charge back - how do I do that? > Architecture Show and Tell - Special Edition Cinder Feedback > 2:50 - 3:30 > > > [cut] -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom at openstack.org Tue Apr 14 00:10:31 2015 From: tom at openstack.org (Tom Fifield) Date: Tue, 14 Apr 2015 09:10:31 +0900 Subject: [Openstack-operators] Draft Agenda for the Vancouver Ops Summit Sessions In-Reply-To: <552C3B9B.6040904@gmail.com> References: <552B8D39.2020309@openstack.org> <552C3B9B.6040904@gmail.com> Message-ID: <552C5AF7.2060508@openstack.org> Hi George, You can see the list on the planning etherpad at: https://etherpad.openstack.org/p/YVR-ops-meetup These are not projects, but people talking about their own clouds, in lightning talk form :) Special Edition is a carry-over from last summit where we had an 'upgrades' special edition. I've yet to see whether we have a nice logical grouping for this time. Regards, Tom On 14/04/15 06:56, George Shuklin wrote: > > On 04/13/2015 12:32 PM, Tom Fifield wrote: > > > What kind of projects will be a sessions 'Architecture Show and Tell' > and 'Architecture Show and Tell - Special Edition' about? > > Thanks. > > > On 04/13/2015 12:32 PM, Tom Fifield wrote: > > [cut] >> >> _*General Sessions*_ >> >> Tuesday Big Room 1 Big Room 2 Big Room 3 >> 11:15 - 11:55 Ops Summit "101" / The Story So Far Federation - >> Keystone & other - what do people need? RabbitMQ >> 12:05 - 12:45 How do we fix logging? Architecture Show and Tell >> Ceilometer - what needs fixing?
>> 12:45 - 2:00 >> >> >> >> 2:00 - 2:40 Billing / show back / charge back - how do I do that? >> Architecture Show and Tell - Special Edition Cinder Feedback >> 2:50 - 3:30 >> >> >> > [cut] > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From gokrokvertskhov at mirantis.com Tue Apr 14 00:23:56 2015 From: gokrokvertskhov at mirantis.com (Georgy Okrokvertskhov) Date: Mon, 13 Apr 2015 17:23:56 -0700 Subject: [Openstack-operators] kubernetes services In-Reply-To: <575aaf51c9f9412dab31f785d70f6ddd@CO2PR42MB188.048d.mgd.msft.net> References: <575aaf51c9f9412dab31f785d70f6ddd@CO2PR42MB188.048d.mgd.msft.net> Message-ID: Hi, If you don't see an etcd process, it probably means that you did not initialize an etcd cluster. You need to have etcd properly configured before you install Kubernetes, as Kubernetes uses etcd to distribute its configuration. There are two ways to set up an etcd cluster. The first one requires using some hub which acts as a coordination point between the etcd nodes. The second one is a manual way where you add each etcd member one by one, providing an initialization configuration string as an option on each member. Each time you add an etcd member this init string changes. Here are the bash scripts used in the Kubernetes cluster automation in Murano: https://github.com/stackforge/murano-apps/tree/master/kubernetes/io.murano.apps.docker.kubernetes.KubernetesCluster/Resources/scripts The sequence for etcd is the following: on the etcd master (first VM): master-etcd-setup.sh --== Adding an etcd member ==-- 1) On the master: master-add-member.sh 2) take the init string generated after the script execution 3) On the new member: member-etcd-setup.sh Just take a look at the scripts and you will figure out how to set up a cluster.
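To make the "init string" idea concrete, here is a toy sketch of how the initial-cluster string grows as members are added one at a time. The node names and peer URLs below are invented placeholders, not taken from the Murano scripts:

```shell
# Toy illustration of etcd's one-by-one (manual) bootstrap. Names and
# peer URLs are made-up placeholders.

# The first member bootstraps a brand-new cluster:
CLUSTER="node1=http://10.0.0.1:2380"
FIRST_FLAGS="--initial-cluster $CLUSTER --initial-cluster-state new"

# On the master, a command like `etcdctl member add` registers the new
# member and prints an updated init string; each later member joins
# with state "existing" and the string extended by its own entry:
CLUSTER="$CLUSTER,node2=http://10.0.0.2:2380"
SECOND_FLAGS="--initial-cluster $CLUSTER --initial-cluster-state existing"

echo "first member : etcd $FIRST_FLAGS"
echo "second member: etcd $SECOND_FLAGS"
```

The key point is that only the very first member starts with `--initial-cluster-state new`; every member added afterwards must use the freshly regenerated string and state `existing`, which is why the Murano scripts regenerate the string on the master each time.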
Thanks Gosha On Mon, Apr 13, 2015 at 5:06 AM, wrote: > Hi, > > > > I'm trying to integrate Kubernetes with Openstack and I have been > following the site 'devops.profitbricks.com' for it. The master node is > running services such as kube-apiserver, kube-scheduler, > kube-controller-manager but the etcd service seem to be missing even after > the configuration was done. The services on minions aren't in running state > after the configuration are done. I tried manually starting it with the > service start command and checked for its status but it seems to be in > stop/waiting state. > > > > Could anyone point out what I have to do? Thanks in advance! > > > > Aishwarya Adyanthaya > > > > ------------------------------ > > This message is for the designated recipient only and may contain > privileged, proprietary, or otherwise confidential information. If you have > received it in error, please notify the sender immediately and delete the > original. Any other use of the e-mail by you is prohibited. Where allowed > by local law, electronic communications with Accenture and its affiliates, > including e-mail and instant messaging (including content), may be scanned > by our systems for the purposes of information security and assessment of > internal compliance with Accenture policy. > > ______________________________________________________________________________________ > > www.accenture.com > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From clint at fewbar.com Tue Apr 14 17:21:20 2015 From: clint at fewbar.com (Clint Byrum) Date: Tue, 14 Apr 2015 10:21:20 -0700 Subject: [Openstack-operators] [all] QPID incompatible with python 3 and untested in gate -- what to do? Message-ID: <1429031332-sup-4892@fewbar.com> Hello! There's been some recent progress on python3 compatibility for core libraries that OpenStack depends on[1], and this is likely to open the flood gates for even more python3 problems to be found and fixed. Recently a proposal was made to make oslo.messaging start to run python3 tests[2], and it was found that qpid-python is not python3 compatible yet. This presents us with questions: Is anyone using QPID, and if so, should we add gate testing for it? If not, can we deprecate the driver? In the most recent survey results I could find [3] I don't even see message broker mentioned, whereas Databases in use do vary somewhat. Currently it would appear that only oslo.messaging runs functional tests against QPID. I was unable to locate integration testing for it, but I may not know all of the places to dig around to find that. So, please let us know if QPID is important to you. Otherwise it may be time to unburden ourselves of its maintenance. [1] https://pypi.python.org/pypi/eventlet/0.17.3 [2] https://review.openstack.org/#/c/172135/ [3] http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014 From ozamiatin at mirantis.com Tue Apr 14 17:54:17 2015 From: ozamiatin at mirantis.com (ozamiatin) Date: Tue, 14 Apr 2015 20:54:17 +0300 Subject: [Openstack-operators] Oslo.messaging ZeroMQ driver changes in Liberty Message-ID: <552D5449.2020106@mirantis.com> Hi, Does anyone use any version of ZeroMQ driver in production deployment? If you do, please leave your comments in [1], or reply to this letter. [1] https://review.openstack.org/#/c/171131/ Thanks, Oleksii Zamiatin From jacobgodin at gmail.com Tue Apr 14 17:56:09 2015 From: jacobgodin at gmail.com (Jacob Godin) Date: Tue, 14 Apr 2015 14:56:09 -0300 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways Message-ID: Hi folks, Looking for a bit of advice on how to accomplish something with Neutron. Our setup uses OVS+GRE with isolated tenant networks. Currently, we have one large network servicing our Floating IPs as well as our external router interfaces. What we're looking to do is actually have two distinct networks handle these tasks. One for FIPs and another for routers.
Is this possible? -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Apr 14 18:55:22 2015 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 14 Apr 2015 14:55:22 -0400 Subject: [Openstack-operators] [puppet] OPS meetup in Vancouver Summit Message-ID: <552D629A.7020901@redhat.com> Operators, Your feedback is one of the reasons why our Puppet modules are very popular; we would like to continue to organize a Puppet session during the next OPS meetup at the Vancouver summit. To track everything we need to cover, I created a new etherpad: https://etherpad.openstack.org/p/liberty-summit-ops-puppet Feel free to bring topics based on your feedback, from your operator point of view, on using the Puppet modules for OpenStack. We hold a weekly meeting [1], so we will make sure to track the topics and prepare an agenda *before* the summit. Thanks, [1] https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack#Agenda -- Emilien Macchi -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From amuller at redhat.com Tue Apr 14 21:06:52 2015 From: amuller at redhat.com (Assaf Muller) Date: Tue, 14 Apr 2015 17:06:52 -0400 (EDT) Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: <564711130.21500133.1429045564374.JavaMail.zimbra@redhat.com> References: <564711130.21500133.1429045564374.JavaMail.zimbra@redhat.com> Message-ID: <1510428524.21500280.1429045612339.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Hi folks, > > Looking for a bit of advice on how to accomplish something with Neutron. Our > setup uses OVS+GRE with isolated tenant networks. Currently, we have one > large network servicing our Floating IPs as well as our external router > interfaces. > > What we're looking to do is actually have two distinct networks handle these > tasks.
One for FIPs and another for routers. > > Is this possible? Every router is allocated an IP address on the same network as the floating IPs it serves. This is unavoidable at this time. I don't see how you could work around this and separate the two. Can you expand on what you're trying to accomplish and why? There's work going on in this area planned for Liberty and it would be interesting to hear your use case. > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From blak111 at gmail.com Tue Apr 14 21:08:43 2015 From: blak111 at gmail.com (Kevin Benton) Date: Tue, 14 Apr 2015 14:08:43 -0700 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: References: Message-ID: Probably not in the way you want. You can have two subnets on the external network. Then you would set the allocation pool to be nothing for the subnet that you want the router interfaces to be attached to to make sure floating IPs aren't allocated from it. Then whenever you attach a router interface you would need to explicitly set the IP address to something from the first subnet. This obviously won't work well if you want it to automatically happen for tenants, but it might work if you are setting up the infrastructure yourself. On Tue, Apr 14, 2015 at 10:56 AM, Jacob Godin wrote: > Hi folks, > > Looking for a bit of advice on how to accomplish something with Neutron. > Our setup uses OVS+GRE with isolated tenant networks. Currently, we have > one large network servicing our Floating IPs as well as our external router > interfaces. > > What we're looking to do is actually have two distinct networks handle > these tasks. One for FIPs and another for routers. > > Is this possible? 
> > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- Kevin Benton -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacobgodin at gmail.com Tue Apr 14 21:12:35 2015 From: jacobgodin at gmail.com (Jacob Godin) Date: Tue, 14 Apr 2015 18:12:35 -0300 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: References: Message-ID: Thanks Kevin. That might work in some instances, however our tenants have the ability to create their own routers and allocate their gateway. I suppose we could hack some code in to restrict what networks are usable for routers vs floating IPs. On Tue, Apr 14, 2015 at 6:08 PM, Kevin Benton wrote: > Probably not in the way you want. > > You can have two subnets on the external network. Then you would set the > allocation pool to be nothing for the subnet that you want the router > interfaces to be attached to to make sure floating IPs aren't allocated > from it. Then whenever you attach a router interface you would need to > explicitly set the IP address to something from the first subnet. > > This obviously won't work well if you want it to automatically happen for > tenants, but it might work if you are setting up the infrastructure > yourself. > > On Tue, Apr 14, 2015 at 10:56 AM, Jacob Godin > wrote: > >> Hi folks, >> >> Looking for a bit of advice on how to accomplish something with Neutron. >> Our setup uses OVS+GRE with isolated tenant networks. Currently, we have >> one large network servicing our Floating IPs as well as our external router >> interfaces. >> >> What we're looking to do is actually have two distinct networks handle >> these tasks. One for FIPs and another for routers. >> >> Is this possible? 
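Kevin's two-subnet suggestion could look roughly like the following with the Juno-era neutron CLI. The network names, CIDRs and pool ranges are invented for illustration, and the exact flags vary between releases, so treat this as a sketch rather than a recipe:

```shell
# Hypothetical sketch of the two-subnet approach; names and CIDRs are
# made-up examples, not from this thread.

# One external network with two subnets:
neutron net-create ext-net --router:external True

# Subnet A serves floating IPs through its normal allocation pool:
neutron subnet-create ext-net 203.0.113.0/24 --name fip-subnet \
    --disable-dhcp --allocation-pool start=203.0.113.10,end=203.0.113.250

# Subnet B is meant for router gateway ports only, so its allocation
# pool is shrunk to a single address here; the REST API also accepts an
# empty allocation_pools list outright:
neutron subnet-create ext-net 198.51.100.0/24 --name router-subnet \
    --disable-dhcp --allocation-pool start=198.51.100.2,end=198.51.100.2

# Router gateways then have to land on subnet B. The CLI of that era
# could not pin the gateway IP directly, which is the manual,
# admin-driven part Kevin alludes to:
neutron router-gateway-set tenant-router ext-net
```

As Kevin notes, this does not happen automatically for tenant-created routers, which is exactly the limitation Jacob runs into below.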
>> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> > > > -- > Kevin Benton > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacobgodin at gmail.com Tue Apr 14 21:12:48 2015 From: jacobgodin at gmail.com (Jacob Godin) Date: Tue, 14 Apr 2015 18:12:48 -0300 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: <1510428524.21500280.1429045612339.JavaMail.zimbra@redhat.com> References: <564711130.21500133.1429045564374.JavaMail.zimbra@redhat.com> <1510428524.21500280.1429045612339.JavaMail.zimbra@redhat.com> Message-ID: Absolutely. We're trying to reduce our public IPv4 usage, so having one per tenant network (not even including floating IPs) is a drain. Instead of having: instance -> (gateway IP) virtual router NAT (public IP) -> (public gateway) router We want to have: instance -> (gateway IP) virtual router NAT (private IP) -> (private gateway) router NAT We have lots of networks, so this would have a huge impact on our IP usage. Also, in this case, floating IP addresses would still work the same way that they do now. Thanks Assaf On Tue, Apr 14, 2015 at 6:06 PM, Assaf Muller wrote: > ----- Original Message ----- > > Hi folks, > > > > Looking for a bit of advice on how to accomplish something with Neutron. > Our > > setup uses OVS+GRE with isolated tenant networks. Currently, we have one > > large network servicing our Floating IPs as well as our external router > > interfaces. > > > > What we're looking to do is actually have two distinct networks handle > these > > tasks. One for FIPs and another for routers. > > > > Is this possible? > > Every router is allocated an IP address on the same network as the > floating IPs it serves. > This is unavoidable at this time. I don't see how you could work around > this and separate > the two.
Can you expand on what you're trying to accomplish and why? > There's work going > on in this area planned for Liberty and it would be interesting to hear > your use case. > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mspreitz at us.ibm.com Tue Apr 14 21:22:19 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Tue, 14 Apr 2015 17:22:19 -0400 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: References: <564711130.21500133.1429045564374.JavaMail.zimbra@redhat.com> <1510428524.21500280.1429045612339.JavaMail.zimbra@redhat.com> Message-ID: Jacob Godin wrote on 04/14/2015 05:12:48 PM: > Absolutely. We're trying to reduce our public IPv4 usage, so having > one per tenant network (not even including floating IPs) is a drain. I am having exactly the same issue. I am currently solving it with a different hack that nobody likes, I will not even describe it here. But total agreement that the problem is important. IPv6 is the ultimate answer, provided there is a reasonably smooth transition. I think we will need to support a tenant that is using both v4 and v6 during his transition. This will require NAT between a tenant's v4 and v6. Regards, Mike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jacobgodin at gmail.com Tue Apr 14 21:24:03 2015 From: jacobgodin at gmail.com (Jacob Godin) Date: Tue, 14 Apr 2015 18:24:03 -0300 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: References: <564711130.21500133.1429045564374.JavaMail.zimbra@redhat.com> <1510428524.21500280.1429045612339.JavaMail.zimbra@redhat.com> Message-ID: Hey Mike, Would you send along your solution off-list? I'm curious, and I won't judge :) On Tue, Apr 14, 2015 at 6:22 PM, Mike Spreitzer wrote: > Jacob Godin wrote on 04/14/2015 05:12:48 PM: > > > Absolutely. We're trying to reduce our public IPv4 usage, so having > > one per tenant network (not even including floating IPs) is a drain. > > I am having exactly the same issue. I am currently solving it with a > different hack that nobody likes, I will not even describe it here. But > total agreement that the problem is important. > > IPv6 is the ultimate answer, provided there is a reasonably smooth > transition. I think we will need to support a tenant that is using both v4 > and v6 during his transition. This will require NAT between a tenant's v4 > and v6. > > Regards, > Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvanwink at rackspace.com Wed Apr 15 03:04:14 2015 From: mvanwink at rackspace.com (Matt Van Winkle) Date: Wed, 15 Apr 2015 03:04:14 +0000 Subject: [Openstack-operators] [Large Deployments] UPDATE: Meeting this Thursday at 16:00 UTC #openstack-meeting Message-ID: Hello again, folks! I went over all the schedules at: https://wiki.openstack.org/wiki/Meetings And for the even months (Feb, April, June, etc.), #openstack-meeting was open at 16:00 UTC. So, I've updated the above to include our meeting and added: https://wiki.openstack.org/wiki/Meetings/LDT I still need to put our previous minutes in there. I'll work with folks to get this added to the feed and we'll try to pick an official time for the APAC-friendly time slot for the ODD months. We only met in January at the alternating time because March was the mid-cycle. We'll need to decide on Thursday if we want to try and have a May meeting with the Summit going on or wait till July to have the next alternate meeting time. The main focus of this meeting will be a debrief from the mid-cycle and prep for Vancouver. Also, I was wrong earlier: for those of us that have Daylight savings, this will be an hour LATER than usual. (For example, 11:00am Central versus 10.) Please excuse my thinking backwards in the last note. See you all on Thursday! Matt PS - I've gone over the meeting page several times, but please let me know if I just completely missed a conflict on rooms. From: Matt Van Winkle > Date: Sunday, April 12, 2015 11:30 PM To: "openstack-operators at lists.openstack.org" > Subject: [Openstack-operators] [Large Deployment Team] Meeting this Thursday at 16:00 UTC Hello all, Just a reminder that we have a meeting this Thursday at 16:00 UTC. (Reminder to some of us, this makes it an hour earlier :) ). I'm working on confirming the room after the confusion a couple of months ago and will update tomorrow with more details. We will discuss the outputs from the session at the mid-cycle and determine what we want to accomplish in Vancouver. I'll touch base tomorrow, but wanted to get this on your calendars. Thanks! Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From comnea.dani at gmail.com Wed Apr 15 06:33:25 2015 From: comnea.dani at gmail.com (Daniel Comnea) Date: Wed, 15 Apr 2015 07:33:25 +0100 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: References: <564711130.21500133.1429045564374.JavaMail.zimbra@redhat.com> <1510428524.21500280.1429045612339.JavaMail.zimbra@redhat.com> Message-ID: Mike, pls share the solution, some are interested even if it is a hack, as long as it gets the job done.
On Tue, Apr 14, 2015 at 10:24 PM, Jacob Godin wrote: > Hey Mike, > > Would you send along your solution off-list? I'm curious, and I won't > judge :) > > On Tue, Apr 14, 2015 at 6:22 PM, Mike Spreitzer > wrote: > >> Jacob Godin wrote on 04/14/2015 05:12:48 PM: >> >> > Absolutely. We're trying to reduce our public IPv4 usage, so having >> > one per tenant network (not even including floating IPs) is a drain. >> >> I am having exactly the same issue. I am currently solving it with a >> different hack that nobody likes, I will not even describe it here. But >> total agreement that the problem is important. >> >> IPv6 is the ultimate answer, provided there is a reasonably smooth >> transition. I think we will need to support a tenant that is using both v4 >> and v6 during his transition. This will require NAT between a tenant's v4 >> and v6. >> >> Regards, >> Mike > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mspreitz at us.ibm.com Wed Apr 15 07:13:53 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Wed, 15 Apr 2015 03:13:53 -0400 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: References: <564711130.21500133.1429045564374.JavaMail.zimbra@redhat.com> <1510428524.21500280.1429045612339.JavaMail.zimbra@redhat.com> Message-ID: > From: Daniel Comnea > To: Jacob Godin > Cc: Mike Spreitzer/Watson/IBM at IBMUS, OpenStack Operators operators at lists.openstack.org> > Date: 04/15/2015 02:34 AM > Subject: Re: [Openstack-operators] [Neutron] Floating IPs / Router Gateways > Sent by: daniel.comnea at gmail.com > > Mike, pls share the solution, some are interested even if is a hack > as long as it gets the job done. 
> > > On Tue, Apr 14, 2015 at 10:24 PM, Jacob Godin wrote: > Hey Mike, > > Would you send along your solution off-list? I'm curious, and I won't judge :) > > On Tue, Apr 14, 2015 at 6:22 PM, Mike Spreitzer wrote: > Jacob Godin wrote on 04/14/2015 05:12:48 PM: > > > Absolutely. We're trying to reduce our public IPv4 usage, so having > > one per tenant network (not even including floating IPs) is a drain. > > I am having exactly the same issue. I am currently solving it with > a different hack that nobody likes, I will not even describe it > here. But total agreement that the problem is important. > > IPv6 is the ultimate answer, provided there is a reasonably smooth > transition. I think we will need to support a tenant that is using > both v4 and v6 during his transition. This will require NAT between > a tenant's v4 and v6. > > Regards, > Mike OK, you asked for it. What we do is share Neutron routers, and add some iptables rules that prevent communication between the tenants sharing a router. I told you it was a hack. Regards, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacobgodin at gmail.com Wed Apr 15 12:36:12 2015 From: jacobgodin at gmail.com (Jacob Godin) Date: Wed, 15 Apr 2015 09:36:12 -0300 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: References: <564711130.21500133.1429045564374.JavaMail.zimbra@redhat.com> <1510428524.21500280.1429045612339.JavaMail.zimbra@redhat.com> Message-ID: Ah, gotcha. So you're not using overlapping subnets then. Unfortunately that hack wouldn't work in our environment, but definitely something that others might consider using. 
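The shared-router hack Mike describes can be sketched roughly as below. This is a hypothetical illustration, not Mike's actual rules: the subnets, chain, and rule layout are assumptions. The idea is simply that when several tenants share one Neutron router, pairwise FORWARD-chain DROP rules keep their (non-overlapping) subnets from reaching each other:

```python
# Hypothetical sketch of the shared-router isolation hack: several tenants
# share one Neutron router, and iptables FORWARD rules drop traffic between
# their subnets. CIDRs and chain name are made up for illustration.
from itertools import permutations

def isolation_rules(tenant_cidrs, chain="FORWARD"):
    """Build iptables commands that drop traffic between tenant subnets."""
    rules = []
    for src, dst in permutations(tenant_cidrs, 2):
        rules.append("iptables -I %s -s %s -d %s -j DROP" % (chain, src, dst))
    return rules

if __name__ == "__main__":
    # Two tenants sharing one router (example subnets)
    for rule in isolation_rules(["10.1.0.0/24", "10.2.0.0/24"]):
        print(rule)
```

As Jacob points out in his reply, this only works when the tenant subnets do not overlap, since the router has a single routing table.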
On Wed, Apr 15, 2015 at 4:13 AM, Mike Spreitzer wrote: > > From: Daniel Comnea > > To: Jacob Godin > > Cc: Mike Spreitzer/Watson/IBM at IBMUS, OpenStack Operators > operators at lists.openstack.org> > > Date: 04/15/2015 02:34 AM > > Subject: Re: [Openstack-operators] [Neutron] Floating IPs / Router > Gateways > > Sent by: daniel.comnea at gmail.com > > > > Mike, pls share the solution, some are interested even if is a hack > > as long as it gets the job done. > > > > > > > On Tue, Apr 14, 2015 at 10:24 PM, Jacob Godin > wrote: > > Hey Mike, > > > > Would you send along your solution off-list? I'm curious, and I won't > judge :) > > > > On Tue, Apr 14, 2015 at 6:22 PM, Mike Spreitzer > wrote: > > Jacob Godin wrote on 04/14/2015 05:12:48 PM: > > > > > Absolutely. We're trying to reduce our public IPv4 usage, so having > > > one per tenant network (not even including floating IPs) is a drain. > > > > I am having exactly the same issue. I am currently solving it with > > a different hack that nobody likes, I will not even describe it > > here. But total agreement that the problem is important. > > > > IPv6 is the ultimate answer, provided there is a reasonably smooth > > transition. I think we will need to support a tenant that is using > > both v4 and v6 during his transition. This will require NAT between > > a tenant's v4 and v6. > > > > Regards, > > Mike > > OK, you asked for it. What we do is share Neutron routers, and add some > iptables rules that prevent communication between the tenants sharing a > router. I told you it was a hack. > > Regards, > Mike > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From boris at pavlovic.me Wed Apr 15 12:45:18 2015 From: boris at pavlovic.me (Boris Pavlovic) Date: Wed, 15 Apr 2015 15:45:18 +0300 Subject: [Openstack-operators] [openstack-dev][openstack-ops][rally][announce] What's new in Rally v0.0.3 Message-ID: Hello, Rally team is happy to say that we cut new release 0.0.3. 
*Release stats:* +------------------+-----------------+ | Commits | 53 | +------------------+-----------------+ | Bug fixes | 14 | +------------------+-----------------+ | Dev cycle | 33 days | +------------------+-----------------+ | Release date | 14/Apr/2015 | +------------------+-----------------+ | New scenarios | 11 | +------------------+-----------------+ | New SLAs | 2 | +------------------+-----------------+ *New features:* - Add the ability to specify versions for clients in benchmark scenarios. You can call self.clients("glance", "2") and get a client initialized for a specific API version. - Add an API for tempest uninstall: $ rally-manage tempest uninstall # fully removes tempest for the active deployment - Add a --uuids-only option to rally task list: $ rally task list --uuids-only # returns a list with only task uuids - Add endpoint support to --fromenv deployment creation: $ rally deployment create --fromenv # recognizes the standard OS_ENDPOINT environment variable - Configure SSL per deployment. SSL information is now deployment specific rather than Rally specific, and the rally.conf option is deprecated. Take a look at the sample. For more details take a look at the release notes: http://boris-42.me/rally-v0-0-3-whats-new/ or here: https://rally.readthedocs.org/en/latest/release_notes/v0.0.3.html Best regards, Boris Pavlovic -------------- next part -------------- An HTML attachment was scrubbed... URL: From comnea.dani at gmail.com Wed Apr 15 12:50:03 2015 From: comnea.dani at gmail.com (Daniel Comnea) Date: Wed, 15 Apr 2015 13:50:03 +0100 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: References: <564711130.21500133.1429045564374.JavaMail.zimbra@redhat.com> <1510428524.21500280.1429045612339.JavaMail.zimbra@redhat.com> Message-ID: Sure, but appreciate your response. Dani On Wed, Apr 15, 2015 at 1:36 PM, Jacob Godin wrote: > Ah, gotcha. So you're not using overlapping subnets then.
> > Unfortunately that hack wouldn't work in our environment, but definitely > something that others might consider using. > > On Wed, Apr 15, 2015 at 4:13 AM, Mike Spreitzer > wrote: > >> > From: Daniel Comnea >> > To: Jacob Godin >> > Cc: Mike Spreitzer/Watson/IBM at IBMUS, OpenStack Operators > > operators at lists.openstack.org> >> > Date: 04/15/2015 02:34 AM >> > Subject: Re: [Openstack-operators] [Neutron] Floating IPs / Router >> Gateways >> > Sent by: daniel.comnea at gmail.com >> > >> > Mike, pls share the solution, some are interested even if is a hack >> > as long as it gets the job done. >> > >> >> > >> > On Tue, Apr 14, 2015 at 10:24 PM, Jacob Godin >> wrote: >> > Hey Mike, >> > >> > Would you send along your solution off-list? I'm curious, and I won't >> judge :) >> > >> > On Tue, Apr 14, 2015 at 6:22 PM, Mike Spreitzer >> wrote: >> > Jacob Godin wrote on 04/14/2015 05:12:48 PM: >> > >> > > Absolutely. We're trying to reduce our public IPv4 usage, so having >> > > one per tenant network (not even including floating IPs) is a drain. >> > >> > I am having exactly the same issue. I am currently solving it with >> > a different hack that nobody likes, I will not even describe it >> > here. But total agreement that the problem is important. >> > >> > IPv6 is the ultimate answer, provided there is a reasonably smooth >> > transition. I think we will need to support a tenant that is using >> > both v4 and v6 during his transition. This will require NAT between >> > a tenant's v4 and v6. >> > >> > Regards, >> > Mike >> >> OK, you asked for it. What we do is share Neutron routers, and add some >> iptables rules that prevent communication between the tenants sharing a >> router. I told you it was a hack. >> >> Regards, >> Mike >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From clint at fewbar.com Wed Apr 15 21:26:38 2015 From: clint at fewbar.com (Clint Byrum) Date: Wed, 15 Apr 2015 14:26:38 -0700 Subject: [Openstack-operators] [all] QPID incompatible with python 3 and untested in gate -- what to do? In-Reply-To: <1429031332-sup-4892@fewbar.com> References: <1429031332-sup-4892@fewbar.com> Message-ID: <1429132713-sup-2744@fewbar.com> I sent this to the dev list as well, and a nice discussion broke out: http://lists.openstack.org/pipermail/openstack-dev/2015-April/061467.html Given that discussion, I've added a spec for review to help clarify messaging backends: https://review.openstack.org/174105 Excerpts from Clint Byrum's message of 2015-04-14 10:21:20 -0700: > Hello! There's been some recent progress on python3 compatibility for > core libraries that OpenStack depends on[1], and this is likely to open > the flood gates for even more python3 problems to be found and fixed. > > Recently a proposal was made to make oslo.messaging start to run python3 > tests[2], and it was found that qpid-python is not python3 compatible yet. > > This presents us with questions: Is anyone using QPID, and if so, should > we add gate testing for it? If not, can we deprecate the driver? In the > most recent survey results I could find [3] I don't even see message > broker mentioned, whereas Databases in use do vary somewhat. > > Currently it would appear that only oslo.messaging runs functional tests > against QPID. I was unable to locate integration testing for it, but I > may not know all of the places to dig around to find that. > > So, please let us know if QPID is important to you. Otherwise it may be > time to unburden ourselves of its maintenance. 
> > [1] https://pypi.python.org/pypi/eventlet/0.17.3 > [2] https://review.openstack.org/#/c/172135/ > [3] http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014 From mismith at overstock.com Wed Apr 15 23:33:04 2015 From: mismith at overstock.com (Mike Smith) Date: Wed, 15 Apr 2015 23:33:04 +0000 Subject: [Openstack-operators] No more identity groups? Message-ID: Hello all. On my Havana-version openstack cloud, Horizon has the option of creating groups of users and then adding groups of users to tenants instead of having to add users individually. On my Juno-version cloud, this feature seems to no longer exist. I checked the keystone catalog and both are using v2.0 of the identity API. Does anybody know if this is something that was removed, or if there is something I need to do to enable this identity group functionality? Thanks, Mike ________________________________ CONFIDENTIALITY NOTICE: This message is intended only for the use and review of the individual or entity to which it is addressed and may contain information that is privileged and confidential. If the reader of this message is not the intended recipient, or the employee or agent responsible for delivering the message solely to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately by telephone or return email. Thank you. From Kevin.Fox at pnnl.gov Wed Apr 15 23:47:44 2015 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 15 Apr 2015 23:47:44 +0000 Subject: [Openstack-operators] No more identity groups? In-Reply-To: References: Message-ID: <1A3C52DFCD06494D8528644858247BF01A1D3008@EX10MBOX03.pnnl.gov> It's part of the keystone v3 API, I think. I've only seen it show up when I configure horizon to specifically talk v3 to keystone.
Thanks, Kevin ________________________________________ From: Mike Smith [mismith at overstock.com] Sent: Wednesday, April 15, 2015 4:33 PM To: Subject: [Openstack-operators] No more identity groups? Hello all. On my Havana-version openstack cloud, Horizon has the option of creating groups of users and then adding groups of users to tenants instead of having to add users individually. On my Juno-version cloud, this feature seems to no longer exist. I checked the keystone catalog and both are using v2.0 of the identify API. Does anybody know if this is something that we removed, or if there is something I need to do to enable this identity group functionality? Thanks, Mike ________________________________ CONFIDENTIALITY NOTICE: This message is intended only for the use and review of the individual or entity to which it is addressed and may contain information that is privileged and confidential. If the reader of this message is not the intended recipient, or the employee or agent responsible for delivering the message solely to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately by telephone or return email. Thank you. _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From matt at mattfischer.com Thu Apr 16 01:23:24 2015 From: matt at mattfischer.com (Matt Fischer) Date: Wed, 15 Apr 2015 19:23:24 -0600 Subject: [Openstack-operators] logging for Keystone on user/project delete/create operations Message-ID: I'd like to have some better logging when certain CRUD operations happen in Keystone, for example, when a project is deleted. I specifically mean "any" when I say better since right now I'm not seeing anything even when Verbose is enabled. 
This is pretty frustrating for me because these are rather important events, certainly more important than my load balancers hitting Keystone which it's happily logging twice a second. I know that Keystone supports some audit event notifications [1]. Can I simply have these reflect back into the main logs somehow? [1] - http://docs.openstack.org/developer/keystone/event_notifications.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From mismith at overstock.com Thu Apr 16 01:49:30 2015 From: mismith at overstock.com (Mike Smith) Date: Thu, 16 Apr 2015 01:49:30 +0000 Subject: [Openstack-operators] No more identity groups? In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A1D3008@EX10MBOX03.pnnl.gov> References: <1A3C52DFCD06494D8528644858247BF01A1D3008@EX10MBOX03.pnnl.gov> Message-ID: <912B7924-0CC1-4A78-9AD6-D49CAAAD07B1@overstock.com> Thanks. I made the following changes to the local_settings file for horizon and now the groups show up: OPENSTACK_API_VERSIONS = { "identity": 3 } OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST > On Apr 15, 2015, at 5:47 PM, Fox, Kevin M wrote: > > Its part of the keystone v3 api I think. I've only seen it show up when I configure horizon to specifically talk v3 to keystone. > > Thanks, > Kevin > ________________________________________ > From: Mike Smith [mismith at overstock.com] > Sent: Wednesday, April 15, 2015 4:33 PM > To: > Subject: [Openstack-operators] No more identity groups? > > Hello all. On my Havana-version openstack cloud, Horizon has the option of creating groups of users and then adding groups of users to tenants instead of having to add users individually. On my Juno-version cloud, this feature seems to no longer exist. I checked the keystone catalog and both are using v2.0 of the identify API. > > Does anybody know if this is something that we removed, or if there is something I need to do to enable this identity group functionality? 
> > Thanks, > Mike > > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ________________________________ CONFIDENTIALITY NOTICE: This message is intended only for the use and review of the individual or entity to which it is addressed and may contain information that is privileged and confidential. If the reader of this message is not the intended recipient, or the employee or agent responsible for delivering the message solely to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately by telephone or return email. Thank you. _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From mangelajo at redhat.com Thu Apr 16 05:10:59 2015 From: mangelajo at redhat.com (Miguel Angel Ajo Pelayo) Date: Thu, 16 Apr 2015 07:10:59 +0200 Subject: [Openstack-operators] logging for Keystone on user/project delete/create operations In-Reply-To: References: Message-ID: I'm not involved in the keystone project, but I'd recommend you start by filing a blueprint asking for it, and explaining what you just said here: https://blueprints.launchpad.net/keystone I'd also try to contact the Keystone PTL (I'm not sure who is the PTL). Best regards, Miguel Ángel > On 16/4/2015, at 3:23, Matt Fischer wrote: > > I'd like to have some better logging when certain CRUD operations happen in Keystone, for example, when a project is deleted. I specifically mean "any" when I say better since right now I'm not seeing anything even when Verbose is enabled. > > This is pretty frustrating for me because these are rather important events, certainly more important than my load balancers hitting Keystone which it's happily logging twice a second. > > I know that Keystone supports some audit event notifications [1]. Can I simply have these reflect back into the main logs somehow?
> > > [1] - http://docs.openstack.org/developer/keystone/event_notifications.html _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators Miguel Angel Ajo -------------- next part -------------- An HTML attachment was scrubbed... URL: From comnea.dani at gmail.com Thu Apr 16 07:18:16 2015 From: comnea.dani at gmail.com (Daniel Comnea) Date: Thu, 16 Apr 2015 08:18:16 +0100 Subject: [Openstack-operators] No more identity groups? In-Reply-To: <912B7924-0CC1-4A78-9AD6-D49CAAAD07B1@overstock.com> References: <1A3C52DFCD06494D8528644858247BF01A1D3008@EX10MBOX03.pnnl.gov> <912B7924-0CC1-4A78-9AD6-D49CAAAD07B1@overstock.com> Message-ID: Am I right in thinking v3 is not in Icehouse, hence the above trick won't work? On Thu, Apr 16, 2015 at 2:49 AM, Mike Smith wrote: > Thanks. I made the following changes to the local_settings file for > horizon and now the groups show up: > > OPENSTACK_API_VERSIONS = { > "identity": 3 > } > > OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST > > > > On Apr 15, 2015, at 5:47 PM, Fox, Kevin M wrote: > > > > It's part of the keystone v3 API, I think. I've only seen it show up when > I configure horizon to specifically talk v3 to keystone. > > > > Thanks, > > Kevin > > ________________________________________ > > From: Mike Smith [mismith at overstock.com] > > Sent: Wednesday, April 15, 2015 4:33 PM > > To: > > Subject: [Openstack-operators] No more identity groups? > > > > Hello all. On my Havana-version openstack cloud, Horizon has the > option of creating groups of users and then adding groups of users to > tenants instead of having to add users individually. On my Juno-version > cloud, this feature seems to no longer exist. I checked the keystone > catalog and both are using v2.0 of the identity API.
> > Does anybody know if this is something that we removed, or if there is > something I need to do to enable this identity group functionality? > > Thanks, > Mike > > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > ________________________________ > CONFIDENTIALITY NOTICE: This message is intended only for the use and > review of the individual or entity to which it is addressed and may contain > information that is privileged and confidential. If the reader of this > message is not the intended recipient, or the employee or agent responsible > for delivering the message solely to the intended recipient, you are hereby > notified that any dissemination, distribution or copying of this > communication is strictly prohibited. If you have received this > communication in error, please notify sender immediately by telephone or > return email. Thank you. > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Thu Apr 16 10:10:41 2015 From: Tim.Bell at cern.ch (Tim Bell) Date: Thu, 16 Apr 2015 10:10:41 +0000 Subject: [Openstack-operators] OpenStack HPC/HTC operator session Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E501029CEA09@CERNXCHG43.cern.ch> There is an opportunity for a get-together of the people running OpenStack for HPC/HTC workloads at Vancouver. Getting the best out of the cloud while at the same time meeting the needs of the user communities is a challenging job, so sharing experiences for an hour would be really useful. We're collecting +1s for attendance on openstack-hpc at lists.openstack.org so please reply there if you'd be interested.
Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From dstanek at dstanek.com Thu Apr 16 11:56:30 2015 From: dstanek at dstanek.com (David Stanek) Date: Thu, 16 Apr 2015 07:56:30 -0400 Subject: [Openstack-operators] logging for Keystone on user/project delete/create operations In-Reply-To: References: Message-ID: On Thu, Apr 16, 2015 at 1:10 AM, Miguel Angel Ajo Pelayo < mangelajo at redhat.com> wrote: > I'm not involved in the keystone project, but I'd recommend you start > by filing a blueprint > asking for it, and explaining what you just said here: > > https://blueprints.launchpad.net/keystone > Adding a blueprint for discussion would be a good idea if you think you want a change to the project. > I'd also try to contact the Keystone PTL (I'm not sure who is the PTL). > Morgan Fainberg is our PTL. > > Best regards, > Miguel Ángel > > On 16/4/2015, at 3:23, Matt Fischer wrote: > > I'd like to have some better logging when certain CRUD operations happen > in Keystone, for example, when a project is deleted. I specifically mean > "any" when I say better since right now I'm not seeing anything even when > Verbose is enabled. > > This is pretty frustrating for me because these are rather important > events, certainly more important than my load balancers hitting Keystone > which it's happily logging twice a second. > > I know that Keystone supports some audit event notifications [1]. Can I > simply have these reflect back into the main logs somehow? > > It would be possible (and trivial) to add logging messages at the INFO level, but I'm not sure that is what you really want. I don't know much about the operational side at this point, but I'm hoping that there's a way to consume the notification events and then write them to a log if that's what you wish to do.
> > [1] - > http://docs.openstack.org/developer/keystone/event_notifications.html > > -- David blog: http://www.traceback.org twitter: http://twitter.com/dstanek www: http://dstanek.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Thu Apr 16 14:50:43 2015 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Thu, 16 Apr 2015 07:50:43 -0700 Subject: [Openstack-operators] logging for Keystone on user/project delete/create operations In-Reply-To: References: Message-ID: <8E86A734-2D7D-431B-BD17-349BCFE9A833@gmail.com> > On Apr 16, 2015, at 04:56, David Stanek wrote: > > > >> On Thu, Apr 16, 2015 at 1:10 AM, Miguel Angel Ajo Pelayo wrote: >> I?m not involved in the keystone project, but I?d recommend you to start by filling a blueprint >> asking for it, and explaining what you just said here: >> >> https://blueprints.launchpad.net/keystone > > Adding a blueprint for discussion would be a good idea if you think you want a change to the project. > > >> >> I?d also try to contact Keystone PTL (I?m not sure who is the PTL). > > Morgan Fainberg is out PTL. > > >> >> Best regards, >> Miguel ?ngel >> >>> On 16/4/2015, at 3:23, Matt Fischer wrote: >>> >>> I'd like to have some better logging when certain CRUD operations happen in Keystone, for example, when a project is deleted. I specifically mean "any" when I say better since right now I'm not seeing anything even when Verbose is enabled. >>> >>> This is pretty frustrating for me because these are rather important events, certainly more important than my load balancers hitting Keystone which it's happily logging twice a second. >>> >>> I know that Keystone supports some audit event notifications [1]. Can I simply have these reflect back into the main logs somehow? > > It would be possible (and trivial) to add logging messages at the INFO level, but I'm not sure that is what you really want. 
I don't know much about the operational side at this point, but I'm hoping that there's a way to consume the notification events and then write them to a log if that's what you wish to do. > It wouldn't hurt us to see logging expanded upon. As long as the logging conforms to the cross-project logging spec[1]. [1] https://github.com/openstack/openstack-specs/blob/master/specs/log-guidelines.rst --Morgan Sent via mobile -------------- next part -------------- An HTML attachment was scrubbed... URL: From gord at live.ca Thu Apr 16 14:58:14 2015 From: gord at live.ca (gordon chung) Date: Thu, 16 Apr 2015 10:58:14 -0400 Subject: [Openstack-operators] logging for Keystone on user/project delete/create operations In-Reply-To: <8E86A734-2D7D-431B-BD17-349BCFE9A833@gmail.com> References: , , , <8E86A734-2D7D-431B-BD17-349BCFE9A833@gmail.com> Message-ID: _______________________________ > From: morgan.fainberg at gmail.com > Date: Thu, 16 Apr 2015 07:50:43 -0700 > To: dstanek at dstanek.com > CC: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] logging for Keystone on user/project > delete/create operations > > > > On Apr 16, 2015, at 04:56, David Stanek > > wrote: > > > > On Thu, Apr 16, 2015 at 1:10 AM, Miguel Angel Ajo Pelayo > > wrote: > I'm not involved in the keystone project, but I'd recommend you start > by filing a blueprint > asking for it, and explaining what you just said here: > > https://blueprints.launchpad.net/keystone > > Adding a blueprint for discussion would be a good idea if you think you > want a change to the project. > > > > I'd also try to contact the Keystone PTL (I'm not sure who is the PTL). > > Morgan Fainberg is our PTL. > > > > Best regards, > Miguel Ángel > > On 16/4/2015, at 3:23, Matt Fischer > > wrote: > > I'd like to have some better logging when certain CRUD operations > happen in Keystone, for example, when a project is deleted. I
> specifically mean "any" when I say better since right now I'm not > seeing anything even when Verbose is enabled. > > This is pretty frustrating for me because these are rather important > events, certainly more important than my load balancers hitting > Keystone which it's happily logging twice a second. > > I know that Keystone supports some audit event notifications [1]. Can I > simply have these reflect back into the main logs somehow? > > It would be possible (and trivial) to add logging messages at the INFO > level, but I'm not sure that is what you really want. I don't know much > about the operational side at this point, but I'm hoping that there's a > way to consume the notification events and then write them to a log if > that's what you wish to do. Ceilometer listens to these notifications currently and it's possible to write them to a file rather than a database. A lot of this functionality was worked on in Kilo, but there may be a way to support this in Juno and Icehouse (disclaimer: may require some patching and even more patching, respectively). cheers, gord From jlk at bluebox.net Thu Apr 16 16:50:31 2015 From: jlk at bluebox.net (Jesse Keating) Date: Thu, 16 Apr 2015 09:50:31 -0700 Subject: [Openstack-operators] logging for Keystone on user/project delete/create operations In-Reply-To: References: <8E86A734-2D7D-431B-BD17-349BCFE9A833@gmail.com> Message-ID: Standing up Ceilometer (and patching things) just to be able to log this stuff to a file seems rather... heavy-handed? We understand that these things are emitted via notifications, but as of right now trying to do anything with those notifications such as simply logging them requires too much additional infrastructure.
- jlk On Thu, Apr 16, 2015 at 7:58 AM, gordon chung wrote: > > > _______________________________ > > From: morgan.fainberg at gmail.com > > Date: Thu, 16 Apr 2015 07:50:43 -0700 > > To: dstanek at dstanek.com > > CC: openstack-operators at lists.openstack.org > > Subject: Re: [Openstack-operators] logging for Keystone on user/project > > delete/create operations > > > > > > > > On Apr 16, 2015, at 04:56, David Stanek > > > wrote: > > > > > > > > On Thu, Apr 16, 2015 at 1:10 AM, Miguel Angel Ajo Pelayo > > > wrote: > > I?m not involved in the keystone project, but I?d recommend you to > > start by filling a blueprint > > asking for it, and explaining what you just said here: > > > > https://blueprints.launchpad.net/keystone > > > > Adding a blueprint for discussion would be a good idea if you think you > > want a change to the project. > > > > > > > > I?d also try to contact Keystone PTL (I?m not sure who is the PTL). > > > > Morgan Fainberg is out PTL. > > > > > > > > Best regards, > > Miguel ?ngel > > > > On 16/4/2015, at 3:23, Matt Fischer > > > wrote: > > > > I'd like to have some better logging when certain CRUD operations > > happen in Keystone, for example, when a project is deleted. I > > specifically mean "any" when I say better since right now I'm not > > seeing anything even when Verbose is enabled. > > > > This is pretty frustrating for me because these are rather important > > events, certainly more important than my load balancers hitting > > Keystone which it's happily logging twice a second. > > > > I know that Keystone supports some audit event notifications [1]. Can I > > simply have these reflect back into the main logs somehow? > > > > It would be possible (and trivial) to add logging messages at the INFO > > level, but I'm not sure that is what you really want. 
I don't know much > > about the operational side at this point, but I'm hoping that there's a > > way to consume the notification events and then write them to a log if > > that's what you wish to do. > > Ceilometer listens to these notifications currently and it's possible to > write them to a file rather than a database. a lot of this functionality > was worked on in Kilo but there may be a way to support this in Juno and > Icehouse (disclaimer: may require some patching and even more patching, > respectively) > > cheers, > gord > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at mattfischer.com Thu Apr 16 17:06:48 2015 From: matt at mattfischer.com (Matt Fischer) Date: Thu, 16 Apr 2015 11:06:48 -0600 Subject: [Openstack-operators] logging for Keystone on user/project delete/create operations In-Reply-To: References: <8E86A734-2D7D-431B-BD17-349BCFE9A833@gmail.com> Message-ID: So I've not had time to fully test this, but Steve Martinelli let me know that you can simply set the notification_driver to "log" and keystone will log them. The format is a bit more obtuse than I'd like to see and it uses ids rather than names which means I'd need to do some correlation later. Here's what I get out of Juno on a user-create: 2015-04-16 15:07:10.137 23939 INFO oslo.messaging.notification.identity.user.created [-] {"priority": "INFO", "event_type": "identity.user.created", "timestamp": "2015-04-16 15:07:10.137145", "publisher_id": "identity.dev01-keystone-001", "payload": {"resource_info": "c79e7f9f64d04ae78b3e069e5cafb9fd"}, "message_id": "3b1bce1f-8049-4b4f-986f-2ada3d918ee7"} In the message above the c79e is the ID of the user I just created. 
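[Editor's note: for reference, the change Matt describes boils down to two lines in keystone.conf. This is a hedged sketch, not a definitive recipe: `notification_driver` is the Juno-era oslo.messaging option name, and `notification_format` (values `basic` or `cadf`) only exists from Kilo onward.]

```ini
[DEFAULT]
# Route oslo.messaging notifications into the service log instead of an AMQP bus
notification_driver = log

# Kilo and later only: payload format for identity audit events
notification_format = cadf
```

With `notification_driver = log` set, restarting Keystone should start emitting entries like the `identity.user.created` example above into the regular log.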
Another downside so far seems to be that it's logging too much, including every time someone authenticates, and that it's at INFO level, which I hate because it logs my LB connection. I think the former may be configurable, the latter is not unless I hack Keystone. Also in K you can set the format for these either to basic or cadf. I've pushed a puppet change to allow this to be set via puppet. This gets me way ahead of where I wanted to be without using either rabbitmq or ceilometer (which we've gutted from our environment anyway). On Thu, Apr 16, 2015 at 10:50 AM, Jesse Keating wrote: > Standing up Ceilometer (and patching things) just to be able to log this > stuff to a file seems rather... heavy handed? We understand that these > things are emitted via notifications, but as of right now trying to do > anything with those notifications such as simply logging them requires too > much additional infrastructure. > > > - jlk > > On Thu, Apr 16, 2015 at 7:58 AM, gordon chung wrote: > >> Ceilometer listens to these notifications currently and it's possible to >> write them to a file rather than a database. a lot of this functionality >> was worked on in Kilo but there may be a way to support this in Juno and >> Icehouse (disclaimer: may require some patching and even more patching, >> respectively) >> >> cheers, >> gord >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gord at live.ca Thu Apr 16 18:25:10 2015 From: gord at live.ca (gordon chung) Date: Thu, 16 Apr 2015 14:25:10 -0400 Subject: [Openstack-operators] logging for Keystone on user/project delete/create operations In-Reply-To: References: , , , <8E86A734-2D7D-431B-BD17-349BCFE9A833@gmail.com>, , Message-ID: > Standing up Ceilometer (and patching things) just to be able to log > this stuff to a file seems rather... heavy handed? We understand that > these things are emitted via notifications, but as of right now trying > to do anything with those notifications such as simply logging them > requires too much additional infrastructure. agreed. it's really dependent on what your use case is... just pointing out an option to the statement: "hoping that there's a way to consume the notification events and then write them to a log". if you're just looking to track operations of a single keystone service, then you probably don't want notifications. if you're looking to collate them against various other things then you'll need some service/tool to either listen to notifications or grab/process logs from multiple places. cheers, gord _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From stefano at openstack.org Fri Apr 17 18:21:10 2015 From: stefano at openstack.org (Stefano Maffulli) Date: Fri, 17 Apr 2015 11:21:10 -0700 Subject: [Openstack-operators] OpenStack Community Weekly Newsletter (Apr 10 - 17) Message-ID: <1429294870.3412.111.camel@sputacchio.gateway.2wire.net> OpenStack DefCore Community Review - TWO Sessions April 21 (agenda) During the DefCore process, we've had regular community check points to review and discuss the latest materials from the committee. With the latest work on the official process and flurry of Guidelines, we've got a lot of concrete material to show. To accommodate global participants, we'll have TWO sessions (and record both): 1. April 21 8 am Central (1 pm UTC) https://join.me/874-029-687 2. April 21 8 pm Central (9 am Hong Kong) https://join.me/903-179-768 Help pick the new logos for OpenStack User Groups & OpenStack Days We are introducing a unique logo for OpenStack User Groups, to help recognize official groups and their important role in helping grow the OpenStack Community. The deadline to complete the survey is Wednesday April 22, 2015. Let your voice be heard: vote for the Superuser Award!
The Superuser Awards, first launched at the Paris Summit, recognize a team that uses OpenStack to meaningfully improve their business and differentiate in a competitive industry, while also contributing back to the community. This time, the OpenStack community votes to select the winner. * National Supercomputer Center in Guangzhou * Walmart * eBay Inc. * Comcast OpenStack breaks a new record for active contributors With 556 active contributors per month, this first quarter of 2015 is the highest in the history of OpenStack. Compared to the previous quarter, core and regular contributors have also increased 10 percent and 20 percent, respectively. You can read the complete report, including details of the methodology, on the Activity Board repository or take a deep dive into the activity of individual OpenStack projects. PTL Election Conclusion and Results Congratulations to the newly elected PTL John Garbutt for Nova and thank you Michael Still for the hard work. The other PTLs have been confirmed. The list and full announcement are on the mailing list. The Road to Vancouver * Canada Visa Information * Official Hotel Room Blocks * 2015 OpenStack T-Shirt Design Contest * Preparation to Design summit * Liberty Design Summit - Proposed room / time layout * What's a Design Summit? We can squeeze a few more people in Upstream Training Relevant Conversations * Proposed Liberty release schedule Deadlines and Development Priorities * Candidate proposals for TC (Technical Committee) positions are now open * proposed/kilo is dead, long live stable/kilo Security Advisories and Notices * Unauthorized delete of versioned Swift object (CVE-2015-1856) * S3Token TLS cert verification option not honored (CVE-2015-1852) Tips 'n Tricks * By Sergey Melikyan: OpenStack Murano application catalog: Heat-based applications in the cloud * By Steve Dake: Preserving container properties via volume mounts * By Assaf Muller: Distributed Virtual Routing -
Overview and East/West Routing and Distributed Virtual Routing - SNAT and Distributed Virtual Routing - Floating IPs * By Michael J. Clarkson, Jr.: OpenStack Juno multi compute node Lab - Ready to use environment on AWS and Google Cloud * By Adam Young: Creating a new Network for a dual NIC VM and Using the openstack command line interface to create a new server. Upcoming Events * Apr 17, 2015 Inaugural OpenStack PDX Meetup * Apr 17, 2015 SFBay OpenStack Advanced Track #OSSFO Topic: Keystone * Apr 18, 2015 April Sydney Hackathon * Apr 18, 2015 OpenStack India Meetup, Bangalore Bangalore, Karnataka, IN * Apr 20, 2015 Assemblée générale OpenStackFR 2015 Paris, FR * Apr 20, 2015 Meetup#14 Placement intelligent et SLA avancé avec Nova Scheduler Paris, FR * Apr 20, 2015 Melbourne Meetup featuring Randy Bias * Apr 20 - 21, 2015 OpenStack Conference, part of the CONNECT Show Melbourne, AU * Apr 20, 2015 Ubuntu OpenStack Roadshow London, GB * Apr 20, 2015 OpenStackFR annual general meeting Paris, FR * Apr 20, 2015 Advanced placement with Nova Paris, FR * Apr 21 - 22, 2015 CONNECT 2015 Melbourne, Victoria, AU * Apr 21 - 24, 2015 The Great Indian HP Code Off @ GIDS Bangalore, Karnataka, IN * Apr 21, 2015 Ubuntu OpenStack Roadshow Paris, FR * Apr 22, 2015 Canonical Ubuntu OpenStack Roadshow Amsterdam, NL * Apr 22, 2015 Data storage in clouds Athens, GR * Apr 22, 2015 OpenStack'te farklı mimari örnekleri, farkları, artıları, eksileri * Apr 22 - 23, 2015 China SDNNFV Conference Beijing, CN * Apr 22, 2015 OpenStack NYC Meetup New York, NY, US * Apr 22, 2015 Ubuntu OpenStack Roadshow Amsterdam, NL * Apr 22, 2015 Data storage in OpenStack clouds Athens, GR * Apr 23, 2015 OpenStack Philadelphia Meetup Philadelphia, PA, US * Apr 23, 2015 OpenStack L.A.
Meetup Pasadena, CA, US * Apr 23, 2015 Ubuntu OpenStack Roadshow Frankfurt, DE * Apr 23, 2015 Business Agility with OpenStack New York City, New York, US * May 05 - 07, 2015 CeBIT AU 2015 Sydney, NSW, AU * May 09, 2015 OpenStack Meetup Hanoi Hanoi, Hanoi, VN * May 18 - 22, 2015 OpenStack Summit May 2015 Vancouver, BC * Jun 02, 2015 OpenStack Day LATAM Mexico City, MX * Jun 04, 2015 OpenStack Days Istanbul ? 2015 Istanbul, TR * Jun 08, 2015 OpenStack CEE Day 2015 Budapest, HU * Jun 11, 2015 OpenStack DACH Day 2015 Berlin, DE * Jun 15, 2015 OpenStack Israel Tel Aviv, IL * Jul 20 - 24, 2015 OSCON 2015 Portland, OR, US Other News * OpenStack Technical Committee candidates * Real innovation? It means keeping cloud solutions simple * What's Up Doc? April 17, 2015 The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano at openstack.org Fri Apr 17 18:28:35 2015 From: stefano at openstack.org (Stefano Maffulli) Date: Fri, 17 Apr 2015 11:28:35 -0700 Subject: [Openstack-operators] OpenStack Community Weekly Newsletter (Apr 10 - 17) In-Reply-To: <1429294870.3412.111.camel@sputacchio.gateway.2wire.net> References: <1429294870.3412.111.camel@sputacchio.gateway.2wire.net> Message-ID: <1429295315.3412.114.camel@sputacchio.gateway.2wire.net> On Fri, 2015-04-17 at 11:21 -0700, Stefano Maffulli wrote: > PTL Election Conclusion and Results > Congratulations to the newly elected PTL John Garbutt for Nova and > thank you Michael Still for the hard work. The other PTLs have been > confirmed. The list and full announcement on the mailing list. Actually, I made a mistake here: there are a bunch of new PTLs. 
What I meant to write is that only Nova had a change due to an election, while the other changes were unchallenged successions with only one candidate. I apologize for the mistake. /stef From carl at ecbaldwin.net Fri Apr 17 19:18:26 2015 From: carl at ecbaldwin.net (Carl Baldwin) Date: Fri, 17 Apr 2015 13:18:26 -0600 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways Message-ID: Jacob, I don't have your original email from which to reply. So, hopefully this finds you just as well. The bad news is that I don't have an immediate answer to address this. However, I thought it was worth mentioning where the future may lead. I have been thinking about the scenario that you describe for a while now. I've started to write blueprints for Liberty to address this. The first blueprint specification [1] describes adding private backing subnets to an external network. Initially, I'll use this capability to eliminate public IP waste in distributed routers. I'm writing a follow-on blueprint to this that will leverage it to eliminate the virtual routers' dedicated public IP addresses completely. Routers' gateway addresses will then be allocated only from the private subnet. I haven't posted the specification yet but will try to post it today. Your infrastructure will have to provide your own SNAT to the internet from these private addresses but it sounds like you've already an idea for that based on your description: > We want to have: instance -> (gateway IP) virtual router NAT (private IP) -> (private gateway) router NAT (this NAT provided by your infrastructure). If we can manage to implement these two blueprints in Liberty then we would have the perfect solution for you. 
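[Editor's note: to make the "your infrastructure provides the SNAT" part concrete, the upstream router's role can be sketched with netfilter rules along the following lines. This is an illustrative configuration fragment only: the interface name and all address ranges are invented for the example, not taken from the blueprint.]

```
# eth0           = uplink to the real public network
# 10.200.0.0/16  = private subnet backing the Neutron "external" network
# 203.0.113.0/24 = public range (RFC 5737 documentation addresses)

# 1:1 bidirectional NAT for a tenant floating IP: one DNAT/SNAT pair
# per address, appended before the catch-all rule so it matches first
iptables -t nat -A PREROUTING  -i eth0 -d 203.0.113.50 -j DNAT --to-destination 10.200.5.7
iptables -t nat -A POSTROUTING -o eth0 -s 10.200.5.7   -j SNAT --to-source 203.0.113.50

# Catch-all many-to-few SNAT for the virtual routers' private gateway
# addresses (everything in the range without its own 1:1 mapping)
iptables -t nat -A POSTROUTING -o eth0 -s 10.200.0.0/16 \
    -j SNAT --to-source 203.0.113.10-203.0.113.12
```

Rule order matters here: iptables evaluates the nat chains top to bottom, so the per-address pairs must precede the /16 catch-all or the broader match would win.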
Carl [1] https://review.openstack.org/#/c/172244/ From hillad at gmail.com Fri Apr 17 20:05:07 2015 From: hillad at gmail.com (Andy Hill) Date: Fri, 17 Apr 2015 16:05:07 -0400 Subject: [Openstack-operators] [Large Deployments] Proposed change to default status of compute nodes Message-ID: Hi all, At the mid-cycle meetup[1], we discussed changing the status of newly added compute nodes from enabled to disabled. I've created a blueprint/spec to propose this change[2] in Nova. Please add your comments/feedback. Thanks, -AH [1] https://etherpad.openstack.org/p/PHL-ops-large-deployments [2] https://blueprints.launchpad.net/nova/+spec/disable-compute-default [3] https://review.openstack.org/#/c/175037/ From james.page at ubuntu.com Fri Apr 17 20:15:11 2015 From: james.page at ubuntu.com (James Page) Date: Fri, 17 Apr 2015 21:15:11 +0100 Subject: [Openstack-operators] OpenStack Kilo RC1 for Ubuntu 14.04 LTS and Ubuntu 15.04. Message-ID: <553169CF.2000209@ubuntu.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hi All The Ubuntu OpenStack Engineering team is pleased to announce the general availability of the first release candidate of the OpenStack Kilo release in Ubuntu 15.04 development and for Ubuntu 14.04 LTS via the Ubuntu Cloud Archive. Ubuntu 14.04 LTS - ---------------- You can enable the Ubuntu Cloud Archive for OpenStack Kilo on Ubuntu 14.04 installations by running the following commands: sudo add-apt-repository cloud-archive:kilo sudo apt-get update The Ubuntu Cloud Archive for Kilo includes updates for Nova, Glance, Keystone, Neutron, Cinder, Horizon, Ceilometer and Heat; Ceph (0.94.1), RabbitMQ (3.4.2), QEMU (2.2), libvirt (1.2.12) and Open vSwitch (2.3.1) back-ports from 15.04 development have also been provided. You can check out the full list of packages and versions at [0]. Note that for Swift we're still at version 2.2.2 - we're currently reviewing whether to include 2.3.0 for general release, or whether to defer that upgrade to next cycle.
Ubuntu 15.04 development - ------------------------ No extra steps required; just start installing OpenStack! New OpenStack components - ------------------------ In addition to Trove, Sahara and Ironic we have now added Designate and Manila to the Ubuntu 15.04 and to the Ubuntu Cloud Archive for Ubuntu 14.04 LTS. Neutron Driver Decomposition - ---------------------------- As of Kilo RC1, Ubuntu are only tracking the decomposition of Neutron FWaaS, LBaaS and VPNaaS from Neutron core in the Ubuntu archive; we expect to add additional packages for other Neutron ML2 mechanism drivers and plugins early during the Liberty/15.10 development cycle - we'll provide these as backports to OpenStack Kilo users as and when they become available. OpenStack Kilo Release - ---------------------- We have the slightly exciting situation this cycle in that OpenStack Kilo releases a week after Ubuntu 15.04; The Ubuntu OpenStack Engineering team will be working on a stable update for all OpenStack projects as soon as OpenStack Kilo is released. I'd anticipate that these updates should be available around a week after the kilo release date. Reporting bugs - -------------- Any issues please report bugs using the 'ubuntu-bug' tool: sudo ubuntu-bug nova-conductor this will ensure that bugs get logged in the right place in Launchpad. Thanks and have fun! 
Cheers James [0] http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/kilo_ve rsions.html - -- James Page Ubuntu and Debian Developer james.page at ubuntu.com jamespage at debian.org -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBCAAGBQJVMWnPAAoJEL/srsug59jDh4wP/3nOep9Pj1Pmy5+0KQTG1YZ4 MkGnR38H0wd1xtSJ3nm3TTnB6zfp2PYuka6VPtndOOKbsGbxNZHit8DYwqnOyWSO 1d1Sp18ebQinPgEGFt5wDJmJKD3vvq7iz84q++B9uWalFYVD1Wn0cBiZd9Z9AV/7 JVPECWXCn4ZYkVFyt4RSYOeFSZfM4/iBkhChxWERGd+vG7boLAEnA48Aqk0sn6ma NlGAhUYgEPGSCL+b0oEcxU5GXqMsDoCc5P1H2NNm9xYutKPWYczn+eche07oqEeN 0hA71Eth1pIhyfzqY9KTkc3KGuTF70JduJERmNA2DqStr0XclOhMgO8oNoqxlZ/N 8m+o4XdpQkD8IVYY3flTRQOt8TvHHLixZVVGHeZ6dinvPOX8LksTOwn1FCQsURN3 VWwAlxPmNJQphNZZdQbRpkA3dH+lEQlNajMhgd4UNk2JXNqHQ+yeoHcettlVkwmv UqV3MWqX99lDrODHsNEw9mNVU2jh1TRg38Gzk26H6TISnNI9QNjDxKzEidenLQAa 6josINTIroVgOUDHOQRP/Him0jGJumpzIU/OO8DCql4MNRNtbz6vJFzcHND1IfUt 5kbKJnBrysWh+oySXkhOXZ9v0OIxnt9k5ZiekxtQ6b5bmkO9MF9eneTcI2pzjJfH CphQnWEqYglCW30/vvXq =JD2G -----END PGP SIGNATURE----- From annegentle at justwriteclick.com Sat Apr 18 00:03:07 2015 From: annegentle at justwriteclick.com (Anne Gentle) Date: Fri, 17 Apr 2015 19:03:07 -0500 Subject: [Openstack-operators] [Openstack] OpenStack Kilo RC1 for Ubuntu 14.04 LTS and Ubuntu 15.04. In-Reply-To: References: <553169CF.2000209@ubuntu.com> Message-ID: On Fri, Apr 17, 2015 at 5:29 PM, Martinx - ????? wrote: > AWESOME!!! > > Installing it right now! > > BTW, where can I find the "trunk" documentation? > > The following docs for Kilo: > > http://docs.openstack.org/trunk/install-guide/install/apt/ or > > http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_keystone.html > > ...does not exists anymore... :-/ > We're planning to move all /trunk/ to /draft/, but for now, review work-in-progress through reviewing patches, please. 
https://review.openstack.org/#/c/174878/ is one recent example, and here is where it is built to: http://docs-draft.openstack.org/78/174878/5/check/gate-openstack-manuals-tox-doc-publish-checkbuild/38d6b1e//publish-docs/trunk/install-guide/install/apt/content/index.html Appreciate the work here James -- we'd love early coordination with docs through interacting with the Install Guide specialty team. They meet weekly Tuesdays at 13:00. https://wiki.openstack.org/wiki/Meetings#Install_Guide_Update_Meeting Thanks, Anne > > Thank you! > Thiago > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > -- Anne Gentle annegentle at justwriteclick.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mspreitz at us.ibm.com Sat Apr 18 02:12:36 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Fri, 17 Apr 2015 22:12:36 -0400 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: References: <564711130.21500133.1429045564374.JavaMail.zimbra@redhat.com> <1510428524.21500280.1429045612339.JavaMail.zimbra@redhat.com> Message-ID: > From: Jacob Godin > To: Mike Spreitzer/Watson/IBM at IBMUS > Cc: Daniel Comnea , OpenStack Operators > > Date: 04/15/2015 08:37 AM > Subject: Re: [Openstack-operators] [Neutron] Floating IPs / Router Gateways > > Ah, gotcha. So you're not using overlapping subnets then. > > Unfortunately that hack wouldn't work in our environment, but > definitely something that others might consider using. Right, the solution I am using now imposes address constraints between tenants that share a router. I need to eliminate constraints between tenants, so I have to abandon the solution I am using. So I, too, am looking for a different solution. I want to support a lot of tenants doing fairly unrestricted stuff, so all the connections --- from their Compute Instances that do NOT have a floating IP --- to public servers are more than I want to SNAT onto a *single* public address. Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From mspreitz at us.ibm.com Sat Apr 18 02:44:33 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Fri, 17 Apr 2015 22:44:33 -0400 Subject: [Openstack-operators] [Neutron] Floating IPs / Router Gateways In-Reply-To: References: <564711130.21500133.1429045564374.JavaMail.zimbra@redhat.com> <1510428524.21500280.1429045612339.JavaMail.zimbra@redhat.com> Message-ID: > From: Mike Spreitzer/Watson/IBM at IBMUS > > > From: Jacob Godin > > > > Ah, gotcha. So you're not using overlapping subnets then. > > > > Unfortunately that hack wouldn't work in our environment, but > > definitely something that others might consider using.
> > Right, the solution I am using now imposes address constraints > between tenants that share a router. I need to eliminate > constraints between tenants, so I have to abandon the solution I am > using. So I, too, am looking for a different solution. > > I want to support a lot of tenants doing fairly unrestricted stuff, > so all the connections --- from their Compute Instances that do NOT > have a floating IP --- to public servers are more than I want to SNAT > onto a *single* public address. I found a few tantalizing leads in http://specs.openstack.org/openstack/neutron-specs/index.html I cannot check them out fully right now because review.openstack.org is temporarily down. http://specs.openstack.org/openstack/neutron-specs/specs/kilo/specify-router-ext-ip.html "Allow the external IP address of a router to be specified" If you, like I, are intermediating the calls on Neutron and can transform a less specific call by the tenant into a precise formulation of your choosing (as either admin or the tenant, on a case by case basis), you can use the following solution. Let the "external" network known to Neutron not be the actual public network but rather some other private network. Using control over the router's IP on that other private network, scrunch all the router IP addresses into a dense range that is not in the allocation range. Thus, the router IP addresses and the tenants' floating IP addresses are separated - you can put them in distinct large CIDR blocks. Using some other router that connects that other private network to the actual public network, masquerade the router IP addresses onto however many public addresses you like, while doing 1:1 bidirectional NAT for the tenants' floating IP addresses. Regards, Mike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From comnea.dani at gmail.com Sat Apr 18 08:06:47 2015 From: comnea.dani at gmail.com (Daniel Comnea) Date: Sat, 18 Apr 2015 09:06:47 +0100 Subject: [Openstack-operators] [Large Deployments] Proposed change to default status of compute nodes In-Reply-To: References: Message-ID: Nice work Andy, thanks! On Fri, Apr 17, 2015 at 9:05 PM, Andy Hill wrote: > Hi all, > > At the mid-cycle meetup[1], we discussed changing the status of newly > added compute nodes from enabled to disabled. > > I've created a blueprint/spec to propose this change[2] in Nova. > Please add your comments/feedback. > > Thanks, > > -AH > > > [1] https://etherpad.openstack.org/p/PHL-ops-large-deployments > [2] https://blueprints.launchpad.net/nova/+spec/disable-compute-default > [3] https://review.openstack.org/#/c/175037/ > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josephbajin at gmail.com Sun Apr 19 00:06:43 2015 From: josephbajin at gmail.com (Joseph Bajin) Date: Sat, 18 Apr 2015 20:06:43 -0400 Subject: [Openstack-operators] Operator Help to get patch merged Message-ID: I wanted to see about getting some Operator Help to push through a patch[1]. The patch is to not give the user a 404 message back when they click the cancel button while trying to change their password or the user settings. The patch resets the page. It's been sitting there for a while, but started to get some -1's and then +1's, then moved from kilo-rc1 to liberty and back. Some people think that those screens should be somewhere else, others think the text should be replaced, but that is not the purpose of the patch. It's just to not give back a negative user experience.
So, I'm hoping that I can get some Operator support to get this merged and if they want to change the text, change the location, etc, then they can do it later down the road. Thanks -Joe [1] https://review.openstack.org/#/c/166569/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at ubuntu.com Mon Apr 20 07:28:29 2015 From: james.page at ubuntu.com (James Page) Date: Mon, 20 Apr 2015 08:28:29 +0100 Subject: [Openstack-operators] OpenStack Kilo RC1 for Ubuntu 14.04 LTS and Ubuntu 15.04. In-Reply-To: References: <553169CF.2000209@ubuntu.com> Message-ID: <5534AA9D.3080906@ubuntu.com> Hi Trimming the CC list down to openstack-operators only as this conversation seems pertinent there. On 20/04/15 01:33, Martinx - ????? wrote: > Listen, the Kilo for Ubuntu Trusty install documentation (draft) > currently recommends an external repository (for Debian Jessie) to > install RabbitMQ from it, right here: > > http://docs-draft.openstack.org/78/174878/5/check/gate-openstack-manuals-tox-doc-publish-checkbuild/38d6b1e//publish-docs/trunk/install-guide/install/apt/content/ch_basic_environment.html#basics-message-queue > > --- [...] > --- > > But, I don't like to rely on any third party repositories for my > production environments, I prefer to use only Ubuntu official > repositories, specially when the external repository was not > compiled for Trusty. I'd also prefer that the Ubuntu Cloud Archive be used where possible :-) . Maybe the docs could be updated to optionally use the upstream RabbitMQ repos? Some end-users will want the latest and greatest, whereas I suspect some will want to stick with an integrated distro release. > So, can you guys Backport the required RabbitMQ 3.5.1 to Trusty > and update the documentation? We can't provide 3.5.1 for OpenStack Kilo - we're well past Feature Freeze in Ubuntu, so a major version bump like this now is too high risk.
We'll bump to 3.5.x next cycle. > I'll stick with Rabbit 3.4 from Trusty now. Do you know if Kilo > have problems with it? I'm not aware of any problems with 3.4.3 as provided in the Cloud Archive - although I do note a new point release - I'll take a look and see if there is anything critical. Just as a side note, I also see that MariaDB is installed in preference to stock mysql - it's worth noting that MariaDB in Ubuntu does *not* receive stable support from either the Ubuntu Server or Ubuntu Security teams. Otto (the Debian & Ubuntu maintainer of MariaDB) does generally keep on top of things, but this is a situation end-users should be aware of. Cheers James -- James Page Ubuntu and Debian Developer james.page at ubuntu.com jamespage at debian.org From zaitcev at redhat.com Mon Apr 20 15:55:26 2015 From: zaitcev at redhat.com (Pete Zaitcev) Date: Mon, 20 Apr 2015 09:55:26 -0600 Subject: [Openstack-operators] Is Erasure Coding ready for production in Swift? In-Reply-To: References: Message-ID: <20150420095526.03110094@guren.zaitcev.lan> On Fri, 20 Mar 2015 18:42:00 +0800 Mingyu Li wrote: > The feature erasure coding has been developed for over a year.
Is it ready > for production now? EC in Swift is in Beta in Kilo. It's quite rough, needs all the latest to even work at all (e.g. PyECLib 1.0.7). I would say it's only good for experiments at this point. On the other hand, someone has to start somewhere. -- Pete From robertc at robertcollins.net Mon Apr 20 23:36:26 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 21 Apr 2015 11:36:26 +1200 Subject: [Openstack-operators] PyCon-AU Openstack miniconf CFP open Message-ID: Hi everybody - I'd like to call attention to the PyCon-AU OpenStack miniconf. http://2015.pycon-au.org/cfp Pycon-au is an excellent conference, and the OpenStack miniconf is now in its third year - we're really getting into the swing of things. Please submit a paper to the main conference CFP - we'll be pulling from that to select talks. Brisbane is a beautiful city, the venue is great - and so is the weather. If you've any questions about whether to submit or not, please ping me or Joshua Hesketh - we're organising the miniconf together and can provide guidance. Cheers, Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From sonali.prasad at accenture.com Tue Apr 21 08:41:05 2015 From: sonali.prasad at accenture.com (sonali.prasad at accenture.com) Date: Tue, 21 Apr 2015 08:41:05 +0000 Subject: [Openstack-operators] Metadata error. Message-ID: Hi Team, I am facing the metadata issue in the dashboard. While launching the instance, in the log file it gives the metadata error. And I am also unable to ping the particular IP. Kindly look into it and provide the resolution. I am using Ubuntu 14.04 LTS version. Regards, Sonali Prasad IDC-IC-DCT- Intelligent Infrastructure IaaS Infrastructure Consulting, Infrastructure Services - Accenture Operations ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information.
If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. ______________________________________________________________________________________ www.accenture.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From alopgeek at gmail.com Tue Apr 21 16:19:23 2015 From: alopgeek at gmail.com (Abel Lopez) Date: Tue, 21 Apr 2015 09:19:23 -0700 Subject: [Openstack-operators] Metadata error. In-Reply-To: References: Message-ID: <37636E4F-8F4D-43B7-9288-13719EE7E18B@gmail.com> I think you might have misunderstood the nature of this list. We often times do provide answers to common issues, but this is not a support organization. You've provided no details of the errors you're experiencing, so it would be very difficult to help. Please use http://paste.openstack.org to copy/paste any relevant error messages/log files, also you may want to check on https://ask.openstack.org for your issue. > On Apr 21, 2015, at 1:41 AM, wrote: > > Hi Team, > > I am facing the metadata issue in the dashboard. > While launching the instance, in the log file it gives the metadata error. > And I am also unable to ping the particular IP. > Kindly look into it and provide the resolution. > I am using Ubuntu 14.04 LTS version. > > Regards, > Sonali Prasad > IDC?IC-DCT- Intelligent Infrastructure IaaS > Infrastructure Consulting, > Infrastructure Services ? ACCENTURE Operations > > > > > This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. 
If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. > ______________________________________________________________________________________ > > www.accenture.com > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: Message signed with OpenPGP using GPGMail URL: From mkkang at isi.edu Tue Apr 21 17:30:54 2015 From: mkkang at isi.edu (Mikyung Kang) Date: Tue, 21 Apr 2015 10:30:54 -0700 (PDT) Subject: [Openstack-operators] neutron.db.migration.migrate_to_ml2 (Havana ovs->Icehouse ml2) In-Reply-To: <24239169.1835.1429637429799.JavaMail.mkang@remote44.east.isi.edu> Message-ID: <4481250.1836.1429637451570.JavaMail.mkang@remote44.east.isi.edu> Hello, I'm upgrading Neutron to ML2 while checking the following document on CentOS6.5. Also I tried it on CentOS7, but the result is same. http://docs.openstack.org/openstack-ops/content/upgrades_havana-icehouse-rhel.html#treeDiv Using neutron-db-manage, I updated database as follows: Even though I got string error (sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string) when upgrading to icehouse, it's resolved by adding "connection = mysql://id:pw at x.x.x.x/ovs_quantum" into [database] section in /etc/neutron/neutron.conf file. 
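Spelled out, the workaround Mikyung describes is a neutron.conf fragment along these lines (the credentials and host are placeholders, exactly as she quoted them):

```ini
[database]
connection = mysql://id:pw@x.x.x.x/ovs_quantum
```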
# neutron-db-manage --config-file /etc/neutron/neutron.conf \ > --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini stamp havana No handlers could be found for logger "neutron.common.legacy" INFO [alembic.migration] Context impl MySQLImpl. INFO [alembic.migration] Will assume non-transactional DDL. INFO [alembic.migration] Running stamp_revision -> havana # neutron-db-manage --config-file /etc/neutron/neutron.conf \ > --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade icehouse ... # neutron-db-manage --config-file /etc/neutron/neutron.conf \ > --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current No handlers could be found for logger "neutron.common.legacy" INFO [alembic.migration] Context impl MySQLImpl. INFO [alembic.migration] Will assume non-transactional DDL. icehouse (head) After that, I'm trying to perform the conversion from OVS to ML2, but it can't create ml2_network_segments table. # python -m neutron.db.migration.migrate_to_ml2 openvswitch \ > mysql://id:pw at x.x.x.x/ovs_quantum Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/usr/lib/python2.7/site-packages/neutron/db/migration/migrate_to_ml2.py", line 462, in main() File "/usr/lib/python2.7/site-packages/neutron/db/migration/migrate_to_ml2.py", line 458, in main args.vxlan_udp_port) File "/usr/lib/python2.7/site-packages/neutron/db/migration/migrate_to_ml2.py", line 138, in __call__ metadata.create_all(engine) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/schema.py", line 3352, in create_all tables=tables) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1617, in _run_visitor conn._run_visitor(visitorcallable, element, **kwargs) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1246, in _run_visitor 
**kwargs).traverse_single(element) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/visitors.py", line 120, in traverse_single return meth(obj, **kw) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 713, in visit_metadata self.traverse_single(table, create_ok=True) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/visitors.py", line 120, in traverse_single return meth(obj, **kw) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 732, in visit_table self.connection.execute(CreateTable(table)) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute return meth(self, multiparams, params) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 69, in _execute_on_connection return connection._execute_ddl(self, multiparams, params) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 783, in _execute_ddl compiled File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context context) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1160, in _handle_dbapi_exception exc_info File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause reraise(type(exception), exception, tb=exc_tb) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context context) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute cursor.execute(statement, parameters) File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute self.errorhandler(self, exc, value) File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler raise errorclass, errorvalue sqlalchemy.exc.OperationalError: (OperationalError) (1005, "Can't create table 'ovs_quantum.ml2_network_segments' (errno: 150)") '\nCREATE TABLE ml2_network_segments (\n\tid VARCHAR(36) NOT NULL, 
\n\tnetwork_id VARCHAR(36) NOT NULL, \n\tnetwork_type VARCHAR(32) NOT NULL, \n\tphysical_network VARCHAR(64), \n\tsegmentation_id INTEGER, \n\tPRIMARY KEY (id), \n\tFOREIGN KEY(network_id) REFERENCES networks (id) ON DELETE CASCADE\n)\n\n' () But, I could create it manually as follows: > CREATE TABLE ml2_network_segments ( id VARCHAR(36) NOT NULL, network_id VARCHAR(36) NOT NULL, network_type VARCHAR(32) NOT NULL , physical_network VARCHAR(64), segmentation_id INT); > ALTER TABLE ml2_network_segments ADD FOREIGN KEY (network_id) REFERENCES networks (id) ON DELETE cascade; > ALTER TABLE ml2_network_segments ADD PRIMARY KEY (id); After that, I got different error about ml2_port_bindings. #python -m neutron.db.migration.migrate_to_ml2 openvswitch \ > mysql://id:pw at x.x.x.x/ovs_quantum Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/usr/lib/python2.7/site-packages/neutron/db/migration/migrate_to_ml2.py", line 462, in main() File "/usr/lib/python2.7/site-packages/neutron/db/migration/migrate_to_ml2.py", line 458, in main args.vxlan_udp_port) File "/usr/lib/python2.7/site-packages/neutron/db/migration/migrate_to_ml2.py", line 138, in __call__ metadata.create_all(engine) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/schema.py", line 3352, in create_all tables=tables) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1617, in _run_visitor conn._run_visitor(visitorcallable, element, **kwargs) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1246, in _run_visitor **kwargs).traverse_single(element) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/visitors.py", line 120, in traverse_single return meth(obj, **kw) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 713, in visit_metadata 
self.traverse_single(table, create_ok=True) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/visitors.py", line 120, in traverse_single return meth(obj, **kw) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 732, in visit_table self.connection.execute(CreateTable(table)) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute return meth(self, multiparams, params) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 69, in _execute_on_connection return connection._execute_ddl(self, multiparams, params) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 783, in _execute_ddl compiled File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context context) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1160, in _handle_dbapi_exception exc_info File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause reraise(type(exception), exception, tb=exc_tb) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context context) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute cursor.execute(statement, parameters) File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute self.errorhandler(self, exc, value) File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler raise errorclass, errorvalue sqlalchemy.exc.IntegrityError: (IntegrityError) (1215, 'Cannot add foreign key constraint') "\nCREATE TABLE ml2_port_bindings (\n\tport_id VARCHAR(36) NOT NULL, \n\thost VARCHAR(255) NOT NULL, \n\tvif_type VARCHAR(64) NOT NULL, \n\tdriver VARCHAR(64), \n\tsegment VARCHAR(36), \n\tvnic_type VARCHAR(64) DEFAULT 'normal' NOT NULL, \n\tvif_details VARCHAR(4095) DEFAULT '' NOT NULL, \n\tprofile VARCHAR(4095) DEFAULT '' NOT NULL, \n\tPRIMARY KEY 
(port_id), \n\tFOREIGN KEY(port_id) REFERENCES ports (id) ON DELETE CASCADE, \n\tFOREIGN KEY(segment) REFERENCES ml2_network_segments (id) ON DELETE SET NULL\n)\n\n" () Any suggestions are welcome! Thanks, Mikyung From caius.howcroft at gmail.com Tue Apr 21 19:59:01 2015 From: caius.howcroft at gmail.com (Caius Howcroft) Date: Tue, 21 Apr 2015 15:59:01 -0400 Subject: [Openstack-operators] over commit ratios Message-ID: Just a general question: what kind of over commit ratios do people normally run in production with? We currently run 2 for cpu and 1 for memory (with some held back for OS/ceph) i.e.: default['bcpc']['nova']['ram_allocation_ratio'] = 1.0 default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often larger default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0 Caius -- Caius Howcroft @caiushowcroft http://www.linkedin.com/in/caius From emccormick at cirrusseven.com Tue Apr 21 20:12:34 2015 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 21 Apr 2015 16:12:34 -0400 Subject: [Openstack-operators] over commit ratios In-Reply-To: References: Message-ID: We had this discussion at the Ops Mid-Cycle meetup. I think the general consensus was 0.9 for memory. if you're running Ceph OSD's on the node you'll almost certainly want to reserve more than a gig for it and the OS. for CPU there was a wide range of ideas and it mainly depended on use-case. If you're running lots of CPU-intensive things like Hadoop, better to be around 2 or even 1. If you've got a ton of web servers, you could go with 16 easily. If you've got a mixed-use cloud, some segregation with host aggregates may be helpful so you can vary the number. If you can't do that though, and you're mixed, you should be able to go better than 2. At least 5 should be OK unless you've packed a massive amount of memory in each node. 
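The capacity effect of these ratios is simple arithmetic; a rough sketch for a hypothetical 16-core, 128 GiB host using the settings Caius quoted (cpu 2.0, ram 1.0, 1024 MB reserved — the real scheduler filters also subtract memory already claimed by running guests):

```shell
cores=16 ram_mb=131072      # hypothetical 16-core, 128 GiB compute node
cpu_ratio=2 ram_ratio=1     # cpu_allocation_ratio / ram_allocation_ratio
reserved_mb=1024            # reserved_host_memory_mb

# What the scheduler will hand out before refusing new instances:
echo "schedulable vCPUs:  $((cores * cpu_ratio))"
echo "schedulable RAM MB: $(( (ram_mb - reserved_mb) * ram_ratio ))"
```

With cpu_allocation_ratio raised to 16, the same host would advertise 256 vCPUs, which is why the web-server versus Hadoop distinction matters.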
-Erik On Tue, Apr 21, 2015 at 3:59 PM, Caius Howcroft wrote: > Just a general question: what kind of over commit ratios do people > normally run in production with? > > We currently run 2 for cpu and 1 for memory (with some held back for > OS/ceph) > > i.e.: > default['bcpc']['nova']['ram_allocation_ratio'] = 1.0 > default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often larger > default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0 > > Caius > > -- > Caius Howcroft > @caiushowcroft > http://www.linkedin.com/in/caius > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.shuklin at gmail.com Tue Apr 21 21:55:10 2015 From: george.shuklin at gmail.com (George Shuklin) Date: Wed, 22 Apr 2015 00:55:10 +0300 Subject: [Openstack-operators] over commit ratios In-Reply-To: References: Message-ID: <5536C73E.1050108@gmail.com> It depends very much on the production type. If you can control guests and predict their memory consumption, use it as the base for the ratio. If you can't (typical for public clouds) - use 1 or smaller with reserved_host_memory_mb in nova.conf. And one more: some swap space is really necessary. Add at least twice reserved_host_memory_mb - it really improves performance and prevents strange OOMs in the situation of a very large host with a very small dom0 footprint. On 04/21/2015 10:59 PM, Caius Howcroft wrote: > Just a general question: what kind of over commit ratios do people > normally run in production with?
> > We currently run 2 for cpu and 1 for memory (with some held back for OS/ceph) > > i.e.: > default['bcpc']['nova']['ram_allocation_ratio'] = 1.0 > default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often larger > default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0 > > Caius > From edgar.magana at workday.com Wed Apr 22 06:33:40 2015 From: edgar.magana at workday.com (Edgar Magana) Date: Wed, 22 Apr 2015 06:33:40 +0000 Subject: [Openstack-operators] Operator Help to get patch merged In-Reply-To: References: Message-ID: I just read this email and after looking at what happened on gerrit, I feel totally disappointed by the way the Horizon team (especially the cores) handled this patch. You did everything properly in your commit and the Horizon dev team should have helped you to get your commit merged. I do not understand why you are not even included as co-author in the patch that was finally merged: https://review.openstack.org/#/c/175702/ Just my two cents! Edgar From: Joseph Bajin > Date: Saturday, April 18, 2015 at 5:06 PM To: OpenStack Operators > Subject: [Openstack-operators] Operator Help to get patch merged I wanted to see about getting some Operator Help to push through a patch[1]. The patch is to not give the user a 404 message back when they click the cancel button why trying to change their password or the user settings. The patch resets the page. It's been sitting there for a while, but started to get some -1's and then +1's, then some move to kilo-rc1 to liberty and back. Some people think that those screens should be somewhere else, others think the text should be replaced, but that is not the purpose of the patch. It's just to not give back a negative user experience. So, I'm hoping that I can get some Operator support to get this merged and if they want to change the text, change the location, etc, then they can do it later down the road.
Thanks -Joe [1] https://review.openstack.org/#/c/166569/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Wed Apr 22 06:49:53 2015 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 22 Apr 2015 06:49:53 +0000 Subject: [Openstack-operators] over commit ratios In-Reply-To: <5536C73E.1050108@gmail.com> References: <5536C73E.1050108@gmail.com> Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E501029EC709@CERNXCHG43.cern.ch> I'd also keep an eye on local I/O... we've found this to be the resource which can cause the worst noisy neighbours. Swapping makes this worse. Tim > -----Original Message----- > From: George Shuklin [mailto:george.shuklin at gmail.com] > Sent: 21 April 2015 23:55 > To: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] over commit ratios > > It's very depend on production type. > > If you can control guests and predict their memory consumption, use it as base > for ratio. > If you can't (typical for public clouds) - use 1 or smaller with > reserved_host_memory_mb in nova.conf. > > And one more: some swap sapce is really necessary. Add at least twice of > reserved_host_memory_mb - it really improves performance and prevents > strange OOMs in the situation of very large host with very small dom0 footprint. > > On 04/21/2015 10:59 PM, Caius Howcroft wrote: > > Just a general question: what kind of over commit ratios do people > > normally run in production with? 
> > > > We currently run 2 for cpu and 1 for memory (with some held back for > > OS/ceph) > > > > i.e.: > > default['bcpc']['nova']['ram_allocation_ratio'] = 1.0 > > default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often > > larger default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0 > > > > Caius > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From sbauza at redhat.com Wed Apr 22 08:35:09 2015 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 22 Apr 2015 10:35:09 +0200 Subject: [Openstack-operators] [openstack-dev] [Nova] Add config option for real deletes instead of soft-deletes In-Reply-To: <1158667335.4149780.1429652564408.JavaMail.zimbra@redhat.com> References: <1158667335.4149780.1429652564408.JavaMail.zimbra@redhat.com> Message-ID: <55375D3D.4070204@redhat.com> Cross-posting to operators@ as I think they are rather interested in the $subject :-) Le 21/04/2015 23:42, Artom Lifshitz a ?crit : > Hello, > > I'd like to gauge acceptance of introducing a feature that would give operators > a config option to perform real database deletes instead of soft deletes. > > There's definitely a need for *something* that cleans up the database. There > have been a few attempts at a DB purge engine [1][2][3][4][5], and archiving to > shadow tables has been merged [6] (though that currently has some issues [7]). > > DB archiving notwithstanding, the general response to operators when they > mention the database becoming too big seems to be "DIY cleanup." > > I would like to propose a different approach: add a config option that turns > soft-deletes into real deletes, and start telling operators "if you turn this > on, it's DIY backups." > > Would something like that be acceptable and feasible? 
I'm ready to put in the > work to implement this, however searching the mailing list indicates that it > would be somewhere between non trivial and impossible [8]. Before I start, I > would like some confidence that it's closer to the former than the latter :) My personal bet is that it should indeed be config-driven: either we keep the old records or we just get rid of them. I'm not a fan of any massive deletion system that Nova would manage unless it runs on an explicit trigger, because even if the model is quite correctly normalized, it could lock many tables if we don't pay attention to that. Anyway, a spec seems a good place for discussing that, IMHO. -Sylvain > Cheers! > > [1] https://blueprints.launchpad.net/nova/+spec/db-purge-engine > [2] https://blueprints.launchpad.net/nova/+spec/db-purge2 > [3] https://blueprints.launchpad.net/nova/+spec/remove-db-archiving > [4] https://blueprints.launchpad.net/nova/+spec/database-purge > [5] https://blueprints.launchpad.net/nova/+spec/db-archiving > [6] https://review.openstack.org/#/c/18493/ > [7] https://review.openstack.org/#/c/109201/ > [8] http://lists.openstack.org/pipermail/openstack-operators/2014-November/005591.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From erent at skyatlas.com Wed Apr 22 11:14:58 2015 From: erent at skyatlas.com (=?UTF-8?B?RXJlbiBUw7xya2F5?=) Date: Wed, 22 Apr 2015 14:14:58 +0300 Subject: [Openstack-operators] Migrating to Different Availability Zone and Different Storage Message-ID: <553782B2.7030607@skyatlas.com> Hello operators, We have a running installation of OpenStack with block storage. We are currently having a problem with our storage appliance and we would like to migrate the instances from this storage appliance.
To do that, we are thinking of creating a new availability zone with a new storage appliance. In the end, we want to move the instances from our current storage appliance to another. Those storage appliances are the same and we have enough storage. Basically, the storage appliance and the interface will be the same, but it will be a completely new machine. Is it possible to move instances from one availability zone to another while keeping in mind that those availability zones use different storage? If so, is there any guide to configure this? Does nova/cinder copy the disk of the instance from the old one (old availability zone) to the new one? Lastly, could it be done with minimal/zero downtime (live migration)? Regards, Eren -- Eren Türkay, System Administrator https://skyatlas.com/ | +90 850 885 0357 Yildiz Teknik Universitesi Davutpasa Kampusu Teknopark Bolgesi, D2 Blok No:107 Esenler, Istanbul Pk.34220 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From george.shuklin at gmail.com Wed Apr 22 12:09:47 2015 From: george.shuklin at gmail.com (George Shuklin) Date: Wed, 22 Apr 2015 15:09:47 +0300 Subject: [Openstack-operators] over commit ratios In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E501029EC709@CERNXCHG43.cern.ch> References: <5536C73E.1050108@gmail.com> <5D7F9996EA547448BC6C54C8C5AAF4E501029EC709@CERNXCHG43.cern.ch> Message-ID: <55378F8B.702@gmail.com> Yes, it really depends on the backing technique used. We're using SSDs and raw images, so IO is not an issue. But memory is more important: if you lack IO capability you're left with slow guests. If you lack memory you're left with dead guests (hello, OOM killer). BTW: Swap is needed not to swapin/swapout, but to relieve memory pressure. With properly configured memory, swap-in/swap-out should be less than 2-3. On 04/22/2015 09:49 AM, Tim Bell wrote: > I'd also keep an eye on local I/O...
we've found this to be the resource which can cause the worst noisy neighbours. Swapping makes this worse. > > Tim > >> -----Original Message----- >> From: George Shuklin [mailto:george.shuklin at gmail.com] >> Sent: 21 April 2015 23:55 >> To: openstack-operators at lists.openstack.org >> Subject: Re: [Openstack-operators] over commit ratios >> >> It's very depend on production type. >> >> If you can control guests and predict their memory consumption, use it as base >> for ratio. >> If you can't (typical for public clouds) - use 1 or smaller with >> reserved_host_memory_mb in nova.conf. >> >> And one more: some swap sapce is really necessary. Add at least twice of >> reserved_host_memory_mb - it really improves performance and prevents >> strange OOMs in the situation of very large host with very small dom0 footprint. >> >> On 04/21/2015 10:59 PM, Caius Howcroft wrote: >>> Just a general question: what kind of over commit ratios do people >>> normally run in production with? >>> >>> We currently run 2 for cpu and 1 for memory (with some held back for >>> OS/ceph) >>> >>> i.e.: >>> default['bcpc']['nova']['ram_allocation_ratio'] = 1.0 >>> default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often >>> larger default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0 >>> >>> Caius >>> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From ayoung at redhat.com Wed Apr 22 13:32:17 2015 From: ayoung at redhat.com (Adam Young) Date: Wed, 22 Apr 2015 09:32:17 -0400 Subject: [Openstack-operators] Sharing resources across OpenStack instances Message-ID: <5537A2E1.2080406@redhat.com> It's been my understanding that many people are deploying small OpenStack instances as a way to share the hardware owned by their particular team, group, or department.
The Keystone instance represents ownership, and the identity of the users comes from a corporate LDAP server. Is there much demand for the following scenarios? 1. A project team crosses organizational boundaries and has to work with VMs in two separate OpenStack instances. They need to set up a network that requires talking to two neutron instances. 2. One group manages a powerful storage array. Several OpenStack instances need to be able to mount volumes from this array. Sometimes, those volumes have to be transferred from VMs running in one instance to another. 3. A group is producing nightly builds. Part of this is an image building system that posts to glance. Ideally, multiple OpenStack instances would be able to pull their images from the same glance. 4. Hadoop (or some other orchestrated task) requires more resources than are in any single OpenStack instance, and needs to allocate resources across two or more instances for a single job. I suspect that these kinds of architectures are becoming more common. Can some of the operators validate these assumptions? Are there other, more common cases where Operations need to span multiple clouds which would require integration of one Nova server with multiple Cinder, Glance, or Neutron servers managed in other OpenStack instances? From Kevin.Fox at pnnl.gov Wed Apr 22 13:50:47 2015 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 22 Apr 2015 13:50:47 +0000 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <5537A2E1.2080406@redhat.com> References: <5537A2E1.2080406@redhat.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> This is a case for a cross-project cloud (institutional?). It costs more to run two little clouds than one bigger one, both in terms of manpower and, in cases like these, underutilized resources. #3 is interesting though.
If there is to be an OpenStack app catalog, it would be important to be able to pull the needed images from outside the cloud easily. Thanks, Kevin ________________________________ From: Adam Young Sent: Wednesday, April 22, 2015 6:32:17 AM To: openstack-operators at lists.openstack.org Subject: [Openstack-operators] Sharing resources across OpenStack instances It's been my understanding that many people are deploying small OpenStack instances as a way to share the hardware owned by their particular team, group, or department. The Keystone instance represents ownership, and the identity of the users comes from a corporate LDAP server. Is there much demand for the following scenarios? 1. A project team crosses organizational boundaries and has to work with VMs in two separate OpenStack instances. They need to set up a network that requires talking to two neutron instances. 2. One group manages a powerful storage array. Several OpenStack instances need to be able to mount volumes from this array. Sometimes, those volumes have to be transferred from VMs running in one instance to another. 3. A group is producing nightly builds. Part of this is an image building system that posts to glance. Ideally, multiple OpenStack instances would be able to pull their images from the same glance. 4. Hadoop (or some other orchestrated task) requires more resources than are in any single OpenStack instance, and needs to allocate resources across two or more instances for a single job. I suspect that these kinds of architectures are becoming more common. Can some of the operators validate these assumptions? Are there other, more common cases where Operations need to span multiple clouds which would require integration of one Nova server with multiple Cinder, Glance, or Neutron servers managed in other OpenStack instances?
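Scenario #3 can also be approximated today with a small image-sync step between per-cloud glances. The sketch below is illustrative only: the `src`/`dst` objects stand in for per-cloud image services (they are not the real glanceclient API), and it assumes images are tracked by name plus the MD5 checksum that Glance recorded at the time.

```python
import hashlib

def sync_image(src, dst, name):
    """Copy image `name` from cloud `src` to cloud `dst` unless an
    identical copy is already registered there.

    `src` and `dst` are hypothetical stand-ins for per-cloud image
    services: get(name) returns a dict with a 'checksum' key (or None),
    download(name) returns the image bytes, and
    upload(name, checksum, data) registers the image.
    """
    meta = src.get(name)
    existing = dst.get(name)
    if existing is not None and existing["checksum"] == meta["checksum"]:
        return "in-sync"
    data = src.download(name)
    # Verify the payload against the source checksum before registering
    # it on the target cloud.
    if hashlib.md5(data).hexdigest() != meta["checksum"]:
        raise ValueError("checksum mismatch while copying %s" % name)
    dst.upload(name, meta["checksum"], data)
    return "copied"
```

Because the checksum comparison makes the step idempotent, a cron job could re-run it per image and per target cloud without re-copying anything.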
_______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandy.walsh at RACKSPACE.COM Wed Apr 22 14:18:18 2015 From: sandy.walsh at RACKSPACE.COM (Sandy Walsh) Date: Wed, 22 Apr 2015 14:18:18 +0000 Subject: [Openstack-operators] StackTach.v3 now in production ... Message-ID: <1429712300263.7013@RACKSPACE.COM> (sorry for cross-post, but this is appropriate to both audiences) Hey y'all! For those of you that don't know, StackTach is a notification-based debugging, monitoring and usage tool for OpenStack. We're happy to announce that we've recently rolled StackTach.v3 into production at one of the Rax datacenters with a plan to roll out to the rest asap. Once we do, we'll be bumping all the library versions to 1.0, but we encourage you to start playing with the system now. The docs and screencasts are at www.stacktach.com We live on #stacktach on freenode (for v2 and v3 questions) All the StackTach code is on stackforge https://github.com/stackforge?query=stacktach This is a very exciting time for us. With StackTach.v3 we've: * solved many of the scaling, redundancy and idempotency problems of v2 * modularized the entire system (use only the parts you want) * made the system less rigid with respect to Nova and Glance. Now, nearly any JSON notification can be handled (even outside of OpenStack) * created a very flexible REST API with pluggable implementation drivers. So, if you don't like our solution but want to keep a compatible API, all the pieces are there for you, including cmdline tools and client libraries. * included a devstack-like sandbox for you to play in that doesn't require an OpenStack installation to generate notifications * developed a way to run STv3 side-by-side with your existing notification consumers for safe trials.
We can split notification queues without requiring any changes to your OpenStack deployment (try *that* with oslo-messaging ;) If you haven't looked at your OpenStack deployment from the perspective of notifications, you're really missing out. It's the most powerful way to debug your installations. And, for usage metering, there is really no better option. We feel StackTach.v3 is the best solution out there for all your event-processing needs. Let us know how we can help! We're in a good place to squash bugs quickly. Cheers -Sandy, Dragon and the rest of the StackTach.v3 team/contributors From subbu at subbu.org Wed Apr 22 15:44:07 2015 From: subbu at subbu.org (Allamaraju, Subbu) Date: Wed, 22 Apr 2015 08:44:07 -0700 Subject: [Openstack-operators] over commit ratios In-Reply-To: <55378F8B.702@gmail.com> References: <5536C73E.1050108@gmail.com> <5D7F9996EA547448BC6C54C8C5AAF4E501029EC709@CERNXCHG43.cern.ch> <55378F8B.702@gmail.com> Message-ID: In addition to these factors, collocation happens to be another key source of noise. By collocation I mean VMs doing the same/similar work running on the same hypervisor. This happens under low-capacity situations when the scheduler could not enforce anti-affinity. Subbu > On Apr 22, 2015, at 5:09 AM, George Shuklin wrote: > > Yes, it really depends on the backing technique used. We are using SSDs and raw images, so IO is not an issue. > > But memory is more important: if you lack IO capability you are left with slow guests. If you lack memory you are left with dead guests (hello, OOM killer). > > BTW: Swap is needed not for swap-in/swap-out, but to relieve memory pressure. With properly configured memory, swap-in/swap-out should be less than 2-3. > > On 04/22/2015 09:49 AM, Tim Bell wrote: >> I'd also keep an eye on local I/O... we've found this to be the resource which can cause the worst noisy neighbours. Swapping makes this worse.
>> >> Tim >> >>> -----Original Message----- >>> From: George Shuklin [mailto:george.shuklin at gmail.com] >>> Sent: 21 April 2015 23:55 >>> To: openstack-operators at lists.openstack.org >>> Subject: Re: [Openstack-operators] over commit ratios >>> >>> It's very depend on production type. >>> >>> If you can control guests and predict their memory consumption, use it as base >>> for ratio. >>> If you can't (typical for public clouds) - use 1 or smaller with >>> reserved_host_memory_mb in nova.conf. >>> >>> And one more: some swap sapce is really necessary. Add at least twice of >>> reserved_host_memory_mb - it really improves performance and prevents >>> strange OOMs in the situation of very large host with very small dom0 footprint. >>> >>> On 04/21/2015 10:59 PM, Caius Howcroft wrote: >>>> Just a general question: what kind of over commit ratios do people >>>> normally run in production with? >>>> >>>> We currently run 2 for cpu and 1 for memory (with some held back for >>>> OS/ceph) >>>> >>>> i.e.: >>>> default['bcpc']['nova']['ram_allocation_ratio'] = 1.0 >>>> default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often >>>> larger default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0 >>>> >>>> Caius >>>> >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From gilles.mocellin at nuagelibre.org Wed Apr 22 16:29:05 2015 From: gilles.mocellin at nuagelibre.org (Gilles Mocellin) Date: Wed, 22 Apr 2015 18:29:05 +0200 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <5537A2E1.2080406@redhat.com> References: <5537A2E1.2080406@redhat.com> Message-ID: 
<5537CC51.2000902@nuagelibre.org> Le 22/04/2015 15:32, Adam Young a écrit : > It's been my understanding that many people are deploying small > OpenStack instances as a way to share the hardware owned by their > particular team, group, or department. The Keystone instance > represents ownership, and the identity of the users comes from a > corporate LDAP server. > > Is there much demand for the following scenarios? > > 1. A project team crosses organizational boundaries and has to work > with VMs in two separate OpenStack instances. They need to set up a > network that requires talking to two neutron instances. > > 2. One group manages a powerful storage array. Several OpenStack > instances need to be able to mount volumes from this array. Sometimes, > those volumes have to be transferred from VMs running in one instance > to another. > > 3. A group is producing nightly builds. Part of this is an image > building system that posts to glance. Ideally, multiple OpenStack > instances would be able to pull their images from the same glance. > > 4. Hadoop (or some other orchestrated task) requires more resources > than are in any single OpenStack instance, and needs to allocate > resources across two or more instances for a single job. > > > I suspect that these kinds of architectures are becoming more common. > Can some of the operators validate these assumptions? Are there other, > more common cases where Operations need to span multiple clouds which > would require integration of one Nova server with multiple Cinder, > Glance, or Neutron servers managed in other OpenStack instances? I'm always a bit disappointed when someone asks me about hybridization with OpenStack. OpenStack is not a Cloud Management Platform that can manage several clouds, private and public... like Red Hat CloudForms / ManageIQ. At the very least, being able to manage two OpenStack instances, one private and one from a public cloud, with the OpenStack API would be great.
I found a project by Huawei to cascade OpenStack instances: https://wiki.openstack.org/wiki/OpenStack_cascading_solution I really would like to see that become reality. Perhaps it can be a solution to your scenarios? From jlk at bluebox.net Wed Apr 22 17:05:57 2015 From: jlk at bluebox.net (Jesse Keating) Date: Wed, 22 Apr 2015 10:05:57 -0700 Subject: [Openstack-operators] over commit ratios In-Reply-To: References: <5536C73E.1050108@gmail.com> <5D7F9996EA547448BC6C54C8C5AAF4E501029EC709@CERNXCHG43.cern.ch> <55378F8B.702@gmail.com> Message-ID: A Juno feature may help with this, utilization-based scheduling: https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling That helps when landing the instance, but doesn't help if utilization changes /after/ instances have landed, though it could help with a resize action to relocate the instance. - jlk On Wed, Apr 22, 2015 at 8:44 AM, Allamaraju, Subbu wrote: > In addition to these factors, collocation happens to be another key source > of noise. By collocation I mean VMs doing the same/similar work running on > the same hypervisor. This happens under low-capacity situations when the > scheduler could not enforce anti-affinity. > > Subbu > > > On Apr 22, 2015, at 5:09 AM, George Shuklin > wrote: > > > > Yes, it really depends on the backing technique used. We are using SSDs and > raw images, so IO is not an issue. > > > > But memory is more important: if you lack IO capability you are left with > slow guests. If you lack memory you are left with dead guests (hello, OOM > killer). > > > > BTW: Swap is needed not for swap-in/swap-out, but to relieve memory > pressure. With properly configured memory, swap-in/swap-out should be less than > 2-3. > > > > On 04/22/2015 09:49 AM, Tim Bell wrote: > >> I'd also keep an eye on local I/O... we've found this to be the > resource which can cause the worst noisy neighbours. Swapping makes this > worse.
> >> > >> Tim > >> > >>> -----Original Message----- > >>> From: George Shuklin [mailto:george.shuklin at gmail.com] > >>> Sent: 21 April 2015 23:55 > >>> To: openstack-operators at lists.openstack.org > >>> Subject: Re: [Openstack-operators] over commit ratios > >>> > >>> It's very depend on production type. > >>> > >>> If you can control guests and predict their memory consumption, use it > as base > >>> for ratio. > >>> If you can't (typical for public clouds) - use 1 or smaller with > >>> reserved_host_memory_mb in nova.conf. > >>> > >>> And one more: some swap sapce is really necessary. Add at least twice > of > >>> reserved_host_memory_mb - it really improves performance and prevents > >>> strange OOMs in the situation of very large host with very small dom0 > footprint. > >>> > >>> On 04/21/2015 10:59 PM, Caius Howcroft wrote: > >>>> Just a general question: what kind of over commit ratios do people > >>>> normally run in production with? > >>>> > >>>> We currently run 2 for cpu and 1 for memory (with some held back for > >>>> OS/ceph) > >>>> > >>>> i.e.: > >>>> default['bcpc']['nova']['ram_allocation_ratio'] = 1.0 > >>>> default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often > >>>> larger default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0 > >>>> > >>>> Caius > >>>> > >>> > >>> _______________________________________________ > >>> OpenStack-operators mailing list > >>> OpenStack-operators at lists.openstack.org > >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part 
-------------- An HTML attachment was scrubbed... URL: From everett.toews at RACKSPACE.COM Wed Apr 22 19:08:47 2015 From: everett.toews at RACKSPACE.COM (Everett Toews) Date: Wed, 22 Apr 2015 19:08:47 +0000 Subject: [Openstack-operators] [all][api] 3 API Guidelines up for final review Message-ID: <6EE4A867-A064-424A-9067-5A3889DA63C4@rackspace.com> Hi All, We have 3 API Guidelines that are ready for a final review. 1. Metadata guidelines document https://review.openstack.org/#/c/141229/ 2. Tagging guidelines https://review.openstack.org/#/c/155620/ 3. Guidelines on using date and time format https://review.openstack.org/#/c/159892/ If the API Working Group hasn't received any further feedback, we'll merge them on April 29. Thanks, Everett From gord at live.ca Wed Apr 22 19:19:19 2015 From: gord at live.ca (gordon chung) Date: Wed, 22 Apr 2015 15:19:19 -0400 Subject: [Openstack-operators] [Ceilometer] Gnocchi - capturing time-series data Message-ID: hi folks, there's a session coming up at the summit so we can discuss Ceilometer and give/get some feedback but i wanted to highlight some of the work we've been doing, specifically relating to storing measurement values. as many of you have heard/read, we're building this thing called Gnocchi. to explain what Gnocchi is, Julien, a key developer of Gnocchi, wrote up an introduction[1] which i hope explains a bit of the concepts of Gnocchi and what it aims to achieve. from this, i hope it answers some questions, generates some ideas on how it can be used, and raises a few thoughts on how to improve it.
[1] http://julien.danjou.info/blog/2015/openstack-gnocchi-first-release cheers, gord From marc.heckmann at ubisoft.com Thu Apr 23 00:26:14 2015 From: marc.heckmann at ubisoft.com (Marc Heckmann) Date: Thu, 23 Apr 2015 00:26:14 +0000 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> Message-ID: <1429748774.53513.7.camel@localhost.localdomain> [top posting on this one] Hi, When you write "Openstack instances", I'm assuming that you're referring to OpenStack deployments, right? We have different deployments based on geographic regions for performance concerns but certainly not by department. Each OpenStack project is tied to a department/project budget code and re-billed accordingly based on Ceilometer data. No need to have separate deployments for that. Central IT owns all the Cloud infra. In the separate deployments the only thing that we aim to have shared is Swift and Keystone (it's not the case for us right now). Glance images need to be identical between deployments but that's easily achievable through automation both for the operator and the end user. We make sure that the users understand that these are separate regions/Clouds akin to AWS regions. -m On Wed, 2015-04-22 at 13:50 +0000, Fox, Kevin M wrote: > This is a case for a cross-project cloud (institutional?). It costs > more to run two little clouds than one bigger one, both in terms of > manpower and, in cases like these, underutilized resources. > > #3 is interesting though. If there is to be an OpenStack app catalog, > it would be important to be able to pull the needed images from > outside the cloud easily.
> > Thanks, > Kevin > > > ______________________________________________________________________ > From: Adam Young > Sent: Wednesday, April 22, 2015 6:32:17 AM > To: openstack-operators at lists.openstack.org > Subject: [Openstack-operators] Sharing resources across OpenStack > instances > > > Its been my understanding that many people are deploying small > OpenStack > instances as a way to share the Hardware owned by their particular > team, > group, or department. The Keystone instance represents ownership, > and > the identity of the users comes from a corporate LDAP server. > > Is there much demand for the following scenarios? > > 1. A project team crosses organizational boundaries and has to work > with VMs in two separate OpenStack instances. They need to set up a > network that requires talking to two neutron instances. > > 2. One group manages a powerful storage array. Several OpenStack > instances need to be able to mount volumes from this array. > Sometimes, > those volumes have to be transferred from VMs running in one instance > to > another. > > 3. A group is producing nightly builds. Part of this is an image > building system that posts to glance. Ideally, multiple OpenStack > instances would be able to pull their images from the same glance. > > 4. Hadoop ( or some other orchestrated task) requires more resources > than are in any single OpenStack instance, and needs to allocate > resources across two or more instances for a single job. > > > I suspect that these kinds of architectures are becoming more > common. > Can some of the operators validate these assumptions? Are there > other, > more common cases where Operations need to span multiple clouds which > would require integration of one Nova server with multiple Cinder, > Glance, or Neutron servers managed in other OpenStack instances? 
> > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From mismith at overstock.com Thu Apr 23 04:13:08 2015 From: mismith at overstock.com (Mike Smith) Date: Thu, 23 Apr 2015 04:13:08 +0000 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <1429748774.53513.7.camel@localhost.localdomain> References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> <1429748774.53513.7.camel@localhost.localdomain> Message-ID: <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> At Overstock we have a number of separate OpenStack deployments in different facilities that are completely separated from each other. No shared services between them. Some of the separation is due to the kind of instances they contain ("Dev/Test" vs "Prod" for example), but it is largely due to the location diversity and the desire to not require everything to be upgraded at once. We have automation to build and push out new glance images every month (built via Oz) to the multiple glance instances. Our home-made orchestration knows how to provision to the multiple clouds. We are not currently using corporate LDAP for these, but our orchestration tool does use AD and that's where we do most of our work from anyway. All of these are managed by our Website Systems team and we do basic show-back to various teams that use these services using data from nova statistics (no Ceilometer yet).
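An audit step keeps that kind of push automation honest. A minimal drift check, sketched under the assumption that each deployment has already been summarized into a mapping of image name to checksum (how those mappings are collected, via glanceclient or the openstack CLI, is left open; the function name is invented for illustration):

```python
def image_drift(regions):
    """regions: {region name: {image name: checksum}}.

    Return, sorted, the image names that are missing from some region
    or whose checksums disagree across regions.
    """
    names = set().union(*regions.values())
    drifted = []
    for name in sorted(names):
        # One entry per distinct checksum; None marks a missing image.
        checksums = {images.get(name) for images in regions.values()}
        if len(checksums) != 1 or None in checksums:
            drifted.append(name)
    return drifted
```

Running this nightly and alerting on a non-empty result would catch a region that missed a monthly image push.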
Mike Smith Principal Engineer, Website Systems Overstock.com > On Apr 22, 2015, at 6:26 PM, Marc Heckmann wrote: > > [top posting on this one] > > Hi, > > When you write "Openstack instances", I'm assuming that you're referring > to Openstack deployments right? > > We have different deployments based on geographic regions for > performance concerns but certainly not by department. Each Openstack > project is tied to a department/project budget code and re-billed > accordingly based on Ceilometer data. No need to have separate > deployments for that. Central IT owns all the Cloud infra. > > In the separate deployments the only thing that we aim to have shared is > Swift and Keystone (it's not the case for us right now). > > Glance images need to be identical between deployments but that's easily > achievable through automation both for the operator and the end user. > > We make sure that the users understand that these are separate > regions/Clouds akin to AWS regions. > > -m > > On Wed, 2015-04-22 at 13:50 +0000, Fox, Kevin M wrote: >> This is a case for a cross project cloud (institutional?). It costs >> more to run two little clouds then one bigger one. Both in terms of >> man power, and in cases like these. under utilized resources. >> >> #3 is interesting though. If there is to be an openstack app catalog, >> it would be inportant to be able to pull the needed images from >> outside the cloud easily. >> >> Thanks, >> Kevin >> >> >> ______________________________________________________________________ >> From: Adam Young >> Sent: Wednesday, April 22, 2015 6:32:17 AM >> To: openstack-operators at lists.openstack.org >> Subject: [Openstack-operators] Sharing resources across OpenStack >> instances >> >> >> Its been my understanding that many people are deploying small >> OpenStack >> instances as a way to share the Hardware owned by their particular >> team, >> group, or department. 
The Keystone instance represents ownership, >> and >> the identity of the users comes from a corporate LDAP server. >> >> Is there much demand for the following scenarios? >> >> 1. A project team crosses organizational boundaries and has to work >> with VMs in two separate OpenStack instances. They need to set up a >> network that requires talking to two neutron instances. >> >> 2. One group manages a powerful storage array. Several OpenStack >> instances need to be able to mount volumes from this array. >> Sometimes, >> those volumes have to be transferred from VMs running in one instance >> to >> another. >> >> 3. A group is producing nightly builds. Part of this is an image >> building system that posts to glance. Ideally, multiple OpenStack >> instances would be able to pull their images from the same glance. >> >> 4. Hadoop ( or some other orchestrated task) requires more resources >> than are in any single OpenStack instance, and needs to allocate >> resources across two or more instances for a single job. >> >> >> I suspect that these kinds of architectures are becoming more >> common. >> Can some of the operators validate these assumptions? Are there >> other, >> more common cases where Operations need to span multiple clouds which >> would require integration of one Nova server with multiple Cinder, >> Glance, or Neutron servers managed in other OpenStack instances? 
>> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators ________________________________ CONFIDENTIALITY NOTICE: This message is intended only for the use and review of the individual or entity to which it is addressed and may contain information that is privileged and confidential. If the reader of this message is not the intended recipient, or the employee or agent responsible for delivering the message solely to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately by telephone or return email. Thank you. From Tim.Bell at cern.ch Thu Apr 23 07:24:11 2015 From: Tim.Bell at cern.ch (Tim Bell) Date: Thu, 23 Apr 2015 07:24:11 +0000 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> <1429748774.53513.7.camel@localhost.localdomain> <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> Message-ID: Image sharing does seem to be quite a common requirement between multiple clouds with some degree of trust. Has anyone found a good way to do this without needing the user to upload to each cloud (and handle the associated consistency themselves) ? 
Tim On 4/23/15, 6:13 AM, "Mike Smith" wrote: >At Overstock we have a number of separate OpenStack deployments in >different facilities that are completely separated from each other. No >shared services between them. Some of the separation is due to the kind >of instances they contain ("Dev/Test" vs "Prod" for example), but it is >largely due to the location diversity and the desire to not require >everything to be upgraded at once. > >We have automation to build and push out new glance images every month >(built via Oz) to the multiple glance instances. Our home-made >orchestration knows how to provision to the multiple clouds. We are not >currently using corporate LDAP for these, but our orchestration tool does >use AD and that's where we do most of our work from anyway. All of these >are managed by our Website Systems team and we do basic show-back to >various teams that use these services using data from nova statistics (no >Ceilometer yet). > >Mike Smith >Principal Engineer, Website Systems >Overstock.com > > > >> On Apr 22, 2015, at 6:26 PM, Marc Heckmann >>wrote: >> >> [top posting on this one] >> >> Hi, >> >> When you write "Openstack instances", I'm assuming that you're referring >> to OpenStack deployments, right? >> >> We have different deployments based on geographic regions for >> performance concerns but certainly not by department. Each OpenStack >> project is tied to a department/project budget code and re-billed >> accordingly based on Ceilometer data. No need to have separate >> deployments for that. Central IT owns all the Cloud infra. >> >> In the separate deployments the only thing that we aim to have shared is >> Swift and Keystone (it's not the case for us right now). >> >> Glance images need to be identical between deployments but that's easily >> achievable through automation both for the operator and the end user. >> >> We make sure that the users understand that these are separate >> regions/Clouds akin to AWS regions.
>> >> -m >> >> On Wed, 2015-04-22 at 13:50 +0000, Fox, Kevin M wrote: >>> This is a case for a cross project cloud (institutional?). It costs >>> more to run two little clouds then one bigger one. Both in terms of >>> man power, and in cases like these. under utilized resources. >>> >>> #3 is interesting though. If there is to be an openstack app catalog, >>> it would be inportant to be able to pull the needed images from >>> outside the cloud easily. >>> >>> Thanks, >>> Kevin >>> >>> >>> ______________________________________________________________________ >>> From: Adam Young >>> Sent: Wednesday, April 22, 2015 6:32:17 AM >>> To: openstack-operators at lists.openstack.org >>> Subject: [Openstack-operators] Sharing resources across OpenStack >>> instances >>> >>> >>> Its been my understanding that many people are deploying small >>> OpenStack >>> instances as a way to share the Hardware owned by their particular >>> team, >>> group, or department. The Keystone instance represents ownership, >>> and >>> the identity of the users comes from a corporate LDAP server. >>> >>> Is there much demand for the following scenarios? >>> >>> 1. A project team crosses organizational boundaries and has to work >>> with VMs in two separate OpenStack instances. They need to set up a >>> network that requires talking to two neutron instances. >>> >>> 2. One group manages a powerful storage array. Several OpenStack >>> instances need to be able to mount volumes from this array. >>> Sometimes, >>> those volumes have to be transferred from VMs running in one instance >>> to >>> another. >>> >>> 3. A group is producing nightly builds. Part of this is an image >>> building system that posts to glance. Ideally, multiple OpenStack >>> instances would be able to pull their images from the same glance. >>> >>> 4. 
Hadoop (or some other orchestrated task) requires more resources >>> than are in any single OpenStack instance, and needs to allocate >>> resources across two or more instances for a single job. >>> >>> >>> I suspect that these kinds of architectures are becoming more >>> common. >>> Can some of the operators validate these assumptions? Are there >>> other, >>> more common cases where Operations need to span multiple clouds which >>> would require integration of one Nova server with multiple Cinder, >>> Glance, or Neutron servers managed in other OpenStack instances? >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>_______________________________________________ >OpenStack-operators mailing list >OpenStack-operators at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From richard at raseley.com Thu Apr 23 16:55:42 2015 From: richard at raseley.com (Richard Raseley) Date: Thu, 23 Apr 2015 09:55:42 -0700 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> <1429748774.53513.7.camel@localhost.localdomain> <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> Message-ID: <5539240E.70808@raseley.com> Tim Bell wrote: > Has anyone found a good way to do this without needing the user to upload > to each cloud (and handle the associated consistency themselves) ? For the use-case of inter-cloud replication and Swift, it seems pretty straight forward to configure 'Container to Container Synchronization'[0]. I haven't done this specifically before, but it seems like you would just then need some sort of auditing script running on both sides to make Glance aware of replicated images. Regards, Richard Raseley SysOps Engineer Puppet Labs [0] - http://docs.openstack.org/developer/swift/overview_container_sync.html From Kevin.Fox at pnnl.gov Thu Apr 23 18:21:32 2015 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 23 Apr 2015 18:21:32 +0000 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <5539240E.70808@raseley.com> References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> <1429748774.53513.7.camel@localhost.localdomain> <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> ,<5539240E.70808@raseley.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01A1ED582@EX10MBOX03.pnnl.gov> Some folks have been discussing an app store model. 
perhaps in Murano, but more global, that would allow images/templates to be registered somewhere like on openstack.org and show up on all clouds that have the global repo enabled. Murano would be extended to fetch the images/templates to the local glance on the first launch of an app if it's not already cached locally. Would that sort of thing fit better than trying to sync Glance's backing store between clouds? Thanks, Kevin ________________________________________ From: Richard Raseley [richard at raseley.com] Sent: Thursday, April 23, 2015 9:55 AM To: Tim Bell Cc: openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] Sharing resources across OpenStack instances Tim Bell wrote: > Has anyone found a good way to do this without needing the user to upload > to each cloud (and handle the associated consistency themselves) ? For the use-case of inter-cloud replication and Swift, it seems pretty straightforward to configure 'Container to Container Synchronization'[0]. I haven't done this specifically before, but it seems like you would just then need some sort of auditing script running on both sides to make Glance aware of replicated images.
Regards, Richard Raseley SysOps Engineer Puppet Labs [0] - http://docs.openstack.org/developer/swift/overview_container_sync.html _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From richard at raseley.com Thu Apr 23 19:01:40 2015 From: richard at raseley.com (Richard Raseley) Date: Thu, 23 Apr 2015 12:01:40 -0700 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A1ED582@EX10MBOX03.pnnl.gov> References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> <1429748774.53513.7.camel@localhost.localdomain> <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> , <5539240E.70808@raseley.com> <1A3C52DFCD06494D8528644858247BF01A1ED582@EX10MBOX03.pnnl.gov> Message-ID: <55394194.9080607@raseley.com> Fox, Kevin M wrote: > Some folks have been discussing an app store model. perhaps in > Murano, but more global, that would allow images/template to be > registered somewhere like on openstack.org and show up on all clouds > that have the global repo enabled. Murano would be extended to fetch > the images/templates to the local glance on the first launch of an > app if its not already cached locally. > > Would that sort of thing fit better then trying to sync glances > backing store between clouds? My first take is that what you've outlined feels a little too ambitious to me. I also have concerns (feature and scope creep) about whether or not Murano (or any other project) should be in the business of managing and auditing replication of data between various other (potentially disparate) systems. I think there is tremendous value at the core of the idea, which is essentially the ability to easily (and granularly) share and consume resources between clouds. 
To me that feels much more like something that would be addressed on a per-project basis (as has been done in Swift). In the model I am imagining, the underlying components of the cloud would be responsible for inter-cloud data replication. In the specific case of Glance images, it could either take the form of a pub / sub model at the Glance level (which would allow replication of images between Glance systems utilizing different back-ends) or the form of backend <-> backend replication (e.g. with Swift Container Replication) and then Glance would simply have a process for discovering new images which have appeared on the back-end. Regards, Richard Raseley SysOps Engineer Puppet Labs From Kevin.Fox at pnnl.gov Thu Apr 23 19:31:33 2015 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 23 Apr 2015 19:31:33 +0000 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <55394194.9080607@raseley.com> References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> <1429748774.53513.7.camel@localhost.localdomain> <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> ,<5539240E.70808@raseley.com> <1A3C52DFCD06494D8528644858247BF01A1ED582@EX10MBOX03.pnnl.gov>, <55394194.9080607@raseley.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01A1ED6AD@EX10MBOX03.pnnl.gov> I'm thinking more like this use case: I'm a cloud user, I want a new Trac site, or an Elastic Search cluster, or a Jenkins system for my project. Why do I have to take a lot of time to deploy these things myself? Ideally, one person, or a small group of folks, provides generic templates for a common deployment of a given piece of software that can easily be discovered/launched by end users of any cloud. Docker is starting to do this with the Docker Hub. Docker's too low level to do it really nicely though. Say for an ElasticSearch cluster, you want it scalable, with between M and N instances, with a load balancer and a private network.
Heat can do that... Murano is starting to enable that ability, but requires the cloud admin to preload up all the bits into it. That doesn't scale well though. I guess the how of how that works is up for grabs. Murano might not be the right place for it. What other use cases for transferring resources between clouds can you think of? Perhaps if the app store like functionality was simply http based, the existing glance fetch image from http mechanism would already work. Another option would be for it to work similarly to yum/apt. You have a set of repo's. You can search though the enabled repo's and install stuff. Perhaps it has a yum-cron like "update the images that are 'installed' daily" like feature... Thanks, Kevin ________________________________________ From: Richard Raseley [richard at raseley.com] Sent: Thursday, April 23, 2015 12:01 PM To: Fox, Kevin M Cc: openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] Sharing resources across OpenStack instances Fox, Kevin M wrote: > Some folks have been discussing an app store model. perhaps in > Murano, but more global, that would allow images/template to be > registered somewhere like on openstack.org and show up on all clouds > that have the global repo enabled. Murano would be extended to fetch > the images/templates to the local glance on the first launch of an > app if its not already cached locally. > > Would that sort of thing fit better then trying to sync glances > backing store between clouds? My first take is that what you've outlined feels a little too ambitious to me. I also have concerns (feature and scope creep) about whether or not Murano (or any other project) should be in the business of managing and auditing replication of data between various other (potentially disparate) systems. I think there is tremendous value at the core of the idea, which is essentially the ability to easily (and granularly) share and consume resources between clouds. 
To me that feels much more like something that would be interested on a per-project basis (as has been done in Swift). In the model I am imagining, the underlying components of the cloud would be responsible for inter-cloud data replication. In the specific case of Glance images, it could either take the form of a pub / sub model at the Glance level (which would allow replication of images between Glance systems utilizing different back-ends) or the form of backend <-> backend replication (e.g. with Swift Container Replication) and then Glance would simply have a process for discovering new images which have appeared on the back-end. Regards, Richard Raseley SysOps Engineer Puppet Labs From matt at nycresistor.com Thu Apr 23 19:55:42 2015 From: matt at nycresistor.com (matt) Date: Thu, 23 Apr 2015 15:55:42 -0400 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A1ED6AD@EX10MBOX03.pnnl.gov> References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> <1429748774.53513.7.camel@localhost.localdomain> <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> <5539240E.70808@raseley.com> <1A3C52DFCD06494D8528644858247BF01A1ED582@EX10MBOX03.pnnl.gov> <55394194.9080607@raseley.com> <1A3C52DFCD06494D8528644858247BF01A1ED6AD@EX10MBOX03.pnnl.gov> Message-ID: Man I'd love for some mesos integration with openstack... On Thu, Apr 23, 2015 at 3:31 PM, Fox, Kevin M wrote: > I'm thinking more like this use case: > I'm a cloud user, I want a new Trac site. or an Elastic Search cluster. Or > a Jenkins system for my project. Why do I have to take a lot of time to > deploy these things myself? Ideally, one person, or a small group of folks > provide generic templates a common deployment of a given software that can > easily be discovered/launched by end users of any cloud. Docker is starting > to do this with the Docker Hub. 
Docker's too low level to do it really > nicely though. Say for an ElasticSearch cluster, you want it scalable, with > between M and N instances, with a load balancer and a private network. Heat > can do that... Murano is starting to enable that ability, but requires the > cloud admin to preload up all the bits into it. That doesn't scale well > though. > > I guess the how of how that works is up for grabs. Murano might not be the > right place for it. > > What other use cases for transferring resources between clouds can you > think of? > > Perhaps if the app store like functionality was simply http based, the > existing glance fetch image from http mechanism would already work. > > Another option would be for it to work similarly to yum/apt. You have a > set of repo's. You can search though the enabled repo's and install stuff. > Perhaps it has a yum-cron like "update the images that are 'installed' > daily" like feature... > > Thanks, > Kevin > ________________________________________ > From: Richard Raseley [richard at raseley.com] > Sent: Thursday, April 23, 2015 12:01 PM > To: Fox, Kevin M > Cc: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] Sharing resources across OpenStack > instances > > Fox, Kevin M wrote: > > Some folks have been discussing an app store model. perhaps in > > Murano, but more global, that would allow images/template to be > > registered somewhere like on openstack.org and show up on all clouds > > that have the global repo enabled. Murano would be extended to fetch > > the images/templates to the local glance on the first launch of an > > app if its not already cached locally. > > > > Would that sort of thing fit better then trying to sync glances > > backing store between clouds? > > My first take is that what you've outlined feels a little too ambitious > to me. 
I also have concerns (feature and scope creep) about whether or > not Murano (or any other project) should be in the business of managing > and auditing replication of data between various other (potentially > disparate) systems. > > I think there is tremendous value at the core of the idea, which is > essentially the ability to easily (and granularly) share and consume > resources between clouds. To me that feels much more like something that > would be interested on a per-project basis (as has been done in Swift). > > In the model I am imagining, the underlying components of the cloud > would be responsible for inter-cloud data replication. > > In the specific case of Glance images, it could either take the form of > a pub / sub model at the Glance level (which would allow replication of > images between Glance systems utilizing different back-ends) or the form > of backend <-> backend replication (e.g. with Swift Container > Replication) and then Glance would simply have a process for discovering > new images which have appeared on the back-end. > > Regards, > > Richard Raseley > > SysOps Engineer > Puppet Labs > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From richard at raseley.com Thu Apr 23 21:34:01 2015 From: richard at raseley.com (Richard Raseley) Date: Thu, 23 Apr 2015 14:34:01 -0700 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A1ED6AD@EX10MBOX03.pnnl.gov> References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> <1429748774.53513.7.camel@localhost.localdomain> <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> , <5539240E.70808@raseley.com> <1A3C52DFCD06494D8528644858247BF01A1ED582@EX10MBOX03.pnnl.gov>, <55394194.9080607@raseley.com> <1A3C52DFCD06494D8528644858247BF01A1ED6AD@EX10MBOX03.pnnl.gov> Message-ID: <55396549.5070602@raseley.com> Fox, Kevin M wrote: > I'm thinking more like this use case: I'm a cloud user, I want a new > Trac site. or an Elastic Search cluster. Or a Jenkins system for my > project. Why do I have to take a lot of time to deploy these things > myself? Ideally, one person, or a small group of folks provide > generic templates a common deployment of a given software that can > easily be discovered/launched by end users of any cloud. Docker is > starting to do this with the Docker Hub. Docker's too low level to do > it really nicely though. Say for an ElasticSearch cluster, you want > it scalable, with between M and N instances, with a load balancer and > a private network. Heat can do that... Murano is starting to enable > that ability, but requires the cloud admin to preload up all the bits > into it. That doesn't scale well though. > > I guess the how of how that works is up for grabs. Murano might not > be the right place for it. > > What other use cases for transferring resources between clouds can > you think of? I have thought of the following: * Swift - Users want to transfer objects or containers. * Glance - Users want to transfer images and their associated metadata. 
* Cinder - Users want to transfer block devices, though this could be plausibly covered by Glance (vols -> images -> vols). * Heat - Users want to transfer stack templates. I know we don't have the ability to natively store these artifacts at this time (in the same way Glance stores images). * Manila - Users want to transfer files or folder hierarchies. > Perhaps if the app store like functionality was simply http based, > the existing glance fetch image from http mechanism would already > work. Seems reasonable, as long as you could give it a feed or endpoint to poll on a regular interval. > > Another option would be for it to work similarly to yum/apt. You have > a set of repo's. You can search though the enabled repo's and install > stuff. Perhaps it has a yum-cron like "update the images that are > 'installed' daily" like feature... I think you're touching on a lot of important (and really good) ideas here; it seems like there are various plausible ways one could go about it. You've also touched on something that I've never really understood, which is why does Murano exist at all? This is obviously off-topic in the context of this particular discussion, but it seems to me like we already have something that can store objects and metadata (Glance) and something that handles application deployment and orchestration (Heat). It seems that the functionality of Murano should've naturally fallen into these two projects with (a) Glance becoming a more general 'artifact' storage and metadata service and (b) Heat gaining the 'catalog' capability of Murano. In this way, specific applications would be represented by their own Heat templates, those templates could then be combined in an ad hoc way to build environments (which I believe is the terminology used in Murano) and then the resulting composition would be orchestrated by Heat. Perhaps I am missing something fundamental which Murano does...
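[Editorial note: the Cinder bullet's "vols -> images -> vols" path can be sketched with the era's CLI clients. This is a hedged illustration only, not part of the original thread: it assumes Kilo-era cinder/glance clients with credentials for each cloud in turn, and the volume/image names, qcow2 format, and 10 GB size are all made up for the example.]

```shell
# Source cloud: turn the volume into a Glance image, then download it.
cinder upload-to-image --disk-format qcow2 my-volume my-volume-img
glance image-download --file my-volume-img.qcow2 my-volume-img

# Destination cloud (switch OS_AUTH_URL/credentials first): upload the
# image, then create a 10 GB volume from it.
glance image-create --name my-volume-img --disk-format qcow2 \
    --container-format bare --file my-volume-img.qcow2
cinder create --image-id <new-image-uuid> --display-name my-volume 10
```

[Attached volumes may need `--force` on the upload step, and nothing here schedules or audits the transfer, which is exactly the gap discussed above.]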
From Kevin.Fox at pnnl.gov Thu Apr 23 22:46:36 2015 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 23 Apr 2015 22:46:36 +0000 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <55396549.5070602@raseley.com> References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> <1429748774.53513.7.camel@localhost.localdomain> <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> ,<5539240E.70808@raseley.com> <1A3C52DFCD06494D8528644858247BF01A1ED582@EX10MBOX03.pnnl.gov>, <55394194.9080607@raseley.com> <1A3C52DFCD06494D8528644858247BF01A1ED6AD@EX10MBOX03.pnnl.gov> <55396549.5070602@raseley.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01A1EF99A@EX10MBOX03.pnnl.gov> > -----Original Message----- > From: Richard Raseley [mailto:richard at raseley.com] > Sent: Thursday, April 23, 2015 2:34 PM > To: Fox, Kevin M > Cc: openstack-operators at lists.openstack.org > Subject: Re: [Openstack-operators] Sharing resources across OpenStack > instances > > Fox, Kevin M wrote: > > I'm thinking more like this use case: I'm a cloud user, I want a new > > Trac site. or an Elastic Search cluster. Or a Jenkins system for my > > project. Why do I have to take a lot of time to deploy these things > > myself? Ideally, one person, or a small group of folks provide generic > > templates a common deployment of a given software that can easily be > > discovered/launched by end users of any cloud. Docker is starting to > > do this with the Docker Hub. Docker's too low level to do it really > > nicely though. Say for an ElasticSearch cluster, you want it scalable, > > with between M and N instances, with a load balancer and a private > > network. Heat can do that... Murano is starting to enable that > > ability, but requires the cloud admin to preload up all the bits into > > it. That doesn't scale well though. > > > > I guess the how of how that works is up for grabs. Murano might not be > > the right place for it. 
> > > > What other use cases for transferring resources between clouds can you > > think of? > > I have thought of the following: > > * Swift - Users want to transfer objects or containers. Makes sense. > * Glance - Users want to transfer images and their associated metadata. Also makes sense. > * Cinder - Users want to transfer block devices, though this could be > plausibly covered by Glance (vols -> images -> vols). Yeah, there are a lot of ways to do this one. There's also snapshots and backups too. :/ I've often wanted to download a cinder volume as a qcow2 file. > * Heat - Users want to transfer stack templates. I know we don't have the > ability to natively store these artifacts at this time (in the same way Glance > stores images). I haven't tried it, but https://www.openstack.org/summit/openstack-paris-summit-2014/session-videos/presentation/glance-artifacts-it-and-39-s-not-just-for-images-anymore claims heat artifacts are working in juno. So maybe specifically heat resource transfer isn't needed if glance has it covered. > > * Manila - Users want to transfer files or folder hierarchies. Hmm.... So is rsync out of the question, or are you more interested in scheduling/managing transfers via the cloud? > > > Perhaps if the app store like functionality was simply http based, the > > existing glance fetch image from http mechanism would already work. > > Seems reasonable, as long as you could give it a feed or endpoint to poll on > a regular interval. > > > > > Another option would be for it to work similarly to yum/apt. You have > > a set of repo's. You can search though the enabled repo's and install > > stuff. Perhaps it has a yum-cron like "update the images that are > > 'installed' daily" like feature...
> > I think you're touching on a lot of important (and really good) ideas here, it > seems like there are various plausible ways one could go about > > You've also touched on something that I've never really understood, which > is why does Murano exist at all? > > This is obviously off-topic in the context of this particular discussion, but it > seems to me like we already have something that can store objects and > metadata (Glance) and something that handles application deployment and > orchestration (Heat). It seems that the functionality of Murano should've > naturally fell into these two projects with (a) Glance becoming a more > general 'artifact' storage and metadata service and (b) Heat gaining the > 'catalog' capability of Murano. In this way, specific applications would be > represented by their own Heat templates, those templates could then be > combined in an ad hoc way to build environments (which I believe is the > terminology used in Murano) and then the the resulting composition would > be orchestrated by Heat. > > Perhaps I am missing something fundamental which Murano does... This was brought up precisely when Murano tried to incubate. And its one of the reasons a lot of functionality got pushed over into Glance I think. I'm not really sure where Murano fits long term in the picture either, but I do know for sure that the app catalog system needs some kind of UI, and if all the rest of the Murano bits get pushed into the other OpenStack projects, I'm guessing that will still be needed. Whether the result is still called Murano, or it ends up just being folded into Horizon's project, that may not matter. The important thing is there's an app catalog. 
Thanks, Kevin From richard at raseley.com Thu Apr 23 23:08:55 2015 From: richard at raseley.com (Richard Raseley) Date: Thu, 23 Apr 2015 16:08:55 -0700 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A1EF99A@EX10MBOX03.pnnl.gov> References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> <1429748774.53513.7.camel@localhost.localdomain> <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> , <5539240E.70808@raseley.com> <1A3C52DFCD06494D8528644858247BF01A1ED582@EX10MBOX03.pnnl.gov>, <55394194.9080607@raseley.com> <1A3C52DFCD06494D8528644858247BF01A1ED6AD@EX10MBOX03.pnnl.gov> <55396549.5070602@raseley.com> <1A3C52DFCD06494D8528644858247BF01A1EF99A@EX10MBOX03.pnnl.gov> Message-ID: <55397B87.6010309@raseley.com> Fox, Kevin M wrote: > I haven?t tried it, > but https://www.openstack.org/summit/openstack-paris-summit-2014/session-videos/presentation/glance-artifacts-it-and-39-s-not-just-for-images-anymore > claims heat artifacts are working in juno. So maybe specifically heat > resource transfer isn't needed if glance has it covered. Very interesting, thank you for sharing. That is very much in-line with what I had hoped for Glance. >>> * Manilla - Users want to transfer files or folder hierarchies. > > Hmm.... So is rsync out of the question, or are you more interested > in scheduling/managing transfers via the cloud? I don't think rsync is out of the question in terms of the underlying replication mechanism - it was more about the ability to schedule / manage. [...] >>> I think you're touching on a lot of important (and really good) >>> ideas here, it seems like there are various plausible ways one >>> could go about >>> >>> You've also touched on something that I've never really >>> understood, which is why does Murano exist at all? >>> >>> [..] > > This was brought up precisely when Murano tried to incubate. 
And it's > one of the reasons a lot of functionality got pushed over into Glance > I think. I'm not really sure where Murano fits long term in the > picture either, but I do know for sure that the app catalog system > needs some kind of UI, and if all the rest of the Murano bits get > pushed into the other OpenStack projects, I'm guessing that will > still be needed. Whether the result is still called Murano, or it > ends up just being folded into Horizon's project, that may not > matter. The important thing is there's an app catalog. Thank you for that additional context. I absolutely agree that the general idea of the app catalog is an important one. Regards, Richard Raseley SysOps Engineer Puppet Labs From stefano at openstack.org Fri Apr 24 22:10:16 2015 From: stefano at openstack.org (Stefano Maffulli) Date: Fri, 24 Apr 2015 15:10:16 -0700 Subject: [Openstack-operators] OpenStack Community Weekly Newsletter (Apr 17 - 24) Message-ID: <1429913416.3440.54.camel@sputacchio.gateway.2wire.net> Why you should attend an OpenStack Summit OpenStack Summits don't miss a beat - with a schedule full of diverse breakout sessions, captivating speakers, off-the-wall evening events and the occasional surprise, it's the twice-yearly event you simply cannot miss. What would you add to the list of 10 most memorable summit moments to date? Gnocchi 1.0: storing metrics and resources at scale Gnocchi provides a scalable way to store and retrieve data and metrics from instances, volumes, networks and all the things that make an OpenStack cloud. Gnocchi also provides a REST API that allows the user to manipulate resources (CRUD) and their attributes, while preserving the history of those resources and their attributes. The Gnocchi team takes great pride in the quality of their documentation too, fully available online.
The Road to Vancouver * Canada Visa Information * Official Hotel Room Blocks * 2015 OpenStack T-Shirt Design Contest * Preparation to Design summit * What's a Design Summit? We can squeeze few more people in Upstream Training Relevant Conversations * Please stop reviewing code while asking questions * A big tent home for Neutron backend code * upcoming library releases to unfreeze requirements in master * [Zaqar] Call for adoption (or exclusion?) * Sharing resources across OpenStack instances * 3 API Guidelines up for final review * Third-Party CI Operators: Let's use a common CI Solution! Deadlines and Development Priorities * Candidates for TC (Technical Committee): election is ongoing. Reports from Previous Events * China hosts OpenStack bug-fix hackathon * PyCon 2015 Security Advisories and Notices * [OSSN 0047] Keystone does not validate that identity providers match federation mappings Tips ?n Tricks * By Steve Hardy: Debugging TripleO Heat templates * By Luc Van Steen: Moving from CityCloud legacy to OpenStack ? Part 1 and Part 2 Upcoming Events OpenStack Israel CFP Voting is Open PyCon-AU Openstack miniconf CFP open * May 05 - 07, 2015 CeBIT AU 2015 Sydney, NSW, AU * May 09, 2015 OpenStack Meetup Hanoi Hanoi, Hanoi, VN * May 18 - 22, 2015 OpenStack Summit May 2015 Vancouver, BC * Jun 02, 2015 OpenStack Day LATAM Mexico City, MX * Jun 04, 2015 OpenStack Days Istanbul ? 2015 Istanbul, TR * Jun 08, 2015 OpenStack CEE Day 2015 Budapest, HU * Jun 11, 2015 OpenStack DACH Day 2015 Berlin, DE * Jun 15, 2015 OpenStack Israel Tel Aviv, IL * Jul 20 - 24, 2015 OSCON 2015 Portland, OR, US Other News * EC2-API release 0.1.0 available * OpenStack SWIFT Object Storage Tape Library Connector * OVN and OpenStack Status ? 2015-04-21 * OpenStack Kilo RC1 for Ubuntu 14.04 LTS and Ubuntu 15.04 * StackTach.v3 now in production The weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. 
If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Sat Apr 25 01:10:14 2015 From: ayoung at redhat.com (Adam Young) Date: Fri, 24 Apr 2015 21:10:14 -0400 Subject: [Openstack-operators] Sharing resources across OpenStack instances In-Reply-To: <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> References: <5537A2E1.2080406@redhat.com> <1A3C52DFCD06494D8528644858247BF01A1EB793@EX10MBOX03.pnnl.gov> <1429748774.53513.7.camel@localhost.localdomain> <74CEBC5A-9ED0-4241-A70D-420689C91FCD@overstock.com> Message-ID: <553AE976.4010502@redhat.com> On 04/23/2015 12:13 AM, Mike Smith wrote: > No shared services between them. Some of the separation is due to the kind of instances they contain (?Dev/Test" vs ?Prod? for example), but it is largely due to the location diversity and the desire to not require everything to be upgraded at once. This is an interesting point; if a mix-and-match approach were takien, it would require a lot of internal maturing of the OpenStack interfaces. We would end up with case of a Kilo Glance being called by a Liberty Nova, for example. That might not be a bad thing, as it would require a degree of discovery and flexibility we don't currently emphasize. The bigger issue, I suspect, is the message queue. Getting that to API stability would be a huge effort, I suspect. I am less interested in what people are actually doing than in the things that you would want to be doing but cannot due to limitations in the authorization and sharing assumptions that make up OpenStack. The Cinder client that runs on the compute node is "owned" buy the cinder server, so to speak. This means that two Cinders with different access control levels cannot be shared by the same cloud. 
From alevine at cloudscaling.com Sat Apr 25 15:40:56 2015 From: alevine at cloudscaling.com (Alexandre Levine) Date: Sat, 25 Apr 2015 18:40:56 +0300 Subject: [Openstack-operators] [openstack-operators] EC2-API release 0.1.0 available In-Reply-To: <553A2D8F.6040207@cloudscaling.com> References: <553A2D8F.6040207@cloudscaling.com> Message-ID: <553BB588.1060103@cloudscaling.com> Hello everyone, We've decided to cut our first release of the standalone EC2-API project. All of the major problems known to us are solved - NovaDB direct access is cut off, performance is checked and improved, and all of the necessary functional, unit and Rally tests are in place. One known caveat is that the required nova API version is 2.3 (microversion). Unfortunately the python-novaclient supporting microversions is not released yet, so several instance properties won't be reported if requested with version 2.1. The 0.1.0 tarball is available for download at: https://launchpad.net/ec2-api/trunk/0.1.0 pip can be used to install the PyPI package. Installation in devstack is also possible. The following line should be added to local.conf or localrc for this: enable_plugin ec2-api https://github.com/stackforge/ec2-api Current README can be accessed on stackforge: https://github.com/stackforge/ec2-api Please report bugs to launchpad: https://bugs.launchpad.net/ec2-api/+filebug Best regards, Alex Levine From mspreitz at us.ibm.com Sat Apr 25 19:13:57 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Sat, 25 Apr 2015 15:13:57 -0400 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC Message-ID: Is there a way to create multiple external networks from Neutron's point of view, where both of those networks are accessed through the same host NIC? Obviously those networks would be using different subnets.
I need this sort of thing because the two subnets are treated differently by the stuff outside of OpenStack, so I need a way that a tenant can get a floating IP of the sort he wants. Since Neutron equates floating IP allocation pools with external networks, I need two external networks. I found, for example, http://www.marcoberube.com/archives/248 --- which describes how to have multiple external networks but uses a distinct host network interface for each one. Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.shuklin at gmail.com Sat Apr 25 19:41:14 2015 From: george.shuklin at gmail.com (George Shuklin) Date: Sat, 25 Apr 2015 22:41:14 +0300 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: References: Message-ID: <553BEDDA.8020203@gmail.com> Can you put them on different VLANs? After that it would be a very easy task. If not, AFAIK, Neutron does not allow this. Or you can trick it into thinking they are separate networks: Create a bridge (br-join) and plug the eth into it. Create two fake external bridges (br-ex1, br-ex2). Join them to br-join with patch links (http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with-patch-ports/) Instruct Neutron that there are two external networks: one on br-ex1, the second on br-ex2. But be aware that this is not a very stable configuration; you need to maintain it yourself. On 04/25/2015 10:13 PM, Mike Spreitzer wrote: > Is there a way to create multiple external networks from Neutron's > point of view, where both of those networks are accessed through the > same host NIC? Obviously those networks would be using different > subnets. I need this sort of thing because the two subnets are > treated differently by the stuff outside of OpenStack, so I need a way > that a tenant can get a floating IP of the sort he wants. Since > Neutron equates floating IP allocation pools with external networks, I > need two external networks.
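[Editorial note: George's fake-external-bridge trick above could be sketched with ovs-vsctl roughly as follows. The bridge, port, and uplink names (eth2, patch-*) are illustrative assumptions, not from the thread, and the commands are an untested sketch requiring root and a running Open vSwitch:]

```shell
# Shared bridge that holds the single physical NIC (assumed to be eth2).
ovs-vsctl add-br br-join
ovs-vsctl add-port br-join eth2

# Two "fake" external bridges for Neutron to treat as separate networks.
ovs-vsctl add-br br-ex1
ovs-vsctl add-br br-ex2

# Patch-port pairs wiring each fake bridge into br-join.
ovs-vsctl add-port br-join patch-join-ex1 \
    -- set interface patch-join-ex1 type=patch options:peer=patch-ex1-join
ovs-vsctl add-port br-ex1 patch-ex1-join \
    -- set interface patch-ex1-join type=patch options:peer=patch-join-ex1

ovs-vsctl add-port br-join patch-join-ex2 \
    -- set interface patch-join-ex2 type=patch options:peer=patch-ex2-join
ovs-vsctl add-port br-ex2 patch-ex2-join \
    -- set interface patch-ex2-join type=patch options:peer=patch-join-ex2
```

Neutron would then be pointed at br-ex1 and br-ex2 through the L2 agent's bridge_mappings option, with one external network declared per bridge.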
> > I found, for example, http://www.marcoberube.com/archives/248 --- which > describes how to have multiple external > networks but uses a distinct > host network interface for each one. > > Thanks, > Mike > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From uwe.sauter.de at gmail.com Sat Apr 25 20:17:35 2015 From: uwe.sauter.de at gmail.com (Uwe Sauter) Date: Sat, 25 Apr 2015 22:17:35 +0200 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: <553BEDDA.8020203@gmail.com> References: <553BEDDA.8020203@gmail.com> Message-ID: <553BF65F.9080601@gmail.com> Or instead of using Linux bridges you could use a manually created OpenVSwitch bridge. This allows you to add "internal" ports that can be used by Neutron like any other interface. - Create an OVS bridge - Add your external interface to the OVS bridge * If your external connection supports/needs VLANs, configure the external interface as a trunk - Add any number of internal interfaces to the OVS bridge * Tag each interface with its VLAN ID, if needed - Configure Neutron to use one internal interface for each subnet you'd like to use (no VLAN configuration required, as this happens outside of Neutron) Regards, Uwe Am 25.04.2015 um 21:41 schrieb George Shuklin: > Can you put them to different vlans? After that it would be very easy task. > > If not, AFAIK, neutron does not allow this. > > Or you can trick it thinking it is (are) separate networks. > > Create brige (br-join), plug eth to it. > Create to fake external bridges (br-ex1, br-ex2).
Join them together to br-join by patch links > (http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with-patch-ports/) > > Instruct neutron like there is two external networks: one on br-ex1, second on br-ex2. > > But be alert that this not very stable configuration, you need to maintain it by yourself. > > On 04/25/2015 10:13 PM, Mike Spreitzer wrote: >> Is there a way to create multiple external networks from Neutron's point of view, where both of those networks are >> accessed through the same host NIC? Obviously those networks would be using different subnets. I need this sort of >> thing because the two subnets are treated differently by the stuff outside of OpenStack, so I need a way that a tenant >> can get a floating IP of the sort he wants. Since Neutron equates floating IP allocation pools with external >> networks, I need two external networks. >> >> I found, for example, http://www.marcoberube.com/archives/248--- which describes how to have multiple external >> networks but uses a distinct host network interface for each one. >> >> Thanks, >> Mike >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From mspreitz at us.ibm.com Sat Apr 25 20:28:13 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Sat, 25 Apr 2015 16:28:13 -0400 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: <553BF65F.9080601@gmail.com> References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> Message-ID: > From: Uwe Sauter > > Or instead of using Linux bridges you could use a manually created > OpenVSwitch bridge. 
This allows you to add "internal" > ports that could be used by Neutron like any other interface. > > - Create OVS bridge > - Add your external interface to OVS bridge > * If your external connection supports/needs VLANs, configure > external interface as trunk > - Add any number of internal interfaces to OVS bridge > * Tag each interface with its VLAN ID, if needed > - Configure Neutron to use one internal interface for each subnet > you'd like to use (no VLAN configuration required as > this happenes outside of Neutron) > > Regards, > > Uwe > > Am 25.04.2015 um 21:41 schrieb George Shuklin: > > Can you put them to different vlans? After that it would be very easy task. > > > > If not, AFAIK, neutron does not allow this. > > > > Or you can trick it thinking it is (are) separate networks. > > > > Create brige (br-join), plug eth to it. > > Create to fake external bridges (br-ex1, br-ex2). Join them > together to br-join by patch links > > (http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with- > patch-ports/) > > > > Instruct neutron like there is two external networks: one on br- > ex1, second on br-ex2. > > > > But be alert that this not very stable configuration, you need to > maintain it by yourself. > > > > On 04/25/2015 10:13 PM, Mike Spreitzer wrote: > >> Is there a way to create multiple external networks from > Neutron's point of view, where both of those networks are > >> accessed through the same host NIC? Obviously those networks > would be using different subnets. I need this sort of > >> thing because the two subnets are treated differently by the > stuff outside of OpenStack, so I need a way that a tenant > >> can get a floating IP of the sort he wants. Since Neutron > equates floating IP allocation pools with external > >> networks, I need two external networks. 
> >> > >> I found, for example, http://www.marcoberube.com/archives/248--- > which describes how to have multiple external > >> networks but uses a distinct host network interface for each one. > >> > >> Thanks, > >> Mike Thanks Uwe, I might try that, it sounds like the simplest thing that will work. I think I can not use VLAN tagging in my environment. I am using ML2 with OVS, and it is working now with a single external network. Should I expect to find a bridge_mappings entry in my plugin.ini? I do not find one now. This setup was mainly created by other people, so I am not sure what to expect. When using ML2 with OVS, how do I tell Neutron what my bridge mappings are? Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From uwe.sauter.de at gmail.com Sat Apr 25 20:42:06 2015 From: uwe.sauter.de at gmail.com (Uwe Sauter) Date: Sat, 25 Apr 2015 22:42:06 +0200 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> Message-ID: <553BFC1E.8040701@gmail.com> Am 25.04.2015 um 22:28 schrieb Mike Spreitzer: >> From: Uwe Sauter >> >> Or instead of using Linux bridges you could use a manually created >> OpenVSwitch bridge. This allows you to add "internal" >> ports that could be used by Neutron like any other interface. >> >> - Create OVS bridge >> - Add your external interface to OVS bridge >> * If your external connection supports/needs VLANs, configure >> external interface as trunk >> - Add any number of internal interfaces to OVS bridge >> * Tag each interface with its VLAN ID, if needed >> - Configure Neutron to use one internal interface for each subnet >> you'd like to use (no VLAN configuration required as >> this happenes outside of Neutron) >> >> Regards, >> >> Uwe >> >> Am 25.04.2015 um 21:41 schrieb George Shuklin: >> > Can you put them to different vlans? After that it would be very easy task. 
>> > >> > If not, AFAIK, neutron does not allow this. >> > >> > Or you can trick it thinking it is (are) separate networks. >> > >> > Create brige (br-join), plug eth to it. >> > Create to fake external bridges (br-ex1, br-ex2). Join them >> together to br-join by patch links >> > (http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with- >> patch-ports/) >> > >> > Instruct neutron like there is two external networks: one on br- >> ex1, second on br-ex2. >> > >> > But be alert that this not very stable configuration, you need to >> maintain it by yourself. >> > >> > On 04/25/2015 10:13 PM, Mike Spreitzer wrote: >> >> Is there a way to create multiple external networks from >> Neutron's point of view, where both of those networks are >> >> accessed through the same host NIC? Obviously those networks >> would be using different subnets. I need this sort of >> >> thing because the two subnets are treated differently by the >> stuff outside of OpenStack, so I need a way that a tenant >> >> can get a floating IP of the sort he wants. Since Neutron >> equates floating IP allocation pools with external >> >> networks, I need two external networks. >> >> >> >> I found, for example, http://www.marcoberube.com/archives/248--- >> which describes how to have multiple external >> >> networks but uses a distinct host network interface for each one. >> >> >> >> Thanks, >> >> Mike > > Thanks Uwe, I might try that, it sounds like the simplest thing that will work. I think I can not use VLAN tagging in my > environment. I am using ML2 with OVS, and it is working now with a single external network. Should I expect to find a > bridge_mappings entry in my plugin.ini? I do not find one now. This setup was mainly created by other people, so I am not sure > what to expect. When using ML2 with OVS, how do I tell Neutron what my bridge mappings are? > > Thanks, > Mike > Mike, VLAN is optional in the setup I described. 
I just was pointing out where such a configuration could take place. As far as my experience with OVS and Neutron goes, Neutron will just ignore already existing configurations. That's also the reason why install manuals tell you to create br-int and br-ex. Regarding the exact configuration of ML2 and plugin.ini I'm not quite sure if I understand your question correctly. Are you asking how to tell Neutron which interface should be used for the different IP subnets? Perhaps you could post your plugin.ini with sensitive information replaced with something generic. Regards, Uwe From mspreitz at us.ibm.com Sat Apr 25 20:54:38 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Sat, 25 Apr 2015 16:54:38 -0400 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: <553BFC1E.8040701@gmail.com> References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553BFC1E.8040701@gmail.com> Message-ID: Uwe Sauter wrote on 04/25/2015 04:42:06 PM: > Am 25.04.2015 um 22:28 schrieb Mike Spreitzer: > >> From: Uwe Sauter > >> > >> Or instead of using Linux bridges you could use a manually created > >> OpenVSwitch bridge. This allows you to add "internal" > >> ports that could be used by Neutron like any other interface. > >> > >> - Create OVS bridge > >> - Add your external interface to OVS bridge > >> * If your external connection supports/needs VLANs, configure > >> external interface as trunk > >> - Add any number of internal interfaces to OVS bridge > >> * Tag each interface with its VLAN ID, if needed > >> - Configure Neutron to use one internal interface for each subnet > >> you'd like to use (no VLAN configuration required as > >> this happenes outside of Neutron) > >> > >> Regards, > >> > >> Uwe > >> > >> Am 25.04.2015 um 21:41 schrieb George Shuklin: > >> > Can you put them to different vlans? After that it would be > very easy task. > >> > > >> > If not, AFAIK, neutron does not allow this. 
> >> > > >> > Or you can trick it thinking it is (are) separate networks. > >> > > >> > Create brige (br-join), plug eth to it. > >> > Create to fake external bridges (br-ex1, br-ex2). Join them > >> together to br-join by patch links > >> > (http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with- > >> patch-ports/) > >> > > >> > Instruct neutron like there is two external networks: one on br- > >> ex1, second on br-ex2. > >> > > >> > But be alert that this not very stable configuration, you need to > >> maintain it by yourself. > >> > > >> > On 04/25/2015 10:13 PM, Mike Spreitzer wrote: > >> >> Is there a way to create multiple external networks from > >> Neutron's point of view, where both of those networks are > >> >> accessed through the same host NIC? Obviously those networks > >> would be using different subnets. I need this sort of > >> >> thing because the two subnets are treated differently by the > >> stuff outside of OpenStack, so I need a way that a tenant > >> >> can get a floating IP of the sort he wants. Since Neutron > >> equates floating IP allocation pools with external > >> >> networks, I need two external networks. > >> >> > >> >> I found, for example, http://www.marcoberube.com/archives/248--- > >> which describes how to have multiple external > >> >> networks but uses a distinct host network interface for each one. > >> >> > >> >> Thanks, > >> >> Mike > > > > Thanks Uwe, I might try that, it sounds like the simplest thing > that will work. I think I can not use VLAN tagging in my > > environment. I am using ML2 with OVS, and it is working now with > a single external network. Should I expect to find a > > bridge_mappings entry in my plugin.ini? I do not find one now. > This setup was mainly created by other people, so I am not sure > > what to expect. When using ML2 with OVS, how do I tell Neutron > what my bridge mappings are? > > > > Thanks, > > Mike > > > > Mike, > > VLAN is optional in the setup I described. 
I just was pointing out > where such a configuration could take place. > > As far as my experience with OVS and Neutron goes, Neutron will just > ignore already existing configurations. That's also the > reason why install manuals tell you to create br-int and br-ex. > > Regarding the exact configuration of ML2 and plugin.ini I'm not > quite sure if I understand your question correctly. Are you asking > how to tell Neutron which interface should be used for the different > IP subnets? > > Perhaps you could post your plugin.ini with sensitive information > replaced with something generic. > > Regards, > > Uwe > Yes, I realize the VLAN tagging is just an option in the approach you are outlining. George pointed out that VLAN tagging could carry even more of the load here. I mentioned that I can not use VLAN tagging as an explanation of why I have to pursue what you are describing. Following in my plugin.ini. As you can see, it is not what I would be editing. I have already added stuff to flat_networks, in anticipation of being able to use more than one. My problem is that there is no bridge_mappings, so I can not update it to add more external networks! # This file autogenerated by Chef # Do not edit, changes will be overwritten [ml2] # (ListOpt) List of network type driver entrypoints to be loaded from # the neutron.ml2.type_drivers namespace. # # type_drivers = local,flat,vlan,gre,vxlan # Example: type_drivers = flat,vlan,gre,vxlan type_drivers = gre,flat # (ListOpt) Ordered list of network_types to allocate as tenant # networks. The default value 'local' is useful for single-box testing # but provides no connectivity between hosts. # # tenant_network_types = local # Example: tenant_network_types = vlan,gre,vxlan tenant_network_types = gre # (ListOpt) Ordered list of networking mechanism driver entrypoints # to be loaded from the neutron.ml2.mechanism_drivers namespace. 
# mechanism_drivers = # Example: mechanism_drivers = openvswitch,mlnx # Example: mechanism_drivers = arista # Example: mechanism_drivers = cisco,logger # Example: mechanism_drivers = openvswitch,brocade # Example: mechanism_drivers = linuxbridge,brocade mechanism_drivers = openvswitch [ml2_type_flat] # (ListOpt) List of physical_network names with which flat networks # can be created. Use * to allow flat networks with arbitrary # physical_network names. # # flat_networks = # Example:flat_networks = physnet1,physnet2 # Example:flat_networks = * flat_networks = physnet1,physnet4n,physnet4p [ml2_type_vlan] # (ListOpt) List of [::] tuples # specifying physical_network names usable for VLAN provider and # tenant networks, as well as ranges of VLAN tags on each # physical_network available for allocation as tenant networks. # # network_vlan_ranges = # Example: network_vlan_ranges = physnet1:1000:2999,physnet2 network_vlan_ranges = [ml2_type_gre] # (ListOpt) Comma-separated list of : tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation tunnel_id_ranges = 1:2000 [ml2_type_vxlan] # (ListOpt) Comma-separated list of : tuples enumerating # ranges of VXLAN VNI IDs that are available for tenant network allocation. vni_ranges = # (StrOpt) Multicast group for the VXLAN interface. When configured, will # enable sending all broadcast traffic to this multicast group. When left # unconfigured, will disable multicast VXLAN mode. # # vxlan_group = # Example: vxlan_group = 239.1.1.1 vxlan_group = [securitygroup] # Controls if neutron security group is enabled or not. # It should be false when you use nova security group. enable_security_group = True # Use ipset to speed-up the iptables security groups. Enabling ipset support # requires that ipset is installed on L2 agent node. enable_ipset = True Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From uwe.sauter.de at gmail.com Sat Apr 25 21:06:40 2015 From: uwe.sauter.de at gmail.com (Uwe Sauter) Date: Sat, 25 Apr 2015 23:06:40 +0200 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553BFC1E.8040701@gmail.com> Message-ID: <553C01E0.6090902@gmail.com> Am 25.04.2015 um 22:54 schrieb Mike Spreitzer: > Uwe Sauter wrote on 04/25/2015 04:42:06 PM: > >> Am 25.04.2015 um 22:28 schrieb Mike Spreitzer: >> >> From: Uwe Sauter >> >> >> >> Or instead of using Linux bridges you could use a manually created >> >> OpenVSwitch bridge. This allows you to add "internal" >> >> ports that could be used by Neutron like any other interface. >> >> >> >> - Create OVS bridge >> >> - Add your external interface to OVS bridge >> >> * If your external connection supports/needs VLANs, configure >> >> external interface as trunk >> >> - Add any number of internal interfaces to OVS bridge >> >> * Tag each interface with its VLAN ID, if needed >> >> - Configure Neutron to use one internal interface for each subnet >> >> you'd like to use (no VLAN configuration required as >> >> this happenes outside of Neutron) >> >> >> >> Regards, >> >> >> >> Uwe >> >> >> >> Am 25.04.2015 um 21:41 schrieb George Shuklin: >> >> > Can you put them to different vlans? After that it would be >> very easy task. >> >> > >> >> > If not, AFAIK, neutron does not allow this. >> >> > >> >> > Or you can trick it thinking it is (are) separate networks. >> >> > >> >> > Create brige (br-join), plug eth to it. >> >> > Create to fake external bridges (br-ex1, br-ex2). Join them >> >> together to br-join by patch links >> >> > (http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with- >> >> patch-ports/) >> >> > >> >> > Instruct neutron like there is two external networks: one on br- >> >> ex1, second on br-ex2. 
>> >> > >> >> > But be alert that this not very stable configuration, you need to >> >> maintain it by yourself. >> >> > >> >> > On 04/25/2015 10:13 PM, Mike Spreitzer wrote: >> >> >> Is there a way to create multiple external networks from >> >> Neutron's point of view, where both of those networks are >> >> >> accessed through the same host NIC? Obviously those networks >> >> would be using different subnets. I need this sort of >> >> >> thing because the two subnets are treated differently by the >> >> stuff outside of OpenStack, so I need a way that a tenant >> >> >> can get a floating IP of the sort he wants. Since Neutron >> >> equates floating IP allocation pools with external >> >> >> networks, I need two external networks. >> >> >> >> >> >> I found, for example, http://www.marcoberube.com/archives/248--- >> >> which describes how to have multiple external >> >> >> networks but uses a distinct host network interface for each one. >> >> >> >> >> >> Thanks, >> >> >> Mike >> > >> > Thanks Uwe, I might try that, it sounds like the simplest thing >> that will work. I think I can not use VLAN tagging in my >> > environment. I am using ML2 with OVS, and it is working now with >> a single external network. Should I expect to find a >> > bridge_mappings entry in my plugin.ini? I do not find one now. >> This setup was mainly created by other people, so I am not sure >> > what to expect. When using ML2 with OVS, how do I tell Neutron >> what my bridge mappings are? >> > >> > Thanks, >> > Mike >> > >> >> Mike, >> >> VLAN is optional in the setup I described. I just was pointing out >> where such a configuration could take place. >> >> As far as my experience with OVS and Neutron goes, Neutron will just >> ignore already existing configurations. That's also the >> reason why install manuals tell you to create br-int and br-ex. >> >> Regarding the exact configuration of ML2 and plugin.ini I'm not >> quite sure if I understand your question correctly. 
Are you asking >> how to tell Neutron which interface should be used for the different >> IP subnets? >> >> Perhaps you could post your plugin.ini with sensitive information >> replaced with something generic. >> >> Regards, >> >> Uwe >> > > Yes, I realize the VLAN tagging is just an option in the approach you are outlining. George pointed out that VLAN tagging could > carry even more of the load here. I mentioned that I can not use VLAN tagging as an explanation of why I have to pursue what you > are describing. > > Following in my plugin.ini. As you can see, it is not what I would be editing. I have already added stuff to flat_networks, in > anticipation of being able to use more than one. My problem is that there is no bridge_mappings, so I can not update it to add > more external networks! > > > # This file autogenerated by Chef > # Do not edit, changes will be overwritten > > [ml2] > # (ListOpt) List of network type driver entrypoints to be loaded from > # the neutron.ml2.type_drivers namespace. > # > # type_drivers = local,flat,vlan,gre,vxlan > # Example: type_drivers = flat,vlan,gre,vxlan > type_drivers = gre,flat > > # (ListOpt) Ordered list of network_types to allocate as tenant > # networks. The default value 'local' is useful for single-box testing > # but provides no connectivity between hosts. > # > # tenant_network_types = local > # Example: tenant_network_types = vlan,gre,vxlan > tenant_network_types = gre > > # (ListOpt) Ordered list of networking mechanism driver entrypoints > # to be loaded from the neutron.ml2.mechanism_drivers namespace. > # mechanism_drivers = > # Example: mechanism_drivers = openvswitch,mlnx > # Example: mechanism_drivers = arista > # Example: mechanism_drivers = cisco,logger > # Example: mechanism_drivers = openvswitch,brocade > # Example: mechanism_drivers = linuxbridge,brocade > mechanism_drivers = openvswitch > > [ml2_type_flat] > # (ListOpt) List of physical_network names with which flat networks > # can be created. 
Use * to allow flat networks with arbitrary > # physical_network names. > # > # flat_networks = > # Example:flat_networks = physnet1,physnet2 > # Example:flat_networks = * > flat_networks = physnet1,physnet4n,physnet4p > > [ml2_type_vlan] > # (ListOpt) List of [::] tuples > # specifying physical_network names usable for VLAN provider and > # tenant networks, as well as ranges of VLAN tags on each > # physical_network available for allocation as tenant networks. > # > # network_vlan_ranges = > # Example: network_vlan_ranges = physnet1:1000:2999,physnet2 > network_vlan_ranges = > > [ml2_type_gre] > # (ListOpt) Comma-separated list of : tuples enumerating ranges of GRE tunnel IDs that are available for tenant > network allocation > tunnel_id_ranges = 1:2000 > > [ml2_type_vxlan] > # (ListOpt) Comma-separated list of : tuples enumerating > # ranges of VXLAN VNI IDs that are available for tenant network allocation. > vni_ranges = > > # (StrOpt) Multicast group for the VXLAN interface. When configured, will > # enable sending all broadcast traffic to this multicast group. When left > # unconfigured, will disable multicast VXLAN mode. > # > # vxlan_group = > # Example: vxlan_group = 239.1.1.1 > vxlan_group = > > [securitygroup] > # Controls if neutron security group is enabled or not. > # It should be false when you use nova security group. > enable_security_group = True > > # Use ipset to speed-up the iptables security groups. Enabling ipset support > # requires that ipset is installed on L2 agent node. > enable_ipset = True > > > Thanks, > Mike > Mike, what version of OpenStack are you using? I only have experience with Juno and the install manual shows additional sections compared to your file: http://docs.openstack.org/juno/install-guide/install/yum/content/neutron-network-node.html After flying over the manual again, I'd say that you don't need the extra stuff we were talking about but only need to add more external networks. 
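[Editorial note: to make Uwe's suggestion concrete, the mapping he refers to lives in the L2 agent's configuration rather than in the server-side plugin.ini shown above. A sketch, using the physnet names already present in Mike's flat_networks entry and assuming the fake external bridges br-ex1/br-ex2 from earlier in the thread — the file path and bridge names are illustrative:]

```ini
# L2 agent configuration (e.g. the ovs section of the agent's ini file,
# not the server's plugin.ini)
[ovs]
bridge_mappings = physnet4n:br-ex1,physnet4p:br-ex2
```

With that in place, each external network would be created against one physnet, e.g. `neutron net-create ext-net-1 --router:external True --provider:network_type flat --provider:physical_network physnet4n` (Juno-era CLI syntax; flag spellings should be checked against the deployed client).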
Get your configuration with bridge_mappings straight, then add the networks. But that's just a wild guess? Regards, Uwe From blak111 at gmail.com Sun Apr 26 00:38:25 2015 From: blak111 at gmail.com (Kevin Benton) Date: Sat, 25 Apr 2015 17:38:25 -0700 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553BFC1E.8040701@gmail.com> Message-ID: Bridge mappings is an agent configuration value; it's not in the neutron server config. Run ps -ef and look for the neutron openvswitch agent process to see which configuration files it's referencing. The bridge mappings will be in one of those. On Apr 25, 2015 1:55 PM, "Mike Spreitzer" wrote: > Uwe Sauter wrote on 04/25/2015 04:42:06 PM: > > > Am 25.04.2015 um 22:28 schrieb Mike Spreitzer: > > >> From: Uwe Sauter > > >> > > >> Or instead of using Linux bridges you could use a manually created > > >> OpenVSwitch bridge. This allows you to add "internal" > > >> ports that could be used by Neutron like any other interface. > > >> > > >> - Create OVS bridge > > >> - Add your external interface to OVS bridge > > >> * If your external connection supports/needs VLANs, configure > > >> external interface as trunk > > >> - Add any number of internal interfaces to OVS bridge > > >> * Tag each interface with its VLAN ID, if needed > > >> - Configure Neutron to use one internal interface for each subnet > > >> you'd like to use (no VLAN configuration required as > > >> this happenes outside of Neutron) > > >> > > >> Regards, > > >> > > >> Uwe > > >> > > >> Am 25.04.2015 um 21:41 schrieb George Shuklin: > > >> > Can you put them to different vlans? After that it would be > > very easy task. > > >> > > > >> > If not, AFAIK, neutron does not allow this. > > >> > > > >> > Or you can trick it thinking it is (are) separate networks. > > >> > > > >> > Create brige (br-join), plug eth to it.
> > >> > Create to fake external bridges (br-ex1, br-ex2). Join them > > >> together to br-join by patch links > > >> > (http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with- > > >> patch-ports/) > > >> > > > >> > Instruct neutron like there is two external networks: one on br- > > >> ex1, second on br-ex2. > > >> > > > >> > But be alert that this not very stable configuration, you need to > > >> maintain it by yourself. > > >> > > > >> > On 04/25/2015 10:13 PM, Mike Spreitzer wrote: > > >> >> Is there a way to create multiple external networks from > > >> Neutron's point of view, where both of those networks are > > >> >> accessed through the same host NIC? Obviously those networks > > >> would be using different subnets. I need this sort of > > >> >> thing because the two subnets are treated differently by the > > >> stuff outside of OpenStack, so I need a way that a tenant > > >> >> can get a floating IP of the sort he wants. Since Neutron > > >> equates floating IP allocation pools with external > > >> >> networks, I need two external networks. > > >> >> > > >> >> I found, for example, http://www.marcoberube.com/archives/248--- > > >> which describes how to have multiple external > > >> >> networks but uses a distinct host network interface for each one. > > >> >> > > >> >> Thanks, > > >> >> Mike > > > > > > Thanks Uwe, I might try that, it sounds like the simplest thing > > that will work. I think I can not use VLAN tagging in my > > > environment. I am using ML2 with OVS, and it is working now with > > a single external network. Should I expect to find a > > > bridge_mappings entry in my plugin.ini? I do not find one now. > > This setup was mainly created by other people, so I am not sure > > > what to expect. When using ML2 with OVS, how do I tell Neutron > > what my bridge mappings are? > > > > > > Thanks, > > > Mike > > > > > > > Mike, > > > > VLAN is optional in the setup I described. 
I just was pointing out > > where such a configuration could take place. > > > > As far as my experience with OVS and Neutron goes, Neutron will just > > ignore already existing configurations. That's also the > > reason why install manuals tell you to create br-int and br-ex. > > > > Regarding the exact configuration of ML2 and plugin.ini I'm not > > quite sure if I understand your question correctly. Are you asking > > how to tell Neutron which interface should be used for the different > > IP subnets? > > > > Perhaps you could post your plugin.ini with sensitive information > > replaced with something generic. > > > > Regards, > > > > Uwe > > > > Yes, I realize the VLAN tagging is just an option in the approach you are > outlining. George pointed out that VLAN tagging could carry even more of > the load here. I mentioned that I can not use VLAN tagging as an > explanation of why I have to pursue what you are describing. > > Following in my plugin.ini. As you can see, it is not what I would be > editing. I have already added stuff to flat_networks, in anticipation of > being able to use more than one. My problem is that there is no > bridge_mappings, so I can not update it to add more external networks! > > > # This file autogenerated by Chef > # Do not edit, changes will be overwritten > > [ml2] > # (ListOpt) List of network type driver entrypoints to be loaded from > # the neutron.ml2.type_drivers namespace. > # > # type_drivers = local,flat,vlan,gre,vxlan > # Example: type_drivers = flat,vlan,gre,vxlan > type_drivers = gre,flat > > # (ListOpt) Ordered list of network_types to allocate as tenant > # networks. The default value 'local' is useful for single-box testing > # but provides no connectivity between hosts. 
> # > # tenant_network_types = local > # Example: tenant_network_types = vlan,gre,vxlan > tenant_network_types = gre > > # (ListOpt) Ordered list of networking mechanism driver entrypoints > # to be loaded from the neutron.ml2.mechanism_drivers namespace. > # mechanism_drivers = > # Example: mechanism_drivers = openvswitch,mlnx > # Example: mechanism_drivers = arista > # Example: mechanism_drivers = cisco,logger > # Example: mechanism_drivers = openvswitch,brocade > # Example: mechanism_drivers = linuxbridge,brocade > mechanism_drivers = openvswitch > > [ml2_type_flat] > # (ListOpt) List of physical_network names with which flat networks > # can be created. Use * to allow flat networks with arbitrary > # physical_network names. > # > # flat_networks = > # Example:flat_networks = physnet1,physnet2 > # Example:flat_networks = * > flat_networks = physnet1,physnet4n,physnet4p > > [ml2_type_vlan] > # (ListOpt) List of [::] tuples > # specifying physical_network names usable for VLAN provider and > # tenant networks, as well as ranges of VLAN tags on each > # physical_network available for allocation as tenant networks. > # > # network_vlan_ranges = > # Example: network_vlan_ranges = physnet1:1000:2999,physnet2 > network_vlan_ranges = > > [ml2_type_gre] > # (ListOpt) Comma-separated list of : tuples enumerating > ranges of GRE tunnel IDs that are available for tenant network allocation > tunnel_id_ranges = 1:2000 > > [ml2_type_vxlan] > # (ListOpt) Comma-separated list of : tuples enumerating > # ranges of VXLAN VNI IDs that are available for tenant network allocation. > vni_ranges = > > # (StrOpt) Multicast group for the VXLAN interface. When configured, will > # enable sending all broadcast traffic to this multicast group. When left > # unconfigured, will disable multicast VXLAN mode. > # > # vxlan_group = > # Example: vxlan_group = 239.1.1.1 > vxlan_group = > > [securitygroup] > # Controls if neutron security group is enabled or not. 
> # It should be false when you use nova security group. > enable_security_group = True > > # Use ipset to speed-up the iptables security groups. Enabling ipset > support > # requires that ipset is installed on L2 agent node. > enable_ipset = True > > > Thanks, > Mike > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mspreitz at us.ibm.com Sun Apr 26 01:30:30 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Sat, 25 Apr 2015 21:30:30 -0400 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553BFC1E.8040701@gmail.com> Message-ID: Kevin Benton wrote on 04/25/2015 08:38:25 PM: > Bridge mappings is an agent configuration value, it's not in the > neutron server config. > Run ps -ef and look for the neutron openvswitch agent process to see > which configuration files it's referencing. The bridge mappings will > be in one of those. Thanks, that led me to it. Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From mspreitz at us.ibm.com Mon Apr 27 14:36:38 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Mon, 27 Apr 2015 10:36:38 -0400 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: <553BF65F.9080601@gmail.com> References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> Message-ID: Uwe Sauter wrote on 04/25/2015 04:17:35 PM: > Or instead of using Linux bridges you could use a manually created > OpenVSwitch bridge. This allows you to add "internal" > ports that could be used by Neutron like any other interface. 
> > - Create OVS bridge > - Add your external interface to OVS bridge > * If your external connection supports/needs VLANs, configure > external interface as trunk > - Add any number of internal interfaces to OVS bridge > * Tag each interface with its VLAN ID, if needed > - Configure Neutron to use one internal interface for each subnet > you'd like to use (no VLAN configuration required as > this happens outside of Neutron) > > Regards, > > Uwe > > Am 25.04.2015 um 21:41 schrieb George Shuklin: > > Can you put them on different VLANs? After that it would be a very easy task. > > > > If not, AFAIK, neutron does not allow this. > > > > Or you can trick it into thinking these are separate networks. > > > > Create bridge (br-join), plug eth into it. > > Create two fake external bridges (br-ex1, br-ex2). Join them > together to br-join by patch links > > (http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with- > patch-ports/) > > > > Instruct neutron as if there are two external networks: one on br- > ex1, the second on br-ex2. > > > > But be aware that this is not a very stable configuration; you need to > maintain it yourself. > > > > On 04/25/2015 10:13 PM, Mike Spreitzer wrote: > >> Is there a way to create multiple external networks from > Neutron's point of view, where both of those networks are > >> accessed through the same host NIC? Obviously those networks > would be using different subnets. I need this sort of > >> thing because the two subnets are treated differently by the > stuff outside of OpenStack, so I need a way that a tenant > >> can get a floating IP of the sort he wants. Since Neutron > equates floating IP allocation pools with external > >> networks, I need two external networks. > >> > >> I found, for example, http://www.marcoberube.com/archives/248--- > which describes how to have multiple external > >> networks but uses a distinct host network interface for each one.
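The bridge_mappings setting discussed in this thread lives in the L2 (Open vSwitch) agent's configuration rather than in the Neutron server config. A representative fragment; the file name varies by distribution, and the physnet and bridge names below are examples only:

```ini
# In the file the neutron-openvswitch-agent process is started with
# (often ml2_conf.ini or ovs_neutron_plugin.ini); names are examples.
[ovs]
bridge_mappings = physnet1:br-ex1,physnet2:br-ex2
```

Each physical network name on the left must also be listed on the server side, under [ml2_type_flat] flat_networks for flat networks or [ml2_type_vlan] network_vlan_ranges for VLAN networks.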
Now that I have found my bridge_mappings configuration statement, I can return to thinking about what you said. It sounds very similar to what George said --- it is just that you suggest an OVS switch in place of George's br-join (which I had assumed was also an OVS switch, since it is named like the others). Do I have this right? Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From uwe.sauter.de at gmail.com Mon Apr 27 14:54:15 2015 From: uwe.sauter.de at gmail.com (Uwe Sauter) Date: Mon, 27 Apr 2015 16:54:15 +0200 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> Message-ID: <553E4D97.3020104@gmail.com> Am 27.04.2015 um 16:36 schrieb Mike Spreitzer: > Uwe Sauter wrote on 04/25/2015 04:17:35 PM: > >> Or instead of using Linux bridges you could use a manually created >> OpenVSwitch bridge. This allows you to add "internal" >> ports that could be used by Neutron like any other interface. >> >> - Create OVS bridge >> - Add your external interface to OVS bridge >> * If your external connection supports/needs VLANs, configure >> external interface as trunk >> - Add any number of internal interfaces to OVS bridge >> * Tag each interface with its VLAN ID, if needed >> - Configure Neutron to use one internal interface for each subnet >> you'd like to use (no VLAN configuration required as >> this happenes outside of Neutron) >> >> Regards, >> >> Uwe >> >> Am 25.04.2015 um 21:41 schrieb George Shuklin: >> > Can you put them to different vlans? After that it would be very easy task. >> > >> > If not, AFAIK, neutron does not allow this. >> > >> > Or you can trick it thinking it is (are) separate networks. >> > >> > Create brige (br-join), plug eth to it. >> > Create to fake external bridges (br-ex1, br-ex2). 
Join them >> together to br-join by patch links >> > (http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with- >> patch-ports/) >> > >> > Instruct neutron like there is two external networks: one on br- >> ex1, second on br-ex2. >> > >> > But be alert that this not very stable configuration, you need to >> maintain it by yourself. >> > >> > On 04/25/2015 10:13 PM, Mike Spreitzer wrote: >> >> Is there a way to create multiple external networks from >> Neutron's point of view, where both of those networks are >> >> accessed through the same host NIC? Obviously those networks >> would be using different subnets. I need this sort of >> >> thing because the two subnets are treated differently by the >> stuff outside of OpenStack, so I need a way that a tenant >> >> can get a floating IP of the sort he wants. Since Neutron >> equates floating IP allocation pools with external >> >> networks, I need two external networks. >> >> >> >> I found, for example, http://www.marcoberube.com/archives/248--- >> which describes how to have multiple external >> >> networks but uses a distinct host network interface for each one. > > Now that I have found my bridge_mappings configuration statement, I can return to thinking about what you said. It sounds very > similar to what George said --- it is just that you suggest an OVS switch in place of George's br-join (which I had assumed was > also an OVS switch, since it is named like the others). Do I have this right? > > Thanks, > Mike > Mike, if I understood Georges answer correctly he suggested one bridge (br-join, either OVS or linux bridge) to connect other bridges via patch links, one for each external network you'd like to create. 
These second level bridges are then used for the Neutron configuration: br-ext1 -> Neutron / patch-link / ethX --br-join \ patch-link \ br-ext2 -> Neutron I suggested to use an OVS bridge because there it'd be possible to stay away from the performance-wise worse patch-links and Linux bridges and use "internal" interfaces to connect to Neutron directly, which on second thought won't work if Neutron expects a bridge in that place. What I suggested later on is that you probably don't need any second level bridge at all. Just create a second/third external network with appropriate CIDR. As long as those networks are externally connected to your interface (and thus the bridge) you should be good to go. Regards, Uwe From mspreitz at us.ibm.com Mon Apr 27 14:59:42 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Mon, 27 Apr 2015 10:59:42 -0400 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: <553E4D97.3020104@gmail.com> References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553E4D97.3020104@gmail.com> Message-ID: Uwe Sauter wrote on 04/27/2015 10:54:15 AM: > Am 27.04.2015 um 16:36 schrieb Mike Spreitzer: > > Uwe Sauter wrote on 04/25/2015 04:17:35 PM: > > > >> Or instead of using Linux bridges you could use a manually created > >> OpenVSwitch bridge. This allows you to add "internal" > >> ports that could be used by Neutron like any other interface.
> >> > >> - Create OVS bridge > >> - Add your external interface to OVS bridge > >> * If your external connection supports/needs VLANs, configure > >> external interface as trunk > >> - Add any number of internal interfaces to OVS bridge > >> * Tag each interface with its VLAN ID, if needed > >> - Configure Neutron to use one internal interface for each subnet > >> you'd like to use (no VLAN configuration required as > >> this happenes outside of Neutron) > >> > >> Regards, > >> > >> Uwe > >> > >> Am 25.04.2015 um 21:41 schrieb George Shuklin: > >> > Can you put them to different vlans? After that it would be > very easy task. > >> > > >> > If not, AFAIK, neutron does not allow this. > >> > > >> > Or you can trick it thinking it is (are) separate networks. > >> > > >> > Create brige (br-join), plug eth to it. > >> > Create to fake external bridges (br-ex1, br-ex2). Join them > >> together to br-join by patch links > >> > (http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with- > >> patch-ports/) > >> > > >> > Instruct neutron like there is two external networks: one on br- > >> ex1, second on br-ex2. > >> > > >> > But be alert that this not very stable configuration, you need to > >> maintain it by yourself. > >> > > >> > On 04/25/2015 10:13 PM, Mike Spreitzer wrote: > >> >> Is there a way to create multiple external networks from > >> Neutron's point of view, where both of those networks are > >> >> accessed through the same host NIC? Obviously those networks > >> would be using different subnets. I need this sort of > >> >> thing because the two subnets are treated differently by the > >> stuff outside of OpenStack, so I need a way that a tenant > >> >> can get a floating IP of the sort he wants. Since Neutron > >> equates floating IP allocation pools with external > >> >> networks, I need two external networks. 
> >> >> > >> >> I found, for example, http://www.marcoberube.com/archives/248--- > >> which describes how to have multiple external > >> >> networks but uses a distinct host network interface for each one. > > > > Now that I have found my bridge_mappings configuration statement, > I can return to thinking about what you said. It sounds very > > similar to what George said --- it is just that you suggest an OVS > switch in place of George's br-join (which I had assumed was > > also an OVS switch, since it is named like the others). Do I have > this right? > > > > Thanks, > > Mike > > > > Mike, > > > if I understood Georges answer correctly he suggested one bridge > (br-join, either OVS or linux bridge) to connect other bridges > via patch links, one for each external network you'd like to create. > These second level bridges are then used for the Neutron > configuration: > > br-ext1 -> Neutron > / > patch-link > / > ethX ?br-join > \ > patch-link > \ > br-ext2 -> Neutron > > > > I suggested to use an OVS bridge because there it'd be possible to > stay away from the performance-wise worse patch-links and Linux > bridges and use "internal" interfaces to connect to Neutron directly > ? which on second thought won't work if Neutron expects a > bridge in that place. > > What I suggested later on is that you probably don't need any second > level bridge at all. Just create a second/third external > network with appropriate CIDR. As long as those networks are > externally connected to your interface (and thus the bridge) you > should be good to go. To be precise, are you suggesting that I have just one br-ex, connected to the host NIC as usual, and in my bridge_mappings configuration statement, map all the external network names to br-ex? Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gfa at zumbi.com.ar Mon Apr 27 15:23:13 2015 From: gfa at zumbi.com.ar (gustavo panizzo (gfa)) Date: Mon, 27 Apr 2015 23:23:13 +0800 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553E4D97.3020104@gmail.com> Message-ID: <553E5461.7060509@zumbi.com.ar> On 2015-04-27 22:59, Mike Spreitzer wrote: > Uwe Sauter wrote on 04/27/2015 10:54:15 AM: >> >> What I suggested later on is that you probably don't need any second >> level bridge at all. Just create a second/third external >> network with appropriate CIDR. As long as those networks are >> externally connected to your interface (and thus the bridge) you >> should be good to go. > > To be precise, are you suggesting that I have just one br-ex, connected > to the host NIC as usual, and in my bridge_mappings configuration > statement, map all the external network names to br-ex? You can only have one flat network per bridge. I don't know what your use case is, but once I had the need to map 2 different public IP addresses to each VM vNIC; I was going to do the double-bridge thing but resolved it using the allowed-address-pairs extension. It may work for you. -- 1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333 From mspreitz at us.ibm.com Mon Apr 27 15:40:38 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Mon, 27 Apr 2015 11:40:38 -0400 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: <553E5461.7060509@zumbi.com.ar> References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553E4D97.3020104@gmail.com> <553E5461.7060509@zumbi.com.ar> Message-ID: "gustavo panizzo (gfa)" wrote on 04/27/2015 11:23:13 AM: > On 2015-04-27 22:59, Mike Spreitzer wrote: > > Uwe Sauter wrote on 04/27/2015 10:54:15 AM: > >> > >> What I suggested later on is that you probably don't need any second > >> level bridge at all.
Just create a second/third external > >> network with appropriate CIDR. As long as those networks are > >> externally connected to your interface (and thus the bridge) you > >> should be good to go. > > > > To be precise, are you suggesting that I have just one br-ex, connected > > to the host NIC as usual, and in my bridge_mappings configuration > > statement, map all the external network names to br-ex? > > you can only have one flat network per bridge. > > i don't know what's your usercase but one i had the need to map 2 > different public ip address to each vm vnic, i was going to do the > double bridge thing but i resolved it using allowed pairs extension. it > may work for you My use case is that I have two behaviorally different external subnets --- they are treated differently by stuff outside of OpenStack, with consequences that are meaningful to tenants. Thus, I have two categories of floating IP addresses, depending on which external subnet holds the floating IP address. The difference is meaningful to tenants. So I need to enable a tenant to request a floating IP address of a specific category. Since Neutron equates floating IP address allocation pool with network, I need two external networks. Both of these external subnets are present on the same actual external LAN, thus both are reached through the same host NIC. It looks to me like the allowed mac/IP address pair feature will not solve this problem. Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From berrange at redhat.com Mon Apr 27 15:42:52 2015 From: berrange at redhat.com (Daniel P. Berrange) Date: Mon, 27 Apr 2015 16:42:52 +0100 Subject: [Openstack-operators] KVM Forum 2015 Call for Participation In-Reply-To: <20150318095131.GB3250@redhat.com> References: <20150318095131.GB3250@redhat.com> Message-ID: <20150427154252.GA28857@redhat.com> A friendly reminder that KVM Forum 2015 accepts proposal submissions until this coming Friday. 
Thanks on behalf of the KVM Forum 2015 Program Committee. Daniel On Wed, Mar 18, 2015 at 09:51:31AM +0000, Daniel P. Berrange wrote: > ================================================================= > KVM Forum 2015: Call For Participation > August 19-21, 2015 - Sheraton Seattle - Seattle, WA > > (All submissions must be received before midnight May 1, 2015) > ================================================================= > > KVM is an industry leading open source hypervisor that provides an ideal > platform for datacenter virtualization, virtual desktop infrastructure, > and cloud computing. Once again, it's time to bring together the > community of developers and users that define the KVM ecosystem for > our annual technical conference. We will discuss the current state of > affairs and plan for the future of KVM, its surrounding infrastructure, > and management tools. Mark your calendar and join us in advancing KVM. > http://events.linuxfoundation.org/events/kvm-forum/ > > This year, the KVM Forum is moving back to North America. We will be > colocated with the Linux Foundation's LinuxCon North America, CloudOpen > North America, ContainerCon and Linux Plumbers Conference events. > Attendees of KVM Forum will also be able to attend a shared hackathon > event with Xen Project Developer Summit on August 18, 2015. > > We invite you to lead part of the discussion by submitting a speaking > proposal for KVM Forum 2015. 
> http://events.linuxfoundation.org/cfp > > > Suggested topics: > > KVM/Kernel > * Scaling and optimizations > * Nested virtualization > * Linux kernel performance improvements > * Resource management (CPU, I/O, memory) > * Hardening and security > * VFIO: SR-IOV, GPU, platform device assignment > * Architecture ports > > QEMU > > * Management interfaces: QOM and QMP > * New devices, new boards, new architectures > * Scaling and optimizations > * Desktop virtualization and SPICE > * Virtual GPU > * virtio and vhost, including non-Linux or non-virtualized uses > * Hardening and security > * New storage features > * Live migration and fault tolerance > * High availability and continuous backup > * Real-time guest support > * Emulation and TCG > * Firmware: ACPI, UEFI, coreboot, u-Boot, etc. > * Testing > > Management and infrastructure > > * Managing KVM: Libvirt, OpenStack, oVirt, etc. > * Storage: glusterfs, Ceph, etc. > * Software defined networking: Open vSwitch, OpenDaylight, etc. > * Network Function Virtualization > * Security > * Provisioning > * Performance tuning > > > =============== > SUBMITTING YOUR PROPOSAL > =============== > Abstracts due: May 1, 2015 > > Please submit a short abstract (~150 words) describing your presentation > proposal. Slots vary in length up to 45 minutes. Also include in your > proposal > the proposal type -- one of: > - technical talk > - end-user talk > > Submit your proposal here: > http://events.linuxfoundation.org/cfp > Please only use the categories "presentation" and "panel discussion" > > You will receive a notification whether or not your presentation proposal > was accepted by May 29, 2015. > > Speakers will receive a complimentary pass for the event. In the instance > that your submission has multiple presenters, only the primary speaker for a > proposal will receive a complementary event pass. For panel > discussions, all > panelists will receive a complimentary event pass. 
> > TECHNICAL TALKS > > A good technical talk should not just report on what has happened over > the last year; it should present a concrete problem and how it impacts > the user and/or developer community. Whenever applicable, focus on > work that needs to be done, difficulties that haven't yet been solved, > and on decisions that other developers should be aware of. Summarizing > recent developments is okay but it should not be more than a small > portion of the overall talk. > > END-USER TALKS > > One of the big challenges as developers is to know what, where and how > people actually use our software. We will reserve a few slots for end > users talking about their deployment challenges and achievements. > > If you are using KVM in production you are encouraged submit a speaking > proposal. Simply mark it as an end-user talk. As an end user, this is a > unique opportunity to get your input to developers. > > HANDS-ON / BOF SESSIONS > > We will reserve some time for people to get together and discuss > strategic decisions as well as other topics that are best solved within > smaller groups. > > These sessions will be announced during the event. If you are interested > in organizing such a session, please add it to the list at > > http://www.linux-kvm.org/page/KVM_Forum_2015_BOF > > Let people you think might be interested know about it, and encourage > them to add their names to the wiki page as well. Please try to > add your ideas to the list before KVM Forum starts. > > > PANEL DISCUSSIONS > > If you are proposing a panel discussion, please make sure that you list > all of your potential panelists in your abstract. We will request full > biographies if a panel is accepted. > > > =============== > HOTEL / TRAVEL > =============== > KVM Forum 2015 will be taking place at the Sheraton Seattle Hotel. We > are pleased to offer attendees a discounted room rate of US$199/night > (plus applicable taxes) which includes wifi in your guest room. 
> > http://events.linuxfoundation.org/events/kvm-forum/attend/hotel-and-travel > includes further information on the Sheraton Seattle and the discounted > room rate, as well as on transportation and parking options for the hotel. > > =============== > IMPORTANT DATES > =============== > Notification: May 29, 2015 > Schedule announced: June 3, 2015 > Event dates: August 19-21, 2015 > > > Thank you for your interest in KVM. We're looking forward to your > submissions and seeing you at the KVM Forum 2015 in August! > > -your KVM Forum 2015 Program Committee > > Please contact us with any questions or comments. -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From mspreitz at us.ibm.com Mon Apr 27 15:49:55 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Mon, 27 Apr 2015 11:49:55 -0400 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553E4D97.3020104@gmail.com> <553E5461.7060509@zumbi.com.ar> Message-ID: > "gustavo panizzo (gfa)" wrote on 04/27/2015 11:23:13 AM: > > > On 2015-04-27 22:59, Mike Spreitzer wrote: > > > Uwe Sauter wrote on 04/27/2015 10:54:15 AM: > > >> > > >> What I suggested later on is that you probably don't need any second > > >> level bridge at all. Just create a second/third external > > >> network with appropriate CIDR. As long as those networks are > > >> externally connected to your interface (and thus the bridge) you > > >> should be good to go. > > > > > > To be precise, are you suggesting that I have just one br-ex, connected > > > to the host NIC as usual, and in my bridge_mappings configuration > > > statement, map all the external network names to br-ex? > > > > you can only have one flat network per bridge. 
> > > > i don't know what's your usercase but one i had the need to map 2 > > different public ip address to each vm vnic, i was going to do the > > double bridge thing but i resolved it using allowed pairs extension. it > > may work for you > > My use case is that I have two behaviorally different external > subnets --- they are treated differently by stuff outside of > OpenStack, with consequences that are meaningful to tenants. Thus, > I have two categories of floating IP addresses, depending on which > external subnet holds the floating IP address. The difference is > meaningful to tenants. So I need to enable a tenant to request a > floating IP address of a specific category. Since Neutron equates > floating IP address allocation pool with network, I need two > external networks. > > Both of these external subnets are present on the same actual > external LAN, thus both are reached through the same host NIC. > > It looks to me like the allowed mac/IP address pair feature will not > solve this problem. Sorry, I simplified too much. Here is one other critical detail. I do not really have just two different external subnets. What I really have is two behaviorally different collections of subnets. I need to make a Neutron external network for each of the two collections of external subnets. Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From uwe.sauter.de at gmail.com Mon Apr 27 16:00:14 2015 From: uwe.sauter.de at gmail.com (Uwe Sauter) Date: Mon, 27 Apr 2015 18:00:14 +0200 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: <553E5461.7060509@zumbi.com.ar> References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553E4D97.3020104@gmail.com> <553E5461.7060509@zumbi.com.ar> Message-ID: <553E5D0E.1080703@gmail.com> Am 27.04.2015 um 17:23 schrieb gustavo panizzo (gfa): > you can only have one flat network per bridge. I didn't know that. 
Well, then the only idea that comes to *my* mind is to have cascading bridges like George suggested. It won't matter whether you use Linux bridges or OVS. I've heard that OVS should perform better but cannot prove it. Regards, Uwe From mspreitz at us.ibm.com Mon Apr 27 16:49:52 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Mon, 27 Apr 2015 12:49:52 -0400 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: <553E4D97.3020104@gmail.com> References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553E4D97.3020104@gmail.com> Message-ID: Uwe Sauter wrote on 04/27/2015 10:54:15 AM: > Am 27.04.2015 um 16:36 schrieb Mike Spreitzer: > > Uwe Sauter wrote on 04/25/2015 04:17:35 PM: > > > >> Or instead of using Linux bridges you could use a manually created > >> OpenVSwitch bridge. This allows you to add "internal" > >> ports that could be used by Neutron like any other interface. > >> > >> - Create OVS bridge > >> - Add your external interface to OVS bridge > >> * If your external connection supports/needs VLANs, configure > >> external interface as trunk > >> - Add any number of internal interfaces to OVS bridge > >> * Tag each interface with its VLAN ID, if needed > >> - Configure Neutron to use one internal interface for each subnet > >> you'd like to use (no VLAN configuration required as > >> this happens outside of Neutron) > >> > >> Regards, > >> > >> Uwe > >> > >> Am 25.04.2015 um 21:41 schrieb George Shuklin: > >> > Can you put them on different VLANs? After that it would be > a very easy task. > >> > > >> > If not, AFAIK, neutron does not allow this. > >> > > >> > Or you can trick it into thinking these are separate networks. > >> > > >> > Create bridge (br-join), plug eth into it. > >> > Create two fake external bridges (br-ex1, br-ex2).
Join them > >> together to br-join by patch links > >> > (http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with- > >> patch-ports/) > >> > > >> > Instruct neutron like there is two external networks: one on br- > >> ex1, second on br-ex2. > >> > > >> > But be alert that this not very stable configuration, you need to > >> maintain it by yourself. > >> > > >> > On 04/25/2015 10:13 PM, Mike Spreitzer wrote: > >> >> Is there a way to create multiple external networks from > >> Neutron's point of view, where both of those networks are > >> >> accessed through the same host NIC? Obviously those networks > >> would be using different subnets. I need this sort of > >> >> thing because the two subnets are treated differently by the > >> stuff outside of OpenStack, so I need a way that a tenant > >> >> can get a floating IP of the sort he wants. Since Neutron > >> equates floating IP allocation pools with external > >> >> networks, I need two external networks. > >> >> > >> >> I found, for example, http://www.marcoberube.com/archives/248--- > >> which describes how to have multiple external > >> >> networks but uses a distinct host network interface for each one. > > > > Now that I have found my bridge_mappings configuration statement, > I can return to thinking about what you said. It sounds very > > similar to what George said --- it is just that you suggest an OVS > switch in place of George's br-join (which I had assumed was > > also an OVS switch, since it is named like the others). Do I have > this right? > > > > Thanks, > > Mike > > > > Mike, > > > if I understood Georges answer correctly he suggested one bridge > (br-join, either OVS or linux bridge) to connect other bridges > via patch links, one for each external network you'd like to create. 
> These second level bridges are then used for the Neutron > configuration: > > br-ext1 -> Neutron > / > patch-link > / > ethX ?br-join > \ > patch-link > \ > br-ext2 -> Neutron > > > > I suggested to use an OVS bridge because there it'd be possible to > stay away from the performance-wise worse patch-links and Linux > bridges and use "internal" interfaces to connect to Neutron directly > ? which on second thought won't work if Neutron expects a > bridge in that place. > > What I suggested later on is that you probably don't need any second > level bridge at all. Just create a second/third external > network with appropriate CIDR. As long as those networks are > externally connected to your interface (and thus the bridge) you > should be good to go. In parallel emails we have established that I have to do what you have drawn. I need to do that the node(s) that run L3 agents. Do I need to modify the bridge_mappings, flat_networks, or network_vlan_ranges configuration statement on the other nodes (compute hosts)? Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From uwe.sauter.de at gmail.com Mon Apr 27 17:22:35 2015 From: uwe.sauter.de at gmail.com (Uwe Sauter) Date: Mon, 27 Apr 2015 19:22:35 +0200 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553E4D97.3020104@gmail.com> Message-ID: <553E705B.8030106@gmail.com> >> >> >> if I understood Georges answer correctly he suggested one bridge >> (br-join, either OVS or linux bridge) to connect other bridges >> via patch links, one for each external network you'd like to create. 
>> These second level bridges are then used for the Neutron >> configuration: >> >> br-ext1 -> Neutron >> / >> patch-link >> / >> ethX --br-join >> \ >> patch-link >> \ >> br-ext2 -> Neutron >> >> >> >> I suggested to use an OVS bridge because there it'd be possible to >> stay away from the performance-wise worse patch-links and Linux >> bridges and use "internal" interfaces to connect to Neutron directly >> -- which on second thought won't work if Neutron expects a >> bridge in that place. >> >> What I suggested later on is that you probably don't need any second >> level bridge at all. Just create a second/third external >> network with appropriate CIDR. As long as those networks are >> externally connected to your interface (and thus the bridge) you >> should be good to go. > > In parallel emails we have established that I have to do what you have drawn. I need to do that on the node(s) that run L3 > agents. Do I need to modify the bridge_mappings, flat_networks, or network_vlan_ranges configuration statement on the > other nodes (compute hosts)? > > Thanks, > Mike > I think you just need to create the cascading bridges with their inter-connects, then tell Neutron the association between secondary bridge (e.g. br-ext1, br-ext2) and external network. Then create (!) the external networks and restart Neutron. Concerning your intra-cloud networking I don't think you need to reconfigure anything as long as this is already working. Compute hosts shouldn't be affected as it's not their business to know about external networks.
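A sketch of the cascading-bridge setup described above, on the L3-agent node. The bridge names (br-join, br-ext1, br-ext2) and ethX come from the diagram in the thread; the physnet labels and the exact ML2 options are illustrative assumptions, not a tested configuration:

```shell
# Joining bridge that owns the physical NIC
ovs-vsctl add-br br-join
ovs-vsctl add-port br-join ethX

# One second-level bridge per external network
ovs-vsctl add-br br-ext1
ovs-vsctl add-br br-ext2

# Patch links br-join <-> br-ext1 and br-join <-> br-ext2
ovs-vsctl add-port br-join patch-join-ext1 \
    -- set interface patch-join-ext1 type=patch options:peer=patch-ext1-join
ovs-vsctl add-port br-ext1 patch-ext1-join \
    -- set interface patch-ext1-join type=patch options:peer=patch-join-ext1
ovs-vsctl add-port br-join patch-join-ext2 \
    -- set interface patch-join-ext2 type=patch options:peer=patch-ext2-join
ovs-vsctl add-port br-ext2 patch-ext2-join \
    -- set interface patch-ext2-join type=patch options:peer=patch-join-ext2

# On the same node, map an assumed physical-network label to each
# secondary bridge so Neutron can create one flat external network
# per bridge (ml2_conf.ini / OVS agent configuration):
#   [ovs]
#   bridge_mappings = physext1:br-ext1,physext2:br-ext2
#   [ml2_type_flat]
#   flat_networks = physext1,physext2
```

After that, per Uwe's note, the external networks themselves are created in Neutron and the agents restarted.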
Regards, Uwe From alawson at aqorn.com Tue Apr 28 02:44:33 2015 From: alawson at aqorn.com (Adam Lawson) Date: Mon, 27 Apr 2015 19:44:33 -0700 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: <553E705B.8030106@gmail.com> References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553E4D97.3020104@gmail.com> <553E705B.8030106@gmail.com> Message-ID: So quickly since I'm working on a similar use case: What are the requirements to implement multiple external networks on the same NIC if we *can* use VLAN tags? Is it as simple as adding the external network to Neutron the same way we did with the existing external network and trunk that subnet via VLAN#nnn? Is there any special Neuton handlers for traffic on one VLAN versus another? *Adam Lawson* AQORN, Inc. 427 North Tatnall Street Ste. 58461 Wilmington, Delaware 19801-2230 Toll-free: (844) 4-AQORN-NOW ext. 101 International: +1 302-387-4660 Direct: +1 916-246-2072 On Mon, Apr 27, 2015 at 10:22 AM, Uwe Sauter wrote: > > >> > >> > >> if I understood Georges answer correctly he suggested one bridge > >> (br-join, either OVS or linux bridge) to connect other bridges > >> via patch links, one for each external network you'd like to create. > >> These second level bridges are then used for the Neutron > >> configuration: > >> > >> br-ext1 -> Neutron > >> / > >> patch-link > >> / > >> ethX ?br-join > >> \ > >> patch-link > >> \ > >> br-ext2 -> Neutron > >> > >> > >> > >> I suggested to use an OVS bridge because there it'd be possible to > >> stay away from the performance-wise worse patch-links and Linux > >> bridges and use "internal" interfaces to connect to Neutron directly > >> ? which on second thought won't work if Neutron expects a > >> bridge in that place. > >> > >> What I suggested later on is that you probably don't need any second > >> level bridge at all. Just create a second/third external > >> network with appropriate CIDR. 
As long as those networks are > >> externally connected to your interface (and thus the bridge) you > >> should be good to go. > > > > In parallel emails we have established that I have to do what you have > drawn. I need to do that the node(s) that run L3 > > agents. Do I need to modify the bridge_mappings, flat_networks, or > network_vlan_ranges configuration statement on the > > other nodes (compute hosts)? > > > > Thanks, > > Mike > > > > I think you just need to create the cascading bridges with their > inter-connects, then tell Neutron the association > between secondary bridge (e.g. br-ext1, br-ext2) and external network. > Then create (!) the external networks and restart > Neutron. > > Concerning you intra-cloud networking I don't think you need to > reconfigure anything as long as this is already working. > Compute hosts shouldn't be affected as its not their business to know > about external networks. > > > Regards, > > Uwe > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mspreitz at us.ibm.com Tue Apr 28 03:11:07 2015 From: mspreitz at us.ibm.com (Mike Spreitzer) Date: Mon, 27 Apr 2015 23:11:07 -0400 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: <553E705B.8030106@gmail.com> References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553E4D97.3020104@gmail.com> <553E705B.8030106@gmail.com> Message-ID: Uwe Sauter wrote on 04/27/2015 01:22:35 PM: > >> if I understood Georges answer correctly he suggested one bridge > >> (br-join, either OVS or linux bridge) to connect other bridges > >> via patch links, one for each external network you'd like to create. 
> >> These second level bridges are then used for the Neutron > >> configuration: > >> > >> br-ext1 -> Neutron > >> / > >> patch-link > >> / > >> ethX ?br-join > >> \ > >> patch-link > >> \ > >> br-ext2 -> Neutron > >> > >> > >> > >> I suggested to use an OVS bridge because there it'd be possible to > >> stay away from the performance-wise worse patch-links and Linux > >> bridges and use "internal" interfaces to connect to Neutron directly > >> ? which on second thought won't work if Neutron expects a > >> bridge in that place. > >> > >> What I suggested later on is that you probably don't need any second > >> level bridge at all. Just create a second/third external > >> network with appropriate CIDR. As long as those networks are > >> externally connected to your interface (and thus the bridge) you > >> should be good to go. > > > > In parallel emails we have established that I have to do what you > have drawn. I need to do that the node(s) that run L3 > > agents. Do I need to modify the bridge_mappings, flat_networks, > or network_vlan_ranges configuration statement on the > > other nodes (compute hosts)? > > > > Thanks, > > Mike > > > > I think you just need to create the cascading bridges with their > inter-connects, then tell Neutron the association > between secondary bridge (e.g. br-ext1, br-ext2) and external > network. Then create (!) the external networks and restart > Neutron. > > Concerning you intra-cloud networking I don't think you need to > reconfigure anything as long as this is already working. > Compute hosts shouldn't be affected as its not their business to > know about external networks. So I did what George said and you drew, using OVS bridges, and it seems to be working. Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From uwe.sauter.de at gmail.com Tue Apr 28 06:04:04 2015 From: uwe.sauter.de at gmail.com (Uwe Sauter) Date: Tue, 28 Apr 2015 06:04:04 +0000 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com> <553E4D97.3020104@gmail.com> <553E705B.8030106@gmail.com> Message-ID: <2FBEDA17-AEF1-45A0-9047-3ED1E75AB249@gmail.com> Adam, depending on your current setup and what you are trying to do, there are different possibilities. The easiest would be if you want transparent VLANs, meaning that neither Neutron nor your VM guests know about VLANs. Then you would have one bridge (earlier: br-join) where all the tagging would take place. The external interface would be configured as a trunk while each connecting interface is tagged with the one VLAN ID for its network (from Neutron's view still "outside"). If you want Neutron to manage VLANs then I'd have to think a bit more about the setup. But in this case, a bit more information about your setup would help, too. Regards, Uwe On 28 April 2015 04:44:33 CEST, Adam Lawson wrote: >So quickly since I'm working on a similar use case: > >What are the requirements to implement multiple external networks on >the >same NIC if we *can* use VLAN tags? Is it as simple as adding the >external >network to Neutron the same way we did with the existing external >network >and trunk that subnet via VLAN#nnn? Is there any special Neutron >handlers >for traffic on one VLAN versus another? > > >*Adam Lawson* > >AQORN, Inc. >427 North Tatnall Street >Ste. 58461 >Wilmington, Delaware 19801-2230 >Toll-free: (844) 4-AQORN-NOW ext.
101 >International: +1 302-387-4660 >Direct: +1 916-246-2072 > > >On Mon, Apr 27, 2015 at 10:22 AM, Uwe Sauter >wrote: > >> >> >> >> >> >> >> if I understood Georges answer correctly he suggested one bridge >> >> (br-join, either OVS or linux bridge) to connect other bridges >> >> via patch links, one for each external network you'd like to >create. >> >> These second level bridges are then used for the Neutron >> >> configuration: >> >> >> >> br-ext1 -> Neutron >> >> / >> >> patch-link >> >> / >> >> ethX ?br-join >> >> \ >> >> patch-link >> >> \ >> >> br-ext2 -> Neutron >> >> >> >> >> >> >> >> I suggested to use an OVS bridge because there it'd be possible to >> >> stay away from the performance-wise worse patch-links and Linux >> >> bridges and use "internal" interfaces to connect to Neutron >directly >> >> ? which on second thought won't work if Neutron expects a >> >> bridge in that place. >> >> >> >> What I suggested later on is that you probably don't need any >second >> >> level bridge at all. Just create a second/third external >> >> network with appropriate CIDR. As long as those networks are >> >> externally connected to your interface (and thus the bridge) you >> >> should be good to go. >> > >> > In parallel emails we have established that I have to do what you >have >> drawn. I need to do that the node(s) that run L3 >> > agents. Do I need to modify the bridge_mappings, flat_networks, or >> network_vlan_ranges configuration statement on the >> > other nodes (compute hosts)? >> > >> > Thanks, >> > Mike >> > >> >> I think you just need to create the cascading bridges with their >> inter-connects, then tell Neutron the association >> between secondary bridge (e.g. br-ext1, br-ext2) and external >network. >> Then create (!) the external networks and restart >> Neutron. >> >> Concerning you intra-cloud networking I don't think you need to >> reconfigure anything as long as this is already working. 
>> Compute hosts shouldn't be affected as it's not their business to know >> about external networks. >> >> >> Regards, >> >> Uwe >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> -- This message was sent from my Android mobile phone with K-9 Mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: From angeloudy at gmail.com Tue Apr 28 09:56:13 2015 From: angeloudy at gmail.com (TAO ZHOU) Date: Tue, 28 Apr 2015 17:56:13 +0800 Subject: [Openstack-operators] [neutron] multiple external networks on the same host NIC In-Reply-To: References: Message-ID: You can achieve this by explicitly setting external_network_bridge to empty in l3_agent.ini. The default value for external_network_bridge is br-ex; you have to put this line in your l3_agent.ini: external_network_bridge = By doing this, you can have multiple external networks in different VLANs. On Sun, Apr 26, 2015 at 3:13 AM, Mike Spreitzer wrote: > Is there a way to create multiple external networks from Neutron's point > of view, where both of those networks are accessed through the same host > NIC? Obviously those networks would be using different subnets. I need > this sort of thing because the two subnets are treated differently by the > stuff outside of OpenStack, so I need a way that a tenant can get a > floating IP of the sort he wants. Since Neutron equates floating IP > allocation pools with external networks, I need two external networks. > > I found, for example, http://www.marcoberube.com/archives/248 --- which > describes how to have multiple external networks but uses a distinct host > network interface for each one.
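The empty external_network_bridge approach described above, sketched as configuration. The network name, physnet label, and VLAN ID are illustrative assumptions, not values from the thread:

```ini
# l3_agent.ini -- an empty value means the L3 agent no longer plugs
# router gateway ports into a hard-wired br-ex; external networks can
# then be ordinary provider networks (e.g. VLANs) wired up by the
# ML2/OVS agent like any other network.
[DEFAULT]
external_network_bridge =
```

Each external network is then created as a VLAN provider network, roughly: neutron net-create ext-net1 --router:external True --provider:network_type vlan --provider:physical_network physext --provider:segmentation_id 101 (flags per the 2015-era neutron CLI; adjust to your plugin).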
> > Thanks, > Mike > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe at topjian.net Tue Apr 28 14:18:53 2015 From: joe at topjian.net (Joe Topjian) Date: Tue, 28 Apr 2015 08:18:53 -0600 Subject: [Openstack-operators] [Openstack] [nova] Cleaning up unused images in the cache In-Reply-To: <553F5E4F.1080101@ladenis.fr> References: <553F5E4F.1080101@ladenis.fr> Message-ID: Hello, I've got a similar question about cache-manager and the presence of a > shared filesystem for instances images. > I'm currently reading the source code in order to find out how this is > managed but before I would be curious how you achieve this on production > servers. > > For example images not used by compute node A will probably be cleaned on > the shared FS despite the fact that compute B use it, that's the main > problem. > This used to be a problem, but AFAIK it should not happen any more. If you're noticing it happening, please raise a flag. > How do you handle _base guys ?
> We configure Nova to not have instances rely on _base files. We found it to be too dangerous of a single point of failure. For example, we ran into the scenario you described a few years ago before it was fixed. Bugs are one thing, but there are a lot of other ways a _base file can become corrupt or removed. Even if those scenarios are rare, the results are damaging enough for us to totally forgo reliance of _base files. Padraig Brady has an awesome article that details the many ways you can configure _base and instance files: http://www.pixelbeat.org/docs/openstack_libvirt_images/ I'm looping -operators into this thread for input on further ways to handle _base. You might also be able to find some other methods by searching the -operators mailing list archive. Thanks, Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom at openstack.org Tue Apr 28 14:39:54 2015 From: tom at openstack.org (Tom Fifield) Date: Tue, 28 Apr 2015 22:39:54 +0800 Subject: [Openstack-operators] Draft Agenda for the Vancouver Ops Summit Sessions In-Reply-To: <552C5AF7.2060508@openstack.org> References: <552B8D39.2020309@openstack.org> <552C3B9B.6040904@gmail.com> <552C5AF7.2060508@openstack.org> Message-ID: <553F9BBA.9090108@openstack.org> All, We're now on sched: https://libertydesignsummit.sched.org/overview/type/design+summit/Ops Regards, Tom On 14/04/15 08:10, Tom Fifield wrote: > Hi George, > > You can see the list on the planning etherpad at: > https://etherpad.openstack.org/p/YVR-ops-meetup > > These are not projects, but people talking about their own clouds, in > lightening talk form :) Special Edition is a carry over from last summit > where we had an 'upgrades' special edition. I've yet to see whether we > have a nice logical grouping for this time. 
> > > Regards, > > > Tom > > > > On 14/04/15 06:56, George Shuklin wrote: >> >> On 04/13/2015 12:32 PM, Tom Fifield wrote: >> >> >> What kind of projects will be a sessions 'Architecture Show and Tell' >> and 'Architecture Show and Tell - Special Edition' about? >> >> Thanks. >> >> >> On 04/13/2015 12:32 PM, Tom Fifield wrote: >> >> [cut] >>> >>> _*General Sessions*_ >>> >>> Tuesday Big Room 1 Big Room 2 Big Room 3 >>> 11:15 - 11:55 Ops Summit "101" / The Story So Far Federation - >>> Keystone & other - what do people need? RabbitMQ >>> 12:05 - 12:45 How do we fix logging? Architecture Show and Tell >>> Ceilometer - what needs fixing? >>> 12:45 - 2:00 >>> >>> >>> >>> 2:00 - 2:40 Billing / show back / charge back - how do I do that? >>> Architecture Show and Tell - Special Edition Cinder Feedback >>> 2:50 - 3:30 >>> >>> >>> >> [cut] >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From joe at topjian.net Tue Apr 28 17:40:29 2015 From: joe at topjian.net (Joe Topjian) Date: Tue, 28 Apr 2015 11:40:29 -0600 Subject: [Openstack-operators] Windows Instances and Volumes Message-ID: Hello, I'm wondering if anyone has best practices for Windows-based instances that make heavy use of volumes? I have a user who was running SQL Server off of an iSCSI-based volume. We did a live-migration of the instance and that seemed to have caused Windows to drop the drive. Disk Manager showed it as a new drive that needed to be formatted. Everything was fine upon an explicit detach and reattach through OpenStack. Even though the volume backend is iSCSI, the instance, of course, doesn't see it that way. 
I'm wondering if things would have been different if Windows actually saw it as an iSCSI-based drive. Also, I would have thought that running something such as SQL Server off a volume would be highly discouraged, but looking into SQL Server on EC2 and EBS shows the opposite. Amazon's official documentation only mentions "EBS" and not whether that means EBS-based instances or independent EBS volumes. Some forum posts make mention of the latter working out. Though I do realize that EC2 is not a fair comparison. Thanks, Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwlee.phobias at gmail.com Wed Apr 29 12:08:12 2015 From: jwlee.phobias at gmail.com (CoreOS) Date: Wed, 29 Apr 2015 21:08:12 +0900 Subject: [Openstack-operators] OpenStack Dockerizing on CoreOS Message-ID: Hello, I'm trying to develop fault tolerance supporting OpenStack on Docker/CoreOS. I think this approach brings the following advantages: - Easy to Deploy - Easy to Test - Easy to Scale-out - Fault Tolerance Those who are interested in non-stop operation and easy extension of OpenStack, please see the following link https://github.com/ContinUSE/openstack-on-coreos. This work is currently at an early stage. Please contact me if you have any comments, or questions. Thanks, JW From sebastien.han at enovance.com Wed Apr 29 12:30:24 2015 From: sebastien.han at enovance.com (Sebastien Han) Date: Wed, 29 Apr 2015 14:30:24 +0200 Subject: [Openstack-operators] OpenStack Dockerizing on CoreOS In-Reply-To: References: Message-ID: Hey, Did you have a look at kolla (https://github.com/stackforge/kolla)? Trying to avoid duplicating work :) > On 29 Apr 2015, at 14:08, CoreOS wrote: > > Hello, > > I'm trying to develop fault tolerance supporting OpenStack on Docker/CoreOS.
I think this approach brings the following advantages: > - Easy to Deploy > - Easy to Test > - Easy to Scale-out > - Fault Tolerance > > Those who are interested in non-stop operation and easy extension of OpenStack, please see the following link https://github.com/ContinUSE/openstack-on-coreos. > > This work is currently at an early stage. Please contact me if you have any comments, or questions. > > Thanks, > JW > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators Cheers. Sébastien Han Cloud Architect "Always give 100%. Unless you're giving blood." Phone: +33 (0)1 49 70 99 72 Mail: sebastien.han at enovance.com Address : 11 bis, rue Roquépine - 75008 Paris Web : www.enovance.com - Twitter : @enovance -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: Message signed with OpenPGP using GPGMail URL: From contact at ladenis.fr Wed Apr 29 13:11:41 2015 From: contact at ladenis.fr (Leslie-Alexandre DENIS) Date: Wed, 29 Apr 2015 15:11:41 +0200 Subject: [Openstack-operators] [Openstack] [nova] Cleaning up unused images in the cache In-Reply-To: References: <553F5E4F.1080101@ladenis.fr> Message-ID: <5540D88D.1050900@ladenis.fr> Dear Joe, Thanks for your kind reply; your information is helpful. I'm reading the imagecache.py[1] source code in order to really understand what happens in the case of a shared filesystem. I understand the SHA1 hash mechanism and the backing file check, but I'm not sure how it will manage the case of a shared FS. The main function seems to be: - backing_file = libvirt_utils.get_disk_backing_file(disk_path) But does libvirt_utils.get_disk_backing_file federate information from all the compute nodes?! If not, it may delete the other nodes' images?
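On the SHA1 mechanism mentioned above: the filename of a cached base image in _base is derived from the Glance image ID. A minimal sketch of that derivation, mirroring the idea in nova's libvirt utilities rather than reproducing the actual nova code:

```python
import hashlib

def get_cache_fname(image_id):
    """Name of the cached base image under _base, derived as the
    SHA-1 hex digest of the Glance image ID (a sketch of the logic
    in nova's libvirt driver, not the nova code itself)."""
    return hashlib.sha1(image_id.encode("utf-8")).hexdigest()

# A resized copy is believed to get the virtual size appended
# (e.g. "<sha1>_20" for a 20 GB root disk) -- an assumption here.
print(get_cache_fname("155d900f-4e14-4e4c-a73d-069cbf4541e6"))
```

Because the name depends only on the image ID, every compute node sharing the filesystem computes the same _base path, which is exactly why a per-node cleanup pass has to be careful about files other nodes still use.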
Hope it's not too redundant, Kind regards [1] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagecache.py Le 28/04/2015 16:18, Joe Topjian a ?crit : > Hello, > > I've got a similar question about cache-manager and the presence > of a shared filesystem for instances images. > I'm currently reading the source code in order to find out how > this is managed but before I would be curious how you achieve this > on production servers. > > For example images not used by compute node A will probably be > cleaned on the shared FS despite the fact that compute B use it, > that's the main problem. > > > This used to be a problem, but AFAIK it should not happen any more. If > you're noticing it happening, please raise a flag. > > How do you handle _base guys ? > > > We configure Nova to not have instances rely on _base files. We found > it to be too dangerous of a single point of failure. For example, we > ran into the scenario you described a few years ago before it was > fixed. Bugs are one thing, but there are a lot of other ways a _base > file can become corrupt or removed. Even if those scenarios are rare, > the results are damaging enough for us to totally forgo reliance of > _base files. > > Padraig Brady has an awesome article that details the many ways you > can configure _base and instance files: > > http://www.pixelbeat.org/docs/openstack_libvirt_images/ > > I'm looping -operators into this thread for input on further ways to > handle _base. You might also be able to find some other methods by > searching the -operators mailing list archive. > > Thanks, > Joe > -- Leslie-Alexandre DENIS Tel +33 6 83 88 34 01 Skype ladenis-dc4 BBM PIN 7F78C3BD SIRET 800 458 663 00013 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Kevin.Fox at pnnl.gov Wed Apr 29 14:05:24 2015 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 29 Apr 2015 14:05:24 +0000 Subject: [Openstack-operators] OpenStack Dockerizing on CoreOS In-Reply-To: References: Message-ID: <1A3C52DFCD06494D8528644858247BF01A1F349F@EX10MBOX03.pnnl.gov> Have you looked at the kolla project? Thanks, Kevin ________________________________ From: CoreOS Sent: Wednesday, April 29, 2015 5:08:12 AM To: openstack-operators at lists.openstack.org Subject: [Openstack-operators] OpenStack Dockerizing on CoreOS Hello, I?m trying to develop fault tolerance supporting OpenStack on Docker/CoreOS. I think this kind of approaching is to getting the following advantages: - Easy to Deploy - Easy to Test - Easy to Scale-out - Fault Tolerance Those who are interested in non-stop operation and easy extension of OpenStack, please see the following link https://github.com/ContinUSE/openstack-on-coreos. This work is currently in the beginning. Please contact me if you have any comments, or questions. Thanks, JW _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokrokvertskhov at mirantis.com Wed Apr 29 14:29:02 2015 From: gokrokvertskhov at mirantis.com (Georgy Okrokvertskhov) Date: Wed, 29 Apr 2015 07:29:02 -0700 Subject: [Openstack-operators] Endpoints in kubernetes In-Reply-To: References: Message-ID: Hi, Configuration files in Murano are created dynamically. You can find sources for them here: https://github.com/openstack/murano-apps/tree/master/Docker/Kubernetes/KubernetesCluster/package/Resources/scripts Check default_scrips folder. It contains templatized config files. %%PARAM%% should be replaced by actual value. 
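The %%PARAM%% placeholders are simple token substitution; a minimal sketch of the idea (my illustration, with made-up parameter names, not the actual Murano code):

```python
import re

def render(template, params):
    """Replace %%NAME%% placeholders with values from params.
    Raises KeyError for a placeholder with no value, so missing
    parameters fail loudly instead of leaking into config files."""
    def repl(match):
        return str(params[match.group(1)])
    return re.sub(r"%%([A-Z0-9_]+)%%", repl, template)

config = render("ETCD_LISTEN_PEER_URLS=http://%%IP%%:%%PORT%%",
                {"IP": "10.0.0.5", "PORT": 7001})
print(config)  # ETCD_LISTEN_PEER_URLS=http://10.0.0.5:7001
```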
Thanks Gosha On Sun, Apr 12, 2015 at 8:16 PM, wrote: > Hi Alex, > > > > I went to the link you specified ( > https://github.com/stackforge/murano/tree/master/contrib/elements/kubernetes) > and downloaded the packages for Kubernetes (like etcd, Kubernetes, flannel) > but found no configuration files where I could edit the service > configuration. I checked my machine for the configuration but couldn?t find > it. If you could let me know on it that would be really helpful. Thank you > in advance. > > > > Aishwarya Adyanthaya > > > > > > > > *From:* Alex Freedland [mailto:afreedland at mirantis.com] > *Sent:* Tuesday, April 07, 2015 8:43 PM > *To:* Adyanthaya, Aishwarya > *Cc:* openstack-operators at lists.openstack.org > *Subject:* Re: [Openstack-operators] Endpoints in kubernetes > > > > Here is a link for disk image builder for Kubernetes images with use in > Murano: > https://github.com/stackforge/murano/tree/master/contrib/elements/kubernetes > > > > It has all steps to preinstall Kubernetes on top of Ubuntu. > > > > > Alex Freedland > Co-Founder and Chairman > Mirantis, Inc. > > > > On Tue, Apr 7, 2015 at 5:16 AM, > wrote: > > Hello, > > > > I?m trying to integrate Kubernetes with OpenStack and I?ve started with > the master node. While creating the master node I downloaded the > Kubernetes.git and next I executed the command ?make release?. During the > release, I got few lines that read ?Error on creating endpoints: endpoints > "service1" not found? though in the end of the execution it read > successful. When I checked the network it read ?Error: cannot sync with > the cluster using endpoints http://127.0.0.1:4001?. > > > > Does anyone know if we need to add extra packages or other configurations > that needs to be done before getting Kubernetes.git on the node to rectify > the error? > > > > Thanks! 
> > > ------------------------------ > > > This message is for the designated recipient only and may contain > privileged, proprietary, or otherwise confidential information. If you have > received it in error, please notify the sender immediately and delete the > original. Any other use of the e-mail by you is prohibited. Where allowed > by local law, electronic communications with Accenture and its affiliates, > including e-mail and instant messaging (including content), may be scanned > by our systems for the purposes of information security and assessment of > internal compliance with Accenture policy. > > ______________________________________________________________________________________ > > www.accenture.com > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pczarkowski+openstackops at bluebox.net Wed Apr 29 14:54:42 2015 From: pczarkowski+openstackops at bluebox.net (Paul Czarkowski) Date: Wed, 29 Apr 2015 09:54:42 -0500 Subject: [Openstack-operators] OpenStack Dockerizing on CoreOS In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A1F349F@EX10MBOX03.pnnl.gov> References: <1A3C52DFCD06494D8528644858247BF01A1F349F@EX10MBOX03.pnnl.gov> Message-ID: You may also want to check out giftwrap ( https://github.com/blueboxgroup/giftwrap ) and its helper giftwrap-wrapper ( https://github.com/blueboxgroup/giftwrap-wrapper ) which builds packages and/or docker containers for each openstack project based on a manifest file which contains their git location and version as well as letting you specify extra requirements etc. I've looked over the kolla project and don't like the fact that it uses the system packages for openstack, nor do I particularly like the way it's handling configuration. I also think that it makes a ton of sense to create config templates built with confd so that you can configure the projects by either environment variables or etcd/confd. I have a project called Factorish (https://github.com/factorish/factorish) which aims to take random apps and run them in a sensible way inside a docker container that ties in confd, etcdctl and runit and some healthchecking which has worked very well for me so far and have been planning on starting to build out openstack packages using it, but haven't gotten very far yet due to a lack of time. On Wed, Apr 29, 2015 at 9:05 AM, Fox, Kevin M wrote: > Have you looked at the kolla project? > > Thanks, > Kevin > > ------------------------------ > *From:* CoreOS > *Sent:* Wednesday, April 29, 2015 5:08:12 AM > *To:* openstack-operators at lists.openstack.org > *Subject:* [Openstack-operators] OpenStack Dockerizing on CoreOS > > Hello, > > I'm trying to develop fault tolerance supporting OpenStack on > Docker/CoreOS.
I think this approach brings the following > advantages: > - Easy to Deploy > - Easy to Test > - Easy to Scale-out > - Fault Tolerance > > Those who are interested in non-stop operation and easy extension of > OpenStack, please see the following link > https://github.com/ContinUSE/openstack-on-coreos. > > This work is currently at an early stage. Please contact me if you have any > comments, or questions. > > Thanks, > JW > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at not.mn Wed Apr 29 20:08:13 2015 From: me at not.mn (John Dickinson) Date: Wed, 29 Apr 2015 13:08:13 -0700 Subject: [Openstack-operators] Support for Python 2.6 in Swift Message-ID: When Swift was first put into production in 2010, it was deployed on Python 2.6 (Ubuntu Lucid). Since then, the Swift dev community has maintained Python 2.6 compatibility because there have been reasonable assumptions that deployers are running on an LTS-style distro that has Py26. However, I think that assumption isn't true any more. Lucid is no longer supported (or won't be in a matter of hours), and all official Red Hat OpenStack packages require Py27. Yes I know that officially OpenStack doesn't support Py26 any more. But we've had some 3rd party CI continually running tests under Py26 to ensure compatibility. Mostly we didn't want to break anyone who is running an older version of Python. However, we, the Swift developer community, would like to drop support for Python 2.6. Will this cause problems for anyone?
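For context on what 2.6 compatibility costs (an illustration of mine, not from John's mail): several constructs that are idiomatic on 2.7 and later were unavailable on 2.6, so supporting it means writing around them.

```python
# Both constructs below work on Python 2.7+ (and 3.x) but fail on 2.6,
# so 2.6-compatible code has to avoid them:

# 1. Auto-numbered str.format fields; 2.6 required explicit "{0} {1}".
msg = "{} takes {}".format("swift-object-server", "PUT")

# 2. Dict comprehensions; 2.6 only had list/generator comprehensions.
sizes = {name: len(name) for name in ("account", "container", "object")}

print(msg)              # swift-object-server takes PUT
print(sizes["object"])  # 6
```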
Is anyone running Swift today opposed to this change? Please let me
know. I will assume that no response means you support dropping Py26
support.

--John

From jwlee.phobias at gmail.com  Thu Apr 30 04:58:49 2015
From: jwlee.phobias at gmail.com (Lee, Jaewoo)
Date: Thu, 30 Apr 2015 13:58:49 +0900
Subject: [Openstack-operators] OpenStack Dockerizing on CoreOS
In-Reply-To: 
References: <1A3C52DFCD06494D8528644858247BF01A1F349F@EX10MBOX03.pnnl.gov>
Message-ID: <3F88843E-8518-46FC-9C11-B3C2EA8BCD9F@gmail.com>

Hello,

Thank you for all the information, including the Kolla project; I will
review it in detail.

I'm just starting to install, configure and test OpenStack out of
personal interest, and I am hitting many difficulties at the
installation stage: many configuration options, the complexity of
installation, scaling that adds further complexity, and so on. It does
not seem easily approachable for beginners.

My goals for this work are, first, simplification of installation, and
second, a zero-downtime operation platform.

Once again, thank you for your comments.

Best Regards,
JW

> On Apr 29, 2015, at 11:54 PM, Paul Czarkowski wrote:
>
> You may also want to check out giftwrap
> (https://github.com/blueboxgroup/giftwrap) and its helper
> giftwrap-wrapper (https://github.com/blueboxgroup/giftwrap-wrapper),
> which build packages and/or docker containers for each openstack
> project based on a manifest file that contains their git location and
> version, as well as letting you specify extra requirements etc.
>
> I've looked over the kolla project and don't like the fact that it
> uses the system packages for openstack, nor do I particularly like the
> way it's handling configuration.
>
> I also think that it makes a ton of sense to create config templates
> built with confd so that you can configure the projects by either
> environment variables or etcd/confd.
>
> I have a project called Factorish (https://github.com/factorish/factorish),
> which aims to take random apps and run them in a sensible way inside a
> docker container, tying in confd, etcdctl, runit and some
> healthchecking. It has worked very well for me so far, and I have been
> planning to start building out openstack packages with it, but haven't
> gotten very far yet due to a lack of time.
>
> On Wed, Apr 29, 2015 at 9:05 AM, Fox, Kevin M wrote:
>> Have you looked at the kolla project?
>>
>> Thanks,
>> Kevin

From Neil.Jerram at metaswitch.com  Thu Apr 30 12:28:28 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Thu, 30 Apr 2015 13:28:28 +0100
Subject: [Openstack-operators] [neutron] multiple external networks on
 the same host NIC
In-Reply-To: 
References: <553BEDDA.8020203@gmail.com> <553BF65F.9080601@gmail.com>
 <553E4D97.3020104@gmail.com> <553E5461.7060509@zumbi.com.ar>
Message-ID: <55421FEC.7050705@metaswitch.com>

Hi Mike,

On 27/04/15 16:49, Mike Spreitzer wrote:
>> My use case is that I have two behaviorally different external
>> subnets --- they are treated differently by stuff outside of
>> OpenStack, with consequences that are meaningful to tenants. Thus,
>> I have two categories of floating IP addresses, depending on which
>> external subnet holds the floating IP address. The difference is
>> meaningful to tenants. So I need to enable a tenant to request a
>> floating IP address of a specific category. Since Neutron equates
>> floating IP address allocation pool with network, I need two
>> external networks.
>>
>> Both of these external subnets are present on the same actual
>> external LAN, thus both are reached through the same host NIC.
>>
>> It looks to me like the allowed mac/IP address pair feature will not
>> solve this problem.
>
> Sorry, I simplified too much. Here is one other critical detail. I do
> not really have just two different external subnets. What I really
> have is two behaviorally different collections of subnets. I need to
> make a Neutron external network for each of the two collections of
> external subnets.

Do your tenants' instances, that are addressed within the same IP
subnet, require real L2 broadcast connectivity between each other, or
just IP connectivity?

If the latter, an option would be for you to use the Calico networking
driver. The Calico solution, for your requirements as I understand
them, would be as follows.
- Define networks for all the IP ranges from which you want to allocate
  addresses for your instances:
  - one with the range for your first external network
  - one with the range for your second external network
  - one with a range that is private within the data center, for
    instances that don't need to be addressable from outside.
- Define a security group representing the tenant, allowing all
  instances in the SG to speak to each other, plus any external access
  that they may require.
- When launching a group of instances, specify the network that
  provides the desired range of IP addresses, and the SG representing
  the tenant.

Is that of interest?

Regards,
	Neil

From gustavo.randich at gmail.com  Thu Apr 30 16:44:03 2015
From: gustavo.randich at gmail.com (Gustavo Randich)
Date: Thu, 30 Apr 2015 13:44:03 -0300
Subject: [Openstack-operators] reference configuration values for VXLAN
 + access to external VLANs
Message-ID: 

Hi,

Is there any reference configuration / install guide for this use case?
I only need the "key" configuration values for the ml2 plugin:

- Kilo release
- ml2
- DVR + VXLAN tunneling
- provider network = flat
- access to multiple external VLANs

The idea is to use tunneling over a flat underlay (no VLAN tagging),
plus instance access to multiple VLANs for certain legacy external
networks.

Thanks!
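As a starting point for the question above, a minimal sketch of the relevant ml2_conf.ini sections might look like the following. This is an illustration under stated assumptions, not a verified Kilo reference: the physnet name, bridge mapping and the VNI/VLAN ranges are placeholders to adapt.

```ini
# Illustrative sketch only; "physnet-ext", "br-ex" and the ranges are
# assumed placeholder values, not verified reference settings.
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population

[ml2_type_flat]
flat_networks = physnet-ext

[ml2_type_vlan]
# legacy external VLANs trunked on the same NIC
network_vlan_ranges = physnet-ext:100:200

[ml2_type_vxlan]
vni_ranges = 1:1000

[ovs]
bridge_mappings = physnet-ext:br-ex
local_ip = TUNNEL_ENDPOINT_IP

[agent]
tunnel_types = vxlan
l2_population = True
# needed on compute nodes when DVR is enabled
enable_distributed_routing = True
```

DVR additionally needs router_distributed = True in neutron.conf and agent_mode = dvr in l3_agent.ini; check all of the above against the networking/install guide for your distribution.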
From carol.l.barrett at intel.com  Thu Apr 30 22:19:24 2015
From: carol.l.barrett at intel.com (Barrett, Carol L)
Date: Thu, 30 Apr 2015 22:19:24 +0000
Subject: [Openstack-operators] [OpenStack][Product Work Group]
 [Enterprise][API][Telco][Dev] [Security][Operators][Marketing][User
 Committee] Cross-Project Vancouver Work Session
Message-ID: <2D352D0CD819F64F9715B1B89695400D5C75DA32@ORSMSX113.amr.corp.intel.com>

The Product Work Group has formed to listen to and aggregate feedback
on the desired capabilities for the OpenStack platform from Customers,
Users and the Developer Community (including PTLs). Using this
feedback, we intend to help PTLs and Contributors resolve issues that
are either of strategic importance or have broad implications across
OpenStack services, and thereby help customers/users be successful with
adoption by removing barriers to adoption/operation. To achieve this
goal, we are striving to create a multi-release OpenStack roadmap based
upon the aggregated information, through an open, transparent,
cross-community collaboration.

In a working session at the Vancouver Summit, we'd like to refine and
finalize our approach to creating this roadmap. This session will be on
Monday, May 18th, 3:40 - 4:40, in Room 212.

We'd like to understand which work groups, user groups or community
members are interested in joining us for this session. Please go to
this etherpad
(https://etherpad.openstack.org/p/ProductWG_xProjectSession) to tell us
who you are and what your area of interest is, and help shape the
agenda for the session.

You can find more information about the Product work group here:
https://wiki.openstack.org/wiki/ProductTeam

We look forward to meeting with you in Vancouver.